Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'fbcon-locking-fixes' of ssh://people.freedesktop.org/~airlied/linux into drm-next

This pulls in most of Linus' tree up to -rc6; this fixes the worst lockdep-reported
issues and re-enables fbcon lockdep.

(not the fbcon maintainer)
* 'fbcon-locking-fixes' of ssh://people.freedesktop.org/~airlied/linux: (529 commits)
Revert "Revert "console: implement lockdep support for console_lock""
fbcon: fix locking harder
fb: Yet another band-aid for fixing lockdep mess
fb: rework locking to fix lock ordering on takeover

+4487 -2553
+1
Documentation/device-mapper/dm-raid.txt
··· 141 141 1.2.0 Handle creation of arrays that contain failed devices. 142 142 1.3.0 Added support for RAID 10 143 143 1.3.1 Allow device replacement/rebuild for RAID 10 144 + 1.3.2 Fix/improve redundancy checking for RAID10
+3 -2
Documentation/devicetree/bindings/pinctrl/atmel,at91-pinctrl.txt
··· 81 81 Required properties for pin configuration node: 82 82 - atmel,pins: 4 integers array, represents a group of pins mux and config 83 83 setting. The format is atmel,pins = <PIN_BANK PIN_BANK_NUM PERIPH CONFIG>. 84 - The PERIPH 0 means gpio. 84 + The PERIPH 0 means gpio, PERIPH 1 is periph A, PERIPH 2 is periph B... 85 + PIN_BANK 0 is pioA, PIN_BANK 1 is pioB... 85 86 86 87 Bits used for CONFIG: 87 88 PULL_UP (1 << 0): indicate this pin need a pull up. ··· 127 126 pinctrl_dbgu: dbgu-0 { 128 127 atmel,pins = 129 128 <1 14 0x1 0x0 /* PB14 periph A */ 130 - 1 15 0x1 0x1>; /* PB15 periph with pullup */ 129 + 1 15 0x1 0x1>; /* PB15 periph A with pullup */ 131 130 }; 132 131 }; 133 132 };
+9 -9
Documentation/filesystems/f2fs.txt
··· 175 175 align with the zone size <-| 176 176 |-> align with the segment size 177 177 _________________________________________________________________________ 178 - | | | Node | Segment | Segment | | 179 - | Superblock | Checkpoint | Address | Info. | Summary | Main | 180 - | (SB) | (CP) | Table (NAT) | Table (SIT) | Area (SSA) | | 178 + | | | Segment | Node | Segment | | 179 + | Superblock | Checkpoint | Info. | Address | Summary | Main | 180 + | (SB) | (CP) | Table (SIT) | Table (NAT) | Area (SSA) | | 181 181 |____________|_____2______|______N______|______N______|______N_____|__N___| 182 182 . . 183 183 . . ··· 200 200 : It contains file system information, bitmaps for valid NAT/SIT sets, orphan 201 201 inode lists, and summary entries of current active segments. 202 202 203 - - Node Address Table (NAT) 204 - : It is composed of a block address table for all the node blocks stored in 205 - Main area. 206 - 207 203 - Segment Information Table (SIT) 208 204 : It contains segment information such as valid block count and bitmap for the 209 205 validity of all the blocks. 206 + 207 + - Node Address Table (NAT) 208 + : It is composed of a block address table for all the node blocks stored in 209 + Main area. 210 210 211 211 - Segment Summary Area (SSA) 212 212 : It contains summary entries which contains the owner information of all the ··· 236 236 valid, as shown as below. 237 237 238 238 +--------+----------+---------+ 239 - | CP | NAT | SIT | 239 + | CP | SIT | NAT | 240 240 +--------+----------+---------+ 241 241 . . . . 242 242 . . . . 243 243 . . . . 244 244 +-------+-------+--------+--------+--------+--------+ 245 - | CP #0 | CP #1 | NAT #0 | NAT #1 | SIT #0 | SIT #1 | 245 + | CP #0 | CP #1 | SIT #0 | SIT #1 | NAT #0 | NAT #1 | 246 246 +-------+-------+--------+--------+--------+--------+ 247 247 | ^ ^ 248 248 | | |
Documentation/hid/hid-sensor.txt
+1 -1
Documentation/kernel-parameters.txt
··· 2438 2438 real-time workloads. It can also improve energy 2439 2439 efficiency for asymmetric multiprocessors. 2440 2440 2441 - rcu_nocbs_poll [KNL,BOOT] 2441 + rcu_nocb_poll [KNL,BOOT] 2442 2442 Rather than requiring that offloaded CPUs 2443 2443 (specified by rcu_nocbs= above) explicitly 2444 2444 awaken the corresponding "rcuoN" kthreads,
+26 -1
Documentation/x86/boot.txt
··· 57 57 Protocol 2.11: (Kernel 3.6) Added a field for offset of EFI handover 58 58 protocol entry point. 59 59 60 + Protocol 2.12: (Kernel 3.8) Added the xloadflags field and extension fields 61 + to struct boot_params for loading bzImage and ramdisk 62 + above 4G in 64bit. 63 + 60 64 **** MEMORY LAYOUT 61 65 62 66 The traditional memory map for the kernel loader, used for Image or ··· 186 182 0230/4 2.05+ kernel_alignment Physical addr alignment required for kernel 187 183 0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not 188 184 0235/1 2.10+ min_alignment Minimum alignment, as a power of two 189 - 0236/2 N/A pad3 Unused 185 + 0236/2 2.12+ xloadflags Boot protocol option flags 190 186 0238/4 2.06+ cmdline_size Maximum size of the kernel command line 191 187 023C/4 2.07+ hardware_subarch Hardware subarchitecture 192 188 0240/8 2.07+ hardware_subarch_data Subarchitecture-specific data ··· 585 581 There may be a considerable performance cost with an excessively 586 582 misaligned kernel. Therefore, a loader should typically try each 587 583 power-of-two alignment from kernel_alignment down to this alignment. 584 + 585 + Field name: xloadflags 586 + Type: read 587 + Offset/size: 0x236/2 588 + Protocol: 2.12+ 589 + 590 + This field is a bitmask. 591 + 592 + Bit 0 (read): XLF_KERNEL_64 593 + - If 1, this kernel has the legacy 64-bit entry point at 0x200. 594 + 595 + Bit 1 (read): XLF_CAN_BE_LOADED_ABOVE_4G 596 + - If 1, kernel/boot_params/cmdline/ramdisk can be above 4G. 597 + 598 + Bit 2 (read): XLF_EFI_HANDOVER_32 599 + - If 1, the kernel supports the 32-bit EFI handoff entry point 600 + given at handover_offset. 601 + 602 + Bit 3 (read): XLF_EFI_HANDOVER_64 603 + - If 1, the kernel supports the 64-bit EFI handoff entry point 604 + given at handover_offset + 0x200. 588 605 589 606 Field name: cmdline_size 590 607 Type: read
+4
Documentation/x86/zero-page.txt
··· 19 19 090/010 ALL hd1_info hd1 disk parameter, OBSOLETE!! 20 20 0A0/010 ALL sys_desc_table System description table (struct sys_desc_table) 21 21 0B0/010 ALL olpc_ofw_header OLPC's OpenFirmware CIF and friends 22 + 0C0/004 ALL ext_ramdisk_image ramdisk_image high 32bits 23 + 0C4/004 ALL ext_ramdisk_size ramdisk_size high 32bits 24 + 0C8/004 ALL ext_cmd_line_ptr cmd_line_ptr high 32bits 22 25 140/080 ALL edid_info Video mode setup (struct edid_info) 23 26 1C0/020 ALL efi_info EFI 32 information (struct efi_info) 24 27 1E0/004 ALL alk_mem_k Alternative mem check, in KB ··· 30 27 1E9/001 ALL eddbuf_entries Number of entries in eddbuf (below) 31 28 1EA/001 ALL edd_mbr_sig_buf_entries Number of entries in edd_mbr_sig_buffer 32 29 (below) 30 + 1EF/001 ALL sentinel Used to detect broken bootloaders 33 31 290/040 ALL edd_mbr_sig_buffer EDD MBR signatures 34 32 2D0/A00 ALL e820_map E820 memory map table 35 33 (array of struct e820entry)
+5 -5
MAINTAINERS
··· 2966 2966 F: drivers/net/ethernet/i825xx/eexpress.* 2967 2967 2968 2968 ETHERNET BRIDGE 2969 - M: Stephen Hemminger <shemminger@vyatta.com> 2969 + M: Stephen Hemminger <stephen@networkplumber.org> 2970 2970 L: bridge@lists.linux-foundation.org 2971 2971 L: netdev@vger.kernel.org 2972 2972 W: http://www.linuxfoundation.org/en/Net:Bridge ··· 4905 4905 4906 4906 MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2) 4907 4907 M: Mirko Lindner <mlindner@marvell.com> 4908 - M: Stephen Hemminger <shemminger@vyatta.com> 4908 + M: Stephen Hemminger <stephen@networkplumber.org> 4909 4909 L: netdev@vger.kernel.org 4910 4910 S: Maintained 4911 4911 F: drivers/net/ethernet/marvell/sk* ··· 5180 5180 F: drivers/infiniband/hw/nes/ 5181 5181 5182 5182 NETEM NETWORK EMULATOR 5183 - M: Stephen Hemminger <shemminger@vyatta.com> 5183 + M: Stephen Hemminger <stephen@networkplumber.org> 5184 5184 L: netem@lists.linux-foundation.org 5185 5185 S: Maintained 5186 5186 F: net/sched/sch_netem.c ··· 6585 6585 F: include/media/s3c_camif.h 6586 6586 6587 6587 SERIAL DRIVERS 6588 - M: Alan Cox <alan@linux.intel.com> 6588 + M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 6589 6589 L: linux-serial@vger.kernel.org 6590 6590 S: Maintained 6591 6591 F: drivers/tty/serial ··· 7088 7088 F: sound/ 7089 7089 7090 7090 SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC) 7091 - M: Liam Girdwood <lrg@ti.com> 7091 + M: Liam Girdwood <lgirdwood@gmail.com> 7092 7092 M: Mark Brown <broonie@opensource.wolfsonmicro.com> 7093 7093 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git 7094 7094 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
+3 -3
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 8 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 5 - NAME = Terrified Chipmunk 4 + EXTRAVERSION = -rc6 5 + NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION* 8 8 # To see a list of typical targets execute "make help" ··· 169 169 -e s/arm.*/arm/ -e s/sa110/arm/ \ 170 170 -e s/s390x/s390/ -e s/parisc64/parisc/ \ 171 171 -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ 172 - -e s/sh[234].*/sh/ ) 172 + -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) 173 173 174 174 # Cross compiling and selecting different set of gcc/bin-utils 175 175 # ---------------------------------------------------------------------------
+1 -1
arch/arm/boot/dts/armada-370-db.dts
··· 26 26 27 27 memory { 28 28 device_type = "memory"; 29 - reg = <0x00000000 0x20000000>; /* 512 MB */ 29 + reg = <0x00000000 0x40000000>; /* 1 GB */ 30 30 }; 31 31 32 32 soc {
+6 -8
arch/arm/boot/dts/armada-xp-mv78230.dtsi
··· 50 50 }; 51 51 52 52 gpio0: gpio@d0018100 { 53 - compatible = "marvell,armadaxp-gpio"; 54 - reg = <0xd0018100 0x40>, 55 - <0xd0018800 0x30>; 53 + compatible = "marvell,orion-gpio"; 54 + reg = <0xd0018100 0x40>; 56 55 ngpios = <32>; 57 56 gpio-controller; 58 57 #gpio-cells = <2>; 59 58 interrupt-controller; 60 59 #interrupts-cells = <2>; 61 - interrupts = <16>, <17>, <18>, <19>; 60 + interrupts = <82>, <83>, <84>, <85>; 62 61 }; 63 62 64 63 gpio1: gpio@d0018140 { 65 - compatible = "marvell,armadaxp-gpio"; 66 - reg = <0xd0018140 0x40>, 67 - <0xd0018840 0x30>; 64 + compatible = "marvell,orion-gpio"; 65 + reg = <0xd0018140 0x40>; 68 66 ngpios = <17>; 69 67 gpio-controller; 70 68 #gpio-cells = <2>; 71 69 interrupt-controller; 72 70 #interrupts-cells = <2>; 73 - interrupts = <20>, <21>, <22>; 71 + interrupts = <87>, <88>, <89>; 74 72 }; 75 73 }; 76 74 };
+9 -12
arch/arm/boot/dts/armada-xp-mv78260.dtsi
··· 51 51 }; 52 52 53 53 gpio0: gpio@d0018100 { 54 - compatible = "marvell,armadaxp-gpio"; 55 - reg = <0xd0018100 0x40>, 56 - <0xd0018800 0x30>; 54 + compatible = "marvell,orion-gpio"; 55 + reg = <0xd0018100 0x40>; 57 56 ngpios = <32>; 58 57 gpio-controller; 59 58 #gpio-cells = <2>; 60 59 interrupt-controller; 61 60 #interrupts-cells = <2>; 62 - interrupts = <16>, <17>, <18>, <19>; 61 + interrupts = <82>, <83>, <84>, <85>; 63 62 }; 64 63 65 64 gpio1: gpio@d0018140 { 66 - compatible = "marvell,armadaxp-gpio"; 67 - reg = <0xd0018140 0x40>, 68 - <0xd0018840 0x30>; 65 + compatible = "marvell,orion-gpio"; 66 + reg = <0xd0018140 0x40>; 69 67 ngpios = <32>; 70 68 gpio-controller; 71 69 #gpio-cells = <2>; 72 70 interrupt-controller; 73 71 #interrupts-cells = <2>; 74 - interrupts = <20>, <21>, <22>, <23>; 72 + interrupts = <87>, <88>, <89>, <90>; 75 73 }; 76 74 77 75 gpio2: gpio@d0018180 { 78 - compatible = "marvell,armadaxp-gpio"; 79 - reg = <0xd0018180 0x40>, 80 - <0xd0018870 0x30>; 76 + compatible = "marvell,orion-gpio"; 77 + reg = <0xd0018180 0x40>; 81 78 ngpios = <3>; 82 79 gpio-controller; 83 80 #gpio-cells = <2>; 84 81 interrupt-controller; 85 82 #interrupts-cells = <2>; 86 - interrupts = <24>; 83 + interrupts = <91>; 87 84 }; 88 85 89 86 ethernet@d0034000 {
+9 -12
arch/arm/boot/dts/armada-xp-mv78460.dtsi
··· 66 66 }; 67 67 68 68 gpio0: gpio@d0018100 { 69 - compatible = "marvell,armadaxp-gpio"; 70 - reg = <0xd0018100 0x40>, 71 - <0xd0018800 0x30>; 69 + compatible = "marvell,orion-gpio"; 70 + reg = <0xd0018100 0x40>; 72 71 ngpios = <32>; 73 72 gpio-controller; 74 73 #gpio-cells = <2>; 75 74 interrupt-controller; 76 75 #interrupts-cells = <2>; 77 - interrupts = <16>, <17>, <18>, <19>; 76 + interrupts = <82>, <83>, <84>, <85>; 78 77 }; 79 78 80 79 gpio1: gpio@d0018140 { 81 - compatible = "marvell,armadaxp-gpio"; 82 - reg = <0xd0018140 0x40>, 83 - <0xd0018840 0x30>; 80 + compatible = "marvell,orion-gpio"; 81 + reg = <0xd0018140 0x40>; 84 82 ngpios = <32>; 85 83 gpio-controller; 86 84 #gpio-cells = <2>; 87 85 interrupt-controller; 88 86 #interrupts-cells = <2>; 89 - interrupts = <20>, <21>, <22>, <23>; 87 + interrupts = <87>, <88>, <89>, <90>; 90 88 }; 91 89 92 90 gpio2: gpio@d0018180 { 93 - compatible = "marvell,armadaxp-gpio"; 94 - reg = <0xd0018180 0x40>, 95 - <0xd0018870 0x30>; 91 + compatible = "marvell,orion-gpio"; 92 + reg = <0xd0018180 0x40>; 96 93 ngpios = <3>; 97 94 gpio-controller; 98 95 #gpio-cells = <2>; 99 96 interrupt-controller; 100 97 #interrupts-cells = <2>; 101 - interrupts = <24>; 98 + interrupts = <91>; 102 99 }; 103 100 104 101 ethernet@d0034000 {
+2 -2
arch/arm/boot/dts/at91rm9200.dtsi
··· 336 336 337 337 i2c@0 { 338 338 compatible = "i2c-gpio"; 339 - gpios = <&pioA 23 0 /* sda */ 340 - &pioA 24 0 /* scl */ 339 + gpios = <&pioA 25 0 /* sda */ 340 + &pioA 26 0 /* scl */ 341 341 >; 342 342 i2c-gpio,sda-open-drain; 343 343 i2c-gpio,scl-open-drain;
+40 -20
arch/arm/boot/dts/at91sam9x5.dtsi
··· 143 143 atmel,pins = 144 144 <0 3 0x1 0x0>; /* PA3 periph A */ 145 145 }; 146 + 147 + pinctrl_usart0_sck: usart0_sck-0 { 148 + atmel,pins = 149 + <0 4 0x1 0x0>; /* PA4 periph A */ 150 + }; 146 151 }; 147 152 148 153 usart1 { ··· 159 154 160 155 pinctrl_usart1_rts: usart1_rts-0 { 161 156 atmel,pins = 162 - <3 27 0x3 0x0>; /* PC27 periph C */ 157 + <2 27 0x3 0x0>; /* PC27 periph C */ 163 158 }; 164 159 165 160 pinctrl_usart1_cts: usart1_cts-0 { 166 161 atmel,pins = 167 - <3 28 0x3 0x0>; /* PC28 periph C */ 162 + <2 28 0x3 0x0>; /* PC28 periph C */ 163 + }; 164 + 165 + pinctrl_usart1_sck: usart1_sck-0 { 166 + atmel,pins = 167 + <2 28 0x3 0x0>; /* PC29 periph C */ 168 168 }; 169 169 }; 170 170 ··· 182 172 183 173 pinctrl_uart2_rts: uart2_rts-0 { 184 174 atmel,pins = 185 - <0 0 0x2 0x0>; /* PB0 periph B */ 175 + <1 0 0x2 0x0>; /* PB0 periph B */ 186 176 }; 187 177 188 178 pinctrl_uart2_cts: uart2_cts-0 { 189 179 atmel,pins = 190 - <0 1 0x2 0x0>; /* PB1 periph B */ 180 + <1 1 0x2 0x0>; /* PB1 periph B */ 181 + }; 182 + 183 + pinctrl_usart2_sck: usart2_sck-0 { 184 + atmel,pins = 185 + <1 2 0x2 0x0>; /* PB2 periph B */ 191 186 }; 192 187 }; 193 188 194 189 usart3 { 195 190 pinctrl_uart3: usart3-0 { 196 191 atmel,pins = 197 - <3 23 0x2 0x1 /* PC22 periph B with pullup */ 198 - 3 23 0x2 0x0>; /* PC23 periph B */ 192 + <2 23 0x2 0x1 /* PC22 periph B with pullup */ 193 + 2 23 0x2 0x0>; /* PC23 periph B */ 199 194 }; 200 195 201 196 pinctrl_usart3_rts: usart3_rts-0 { 202 197 atmel,pins = 203 - <3 24 0x2 0x0>; /* PC24 periph B */ 198 + <2 24 0x2 0x0>; /* PC24 periph B */ 204 199 }; 205 200 206 201 pinctrl_usart3_cts: usart3_cts-0 { 207 202 atmel,pins = 208 - <3 25 0x2 0x0>; /* PC25 periph B */ 203 + <2 25 0x2 0x0>; /* PC25 periph B */ 204 + }; 205 + 206 + pinctrl_usart3_sck: usart3_sck-0 { 207 + atmel,pins = 208 + <2 26 0x2 0x0>; /* PC26 periph B */ 209 209 }; 210 210 }; 211 211 212 212 uart0 { 213 213 pinctrl_uart0: uart0-0 { 214 214 atmel,pins = 215 - <3 8 0x3 0x0 /* PC8 periph C */ 216 - 3 9 0x3 0x1>; /* PC9 periph C with pullup */ 215 + <2 8 0x3 0x0 /* PC8 periph C */ 216 + 2 9 0x3 0x1>; /* PC9 periph C with pullup */ 217 217 }; 218 218 }; 219 219 220 220 uart1 { 221 221 pinctrl_uart1: uart1-0 { 222 222 atmel,pins = 223 - <3 16 0x3 0x0 /* PC16 periph C */ 224 - 3 17 0x3 0x1>; /* PC17 periph C with pullup */ 223 + <2 16 0x3 0x0 /* PC16 periph C */ 224 + 2 17 0x3 0x1>; /* PC17 periph C with pullup */ 225 225 }; 226 226 }; 227 227 ··· 260 240 261 241 pinctrl_macb0_rmii_mii: macb0_rmii_mii-0 { 262 242 atmel,pins = 263 - <1 8 0x1 0x0 /* PA8 periph A */ 264 - 1 11 0x1 0x0 /* PA11 periph A */ 265 - 1 12 0x1 0x0 /* PA12 periph A */ 266 - 1 13 0x1 0x0 /* PA13 periph A */ 267 - 1 14 0x1 0x0 /* PA14 periph A */ 268 - 1 15 0x1 0x0 /* PA15 periph A */ 269 - 1 16 0x1 0x0 /* PA16 periph A */ 270 - 1 17 0x1 0x0>; /* PA17 periph A */ 243 + <1 8 0x1 0x0 /* PB8 periph A */ 244 + 1 11 0x1 0x0 /* PB11 periph A */ 245 + 1 12 0x1 0x0 /* PB12 periph A */ 246 + 1 13 0x1 0x0 /* PB13 periph A */ 247 + 1 14 0x1 0x0 /* PB14 periph A */ 248 + 1 15 0x1 0x0 /* PB15 periph A */ 249 + 1 16 0x1 0x0 /* PB16 periph A */ 250 + 1 17 0x1 0x0>; /* PB17 periph A */ 271 251 }; 272 252 }; 273 253
+6 -6
arch/arm/boot/dts/cros5250-common.dtsi
··· 96 96 fifo-depth = <0x80>; 97 97 card-detect-delay = <200>; 98 98 samsung,dw-mshc-ciu-div = <3>; 99 - samsung,dw-mshc-sdr-timing = <2 3 3>; 100 - samsung,dw-mshc-ddr-timing = <1 2 3>; 99 + samsung,dw-mshc-sdr-timing = <2 3>; 100 + samsung,dw-mshc-ddr-timing = <1 2>; 101 101 102 102 slot@0 { 103 103 reg = <0>; ··· 120 120 fifo-depth = <0x80>; 121 121 card-detect-delay = <200>; 122 122 samsung,dw-mshc-ciu-div = <3>; 123 - samsung,dw-mshc-sdr-timing = <2 3 3>; 124 - samsung,dw-mshc-ddr-timing = <1 2 3>; 123 + samsung,dw-mshc-sdr-timing = <2 3>; 124 + samsung,dw-mshc-ddr-timing = <1 2>; 125 125 126 126 slot@0 { 127 127 reg = <0>; ··· 141 141 fifo-depth = <0x80>; 142 142 card-detect-delay = <200>; 143 143 samsung,dw-mshc-ciu-div = <3>; 144 - samsung,dw-mshc-sdr-timing = <2 3 3>; 145 - samsung,dw-mshc-ddr-timing = <1 2 3>; 144 + samsung,dw-mshc-sdr-timing = <2 3>; 145 + samsung,dw-mshc-ddr-timing = <1 2>; 146 146 147 147 slot@0 { 148 148 reg = <0>;
+12 -2
arch/arm/boot/dts/dove-cubox.dts
··· 26 26 }; 27 27 28 28 &uart0 { status = "okay"; }; 29 - &sdio0 { status = "okay"; }; 30 29 &sata0 { status = "okay"; }; 31 30 &i2c0 { status = "okay"; }; 31 + 32 + &sdio0 { 33 + status = "okay"; 34 + /* sdio0 card detect is connected to wrong pin on CuBox */ 35 + cd-gpios = <&gpio0 12 1>; 36 + }; 32 37 33 38 &spi0 { 34 39 status = "okay"; ··· 47 42 }; 48 43 49 44 &pinctrl { 50 - pinctrl-0 = <&pmx_gpio_18>; 45 + pinctrl-0 = <&pmx_gpio_12 &pmx_gpio_18>; 51 46 pinctrl-names = "default"; 47 + 48 + pmx_gpio_12: pmx-gpio-12 { 49 + marvell,pins = "mpp12"; 50 + marvell,function = "gpio"; 51 + }; 52 52 53 53 pmx_gpio_18: pmx-gpio-18 { 54 54 marvell,pins = "mpp18";
+4 -4
arch/arm/boot/dts/exynos5250-smdk5250.dts
··· 115 115 fifo-depth = <0x80>; 116 116 card-detect-delay = <200>; 117 117 samsung,dw-mshc-ciu-div = <3>; 118 - samsung,dw-mshc-sdr-timing = <2 3 3>; 119 - samsung,dw-mshc-ddr-timing = <1 2 3>; 118 + samsung,dw-mshc-sdr-timing = <2 3>; 119 + samsung,dw-mshc-ddr-timing = <1 2>; 120 120 121 121 slot@0 { 122 122 reg = <0>; ··· 139 139 fifo-depth = <0x80>; 140 140 card-detect-delay = <200>; 141 141 samsung,dw-mshc-ciu-div = <3>; 142 - samsung,dw-mshc-sdr-timing = <2 3 3>; 143 - samsung,dw-mshc-ddr-timing = <1 2 3>; 142 + samsung,dw-mshc-sdr-timing = <2 3>; 143 + samsung,dw-mshc-ddr-timing = <1 2>; 144 144 145 145 slot@0 { 146 146 reg = <0>;
+16
arch/arm/boot/dts/kirkwood-ns2-common.dtsi
··· 1 1 /include/ "kirkwood.dtsi" 2 + /include/ "kirkwood-6281.dtsi" 2 3 3 4 / { 4 5 chosen { ··· 7 6 }; 8 7 9 8 ocp@f1000000 { 9 + pinctrl: pinctrl@10000 { 10 + pinctrl-0 = < &pmx_spi &pmx_twsi0 &pmx_uart0 11 + &pmx_ns2_sata0 &pmx_ns2_sata1>; 12 + pinctrl-names = "default"; 13 + 14 + pmx_ns2_sata0: pmx-ns2-sata0 { 15 + marvell,pins = "mpp21"; 16 + marvell,function = "sata0"; 17 + }; 18 + pmx_ns2_sata1: pmx-ns2-sata1 { 19 + marvell,pins = "mpp20"; 20 + marvell,function = "sata1"; 21 + }; 22 + }; 23 + 10 24 serial@12000 { 11 25 clock-frequency = <166666667>; 12 26 status = "okay";
+2
arch/arm/boot/dts/kirkwood.dtsi
··· 36 36 reg = <0x10100 0x40>; 37 37 ngpios = <32>; 38 38 interrupt-controller; 39 + #interrupt-cells = <2>; 39 40 interrupts = <35>, <36>, <37>, <38>; 40 41 }; 41 42 ··· 47 46 reg = <0x10140 0x40>; 48 47 ngpios = <18>; 49 48 interrupt-controller; 49 + #interrupt-cells = <2>; 50 50 interrupts = <39>, <40>, <41>; 51 51 }; 52 52
+2
arch/arm/boot/dts/kizbox.dts
··· 48 48 49 49 macb0: ethernet@fffc4000 { 50 50 phy-mode = "mii"; 51 + pinctrl-0 = <&pinctrl_macb_rmii 52 + &pinctrl_macb_rmii_mii_alt>; 51 53 status = "okay"; 52 54 }; 53 55
+4 -2
arch/arm/boot/dts/sunxi.dtsi
··· 60 60 }; 61 61 62 62 uart0: uart@01c28000 { 63 - compatible = "ns8250"; 63 + compatible = "snps,dw-apb-uart"; 64 64 reg = <0x01c28000 0x400>; 65 65 interrupts = <1>; 66 66 reg-shift = <2>; 67 + reg-io-width = <4>; 67 68 clock-frequency = <24000000>; 68 69 status = "disabled"; 69 70 }; 70 71 71 72 uart1: uart@01c28400 { 72 - compatible = "ns8250"; 73 + compatible = "snps,dw-apb-uart"; 73 74 reg = <0x01c28400 0x400>; 74 75 interrupts = <2>; 75 76 reg-shift = <2>; 77 + reg-io-width = <4>; 76 78 clock-frequency = <24000000>; 77 79 status = "disabled"; 78 80 };
-2
arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
··· 45 45 reg = <1>; 46 46 }; 47 47 48 - /* A7s disabled till big.LITTLE patches are available... 49 48 cpu2: cpu@2 { 50 49 device_type = "cpu"; 51 50 compatible = "arm,cortex-a7"; ··· 62 63 compatible = "arm,cortex-a7"; 63 64 reg = <0x102>; 64 65 }; 65 - */ 66 66 }; 67 67 68 68 memory@80000000 {
+2 -1
arch/arm/configs/at91_dt_defconfig
··· 19 19 CONFIG_SOC_AT91SAM9263=y 20 20 CONFIG_SOC_AT91SAM9G45=y 21 21 CONFIG_SOC_AT91SAM9X5=y 22 + CONFIG_SOC_AT91SAM9N12=y 22 23 CONFIG_MACH_AT91SAM_DT=y 23 24 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 24 25 CONFIG_AT91_TIMER_HZ=128 ··· 32 31 CONFIG_ZBOOT_ROM_BSS=0x0 33 32 CONFIG_ARM_APPENDED_DTB=y 34 33 CONFIG_ARM_ATAG_DTB_COMPAT=y 35 - CONFIG_CMDLINE="mem=128M console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw" 34 + CONFIG_CMDLINE="console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw" 36 35 CONFIG_KEXEC=y 37 36 CONFIG_AUTO_ZRELADDR=y 38 37 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+2
arch/arm/kernel/debug.S
··· 100 100 b 1b 101 101 ENDPROC(printch) 102 102 103 + #ifdef CONFIG_MMU 103 104 ENTRY(debug_ll_addr) 104 105 addruart r2, r3, ip 105 106 str r2, [r0] 106 107 str r3, [r1] 107 108 mov pc, lr 108 109 ENDPROC(debug_ll_addr) 110 + #endif 109 111 110 112 #else 111 113
+4 -1
arch/arm/kernel/head.S
··· 246 246 247 247 /* 248 248 * Then map boot params address in r2 if specified. 249 + * We map 2 sections in case the ATAGs/DTB crosses a section boundary. 249 250 */ 250 251 mov r0, r2, lsr #SECTION_SHIFT 251 252 movs r0, r0, lsl #SECTION_SHIFT ··· 254 253 addne r3, r3, #PAGE_OFFSET 255 254 addne r3, r4, r3, lsr #(SECTION_SHIFT - PMD_ORDER) 256 255 orrne r6, r7, r0 256 + strne r6, [r3], #1 << PMD_ORDER 257 + addne r6, r6, #1 << SECTION_SHIFT 257 258 strne r6, [r3] 258 259 259 260 #ifdef CONFIG_DEBUG_LL ··· 334 331 * as it has already been validated by the primary processor. 335 332 */ 336 333 #ifdef CONFIG_ARM_VIRT_EXT 337 - bl __hyp_stub_install 334 + bl __hyp_stub_install_secondary 338 335 #endif 339 336 safe_svcmode_maskall r9 340 337
+6 -12
arch/arm/kernel/hyp-stub.S
··· 99 99 * immediately. 100 100 */ 101 101 compare_cpu_mode_with_primary r4, r5, r6, r7 102 - bxne lr 102 + movne pc, lr 103 103 104 104 /* 105 105 * Once we have given up on one CPU, we do not try to install the ··· 111 111 */ 112 112 113 113 cmp r4, #HYP_MODE 114 - bxne lr @ give up if the CPU is not in HYP mode 114 + movne pc, lr @ give up if the CPU is not in HYP mode 115 115 116 116 /* 117 117 * Configure HSCTLR to set correct exception endianness/instruction set ··· 120 120 * Eventually, CPU-specific code might be needed -- assume not for now 121 121 * 122 122 * This code relies on the "eret" instruction to synchronize the 123 - * various coprocessor accesses. 123 + * various coprocessor accesses. This is done when we switch to SVC 124 + * (see safe_svcmode_maskall). 124 125 */ 125 126 @ Now install the hypervisor stub: 126 127 adr r7, __hyp_stub_vectors ··· 156 155 1: 157 156 #endif 158 157 159 - bic r7, r4, #MODE_MASK 160 - orr r7, r7, #SVC_MODE 161 - THUMB( orr r7, r7, #PSR_T_BIT ) 162 - msr spsr_cxsf, r7 @ This is SPSR_hyp. 163 - 164 - __MSR_ELR_HYP(14) @ msr elr_hyp, lr 165 - __ERET @ return, switching to SVC mode 166 - @ The boot CPU mode is left in r4. 158 + bx lr @ The boot CPU mode is left in r4. 167 159 ENDPROC(__hyp_stub_install_secondary) 168 160 169 161 __hyp_stub_do_trap: ··· 194 200 @ fall through 195 201 ENTRY(__hyp_set_vectors) 196 202 __HVC(0) 197 - bx lr 203 + mov pc, lr 198 204 ENDPROC(__hyp_set_vectors) 199 205 200 206 #ifndef ZIMAGE
+2
arch/arm/mach-at91/setup.c
··· 105 105 switch (socid) { 106 106 case ARCH_ID_AT91RM9200: 107 107 at91_soc_initdata.type = AT91_SOC_RM9200; 108 + if (at91_soc_initdata.subtype == AT91_SOC_SUBTYPE_NONE) 109 + at91_soc_initdata.subtype = AT91_SOC_RM9200_BGA; 108 110 at91_boot_soc = at91rm9200_soc; 109 111 break; 110 112
+1
arch/arm/mach-imx/Kconfig
··· 851 851 select HAVE_CAN_FLEXCAN if CAN 852 852 select HAVE_IMX_GPC 853 853 select HAVE_IMX_MMDC 854 + select HAVE_IMX_SRC 854 855 select HAVE_SMP 855 856 select MFD_SYSCON 856 857 select PINCTRL
+3 -3
arch/arm/mach-imx/clk-imx25.c
··· 254 254 clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2"); 255 255 clk_register_clkdev(clk[usbotg_ahb], "ahb", "mxc-ehci.2"); 256 256 clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.2"); 257 - clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc"); 258 - clk_register_clkdev(clk[usbotg_ahb], "ahb", "fsl-usb2-udc"); 259 - clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc"); 257 + clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27"); 258 + clk_register_clkdev(clk[usbotg_ahb], "ahb", "imx-udc-mx27"); 259 + clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27"); 260 260 clk_register_clkdev(clk[nfc_ipg_per], NULL, "imx25-nand.0"); 261 261 /* i.mx25 has the i.mx35 type cspi */ 262 262 clk_register_clkdev(clk[cspi1_ipg], NULL, "imx35-cspi.0");
+3 -3
arch/arm/mach-imx/clk-imx27.c
··· 236 236 clk_register_clkdev(clk[lcdc_ahb_gate], "ahb", "imx21-fb.0"); 237 237 clk_register_clkdev(clk[csi_ahb_gate], "ahb", "imx27-camera.0"); 238 238 clk_register_clkdev(clk[per4_gate], "per", "imx27-camera.0"); 239 - clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc"); 240 - clk_register_clkdev(clk[usb_ipg_gate], "ipg", "fsl-usb2-udc"); 241 - clk_register_clkdev(clk[usb_ahb_gate], "ahb", "fsl-usb2-udc"); 239 + clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27"); 240 + clk_register_clkdev(clk[usb_ipg_gate], "ipg", "imx-udc-mx27"); 241 + clk_register_clkdev(clk[usb_ahb_gate], "ahb", "imx-udc-mx27"); 242 242 clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.0"); 243 243 clk_register_clkdev(clk[usb_ipg_gate], "ipg", "mxc-ehci.0"); 244 244 clk_register_clkdev(clk[usb_ahb_gate], "ahb", "mxc-ehci.0");
+3 -3
arch/arm/mach-imx/clk-imx31.c
··· 139 139 clk_register_clkdev(clk[usb_div_post], "per", "mxc-ehci.2"); 140 140 clk_register_clkdev(clk[usb_gate], "ahb", "mxc-ehci.2"); 141 141 clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2"); 142 - clk_register_clkdev(clk[usb_div_post], "per", "fsl-usb2-udc"); 143 - clk_register_clkdev(clk[usb_gate], "ahb", "fsl-usb2-udc"); 144 - clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc"); 142 + clk_register_clkdev(clk[usb_div_post], "per", "imx-udc-mx27"); 143 + clk_register_clkdev(clk[usb_gate], "ahb", "imx-udc-mx27"); 144 + clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27"); 145 145 clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0"); 146 146 /* i.mx31 has the i.mx21 type uart */ 147 147 clk_register_clkdev(clk[uart1_gate], "per", "imx21-uart.0");
+3 -3
arch/arm/mach-imx/clk-imx35.c
··· 251 251 clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.2"); 252 252 clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2"); 253 253 clk_register_clkdev(clk[usbotg_gate], "ahb", "mxc-ehci.2"); 254 - clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc"); 255 - clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc"); 256 - clk_register_clkdev(clk[usbotg_gate], "ahb", "fsl-usb2-udc"); 254 + clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27"); 255 + clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27"); 256 + clk_register_clkdev(clk[usbotg_gate], "ahb", "imx-udc-mx27"); 257 257 clk_register_clkdev(clk[wdog_gate], NULL, "imx2-wdt.0"); 258 258 clk_register_clkdev(clk[nfc_div], NULL, "imx25-nand.0"); 259 259 clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0");
+3 -3
arch/arm/mach-imx/clk-imx51-imx53.c
··· 269 269 clk_register_clkdev(clk[usboh3_per_gate], "per", "mxc-ehci.2"); 270 270 clk_register_clkdev(clk[usboh3_gate], "ipg", "mxc-ehci.2"); 271 271 clk_register_clkdev(clk[usboh3_gate], "ahb", "mxc-ehci.2"); 272 - clk_register_clkdev(clk[usboh3_per_gate], "per", "fsl-usb2-udc"); 273 - clk_register_clkdev(clk[usboh3_gate], "ipg", "fsl-usb2-udc"); 274 - clk_register_clkdev(clk[usboh3_gate], "ahb", "fsl-usb2-udc"); 272 + clk_register_clkdev(clk[usboh3_per_gate], "per", "imx-udc-mx51"); 273 + clk_register_clkdev(clk[usboh3_gate], "ipg", "imx-udc-mx51"); 274 + clk_register_clkdev(clk[usboh3_gate], "ahb", "imx-udc-mx51"); 275 275 clk_register_clkdev(clk[nfc_gate], NULL, "imx51-nand"); 276 276 clk_register_clkdev(clk[ssi1_ipg_gate], NULL, "imx-ssi.0"); 277 277 clk_register_clkdev(clk[ssi2_ipg_gate], NULL, "imx-ssi.1");
+3
arch/arm/mach-imx/clk-imx6q.c
··· 436 436 for (i = 0; i < ARRAY_SIZE(clks_init_on); i++) 437 437 clk_prepare_enable(clk[clks_init_on[i]]); 438 438 439 + /* Set initial power mode */ 440 + imx6q_set_lpm(WAIT_CLOCKED); 441 + 439 442 np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpt"); 440 443 base = of_iomap(np, 0); 441 444 WARN_ON(!base);
+1
arch/arm/mach-imx/common.h
··· 142 142 extern void imx6q_clock_map_io(void); 143 143 144 144 extern void imx_cpu_die(unsigned int cpu); 145 + extern int imx_cpu_kill(unsigned int cpu); 145 146 146 147 #ifdef CONFIG_PM 147 148 extern void imx6q_pm_init(void);
+1
arch/arm/mach-imx/devices/devices-common.h
··· 63 63 64 64 #include <linux/fsl_devices.h> 65 65 struct imx_fsl_usb2_udc_data { 66 + const char *devid; 66 67 resource_size_t iobase; 67 68 resource_size_t irq; 68 69 };
+8 -7
arch/arm/mach-imx/devices/platform-fsl-usb2-udc.c
··· 11 11 #include "../hardware.h" 12 12 #include "devices-common.h" 13 13 14 - #define imx_fsl_usb2_udc_data_entry_single(soc) \ 14 + #define imx_fsl_usb2_udc_data_entry_single(soc, _devid) \ 15 15 { \ 16 + .devid = _devid, \ 16 17 .iobase = soc ## _USB_OTG_BASE_ADDR, \ 17 18 .irq = soc ## _INT_USB_OTG, \ 18 19 } 19 20 20 21 #ifdef CONFIG_SOC_IMX25 21 22 const struct imx_fsl_usb2_udc_data imx25_fsl_usb2_udc_data __initconst = 22 - imx_fsl_usb2_udc_data_entry_single(MX25); 23 + imx_fsl_usb2_udc_data_entry_single(MX25, "imx-udc-mx27"); 23 24 #endif /* ifdef CONFIG_SOC_IMX25 */ 24 25 25 26 #ifdef CONFIG_SOC_IMX27 26 27 const struct imx_fsl_usb2_udc_data imx27_fsl_usb2_udc_data __initconst = 27 - imx_fsl_usb2_udc_data_entry_single(MX27); 28 + imx_fsl_usb2_udc_data_entry_single(MX27, "imx-udc-mx27"); 28 29 #endif /* ifdef CONFIG_SOC_IMX27 */ 29 30 30 31 #ifdef CONFIG_SOC_IMX31 31 32 const struct imx_fsl_usb2_udc_data imx31_fsl_usb2_udc_data __initconst = 32 - imx_fsl_usb2_udc_data_entry_single(MX31); 33 + imx_fsl_usb2_udc_data_entry_single(MX31, "imx-udc-mx27"); 33 34 #endif /* ifdef CONFIG_SOC_IMX31 */ 34 35 35 36 #ifdef CONFIG_SOC_IMX35 36 37 const struct imx_fsl_usb2_udc_data imx35_fsl_usb2_udc_data __initconst = 37 - imx_fsl_usb2_udc_data_entry_single(MX35); 38 + imx_fsl_usb2_udc_data_entry_single(MX35, "imx-udc-mx27"); 38 39 #endif /* ifdef CONFIG_SOC_IMX35 */ 39 40 40 41 #ifdef CONFIG_SOC_IMX51 41 42 const struct imx_fsl_usb2_udc_data imx51_fsl_usb2_udc_data __initconst = 42 - imx_fsl_usb2_udc_data_entry_single(MX51); 43 + imx_fsl_usb2_udc_data_entry_single(MX51, "imx-udc-mx51"); 43 44 #endif 44 45 45 46 struct platform_device *__init imx_add_fsl_usb2_udc( ··· 58 57 .flags = IORESOURCE_IRQ, 59 58 }, 60 59 }; 61 - return imx_add_platform_device_dmamask("fsl-usb2-udc", -1, 60 + return imx_add_platform_device_dmamask(data->devid, -1, 62 61 res, ARRAY_SIZE(res), 63 62 pdata, sizeof(*pdata), DMA_BIT_MASK(32)); 64 63 }
+1 -1
arch/arm/mach-imx/devices/platform-imx-fb.c
··· 54 54 .flags = IORESOURCE_IRQ, 55 55 }, 56 56 }; 57 - return imx_add_platform_device_dmamask("imx-fb", 0, 57 + return imx_add_platform_device_dmamask(data->devid, 0, 58 58 res, ARRAY_SIZE(res), 59 59 pdata, sizeof(*pdata), DMA_BIT_MASK(32)); 60 60 }
+6 -4
arch/arm/mach-imx/hotplug.c
··· 46 46 void imx_cpu_die(unsigned int cpu) 47 47 { 48 48 cpu_enter_lowpower(); 49 - imx_enable_cpu(cpu, false); 49 + cpu_do_idle(); 50 + } 50 51 51 - /* spin here until hardware takes it down */ 52 - while (1) 53 - ; 52 + int imx_cpu_kill(unsigned int cpu) 53 + { 54 + imx_enable_cpu(cpu, false); 55 + return 1; 54 56 }
arch/arm/mach-imx/iram.h include/linux/platform_data/imx-iram.h
+1 -2
arch/arm/mach-imx/iram_alloc.c
··· 22 22 #include <linux/module.h> 23 23 #include <linux/spinlock.h> 24 24 #include <linux/genalloc.h> 25 - 26 - #include "iram.h" 25 + #include "linux/platform_data/imx-iram.h" 27 26 28 27 static unsigned long iram_phys_base; 29 28 static void __iomem *iram_virt_base;
+1
arch/arm/mach-imx/platsmp.c
··· 92 92 .smp_boot_secondary = imx_boot_secondary, 93 93 #ifdef CONFIG_HOTPLUG_CPU 94 94 .cpu_die = imx_cpu_die, 95 + .cpu_kill = imx_cpu_kill, 95 96 #endif 96 97 };
+1
arch/arm/mach-imx/pm-imx6q.c
··· 41 41 cpu_suspend(0, imx6q_suspend_finish); 42 42 imx_smp_prepare(); 43 43 imx_gpc_post_resume(); 44 + imx6q_set_lpm(WAIT_CLOCKED); 44 45 break; 45 46 default: 46 47 return -EINVAL;
+10 -4
arch/arm/mach-integrator/pci_v3.c
··· 475 475 { 476 476 int ret = 0; 477 477 478 + if (!ap_syscon_base) 479 + return -EINVAL; 480 + 478 481 if (nr == 0) { 479 482 sys->mem_offset = PHYS_PCI_MEM_BASE; 480 483 ret = pci_v3_setup_resources(sys); 481 - /* Remap the Integrator system controller */ 482 - ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100); 483 - if (!ap_syscon_base) 484 - return -EINVAL; 485 484 } 486 485 487 486 return ret; ··· 495 496 unsigned long flags; 496 497 unsigned int temp; 497 498 int ret; 499 + 500 + /* Remap the Integrator system controller */ 501 + ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100); 502 + if (!ap_syscon_base) { 503 + pr_err("unable to remap the AP syscon for PCIv3\n"); 504 + return; 505 + } 498 506 499 507 pcibios_min_mem = 0x00100000; 500 508
-38
arch/arm/mach-kirkwood/board-ns2.c
··· 18 18 #include <linux/gpio.h> 19 19 #include <linux/of.h> 20 20 #include "common.h" 21 - #include "mpp.h" 22 21 23 22 static struct mv643xx_eth_platform_data ns2_ge00_data = { 24 23 .phy_addr = MV643XX_ETH_PHY_ADDR(8), 25 - }; 26 - 27 - static unsigned int ns2_mpp_config[] __initdata = { 28 - MPP0_SPI_SCn, 29 - MPP1_SPI_MOSI, 30 - MPP2_SPI_SCK, 31 - MPP3_SPI_MISO, 32 - MPP4_NF_IO6, 33 - MPP5_NF_IO7, 34 - MPP6_SYSRST_OUTn, 35 - MPP7_GPO, /* Fan speed (bit 1) */ 36 - MPP8_TW0_SDA, 37 - MPP9_TW0_SCK, 38 - MPP10_UART0_TXD, 39 - MPP11_UART0_RXD, 40 - MPP12_GPO, /* Red led */ 41 - MPP14_GPIO, /* USB fuse */ 42 - MPP16_GPIO, /* SATA 0 power */ 43 - MPP17_GPIO, /* SATA 1 power */ 44 - MPP18_NF_IO0, 45 - MPP19_NF_IO1, 46 - MPP20_SATA1_ACTn, 47 - MPP21_SATA0_ACTn, 48 - MPP22_GPIO, /* Fan speed (bit 0) */ 49 - MPP23_GPIO, /* Fan power */ 50 - MPP24_GPIO, /* USB mode select */ 51 - MPP25_GPIO, /* Fan rotation fail */ 52 - MPP26_GPIO, /* USB device vbus */ 53 - MPP28_GPIO, /* USB enable host vbus */ 54 - MPP29_GPIO, /* Blue led (slow register) */ 55 - MPP30_GPIO, /* Blue led (command register) */ 56 - MPP31_GPIO, /* Board power off */ 57 - MPP32_GPIO, /* Power button (0 = Released, 1 = Pushed) */ 58 - MPP33_GPO, /* Fan speed (bit 2) */ 59 - 0 60 24 }; 61 25 62 26 #define NS2_GPIO_POWER_OFF 31 ··· 35 71 /* 36 72 * Basic setup. Needs to be called early. 37 73 */ 38 - kirkwood_mpp_conf(ns2_mpp_config); 39 - 40 74 if (of_machine_is_compatible("lacie,netspace_lite_v2") || 41 75 of_machine_is_compatible("lacie,netspace_mini_v2")) 42 76 ns2_ge00_data.phy_addr = MV643XX_ETH_PHY_ADDR(0);
+2
arch/arm/mach-mvebu/Makefile
··· 1 1 ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \ 2 2 -I$(srctree)/arch/arm/plat-orion/include 3 3 4 + AFLAGS_coherency_ll.o := -Wa,-march=armv7-a 5 + 4 6 obj-y += system-controller.o 5 7 obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o coherency.o coherency_ll.o pmsu.o 6 8 obj-$(CONFIG_SMP) += platsmp.o headsmp.o
+6
arch/arm/mach-omap2/board-omap4panda.c
··· 397 397 OMAP_PULL_ENA), 398 398 OMAP4_MUX(ABE_MCBSP1_FSX, OMAP_MUX_MODE0 | OMAP_PIN_INPUT), 399 399 400 + /* UART2 - BT/FM/GPS shared transport */ 401 + OMAP4_MUX(UART2_CTS, OMAP_PIN_INPUT | OMAP_MUX_MODE0), 402 + OMAP4_MUX(UART2_RTS, OMAP_PIN_OUTPUT | OMAP_MUX_MODE0), 403 + OMAP4_MUX(UART2_RX, OMAP_PIN_INPUT | OMAP_MUX_MODE0), 404 + OMAP4_MUX(UART2_TX, OMAP_PIN_OUTPUT | OMAP_MUX_MODE0), 405 + 400 406 { .reg_offset = OMAP_MUX_TERMINATOR }, 401 407 }; 402 408
+2
arch/arm/mach-omap2/cclock2420_data.c
··· 1935 1935 omap2_init_clk_hw_omap_clocks(c->lk.clk); 1936 1936 } 1937 1937 1938 + omap2xxx_clkt_vps_late_init(); 1939 + 1938 1940 omap2_clk_disable_autoidle_all(); 1939 1941 1940 1942 omap2_clk_enable_init_clocks(enable_init_clks,
+2
arch/arm/mach-omap2/cclock2430_data.c
··· 2050 2050 omap2_init_clk_hw_omap_clocks(c->lk.clk); 2051 2051 } 2052 2052 2053 + omap2xxx_clkt_vps_late_init(); 2054 + 2053 2055 omap2_clk_disable_autoidle_all(); 2054 2056 2055 2057 omap2_clk_enable_init_clocks(enable_init_clks,
+6 -7
arch/arm/mach-omap2/cclock44xx_data.c
··· 2026 2026 * On OMAP4460 the ABE DPLL fails to turn on if in idle low-power 2027 2027 * state when turning the ABE clock domain. Workaround this by 2028 2028 * locking the ABE DPLL on boot. 2029 + * Lock the ABE DPLL in any case to avoid issues with audio. 2029 2030 */ 2030 - if (cpu_is_omap446x()) { 2031 - rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck); 2032 - if (!rc) 2033 - rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ); 2034 - if (rc) 2035 - pr_err("%s: failed to configure ABE DPLL!\n", __func__); 2036 - } 2031 + rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck); 2032 + if (!rc) 2033 + rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ); 2034 + if (rc) 2035 + pr_err("%s: failed to configure ABE DPLL!\n", __func__); 2037 2036 2038 2037 return 0; 2039 2038 }
+1 -1
arch/arm/mach-omap2/devices.c
··· 639 639 return cnt; 640 640 } 641 641 642 - static void omap_init_ocp2scp(void) 642 + static void __init omap_init_ocp2scp(void) 643 643 { 644 644 struct omap_hwmod *oh; 645 645 struct platform_device *pdev;
+2 -1
arch/arm/mach-omap2/drm.c
··· 25 25 #include <linux/dma-mapping.h> 26 26 #include <linux/platform_data/omap_drm.h> 27 27 28 + #include "soc.h" 28 29 #include "omap_device.h" 29 30 #include "omap_hwmod.h" 30 31 ··· 57 56 oh->name); 58 57 } 59 58 60 - platform_data.omaprev = GET_OMAP_REVISION(); 59 + platform_data.omaprev = GET_OMAP_TYPE; 61 60 62 61 return platform_device_register(&omap_drm_device); 63 62
+5 -1
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
··· 2132 2132 * currently reset very early during boot, before I2C is 2133 2133 * available, so it doesn't seem that we have any choice in 2134 2134 * the kernel other than to avoid resetting it. 2135 + * 2136 + * Also, McPDM needs to be configured to NO_IDLE mode when it 2137 + * is in used otherwise vital clocks will be gated which 2138 + * results 'slow motion' audio playback. 2135 2139 */ 2136 - .flags = HWMOD_EXT_OPT_MAIN_CLK, 2140 + .flags = HWMOD_EXT_OPT_MAIN_CLK | HWMOD_SWSUP_SIDLE, 2137 2141 .mpu_irqs = omap44xx_mcpdm_irqs, 2138 2142 .sdma_reqs = omap44xx_mcpdm_sdma_reqs, 2139 2143 .main_clk = "mcpdm_fck",
+2 -6
arch/arm/mach-omap2/timer.c
··· 165 165 struct device_node *np; 166 166 167 167 for_each_matching_node(np, match) { 168 - if (!of_device_is_available(np)) { 169 - of_node_put(np); 168 + if (!of_device_is_available(np)) 170 169 continue; 171 - } 172 170 173 - if (property && !of_get_property(np, property, NULL)) { 174 - of_node_put(np); 171 + if (property && !of_get_property(np, property, NULL)) 175 172 continue; 176 - } 177 173 178 174 of_add_property(np, &device_disabled); 179 175 return np;
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410-module.c
··· 47 47 .bus_num = 0, 48 48 .chip_select = 0, 49 49 .mode = SPI_MODE_0, 50 - .irq = S3C_EINT(5), 50 + .irq = S3C_EINT(4), 51 51 .controller_data = &wm0010_spi_csinfo, 52 52 .platform_data = &wm0010_pdata, 53 53 },
+2
arch/arm/mach-s3c64xx/pm.c
··· 338 338 for (i = 0; i < ARRAY_SIZE(s3c64xx_pm_domains); i++) 339 339 pm_genpd_init(&s3c64xx_pm_domains[i]->pd, NULL, false); 340 340 341 + #ifdef CONFIG_S3C_DEV_FB 341 342 if (dev_get_platdata(&s3c_device_fb.dev)) 342 343 pm_genpd_add_device(&s3c64xx_pm_f.pd, &s3c_device_fb.dev); 344 + #endif 343 345 344 346 return 0; 345 347 }
+10 -8
arch/arm/mm/dma-mapping.c
··· 774 774 size_t size, enum dma_data_direction dir, 775 775 void (*op)(const void *, size_t, int)) 776 776 { 777 + unsigned long pfn; 778 + size_t left = size; 779 + 780 + pfn = page_to_pfn(page) + offset / PAGE_SIZE; 781 + offset %= PAGE_SIZE; 782 + 777 783 /* 778 784 * A single sg entry may refer to multiple physically contiguous 779 785 * pages. But we still need to process highmem pages individually. 780 786 * If highmem is not configured then the bulk of this loop gets 781 787 * optimized out. 782 788 */ 783 - size_t left = size; 784 789 do { 785 790 size_t len = left; 786 791 void *vaddr; 787 792 793 + page = pfn_to_page(pfn); 794 + 788 795 if (PageHighMem(page)) { 789 - if (len + offset > PAGE_SIZE) { 790 - if (offset >= PAGE_SIZE) { 791 - page += offset / PAGE_SIZE; 792 - offset %= PAGE_SIZE; 793 - } 796 + if (len + offset > PAGE_SIZE) 794 797 len = PAGE_SIZE - offset; 795 - } 796 798 vaddr = kmap_high_get(page); 797 799 if (vaddr) { 798 800 vaddr += offset; ··· 811 809 op(vaddr, len, dir); 812 810 } 813 811 offset = 0; 814 - page++; 812 + pfn++; 815 813 left -= len; 816 814 } while (left); 817 815 }
+1 -1
arch/arm/mm/mmu.c
··· 283 283 }, 284 284 [MT_MEMORY_SO] = { 285 285 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 286 - L_PTE_MT_UNCACHED, 286 + L_PTE_MT_UNCACHED | L_PTE_XN, 287 287 .prot_l1 = PMD_TYPE_TABLE, 288 288 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_S | 289 289 PMD_SECT_UNCACHED | PMD_SECT_XN,
+1 -1
arch/arm/plat-versatile/headsmp.S
··· 20 20 */ 21 21 ENTRY(versatile_secondary_startup) 22 22 mrc p15, 0, r0, c0, c0, 5 23 - and r0, r0, #15 23 + bic r0, #0xff000000 24 24 adr r4, 1f 25 25 ldmia r4, {r5, r6} 26 26 sub r4, r4, r5
+3 -3
arch/arm/vfp/entry.S
··· 22 22 @ IRQs disabled. 23 23 @ 24 24 ENTRY(do_vfp) 25 - #ifdef CONFIG_PREEMPT 25 + #ifdef CONFIG_PREEMPT_COUNT 26 26 ldr r4, [r10, #TI_PREEMPT] @ get preempt count 27 27 add r11, r4, #1 @ increment it 28 28 str r11, [r10, #TI_PREEMPT] ··· 35 35 ENDPROC(do_vfp) 36 36 37 37 ENTRY(vfp_null_entry) 38 - #ifdef CONFIG_PREEMPT 38 + #ifdef CONFIG_PREEMPT_COUNT 39 39 get_thread_info r10 40 40 ldr r4, [r10, #TI_PREEMPT] @ get preempt count 41 41 sub r11, r4, #1 @ decrement it ··· 53 53 54 54 __INIT 55 55 ENTRY(vfp_testing_entry) 56 - #ifdef CONFIG_PREEMPT 56 + #ifdef CONFIG_PREEMPT_COUNT 57 57 get_thread_info r10 58 58 ldr r4, [r10, #TI_PREEMPT] @ get preempt count 59 59 sub r11, r4, #1 @ decrement it
+2 -2
arch/arm/vfp/vfphw.S
··· 168 168 @ else it's one 32-bit instruction, so 169 169 @ always subtract 4 from the following 170 170 @ instruction address. 171 - #ifdef CONFIG_PREEMPT 171 + #ifdef CONFIG_PREEMPT_COUNT 172 172 get_thread_info r10 173 173 ldr r4, [r10, #TI_PREEMPT] @ get preempt count 174 174 sub r11, r4, #1 @ decrement it ··· 192 192 @ not recognised by VFP 193 193 194 194 DBGSTR "not VFP" 195 - #ifdef CONFIG_PREEMPT 195 + #ifdef CONFIG_PREEMPT_COUNT 196 196 get_thread_info r10 197 197 ldr r4, [r10, #TI_PREEMPT] @ get preempt count 198 198 sub r11, r4, #1 @ decrement it
+4 -1
arch/arm64/include/asm/elf.h
··· 26 26 27 27 typedef unsigned long elf_greg_t; 28 28 29 - #define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t)) 29 + #define ELF_NGREG (sizeof(struct user_pt_regs) / sizeof(elf_greg_t)) 30 + #define ELF_CORE_COPY_REGS(dest, regs) \ 31 + *(struct user_pt_regs *)&(dest) = (regs)->user_regs; 32 + 30 33 typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 31 34 typedef struct user_fpsimd_state elf_fpregset_t; 32 35
-27
arch/ia64/kernel/ptrace.c
··· 672 672 read_unlock(&tasklist_lock); 673 673 } 674 674 675 - static inline int 676 - thread_matches (struct task_struct *thread, unsigned long addr) 677 - { 678 - unsigned long thread_rbs_end; 679 - struct pt_regs *thread_regs; 680 - 681 - if (ptrace_check_attach(thread, 0) < 0) 682 - /* 683 - * If the thread is not in an attachable state, we'll 684 - * ignore it. The net effect is that if ADDR happens 685 - * to overlap with the portion of the thread's 686 - * register backing store that is currently residing 687 - * on the thread's kernel stack, then ptrace() may end 688 - * up accessing a stale value. But if the thread 689 - * isn't stopped, that's a problem anyhow, so we're 690 - * doing as well as we can... 691 - */ 692 - return 0; 693 - 694 - thread_regs = task_pt_regs(thread); 695 - thread_rbs_end = ia64_get_user_rbs_end(thread, thread_regs, NULL); 696 - if (!on_kernel_rbs(addr, thread_regs->ar_bspstore, thread_rbs_end)) 697 - return 0; 698 - 699 - return 1; /* looks like we've got a winner */ 700 - } 701 - 702 675 /* 703 676 * Write f32-f127 back to task->thread.fph if it has been modified. 704 677 */
+16
arch/m68k/include/asm/dma-mapping.h
··· 21 21 extern void dma_free_coherent(struct device *, size_t, 22 22 void *, dma_addr_t); 23 23 24 + static inline void *dma_alloc_attrs(struct device *dev, size_t size, 25 + dma_addr_t *dma_handle, gfp_t flag, 26 + struct dma_attrs *attrs) 27 + { 28 + /* attrs is not supported and ignored */ 29 + return dma_alloc_coherent(dev, size, dma_handle, flag); 30 + } 31 + 32 + static inline void dma_free_attrs(struct device *dev, size_t size, 33 + void *cpu_addr, dma_addr_t dma_handle, 34 + struct dma_attrs *attrs) 35 + { 36 + /* attrs is not supported and ignored */ 37 + dma_free_coherent(dev, size, cpu_addr, dma_handle); 38 + } 39 + 24 40 static inline void *dma_alloc_noncoherent(struct device *dev, size_t size, 25 41 dma_addr_t *handle, gfp_t flag) 26 42 {
+2
arch/m68k/include/asm/pgtable_no.h
··· 64 64 */ 65 65 #define VMALLOC_START 0 66 66 #define VMALLOC_END 0xffffffff 67 + #define KMAP_START 0 68 + #define KMAP_END 0xffffffff 67 69 68 70 #include <asm-generic/pgtable.h> 69 71
+1 -1
arch/m68k/include/asm/unistd.h
··· 4 4 #include <uapi/asm/unistd.h> 5 5 6 6 7 - #define NR_syscalls 348 7 + #define NR_syscalls 349 8 8 9 9 #define __ARCH_WANT_OLD_READDIR 10 10 #define __ARCH_WANT_OLD_STAT
+1
arch/m68k/include/uapi/asm/unistd.h
··· 353 353 #define __NR_process_vm_readv 345 354 354 #define __NR_process_vm_writev 346 355 355 #define __NR_kcmp 347 356 + #define __NR_finit_module 348 356 357 357 358 #endif /* _UAPI_ASM_M68K_UNISTD_H_ */
+1
arch/m68k/kernel/syscalltable.S
··· 368 368 .long sys_process_vm_readv /* 345 */ 369 369 .long sys_process_vm_writev 370 370 .long sys_kcmp 371 + .long sys_finit_module 371 372
+5 -3
arch/m68k/mm/init.c
··· 39 39 void *empty_zero_page; 40 40 EXPORT_SYMBOL(empty_zero_page); 41 41 42 + #if !defined(CONFIG_SUN3) && !defined(CONFIG_COLDFIRE) 43 + extern void init_pointer_table(unsigned long ptable); 44 + extern pmd_t *zero_pgtable; 45 + #endif 46 + 42 47 #ifdef CONFIG_MMU 43 48 44 49 pg_data_t pg_data_map[MAX_NUMNODES]; ··· 73 68 pg_data_map[node].bdata = bootmem_node_data + node; 74 69 node_set_online(node); 75 70 } 76 - 77 - extern void init_pointer_table(unsigned long ptable); 78 - extern pmd_t *zero_pgtable; 79 71 80 72 #else /* CONFIG_MMU */ 81 73
+3
arch/mips/bcm47xx/Kconfig
··· 8 8 select SSB_DRIVER_EXTIF 9 9 select SSB_EMBEDDED 10 10 select SSB_B43_PCI_BRIDGE if PCI 11 + select SSB_DRIVER_PCICORE if PCI 11 12 select SSB_PCICORE_HOSTMODE if PCI 12 13 select SSB_DRIVER_GPIO 14 + select GPIOLIB 13 15 default y 14 16 help 15 17 Add support for old Broadcom BCM47xx boards with Sonics Silicon Backplane support. ··· 27 25 select BCMA_HOST_PCI if PCI 28 26 select BCMA_DRIVER_PCI_HOSTMODE if PCI 29 27 select BCMA_DRIVER_GPIO 28 + select GPIOLIB 30 29 default y 31 30 help 32 31 Add support for new Broadcom BCM47xx boards with Broadcom specific Advanced Microcontroller Bus.
+5 -4
arch/mips/cavium-octeon/executive/cvmx-l2c.c
··· 30 30 * measurement, and debugging facilities. 31 31 */ 32 32 33 + #include <linux/compiler.h> 33 34 #include <linux/irqflags.h> 34 35 #include <asm/octeon/cvmx.h> 35 36 #include <asm/octeon/cvmx-l2c.h> ··· 286 285 */ 287 286 static void fault_in(uint64_t addr, int len) 288 287 { 289 - volatile char *ptr; 290 - volatile char dummy; 288 + char *ptr; 289 + 291 290 /* 292 291 * Adjust addr and length so we get all cache lines even for 293 292 * small ranges spanning two cache lines. 294 293 */ 295 294 len += addr & CVMX_CACHE_LINE_MASK; 296 295 addr &= ~CVMX_CACHE_LINE_MASK; 297 - ptr = (volatile char *)cvmx_phys_to_ptr(addr); 296 + ptr = cvmx_phys_to_ptr(addr); 298 297 /* 299 298 * Invalidate L1 cache to make sure all loads result in data 300 299 * being in L2. 301 300 */ 302 301 CVMX_DCACHE_INVALIDATE; 303 302 while (len > 0) { 304 - dummy += *ptr; 303 + ACCESS_ONCE(*ptr); 305 304 len -= CVMX_CACHE_LINE_SIZE; 306 305 ptr += CVMX_CACHE_LINE_SIZE; 307 306 }
arch/mips/include/asm/break.h arch/mips/include/uapi/asm/break.h
+1 -1
arch/mips/include/asm/dsp.h
··· 16 16 #include <asm/mipsregs.h> 17 17 18 18 #define DSP_DEFAULT 0x00000000 19 - #define DSP_MASK 0x3ff 19 + #define DSP_MASK 0x3f 20 20 21 21 #define __enable_dsp_hazard() \ 22 22 do { \
+1
arch/mips/include/asm/inst.h
··· 353 353 struct u_format u_format; 354 354 struct c_format c_format; 355 355 struct r_format r_format; 356 + struct p_format p_format; 356 357 struct f_format f_format; 357 358 struct ma_format ma_format; 358 359 struct b_format b_format;
+1 -1
arch/mips/include/asm/mach-pnx833x/war.h
··· 21 21 #define R10000_LLSC_WAR 0 22 22 #define MIPS34K_MISSED_ITLB_WAR 0 23 23 24 - #endif /* __ASM_MIPS_MACH_PNX8550_WAR_H */ 24 + #endif /* __ASM_MIPS_MACH_PNX833X_WAR_H */
+1
arch/mips/include/asm/pgtable-64.h
··· 230 230 #else 231 231 #define pte_pfn(x) ((unsigned long)((x).pte >> _PFN_SHIFT)) 232 232 #define pfn_pte(pfn, prot) __pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) 233 + #define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) 233 234 #endif 234 235 235 236 #define __pgd_offset(address) pgd_index(address)
+1
arch/mips/include/uapi/asm/Kbuild
··· 3 3 4 4 header-y += auxvec.h 5 5 header-y += bitsperlong.h 6 + header-y += break.h 6 7 header-y += byteorder.h 7 8 header-y += cachectl.h 8 9 header-y += errno.h
+35 -1
arch/mips/kernel/ftrace.c
··· 25 25 #define MCOUNT_OFFSET_INSNS 4 26 26 #endif 27 27 28 + /* Arch override because MIPS doesn't need to run this from stop_machine() */ 29 + void arch_ftrace_update_code(int command) 30 + { 31 + ftrace_modify_all_code(command); 32 + } 33 + 28 34 /* 29 35 * Check if the address is in kernel space 30 36 * ··· 95 89 return 0; 96 90 } 97 91 92 + #ifndef CONFIG_64BIT 93 + static int ftrace_modify_code_2(unsigned long ip, unsigned int new_code1, 94 + unsigned int new_code2) 95 + { 96 + int faulted; 97 + 98 + safe_store_code(new_code1, ip, faulted); 99 + if (unlikely(faulted)) 100 + return -EFAULT; 101 + ip += 4; 102 + safe_store_code(new_code2, ip, faulted); 103 + if (unlikely(faulted)) 104 + return -EFAULT; 105 + flush_icache_range(ip, ip + 8); /* original ip + 12 */ 106 + return 0; 107 + } 108 + #endif 109 + 98 110 /* 99 111 * The details about the calling site of mcount on MIPS 100 112 * ··· 155 131 * needed. 156 132 */ 157 133 new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F; 158 - 134 + #ifdef CONFIG_64BIT 159 135 return ftrace_modify_code(ip, new); 136 + #else 137 + /* 138 + * On 32 bit MIPS platforms, gcc adds a stack adjust 139 + * instruction in the delay slot after the branch to 140 + * mcount and expects mcount to restore the sp on return. 141 + * This is based on a legacy API and does nothing but 142 + * waste instructions so it's being removed at runtime. 143 + */ 144 + return ftrace_modify_code_2(ip, new, INSN_NOP); 145 + #endif 160 146 } 161 147 162 148 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+4 -3
arch/mips/kernel/mcount.S
··· 46 46 PTR_L a5, PT_R9(sp) 47 47 PTR_L a6, PT_R10(sp) 48 48 PTR_L a7, PT_R11(sp) 49 - PTR_ADDIU sp, PT_SIZE 50 49 #else 51 - PTR_ADDIU sp, (PT_SIZE + 8) 50 + PTR_ADDIU sp, PT_SIZE 52 51 #endif 53 52 .endm 54 53 ··· 68 69 .globl _mcount 69 70 _mcount: 70 71 b ftrace_stub 71 - nop 72 + addiu sp,sp,8 73 + 74 + /* When tracing is activated, it calls ftrace_caller+8 (aka here) */ 72 75 lw t1, function_trace_stop 73 76 bnez t1, ftrace_stub 74 77 nop
+1 -1
arch/mips/kernel/vpe.c
··· 705 705 706 706 printk(KERN_WARNING 707 707 "VPE loader: TC %d is already in use.\n", 708 - t->index); 708 + v->tc->index); 709 709 return -ENOEXEC; 710 710 } 711 711 } else {
+1 -1
arch/mips/lantiq/irq.c
··· 408 408 #endif 409 409 410 410 /* tell oprofile which irq to use */ 411 - cp0_perfcount_irq = LTQ_PERF_IRQ; 411 + cp0_perfcount_irq = irq_create_mapping(ltq_domain, LTQ_PERF_IRQ); 412 412 413 413 /* 414 414 * if the timer irq is not one of the mips irqs we need to
+1 -1
arch/mips/lib/delay.c
··· 21 21 " .set noreorder \n" 22 22 " .align 3 \n" 23 23 "1: bnez %0, 1b \n" 24 - #if __SIZEOF_LONG__ == 4 24 + #if BITS_PER_LONG == 32 25 25 " subu %0, 1 \n" 26 26 #else 27 27 " dsubu %0, 1 \n"
-6
arch/mips/mm/ioremap.c
··· 190 190 191 191 EXPORT_SYMBOL(__ioremap); 192 192 EXPORT_SYMBOL(__iounmap); 193 - 194 - int __virt_addr_valid(const volatile void *kaddr) 195 - { 196 - return pfn_valid(PFN_DOWN(virt_to_phys(kaddr))); 197 - } 198 - EXPORT_SYMBOL_GPL(__virt_addr_valid);
+6
arch/mips/mm/mmap.c
··· 192 192 193 193 return ret; 194 194 } 195 + 196 + int __virt_addr_valid(const volatile void *kaddr) 197 + { 198 + return pfn_valid(PFN_DOWN(virt_to_phys(kaddr))); 199 + } 200 + EXPORT_SYMBOL_GPL(__virt_addr_valid);
+4 -1
arch/mips/netlogic/xlr/setup.c
··· 193 193 194 194 void __init prom_init(void) 195 195 { 196 - int i, *argv, *envp; /* passed as 32 bit ptrs */ 196 + int *argv, *envp; /* passed as 32 bit ptrs */ 197 197 struct psb_info *prom_infop; 198 + #ifdef CONFIG_SMP 199 + int i; 200 + #endif 198 201 199 202 /* truncate to 32 bit and sign extend all args */ 200 203 argv = (int *)(long)(int)fw_arg1;
+1 -1
arch/mips/pci/pci-ar71xx.c
··· 24 24 #include <asm/mach-ath79/pci.h> 25 25 26 26 #define AR71XX_PCI_MEM_BASE 0x10000000 27 - #define AR71XX_PCI_MEM_SIZE 0x08000000 27 + #define AR71XX_PCI_MEM_SIZE 0x07000000 28 28 29 29 #define AR71XX_PCI_WIN0_OFFS 0x10000000 30 30 #define AR71XX_PCI_WIN1_OFFS 0x11000000
+1 -1
arch/mips/pci/pci-ar724x.c
··· 21 21 #define AR724X_PCI_CTRL_SIZE 0x100 22 22 23 23 #define AR724X_PCI_MEM_BASE 0x10000000 24 - #define AR724X_PCI_MEM_SIZE 0x08000000 24 + #define AR724X_PCI_MEM_SIZE 0x04000000 25 25 26 26 #define AR724X_PCI_REG_RESET 0x18 27 27 #define AR724X_PCI_REG_INT_STATUS 0x4c
+14 -6
arch/parisc/kernel/entry.S
··· 1865 1865 1866 1866 /* Are we being ptraced? */ 1867 1867 ldw TASK_FLAGS(%r1),%r19 1868 - ldi (_TIF_SINGLESTEP|_TIF_BLOCKSTEP),%r2 1868 + ldi _TIF_SYSCALL_TRACE_MASK,%r2 1869 1869 and,COND(=) %r19,%r2,%r0 1870 1870 b,n syscall_restore_rfi 1871 1871 ··· 1978 1978 /* sr2 should be set to zero for userspace syscalls */ 1979 1979 STREG %r0,TASK_PT_SR2(%r1) 1980 1980 1981 - pt_regs_ok: 1982 1981 LDREG TASK_PT_GR31(%r1),%r2 1983 - depi 3,31,2,%r2 /* ensure return to user mode. */ 1984 - STREG %r2,TASK_PT_IAOQ0(%r1) 1982 + depi 3,31,2,%r2 /* ensure return to user mode. */ 1983 + STREG %r2,TASK_PT_IAOQ0(%r1) 1985 1984 ldo 4(%r2),%r2 1986 1985 STREG %r2,TASK_PT_IAOQ1(%r1) 1987 - copy %r25,%r16 1988 1986 b intr_restore 1989 - nop 1987 + copy %r25,%r16 1988 + 1989 + pt_regs_ok: 1990 + LDREG TASK_PT_IAOQ0(%r1),%r2 1991 + depi 3,31,2,%r2 /* ensure return to user mode. */ 1992 + STREG %r2,TASK_PT_IAOQ0(%r1) 1993 + LDREG TASK_PT_IAOQ1(%r1),%r2 1994 + depi 3,31,2,%r2 1995 + STREG %r2,TASK_PT_IAOQ1(%r1) 1996 + b intr_restore 1997 + copy %r25,%r16 1990 1998 1991 1999 .import schedule,code 1992 2000 syscall_do_resched:
+4 -2
arch/parisc/kernel/irq.c
··· 410 410 { 411 411 local_irq_disable(); /* PARANOID - should already be disabled */ 412 412 mtctl(~0UL, 23); /* EIRR : clear all pending external intr */ 413 - claim_cpu_irqs(); 414 413 #ifdef CONFIG_SMP 415 - if (!cpu_eiem) 414 + if (!cpu_eiem) { 415 + claim_cpu_irqs(); 416 416 cpu_eiem = EIEM_MASK(IPI_IRQ) | EIEM_MASK(TIMER_IRQ); 417 + } 417 418 #else 419 + claim_cpu_irqs(); 418 420 cpu_eiem = EIEM_MASK(TIMER_IRQ); 419 421 #endif 420 422 set_eiem(cpu_eiem); /* EIEM : enable all external intr */
+1 -1
arch/parisc/kernel/ptrace.c
··· 26 26 #include <asm/asm-offsets.h> 27 27 28 28 /* PSW bits we allow the debugger to modify */ 29 - #define USER_PSW_BITS (PSW_N | PSW_V | PSW_CB) 29 + #define USER_PSW_BITS (PSW_N | PSW_B | PSW_V | PSW_CB) 30 30 31 31 /* 32 32 * Called by kernel/ptrace.c when detaching..
+3 -1
arch/parisc/kernel/signal.c
··· 190 190 DBG(1,"get_sigframe: ka = %#lx, sp = %#lx, frame_size = %#lx\n", 191 191 (unsigned long)ka, sp, frame_size); 192 192 193 + /* Align alternate stack and reserve 64 bytes for the signal 194 + handler's frame marker. */ 193 195 if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! sas_ss_flags(sp)) 194 - sp = current->sas_ss_sp; /* Stacks grow up! */ 196 + sp = (current->sas_ss_sp + 0x7f) & ~0x3f; /* Stacks grow up! */ 195 197 196 198 DBG(1,"get_sigframe: Returning sp = %#lx\n", (unsigned long)sp); 197 199 return (void __user *) sp; /* Stacks grow up. Fun. */
+5 -6
arch/parisc/math-emu/cnv_float.h
··· 347 347 Sgl_isinexact_to_fix(sgl_value,exponent) 348 348 349 349 #define Duint_from_sgl_mantissa(sgl_value,exponent,dresultA,dresultB) \ 350 - {Sall(sgl_value) <<= SGL_EXP_LENGTH; /* left-justify */ \ 350 + {unsigned int val = Sall(sgl_value) << SGL_EXP_LENGTH; \ 351 351 if (exponent <= 31) { \ 352 - Dintp1(dresultA) = 0; \ 353 - Dintp2(dresultB) = (unsigned)Sall(sgl_value) >> (31 - exponent); \ 352 + Dintp1(dresultA) = 0; \ 353 + Dintp2(dresultB) = val >> (31 - exponent); \ 354 354 } \ 355 355 else { \ 356 - Dintp1(dresultA) = Sall(sgl_value) >> (63 - exponent); \ 357 - Dintp2(dresultB) = Sall(sgl_value) << (exponent - 31); \ 356 + Dintp1(dresultA) = val >> (63 - exponent); \ 357 + Dintp2(dresultB) = exponent <= 62 ? val << (exponent - 31) : 0; \ 358 358 } \ 359 - Sall(sgl_value) >>= SGL_EXP_LENGTH; /* return to original */ \ 360 359 } 361 360 362 361 #define Duint_setzero(dresultA,dresultB) \
+2
arch/powerpc/kernel/entry_32.S
··· 439 439 ret_from_kernel_thread: 440 440 REST_NVGPRS(r1) 441 441 bl schedule_tail 442 + li r3,0 443 + stw r3,0(r1) 442 444 mtlr r14 443 445 mr r3,r15 444 446 PPC440EP_ERR42
+13
arch/powerpc/kernel/entry_64.S
··· 664 664 ld r4,TI_FLAGS(r9) 665 665 andi. r0,r4,_TIF_NEED_RESCHED 666 666 bne 1b 667 + 668 + /* 669 + * arch_local_irq_restore() from preempt_schedule_irq above may 670 + * enable hard interrupt but we really should disable interrupts 671 + * when we return from the interrupt, and so that we don't get 672 + * interrupted after loading SRR0/1. 673 + */ 674 + #ifdef CONFIG_PPC_BOOK3E 675 + wrteei 0 676 + #else 677 + ld r10,PACAKMSR(r13) /* Get kernel MSR without EE */ 678 + mtmsrd r10,1 /* Update machine state */ 679 + #endif /* CONFIG_PPC_BOOK3E */ 667 680 #endif /* CONFIG_PREEMPT */ 668 681 669 682 .globl fast_exc_return_irq
+3 -2
arch/powerpc/kernel/kgdb.c
··· 154 154 static int kgdb_singlestep(struct pt_regs *regs) 155 155 { 156 156 struct thread_info *thread_info, *exception_thread_info; 157 - struct thread_info *backup_current_thread_info = \ 158 - (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL); 157 + struct thread_info *backup_current_thread_info; 159 158 160 159 if (user_mode(regs)) 161 160 return 0; 162 161 162 + backup_current_thread_info = (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL); 163 163 /* 164 164 * On Book E and perhaps other processors, singlestep is handled on 165 165 * the critical exception stack. This causes current_thread_info() ··· 185 185 /* Restore current_thread_info lastly. */ 186 186 memcpy(exception_thread_info, backup_current_thread_info, sizeof *thread_info); 187 187 188 + kfree(backup_current_thread_info); 188 189 return 1; 189 190 } 190 191
+7 -2
arch/powerpc/kernel/time.c
··· 494 494 set_dec(DECREMENTER_MAX); 495 495 496 496 /* Some implementations of hotplug will get timer interrupts while 497 - * offline, just ignore these 497 + * offline, just ignore these and we also need to set 498 + * decrementers_next_tb as MAX to make sure __check_irq_replay 499 + * don't replay timer interrupt when return, otherwise we'll trap 500 + * here infinitely :( 498 501 */ 499 - if (!cpu_online(smp_processor_id())) 502 + if (!cpu_online(smp_processor_id())) { 503 + *next_tb = ~(u64)0; 500 504 return; 505 + } 501 506 502 507 /* Conditionally hard-enable interrupts now that the DEC has been 503 508 * bumped to its maximum value
+2
arch/powerpc/kvm/emulate.c
··· 39 39 #define OP_31_XOP_TRAP 4 40 40 #define OP_31_XOP_LWZX 23 41 41 #define OP_31_XOP_TRAP_64 68 42 + #define OP_31_XOP_DCBF 86 42 43 #define OP_31_XOP_LBZX 87 43 44 #define OP_31_XOP_STWX 151 44 45 #define OP_31_XOP_STBX 215 ··· 375 374 emulated = kvmppc_emulate_mtspr(vcpu, sprn, rs); 376 375 break; 377 376 377 + case OP_31_XOP_DCBF: 378 378 case OP_31_XOP_DCBI: 379 379 /* Do nothing. The guest is performing dcbi because 380 380 * hardware DMA is not snooped by the dcache, but
+35 -27
arch/powerpc/mm/hash_low_64.S
··· 115 115 sldi r29,r5,SID_SHIFT - VPN_SHIFT 116 116 rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT) 117 117 or r29,r28,r29 118 - 119 - /* Calculate hash value for primary slot and store it in r28 */ 120 - rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */ 121 - rldicl r0,r3,64-12,48 /* (ea >> 12) & 0xffff */ 122 - xor r28,r5,r0 118 + /* 119 + * Calculate hash value for primary slot and store it in r28 120 + * r3 = va, r5 = vsid 121 + * r0 = (va >> 12) & ((1ul << (28 - 12)) -1) 122 + */ 123 + rldicl r0,r3,64-12,48 124 + xor r28,r5,r0 /* hash */ 123 125 b 4f 124 126 125 127 3: /* Calc vpn and put it in r29 */ ··· 132 130 /* 133 131 * calculate hash value for primary slot and 134 132 * store it in r28 for 1T segment 133 + * r3 = va, r5 = vsid 135 134 */ 136 - rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */ 137 - clrldi r5,r5,40 /* vsid & 0xffffff */ 138 - rldicl r0,r3,64-12,36 /* (ea >> 12) & 0xfffffff */ 139 - xor r28,r28,r5 135 + sldi r28,r5,25 /* vsid << 25 */ 136 + /* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */ 137 + rldicl r0,r3,64-12,36 138 + xor r28,r28,r5 /* vsid ^ ( vsid << 25) */ 140 139 xor r28,r28,r0 /* hash */ 141 140 142 141 /* Convert linux PTE bits into HW equivalents */ ··· 410 407 */ 411 408 rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT) 412 409 or r29,r28,r29 413 - 414 - /* Calculate hash value for primary slot and store it in r28 */ 415 - rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */ 416 - rldicl r0,r3,64-12,48 /* (ea >> 12) & 0xffff */ 417 - xor r28,r5,r0 410 + /* 411 + * Calculate hash value for primary slot and store it in r28 412 + * r3 = va, r5 = vsid 413 + * r0 = (va >> 12) & ((1ul << (28 - 12)) -1) 414 + */ 415 + rldicl r0,r3,64-12,48 416 + xor r28,r5,r0 /* hash */ 418 417 b 4f 419 418 420 419 3: /* Calc vpn and put it in r29 */ ··· 431 426 /* 432 427 * Calculate hash value for primary slot and 433 428 * store it in r28 for 1T segment 429 + * r3 = va, r5 = vsid 434 430 */ 435 - rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */ 436 - clrldi r5,r5,40 /* vsid & 0xffffff */ 437 - rldicl r0,r3,64-12,36 /* (ea >> 12) & 0xfffffff */ 438 - xor r28,r28,r5 431 + sldi r28,r5,25 /* vsid << 25 */ 432 + /* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */ 433 + rldicl r0,r3,64-12,36 434 + xor r28,r28,r5 /* vsid ^ ( vsid << 25) */ 439 435 xor r28,r28,r0 /* hash */ 440 436 441 437 /* Convert linux PTE bits into HW equivalents */
··· 758 752 rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT) 759 753 or r29,r28,r29 760 754 761 - /* Calculate hash value for primary slot and store it in r28 */ 762 - rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */ 763 - rldicl r0,r3,64-16,52 /* (ea >> 16) & 0xfff */ 764 - xor r28,r5,r0 755 + /* Calculate hash value for primary slot and store it in r28 756 + * r3 = va, r5 = vsid 757 + * r0 = (va >> 16) & ((1ul << (28 - 16)) -1) 758 + */ 759 + rldicl r0,r3,64-16,52 760 + xor r28,r5,r0 /* hash */ 765 761 b 4f 766 762 767 763 3: /* Calc vpn and put it in r29 */ 768 764 sldi r29,r5,SID_SHIFT_1T - VPN_SHIFT 769 765 rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT_1T - VPN_SHIFT) 770 766 or r29,r28,r29 771 - 772 767 /* 773 768 * calculate hash value for primary slot and 774 769 * store it in r28 for 1T segment 770 + * r3 = va, r5 = vsid 775 771 */ 776 - rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */ 777 - clrldi r5,r5,40 /* vsid & 0xffffff */ 778 - rldicl r0,r3,64-16,40 /* (ea >> 16) & 0xffffff */ 779 - xor r28,r28,r5 772 + sldi r28,r5,25 /* vsid << 25 */ 773 + /* r0 = (va >> 16) & ((1ul << (40 - 16)) -1) */ 774 + rldicl r0,r3,64-16,40 775 + xor r28,r28,r5 /* vsid ^ ( vsid << 25) */ 780 776 xor r28,r28,r0 /* hash */ 781 777 782 778 /* Convert linux PTE bits into HW equivalents */
+1 -1
arch/powerpc/oprofile/op_model_power4.c
··· 52 52 for (pmc = 0; pmc < 4; pmc++) { 53 53 psel = mmcr1 & (OPROFILE_PM_PMCSEL_MSK 54 54 << (OPROFILE_MAX_PMC_NUM - pmc) 55 - * OPROFILE_MAX_PMC_NUM); 55 + * OPROFILE_PMSEL_FIELD_WIDTH); 56 56 psel = (psel >> ((OPROFILE_MAX_PMC_NUM - pmc) 57 57 * OPROFILE_PMSEL_FIELD_WIDTH)) & ~1ULL; 58 58 unit = mmcr1 & (OPROFILE_PM_UNIT_MSK
+7
arch/powerpc/platforms/pasemi/cpufreq.c
··· 236 236 237 237 static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy) 238 238 { 239 + /* 240 + * We don't support CPU hotplug. Don't unmap after the system 241 + * has already made it to a running state. 242 + */ 243 + if (system_state != SYSTEM_BOOTING) 244 + return 0; 245 + 239 246 if (sdcasr_mapbase) 240 247 iounmap(sdcasr_mapbase); 241 248 if (sdcpwr_mapbase)
+12
arch/s390/include/asm/pgtable.h
··· 1365 1365 __pmd_idte(address, pmdp); 1366 1366 } 1367 1367 1368 + #define __HAVE_ARCH_PMDP_SET_WRPROTECT 1369 + static inline void pmdp_set_wrprotect(struct mm_struct *mm, 1370 + unsigned long address, pmd_t *pmdp) 1371 + { 1372 + pmd_t pmd = *pmdp; 1373 + 1374 + if (pmd_write(pmd)) { 1375 + __pmd_idte(address, pmdp); 1376 + set_pmd_at(mm, address, pmdp, pmd_wrprotect(pmd)); 1377 + } 1378 + } 1379 + 1368 1380 static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot) 1369 1381 { 1370 1382 pmd_t __pmd;
+1
arch/x86/Kconfig
··· 2138 2138 config OLPC_XO1_SCI 2139 2139 bool "OLPC XO-1 SCI extras" 2140 2140 depends on OLPC && OLPC_XO1_PM 2141 + depends on INPUT=y 2141 2142 select POWER_SUPPLY 2142 2143 select GPIO_CS5535 2143 2144 select MFD_CORE
+2 -2
arch/x86/boot/Makefile
··· 71 71 $(obj)/bzImage: asflags-y := $(SVGA_MODE) 72 72 73 73 quiet_cmd_image = BUILD $@ 74 - cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin > $@ 74 + cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/zoffset.h > $@ 75 75 76 76 $(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/tools/build FORCE 77 77 $(call if_changed,image) ··· 92 92 $(obj)/voffset.h: vmlinux FORCE 93 93 $(call if_changed,voffset) 94 94 95 - sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p' 95 + sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|startup_64\|efi_pe_entry\|efi_stub_entry\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p' 96 96 97 97 quiet_cmd_zoffset = ZOFFSET $@ 98 98 cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
+11 -10
arch/x86/boot/compressed/eboot.c
··· 256 256 int i; 257 257 struct setup_data *data; 258 258 259 - data = (struct setup_data *)params->hdr.setup_data; 259 + data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 260 260 261 261 while (data && data->next) 262 - data = (struct setup_data *)data->next; 262 + data = (struct setup_data *)(unsigned long)data->next; 263 263 264 264 status = efi_call_phys5(sys_table->boottime->locate_handle, 265 265 EFI_LOCATE_BY_PROTOCOL, &pci_proto, ··· 295 295 if (!pci) 296 296 continue; 297 297 298 + #ifdef CONFIG_X86_64 298 299 status = efi_call_phys4(pci->attributes, pci, 299 300 EfiPciIoAttributeOperationGet, 0, 300 301 &attributes); 301 - 302 + #else 303 + status = efi_call_phys5(pci->attributes, pci, 304 + EfiPciIoAttributeOperationGet, 0, 0, 305 + &attributes); 306 + #endif 302 307 if (status != EFI_SUCCESS) 303 - continue; 304 - 305 - if (!(attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM)) 306 308 continue; 307 309 308 310 if (!pci->romimage || !pci->romsize) ··· 347 345 memcpy(rom->romdata, pci->romimage, pci->romsize); 348 346 349 347 if (data) 350 - data->next = (uint64_t)rom; 348 + data->next = (unsigned long)rom; 351 349 else 352 - params->hdr.setup_data = (uint64_t)rom; 350 + params->hdr.setup_data = (unsigned long)rom; 353 351 354 352 data = (struct setup_data *)rom; 355 353 ··· 434 432 * Once we've found a GOP supporting ConOut, 435 433 * don't bother looking any further. 436 434 */ 435 + first_gop = gop; 437 436 if (conout_found) 438 437 break; 439 - 440 - first_gop = gop; 441 438 } 442 439 } 443 440
+5 -3
arch/x86/boot/compressed/head_32.S
··· 35 35 #ifdef CONFIG_EFI_STUB 36 36 jmp preferred_addr 37 37 38 - .balign 0x10 39 38 /* 40 39 * We don't need the return address, so set up the stack so 41 - * efi_main() can find its arugments. 40 + * efi_main() can find its arguments. 42 41 */ 42 + ENTRY(efi_pe_entry) 43 43 add $0x4, %esp 44 44 45 45 call make_boot_params ··· 50 50 pushl %eax 51 51 pushl %esi 52 52 pushl %ecx 53 + sub $0x4, %esp 53 54 54 - .org 0x30,0x90 55 + ENTRY(efi_stub_entry) 56 + add $0x4, %esp 55 57 call efi_main 56 58 cmpl $0, %eax 57 59 movl %eax, %esi
+4 -4
arch/x86/boot/compressed/head_64.S
··· 201 201 */ 202 202 #ifdef CONFIG_EFI_STUB 203 203 /* 204 - * The entry point for the PE/COFF executable is 0x210, so only 205 - * legacy boot loaders will execute this jmp. 204 + * The entry point for the PE/COFF executable is efi_pe_entry, so 205 + * only legacy boot loaders will execute this jmp. 206 206 */ 207 207 jmp preferred_addr 208 208 209 - .org 0x210 209 + ENTRY(efi_pe_entry) 210 210 mov %rcx, %rdi 211 211 mov %rdx, %rsi 212 212 pushq %rdi ··· 218 218 popq %rsi 219 219 popq %rdi 220 220 221 - .org 0x230,0x90 221 + ENTRY(efi_stub_entry) 222 222 call efi_main 223 223 movq %rax,%rsi 224 224 cmpq $0,%rax
+29 -10
arch/x86/boot/header.S
··· 21 21 #include <asm/e820.h> 22 22 #include <asm/page_types.h> 23 23 #include <asm/setup.h> 24 + #include <asm/bootparam.h> 24 25 #include "boot.h" 25 26 #include "voffset.h" 26 27 #include "zoffset.h" ··· 256 255 # header, from the old boot sector. 257 256 258 257 .section ".header", "a" 258 + .globl sentinel 259 + sentinel: .byte 0xff, 0xff /* Used to detect broken loaders */ 260 + 259 261 .globl hdr 260 262 hdr: 261 263 setup_sects: .byte 0 /* Filled in by build.c */ ··· 283 279 # Part 2 of the header, from the old setup.S 284 280 285 281 .ascii "HdrS" # header signature 286 - .word 0x020b # header version number (>= 0x0105) 282 + .word 0x020c # header version number (>= 0x0105) 287 283 # or else old loadlin-1.5 will fail) 288 284 .globl realmode_swtch 289 285 realmode_swtch: .word 0, 0 # default_switch, SETUPSEG ··· 301 297 302 298 # flags, unused bits must be zero (RFU) bit within loadflags 303 299 loadflags: 304 - LOADED_HIGH = 1 # If set, the kernel is loaded high 305 - CAN_USE_HEAP = 0x80 # If set, the loader also has set 306 - # heap_end_ptr to tell how much 307 - # space behind setup.S can be used for 308 - # heap purposes. 309 - # Only the loader knows what is free 310 - .byte LOADED_HIGH 300 + .byte LOADED_HIGH # The kernel is to be loaded high 311 301 312 302 setup_move_size: .word 0x8000 # size to move, when setup is not 313 303 # loaded at 0x90000. 
We will move setup ··· 367 369 relocatable_kernel: .byte 0 368 370 #endif 369 371 min_alignment: .byte MIN_KERNEL_ALIGN_LG2 # minimum alignment 370 - pad3: .word 0 372 + 373 + xloadflags: 374 + #ifdef CONFIG_X86_64 375 + # define XLF0 XLF_KERNEL_64 /* 64-bit kernel */ 376 + #else 377 + # define XLF0 0 378 + #endif 379 + #ifdef CONFIG_EFI_STUB 380 + # ifdef CONFIG_X86_64 381 + # define XLF23 XLF_EFI_HANDOVER_64 /* 64-bit EFI handover ok */ 382 + # else 383 + # define XLF23 XLF_EFI_HANDOVER_32 /* 32-bit EFI handover ok */ 384 + # endif 385 + #else 386 + # define XLF23 0 387 + #endif 388 + .word XLF0 | XLF23 371 389 372 390 cmdline_size: .long COMMAND_LINE_SIZE-1 #length of the command line, 373 391 #added with boot protocol ··· 411 397 #define INIT_SIZE VO_INIT_SIZE 412 398 #endif 413 399 init_size: .long INIT_SIZE # kernel initialization size 414 - handover_offset: .long 0x30 # offset to the handover 400 + handover_offset: 401 + #ifdef CONFIG_EFI_STUB 402 + .long 0x30 # offset to the handover 415 403 # protocol entry point 404 + #else 405 + .long 0 406 + #endif 416 407 417 408 # End of setup header ##################################################### 418 409
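The new `xloadflags` word replaces `pad3` so a boot loader can discover, before jumping anywhere, whether the image is a 64-bit kernel and which EFI handover entry it exposes. A hedged sketch of how a loader might test the word (the helper is hypothetical; the bit values mirror the diff):

```c
#include <assert.h>
#include <stdint.h>

/* Bit values of the new xloadflags field, as defined in the diff. */
#define XLF_KERNEL_64			(1 << 0)
#define XLF_CAN_BE_LOADED_ABOVE_4G	(1 << 1)
#define XLF_EFI_HANDOVER_32		(1 << 2)
#define XLF_EFI_HANDOVER_64		(1 << 3)

/*
 * Illustrative loader-side check: a 64-bit kernel built with
 * CONFIG_EFI_STUB emits XLF_KERNEL_64 | XLF_EFI_HANDOVER_64,
 * so both bits must be present before using the 64-bit handover.
 */
static int supports_efi_handover_64(uint16_t xloadflags)
{
	uint16_t want = XLF_KERNEL_64 | XLF_EFI_HANDOVER_64;

	return (xloadflags & want) == want;
}
```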
+1 -1
arch/x86/boot/setup.ld
··· 13 13 .bstext : { *(.bstext) } 14 14 .bsdata : { *(.bsdata) } 15 15 16 - . = 497; 16 + . = 495; 17 17 .header : { *(.header) } 18 18 .entrytext : { *(.entrytext) } 19 19 .inittext : { *(.inittext) }
+63 -18
arch/x86/boot/tools/build.c
··· 52 52 53 53 #define PECOFF_RELOC_RESERVE 0x20 54 54 55 + unsigned long efi_stub_entry; 56 + unsigned long efi_pe_entry; 57 + unsigned long startup_64; 58 + 55 59 /*----------------------------------------------------------------------*/ 56 60 57 61 static const u32 crctab32[] = { ··· 136 132 137 133 static void usage(void) 138 134 { 139 - die("Usage: build setup system [> image]"); 135 + die("Usage: build setup system [zoffset.h] [> image]"); 140 136 } 141 137 142 138 #ifdef CONFIG_EFI_STUB ··· 210 206 */ 211 207 put_unaligned_le32(file_sz - 512, &buf[pe_header + 0x1c]); 212 208 213 - #ifdef CONFIG_X86_32 214 209 /* 215 - * Address of entry point. 216 - * 217 - * The EFI stub entry point is +16 bytes from the start of 218 - * the .text section. 210 + * Address of entry point for PE/COFF executable 219 211 */ 220 - put_unaligned_le32(text_start + 16, &buf[pe_header + 0x28]); 221 - #else 222 - /* 223 - * Address of entry point. startup_32 is at the beginning and 224 - * the 64-bit entry point (startup_64) is always 512 bytes 225 - * after. The EFI stub entry point is 16 bytes after that, as 226 - * the first instruction allows legacy loaders to jump over 227 - * the EFI stub initialisation 228 - */ 229 - put_unaligned_le32(text_start + 528, &buf[pe_header + 0x28]); 230 - #endif /* CONFIG_X86_32 */ 212 + put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]); 231 213 232 214 update_pecoff_section_header(".text", text_start, text_sz); 233 215 } 234 216 235 217 #endif /* CONFIG_EFI_STUB */ 218 + 219 + 220 + /* 221 + * Parse zoffset.h and find the entry points. We could just #include zoffset.h 222 + * but that would mean tools/build would have to be rebuilt every time. It's 223 + * not as if parsing it is hard... 
224 + */ 225 + #define PARSE_ZOFS(p, sym) do { \ 226 + if (!strncmp(p, "#define ZO_" #sym " ", 11+sizeof(#sym))) \ 227 + sym = strtoul(p + 11 + sizeof(#sym), NULL, 16); \ 228 + } while (0) 229 + 230 + static void parse_zoffset(char *fname) 231 + { 232 + FILE *file; 233 + char *p; 234 + int c; 235 + 236 + file = fopen(fname, "r"); 237 + if (!file) 238 + die("Unable to open `%s': %m", fname); 239 + c = fread(buf, 1, sizeof(buf) - 1, file); 240 + if (ferror(file)) 241 + die("read-error on `zoffset.h'"); 242 + buf[c] = 0; 243 + 244 + p = (char *)buf; 245 + 246 + while (p && *p) { 247 + PARSE_ZOFS(p, efi_stub_entry); 248 + PARSE_ZOFS(p, efi_pe_entry); 249 + PARSE_ZOFS(p, startup_64); 250 + 251 + p = strchr(p, '\n'); 252 + while (p && (*p == '\r' || *p == '\n')) 253 + p++; 254 + } 255 + } 236 256 237 257 int main(int argc, char ** argv) 238 258 { ··· 269 241 void *kernel; 270 242 u32 crc = 0xffffffffUL; 271 243 272 - if (argc != 3) 244 + /* Defaults for old kernel */ 245 + #ifdef CONFIG_X86_32 246 + efi_pe_entry = 0x10; 247 + efi_stub_entry = 0x30; 248 + #else 249 + efi_pe_entry = 0x210; 250 + efi_stub_entry = 0x230; 251 + startup_64 = 0x200; 252 + #endif 253 + 254 + if (argc == 4) 255 + parse_zoffset(argv[3]); 256 + else if (argc != 3) 273 257 usage(); 274 258 275 259 /* Copy the setup code */ ··· 339 299 340 300 #ifdef CONFIG_EFI_STUB 341 301 update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz)); 302 + 303 + #ifdef CONFIG_X86_64 /* Yes, this is really how we defined it :( */ 304 + efi_stub_entry -= 0x200; 305 + #endif 306 + put_unaligned_le32(efi_stub_entry, &buf[0x264]); 342 307 #endif 343 308 344 309 crc = partial_crc32(buf, i, crc);
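The `PARSE_ZOFS` macro above relies on every interesting line of zoffset.h having the fixed shape `#define ZO_<sym> 0x<hex>`, so a prefix match plus `strtoul()` is the whole parser. A minimal standalone sketch of that idea (`parse_zo` is an illustrative helper, not the kernel's):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Parse one zoffset.h-style line: return the hex value if the line
 * defines ZO_<sym>, else 0 (mirroring the "leave the default" behaviour
 * of PARSE_ZOFS when a symbol is absent).
 */
static unsigned long parse_zo(const char *line, const char *sym)
{
	char prefix[64];
	size_t n;

	snprintf(prefix, sizeof(prefix), "#define ZO_%s ", sym);
	n = strlen(prefix);
	if (strncmp(line, prefix, n) != 0)
		return 0;
	return strtoul(line + n, NULL, 16);
}
```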
+2 -2
arch/x86/ia32/ia32entry.S
··· 207 207 testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 208 208 jnz ia32_ret_from_sys_call 209 209 TRACE_IRQS_ON 210 - sti 210 + ENABLE_INTERRUPTS(CLBR_NONE) 211 211 movl %eax,%esi /* second arg, syscall return value */ 212 212 cmpl $-MAX_ERRNO,%eax /* is it an error ? */ 213 213 jbe 1f ··· 217 217 call __audit_syscall_exit 218 218 movq RAX-ARGOFFSET(%rsp),%rax /* reload syscall return value */ 219 219 movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),%edi 220 - cli 220 + DISABLE_INTERRUPTS(CLBR_NONE) 221 221 TRACE_IRQS_OFF 222 222 testl %edi,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 223 223 jz \exit
+1
arch/x86/include/asm/efi.h
··· 94 94 #endif /* CONFIG_X86_32 */ 95 95 96 96 extern int add_efi_memmap; 97 + extern unsigned long x86_efi_facility; 97 98 extern void efi_set_executable(efi_memory_desc_t *md, bool executable); 98 99 extern int efi_memblock_x86_reserve_range(void); 99 100 extern void efi_call_phys_prelog(void);
+1 -1
arch/x86/include/asm/uv/uv.h
··· 16 16 extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask, 17 17 struct mm_struct *mm, 18 18 unsigned long start, 19 - unsigned end, 19 + unsigned long end, 20 20 unsigned int cpu); 21 21 22 22 #else /* X86_UV */
+46 -17
arch/x86/include/uapi/asm/bootparam.h
··· 1 1 #ifndef _ASM_X86_BOOTPARAM_H 2 2 #define _ASM_X86_BOOTPARAM_H 3 3 4 + /* setup_data types */ 5 + #define SETUP_NONE 0 6 + #define SETUP_E820_EXT 1 7 + #define SETUP_DTB 2 8 + #define SETUP_PCI 3 9 + 10 + /* ram_size flags */ 11 + #define RAMDISK_IMAGE_START_MASK 0x07FF 12 + #define RAMDISK_PROMPT_FLAG 0x8000 13 + #define RAMDISK_LOAD_FLAG 0x4000 14 + 15 + /* loadflags */ 16 + #define LOADED_HIGH (1<<0) 17 + #define QUIET_FLAG (1<<5) 18 + #define KEEP_SEGMENTS (1<<6) 19 + #define CAN_USE_HEAP (1<<7) 20 + 21 + /* xloadflags */ 22 + #define XLF_KERNEL_64 (1<<0) 23 + #define XLF_CAN_BE_LOADED_ABOVE_4G (1<<1) 24 + #define XLF_EFI_HANDOVER_32 (1<<2) 25 + #define XLF_EFI_HANDOVER_64 (1<<3) 26 + 27 + #ifndef __ASSEMBLY__ 28 + 4 29 #include <linux/types.h> 5 30 #include <linux/screen_info.h> 6 31 #include <linux/apm_bios.h> ··· 33 8 #include <asm/e820.h> 34 9 #include <asm/ist.h> 35 10 #include <video/edid.h> 36 - 37 - /* setup data types */ 38 - #define SETUP_NONE 0 39 - #define SETUP_E820_EXT 1 40 - #define SETUP_DTB 2 41 - #define SETUP_PCI 3 42 11 43 12 /* extensible setup data list node */ 44 13 struct setup_data { ··· 47 28 __u16 root_flags; 48 29 __u32 syssize; 49 30 __u16 ram_size; 50 - #define RAMDISK_IMAGE_START_MASK 0x07FF 51 - #define RAMDISK_PROMPT_FLAG 0x8000 52 - #define RAMDISK_LOAD_FLAG 0x4000 53 31 __u16 vid_mode; 54 32 __u16 root_dev; 55 33 __u16 boot_flag; ··· 58 42 __u16 kernel_version; 59 43 __u8 type_of_loader; 60 44 __u8 loadflags; 61 - #define LOADED_HIGH (1<<0) 62 - #define QUIET_FLAG (1<<5) 63 - #define KEEP_SEGMENTS (1<<6) 64 - #define CAN_USE_HEAP (1<<7) 65 45 __u16 setup_move_size; 66 46 __u32 code32_start; 67 47 __u32 ramdisk_image; ··· 70 58 __u32 initrd_addr_max; 71 59 __u32 kernel_alignment; 72 60 __u8 relocatable_kernel; 73 - __u8 _pad2[3]; 61 + __u8 min_alignment; 62 + __u16 xloadflags; 74 63 __u32 cmdline_size; 75 64 __u32 hardware_subarch; 76 65 __u64 hardware_subarch_data; ··· 119 106 __u8 hd1_info[16]; /* obsolete! 
*/ /* 0x090 */ 120 107 struct sys_desc_table sys_desc_table; /* 0x0a0 */ 121 108 struct olpc_ofw_header olpc_ofw_header; /* 0x0b0 */ 122 - __u8 _pad4[128]; /* 0x0c0 */ 109 + __u32 ext_ramdisk_image; /* 0x0c0 */ 110 + __u32 ext_ramdisk_size; /* 0x0c4 */ 111 + __u32 ext_cmd_line_ptr; /* 0x0c8 */ 112 + __u8 _pad4[116]; /* 0x0cc */ 123 113 struct edid_info edid_info; /* 0x140 */ 124 114 struct efi_info efi_info; /* 0x1c0 */ 125 115 __u32 alt_mem_k; /* 0x1e0 */ ··· 131 115 __u8 eddbuf_entries; /* 0x1e9 */ 132 116 __u8 edd_mbr_sig_buf_entries; /* 0x1ea */ 133 117 __u8 kbd_status; /* 0x1eb */ 134 - __u8 _pad6[5]; /* 0x1ec */ 118 + __u8 _pad5[3]; /* 0x1ec */ 119 + /* 120 + * The sentinel is set to a nonzero value (0xff) in header.S. 121 + * 122 + * A bootloader is supposed to only take setup_header and put 123 + * it into a clean boot_params buffer. If it turns out that 124 + * it is clumsy or too generous with the buffer, it most 125 + * probably will pick up the sentinel variable too. The fact 126 + * that this variable then is still 0xff will let kernel 127 + * know that some variables in boot_params are invalid and 128 + * kernel should zero out certain portions of boot_params. 129 + */ 130 + __u8 sentinel; /* 0x1ef */ 131 + __u8 _pad6[1]; /* 0x1f0 */ 135 132 struct setup_header hdr; /* setup header */ /* 0x1f1 */ 136 133 __u8 _pad7[0x290-0x1f1-sizeof(struct setup_header)]; 137 134 __u32 edd_mbr_sig_buffer[EDD_MBR_SIG_MAX]; /* 0x290 */ ··· 163 134 X86_NR_SUBARCHS, 164 135 }; 165 136 166 - 137 + #endif /* __ASSEMBLY__ */ 167 138 168 139 #endif /* _ASM_X86_BOOTPARAM_H */
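The sentinel comment in the hunk above describes a neat trap: header.S plants 0xff in a byte that lies outside `setup_header`, so a loader that copies a whole stale page rather than just the header carries the 0xff along, telling the kernel its neighbouring fields are garbage. A toy model of that detection (the struct and helper are invented for illustration, not the real `boot_params` layout):

```c
#include <assert.h>
#include <string.h>

/*
 * Toy stand-in for boot_params: "untrusted" models the padded fields a
 * sloppy loader might copy along with the 0xff sentinel.
 */
struct toy_params {
	unsigned char sentinel;
	unsigned char untrusted[16];
};

/* Returns 1 if the buffer was stale and had to be scrubbed. */
static int sanitize(struct toy_params *bp)
{
	if (!bp->sentinel)
		return 0;	/* clean buffer from a well-behaved loader */
	memset(bp->untrusted, 0, sizeof(bp->untrusted));
	bp->sentinel = 0;
	return 1;
}
```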
+3 -4
arch/x86/kernel/cpu/intel_cacheinfo.c
··· 298 298 unsigned int); 299 299 }; 300 300 301 - #ifdef CONFIG_AMD_NB 302 - 301 + #if defined(CONFIG_AMD_NB) && defined(CONFIG_SYSFS) 303 302 /* 304 303 * L3 cache descriptors 305 304 */ ··· 523 524 static struct _cache_attr subcaches = 524 525 __ATTR(subcaches, 0644, show_subcaches, store_subcaches); 525 526 526 - #else /* CONFIG_AMD_NB */ 527 + #else 527 528 #define amd_init_l3_cache(x, y) 528 - #endif /* CONFIG_AMD_NB */ 529 + #endif /* CONFIG_AMD_NB && CONFIG_SYSFS */ 529 530 530 531 static int 531 532 __cpuinit cpuid4_cache_lookup_regs(int index,
-6
arch/x86/kernel/cpu/perf_event.c
··· 340 340 /* BTS is currently only allowed for user-mode. */ 341 341 if (!attr->exclude_kernel) 342 342 return -EOPNOTSUPP; 343 - 344 - if (!attr->exclude_guest) 345 - return -EOPNOTSUPP; 346 343 } 347 344 348 345 hwc->config |= config; ··· 381 384 { 382 385 if (event->attr.precise_ip) { 383 386 int precise = 0; 384 - 385 - if (!event->attr.exclude_guest) 386 - return -EOPNOTSUPP; 387 387 388 388 /* Support for constant skid */ 389 389 if (x86_pmu.pebs_active && !x86_pmu.pebs_broken) {
+5 -1
arch/x86/kernel/cpu/perf_event_intel.c
··· 2019 2019 break; 2020 2020 2021 2021 case 28: /* Atom */ 2022 - case 54: /* Cedariew */ 2022 + case 38: /* Lincroft */ 2023 + case 39: /* Penwell */ 2024 + case 53: /* Cloverview */ 2025 + case 54: /* Cedarview */ 2023 2026 memcpy(hw_cache_event_ids, atom_hw_cache_event_ids, 2024 2027 sizeof(hw_cache_event_ids)); 2025 2028 ··· 2087 2084 pr_cont("SandyBridge events, "); 2088 2085 break; 2089 2086 case 58: /* IvyBridge */ 2087 + case 62: /* IvyBridge EP */ 2090 2088 memcpy(hw_cache_event_ids, snb_hw_cache_event_ids, 2091 2089 sizeof(hw_cache_event_ids)); 2092 2090 memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs,
+1 -1
arch/x86/kernel/cpu/perf_event_p6.c
··· 19 19 20 20 }; 21 21 22 - static __initconst u64 p6_hw_cache_event_ids 22 + static u64 p6_hw_cache_event_ids 23 23 [PERF_COUNT_HW_CACHE_MAX] 24 24 [PERF_COUNT_HW_CACHE_OP_MAX] 25 25 [PERF_COUNT_HW_CACHE_RESULT_MAX] =
-1
arch/x86/kernel/entry_32.S
··· 1065 1065 lea 16(%esp),%esp 1066 1066 CFI_ADJUST_CFA_OFFSET -16 1067 1067 jz 5f 1068 - addl $16,%esp 1069 1068 jmp iret_exc 1070 1069 5: pushl_cfi $-1 /* orig_ax = -1 => not a system call */ 1071 1070 SAVE_ALL
+3 -4
arch/x86/kernel/entry_64.S
··· 1781 1781 * Leave room for the "copied" frame 1782 1782 */ 1783 1783 subq $(5*8), %rsp 1784 + CFI_ADJUST_CFA_OFFSET 5*8 1784 1785 1785 1786 /* Copy the stack frame to the Saved frame */ 1786 1787 .rept 5 ··· 1864 1863 nmi_swapgs: 1865 1864 SWAPGS_UNSAFE_STACK 1866 1865 nmi_restore: 1867 - RESTORE_ALL 8 1868 - 1869 - /* Pop the extra iret frame */ 1870 - addq $(5*8), %rsp 1866 + /* Pop the extra iret frame at once */ 1867 + RESTORE_ALL 6*8 1871 1868 1872 1869 /* Clear the NMI executing stack variable */ 1873 1870 movq $0, 5*8(%rsp)
+7 -2
arch/x86/kernel/head_32.S
··· 300 300 leal -__PAGE_OFFSET(%ecx),%esp 301 301 302 302 default_entry: 303 + #define CR0_STATE (X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \ 304 + X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \ 305 + X86_CR0_PG) 306 + movl $(CR0_STATE & ~X86_CR0_PG),%eax 307 + movl %eax,%cr0 308 + 303 309 /* 304 310 * New page tables may be in 4Mbyte page mode and may 305 311 * be using the global pages. ··· 370 364 */ 371 365 movl $pa(initial_page_table), %eax 372 366 movl %eax,%cr3 /* set the page table pointer.. */ 373 - movl %cr0,%eax 374 - orl $X86_CR0_PG,%eax 367 + movl $CR0_STATE,%eax 375 368 movl %eax,%cr0 /* ..and set paging (PG) bit */ 376 369 ljmp $__BOOT_CS,$1f /* Clear prefetch and normalize %eip */ 377 370 1:
+3
arch/x86/kernel/msr.c
··· 174 174 unsigned int cpu; 175 175 struct cpuinfo_x86 *c; 176 176 177 + if (!capable(CAP_SYS_RAWIO)) 178 + return -EPERM; 179 + 177 180 cpu = iminor(file->f_path.dentry->d_inode); 178 181 if (cpu >= nr_cpu_ids || !cpu_online(cpu)) 179 182 return -ENXIO; /* No such CPU */
+1 -1
arch/x86/kernel/pci-dma.c
··· 56 56 EXPORT_SYMBOL(x86_dma_fallback_dev); 57 57 58 58 /* Number of entries preallocated for DMA-API debugging */ 59 - #define PREALLOC_DMA_DEBUG_ENTRIES 32768 59 + #define PREALLOC_DMA_DEBUG_ENTRIES 65536 60 60 61 61 int dma_set_mask(struct device *dev, u64 mask) 62 62 {
+1 -1
arch/x86/kernel/reboot.c
··· 584 584 break; 585 585 586 586 case BOOT_EFI: 587 - if (efi_enabled) 587 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 588 588 efi.reset_system(reboot_mode ? 589 589 EFI_RESET_WARM : 590 590 EFI_RESET_COLD,
+14 -14
arch/x86/kernel/setup.c
··· 807 807 #ifdef CONFIG_EFI 808 808 if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature, 809 809 "EL32", 4)) { 810 - efi_enabled = 1; 811 - efi_64bit = false; 810 + set_bit(EFI_BOOT, &x86_efi_facility); 812 811 } else if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature, 813 812 "EL64", 4)) { 814 - efi_enabled = 1; 815 - efi_64bit = true; 813 + set_bit(EFI_BOOT, &x86_efi_facility); 814 + set_bit(EFI_64BIT, &x86_efi_facility); 816 815 } 817 - if (efi_enabled && efi_memblock_x86_reserve_range()) 818 - efi_enabled = 0; 816 + 817 + if (efi_enabled(EFI_BOOT)) 818 + efi_memblock_x86_reserve_range(); 819 819 #endif 820 820 821 821 x86_init.oem.arch_setup(); ··· 888 888 889 889 finish_e820_parsing(); 890 890 891 - if (efi_enabled) 891 + if (efi_enabled(EFI_BOOT)) 892 892 efi_init(); 893 893 894 894 dmi_scan_machine(); ··· 971 971 * The EFI specification says that boot service code won't be called 972 972 * after ExitBootServices(). This is, in fact, a lie. 973 973 */ 974 - if (efi_enabled) 974 + if (efi_enabled(EFI_MEMMAP)) 975 975 efi_reserve_boot_services(); 976 976 977 977 /* preallocate 4k for mptable mpc */ ··· 1114 1114 1115 1115 #ifdef CONFIG_VT 1116 1116 #if defined(CONFIG_VGA_CONSOLE) 1117 - if (!efi_enabled || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY)) 1117 + if (!efi_enabled(EFI_BOOT) || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY)) 1118 1118 conswitchp = &vga_con; 1119 1119 #elif defined(CONFIG_DUMMY_CONSOLE) 1120 1120 conswitchp = &dummy_con; ··· 1131 1131 register_refined_jiffies(CLOCK_TICK_RATE); 1132 1132 1133 1133 #ifdef CONFIG_EFI 1134 - /* Once setup is done above, disable efi_enabled on mismatched 1135 - * firmware/kernel archtectures since there is no support for 1136 - * runtime services. 1134 + /* Once setup is done above, unmap the EFI memory map on 1135 + * mismatched firmware/kernel archtectures since there is no 1136 + * support for runtime services. 
1137 1137 */ 1138 - if (efi_enabled && IS_ENABLED(CONFIG_X86_64) != efi_64bit) { 1138 + if (efi_enabled(EFI_BOOT) && 1139 + IS_ENABLED(CONFIG_X86_64) != efi_enabled(EFI_64BIT)) { 1139 1140 pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n"); 1140 1141 efi_unmap_memmap(); 1141 - efi_enabled = 0; 1142 1142 } 1143 1143 #endif 1144 1144 }
+5 -4
arch/x86/kernel/step.c
··· 165 165 * Ensure irq/preemption can't change debugctl in between. 166 166 * Note also that both TIF_BLOCKSTEP and debugctl should 167 167 * be changed atomically wrt preemption. 168 - * FIXME: this means that set/clear TIF_BLOCKSTEP is simply 169 - * wrong if task != current, SIGKILL can wakeup the stopped 170 - * tracee and set/clear can play with the running task, this 171 - * can confuse the next __switch_to_xtra(). 168 + * 169 + * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if 170 + * task is current or it can't be running, otherwise we can race 171 + * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but 172 + * PTRACE_KILL is not safe. 172 173 */ 173 174 local_irq_disable(); 174 175 debugctl = get_debugctlmsr();
+35 -24
arch/x86/platform/efi/efi.c
··· 51 51 52 52 #define EFI_DEBUG 1 53 53 54 - int efi_enabled; 55 - EXPORT_SYMBOL(efi_enabled); 56 - 57 54 struct efi __read_mostly efi = { 58 55 .mps = EFI_INVALID_TABLE_ADDR, 59 56 .acpi = EFI_INVALID_TABLE_ADDR, ··· 66 69 67 70 struct efi_memory_map memmap; 68 71 69 - bool efi_64bit; 70 - 71 72 static struct efi efi_phys __initdata; 72 73 static efi_system_table_t efi_systab __initdata; 73 74 74 75 static inline bool efi_is_native(void) 75 76 { 76 - return IS_ENABLED(CONFIG_X86_64) == efi_64bit; 77 + return IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT); 77 78 } 79 + 80 + unsigned long x86_efi_facility; 81 + 82 + /* 83 + * Returns 1 if 'facility' is enabled, 0 otherwise. 84 + */ 85 + int efi_enabled(int facility) 86 + { 87 + return test_bit(facility, &x86_efi_facility) != 0; 88 + } 89 + EXPORT_SYMBOL(efi_enabled); 78 90 79 91 static int __init setup_noefi(char *arg) 80 92 { 81 - efi_enabled = 0; 93 + clear_bit(EFI_BOOT, &x86_efi_facility); 82 94 return 0; 83 95 } 84 96 early_param("noefi", setup_noefi); ··· 432 426 433 427 void __init efi_unmap_memmap(void) 434 428 { 429 + clear_bit(EFI_MEMMAP, &x86_efi_facility); 435 430 if (memmap.map) { 436 431 early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size); 437 432 memmap.map = NULL; ··· 467 460 468 461 static int __init efi_systab_init(void *phys) 469 462 { 470 - if (efi_64bit) { 463 + if (efi_enabled(EFI_64BIT)) { 471 464 efi_system_table_64_t *systab64; 472 465 u64 tmp = 0; 473 466 ··· 559 552 void *config_tables, *tablep; 560 553 int i, sz; 561 554 562 - if (efi_64bit) 555 + if (efi_enabled(EFI_64BIT)) 563 556 sz = sizeof(efi_config_table_64_t); 564 557 else 565 558 sz = sizeof(efi_config_table_32_t); ··· 579 572 efi_guid_t guid; 580 573 unsigned long table; 581 574 582 - if (efi_64bit) { 575 + if (efi_enabled(EFI_64BIT)) { 583 576 u64 table64; 584 577 guid = ((efi_config_table_64_t *)tablep)->guid; 585 578 table64 = ((efi_config_table_64_t *)tablep)->table; ··· 691 684 if 
(boot_params.efi_info.efi_systab_hi || 692 685 boot_params.efi_info.efi_memmap_hi) { 693 686 pr_info("Table located above 4GB, disabling EFI.\n"); 694 - efi_enabled = 0; 695 687 return; 696 688 } 697 689 efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab; ··· 700 694 ((__u64)boot_params.efi_info.efi_systab_hi<<32)); 701 695 #endif 702 696 703 - if (efi_systab_init(efi_phys.systab)) { 704 - efi_enabled = 0; 697 + if (efi_systab_init(efi_phys.systab)) 705 698 return; 706 - } 699 + 700 + set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility); 707 701 708 702 /* 709 703 * Show what we know for posterity ··· 721 715 efi.systab->hdr.revision >> 16, 722 716 efi.systab->hdr.revision & 0xffff, vendor); 723 717 724 - if (efi_config_init(efi.systab->tables, efi.systab->nr_tables)) { 725 - efi_enabled = 0; 718 + if (efi_config_init(efi.systab->tables, efi.systab->nr_tables)) 726 719 return; 727 - } 720 + 721 + set_bit(EFI_CONFIG_TABLES, &x86_efi_facility); 728 722 729 723 /* 730 724 * Note: We currently don't support runtime services on an EFI ··· 733 727 734 728 if (!efi_is_native()) 735 729 pr_info("No EFI runtime due to 32/64-bit mismatch with kernel\n"); 736 - else if (efi_runtime_init()) { 737 - efi_enabled = 0; 738 - return; 730 + else { 731 + if (efi_runtime_init()) 732 + return; 733 + set_bit(EFI_RUNTIME_SERVICES, &x86_efi_facility); 739 734 } 740 735 741 - if (efi_memmap_init()) { 742 - efi_enabled = 0; 736 + if (efi_memmap_init()) 743 737 return; 744 - } 738 + 739 + set_bit(EFI_MEMMAP, &x86_efi_facility); 740 + 745 741 #ifdef CONFIG_X86_32 746 742 if (efi_is_native()) { 747 743 x86_platform.get_wallclock = efi_get_time; ··· 949 941 * 950 942 * Call EFI services through wrapper functions. 
951 943 */ 952 - efi.runtime_version = efi_systab.fw_revision; 944 + efi.runtime_version = efi_systab.hdr.revision; 953 945 efi.get_time = virt_efi_get_time; 954 946 efi.set_time = virt_efi_set_time; 955 947 efi.get_wakeup_time = virt_efi_get_wakeup_time; ··· 976 968 { 977 969 efi_memory_desc_t *md; 978 970 void *p; 971 + 972 + if (!efi_enabled(EFI_MEMMAP)) 973 + return 0; 979 974 980 975 for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { 981 976 md = p;
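The efi.c changes above replace the single `efi_enabled` boolean with a per-capability bitmask, so losing one facility (say, the memory map after `efi_unmap_memmap()`) no longer disables EFI wholesale. A simplified model of the scheme (the enum ordering and set/clear helpers are stand-ins for the kernel's `set_bit()`/`test_bit()`, assumed here for illustration):

```c
#include <assert.h>

/* One bit per EFI capability, loosely mirroring the facility names. */
enum efi_facility {
	EFI_BOOT,
	EFI_SYSTEM_TABLES,
	EFI_CONFIG_TABLES,
	EFI_RUNTIME_SERVICES,
	EFI_MEMMAP,
	EFI_64BIT,
};

static unsigned long x86_efi_facility;

static void facility_set(int f)   { x86_efi_facility |= 1UL << f; }
static void facility_clear(int f) { x86_efi_facility &= ~(1UL << f); }

/* Returns 1 if 'facility' is enabled, 0 otherwise. */
static int efi_enabled(int facility)
{
	return (x86_efi_facility >> facility) & 1;
}
```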
+17 -5
arch/x86/platform/efi/efi_64.c
··· 38 38 #include <asm/cacheflush.h> 39 39 #include <asm/fixmap.h> 40 40 41 - static pgd_t save_pgd __initdata; 41 + static pgd_t *save_pgd __initdata; 42 42 static unsigned long efi_flags __initdata; 43 43 44 44 static void __init early_code_mapping_set_exec(int executable) ··· 61 61 void __init efi_call_phys_prelog(void) 62 62 { 63 63 unsigned long vaddress; 64 + int pgd; 65 + int n_pgds; 64 66 65 67 early_code_mapping_set_exec(1); 66 68 local_irq_save(efi_flags); 67 - vaddress = (unsigned long)__va(0x0UL); 68 - save_pgd = *pgd_offset_k(0x0UL); 69 - set_pgd(pgd_offset_k(0x0UL), *pgd_offset_k(vaddress)); 69 + 70 + n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE); 71 + save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL); 72 + 73 + for (pgd = 0; pgd < n_pgds; pgd++) { 74 + save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE); 75 + vaddress = (unsigned long)__va(pgd * PGDIR_SIZE); 76 + set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress)); 77 + } 70 78 __flush_tlb_all(); 71 79 } 72 80 ··· 83 75 /* 84 76 * After the lock is released, the original page table is restored. 85 77 */ 86 - set_pgd(pgd_offset_k(0x0UL), save_pgd); 78 + int pgd; 79 + int n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE); 80 + for (pgd = 0; pgd < n_pgds; pgd++) 81 + set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), save_pgd[pgd]); 82 + kfree(save_pgd); 87 83 __flush_tlb_all(); 88 84 local_irq_restore(efi_flags); 89 85 early_code_mapping_set_exec(0);
+7 -3
arch/x86/platform/uv/tlb_uv.c
··· 1034 1034 * globally purge translation cache of a virtual address or all TLB's 1035 1035 * @cpumask: mask of all cpu's in which the address is to be removed 1036 1036 * @mm: mm_struct containing virtual address range 1037 - * @va: virtual address to be removed (or TLB_FLUSH_ALL for all TLB's on cpu) 1037 + * @start: start virtual address to be removed from TLB 1038 + * @end: end virtual address to be remove from TLB 1038 1039 * @cpu: the current cpu 1039 1040 * 1040 1041 * This is the entry point for initiating any UV global TLB shootdown. ··· 1057 1056 */ 1058 1057 const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask, 1059 1058 struct mm_struct *mm, unsigned long start, 1060 - unsigned end, unsigned int cpu) 1059 + unsigned long end, unsigned int cpu) 1061 1060 { 1062 1061 int locals = 0; 1063 1062 int remotes = 0; ··· 1114 1113 1115 1114 record_send_statistics(stat, locals, hubs, remotes, bau_desc); 1116 1115 1117 - bau_desc->payload.address = start; 1116 + if (!end || (end - start) <= PAGE_SIZE) 1117 + bau_desc->payload.address = start; 1118 + else 1119 + bau_desc->payload.address = TLB_FLUSH_ALL; 1118 1120 bau_desc->payload.sending_cpu = cpu; 1119 1121 /* 1120 1122 * uv_flush_send_and_wait returns 0 if all cpu's were messaged,
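With the interface now carrying a `start`/`end` range instead of a single address, the hunk above picks the BAU payload accordingly: a one-page range keeps the precise address, anything larger (or an unbounded `end` of 0 treated as a single page) falls back to a full flush. The decision in isolation, as a sketch (`flush_payload` is an illustrative name):

```c
#include <assert.h>

#define PAGE_SIZE	4096UL
#define TLB_FLUSH_ALL	(~0UL)

/*
 * Mirror of the new payload.address logic: flush one entry when the
 * range covers at most a page, otherwise purge the whole TLB.
 */
static unsigned long flush_payload(unsigned long start, unsigned long end)
{
	if (!end || (end - start) <= PAGE_SIZE)
		return start;
	return TLB_FLUSH_ALL;
}
```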
+8 -2
arch/x86/tools/insn_sanity.c
··· 55 55 static void usage(const char *err) 56 56 { 57 57 if (err) 58 - fprintf(stderr, "Error: %s\n\n", err); 58 + fprintf(stderr, "%s: Error: %s\n\n", prog, err); 59 59 fprintf(stderr, "Usage: %s [-y|-n|-v] [-s seed[,no]] [-m max] [-i input]\n", prog); 60 60 fprintf(stderr, "\t-y 64bit mode\n"); 61 61 fprintf(stderr, "\t-n 32bit mode\n"); ··· 269 269 insns++; 270 270 } 271 271 272 - fprintf(stdout, "%s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n", (errors) ? "Failure" : "Success", insns, (input_file) ? "given" : "random", errors, seed); 272 + fprintf(stdout, "%s: %s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n", 273 + prog, 274 + (errors) ? "Failure" : "Success", 275 + insns, 276 + (input_file) ? "given" : "random", 277 + errors, 278 + seed); 273 279 274 280 return errors ? 1 : 0; 275 281 }
+4 -2
arch/x86/tools/relocs.c
··· 814 814 read_relocs(fp); 815 815 if (show_absolute_syms) { 816 816 print_absolute_symbols(); 817 - return 0; 817 + goto out; 818 818 } 819 819 if (show_absolute_relocs) { 820 820 print_absolute_relocs(); 821 - return 0; 821 + goto out; 822 822 } 823 823 emit_relocs(as_text, use_real_mode); 824 + out: 825 + fclose(fp); 824 826 return 0; 825 827 }
-7
arch/x86/xen/smp.c
··· 432 432 play_dead_common(); 433 433 HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL); 434 434 cpu_bringup(); 435 - /* 436 - * Balance out the preempt calls - as we are running in cpu_idle 437 - * loop which has been called at bootup from cpu_bringup_and_idle. 438 - * The cpucpu_bringup_and_idle called cpu_bringup which made a 439 - * preempt_disable() So this preempt_enable will balance it out. 440 - */ 441 - preempt_enable(); 442 435 } 443 436 444 437 #else /* !CONFIG_HOTPLUG_CPU */
+3
drivers/acpi/apei/apei-base.c
··· 590 590 if (bit_width == 32 && bit_offset == 0 && (*paddr & 0x03) == 0 && 591 591 *access_bit_width < 32) 592 592 *access_bit_width = 32; 593 + else if (bit_width == 64 && bit_offset == 0 && (*paddr & 0x07) == 0 && 594 + *access_bit_width < 64) 595 + *access_bit_width = 64; 593 596 594 597 if ((bit_width + bit_offset) > *access_bit_width) { 595 598 pr_warning(FW_BUG APEI_PFX
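The added branch extends the existing 32-bit rule to 64 bits: a register with zero bit offset whose physical address is naturally aligned can be accessed in a single access of its full width. A userspace sketch of the widening rule (hypothetical helper, not the kernel function):

```c
#include <assert.h>

/* Widen the access when the register is naturally aligned: 4-byte
 * alignment for a 32-bit register, 8-byte alignment for a 64-bit one,
 * matching the two branches in the hunk above. */
static unsigned int apei_widen_access(unsigned int bit_width,
				      unsigned int bit_offset,
				      unsigned long long paddr,
				      unsigned int access_bit_width)
{
	if (bit_width == 32 && bit_offset == 0 && (paddr & 0x03) == 0 &&
	    access_bit_width < 32)
		return 32;
	if (bit_width == 64 && bit_offset == 0 && (paddr & 0x07) == 0 &&
	    access_bit_width < 64)
		return 64;
	return access_bit_width;
}
```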
+1 -1
drivers/acpi/osl.c
··· 250 250 return acpi_rsdp; 251 251 #endif 252 252 253 - if (efi_enabled) { 253 + if (efi_enabled(EFI_CONFIG_TABLES)) { 254 254 if (efi.acpi20 != EFI_INVALID_TABLE_ADDR) 255 255 return efi.acpi20; 256 256 else if (efi.acpi != EFI_INVALID_TABLE_ADDR)
+4
drivers/acpi/processor_idle.c
··· 958 958 return -EINVAL; 959 959 } 960 960 961 + if (!dev) 962 + return -EINVAL; 963 + 961 964 dev->cpu = pr->id; 962 965 963 966 if (max_cstate == 0) ··· 1152 1149 } 1153 1150 1154 1151 /* Populate Updated C-state information */ 1152 + acpi_processor_get_power_info(pr); 1155 1153 acpi_processor_setup_cpuidle_states(pr); 1156 1154 1157 1155 /* Enable all cpuidle devices */
+7
drivers/acpi/processor_perflib.c
··· 340 340 if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10) 341 341 || boot_cpu_data.x86 == 0x11) { 342 342 rdmsr(MSR_AMD_PSTATE_DEF_BASE + index, lo, hi); 343 + /* 344 + * MSR C001_0064+: 345 + * Bit 63: PstateEn. Read-write. If set, the P-state is valid. 346 + */ 347 + if (!(hi & BIT(31))) 348 + return; 349 + 343 350 fid = lo & 0x3f; 344 351 did = (lo >> 6) & 7; 345 352 if (boot_cpu_data.x86 == 0x10)
+7 -1
drivers/ata/ahci.c
··· 53 53 54 54 enum { 55 55 AHCI_PCI_BAR_STA2X11 = 0, 56 + AHCI_PCI_BAR_ENMOTUS = 2, 56 57 AHCI_PCI_BAR_STANDARD = 5, 57 58 }; 58 59 ··· 410 409 { PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci }, /* ASM1060 */ 411 410 { PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */ 412 411 { PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */ 412 + 413 + /* Enmotus */ 414 + { PCI_DEVICE(0x1c44, 0x8000), board_ahci }, 413 415 414 416 /* Generic, PCI class code for AHCI */ 415 417 { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, ··· 1102 1098 dev_info(&pdev->dev, 1103 1099 "PDC42819 can only drive SATA devices with this driver\n"); 1104 1100 1105 - /* The Connext uses non-standard BAR */ 1101 + /* Both Connext and Enmotus devices use non-standard BARs */ 1106 1102 if (pdev->vendor == PCI_VENDOR_ID_STMICRO && pdev->device == 0xCC06) 1107 1103 ahci_pci_bar = AHCI_PCI_BAR_STA2X11; 1104 + else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000) 1105 + ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS; 1108 1106 1109 1107 /* acquire resources */ 1110 1108 rc = pcim_enable_device(pdev);
+3 -3
drivers/ata/libahci.c
··· 1951 1951 /* Use the nominal value 10 ms if the read MDAT is zero, 1952 1952 * the nominal value of DETO is 20 ms. 1953 1953 */ 1954 - if (dev->sata_settings[ATA_LOG_DEVSLP_VALID] & 1954 + if (dev->devslp_timing[ATA_LOG_DEVSLP_VALID] & 1955 1955 ATA_LOG_DEVSLP_VALID_MASK) { 1956 - mdat = dev->sata_settings[ATA_LOG_DEVSLP_MDAT] & 1956 + mdat = dev->devslp_timing[ATA_LOG_DEVSLP_MDAT] & 1957 1957 ATA_LOG_DEVSLP_MDAT_MASK; 1958 1958 if (!mdat) 1959 1959 mdat = 10; 1960 - deto = dev->sata_settings[ATA_LOG_DEVSLP_DETO]; 1960 + deto = dev->devslp_timing[ATA_LOG_DEVSLP_DETO]; 1961 1961 if (!deto) 1962 1962 deto = 20; 1963 1963 } else {
+13 -9
drivers/ata/libata-core.c
··· 2325 2325 } 2326 2326 } 2327 2327 2328 - /* check and mark DevSlp capability */ 2329 - if (ata_id_has_devslp(dev->id)) 2330 - dev->flags |= ATA_DFLAG_DEVSLP; 2331 - 2332 - /* Obtain SATA Settings page from Identify Device Data Log, 2333 - * which contains DevSlp timing variables etc. 2334 - * Exclude old devices with ata_id_has_ncq() 2328 + /* Check and mark DevSlp capability. Get DevSlp timing variables 2329 + * from SATA Settings page of Identify Device Data Log. 2335 2330 */ 2336 - if (ata_id_has_ncq(dev->id)) { 2331 + if (ata_id_has_devslp(dev->id)) { 2332 + u8 sata_setting[ATA_SECT_SIZE]; 2333 + int i, j; 2334 + 2335 + dev->flags |= ATA_DFLAG_DEVSLP; 2337 2336 err_mask = ata_read_log_page(dev, 2338 2337 ATA_LOG_SATA_ID_DEV_DATA, 2339 2338 ATA_LOG_SATA_SETTINGS, 2340 - dev->sata_settings, 2339 + sata_setting, 2341 2340 1); 2342 2341 if (err_mask) 2343 2342 ata_dev_dbg(dev, 2344 2343 "failed to get Identify Device Data, Emask 0x%x\n", 2345 2344 err_mask); 2345 + else 2346 + for (i = 0; i < ATA_LOG_DEVSLP_SIZE; i++) { 2347 + j = ATA_LOG_DEVSLP_OFFSET + i; 2348 + dev->devslp_timing[i] = sata_setting[j]; 2349 + } 2346 2350 } 2347 2351 2348 2352 dev->cdb_len = 16;
+1 -1
drivers/ata/libata-eh.c
··· 2094 2094 */ 2095 2095 static inline int ata_eh_worth_retry(struct ata_queued_cmd *qc) 2096 2096 { 2097 - if (qc->flags & AC_ERR_MEDIA) 2097 + if (qc->err_mask & AC_ERR_MEDIA) 2098 2098 return 0; /* don't retry media errors */ 2099 2099 if (qc->flags & ATA_QCFLAG_IO) 2100 2100 return 1; /* otherwise retry anything from fs stack */
-2
drivers/base/regmap/regmap-debugfs.c
··· 121 121 c->max = p - 1; 122 122 list_add_tail(&c->list, 123 123 &map->debugfs_off_cache); 124 - } else { 125 - return base; 126 124 } 127 125 128 126 /*
+1 -1
drivers/base/regmap/regmap.c
··· 1106 1106 * @val_count: Number of registers to write 1107 1107 * 1108 1108 * This function is intended to be used for writing a large block of 1109 - * data to be device either in single transfer or multiple transfer. 1109 + * data to the device either in single transfer or multiple transfer. 1110 1110 * 1111 1111 * A value of zero will be returned on success, a negative errno will 1112 1112 * be returned in error cases.
+6 -1
drivers/block/virtio_blk.c
··· 889 889 { 890 890 struct virtio_blk *vblk = vdev->priv; 891 891 int index = vblk->index; 892 + int refc; 892 893 893 894 /* Prevent config work handler from accessing the device. */ 894 895 mutex_lock(&vblk->config_lock); ··· 904 903 905 904 flush_work(&vblk->config_work); 906 905 906 + refc = atomic_read(&disk_to_dev(vblk->disk)->kobj.kref.refcount); 907 907 put_disk(vblk->disk); 908 908 mempool_destroy(vblk->pool); 909 909 vdev->config->del_vqs(vdev); 910 910 kfree(vblk); 911 - ida_simple_remove(&vd_index_ida, index); 911 + 912 + /* Only free device id if we don't have any users */ 913 + if (refc == 1) 914 + ida_simple_remove(&vd_index_ida, index); 912 915 } 913 916 914 917 #ifdef CONFIG_PM
+10
drivers/bluetooth/ath3k.c
··· 77 77 { USB_DEVICE(0x0CF3, 0x311D) }, 78 78 { USB_DEVICE(0x13d3, 0x3375) }, 79 79 { USB_DEVICE(0x04CA, 0x3005) }, 80 + { USB_DEVICE(0x04CA, 0x3006) }, 81 + { USB_DEVICE(0x04CA, 0x3008) }, 80 82 { USB_DEVICE(0x13d3, 0x3362) }, 81 83 { USB_DEVICE(0x0CF3, 0xE004) }, 82 84 { USB_DEVICE(0x0930, 0x0219) }, 83 85 { USB_DEVICE(0x0489, 0xe057) }, 86 + { USB_DEVICE(0x13d3, 0x3393) }, 87 + { USB_DEVICE(0x0489, 0xe04e) }, 88 + { USB_DEVICE(0x0489, 0xe056) }, 84 89 85 90 /* Atheros AR5BBU12 with sflash firmware */ 86 91 { USB_DEVICE(0x0489, 0xE02C) }, ··· 109 104 { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, 110 105 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 111 106 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 107 + { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 }, 108 + { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 }, 112 109 { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, 113 110 { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, 114 111 { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 }, 115 112 { USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 }, 113 + { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 }, 114 + { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 115 + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 116 116 117 117 /* Atheros AR5BBU22 with sflash firmware */ 118 118 { USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 },
+5
drivers/bluetooth/btusb.c
··· 135 135 { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 136 136 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 137 137 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 138 + { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 }, 139 + { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 }, 138 140 { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, 139 141 { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, 140 142 { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 }, 141 143 { USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 }, 144 + { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 }, 145 + { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 146 + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 142 147 143 148 /* Atheros AR5BBU12 with sflash firmware */ 144 149 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
+6 -3
drivers/clk/mvebu/clk-cpu.c
··· 124 124 125 125 clks = kzalloc(ncpus * sizeof(*clks), GFP_KERNEL); 126 126 if (WARN_ON(!clks)) 127 - return; 127 + goto clks_out; 128 128 129 129 for_each_node_by_type(dn, "cpu") { 130 130 struct clk_init_data init; ··· 134 134 int cpu, err; 135 135 136 136 if (WARN_ON(!clk_name)) 137 - return; 137 + goto bail_out; 138 138 139 139 err = of_property_read_u32(dn, "reg", &cpu); 140 140 if (WARN_ON(err)) 141 - return; 141 + goto bail_out; 142 142 143 143 sprintf(clk_name, "cpu%d", cpu); 144 144 parent_clk = of_clk_get(node, 0); ··· 167 167 return; 168 168 bail_out: 169 169 kfree(clks); 170 + while(ncpus--) 171 + kfree(cpuclk[ncpus].clk_name); 172 + clks_out: 170 173 kfree(cpuclk); 171 174 } 172 175
+1 -1
drivers/cpufreq/Kconfig.x86
··· 106 106 config X86_POWERNOW_K8 107 107 tristate "AMD Opteron/Athlon64 PowerNow!" 108 108 select CPU_FREQ_TABLE 109 - depends on ACPI && ACPI_PROCESSOR 109 + depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ 110 110 help 111 111 This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors. 112 112 Support for K10 and newer processors is now in acpi-cpufreq.
+7
drivers/cpufreq/acpi-cpufreq.c
··· 1030 1030 late_initcall(acpi_cpufreq_init); 1031 1031 module_exit(acpi_cpufreq_exit); 1032 1032 1033 + static const struct x86_cpu_id acpi_cpufreq_ids[] = { 1034 + X86_FEATURE_MATCH(X86_FEATURE_ACPI), 1035 + X86_FEATURE_MATCH(X86_FEATURE_HW_PSTATE), 1036 + {} 1037 + }; 1038 + MODULE_DEVICE_TABLE(x86cpu, acpi_cpufreq_ids); 1039 + 1033 1040 MODULE_ALIAS("acpi");
+5
drivers/cpufreq/cpufreq-cpu0.c
··· 71 71 } 72 72 73 73 if (cpu_reg) { 74 + rcu_read_lock(); 74 75 opp = opp_find_freq_ceil(cpu_dev, &freq_Hz); 75 76 if (IS_ERR(opp)) { 77 + rcu_read_unlock(); 76 78 pr_err("failed to find OPP for %ld\n", freq_Hz); 77 79 return PTR_ERR(opp); 78 80 } 79 81 volt = opp_get_voltage(opp); 82 + rcu_read_unlock(); 80 83 tol = volt * voltage_tolerance / 100; 81 84 volt_old = regulator_get_voltage(cpu_reg); 82 85 } ··· 239 236 */ 240 237 for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) 241 238 ; 239 + rcu_read_lock(); 242 240 opp = opp_find_freq_exact(cpu_dev, 243 241 freq_table[0].frequency * 1000, true); 244 242 min_uV = opp_get_voltage(opp); 245 243 opp = opp_find_freq_exact(cpu_dev, 246 244 freq_table[i-1].frequency * 1000, true); 247 245 max_uV = opp_get_voltage(opp); 246 + rcu_read_unlock(); 248 247 ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV); 249 248 if (ret > 0) 250 249 transition_latency += ret * 1000;
+3
drivers/cpufreq/omap-cpufreq.c
··· 110 110 freq = ret; 111 111 112 112 if (mpu_reg) { 113 + rcu_read_lock(); 113 114 opp = opp_find_freq_ceil(mpu_dev, &freq); 114 115 if (IS_ERR(opp)) { 116 + rcu_read_unlock(); 115 117 dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n", 116 118 __func__, freqs.new); 117 119 return -EINVAL; 118 120 } 119 121 volt = opp_get_voltage(opp); 122 + rcu_read_unlock(); 120 123 tol = volt * OPP_TOLERANCE / 100; 121 124 volt_old = regulator_get_voltage(mpu_reg); 122 125 }
+5
drivers/devfreq/devfreq.c
··· 994 994 * @freq: The frequency given to target function 995 995 * @flags: Flags handed from devfreq framework. 996 996 * 997 + * Locking: This function must be called under rcu_read_lock(). opp is a rcu 998 + * protected pointer. The reason for the same is that the opp pointer which is 999 + * returned will remain valid for use with opp_get_{voltage, freq} only while 1000 + * under the locked area. The pointer returned must be used prior to unlocking 1001 + * with rcu_read_unlock() to maintain the integrity of the pointer. 997 1002 */ 998 1003 struct opp *devfreq_recommended_opp(struct device *dev, unsigned long *freq, 999 1004 u32 flags)
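The locking note added above pins down a calling pattern: every opp_get_*() call must happen inside the RCU read-side section, and the returned pointer is dead after rcu_read_unlock(). A userspace sketch of that pattern; all names below are no-op stubs for illustration, not the real kernel APIs:

```c
#include <assert.h>

struct opp { unsigned long freq; unsigned long volt; };

static struct opp the_opp = { 800000UL, 1100000UL };

/* No-op stand-ins for the kernel RCU primitives and OPP helpers. */
static void rcu_read_lock(void)   { }
static void rcu_read_unlock(void) { }
static struct opp *recommended_opp_stub(unsigned long *freq)
{
	(void)freq;
	return &the_opp;
}
static unsigned long opp_get_voltage_stub(struct opp *opp) { return opp->volt; }

static unsigned long pick_voltage(unsigned long *freq)
{
	struct opp *opp;
	unsigned long volt;

	rcu_read_lock();
	opp = recommended_opp_stub(freq);
	volt = opp_get_voltage_stub(opp); /* use opp only while locked */
	rcu_read_unlock();                /* opp must not be used past here */
	return volt;
}
```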
+67 -27
drivers/devfreq/exynos4_bus.c
··· 73 73 #define EX4210_LV_NUM (LV_2 + 1) 74 74 #define EX4x12_LV_NUM (LV_4 + 1) 75 75 76 + /** 77 + * struct busfreq_opp_info - opp information for bus 78 + * @rate: Frequency in hertz 79 + * @volt: Voltage in microvolts corresponding to this OPP 80 + */ 81 + struct busfreq_opp_info { 82 + unsigned long rate; 83 + unsigned long volt; 84 + }; 85 + 76 86 struct busfreq_data { 77 87 enum exynos4_busf_type type; 78 88 struct device *dev; ··· 90 80 bool disabled; 91 81 struct regulator *vdd_int; 92 82 struct regulator *vdd_mif; /* Exynos4412/4212 only */ 93 - struct opp *curr_opp; 83 + struct busfreq_opp_info curr_oppinfo; 94 84 struct exynos4_ppmu dmc[2]; 95 85 96 86 struct notifier_block pm_notifier; ··· 306 296 }; 307 297 308 298 309 - static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp) 299 + static int exynos4210_set_busclk(struct busfreq_data *data, 300 + struct busfreq_opp_info *oppi) 310 301 { 311 302 unsigned int index; 312 303 unsigned int tmp; 313 304 314 305 for (index = LV_0; index < EX4210_LV_NUM; index++) 315 - if (opp_get_freq(opp) == exynos4210_busclk_table[index].clk) 306 + if (oppi->rate == exynos4210_busclk_table[index].clk) 316 307 break; 317 308 318 309 if (index == EX4210_LV_NUM) ··· 372 361 return 0; 373 362 } 374 363 375 - static int exynos4x12_set_busclk(struct busfreq_data *data, struct opp *opp) 364 + static int exynos4x12_set_busclk(struct busfreq_data *data, 365 + struct busfreq_opp_info *oppi) 376 366 { 377 367 unsigned int index; 378 368 unsigned int tmp; 379 369 380 370 for (index = LV_0; index < EX4x12_LV_NUM; index++) 381 - if (opp_get_freq(opp) == exynos4x12_mifclk_table[index].clk) 371 + if (oppi->rate == exynos4x12_mifclk_table[index].clk) 382 372 break; 383 373 384 374 if (index == EX4x12_LV_NUM) ··· 588 576 return -EINVAL; 589 577 } 590 578 591 - static int exynos4_bus_setvolt(struct busfreq_data *data, 580 + struct busfreq_opp_info *oppi, 581 + struct busfreq_opp_info *oldoppi) 593 582 { 594 583 int err = 0, tmp; 595 - unsigned long volt = opp_get_voltage(opp); 584 + unsigned long volt = oppi->volt; 596 585 597 586 switch (data->type) { 598 587 case TYPE_BUSF_EXYNOS4210: ··· 608 595 if (err) 609 596 break; 610 597 611 - tmp = exynos4x12_get_intspec(opp_get_freq(opp)); 598 + tmp = exynos4x12_get_intspec(oppi->rate); 612 599 if (tmp < 0) { 613 600 err = tmp; 614 601 regulator_set_voltage(data->vdd_mif, 615 - opp_get_voltage(oldopp), 602 + oldoppi->volt, 616 603 MAX_SAFEVOLT); 617 604 break; 618 605 } ··· 622 609 /* Try to recover */ 623 610 if (err) 624 611 regulator_set_voltage(data->vdd_mif, 625 - opp_get_voltage(oldopp), 612 + oldoppi->volt, 626 613 MAX_SAFEVOLT); 627 614 break; 628 615 default: ··· 639 626 struct platform_device *pdev = container_of(dev, struct platform_device, 640 627 dev); 641 628 struct busfreq_data *data = platform_get_drvdata(pdev); 642 - struct opp *opp = devfreq_recommended_opp(dev, _freq, flags); 643 - unsigned long freq = opp_get_freq(opp); 644 - unsigned long old_freq = opp_get_freq(data->curr_opp); 629 + struct opp *opp; 630 + unsigned long freq; 631 + unsigned long old_freq = data->curr_oppinfo.rate; 632 + struct busfreq_opp_info new_oppinfo; 645 633 646 - if (IS_ERR(opp)) 634 + rcu_read_lock(); 635 + opp = devfreq_recommended_opp(dev, _freq, flags); 636 + if (IS_ERR(opp)) { 637 + rcu_read_unlock(); 647 638 return PTR_ERR(opp); 639 + } 640 + new_oppinfo.rate = opp_get_freq(opp); 641 + new_oppinfo.volt = opp_get_voltage(opp); 642 + rcu_read_unlock(); 643 + freq = new_oppinfo.rate; 648 644 649 645 if (old_freq == freq) 650 646 return 0; 651 647 652 - dev_dbg(dev, "targetting %lukHz %luuV\n", freq, opp_get_voltage(opp)); 648 + dev_dbg(dev, "targetting %lukHz %luuV\n", freq, new_oppinfo.volt); 653 649 654 650 mutex_lock(&data->lock); 655 651 ··· 666 644 goto out; 667 645 668 646 if (old_freq < freq) 669 - err = exynos4_bus_setvolt(data, opp, data->curr_opp); 647 + err = exynos4_bus_setvolt(data, &new_oppinfo, 648 + &data->curr_oppinfo); 670 649 if (err) 671 650 goto out; 672 651 673 652 if (old_freq != freq) { 674 653 switch (data->type) { 675 654 case TYPE_BUSF_EXYNOS4210: 676 - err = exynos4210_set_busclk(data, opp); 655 + err = exynos4210_set_busclk(data, &new_oppinfo); 677 656 break; 678 657 case TYPE_BUSF_EXYNOS4x12: 679 - err = exynos4x12_set_busclk(data, opp); 658 + err = exynos4x12_set_busclk(data, &new_oppinfo); 680 659 break; 681 660 default: 682 661 err = -EINVAL; ··· 687 664 goto out; 688 665 689 666 if (old_freq > freq) 690 - err = exynos4_bus_setvolt(data, opp, data->curr_opp); 667 + err = exynos4_bus_setvolt(data, &new_oppinfo, 668 + &data->curr_oppinfo); 691 669 if (err) 692 670 goto out; 693 671 694 - data->curr_opp = opp; 672 + data->curr_oppinfo = new_oppinfo; 695 673 out: 696 674 mutex_unlock(&data->lock); 697 675 return err; ··· 726 702 727 703 exynos4_read_ppmu(data); 728 704 busier_dmc = exynos4_get_busier_dmc(data); 729 - stat->current_frequency = opp_get_freq(data->curr_opp); 705 + stat->current_frequency = data->curr_oppinfo.rate; 730 706 731 707 if (busier_dmc) 732 708 addr = S5P_VA_DMC1; ··· 957 933 struct busfreq_data *data = container_of(this, struct busfreq_data, 958 934 pm_notifier); 959 935 struct opp *opp; 936 + struct busfreq_opp_info new_oppinfo; 960 937 unsigned long maxfreq = ULONG_MAX; 961 938 int err = 0; 962 939 ··· 968 943 969 944 data->disabled = true; 970 945 946 + rcu_read_lock(); 971 947 opp = opp_find_freq_floor(data->dev, &maxfreq); 948 + if (IS_ERR(opp)) { 949 + rcu_read_unlock(); 950 + dev_err(data->dev, "%s: unable to find a min freq\n", 951 + __func__); 952 + return PTR_ERR(opp); 953 + } 954 + new_oppinfo.rate = opp_get_freq(opp); 955 + new_oppinfo.volt = opp_get_voltage(opp); 956 + rcu_read_unlock(); 972 957 973 - err = exynos4_bus_setvolt(data, opp, data->curr_opp); 958 + err = exynos4_bus_setvolt(data, &new_oppinfo,
959 + &data->curr_oppinfo); 974 960 if (err) 975 961 goto unlock; 976 962 977 963 switch (data->type) { 978 964 case TYPE_BUSF_EXYNOS4210: 979 - err = exynos4210_set_busclk(data, opp); 965 + err = exynos4210_set_busclk(data, &new_oppinfo); 980 966 break; 981 967 case TYPE_BUSF_EXYNOS4x12: 982 - err = exynos4x12_set_busclk(data, opp); 968 + err = exynos4x12_set_busclk(data, &new_oppinfo); 983 969 break; 984 970 default: 985 971 err = -EINVAL; ··· 998 962 if (err) 999 963 goto unlock; 1000 964 1001 - data->curr_opp = opp; 965 + data->curr_oppinfo = new_oppinfo; 1002 966 unlock: 1003 967 mutex_unlock(&data->lock); 1004 968 if (err) ··· 1063 1027 } 1064 1028 } 1065 1029 1030 + rcu_read_lock(); 1066 1031 opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq); 1067 1032 if (IS_ERR(opp)) { 1033 + rcu_read_unlock(); 1068 1034 dev_err(dev, "Invalid initial frequency %lu kHz.\n", 1069 1035 exynos4_devfreq_profile.initial_freq); 1070 1036 return PTR_ERR(opp); 1071 1037 } 1072 - data->curr_opp = opp; 1038 + data->curr_oppinfo.rate = opp_get_freq(opp); 1039 + data->curr_oppinfo.volt = opp_get_voltage(opp); 1040 + rcu_read_unlock(); 1073 1041 1074 1042 platform_set_drvdata(pdev, data); 1075 1043
+2 -3
drivers/dma/imx-dma.c
··· 684 684 break; 685 685 } 686 686 687 - imxdmac->hw_chaining = 1; 688 - if (!imxdma_hw_chain(imxdmac)) 689 - return -EINVAL; 687 + imxdmac->hw_chaining = 0; 688 + 690 689 imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) | 691 690 ((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) | 692 691 CCR_REN;
+1 -1
drivers/dma/ioat/dma_v3.c
··· 951 951 goto free_resources; 952 952 } 953 953 } 954 - dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_TO_DEVICE); 954 + dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE); 955 955 956 956 /* skip validate if the capability is not present */ 957 957 if (!dma_has_cap(DMA_XOR_VAL, dma_chan->device->cap_mask))
+6 -2
drivers/dma/tegra20-apb-dma.c
··· 266 266 if (async_tx_test_ack(&dma_desc->txd)) { 267 267 list_del(&dma_desc->node); 268 268 spin_unlock_irqrestore(&tdc->lock, flags); 269 + dma_desc->txd.flags = 0; 269 270 return dma_desc; 270 271 } 271 272 } ··· 1051 1050 TEGRA_APBDMA_AHBSEQ_WRAP_SHIFT; 1052 1051 ahb_seq |= TEGRA_APBDMA_AHBSEQ_BUS_WIDTH_32; 1053 1052 1054 - csr |= TEGRA_APBDMA_CSR_FLOW | TEGRA_APBDMA_CSR_IE_EOC; 1053 + csr |= TEGRA_APBDMA_CSR_FLOW; 1054 + if (flags & DMA_PREP_INTERRUPT) 1055 + csr |= TEGRA_APBDMA_CSR_IE_EOC; 1055 1056 csr |= tdc->dma_sconfig.slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT; 1056 1057 1057 1058 apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1; ··· 1098 1095 mem += len; 1099 1096 } 1100 1097 sg_req->last_sg = true; 1101 - dma_desc->txd.flags = 0; 1098 + if (flags & DMA_CTRL_ACK) 1099 + dma_desc->txd.flags = DMA_CTRL_ACK; 1102 1100 1103 1101 /* 1104 1102 * Make sure that mode should not be conflicting with currently
+3 -3
drivers/edac/edac_mc.c
··· 340 340 /* 341 341 * Alocate and fill the csrow/channels structs 342 342 */ 343 - mci->csrows = kcalloc(sizeof(*mci->csrows), tot_csrows, GFP_KERNEL); 343 + mci->csrows = kcalloc(tot_csrows, sizeof(*mci->csrows), GFP_KERNEL); 344 344 if (!mci->csrows) 345 345 goto error; 346 346 for (row = 0; row < tot_csrows; row++) { ··· 351 351 csr->csrow_idx = row; 352 352 csr->mci = mci; 353 353 csr->nr_channels = tot_channels; 354 - csr->channels = kcalloc(sizeof(*csr->channels), tot_channels, 354 + csr->channels = kcalloc(tot_channels, sizeof(*csr->channels), 355 355 GFP_KERNEL); 356 356 if (!csr->channels) 357 357 goto error; ··· 369 369 /* 370 370 * Allocate and fill the dimm structs 371 371 */ 372 - mci->dimms = kcalloc(sizeof(*mci->dimms), tot_dimms, GFP_KERNEL); 372 + mci->dimms = kcalloc(tot_dimms, sizeof(*mci->dimms), GFP_KERNEL); 373 373 if (!mci->dimms) 374 374 goto error; 375 375
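These hunks fix a swapped argument order: kcalloc(), like userspace calloc(), takes the element count first and the element size second, returning zeroed memory. A minimal userspace analogue of the corrected call shape (hypothetical struct, not the edac types):

```c
#include <assert.h>
#include <stdlib.h>

struct csrow { int idx; };

/* calloc(nmemb, size): count first, size second, the order the fix
 * restores for kcalloc. The result is zero-initialized. */
static struct csrow *alloc_csrows(size_t tot_csrows)
{
	return calloc(tot_csrows, sizeof(struct csrow));
}
```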
+1 -1
drivers/edac/edac_pci_sysfs.c
··· 256 256 struct edac_pci_dev_attribute *edac_pci_dev; 257 257 edac_pci_dev = (struct edac_pci_dev_attribute *)attr; 258 258 259 - if (edac_pci_dev->show) 259 + if (edac_pci_dev->store) 260 260 return edac_pci_dev->store(edac_pci_dev->value, buffer, count); 261 261 return -EIO; 262 262 }
+1 -1
drivers/firmware/dmi_scan.c
··· 471 471 char __iomem *p, *q; 472 472 int rc; 473 473 474 - if (efi_enabled) { 474 + if (efi_enabled(EFI_CONFIG_TABLES)) { 475 475 if (efi.smbios == EFI_INVALID_TABLE_ADDR) 476 476 goto error; 477 477
+5 -4
drivers/firmware/efivars.c
··· 674 674 err = -EACCES; 675 675 break; 676 676 case EFI_NOT_FOUND: 677 - err = -ENOENT; 677 + err = -EIO; 678 678 break; 679 679 default: 680 680 err = -EINVAL; ··· 793 793 spin_unlock(&efivars->lock); 794 794 efivar_unregister(var); 795 795 drop_nlink(inode); 796 + d_delete(file->f_dentry); 796 797 dput(file->f_dentry); 797 798 798 799 } else { ··· 995 994 list_del(&var->list); 996 995 spin_unlock(&efivars->lock); 997 996 efivar_unregister(var); 998 - drop_nlink(dir); 997 + drop_nlink(dentry->d_inode); 999 998 dput(dentry); 1000 999 return 0; 1001 1000 } ··· 1783 1782 printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION, 1784 1783 EFIVARS_DATE); 1785 1784 1786 - if (!efi_enabled) 1785 + if (!efi_enabled(EFI_RUNTIME_SERVICES)) 1787 1786 return 0; 1788 1787 1789 1788 /* For now we'll register the efi directory at /sys/firmware/efi */ ··· 1823 1822 static void __exit 1824 1823 efivars_exit(void) 1825 1824 { 1826 - if (efi_enabled) { 1825 + if (efi_enabled(EFI_RUNTIME_SERVICES)) { 1827 1826 unregister_efivars(&__efivars); 1828 1827 kobject_put(efi_kobj); 1829 1828 }
+1 -1
drivers/firmware/iscsi_ibft_find.c
··· 99 99 /* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will 100 100 * only use ACPI for this */ 101 101 102 - if (!efi_enabled) 102 + if (!efi_enabled(EFI_BOOT)) 103 103 find_ibft_in_mem(); 104 104 105 105 if (ibft_addr) {
-6
drivers/gpio/gpio-mvebu.c
··· 547 547 mvchip->membase = devm_request_and_ioremap(&pdev->dev, res); 548 548 if (! mvchip->membase) { 549 549 dev_err(&pdev->dev, "Cannot ioremap\n"); 550 - kfree(mvchip->chip.label); 551 550 return -ENOMEM; 552 551 } 553 552 ··· 556 557 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 557 558 if (! res) { 558 559 dev_err(&pdev->dev, "Cannot get memory resource\n"); 559 - kfree(mvchip->chip.label); 560 560 return -ENODEV; 561 561 } 562 562 563 563 mvchip->percpu_membase = devm_request_and_ioremap(&pdev->dev, res); 564 564 if (! mvchip->percpu_membase) { 565 565 dev_err(&pdev->dev, "Cannot ioremap\n"); 566 - kfree(mvchip->chip.label); 567 566 return -ENOMEM; 568 567 } 569 568 } ··· 622 625 mvchip->irqbase = irq_alloc_descs(-1, 0, ngpios, -1); 623 626 if (mvchip->irqbase < 0) { 624 627 dev_err(&pdev->dev, "no irqs\n"); 625 - kfree(mvchip->chip.label); 626 628 return -ENOMEM; 627 629 } 628 630 ··· 629 633 mvchip->membase, handle_level_irq); 630 634 if (! gc) { 631 635 dev_err(&pdev->dev, "Cannot allocate generic irq_chip\n"); 632 - kfree(mvchip->chip.label); 633 636 return -ENOMEM; 634 637 } 635 638 ··· 663 668 irq_remove_generic_chip(gc, IRQ_MSK(ngpios), IRQ_NOREQUEST, 664 669 IRQ_LEVEL | IRQ_NOPROBE); 665 670 kfree(gc); 666 - kfree(mvchip->chip.label); 667 671 return -ENODEV; 668 672 } 669 673
+7 -7
drivers/gpio/gpio-samsung.c
··· 32 32 33 33 #include <mach/hardware.h> 34 34 #include <mach/map.h> 35 - #include <mach/regs-clock.h> 36 35 #include <mach/regs-gpio.h> 37 36 38 37 #include <plat/cpu.h> ··· 445 446 }; 446 447 #endif 447 448 448 - #if defined(CONFIG_ARCH_EXYNOS4) || defined(CONFIG_ARCH_EXYNOS5) 449 + #if defined(CONFIG_ARCH_EXYNOS4) || defined(CONFIG_SOC_EXYNOS5250) 449 450 static struct samsung_gpio_cfg exynos_gpio_cfg = { 450 451 .set_pull = exynos_gpio_setpull, 451 452 .get_pull = exynos_gpio_getpull, ··· 2445 2446 }; 2446 2447 #endif 2447 2448 2448 - #ifdef CONFIG_ARCH_EXYNOS5 2449 + #ifdef CONFIG_SOC_EXYNOS5250 2449 2450 static struct samsung_gpio_chip exynos5_gpios_1[] = { 2450 2451 { 2451 2452 .chip = { ··· 2613 2614 }; 2614 2615 #endif 2615 2616 2616 - #ifdef CONFIG_ARCH_EXYNOS5 2617 + #ifdef CONFIG_SOC_EXYNOS5250 2617 2618 static struct samsung_gpio_chip exynos5_gpios_2[] = { 2618 2619 { 2619 2620 .chip = { ··· 2674 2675 }; 2675 2676 #endif 2676 2677 2677 - #ifdef CONFIG_ARCH_EXYNOS5 2678 + #ifdef CONFIG_SOC_EXYNOS5250 2678 2679 static struct samsung_gpio_chip exynos5_gpios_3[] = { 2679 2680 { 2680 2681 .chip = { ··· 2710 2711 }; 2711 2712 #endif 2712 2713 2713 - #ifdef CONFIG_ARCH_EXYNOS5 2714 + #ifdef CONFIG_SOC_EXYNOS5250 2714 2715 static struct samsung_gpio_chip exynos5_gpios_4[] = { 2715 2716 { 2716 2717 .chip = { ··· 3009 3010 int i, nr_chips; 3010 3011 int group = 0; 3011 3012 3012 - #ifdef CONFIG_PINCTRL_SAMSUNG 3013 + #if defined(CONFIG_PINCTRL_EXYNOS) || defined(CONFIG_PINCTRL_EXYNOS5440) 3013 3014 /* 3014 3015 * This gpio driver includes support for device tree support and there 3015 3016 * are platforms using it. In order to maintain compatibility with those ··· 3025 3026 static const struct of_device_id exynos_pinctrl_ids[] = { 3026 3027 { .compatible = "samsung,pinctrl-exynos4210", }, 3027 3028 { .compatible = "samsung,pinctrl-exynos4x12", }, 3029 + { .compatible = "samsung,pinctrl-exynos5440", }, 3028 3030 }; 3029 3031 for_each_matching_node(pctrl_np, exynos_pinctrl_ids) 3030 3032 if (pctrl_np && of_device_is_available(pctrl_np))
+2 -2
drivers/gpu/drm/exynos/Kconfig
··· 24 24 25 25 config DRM_EXYNOS_FIMD 26 26 bool "Exynos DRM FIMD" 27 - depends on DRM_EXYNOS && !FB_S3C 27 + depends on DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM 28 28 help 29 29 Choose this option if you want to use Exynos FIMD for DRM. 30 30 ··· 48 48 49 49 config DRM_EXYNOS_IPP 50 50 bool "Exynos DRM IPP" 51 - depends on DRM_EXYNOS 51 + depends on DRM_EXYNOS && !ARCH_MULTIPLATFORM 52 52 help 53 53 Choose this option if you want to use IPP feature for DRM. 54 54
+15 -18
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 18 18 #include "exynos_drm_drv.h" 19 19 #include "exynos_drm_encoder.h" 20 20 21 - #define MAX_EDID 256 22 21 #define to_exynos_connector(x) container_of(x, struct exynos_drm_connector,\ 23 22 drm_connector) 24 23 ··· 95 96 to_exynos_connector(connector); 96 97 struct exynos_drm_manager *manager = exynos_connector->manager; 97 98 struct exynos_drm_display_ops *display_ops = manager->display_ops; 98 - unsigned int count; 99 + struct edid *edid = NULL; 100 + unsigned int count = 0; 101 + int ret; 99 102 100 103 DRM_DEBUG_KMS("%s\n", __FILE__); 101 104 ··· 115 114 * because lcd panel has only one mode. 116 115 */ 117 116 if (display_ops->get_edid) { 118 - int ret; 119 - void *edid; 120 - 121 - edid = kzalloc(MAX_EDID, GFP_KERNEL); 122 - if (!edid) { 123 - DRM_ERROR("failed to allocate edid\n"); 124 - return 0; 117 + edid = display_ops->get_edid(manager->dev, connector); 118 + if (IS_ERR_OR_NULL(edid)) { 119 + ret = PTR_ERR(edid); 120 + edid = NULL; 121 + DRM_ERROR("Panel operation get_edid failed %d\n", ret); 122 + goto out; 125 123 } 126 124 127 - ret = display_ops->get_edid(manager->dev, connector, 128 - edid, MAX_EDID); 129 - if (ret < 0) { 130 - DRM_ERROR("failed to get edid data.\n"); 131 - kfree(edid); 132 - edid = NULL; 133 - return 0; 125 + count = drm_add_edid_modes(connector, edid); 126 + if (count < 0) { 127 + DRM_ERROR("Add edid modes failed %d\n", count); 128 + goto out; 134 129 } 135 130 136 131 drm_mode_connector_update_edid_property(connector, edid); 137 - count = drm_add_edid_modes(connector, edid); 138 - kfree(edid); 139 132 } else { 140 133 struct exynos_drm_panel_info *panel; 141 134 struct drm_display_mode *mode = drm_mode_create(connector->dev); ··· 156 161 count = 1; 157 162 } 158 163 164 + out: 165 + kfree(edid); 159 166 return count; 160 167 } 161 168
+11 -13
drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
··· 19 19 struct exynos_drm_dmabuf_attachment { 20 20 struct sg_table sgt; 21 21 enum dma_data_direction dir; 22 + bool is_mapped; 22 23 }; 23 24 24 25 static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf, ··· 73 72 74 73 DRM_DEBUG_PRIME("%s\n", __FILE__); 75 74 76 - if (WARN_ON(dir == DMA_NONE)) 77 - return ERR_PTR(-EINVAL); 78 - 79 75 /* just return current sgt if already requested. */ 80 - if (exynos_attach->dir == dir) 76 + if (exynos_attach->dir == dir && exynos_attach->is_mapped) 81 77 return &exynos_attach->sgt; 82 - 83 - /* reattaching is not allowed. */ 84 - if (WARN_ON(exynos_attach->dir != DMA_NONE)) 85 - return ERR_PTR(-EBUSY); 86 78 87 79 buf = gem_obj->buffer; 88 80 if (!buf) { ··· 101 107 wr = sg_next(wr); 102 108 } 103 109 104 - nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir); 105 - if (!nents) { 106 - DRM_ERROR("failed to map sgl with iommu.\n"); 107 - sgt = ERR_PTR(-EIO); 108 - goto err_unlock; 110 + if (dir != DMA_NONE) { 111 + nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir); 112 + if (!nents) { 113 + DRM_ERROR("failed to map sgl with iommu.\n"); 114 + sg_free_table(sgt); 115 + sgt = ERR_PTR(-EIO); 116 + goto err_unlock; 117 + } 109 118 } 110 119 120 + exynos_attach->is_mapped = true; 111 121 exynos_attach->dir = dir; 112 122 attach->priv = exynos_attach; 113 123
+2 -2
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 148 148 struct exynos_drm_display_ops { 149 149 enum exynos_drm_output_type type; 150 150 bool (*is_connected)(struct device *dev); 151 - int (*get_edid)(struct device *dev, struct drm_connector *connector, 152 - u8 *edid, int len); 151 + struct edid *(*get_edid)(struct device *dev, 152 + struct drm_connector *connector); 153 153 void *(*get_panel)(struct device *dev); 154 154 int (*check_timing)(struct device *dev, void *timing); 155 155 int (*power_on)(struct device *dev, int mode);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 324 324 g2d_userptr = NULL; 325 325 } 326 326 327 - dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev, 327 + static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev, 328 328 unsigned long userptr, 329 329 unsigned long size, 330 330 struct drm_file *filp,
+4 -5
drivers/gpu/drm/exynos/exynos_drm_hdmi.c
··· 108 108 return false; 109 109 } 110 110 111 - static int drm_hdmi_get_edid(struct device *dev, 112 - struct drm_connector *connector, u8 *edid, int len) 111 + static struct edid *drm_hdmi_get_edid(struct device *dev, 112 + struct drm_connector *connector) 113 113 { 114 114 struct drm_hdmi_context *ctx = to_context(dev); 115 115 116 116 DRM_DEBUG_KMS("%s\n", __FILE__); 117 117 118 118 if (hdmi_ops && hdmi_ops->get_edid) 119 - return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector, edid, 120 - len); 119 + return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector); 121 120 122 - return 0; 121 + return NULL; 123 122 } 124 123 125 124 static int drm_hdmi_check_timing(struct device *dev, void *timing)
+2 -2
drivers/gpu/drm/exynos/exynos_drm_hdmi.h
··· 30 30 struct exynos_hdmi_ops { 31 31 /* display */ 32 32 bool (*is_connected)(void *ctx); 33 - int (*get_edid)(void *ctx, struct drm_connector *connector, 34 - u8 *edid, int len); 33 + struct edid *(*get_edid)(void *ctx, 34 + struct drm_connector *connector); 35 35 int (*check_timing)(void *ctx, void *timing); 36 36 int (*power_on)(void *ctx, int mode); 37 37
+1 -1
drivers/gpu/drm/exynos/exynos_drm_ipp.c
··· 869 869 } 870 870 } 871 871 872 - void ipp_handle_cmd_work(struct device *dev, 872 + static void ipp_handle_cmd_work(struct device *dev, 873 873 struct exynos_drm_ippdrv *ippdrv, 874 874 struct drm_exynos_ipp_cmd_work *cmd_work, 875 875 struct drm_exynos_ipp_cmd_node *c_node)
+2 -2
drivers/gpu/drm/exynos/exynos_drm_rotator.c
··· 734 734 return 0; 735 735 } 736 736 737 - struct rot_limit_table rot_limit_tbl = { 737 + static struct rot_limit_table rot_limit_tbl = { 738 738 .ycbcr420_2p = { 739 739 .min_w = 32, 740 740 .min_h = 32, ··· 751 751 }, 752 752 }; 753 753 754 - struct platform_device_id rotator_driver_ids[] = { 754 + static struct platform_device_id rotator_driver_ids[] = { 755 755 { 756 756 .name = "exynos-rot", 757 757 .driver_data = (unsigned long)&rot_limit_tbl,
+16 -10
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 98 98 return ctx->connected ? true : false; 99 99 } 100 100 101 - static int vidi_get_edid(struct device *dev, struct drm_connector *connector, 102 - u8 *edid, int len) 101 + static struct edid *vidi_get_edid(struct device *dev, 102 + struct drm_connector *connector) 103 103 { 104 104 struct vidi_context *ctx = get_vidi_context(dev); 105 + struct edid *edid; 106 + int edid_len; 105 107 106 108 DRM_DEBUG_KMS("%s\n", __FILE__); 107 109 ··· 113 111 */ 114 112 if (!ctx->raw_edid) { 115 113 DRM_DEBUG_KMS("raw_edid is null.\n"); 116 - return -EFAULT; 114 + return ERR_PTR(-EFAULT); 117 115 } 118 116 119 - memcpy(edid, ctx->raw_edid, min((1 + ctx->raw_edid->extensions) 120 - * EDID_LENGTH, len)); 117 + edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH; 118 + edid = kzalloc(edid_len, GFP_KERNEL); 119 + if (!edid) { 120 + DRM_DEBUG_KMS("failed to allocate edid\n"); 121 + return ERR_PTR(-ENOMEM); 122 + } 121 123 122 - return 0; 124 + memcpy(edid, ctx->raw_edid, edid_len); 125 + return edid; 123 126 } 124 127 125 128 static void *vidi_get_panel(struct device *dev) ··· 521 514 struct exynos_drm_manager *manager; 522 515 struct exynos_drm_display_ops *display_ops; 523 516 struct drm_exynos_vidi_connection *vidi = data; 524 - struct edid *raw_edid; 525 517 int edid_len; 526 518 527 519 DRM_DEBUG_KMS("%s\n", __FILE__); ··· 557 551 } 558 552 559 553 if (vidi->connection) { 560 - if (!vidi->edid) { 561 - DRM_DEBUG_KMS("edid data is null.\n"); 554 + struct edid *raw_edid = (struct edid *)(uint32_t)vidi->edid; 555 + if (!drm_edid_is_valid(raw_edid)) { 556 + DRM_DEBUG_KMS("edid data is invalid.\n"); 562 557 return -EINVAL; 563 558 } 564 - raw_edid = (struct edid *)(uint32_t)vidi->edid; 565 559 edid_len = (1 + raw_edid->extensions) * EDID_LENGTH; 566 560 ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL); 567 561 if (!ctx->raw_edid) {
+38 -83
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 34 34 #include <linux/regulator/consumer.h> 35 35 #include <linux/io.h> 36 36 #include <linux/of_gpio.h> 37 - #include <plat/gpio-cfg.h> 38 37 39 38 #include <drm/exynos_drm.h> 40 39 ··· 97 98 98 99 void __iomem *regs; 99 100 void *parent_ctx; 100 - int external_irq; 101 - int internal_irq; 101 + int irq; 102 102 103 103 struct i2c_client *ddc_port; 104 104 struct i2c_client *hdmiphy_port; ··· 1389 1391 return hdata->hpd; 1390 1392 } 1391 1393 1392 - static int hdmi_get_edid(void *ctx, struct drm_connector *connector, 1393 - u8 *edid, int len) 1394 + static struct edid *hdmi_get_edid(void *ctx, struct drm_connector *connector) 1394 1395 { 1395 1396 struct edid *raw_edid; 1396 1397 struct hdmi_context *hdata = ctx; ··· 1397 1400 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 1398 1401 1399 1402 if (!hdata->ddc_port) 1400 - return -ENODEV; 1403 + return ERR_PTR(-ENODEV); 1401 1404 1402 1405 raw_edid = drm_get_edid(connector, hdata->ddc_port->adapter); 1403 - if (raw_edid) { 1404 - hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid); 1405 - memcpy(edid, raw_edid, min((1 + raw_edid->extensions) 1406 - * EDID_LENGTH, len)); 1407 - DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n", 1408 - (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"), 1409 - raw_edid->width_cm, raw_edid->height_cm); 1410 - kfree(raw_edid); 1411 - } else { 1412 - return -ENODEV; 1413 - } 1406 + if (!raw_edid) 1407 + return ERR_PTR(-ENODEV); 1414 1408 1415 - return 0; 1409 + hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid); 1410 + DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n", 1411 + (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"), 1412 + raw_edid->width_cm, raw_edid->height_cm); 1413 + 1414 + return raw_edid; 1416 1415 } 1417 1416 1418 1417 static int hdmi_v13_check_timing(struct fb_videomode *check_timing) ··· 1645 1652 1646 1653 /* resetting HDMI core */ 1647 1654 hdmi_reg_writemask(hdata, reg, 0, HDMI_CORE_SW_RSTOUT); 1648 - mdelay(10); 1655 + usleep_range(10000, 12000); 1649 1656 hdmi_reg_writemask(hdata, reg, ~0, HDMI_CORE_SW_RSTOUT); 1650 - mdelay(10); 1657 + usleep_range(10000, 12000); 1651 1658 } 1652 1659 1653 1660 static void hdmi_conf_init(struct hdmi_context *hdata) 1654 1661 { 1655 1662 struct hdmi_infoframe infoframe; 1656 1663 1657 - /* disable HPD interrupts */ 1664 + /* disable HPD interrupts from HDMI IP block, use GPIO instead */ 1658 1665 hdmi_reg_writemask(hdata, HDMI_INTC_CON, 0, HDMI_INTC_EN_GLOBAL | 1659 1666 HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG); 1660 1667 ··· 1772 1779 u32 val = hdmi_reg_read(hdata, HDMI_V13_PHY_STATUS); 1773 1780 if (val & HDMI_PHY_STATUS_READY) 1774 1781 break; 1775 - mdelay(1); 1782 + usleep_range(1000, 2000); 1776 1783 } 1777 1784 /* steady state not achieved */ 1778 1785 if (tries == 0) { ··· 1939 1946 u32 val = hdmi_reg_read(hdata, HDMI_PHY_STATUS_0); 1940 1947 if (val & HDMI_PHY_STATUS_READY) 1941 1948 break; 1942 - mdelay(1); 1949 + usleep_range(1000, 2000); 1943 1950 } 1944 1951 /* steady state not achieved */ 1945 1952 if (tries == 0) { ··· 1991 1998 1992 1999 /* reset hdmiphy */ 1993 2000 hdmi_reg_writemask(hdata, reg, ~0, HDMI_PHY_SW_RSTOUT); 1994 - mdelay(10); 2001 + usleep_range(10000, 12000); 1995 2002 hdmi_reg_writemask(hdata, reg, 0, HDMI_PHY_SW_RSTOUT); 1996 - mdelay(10); 2003 + usleep_range(10000, 12000); 1997 2004 } 1998 2005 1999 2006 static void hdmiphy_poweron(struct hdmi_context *hdata) ··· 2041 2048 return; 2042 2049 } 2043 2050 2044 - mdelay(10); 2051 + usleep_range(10000, 12000); 2045 2052 2046 2053 /* operation mode */ 2047 2054 operation[0] = 0x1f; ··· 2163 2170 2164 2171 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 2165 2172 2173 + mutex_lock(&hdata->hdmi_mutex); 2174 + if (!hdata->powered) { 2175 + mutex_unlock(&hdata->hdmi_mutex); 2176 + return; 2177 + } 2178 + mutex_unlock(&hdata->hdmi_mutex); 2179 + 2166 2180 hdmi_conf_apply(hdata); 2167 2181 } 2168 2182 ··· 2265 2265 .dpms = hdmi_dpms, 2266 2266 }; 2267 2267 2268 - static irqreturn_t hdmi_external_irq_thread(int irq, void *arg) 2268 + static irqreturn_t hdmi_irq_thread(int irq, void *arg) 2269 2269 { 2270 2270 struct exynos_drm_hdmi_context *ctx = arg; 2271 2271 struct hdmi_context *hdata = ctx->ctx; ··· 2273 2273 mutex_lock(&hdata->hdmi_mutex); 2274 2274 hdata->hpd = gpio_get_value(hdata->hpd_gpio); 2275 2275 mutex_unlock(&hdata->hdmi_mutex); 2276 - 2277 - if (ctx->drm_dev) 2278 - drm_helper_hpd_irq_event(ctx->drm_dev); 2279 - 2280 - return IRQ_HANDLED; 2281 - } 2282 - 2283 - static irqreturn_t hdmi_internal_irq_thread(int irq, void *arg) 2284 - { 2285 - struct exynos_drm_hdmi_context *ctx = arg; 2286 - struct hdmi_context *hdata = ctx->ctx; 2287 - u32 intc_flag; 2288 - 2289 - intc_flag = hdmi_reg_read(hdata, HDMI_INTC_FLAG); 2290 - /* clearing flags for HPD plug/unplug */ 2291 - if (intc_flag & HDMI_INTC_FLAG_HPD_UNPLUG) { 2292 - DRM_DEBUG_KMS("unplugged\n"); 2293 - hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0, 2294 - HDMI_INTC_FLAG_HPD_UNPLUG); 2295 - } 2296 - if (intc_flag & HDMI_INTC_FLAG_HPD_PLUG) { 2297 - DRM_DEBUG_KMS("plugged\n"); 2298 - hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0, 2299 - HDMI_INTC_FLAG_HPD_PLUG); 2300 - } 2301 2276 2302 2277 if (ctx->drm_dev) 2303 2278 drm_helper_hpd_irq_event(ctx->drm_dev); ··· 2530 2555 2531 2556 hdata->hdmiphy_port = hdmi_hdmiphy; 2532 2557 2533 - hdata->external_irq = gpio_to_irq(hdata->hpd_gpio); 2534 - if (hdata->external_irq < 0) { 2535 - DRM_ERROR("failed to get GPIO external irq\n"); 2536 - ret = hdata->external_irq; 2537 - goto err_hdmiphy; 2538 - } 2539 - 2540 - hdata->internal_irq = platform_get_irq(pdev, 0); 2541 - if (hdata->internal_irq < 0) { 2542 - DRM_ERROR("failed to get platform internal irq\n"); 2543 - ret = hdata->internal_irq; 2558 + hdata->irq = gpio_to_irq(hdata->hpd_gpio); 2559 + if (hdata->irq < 0) { 2560 + DRM_ERROR("failed to get GPIO irq\n"); 2561 + ret = hdata->irq; 2544 2562 goto err_hdmiphy; 2545 2563 } 2546 2564 2547 2565 hdata->hpd = gpio_get_value(hdata->hpd_gpio); 2548 2566 2549 - ret = request_threaded_irq(hdata->external_irq, NULL, 2550 - hdmi_external_irq_thread, IRQF_TRIGGER_RISING | 2567 + ret = request_threaded_irq(hdata->irq, NULL, 2568 + hdmi_irq_thread, IRQF_TRIGGER_RISING | 2551 2569 IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 2552 - "hdmi_external", drm_hdmi_ctx); 2570 + "hdmi", drm_hdmi_ctx); 2553 2571 if (ret) { 2554 - DRM_ERROR("failed to register hdmi external interrupt\n"); 2572 + DRM_ERROR("failed to register hdmi interrupt\n"); 2555 2573 goto err_hdmiphy; 2556 - } 2557 - 2558 - ret = request_threaded_irq(hdata->internal_irq, NULL, 2559 - hdmi_internal_irq_thread, IRQF_ONESHOT, 2560 - "hdmi_internal", drm_hdmi_ctx); 2561 - if (ret) { 2562 - DRM_ERROR("failed to register hdmi internal interrupt\n"); 2563 - goto err_free_irq; 2564 2574 } 2565 2575 2566 2576 /* Attach HDMI Driver to common hdmi. */ ··· 2558 2598 2559 2599 return 0; 2560 2600 2561 - err_free_irq: 2562 - free_irq(hdata->external_irq, drm_hdmi_ctx); 2563 2601 err_hdmiphy: 2564 2602 i2c_del_driver(&hdmiphy_driver); 2565 2603 err_ddc: ··· 2575 2617 2576 2618 pm_runtime_disable(dev); 2577 2619 2578 - free_irq(hdata->internal_irq, hdata); 2579 - free_irq(hdata->external_irq, hdata); 2620 + free_irq(hdata->irq, hdata); 2580 2621 2581 2622 2582 2623 /* hdmiphy i2c driver */ ··· 2594 2637 2595 2638 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 2596 2639 2597 - disable_irq(hdata->internal_irq); 2598 - disable_irq(hdata->external_irq); 2640 + disable_irq(hdata->irq); 2599 2641 2600 2642 hdata->hpd = false; 2601 2643 if (ctx->drm_dev) ··· 2619 2663 2620 2664 hdata->hpd = gpio_get_value(hdata->hpd_gpio); 2621 2665 2622 - enable_irq(hdata->external_irq); 2623 - enable_irq(hdata->internal_irq); 2666 + enable_irq(hdata->irq); 2624 2667 2625 2668 if (!pm_runtime_suspended(dev)) { 2626 2669 DRM_DEBUG_KMS("%s : Already resumed\n", __func__);
+8 -1
drivers/gpu/drm/exynos/exynos_mixer.c
··· 600 600 /* waiting until VP_SRESET_PROCESSING is 0 */ 601 601 if (~vp_reg_read(res, VP_SRESET) & VP_SRESET_PROCESSING) 602 602 break; 603 - mdelay(10); 603 + usleep_range(10000, 12000); 604 604 } 605 605 WARN(tries == 0, "failed to reset Video Processor\n"); 606 606 } ··· 775 775 struct mixer_context *mixer_ctx = ctx; 776 776 777 777 DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win); 778 + 779 + mutex_lock(&mixer_ctx->mixer_mutex); 780 + if (!mixer_ctx->powered) { 781 + mutex_unlock(&mixer_ctx->mixer_mutex); 782 + return; 783 + } 784 + mutex_unlock(&mixer_ctx->mixer_mutex); 778 785 779 786 if (win > 1 && mixer_ctx->vp_enabled) 780 787 vp_video_buffer(mixer_ctx, win);
+5
drivers/gpu/drm/i915/i915_debugfs.c
··· 30 30 #include <linux/debugfs.h> 31 31 #include <linux/slab.h> 32 32 #include <linux/export.h> 33 + #include <generated/utsrelease.h> 33 34 #include <drm/drmP.h> 34 35 #include "intel_drv.h" 35 36 #include "intel_ringbuffer.h" ··· 645 644 seq_printf(m, "%s command stream:\n", ring_str(ring)); 646 645 seq_printf(m, " HEAD: 0x%08x\n", error->head[ring]); 647 646 seq_printf(m, " TAIL: 0x%08x\n", error->tail[ring]); 647 + seq_printf(m, " CTL: 0x%08x\n", error->ctl[ring]); 648 648 seq_printf(m, " ACTHD: 0x%08x\n", error->acthd[ring]); 649 649 seq_printf(m, " IPEIR: 0x%08x\n", error->ipeir[ring]); 650 650 seq_printf(m, " IPEHR: 0x%08x\n", error->ipehr[ring]); ··· 694 692 695 693 seq_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec, 696 694 error->time.tv_usec); 695 + seq_printf(m, "Kernel: " UTS_RELEASE); 697 696 seq_printf(m, "PCI ID: 0x%04x\n", dev->pci_device); 698 697 seq_printf(m, "EIR: 0x%08x\n", error->eir); 699 698 seq_printf(m, "IER: 0x%08x\n", error->ier); 700 699 seq_printf(m, "PGTBL_ER: 0x%08x\n", error->pgtbl_er); 700 + seq_printf(m, "FORCEWAKE: 0x%08x\n", error->forcewake); 701 + seq_printf(m, "DERRMR: 0x%08x\n", error->derrmr); 701 702 seq_printf(m, "CCID: 0x%08x\n", error->ccid); 702 703 703 704 for (i = 0; i < dev_priv->num_fence_regs; i++)
+3
drivers/gpu/drm/i915/i915_drv.h
··· 209 209 u32 pgtbl_er; 210 210 u32 ier; 211 211 u32 ccid; 212 + u32 derrmr; 213 + u32 forcewake; 212 214 bool waiting[I915_NUM_RINGS]; 213 215 u32 pipestat[I915_MAX_PIPES]; 214 216 u32 tail[I915_NUM_RINGS]; 215 217 u32 head[I915_NUM_RINGS]; 218 + u32 ctl[I915_NUM_RINGS]; 216 219 u32 ipeir[I915_NUM_RINGS]; 217 220 u32 ipehr[I915_NUM_RINGS]; 218 221 u32 instdone[I915_NUM_RINGS];
+21
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 615 615 total = 0; 616 616 for (i = 0; i < count; i++) { 617 617 struct drm_i915_gem_relocation_entry __user *user_relocs; 618 + u64 invalid_offset = (u64)-1; 619 + int j; 618 620 619 621 user_relocs = (void __user *)(uintptr_t)exec[i].relocs_ptr; 620 622 ··· 625 623 ret = -EFAULT; 626 624 mutex_lock(&dev->struct_mutex); 627 625 goto err; 626 + } 627 + 628 + /* As we do not update the known relocation offsets after 629 + * relocating (due to the complexities in lock handling), 630 + * we need to mark them as invalid now so that we force the 631 + * relocation processing next time. Just in case the target 632 + * object is evicted and then rebound into its old 633 + * presumed_offset before the next execbuffer - if that 634 + * happened we would make the mistake of assuming that the 635 + * relocations were valid. 636 + */ 637 + for (j = 0; j < exec[i].relocation_count; j++) { 638 + if (copy_to_user(&user_relocs[j].presumed_offset, 639 + &invalid_offset, 640 + sizeof(invalid_offset))) { 641 + ret = -EFAULT; 642 + mutex_lock(&dev->struct_mutex); 643 + goto err; 644 + } 628 645 } 629 646 630 647 reloc_offset[i] = total;
+11
drivers/gpu/drm/i915/i915_irq.c
··· 1228 1228 error->acthd[ring->id] = intel_ring_get_active_head(ring); 1229 1229 error->head[ring->id] = I915_READ_HEAD(ring); 1230 1230 error->tail[ring->id] = I915_READ_TAIL(ring); 1231 + error->ctl[ring->id] = I915_READ_CTL(ring); 1231 1232 1232 1233 error->cpu_ring_head[ring->id] = ring->head; 1233 1234 error->cpu_ring_tail[ring->id] = ring->tail; ··· 1323 1322 error->ier = I915_READ16(IER); 1324 1323 else 1325 1324 error->ier = I915_READ(IER); 1325 + 1326 + if (INTEL_INFO(dev)->gen >= 6) 1327 + error->derrmr = I915_READ(DERRMR); 1328 + 1329 + if (IS_VALLEYVIEW(dev)) 1330 + error->forcewake = I915_READ(FORCEWAKE_VLV); 1331 + else if (INTEL_INFO(dev)->gen >= 7) 1332 + error->forcewake = I915_READ(FORCEWAKE_MT); 1333 + else if (INTEL_INFO(dev)->gen == 6) 1334 + error->forcewake = I915_READ(FORCEWAKE); 1326 1335 1327 1336 for_each_pipe(pipe) 1328 1337 error->pipestat[pipe] = I915_READ(PIPESTAT(pipe));
+3
drivers/gpu/drm/i915/i915_reg.h
··· 521 521 #define GEN7_ERR_INT 0x44040 522 522 #define ERR_INT_MMIO_UNCLAIMED (1<<13) 523 523 524 + #define DERRMR 0x44050 525 + 524 526 /* GM45+ chicken bits -- debug workaround bits that may be required 525 527 * for various sorts of correct behavior. The top 16 bits of each are 526 528 * the enables for writing to the corresponding low bit. ··· 542 540 #define MI_MODE 0x0209c 543 541 # define VS_TIMER_DISPATCH (1 << 6) 544 542 # define MI_FLUSH_ENABLE (1 << 12) 543 + # define ASYNC_FLIP_PERF_DISABLE (1 << 14) 545 544 546 545 #define GEN6_GT_MODE 0x20d0 547 546 #define GEN6_GT_MODE_HI (1 << 9)
+32 -15
drivers/gpu/drm/i915/intel_dp.c
··· 2650 2650 2651 2651 static void 2652 2652 intel_dp_init_panel_power_sequencer(struct drm_device *dev, 2653 - struct intel_dp *intel_dp) 2653 + struct intel_dp *intel_dp, 2654 + struct edp_power_seq *out) 2654 2655 { 2655 2656 struct drm_i915_private *dev_priv = dev->dev_private; 2656 2657 struct edp_power_seq cur, vbt, spec, final; ··· 2722 2721 intel_dp->panel_power_cycle_delay = get_delay(t11_t12); 2723 2722 #undef get_delay 2724 2723 2724 + DRM_DEBUG_KMS("panel power up delay %d, power down delay %d, power cycle delay %d\n", 2725 + intel_dp->panel_power_up_delay, intel_dp->panel_power_down_delay, 2726 + intel_dp->panel_power_cycle_delay); 2727 + 2728 + DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n", 2729 + intel_dp->backlight_on_delay, intel_dp->backlight_off_delay); 2730 + 2731 + if (out) 2732 + *out = final; 2733 + } 2734 + 2735 + static void 2736 + intel_dp_init_panel_power_sequencer_registers(struct drm_device *dev, 2737 + struct intel_dp *intel_dp, 2738 + struct edp_power_seq *seq) 2739 + { 2740 + struct drm_i915_private *dev_priv = dev->dev_private; 2741 + u32 pp_on, pp_off, pp_div; 2742 + 2725 2743 /* And finally store the new values in the power sequencer. */ 2726 - pp_on = (final.t1_t3 << PANEL_POWER_UP_DELAY_SHIFT) | 2727 - (final.t8 << PANEL_LIGHT_ON_DELAY_SHIFT); 2728 - pp_off = (final.t9 << PANEL_LIGHT_OFF_DELAY_SHIFT) | 2729 - (final.t10 << PANEL_POWER_DOWN_DELAY_SHIFT); 2744 + pp_on = (seq->t1_t3 << PANEL_POWER_UP_DELAY_SHIFT) | 2745 + (seq->t8 << PANEL_LIGHT_ON_DELAY_SHIFT); 2746 + pp_off = (seq->t9 << PANEL_LIGHT_OFF_DELAY_SHIFT) | 2747 + (seq->t10 << PANEL_POWER_DOWN_DELAY_SHIFT); 2730 2748 /* Compute the divisor for the pp clock, simply match the Bspec 2731 2749 * formula. */ 2732 2750 pp_div = ((100 * intel_pch_rawclk(dev))/2 - 1) 2733 2751 << PP_REFERENCE_DIVIDER_SHIFT; 2734 - pp_div |= (DIV_ROUND_UP(final.t11_t12, 1000) 2752 + pp_div |= (DIV_ROUND_UP(seq->t11_t12, 1000) 2735 2753 << PANEL_POWER_CYCLE_DELAY_SHIFT); 2736 2754 2737 2755 /* Haswell doesn't have any port selection bits for the panel ··· 2765 2745 I915_WRITE(PCH_PP_ON_DELAYS, pp_on); 2766 2746 I915_WRITE(PCH_PP_OFF_DELAYS, pp_off); 2767 2747 I915_WRITE(PCH_PP_DIVISOR, pp_div); 2768 - 2769 - 2770 - DRM_DEBUG_KMS("panel power up delay %d, power down delay %d, power cycle delay %d\n", 2771 - intel_dp->panel_power_up_delay, intel_dp->panel_power_down_delay, 2772 - intel_dp->panel_power_cycle_delay); 2773 - 2774 - DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n", 2775 - intel_dp->backlight_on_delay, intel_dp->backlight_off_delay); 2776 2748 2777 2749 DRM_DEBUG_KMS("panel power sequencer register settings: PP_ON %#x, PP_OFF %#x, PP_DIV %#x\n", 2778 2750 I915_READ(PCH_PP_ON_DELAYS), ··· 2782 2770 struct drm_device *dev = intel_encoder->base.dev; 2783 2771 struct drm_i915_private *dev_priv = dev->dev_private; 2784 2772 struct drm_display_mode *fixed_mode = NULL; 2773 + struct edp_power_seq power_seq = { 0 }; 2785 2774 enum port port = intel_dig_port->port; 2786 2775 const char *name = NULL; 2787 2776 int type; ··· 2855 2842 } 2856 2843 2857 2844 if (is_edp(intel_dp)) 2858 - intel_dp_init_panel_power_sequencer(dev, intel_dp); 2845 + intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq); 2859 2846 2860 2847 intel_dp_i2c_init(intel_dp, intel_connector, name); 2861 2848 ··· 2881 2868 intel_dp_destroy(connector); 2882 2869 return; 2883 2870 } 2871 + 2872 + /* We now know it's not a ghost, init power sequence regs. */ 2873 + intel_dp_init_panel_power_sequencer_registers(dev, intel_dp, 2874 + &power_seq); 2884 2875 2885 2876 ironlake_edp_panel_vdd_on(intel_dp); 2886 2877 edid = drm_get_edid(connector, &intel_dp->adapter);
+12 -5
drivers/gpu/drm/i915/intel_pm.c
··· 4279 4279 static void __gen6_gt_force_wake_mt_reset(struct drm_i915_private *dev_priv) 4280 4280 { 4281 4281 I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_DISABLE(0xffff)); 4282 - POSTING_READ(ECOBUS); /* something from same cacheline, but !FORCEWAKE */ 4282 + /* something from same cacheline, but !FORCEWAKE_MT */ 4283 + POSTING_READ(ECOBUS); 4283 4284 } 4284 4285 4285 4286 static void __gen6_gt_force_wake_mt_get(struct drm_i915_private *dev_priv) ··· 4297 4296 DRM_ERROR("Timed out waiting for forcewake old ack to clear.\n"); 4298 4297 4299 4298 I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_ENABLE(FORCEWAKE_KERNEL)); 4300 - POSTING_READ(ECOBUS); /* something from same cacheline, but !FORCEWAKE */ 4299 + /* something from same cacheline, but !FORCEWAKE_MT */ 4300 + POSTING_READ(ECOBUS); 4301 4301 4302 4302 if (wait_for_atomic((I915_READ_NOTRACE(forcewake_ack) & 1), 4303 4303 FORCEWAKE_ACK_TIMEOUT_MS)) ··· 4335 4333 static void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv) 4336 4334 { 4337 4335 I915_WRITE_NOTRACE(FORCEWAKE, 0); 4338 - /* gen6_gt_check_fifodbg doubles as the POSTING_READ */ 4336 + /* something from same cacheline, but !FORCEWAKE */ 4337 + POSTING_READ(ECOBUS); 4339 4338 gen6_gt_check_fifodbg(dev_priv); 4340 4339 } 4341 4340 4342 4341 static void __gen6_gt_force_wake_mt_put(struct drm_i915_private *dev_priv) 4343 4342 { 4344 4343 I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL)); 4345 - /* gen6_gt_check_fifodbg doubles as the POSTING_READ */ 4344 + /* something from same cacheline, but !FORCEWAKE_MT */ 4345 + POSTING_READ(ECOBUS); 4346 4346 gen6_gt_check_fifodbg(dev_priv); 4347 4347 } ··· 4384 4380 static void vlv_force_wake_reset(struct drm_i915_private *dev_priv) 4385 4381 { 4386 4382 I915_WRITE_NOTRACE(FORCEWAKE_VLV, _MASKED_BIT_DISABLE(0xffff)); 4383 + /* something from same cacheline, but !FORCEWAKE_VLV */ 4384 + POSTING_READ(FORCEWAKE_ACK_VLV); 4387 4385 } 4388 4386 4389 4387 static void vlv_force_wake_get(struct drm_i915_private *dev_priv) ··· 4406 4400 static void vlv_force_wake_put(struct drm_i915_private *dev_priv) 4407 4401 { 4408 4402 I915_WRITE_NOTRACE(FORCEWAKE_VLV, _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL)); 4409 - /* The below doubles as a POSTING_READ */ 4403 + /* something from same cacheline, but !FORCEWAKE_VLV */ 4404 + POSTING_READ(FORCEWAKE_ACK_VLV); 4410 4405 gen6_gt_check_fifodbg(dev_priv); 4411 4406 } 4412 4407
+18 -6
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 505 505 struct drm_i915_private *dev_priv = dev->dev_private; 506 506 int ret = init_ring_common(ring); 507 507 508 - if (INTEL_INFO(dev)->gen > 3) { 508 + if (INTEL_INFO(dev)->gen > 3) 509 509 I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH)); 510 - if (IS_GEN7(dev)) 511 - I915_WRITE(GFX_MODE_GEN7, 512 - _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) | 513 - _MASKED_BIT_ENABLE(GFX_REPLAY_MODE)); 514 - } 510 + 511 + /* We need to disable the AsyncFlip performance optimisations in order 512 + * to use MI_WAIT_FOR_EVENT within the CS. It should already be 513 + * programmed to '1' on all products. 514 + */ 515 + if (INTEL_INFO(dev)->gen >= 6) 516 + I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE)); 517 + 518 + /* Required for the hardware to program scanline values for waiting */ 519 + if (INTEL_INFO(dev)->gen == 6) 520 + I915_WRITE(GFX_MODE, 521 + _MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_ALWAYS)); 522 + 523 + if (IS_GEN7(dev)) 524 + I915_WRITE(GFX_MODE_GEN7, 525 + _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) | 526 + _MASKED_BIT_ENABLE(GFX_REPLAY_MODE)); 515 527 516 528 if (INTEL_INFO(dev)->gen >= 5) { 517 529 ret = init_pipe_control(ring);
+30 -3
drivers/gpu/drm/radeon/evergreen.c
··· 1313 1313 if (!(tmp & EVERGREEN_CRTC_BLANK_DATA_EN)) { 1314 1314 radeon_wait_for_vblank(rdev, i); 1315 1315 tmp |= EVERGREEN_CRTC_BLANK_DATA_EN; 1316 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 1316 1317 WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp); 1318 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0); 1317 1319 } 1318 1320 } else { 1319 1321 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]); 1320 1322 if (!(tmp & EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE)) { 1321 1323 radeon_wait_for_vblank(rdev, i); 1322 1324 tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE; 1325 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 1323 1326 WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp); 1327 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0); 1324 1328 } 1325 1329 } 1326 1330 /* wait for the next frame */ ··· 1349 1345 blackout &= ~BLACKOUT_MODE_MASK; 1350 1346 WREG32(MC_SHARED_BLACKOUT_CNTL, blackout | 1); 1351 1347 } 1348 + /* wait for the MC to settle */ 1349 + udelay(100); 1352 1350 } 1353 1351 1354 1352 void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *save) ··· 1384 1378 if (ASIC_IS_DCE6(rdev)) { 1385 1379 tmp = RREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i]); 1386 1380 tmp |= EVERGREEN_CRTC_BLANK_DATA_EN; 1381 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 1387 1382 WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp); 1383 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0); 1388 1384 } else { 1389 1385 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]); 1390 1386 tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE; 1387 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 1391 1388 WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp); 1389 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0); 1392 1390 } 1393 1391 /* wait for the next frame */ 1394 1392 frame_count = radeon_get_vblank_counter(rdev, i); ··· 2046 2036 WREG32(HDP_ADDR_CONFIG, gb_addr_config); 2047 2037 WREG32(DMA_TILING_CONFIG, gb_addr_config); 2048 2038 2049 - tmp = gb_addr_config & NUM_PIPES_MASK; 2050 - tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.evergreen.max_backends, 2051 - EVERGREEN_MAX_BACKENDS, disabled_rb_mask); 2039 + if ((rdev->config.evergreen.max_backends == 1) && 2040 + (rdev->flags & RADEON_IS_IGP)) { 2041 + if ((disabled_rb_mask & 3) == 1) { 2042 + /* RB0 disabled, RB1 enabled */ 2043 + tmp = 0x11111111; 2044 + } else { 2045 + /* RB1 disabled, RB0 enabled */ 2046 + tmp = 0x00000000; 2047 + } 2048 + } else { 2049 + tmp = gb_addr_config & NUM_PIPES_MASK; 2050 + tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.evergreen.max_backends, 2051 + EVERGREEN_MAX_BACKENDS, disabled_rb_mask); 2052 + } 2052 2053 WREG32(GB_BACKEND_MAP, tmp); 2053 2054 2054 2055 WREG32(CGTS_SYS_TCC_DISABLE, 0); ··· 2421 2400 static int evergreen_gpu_soft_reset(struct radeon_device *rdev, u32 reset_mask) 2422 2401 { 2423 2402 struct evergreen_mc_save save; 2403 + 2404 + if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE)) 2405 + reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE); 2406 + 2407 + if (RREG32(DMA_STATUS_REG) & DMA_IDLE) 2408 + reset_mask &= ~RADEON_RESET_DMA; 2424 2409 2425 2410 if (reset_mask == 0) 2426 2411 return 0;
+12 -2
drivers/gpu/drm/radeon/ni.c
··· 1216 1216 int cayman_dma_resume(struct radeon_device *rdev) 1217 1217 { 1218 1218 struct radeon_ring *ring; 1219 - u32 rb_cntl, dma_cntl; 1219 + u32 rb_cntl, dma_cntl, ib_cntl; 1220 1220 u32 rb_bufsz; 1221 1221 u32 reg_offset, wb_offset; 1222 1222 int i, r; ··· 1265 1265 WREG32(DMA_RB_BASE + reg_offset, ring->gpu_addr >> 8); 1266 1266 1267 1267 /* enable DMA IBs */ 1268 - WREG32(DMA_IB_CNTL + reg_offset, DMA_IB_ENABLE | CMD_VMID_FORCE); 1268 + ib_cntl = DMA_IB_ENABLE | CMD_VMID_FORCE; 1269 + #ifdef __BIG_ENDIAN 1270 + ib_cntl |= DMA_IB_SWAP_ENABLE; 1271 + #endif 1272 + WREG32(DMA_IB_CNTL + reg_offset, ib_cntl); 1269 1273 1270 1274 dma_cntl = RREG32(DMA_CNTL + reg_offset); 1271 1275 dma_cntl &= ~CTXEMPTY_INT_ENABLE; ··· 1412 1408 static int cayman_gpu_soft_reset(struct radeon_device *rdev, u32 reset_mask) 1413 1409 { 1414 1410 struct evergreen_mc_save save; 1411 + 1412 + if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE)) 1413 + reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE); 1414 + 1415 + if (RREG32(DMA_STATUS_REG) & DMA_IDLE) 1416 + reset_mask &= ~RADEON_RESET_DMA; 1415 1417 1416 1418 if (reset_mask == 0) 1417 1419 return 0;
+17 -4
drivers/gpu/drm/radeon/r600.c
··· 1378 1378 { 1379 1379 struct rv515_mc_save save; 1380 1380 1381 + if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE)) 1382 + reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE); 1383 + 1384 + if (RREG32(DMA_STATUS_REG) & DMA_IDLE) 1385 + reset_mask &= ~RADEON_RESET_DMA; 1386 + 1381 1387 if (reset_mask == 0) 1382 1388 return 0; 1383 1389 ··· 1462 1456 u32 disabled_rb_mask) 1463 1457 { 1464 1458 u32 rendering_pipe_num, rb_num_width, req_rb_num; 1465 - u32 pipe_rb_ratio, pipe_rb_remain; 1459 + u32 pipe_rb_ratio, pipe_rb_remain, tmp; 1466 1460 u32 data = 0, mask = 1 << (max_rb_num - 1); 1467 1461 unsigned i, j; 1468 1462 1469 1463 /* mask out the RBs that don't exist on that asic */ 1470 - disabled_rb_mask |= (0xff << max_rb_num) & 0xff; 1464 + tmp = disabled_rb_mask | ((0xff << max_rb_num) & 0xff); 1465 + /* make sure at least one RB is available */ 1466 + if ((tmp & 0xff) != 0xff) 1467 + disabled_rb_mask = tmp; 1471 1468 1472 1469 rendering_pipe_num = 1 << tiling_pipe_num; 1473 1470 req_rb_num = total_max_rb_num - r600_count_pipe_bits(disabled_rb_mask); ··· 2316 2307 int r600_dma_resume(struct radeon_device *rdev) 2317 2308 { 2318 2309 struct radeon_ring *ring = &rdev->ring[R600_RING_TYPE_DMA_INDEX]; 2319 - u32 rb_cntl, dma_cntl; 2310 + u32 rb_cntl, dma_cntl, ib_cntl; 2320 2311 u32 rb_bufsz; 2321 2312 int r; 2322 2313 ··· 2356 2347 WREG32(DMA_RB_BASE, ring->gpu_addr >> 8); 2357 2348 2358 2349 /* enable DMA IBs */ 2359 - WREG32(DMA_IB_CNTL, DMA_IB_ENABLE); 2350 + ib_cntl = DMA_IB_ENABLE; 2351 + #ifdef __BIG_ENDIAN 2352 + ib_cntl |= DMA_IB_SWAP_ENABLE; 2353 + #endif 2354 + WREG32(DMA_IB_CNTL, ib_cntl); 2360 2355 2361 2356 dma_cntl = RREG32(DMA_CNTL); 2362 2357 dma_cntl &= ~CTXEMPTY_INT_ENABLE;
+2 -1
drivers/gpu/drm/radeon/radeon.h
··· 324 324 struct list_head list; 325 325 /* Protected by tbo.reserved */ 326 326 u32 placements[3]; 327 - u32 busy_placements[3]; 328 327 struct ttm_placement placement; 329 328 struct ttm_buffer_object tbo; 330 329 struct ttm_bo_kmap_obj kmap; ··· 653 654 u32 ptr_reg_mask; 654 655 u32 nop; 655 656 u32 idx; 657 + u64 last_semaphore_signal_addr; 658 + u64 last_semaphore_wait_addr; 656 659 }; 657 660 658 661 /*
+3 -3
drivers/gpu/drm/radeon/radeon_asic.c
··· 1445 1445 .vm = { 1446 1446 .init = &cayman_vm_init, 1447 1447 .fini = &cayman_vm_fini, 1448 - .pt_ring_index = R600_RING_TYPE_DMA_INDEX, 1448 + .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX, 1449 1449 .set_page = &cayman_vm_set_page, 1450 1450 }, 1451 1451 .ring = { ··· 1572 1572 .vm = { 1573 1573 .init = &cayman_vm_init, 1574 1574 .fini = &cayman_vm_fini, 1575 - .pt_ring_index = R600_RING_TYPE_DMA_INDEX, 1575 + .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX, 1576 1576 .set_page = &cayman_vm_set_page, 1577 1577 }, 1578 1578 .ring = { ··· 1699 1699 .vm = { 1700 1700 .init = &si_vm_init, 1701 1701 .fini = &si_vm_fini, 1702 - .pt_ring_index = R600_RING_TYPE_DMA_INDEX, 1702 + .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX, 1703 1703 .set_page = &si_vm_set_page, 1704 1704 }, 1705 1705 .ring = {
+8
drivers/gpu/drm/radeon/radeon_combios.c
··· 2470 2470 1),
2471 2471 ATOM_DEVICE_CRT1_SUPPORT);
2472 2472 }
2473 + /* RV100 board with external TMDS bit mis-set.
2474 + * Actually uses internal TMDS, clear the bit.
2475 + */
2476 + if (dev->pdev->device == 0x5159 &&
2477 + dev->pdev->subsystem_vendor == 0x1014 &&
2478 + dev->pdev->subsystem_device == 0x029A) {
2479 + tmp &= ~(1 << 4);
2480 + }
2473 2481 if ((tmp >> 4) & 0x1) {
2474 2482 devices |= ATOM_DEVICE_DFP2_SUPPORT;
2475 2483 radeon_add_legacy_encoder(dev,
+2
drivers/gpu/drm/radeon/radeon_cs.c
··· 286 286 p->chunks[p->chunk_ib_idx].kpage[1] == NULL) { 287 287 kfree(p->chunks[p->chunk_ib_idx].kpage[0]); 288 288 kfree(p->chunks[p->chunk_ib_idx].kpage[1]); 289 + p->chunks[p->chunk_ib_idx].kpage[0] = NULL; 290 + p->chunks[p->chunk_ib_idx].kpage[1] = NULL; 289 291 return -ENOMEM; 290 292 } 291 293 }
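The radeon_cs.c hunk above is a dangling-pointer guard: the error path frees `kpage[0]`/`kpage[1]`, but a later teardown path frees them again unless the pointers are cleared first. A minimal user-space sketch of the same pattern (all names here are illustrative, not the driver's):

```c
#include <stdlib.h>

struct chunk {
    char *kpage[2];
};

/* Mirrors the fix: on allocation failure, free what was obtained AND
 * clear the pointers, so a later blanket teardown that frees kpage[]
 * again only calls free(NULL), a harmless no-op, not a double free. */
int chunk_alloc_pages(struct chunk *c, size_t sz)
{
    c->kpage[0] = malloc(sz);
    c->kpage[1] = malloc(sz);
    if (c->kpage[0] == NULL || c->kpage[1] == NULL) {
        free(c->kpage[0]);
        free(c->kpage[1]);
        c->kpage[0] = NULL;   /* the two lines the patch adds */
        c->kpage[1] = NULL;
        return -1;            /* stands in for -ENOMEM */
    }
    return 0;
}

/* Generic teardown that runs regardless of where setup failed. */
void chunk_teardown(struct chunk *c)
{
    free(c->kpage[0]);
    free(c->kpage[1]);
    c->kpage[0] = c->kpage[1] = NULL;
}
```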
+2 -1
drivers/gpu/drm/radeon/radeon_cursor.c
··· 241 241 y = 0; 242 242 } 243 243 244 - if (ASIC_IS_AVIVO(rdev)) { 244 + /* fixed on DCE6 and newer */ 245 + if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE6(rdev)) { 245 246 int i = 0; 246 247 struct drm_crtc *crtc_p; 247 248
+2 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 429 429 { 430 430 uint32_t reg; 431 431 432 - if (efi_enabled && rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) 432 + if (efi_enabled(EFI_BOOT) && 433 + rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) 433 434 return false; 434 435 435 436 /* first check CRTCs */
+4 -2
drivers/gpu/drm/radeon/radeon_display.c
··· 1115 1115 } 1116 1116 1117 1117 radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL); 1118 - if (radeon_fb == NULL) 1118 + if (radeon_fb == NULL) { 1119 + drm_gem_object_unreference_unlocked(obj); 1119 1120 return ERR_PTR(-ENOMEM); 1121 + } 1120 1122 1121 1123 ret = radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj); 1122 1124 if (ret) { 1123 1125 kfree(radeon_fb); 1124 1126 drm_gem_object_unreference_unlocked(obj); 1125 - return NULL; 1127 + return ERR_PTR(ret); 1126 1128 } 1127 1129 1128 1130 return &radeon_fb->base;
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 69 69 * 2.26.0 - r600-eg: fix htile size computation 70 70 * 2.27.0 - r600-SI: Add CS ioctl support for async DMA 71 71 * 2.28.0 - r600-eg: Add MEM_WRITE packet support 72 + * 2.29.0 - R500 FP16 color clear registers 72 73 */ 73 74 #define KMS_DRIVER_MAJOR 2 74 - #define KMS_DRIVER_MINOR 28 75 + #define KMS_DRIVER_MINOR 29 75 76 #define KMS_DRIVER_PATCHLEVEL 0 76 77 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 77 78 int radeon_driver_unload_kms(struct drm_device *dev);
+10 -8
drivers/gpu/drm/radeon/radeon_object.c
··· 84 84 rbo->placement.fpfn = 0; 85 85 rbo->placement.lpfn = 0; 86 86 rbo->placement.placement = rbo->placements; 87 + rbo->placement.busy_placement = rbo->placements; 87 88 if (domain & RADEON_GEM_DOMAIN_VRAM) 88 89 rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | 89 90 TTM_PL_FLAG_VRAM; ··· 105 104 if (!c) 106 105 rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM; 107 106 rbo->placement.num_placement = c; 108 - 109 - c = 0; 110 - rbo->placement.busy_placement = rbo->busy_placements; 111 - if (rbo->rdev->flags & RADEON_IS_AGP) { 112 - rbo->busy_placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT; 113 - } else { 114 - rbo->busy_placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_TT; 115 - } 116 107 rbo->placement.num_busy_placement = c; 117 108 } 118 109 ··· 350 357 { 351 358 struct radeon_bo_list *lobj; 352 359 struct radeon_bo *bo; 360 + u32 domain; 353 361 int r; 354 362 355 363 r = ttm_eu_reserve_buffers(head); ··· 360 366 list_for_each_entry(lobj, head, tv.head) { 361 367 bo = lobj->bo; 362 368 if (!bo->pin_count) { 369 + domain = lobj->wdomain ? lobj->wdomain : lobj->rdomain; 370 + 371 + retry: 372 + radeon_ttm_placement_from_domain(bo, domain); 363 373 r = ttm_bo_validate(&bo->tbo, &bo->placement, 364 374 true, false); 365 375 if (unlikely(r)) { 376 + if (r != -ERESTARTSYS && domain == RADEON_GEM_DOMAIN_VRAM) { 377 + domain |= RADEON_GEM_DOMAIN_GTT; 378 + goto retry; 379 + } 366 380 return r; 367 381 } 368 382 }
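The retry loop added to radeon_object.c widens a VRAM-only placement to VRAM|GTT and validates again instead of failing the whole submission, unless the failure was a signal-interrupted wait. A standalone model of that control flow (`try_place` is a stand-in for `ttm_bo_validate`; the domain flags and error value are illustrative):

```c
#define DOM_VRAM 0x1
#define DOM_GTT  0x2
#define ERR_RESTART (-512)   /* stands in for -ERESTARTSYS */

/* Stand-in for ttm_bo_validate(): placement fails when the only
 * allowed domain is VRAM and VRAM is full. */
int try_place(unsigned domain, int vram_full)
{
    if (domain == DOM_VRAM && vram_full)
        return -1;
    return 0;
}

/* The patch's retry logic: on failure (other than a signal) widen a
 * VRAM-only request to VRAM|GTT and try once more. */
int validate_with_fallback(unsigned domain, int vram_full)
{
    int r;

retry:
    r = try_place(domain, vram_full);
    if (r) {
        if (r != ERR_RESTART && domain == DOM_VRAM) {
            domain |= DOM_GTT;   /* fall back to system memory too */
            goto retry;
        }
        return r;
    }
    return 0;
}
```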
+5
drivers/gpu/drm/radeon/radeon_ring.c
··· 377 377 { 378 378 int r; 379 379 380 + /* make sure we aren't trying to allocate more space than there is on the ring */ 381 + if (ndw > (ring->ring_size / 4)) 382 + return -ENOMEM; 380 383 /* Align requested size with padding so unlock_commit can 381 384 * pad safely */ 382 385 ndw = (ndw + ring->align_mask) & ~ring->align_mask; ··· 787 784 } 788 785 seq_printf(m, "driver's copy of the wptr: 0x%08x [%5d]\n", ring->wptr, ring->wptr); 789 786 seq_printf(m, "driver's copy of the rptr: 0x%08x [%5d]\n", ring->rptr, ring->rptr); 787 + seq_printf(m, "last semaphore signal addr : 0x%016llx\n", ring->last_semaphore_signal_addr); 788 + seq_printf(m, "last semaphore wait addr : 0x%016llx\n", ring->last_semaphore_wait_addr); 790 789 seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw); 791 790 seq_printf(m, "%u dwords in ring\n", count); 792 791 /* print 8 dw before current rptr as often it's the last executed
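Two details in the radeon_ring.c hunk are worth unpacking: a request larger than the whole ring can never be satisfied and must fail up front (ring size is in bytes, a dword is 4 bytes), and requested sizes are rounded up with the ring's power-of-two alignment mask so the commit path can pad safely. A sketch of the arithmetic, with illustrative names:

```c
/* align_mask is (alignment - 1) for a power-of-two alignment, so this
 * rounds ndw up to the next multiple of the alignment. */
unsigned ring_align_ndw(unsigned ndw, unsigned align_mask)
{
    return (ndw + align_mask) & ~align_mask;
}

/* The added guard: asking for more dwords than the ring holds in total
 * would otherwise spin forever waiting for space that never appears. */
int ring_check_request(unsigned ndw, unsigned ring_size_bytes)
{
    if (ndw > ring_size_bytes / 4)   /* 4 bytes per dword */
        return -1;                   /* stands in for -ENOMEM */
    return 0;
}
```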
+4
drivers/gpu/drm/radeon/radeon_semaphore.c
··· 95 95 /* we assume caller has already allocated space on waiters ring */ 96 96 radeon_semaphore_emit_wait(rdev, waiter, semaphore); 97 97 98 + /* for debugging lockup only, used by sysfs debug files */ 99 + rdev->ring[signaler].last_semaphore_signal_addr = semaphore->gpu_addr; 100 + rdev->ring[waiter].last_semaphore_wait_addr = semaphore->gpu_addr; 101 + 98 102 return 0; 99 103 } 100 104
+1
drivers/gpu/drm/radeon/reg_srcs/cayman
··· 1 1 cayman 0x9400 2 2 0x0000802C GRBM_GFX_INDEX 3 + 0x00008040 WAIT_UNTIL 3 4 0x000084FC CP_STRMOUT_CNTL 4 5 0x000085F0 CP_COHER_CNTL 5 6 0x000085F4 CP_COHER_SIZE
+2
drivers/gpu/drm/radeon/reg_srcs/rv515
··· 324 324 0x46AC US_OUT_FMT_2 325 325 0x46B0 US_OUT_FMT_3 326 326 0x46B4 US_W_FMT 327 + 0x46C0 RB3D_COLOR_CLEAR_VALUE_AR 328 + 0x46C4 RB3D_COLOR_CLEAR_VALUE_GB 327 329 0x4BC0 FG_FOG_BLEND 328 330 0x4BC4 FG_FOG_FACTOR 329 331 0x4BC8 FG_FOG_COLOR_R
+2
drivers/gpu/drm/radeon/rv515.c
··· 336 336 WREG32(R600_CITF_CNTL, blackout); 337 337 } 338 338 } 339 + /* wait for the MC to settle */ 340 + udelay(100); 339 341 } 340 342 341 343 void rv515_mc_resume(struct radeon_device *rdev, struct rv515_mc_save *save)
+6
drivers/gpu/drm/radeon/si.c
··· 2215 2215 { 2216 2216 struct evergreen_mc_save save; 2217 2217 2218 + if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE)) 2219 + reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE); 2220 + 2221 + if (RREG32(DMA_STATUS_REG) & DMA_IDLE) 2222 + reset_mask &= ~RADEON_RESET_DMA; 2223 + 2218 2224 if (reset_mask == 0) 2219 2225 return 0; 2220 2226
+1
drivers/gpu/drm/ttm/ttm_bo.c
··· 434 434 bo->mem = tmp_mem; 435 435 bdev->driver->move_notify(bo, mem); 436 436 bo->mem = *mem; 437 + *mem = tmp_mem; 437 438 } 438 439 439 440 goto out_err;
+9 -2
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 344 344 345 345 if (ttm->state == tt_unpopulated) { 346 346 ret = ttm->bdev->driver->ttm_tt_populate(ttm); 347 - if (ret) 347 + if (ret) { 348 + /* if we fail here don't nuke the mm node 349 + * as the bo still owns it */ 350 + old_copy.mm_node = NULL; 348 351 goto out1; 352 + } 349 353 } 350 354 351 355 add = 0; ··· 375 371 prot); 376 372 } else 377 373 ret = ttm_copy_io_page(new_iomap, old_iomap, page); 378 - if (ret) 374 + if (ret) { 375 + /* failing here, means keep old copy as-is */ 376 + old_copy.mm_node = NULL; 379 377 goto out1; 378 + } 380 379 } 381 380 mb(); 382 381 out2:
+3
drivers/gpu/vga/vga_switcheroo.c
··· 25 25 #include <linux/fb.h> 26 26 27 27 #include <linux/pci.h> 28 + #include <linux/console.h> 28 29 #include <linux/vga_switcheroo.h> 29 30 30 31 #include <linux/vgaarb.h> ··· 338 337 339 338 if (new_client->fb_info) { 340 339 struct fb_event event; 340 + console_lock(); 341 341 event.info = new_client->fb_info; 342 342 fb_notifier_call_chain(FB_EVENT_REMAP_ALL_CONSOLE, &event); 343 + console_unlock(); 343 344 } 344 345 345 346 ret = vgasr_priv.handler->switchto(new_client->id);
+3
drivers/hid/hid-ids.h
··· 306 306 #define USB_VENDOR_ID_EZKEY 0x0518 307 307 #define USB_DEVICE_ID_BTC_8193 0x0002 308 308 309 + #define USB_VENDOR_ID_FORMOSA 0x147a 310 + #define USB_DEVICE_ID_FORMOSA_IR_RECEIVER 0xe03e 311 + 309 312 #define USB_VENDOR_ID_FREESCALE 0x15A2 310 313 #define USB_DEVICE_ID_FREESCALE_MX28 0x004F 311 314
+12 -1
drivers/hid/i2c-hid/i2c-hid.c
··· 540 540 {
541 541 struct i2c_client *client = hid->driver_data;
542 542 int report_id = buf[0];
543 + int ret;
543 544
544 545 if (report_type == HID_INPUT_REPORT)
545 546 return -EINVAL;
546 547
547 - return i2c_hid_set_report(client,
548 + if (report_id) {
549 + buf++;
550 + count--;
551 + }
552 +
553 + ret = i2c_hid_set_report(client,
548 554 report_type == HID_FEATURE_REPORT ? 0x03 : 0x02,
549 555 report_id, buf, count);
556 +
557 + if (report_id && ret >= 0)
558 + ret++; /* add report_id to the number of transferred bytes */
559 +
560 + return ret;
550 561 }
551 562
552 563 static int i2c_hid_parse(struct hid_device *hid)
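The i2c-hid change handles numbered HID output reports: the report ID byte is stripped from the caller's buffer before the transfer (it travels in the request itself), then added back to the byte count on success so the caller sees all of its bytes accounted for. An illustrative model, with `send_report` standing in for the real i2c transfer:

```c
/* Hypothetical stand-in for the actual transfer: pretend every
 * payload byte was sent. */
int send_report(int report_id, const unsigned char *buf, int count)
{
    (void)report_id;
    (void)buf;
    return count;
}

/* Model of the fixed output_report path. */
int output_report(const unsigned char *buf, int count)
{
    int report_id = buf[0];
    int ret;

    if (report_id) {
        buf++;      /* the ID is carried separately, not in the payload */
        count--;
    }

    ret = send_report(report_id, buf, count);

    if (report_id && ret >= 0)
        ret++;      /* count the ID byte back in for the caller */

    return ret;
}
```

Either way, the caller that handed in `count` bytes gets `count` reported back on success.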
+1
drivers/hid/usbhid/hid-quirks.c
··· 70 70 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET }, 71 71 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 72 72 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 73 + { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS }, 73 74 { USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET }, 74 75 { USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET }, 75 76 { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
+21 -14
drivers/hv/hv_balloon.c
··· 403 403 */ 404 404 405 405 struct dm_info_msg { 406 - struct dm_info_header header; 406 + struct dm_header hdr; 407 407 __u32 reserved; 408 408 __u32 info_size; 409 409 __u8 info[]; ··· 503 503 504 504 static void process_info(struct hv_dynmem_device *dm, struct dm_info_msg *msg) 505 505 { 506 - switch (msg->header.type) { 506 + struct dm_info_header *info_hdr; 507 + 508 + info_hdr = (struct dm_info_header *)msg->info; 509 + 510 + switch (info_hdr->type) { 507 511 case INFO_TYPE_MAX_PAGE_CNT: 508 512 pr_info("Received INFO_TYPE_MAX_PAGE_CNT\n"); 509 - pr_info("Data Size is %d\n", msg->header.data_size); 513 + pr_info("Data Size is %d\n", info_hdr->data_size); 510 514 break; 511 515 default: 512 - pr_info("Received Unknown type: %d\n", msg->header.type); 516 + pr_info("Received Unknown type: %d\n", info_hdr->type); 513 517 } 514 518 } 515 519 ··· 883 879 balloon_onchannelcallback, dev); 884 880 885 881 if (ret) 886 - return ret; 882 + goto probe_error0; 887 883 888 884 dm_device.dev = dev; 889 885 dm_device.state = DM_INITIALIZING; ··· 895 891 kthread_run(dm_thread_func, &dm_device, "hv_balloon"); 896 892 if (IS_ERR(dm_device.thread)) { 897 893 ret = PTR_ERR(dm_device.thread); 898 - goto probe_error0; 894 + goto probe_error1; 899 895 } 900 896 901 897 hv_set_drvdata(dev, &dm_device); ··· 918 914 VM_PKT_DATA_INBAND, 919 915 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 920 916 if (ret) 921 - goto probe_error1; 917 + goto probe_error2; 922 918 923 919 t = wait_for_completion_timeout(&dm_device.host_event, 5*HZ); 924 920 if (t == 0) { 925 921 ret = -ETIMEDOUT; 926 - goto probe_error1; 922 + goto probe_error2; 927 923 } 928 924 929 925 /* ··· 932 928 */ 933 929 if (dm_device.state == DM_INIT_ERROR) { 934 930 ret = -ETIMEDOUT; 935 - goto probe_error1; 931 + goto probe_error2; 936 932 } 937 933 /* 938 934 * Now submit our capabilities to the host. 
··· 965 961 VM_PKT_DATA_INBAND, 966 962 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 967 963 if (ret) 968 - goto probe_error1; 964 + goto probe_error2; 969 965 970 966 t = wait_for_completion_timeout(&dm_device.host_event, 5*HZ); 971 967 if (t == 0) { 972 968 ret = -ETIMEDOUT; 973 - goto probe_error1; 969 + goto probe_error2; 974 970 } 975 971 976 972 /* ··· 979 975 */ 980 976 if (dm_device.state == DM_INIT_ERROR) { 981 977 ret = -ETIMEDOUT; 982 - goto probe_error1; 978 + goto probe_error2; 983 979 } 984 980 985 981 dm_device.state = DM_INITIALIZED; 986 982 987 983 return 0; 988 984 989 - probe_error1: 985 + probe_error2: 990 986 kthread_stop(dm_device.thread); 991 987 992 - probe_error0: 988 + probe_error1: 993 989 vmbus_close(dev->channel); 990 + probe_error0: 991 + kfree(send_buffer); 994 992 return ret; 995 993 } 996 994 ··· 1005 999 1006 1000 vmbus_close(dev->channel); 1007 1001 kthread_stop(dm->thread); 1002 + kfree(send_buffer); 1008 1003 1009 1004 return 0; 1010 1005 }
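Most of the hv_balloon churn is a relabeled goto-unwind ladder: the new `probe_error2`/`probe_error1`/`probe_error0` labels release resources strictly in reverse order of acquisition, and the previously leaked `send_buffer` is now freed on every failure path (and in remove). A toy model of the pattern; the step names are stand-ins, not Hyper-V APIs:

```c
#include <stdlib.h>

int channel_open, thread_running;
char *send_buffer;

/* fail_at selects the failing step: 0 = none, 1 = channel open,
 * 2 = thread start, 3 = host handshake. */
int probe(int fail_at)
{
    send_buffer = malloc(64);        /* kzalloc() of the message buffer */
    if (!send_buffer)
        return -1;

    if (fail_at == 1)                /* vmbus_open() failed */
        goto probe_error0;
    channel_open = 1;

    if (fail_at == 2)                /* kthread_run() failed */
        goto probe_error1;
    thread_running = 1;

    if (fail_at == 3)                /* handshake timed out */
        goto probe_error2;

    return 0;

    /* unwind in exact reverse order of setup */
probe_error2:
    thread_running = 0;              /* kthread_stop() */
probe_error1:
    channel_open = 0;                /* vmbus_close() */
probe_error0:
    free(send_buffer);               /* the leak the patch plugs */
    send_buffer = NULL;
    return -1;
}
```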
+4
drivers/i2c/busses/i2c-designware-core.c
··· 34 34 #include <linux/io.h> 35 35 #include <linux/pm_runtime.h> 36 36 #include <linux/delay.h> 37 + #include <linux/module.h> 37 38 #include "i2c-designware-core.h" 38 39 39 40 /* ··· 726 725 return dw_readl(dev, DW_IC_COMP_PARAM_1); 727 726 } 728 727 EXPORT_SYMBOL_GPL(i2c_dw_read_comp_param); 728 + 729 + MODULE_DESCRIPTION("Synopsys DesignWare I2C bus adapter core"); 730 + MODULE_LICENSE("GPL");
+4 -2
drivers/i2c/busses/i2c-mxs.c
··· 127 127 struct device *dev; 128 128 void __iomem *regs; 129 129 struct completion cmd_complete; 130 - u32 cmd_err; 130 + int cmd_err; 131 131 struct i2c_adapter adapter; 132 132 const struct mxs_i2c_speed_config *speed; 133 133 ··· 316 316 if (msg->len == 0) 317 317 return -EINVAL; 318 318 319 - init_completion(&i2c->cmd_complete); 319 + INIT_COMPLETION(i2c->cmd_complete); 320 320 i2c->cmd_err = 0; 321 321 322 322 ret = mxs_i2c_dma_setup_xfer(adap, msg, flags); ··· 472 472 473 473 i2c->dev = dev; 474 474 i2c->speed = &mxs_i2c_95kHz_config; 475 + 476 + init_completion(&i2c->cmd_complete); 475 477 476 478 if (dev->of_node) { 477 479 err = mxs_i2c_get_ofdata(i2c);
+3 -3
drivers/i2c/busses/i2c-omap.c
··· 803 803 if (stat & OMAP_I2C_STAT_AL) { 804 804 dev_err(dev->dev, "Arbitration lost\n"); 805 805 dev->cmd_err |= OMAP_I2C_STAT_AL; 806 - omap_i2c_ack_stat(dev, OMAP_I2C_STAT_NACK); 806 + omap_i2c_ack_stat(dev, OMAP_I2C_STAT_AL); 807 807 } 808 808 809 809 return -EIO; ··· 963 963 i2c_omap_errata_i207(dev, stat); 964 964 965 965 omap_i2c_ack_stat(dev, OMAP_I2C_STAT_RDR); 966 - break; 966 + continue; 967 967 } 968 968 969 969 if (stat & OMAP_I2C_STAT_RRDY) { ··· 989 989 break; 990 990 991 991 omap_i2c_ack_stat(dev, OMAP_I2C_STAT_XDR); 992 - break; 992 + continue; 993 993 } 994 994 995 995 if (stat & OMAP_I2C_STAT_XRDY) {
+4
drivers/i2c/busses/i2c-sirf.c
··· 12 12 #include <linux/slab.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/i2c.h> 15 + #include <linux/of_i2c.h> 15 16 #include <linux/clk.h> 16 17 #include <linux/err.h> 17 18 #include <linux/io.h> ··· 329 328 adap->algo = &i2c_sirfsoc_algo; 330 329 adap->algo_data = siic; 331 330 331 + adap->dev.of_node = pdev->dev.of_node; 332 332 adap->dev.parent = &pdev->dev; 333 333 adap->nr = pdev->id; 334 334 ··· 372 370 } 373 371 374 372 clk_disable(clk); 373 + 374 + of_i2c_register_devices(adap); 375 375 376 376 dev_info(&pdev->dev, " I2C adapter ready to operate\n"); 377 377
+1 -1
drivers/i2c/muxes/i2c-mux-pinctrl.c
··· 167 167 } 168 168 169 169 mux->busses = devm_kzalloc(&pdev->dev, 170 - sizeof(mux->busses) * mux->pdata->bus_count, 170 + sizeof(*mux->busses) * mux->pdata->bus_count, 171 171 GFP_KERNEL); 172 172 if (!mux->busses) { 173 173 dev_err(&pdev->dev, "Cannot allocate busses\n");
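The i2c-mux-pinctrl fix is the classic `sizeof(ptr)` vs `sizeof(*ptr)` allocation bug: the original sized the array by the pointer (typically 8 bytes), not by the element, under-allocating whenever the element type is larger than a pointer. Illustrated with a hypothetical element type:

```c
#include <stddef.h>

struct bus { char name[32]; int nr; };   /* stands in for the real element */

/* The bug: sizeof(busses) is the size of the pointer variable itself. */
size_t buggy_size(struct bus *busses, int count)
{
    (void)busses;
    return sizeof(busses) * count;
}

/* The fix: sizeof(*busses) is the size of one element. sizeof does not
 * evaluate its operand, so a NULL pointer here is fine. */
size_t fixed_size(struct bus *busses, int count)
{
    (void)busses;
    return sizeof(*busses) * count;
}
```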
+1 -2
drivers/idle/intel_idle.c
··· 448 448 else 449 449 on_each_cpu(__setup_broadcast_timer, (void *)true, 1); 450 450 451 - register_cpu_notifier(&cpu_hotplug_notifier); 452 - 453 451 pr_debug(PREFIX "v" INTEL_IDLE_VERSION 454 452 " model 0x%X\n", boot_cpu_data.x86_model); 455 453 ··· 610 612 return retval; 611 613 } 612 614 } 615 + register_cpu_notifier(&cpu_hotplug_notifier); 613 616 614 617 return 0; 615 618 }
+34
drivers/iommu/amd_iommu_init.c
··· 975 975 }
976 976
977 977 /*
978 + * Family15h Model 10h-1fh erratum 746 (IOMMU Logging May Stall Translations)
979 + * Workaround:
980 + * BIOS should disable L2B miscellaneous clock gating by setting
981 + * L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
982 + */
983 + static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
984 + {
985 + u32 value;
986 +
987 + if ((boot_cpu_data.x86 != 0x15) ||
988 + (boot_cpu_data.x86_model < 0x10) ||
989 + (boot_cpu_data.x86_model > 0x1f))
990 + return;
991 +
992 + pci_write_config_dword(iommu->dev, 0xf0, 0x90);
993 + pci_read_config_dword(iommu->dev, 0xf4, &value);
994 +
995 + if (value & BIT(2))
996 + return;
997 +
998 + /* Select NB indirect register 0x90 and enable writing */
999 + pci_write_config_dword(iommu->dev, 0xf0, 0x90 | (1 << 8));
1000 +
1001 + pci_write_config_dword(iommu->dev, 0xf4, value | 0x4);
1002 + pr_info("AMD-Vi: Applying erratum 746 workaround for IOMMU at %s\n",
1003 + dev_name(&iommu->dev->dev));
1004 +
1005 + /* Clear the enable writing bit */
1006 + pci_write_config_dword(iommu->dev, 0xf0, 0x90);
1007 + }
1008 +
1009 + /*
978 1010 * This function glues the initialization function for one IOMMU
979 1011 * together and also allocates the command buffer and programs the
980 1012 * hardware. It does NOT enable the IOMMU. This is done afterwards.
··· 1203 1171 for (i = 0; i < 0x83; i++)
1204 1172 iommu->stored_l2[i] = iommu_read_l2(iommu, i);
1205 1173 }
1174 +
1175 + amd_iommu_erratum_746_workaround(iommu);
1206 1176
1207 1177 return pci_enable_device(iommu->dev);
1208 1178 }
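The erratum 746 workaround uses an index/data pair in PCI config space: writing offset 0xf0 selects which northbridge-internal register is visible at 0xf4, and bit 8 of the index must be set before a write to 0xf4 will land. The sequence can be modeled with a simulated register file (arrays here stand in for `pci_read/write_config_dword` on real hardware):

```c
#include <stdint.h>

uint32_t nb_regs[0x100];   /* hypothetical indirect register file */
uint32_t index_reg;        /* last value written to the 0xf0 index port */

void cfg_write(uint32_t off, uint32_t val)
{
    if (off == 0xf0)
        index_reg = val;
    else if (off == 0xf4 && (index_reg & (1u << 8)))
        nb_regs[index_reg & 0xff] = val;   /* writes land only when enabled */
}

uint32_t cfg_read(uint32_t off)
{
    if (off == 0xf4)
        return nb_regs[index_reg & 0xff];
    return 0;
}

/* Read L2_L2B_CK_GATE_CONTROL (indirect reg 0x90); if bit 2
 * (CKGateL2BMiscDisable) is clear, set it under the write-enable bit,
 * then clear write access again -- the same sequence as the patch. */
void apply_erratum_746(void)
{
    uint32_t value;

    cfg_write(0xf0, 0x90);
    value = cfg_read(0xf4);

    if (value & (1u << 2))
        return;                            /* BIOS already applied it */

    cfg_write(0xf0, 0x90 | (1u << 8));     /* select reg, enable writing */
    cfg_write(0xf4, value | 0x4);
    cfg_write(0xf0, 0x90);                 /* clear the enable-writing bit */
}
```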
+15 -6
drivers/iommu/intel-iommu.c
··· 4234 4234 .pgsize_bitmap = INTEL_IOMMU_PGSIZES, 4235 4235 }; 4236 4236 4237 + static void quirk_iommu_g4x_gfx(struct pci_dev *dev) 4238 + { 4239 + /* G4x/GM45 integrated gfx dmar support is totally busted. */ 4240 + printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n"); 4241 + dmar_map_gfx = 0; 4242 + } 4243 + 4244 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_g4x_gfx); 4245 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e00, quirk_iommu_g4x_gfx); 4246 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e10, quirk_iommu_g4x_gfx); 4247 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e20, quirk_iommu_g4x_gfx); 4248 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_g4x_gfx); 4249 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_g4x_gfx); 4250 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_g4x_gfx); 4251 + 4237 4252 static void quirk_iommu_rwbf(struct pci_dev *dev) 4238 4253 { 4239 4254 /* ··· 4257 4242 */ 4258 4243 printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n"); 4259 4244 rwbf_quirk = 1; 4260 - 4261 - /* https://bugzilla.redhat.com/show_bug.cgi?id=538163 */ 4262 - if (dev->revision == 0x07) { 4263 - printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n"); 4264 - dmar_map_gfx = 0; 4265 - } 4266 4245 } 4267 4246 4268 4247 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+2
drivers/isdn/gigaset/capi.c
··· 248 248 CAPIMSG_APPID(data), CAPIMSG_MSGID(data), l, 249 249 CAPIMSG_CONTROL(data)); 250 250 l -= 12; 251 + if (l <= 0) 252 + return; 251 253 dbgline = kmalloc(3 * l, GFP_ATOMIC); 252 254 if (!dbgline) 253 255 return;
+37 -64
drivers/md/dm-raid.c
··· 340 340 } 341 341 342 342 /* 343 - * validate_rebuild_devices 343 + * validate_raid_redundancy 344 344 * @rs 345 345 * 346 - * Determine if the devices specified for rebuild can result in a valid 347 - * usable array that is capable of rebuilding the given devices. 346 + * Determine if there are enough devices in the array that haven't 347 + * failed (or are being rebuilt) to form a usable array. 348 348 * 349 349 * Returns: 0 on success, -EINVAL on failure. 350 350 */ 351 - static int validate_rebuild_devices(struct raid_set *rs) 351 + static int validate_raid_redundancy(struct raid_set *rs) 352 352 { 353 353 unsigned i, rebuild_cnt = 0; 354 354 unsigned rebuilds_per_group, copies, d; 355 355 356 - if (!(rs->print_flags & DMPF_REBUILD)) 357 - return 0; 358 - 359 356 for (i = 0; i < rs->md.raid_disks; i++) 360 - if (!test_bit(In_sync, &rs->dev[i].rdev.flags)) 357 + if (!test_bit(In_sync, &rs->dev[i].rdev.flags) || 358 + !rs->dev[i].rdev.sb_page) 361 359 rebuild_cnt++; 362 360 363 361 switch (rs->raid_type->level) { ··· 391 393 * A A B B C 392 394 * C D D E E 393 395 */ 394 - rebuilds_per_group = 0; 395 396 for (i = 0; i < rs->md.raid_disks * copies; i++) { 397 + if (!(i % copies)) 398 + rebuilds_per_group = 0; 396 399 d = i % rs->md.raid_disks; 397 - if (!test_bit(In_sync, &rs->dev[d].rdev.flags) && 400 + if ((!rs->dev[d].rdev.sb_page || 401 + !test_bit(In_sync, &rs->dev[d].rdev.flags)) && 398 402 (++rebuilds_per_group >= copies)) 399 403 goto too_many; 400 - if (!((i + 1) % copies)) 401 - rebuilds_per_group = 0; 402 404 } 403 405 break; 404 406 default: 405 - DMERR("The rebuild parameter is not supported for %s", 406 - rs->raid_type->name); 407 - rs->ti->error = "Rebuild not supported for this RAID type"; 408 - return -EINVAL; 407 + if (rebuild_cnt) 408 + return -EINVAL; 409 409 } 410 410 411 411 return 0; 412 412 413 413 too_many: 414 - rs->ti->error = "Too many rebuild devices specified"; 415 414 return -EINVAL; 416 415 } 417 416 ··· 658 663 return -EINVAL; 
659 664 } 660 665 rs->md.dev_sectors = sectors_per_dev; 661 - 662 - if (validate_rebuild_devices(rs)) 663 - return -EINVAL; 664 666 665 667 /* Assume there are no metadata devices until the drives are parsed */ 666 668 rs->md.persistent = 0; ··· 987 995 static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs) 988 996 { 989 997 int ret; 990 - unsigned redundancy = 0; 991 998 struct raid_dev *dev; 992 999 struct md_rdev *rdev, *tmp, *freshest; 993 1000 struct mddev *mddev = &rs->md; 994 - 995 - switch (rs->raid_type->level) { 996 - case 1: 997 - redundancy = rs->md.raid_disks - 1; 998 - break; 999 - case 4: 1000 - case 5: 1001 - case 6: 1002 - redundancy = rs->raid_type->parity_devs; 1003 - break; 1004 - case 10: 1005 - redundancy = raid10_md_layout_to_copies(mddev->layout) - 1; 1006 - break; 1007 - default: 1008 - ti->error = "Unknown RAID type"; 1009 - return -EINVAL; 1010 - } 1011 1001 1012 1002 freshest = NULL; 1013 1003 rdev_for_each_safe(rdev, tmp, mddev) { ··· 1019 1045 break; 1020 1046 default: 1021 1047 dev = container_of(rdev, struct raid_dev, rdev); 1022 - if (redundancy--) { 1023 - if (dev->meta_dev) 1024 - dm_put_device(ti, dev->meta_dev); 1048 + if (dev->meta_dev) 1049 + dm_put_device(ti, dev->meta_dev); 1025 1050 1026 - dev->meta_dev = NULL; 1027 - rdev->meta_bdev = NULL; 1051 + dev->meta_dev = NULL; 1052 + rdev->meta_bdev = NULL; 1028 1053 1029 - if (rdev->sb_page) 1030 - put_page(rdev->sb_page); 1054 + if (rdev->sb_page) 1055 + put_page(rdev->sb_page); 1031 1056 1032 - rdev->sb_page = NULL; 1057 + rdev->sb_page = NULL; 1033 1058 1034 - rdev->sb_loaded = 0; 1059 + rdev->sb_loaded = 0; 1035 1060 1036 - /* 1037 - * We might be able to salvage the data device 1038 - * even though the meta device has failed. For 1039 - * now, we behave as though '- -' had been 1040 - * set for this device in the table. 
1041 - */ 1042 - if (dev->data_dev) 1043 - dm_put_device(ti, dev->data_dev); 1061 + /* 1062 + * We might be able to salvage the data device 1063 + * even though the meta device has failed. For 1064 + * now, we behave as though '- -' had been 1065 + * set for this device in the table. 1066 + */ 1067 + if (dev->data_dev) 1068 + dm_put_device(ti, dev->data_dev); 1044 1069 1045 - dev->data_dev = NULL; 1046 - rdev->bdev = NULL; 1070 + dev->data_dev = NULL; 1071 + rdev->bdev = NULL; 1047 1072 1048 - list_del(&rdev->same_set); 1049 - 1050 - continue; 1051 - } 1052 - ti->error = "Failed to load superblock"; 1053 - return ret; 1073 + list_del(&rdev->same_set); 1054 1074 } 1055 1075 } 1056 1076 1057 1077 if (!freshest) 1058 1078 return 0; 1079 + 1080 + if (validate_raid_redundancy(rs)) { 1081 + rs->ti->error = "Insufficient redundancy to activate array"; 1082 + return -EINVAL; 1083 + } 1059 1084 1060 1085 /* 1061 1086 * Validation of the freshest device provides the source of ··· 1405 1432 1406 1433 static struct target_type raid_target = { 1407 1434 .name = "raid", 1408 - .version = {1, 4, 0}, 1435 + .version = {1, 4, 1}, 1409 1436 .module = THIS_MODULE, 1410 1437 .ctr = raid_ctr, 1411 1438 .dtr = raid_dtr,
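The rewritten raid10 walk in `validate_raid_redundancy` checks "near" copy groups: indices are consumed `copies` at a time, the per-group failure counter is reset at each group boundary (the `if (!(i % copies))` line), and accumulating `copies` failed or rebuilding devices inside one group means an entire mirror set is gone. A self-contained model of that walk, with `failed[]` marking dead or not-in-sync slots:

```c
/* Returns 1 if some "near" copy group has lost all of its copies,
 * 0 if every group still has at least one healthy device.
 * Device for index i is i % raid_disks, matching the A A B B C /
 * C D D E E layout described in the patch. */
int raid10_near_beyond_repair(const int *failed, int raid_disks, int copies)
{
    int i, d, rebuilds_per_group = 0;

    for (i = 0; i < raid_disks * copies; i++) {
        if (!(i % copies))          /* reset at each group boundary */
            rebuilds_per_group = 0;
        d = i % raid_disks;
        if (failed[d] && ++rebuilds_per_group >= copies)
            return 1;               /* too many failures in one group */
    }
    return 0;
}
```

With 4 disks and 2 copies the groups pair devices {0,1} and {2,3}, so one failure per pair is survivable while two in the same pair are not.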
+1 -12
drivers/md/dm-thin.c
··· 2746 2746 return 0; 2747 2747 } 2748 2748 2749 - /* 2750 - * A thin device always inherits its queue limits from its pool. 2751 - */ 2752 - static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits) 2753 - { 2754 - struct thin_c *tc = ti->private; 2755 - 2756 - *limits = bdev_get_queue(tc->pool_dev->bdev)->limits; 2757 - } 2758 - 2759 2749 static struct target_type thin_target = { 2760 2750 .name = "thin", 2761 - .version = {1, 6, 0}, 2751 + .version = {1, 7, 0}, 2762 2752 .module = THIS_MODULE, 2763 2753 .ctr = thin_ctr, 2764 2754 .dtr = thin_dtr, ··· 2757 2767 .postsuspend = thin_postsuspend, 2758 2768 .status = thin_status, 2759 2769 .iterate_devices = thin_iterate_devices, 2760 - .io_hints = thin_io_hints, 2761 2770 }; 2762 2771 2763 2772 /*----------------------------------------------------------------*/
+4 -2
drivers/md/dm.c
··· 1188 1188 { 1189 1189 struct dm_target *ti; 1190 1190 sector_t len; 1191 + unsigned num_requests; 1191 1192 1192 1193 do { 1193 1194 ti = dm_table_find_target(ci->map, ci->sector); ··· 1201 1200 * reconfiguration might also have changed that since the 1202 1201 * check was performed. 1203 1202 */ 1204 - if (!get_num_requests || !get_num_requests(ti)) 1203 + num_requests = get_num_requests ? get_num_requests(ti) : 0; 1204 + if (!num_requests) 1205 1205 return -EOPNOTSUPP; 1206 1206 1207 1207 if (is_split_required && !is_split_required(ti)) ··· 1210 1208 else 1211 1209 len = min(ci->sector_count, max_io_len(ci->sector, ti)); 1212 1210 1213 - __issue_target_requests(ci, ti, ti->num_discard_requests, len); 1211 + __issue_target_requests(ci, ti, num_requests, len); 1214 1212 1215 1213 ci->sector += len; 1216 1214 } while (ci->sector_count -= len);
+1 -1
drivers/media/i2c/m5mols/m5mols_core.c
··· 556 556 mutex_lock(&info->lock); 557 557 558 558 format = __find_format(info, fh, fmt->which, info->res_type); 559 - if (!format) 559 + if (format) 560 560 fmt->format = *format; 561 561 else 562 562 ret = -EINVAL;
+1 -1
drivers/media/platform/coda.c
··· 23 23 #include <linux/slab.h> 24 24 #include <linux/videodev2.h> 25 25 #include <linux/of.h> 26 + #include <linux/platform_data/imx-iram.h> 26 27 27 - #include <mach/iram.h> 28 28 #include <media/v4l2-ctrls.h> 29 29 #include <media/v4l2-device.h> 30 30 #include <media/v4l2-ioctl.h>
-3
drivers/media/platform/omap3isp/ispvideo.c
··· 35 35 #include <linux/vmalloc.h> 36 36 #include <media/v4l2-dev.h> 37 37 #include <media/v4l2-ioctl.h> 38 - #include <plat/iommu.h> 39 - #include <plat/iovmm.h> 40 - #include <plat/omap-pm.h> 41 38 42 39 #include "ispvideo.h" 43 40 #include "isp.h"
+1 -1
drivers/media/platform/s5p-fimc/fimc-mdevice.c
··· 593 593 { 594 594 struct media_entity *source, *sink; 595 595 unsigned int flags = MEDIA_LNK_FL_ENABLED; 596 - int i, ret; 596 + int i, ret = 0; 597 597 598 598 for (i = 0; i < FIMC_LITE_MAX_DEVS; i++) { 599 599 struct fimc_lite *fimc = fmd->fimc_lite[i];
+37 -51
drivers/media/platform/s5p-mfc/s5p_mfc.c
··· 412 412 } 413 413 414 414 /* Error handling for interrupt */ 415 - static void s5p_mfc_handle_error(struct s5p_mfc_ctx *ctx, 416 - unsigned int reason, unsigned int err) 415 + static void s5p_mfc_handle_error(struct s5p_mfc_dev *dev, 416 + struct s5p_mfc_ctx *ctx, unsigned int reason, unsigned int err) 417 417 { 418 - struct s5p_mfc_dev *dev; 419 418 unsigned long flags; 420 419 421 - /* If no context is available then all necessary 422 - * processing has been done. */ 423 - if (ctx == NULL) 424 - return; 425 - 426 - dev = ctx->dev; 427 420 mfc_err("Interrupt Error: %08x\n", err); 428 - s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev); 429 - wake_up_dev(dev, reason, err); 430 421 431 - /* Error recovery is dependent on the state of context */ 432 - switch (ctx->state) { 433 - case MFCINST_INIT: 434 - /* This error had to happen while acquireing instance */ 435 - case MFCINST_GOT_INST: 436 - /* This error had to happen while parsing the header */ 437 - case MFCINST_HEAD_PARSED: 438 - /* This error had to happen while setting dst buffers */ 439 - case MFCINST_RETURN_INST: 440 - /* This error had to happen while releasing instance */ 441 - clear_work_bit(ctx); 442 - wake_up_ctx(ctx, reason, err); 443 - if (test_and_clear_bit(0, &dev->hw_lock) == 0) 444 - BUG(); 445 - s5p_mfc_clock_off(); 446 - ctx->state = MFCINST_ERROR; 447 - break; 448 - case MFCINST_FINISHING: 449 - case MFCINST_FINISHED: 450 - case MFCINST_RUNNING: 451 - /* It is higly probable that an error occured 452 - * while decoding a frame */ 453 - clear_work_bit(ctx); 454 - ctx->state = MFCINST_ERROR; 455 - /* Mark all dst buffers as having an error */ 456 - spin_lock_irqsave(&dev->irqlock, flags); 457 - s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, &ctx->dst_queue, 458 - &ctx->vq_dst); 459 - /* Mark all src buffers as having an error */ 460 - s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, &ctx->src_queue, 461 - &ctx->vq_src); 462 - spin_unlock_irqrestore(&dev->irqlock, flags); 463 - if 
(test_and_clear_bit(0, &dev->hw_lock) == 0) 464 - BUG(); 465 - s5p_mfc_clock_off(); 466 - break; 467 - default: 468 - mfc_err("Encountered an error interrupt which had not been handled\n"); 469 - break; 422 + if (ctx != NULL) { 423 + /* Error recovery is dependent on the state of context */ 424 + switch (ctx->state) { 425 + case MFCINST_RES_CHANGE_INIT: 426 + case MFCINST_RES_CHANGE_FLUSH: 427 + case MFCINST_RES_CHANGE_END: 428 + case MFCINST_FINISHING: 429 + case MFCINST_FINISHED: 430 + case MFCINST_RUNNING: 431 + /* It is higly probable that an error occured 432 + * while decoding a frame */ 433 + clear_work_bit(ctx); 434 + ctx->state = MFCINST_ERROR; 435 + /* Mark all dst buffers as having an error */ 436 + spin_lock_irqsave(&dev->irqlock, flags); 437 + s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, 438 + &ctx->dst_queue, &ctx->vq_dst); 439 + /* Mark all src buffers as having an error */ 440 + s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, 441 + &ctx->src_queue, &ctx->vq_src); 442 + spin_unlock_irqrestore(&dev->irqlock, flags); 443 + wake_up_ctx(ctx, reason, err); 444 + break; 445 + default: 446 + clear_work_bit(ctx); 447 + ctx->state = MFCINST_ERROR; 448 + wake_up_ctx(ctx, reason, err); 449 + break; 450 + } 470 451 } 452 + if (test_and_clear_bit(0, &dev->hw_lock) == 0) 453 + BUG(); 454 + s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev); 455 + s5p_mfc_clock_off(); 456 + wake_up_dev(dev, reason, err); 471 457 return; 472 458 } 473 459 ··· 618 632 dev->warn_start) 619 633 s5p_mfc_handle_frame(ctx, reason, err); 620 634 else 621 - s5p_mfc_handle_error(ctx, reason, err); 635 + s5p_mfc_handle_error(dev, ctx, reason, err); 622 636 clear_bit(0, &dev->enter_suspend); 623 637 break; 624 638
+1
drivers/media/usb/gspca/kinect.c
··· 381 381 /* -- module initialisation -- */ 382 382 static const struct usb_device_id device_table[] = { 383 383 {USB_DEVICE(0x045e, 0x02ae)}, 384 + {USB_DEVICE(0x045e, 0x02bf)}, 384 385 {} 385 386 }; 386 387
+8 -5
drivers/media/usb/gspca/sonixb.c
··· 496 496 } 497 497 } 498 498 499 - static void i2c_w(struct gspca_dev *gspca_dev, const __u8 *buffer) 499 + static void i2c_w(struct gspca_dev *gspca_dev, const u8 *buf) 500 500 { 501 501 int retry = 60; 502 502 ··· 504 504 return; 505 505 506 506 /* is i2c ready */ 507 - reg_w(gspca_dev, 0x08, buffer, 8); 507 + reg_w(gspca_dev, 0x08, buf, 8); 508 508 while (retry--) { 509 509 if (gspca_dev->usb_err < 0) 510 510 return; 511 - msleep(10); 511 + msleep(1); 512 512 reg_r(gspca_dev, 0x08); 513 513 if (gspca_dev->usb_buf[0] & 0x04) { 514 514 if (gspca_dev->usb_buf[0] & 0x08) { 515 515 dev_err(gspca_dev->v4l2_dev.dev, 516 - "i2c write error\n"); 516 + "i2c error writing %02x %02x %02x %02x" 517 + " %02x %02x %02x %02x\n", 518 + buf[0], buf[1], buf[2], buf[3], 519 + buf[4], buf[5], buf[6], buf[7]); 517 520 gspca_dev->usb_err = -EIO; 518 521 } 519 522 return; ··· 533 530 for (;;) { 534 531 if (gspca_dev->usb_err < 0) 535 532 return; 536 - reg_w(gspca_dev, 0x08, *buffer, 8); 533 + i2c_w(gspca_dev, *buffer); 537 534 len -= 8; 538 535 if (len <= 0) 539 536 break;
+1
drivers/media/usb/gspca/sonixj.c
··· 1550 1550 0, 1551 1551 gspca_dev->usb_buf, 8, 1552 1552 500); 1553 + msleep(2); 1553 1554 if (ret < 0) { 1554 1555 pr_err("i2c_w1 err %d\n", ret); 1555 1556 gspca_dev->usb_err = ret;
+3 -1
drivers/media/usb/uvc/uvc_ctrl.c
··· 1431 1431 int ret; 1432 1432 1433 1433 ctrl = uvc_find_control(chain, xctrl->id, &mapping); 1434 - if (ctrl == NULL || (ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR) == 0) 1434 + if (ctrl == NULL) 1435 1435 return -EINVAL; 1436 + if (!(ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR)) 1437 + return -EACCES; 1436 1438 1437 1439 /* Clamp out of range values. */ 1438 1440 switch (mapping->v4l2_type) {
+2 -4
drivers/media/usb/uvc/uvc_v4l2.c
··· 657 657 ret = uvc_ctrl_get(chain, ctrl); 658 658 if (ret < 0) { 659 659 uvc_ctrl_rollback(handle); 660 - ctrls->error_idx = ret == -ENOENT 661 - ? ctrls->count : i; 660 + ctrls->error_idx = i; 662 661 return ret; 663 662 } 664 663 } ··· 685 686 ret = uvc_ctrl_set(chain, ctrl); 686 687 if (ret < 0) { 687 688 uvc_ctrl_rollback(handle); 688 - ctrls->error_idx = (ret == -ENOENT && 689 - cmd == VIDIOC_S_EXT_CTRLS) 689 + ctrls->error_idx = cmd == VIDIOC_S_EXT_CTRLS 690 690 ? ctrls->count : i; 691 691 return ret; 692 692 }
+3 -1
drivers/media/v4l2-core/videobuf2-core.c
··· 921 921 * In videobuf we use our internal V4l2_planes struct for 922 922 * single-planar buffers as well, for simplicity. 923 923 */ 924 - if (V4L2_TYPE_IS_OUTPUT(b->type)) 924 + if (V4L2_TYPE_IS_OUTPUT(b->type)) { 925 925 v4l2_planes[0].bytesused = b->bytesused; 926 + v4l2_planes[0].data_offset = 0; 927 + } 926 928 927 929 if (b->memory == V4L2_MEMORY_USERPTR) { 928 930 v4l2_planes[0].m.userptr = b->m.userptr;
+1
drivers/mfd/Kconfig
··· 237 237 depends on I2C=y && GPIOLIB 238 238 select MFD_CORE 239 239 select REGMAP_I2C 240 + select REGMAP_IRQ 240 241 select IRQ_DOMAIN 241 242 help 242 243 if you say yes here you get support for the TPS65910 series of
+1
drivers/mfd/ab8500-core.c
··· 19 19 #include <linux/mfd/core.h> 20 20 #include <linux/mfd/abx500.h> 21 21 #include <linux/mfd/abx500/ab8500.h> 22 + #include <linux/mfd/abx500/ab8500-bm.h> 22 23 #include <linux/mfd/dbx500-prcmu.h> 23 24 #include <linux/regulator/ab8500.h> 24 25 #include <linux/of.h>
+6 -1
drivers/mfd/arizona-core.c
··· 239 239 return ret; 240 240 } 241 241 242 - regcache_sync(arizona->regmap); 242 + ret = regcache_sync(arizona->regmap); 243 + if (ret != 0) { 244 + dev_err(arizona->dev, "Failed to restore register cache\n"); 245 + regulator_disable(arizona->dcvdd); 246 + return ret; 247 + } 243 248 244 249 return 0; 245 250 }
+2 -16
drivers/mfd/arizona-irq.c
··· 176 176 aod = &wm5102_aod; 177 177 irq = &wm5102_irq; 178 178 179 - switch (arizona->rev) { 180 - case 0: 181 - case 1: 182 - ctrlif_error = false; 183 - break; 184 - default: 185 - break; 186 - } 179 + ctrlif_error = false; 187 180 break; 188 181 #endif 189 182 #ifdef CONFIG_MFD_WM5110 ··· 184 191 aod = &wm5110_aod; 185 192 irq = &wm5110_irq; 186 193 187 - switch (arizona->rev) { 188 - case 0: 189 - case 1: 190 - ctrlif_error = false; 191 - break; 192 - default: 193 - break; 194 - } 194 + ctrlif_error = false; 195 195 break; 196 196 #endif 197 197 default:
+61
drivers/mfd/da9052-i2c.c
··· 27 27 #include <linux/of_device.h> 28 28 #endif 29 29 30 + /* I2C safe register check */ 31 + static inline bool i2c_safe_reg(unsigned char reg) 32 + { 33 + switch (reg) { 34 + case DA9052_STATUS_A_REG: 35 + case DA9052_STATUS_B_REG: 36 + case DA9052_STATUS_C_REG: 37 + case DA9052_STATUS_D_REG: 38 + case DA9052_ADC_RES_L_REG: 39 + case DA9052_ADC_RES_H_REG: 40 + case DA9052_VDD_RES_REG: 41 + case DA9052_ICHG_AV_REG: 42 + case DA9052_TBAT_RES_REG: 43 + case DA9052_ADCIN4_RES_REG: 44 + case DA9052_ADCIN5_RES_REG: 45 + case DA9052_ADCIN6_RES_REG: 46 + case DA9052_TJUNC_RES_REG: 47 + case DA9052_TSI_X_MSB_REG: 48 + case DA9052_TSI_Y_MSB_REG: 49 + case DA9052_TSI_LSB_REG: 50 + case DA9052_TSI_Z_MSB_REG: 51 + return true; 52 + default: 53 + return false; 54 + } 55 + } 56 + 57 + /* 58 + * There is an issue with DA9052 and DA9053_AA/BA/BB PMIC where the PMIC 59 + * gets lockup up or fails to respond following a system reset. 60 + * This fix is to follow any read or write with a dummy read to a safe 61 + * register. 62 + */ 63 + int da9052_i2c_fix(struct da9052 *da9052, unsigned char reg) 64 + { 65 + int val; 66 + 67 + switch (da9052->chip_id) { 68 + case DA9052: 69 + case DA9053_AA: 70 + case DA9053_BA: 71 + case DA9053_BB: 72 + /* A dummy read to a safe register address. */ 73 + if (!i2c_safe_reg(reg)) 74 + return regmap_read(da9052->regmap, 75 + DA9052_PARK_REGISTER, 76 + &val); 77 + break; 78 + default: 79 + /* 80 + * For other chips parking of I2C register 81 + * to a safe place is not required. 82 + */ 83 + break; 84 + } 85 + 86 + return 0; 87 + } 88 + EXPORT_SYMBOL(da9052_i2c_fix); 89 + 30 90 static int da9052_i2c_enable_multiwrite(struct da9052 *da9052) 31 91 { 32 92 int reg_val, ret; ··· 143 83 144 84 da9052->dev = &client->dev; 145 85 da9052->chip_irq = client->irq; 86 + da9052->fix_io = da9052_i2c_fix; 146 87 147 88 i2c_set_clientdata(client, da9052); 148 89
+9 -4
drivers/mfd/db8500-prcmu.c
··· 2524 2524 2525 2525 for (n = 0; n < NUM_PRCMU_WAKEUPS; n++) { 2526 2526 if (ev & prcmu_irq_bit[n]) 2527 - generic_handle_irq(IRQ_PRCMU_BASE + n); 2527 + generic_handle_irq(irq_find_mapping(db8500_irq_domain, n)); 2528 2528 } 2529 2529 r = true; 2530 2530 break; ··· 2737 2737 } 2738 2738 2739 2739 static struct irq_domain_ops db8500_irq_ops = { 2740 - .map = db8500_irq_map, 2741 - .xlate = irq_domain_xlate_twocell, 2740 + .map = db8500_irq_map, 2741 + .xlate = irq_domain_xlate_twocell, 2742 2742 }; 2743 2743 2744 2744 static int db8500_irq_init(struct device_node *np) 2745 2745 { 2746 - int irq_base = -1; 2746 + int irq_base = 0; 2747 + int i; 2747 2748 2748 2749 /* In the device tree case, just take some IRQs */ 2749 2750 if (!np) ··· 2758 2757 pr_err("Failed to create irqdomain\n"); 2759 2758 return -ENOSYS; 2760 2759 } 2760 + 2761 + /* All wakeups will be used, so create mappings for all */ 2762 + for (i = 0; i < NUM_PRCMU_WAKEUPS; i++) 2763 + irq_create_mapping(db8500_irq_domain, i); 2761 2764 2762 2765 return 0; 2763 2766 }
+9 -9
drivers/mfd/max77686.c
··· 93 93 if (max77686 == NULL) 94 94 return -ENOMEM; 95 95 96 - max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config); 97 - if (IS_ERR(max77686->regmap)) { 98 - ret = PTR_ERR(max77686->regmap); 99 - dev_err(max77686->dev, "Failed to allocate register map: %d\n", 100 - ret); 101 - kfree(max77686); 102 - return ret; 103 - } 104 - 105 96 i2c_set_clientdata(i2c, max77686); 106 97 max77686->dev = &i2c->dev; 107 98 max77686->i2c = i2c; ··· 101 110 max77686->wakeup = pdata->wakeup; 102 111 max77686->irq_gpio = pdata->irq_gpio; 103 112 max77686->irq = i2c->irq; 113 + 114 + max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config); 115 + if (IS_ERR(max77686->regmap)) { 116 + ret = PTR_ERR(max77686->regmap); 117 + dev_err(max77686->dev, "Failed to allocate register map: %d\n", 118 + ret); 119 + kfree(max77686); 120 + return ret; 121 + } 104 122 105 123 if (regmap_read(max77686->regmap, 106 124 MAX77686_REG_DEVICE_ID, &data) < 0) {
+18 -16
drivers/mfd/max77693.c
··· 114 114 u8 reg_data; 115 115 int ret = 0; 116 116 117 + if (!pdata) { 118 + dev_err(&i2c->dev, "No platform data found.\n"); 119 + return -EINVAL; 120 + } 121 + 117 122 max77693 = devm_kzalloc(&i2c->dev, 118 123 sizeof(struct max77693_dev), GFP_KERNEL); 119 124 if (max77693 == NULL) 120 125 return -ENOMEM; 121 - 122 - max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config); 123 - if (IS_ERR(max77693->regmap)) { 124 - ret = PTR_ERR(max77693->regmap); 125 - dev_err(max77693->dev,"failed to allocate register map: %d\n", 126 - ret); 127 - goto err_regmap; 128 - } 129 126 130 127 i2c_set_clientdata(i2c, max77693); 131 128 max77693->dev = &i2c->dev; ··· 130 133 max77693->irq = i2c->irq; 131 134 max77693->type = id->driver_data; 132 135 133 - if (!pdata) 134 - goto err_regmap; 136 + max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config); 137 + if (IS_ERR(max77693->regmap)) { 138 + ret = PTR_ERR(max77693->regmap); 139 + dev_err(max77693->dev, "failed to allocate register map: %d\n", 140 + ret); 141 + return ret; 142 + } 135 143 136 144 max77693->wakeup = pdata->wakeup; 137 145 138 - if (max77693_read_reg(max77693->regmap, 139 - MAX77693_PMIC_REG_PMIC_ID2, &reg_data) < 0) { 146 + ret = max77693_read_reg(max77693->regmap, MAX77693_PMIC_REG_PMIC_ID2, 147 + &reg_data); 148 + if (ret < 0) { 140 149 dev_err(max77693->dev, "device not found on this channel\n"); 141 - ret = -ENODEV; 142 - goto err_regmap; 150 + return ret; 143 151 } else 144 152 dev_info(max77693->dev, "device ID: 0x%x\n", reg_data); 145 153 ··· 165 163 ret = PTR_ERR(max77693->regmap_muic); 166 164 dev_err(max77693->dev, 167 165 "failed to allocate register map: %d\n", ret); 168 - goto err_regmap; 166 + goto err_regmap_muic; 169 167 } 170 168 171 169 ret = max77693_irq_init(max77693); ··· 186 184 err_mfd: 187 185 max77693_irq_exit(max77693); 188 186 err_irq: 187 + err_regmap_muic: 189 188 i2c_unregister_device(max77693->muic); 190 189 i2c_unregister_device(max77693->haptic); 191 - 
err_regmap: 192 190 return ret; 193 191 } 194 192
+2 -3
drivers/mfd/pcf50633-core.c
··· 208 208 if (!pcf) 209 209 return -ENOMEM; 210 210 211 + i2c_set_clientdata(client, pcf); 212 + pcf->dev = &client->dev; 211 213 pcf->pdata = pdata; 212 214 213 215 mutex_init(&pcf->lock); ··· 220 218 dev_err(pcf->dev, "Failed to allocate register map: %d\n", ret); 221 219 return ret; 222 220 } 223 - 224 - i2c_set_clientdata(client, pcf); 225 - pcf->dev = &client->dev; 226 221 227 222 version = pcf50633_reg_read(pcf, 0); 228 223 variant = pcf50633_reg_read(pcf, 1);
+29
drivers/mfd/rtl8411.c
··· 112 112 BPP_LDO_POWB, BPP_LDO_SUSPEND); 113 113 } 114 114 115 + static int rtl8411_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage) 116 + { 117 + u8 mask, val; 118 + 119 + mask = (BPP_REG_TUNED18 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_MASK; 120 + if (voltage == OUTPUT_3V3) 121 + val = (BPP_ASIC_3V3 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_3V3; 122 + else if (voltage == OUTPUT_1V8) 123 + val = (BPP_ASIC_1V8 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_1V8; 124 + else 125 + return -EINVAL; 126 + 127 + return rtsx_pci_write_register(pcr, LDO_CTL, mask, val); 128 + } 129 + 115 130 static unsigned int rtl8411_cd_deglitch(struct rtsx_pcr *pcr) 116 131 { 117 132 unsigned int card_exist; ··· 178 163 return card_exist; 179 164 } 180 165 166 + static int rtl8411_conv_clk_and_div_n(int input, int dir) 167 + { 168 + int output; 169 + 170 + if (dir == CLK_TO_DIV_N) 171 + output = input * 4 / 5 - 2; 172 + else 173 + output = (input + 2) * 5 / 4; 174 + 175 + return output; 176 + } 177 + 181 178 static const struct pcr_ops rtl8411_pcr_ops = { 182 179 .extra_init_hw = rtl8411_extra_init_hw, 183 180 .optimize_phy = NULL, ··· 199 172 .disable_auto_blink = rtl8411_disable_auto_blink, 200 173 .card_power_on = rtl8411_card_power_on, 201 174 .card_power_off = rtl8411_card_power_off, 175 + .switch_output_voltage = rtl8411_switch_output_voltage, 202 176 .cd_deglitch = rtl8411_cd_deglitch, 177 + .conv_clk_and_div_n = rtl8411_conv_clk_and_div_n, 203 178 }; 204 179 205 180 /* SD Pull Control Enable:
+21
drivers/mfd/rts5209.c
··· 144 144 return rtsx_pci_send_cmd(pcr, 100); 145 145 } 146 146 147 + static int rts5209_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage) 148 + { 149 + int err; 150 + 151 + if (voltage == OUTPUT_3V3) { 152 + err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24); 153 + if (err < 0) 154 + return err; 155 + } else if (voltage == OUTPUT_1V8) { 156 + err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24); 157 + if (err < 0) 158 + return err; 159 + } else { 160 + return -EINVAL; 161 + } 162 + 163 + return 0; 164 + } 165 + 147 166 static const struct pcr_ops rts5209_pcr_ops = { 148 167 .extra_init_hw = rts5209_extra_init_hw, 149 168 .optimize_phy = rts5209_optimize_phy, ··· 172 153 .disable_auto_blink = rts5209_disable_auto_blink, 173 154 .card_power_on = rts5209_card_power_on, 174 155 .card_power_off = rts5209_card_power_off, 156 + .switch_output_voltage = rts5209_switch_output_voltage, 175 157 .cd_deglitch = NULL, 158 + .conv_clk_and_div_n = NULL, 176 159 }; 177 160 178 161 /* SD Pull Control Enable:
+21
drivers/mfd/rts5229.c
··· 114 114 return rtsx_pci_send_cmd(pcr, 100); 115 115 } 116 116 117 + static int rts5229_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage) 118 + { 119 + int err; 120 + 121 + if (voltage == OUTPUT_3V3) { 122 + err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24); 123 + if (err < 0) 124 + return err; 125 + } else if (voltage == OUTPUT_1V8) { 126 + err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24); 127 + if (err < 0) 128 + return err; 129 + } else { 130 + return -EINVAL; 131 + } 132 + 133 + return 0; 134 + } 135 + 117 136 static const struct pcr_ops rts5229_pcr_ops = { 118 137 .extra_init_hw = rts5229_extra_init_hw, 119 138 .optimize_phy = rts5229_optimize_phy, ··· 142 123 .disable_auto_blink = rts5229_disable_auto_blink, 143 124 .card_power_on = rts5229_card_power_on, 144 125 .card_power_off = rts5229_card_power_off, 126 + .switch_output_voltage = rts5229_switch_output_voltage, 145 127 .cd_deglitch = NULL, 128 + .conv_clk_and_div_n = NULL, 146 129 }; 147 130 148 131 /* SD Pull Control Enable:
+23 -4
drivers/mfd/rtsx_pcr.c
··· 630 630 if (clk == pcr->cur_clock) 631 631 return 0; 632 632 633 - N = (u8)(clk - 2); 633 + if (pcr->ops->conv_clk_and_div_n) 634 + N = (u8)pcr->ops->conv_clk_and_div_n(clk, CLK_TO_DIV_N); 635 + else 636 + N = (u8)(clk - 2); 634 637 if ((clk <= 2) || (N > max_N)) 635 638 return -EINVAL; 636 639 ··· 644 641 /* Make sure that the SSC clock div_n is equal or greater than min_N */ 645 642 div = CLK_DIV_1; 646 643 while ((N < min_N) && (div < max_div)) { 647 - N = (N + 2) * 2 - 2; 644 + if (pcr->ops->conv_clk_and_div_n) { 645 + int dbl_clk = pcr->ops->conv_clk_and_div_n(N, 646 + DIV_N_TO_CLK) * 2; 647 + N = (u8)pcr->ops->conv_clk_and_div_n(dbl_clk, 648 + CLK_TO_DIV_N); 649 + } else { 650 + N = (N + 2) * 2 - 2; 651 + } 648 652 div++; 649 653 } 650 654 dev_dbg(&(pcr->pci->dev), "N = %d, div = %d\n", N, div); ··· 712 702 return 0; 713 703 } 714 704 EXPORT_SYMBOL_GPL(rtsx_pci_card_power_off); 705 + 706 + int rtsx_pci_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage) 707 + { 708 + if (pcr->ops->switch_output_voltage) 709 + return pcr->ops->switch_output_voltage(pcr, voltage); 710 + 711 + return 0; 712 + } 713 + EXPORT_SYMBOL_GPL(rtsx_pci_switch_output_voltage); 715 714 716 715 unsigned int rtsx_pci_card_exist(struct rtsx_pcr *pcr) 717 716 { ··· 786 767 787 768 spin_unlock_irqrestore(&pcr->lock, flags); 788 769 789 - if (card_detect & SD_EXIST) 770 + if ((card_detect & SD_EXIST) && pcr->slots[RTSX_SD_CARD].card_event) 790 771 pcr->slots[RTSX_SD_CARD].card_event( 791 772 pcr->slots[RTSX_SD_CARD].p_dev); 792 - if (card_detect & MS_EXIST) 773 + if ((card_detect & MS_EXIST) && pcr->slots[RTSX_MS_CARD].card_event) 793 774 pcr->slots[RTSX_MS_CARD].card_event( 794 775 pcr->slots[RTSX_MS_CARD].p_dev); 795 776 }
+5 -12
drivers/mfd/tc3589x.c
··· 219 219 } 220 220 221 221 static struct irq_domain_ops tc3589x_irq_ops = { 222 - .map = tc3589x_irq_map, 222 + .map = tc3589x_irq_map, 223 223 .unmap = tc3589x_irq_unmap, 224 - .xlate = irq_domain_xlate_twocell, 224 + .xlate = irq_domain_xlate_twocell, 225 225 }; 226 226 227 227 static int tc3589x_irq_init(struct tc3589x *tc3589x, struct device_node *np) 228 228 { 229 229 int base = tc3589x->irq_base; 230 230 231 - if (base) { 232 - tc3589x->domain = irq_domain_add_legacy( 233 - NULL, TC3589x_NR_INTERNAL_IRQS, base, 234 - 0, &tc3589x_irq_ops, tc3589x); 235 - } 236 - else { 237 - tc3589x->domain = irq_domain_add_linear( 238 - np, TC3589x_NR_INTERNAL_IRQS, 239 - &tc3589x_irq_ops, tc3589x); 240 - } 231 + tc3589x->domain = irq_domain_add_simple( 232 + np, TC3589x_NR_INTERNAL_IRQS, base, 233 + &tc3589x_irq_ops, tc3589x); 241 234 242 235 if (!tc3589x->domain) { 243 236 dev_err(tc3589x->dev, "Failed to create irqdomain\n");
+1 -1
drivers/mfd/twl4030-power.c
··· 159 159 static int twl4030_write_script(u8 address, struct twl4030_ins *script, 160 160 int len) 161 161 { 162 - int err; 162 + int err = -EINVAL; 163 163 164 164 for (; len; len--, address++, script++) { 165 165 if (len == 1) {
+6 -2
drivers/mfd/vexpress-config.c
··· 67 67 68 68 return bridge; 69 69 } 70 + EXPORT_SYMBOL(vexpress_config_bridge_register); 70 71 71 72 void vexpress_config_bridge_unregister(struct vexpress_config_bridge *bridge) 72 73 { ··· 84 83 while (!list_empty(&__bridge.transactions)) 85 84 cpu_relax(); 86 85 } 86 + EXPORT_SYMBOL(vexpress_config_bridge_unregister); 87 87 88 88 89 89 struct vexpress_config_func { ··· 144 142 145 143 return func; 146 144 } 145 + EXPORT_SYMBOL(__vexpress_config_func_get); 147 146 148 147 void vexpress_config_func_put(struct vexpress_config_func *func) 149 148 { ··· 152 149 of_node_put(func->bridge->node); 153 150 kfree(func); 154 151 } 155 - 152 + EXPORT_SYMBOL(vexpress_config_func_put); 156 153 157 154 struct vexpress_config_trans { 158 155 struct vexpress_config_func *func; ··· 232 229 233 230 complete(&trans->completion); 234 231 } 232 + EXPORT_SYMBOL(vexpress_config_complete); 235 233 236 234 int vexpress_config_wait(struct vexpress_config_trans *trans) 237 235 { ··· 240 236 241 237 return trans->status; 242 238 } 243 - 239 + EXPORT_SYMBOL(vexpress_config_wait); 244 240 245 241 int vexpress_config_read(struct vexpress_config_func *func, int offset, 246 242 u32 *data)
+20 -12
drivers/mfd/vexpress-sysreg.c
··· 313 313 } 314 314 315 315 316 - void __init vexpress_sysreg_early_init(void __iomem *base) 316 + void __init vexpress_sysreg_setup(struct device_node *node) 317 317 { 318 - struct device_node *node = of_find_compatible_node(NULL, NULL, 319 - "arm,vexpress-sysreg"); 320 - 321 - if (node) 322 - base = of_iomap(node, 0); 323 - 324 - if (WARN_ON(!base)) 318 + if (WARN_ON(!vexpress_sysreg_base)) 325 319 return; 326 - 327 - vexpress_sysreg_base = base; 328 320 329 321 if (readl(vexpress_sysreg_base + SYS_MISC) & SYS_MISC_MASTERSITE) 330 322 vexpress_master_site = VEXPRESS_SITE_DB2; ··· 328 336 WARN_ON(!vexpress_sysreg_config_bridge); 329 337 } 330 338 339 + void __init vexpress_sysreg_early_init(void __iomem *base) 340 + { 341 + vexpress_sysreg_base = base; 342 + vexpress_sysreg_setup(NULL); 343 + } 344 + 331 345 void __init vexpress_sysreg_of_early_init(void) 332 346 { 333 - vexpress_sysreg_early_init(NULL); 347 + struct device_node *node = of_find_compatible_node(NULL, NULL, 348 + "arm,vexpress-sysreg"); 349 + 350 + if (node) { 351 + vexpress_sysreg_base = of_iomap(node, 0); 352 + vexpress_sysreg_setup(node); 353 + } else { 354 + pr_info("vexpress-sysreg: No Device Tree node found."); 355 + } 334 356 } 335 357 336 358 ··· 432 426 return -EBUSY; 433 427 } 434 428 435 - if (!vexpress_sysreg_base) 429 + if (!vexpress_sysreg_base) { 436 430 vexpress_sysreg_base = devm_ioremap(&pdev->dev, res->start, 437 431 resource_size(res)); 432 + vexpress_sysreg_setup(pdev->dev.of_node); 433 + } 438 434 439 435 if (!vexpress_sysreg_base) { 440 436 dev_err(&pdev->dev, "Failed to obtain base address!\n");
+1 -1
drivers/mfd/wm5102-tables.c
··· 1882 1882 } 1883 1883 } 1884 1884 1885 - #define WM5102_MAX_REGISTER 0x1a8fff 1885 + #define WM5102_MAX_REGISTER 0x1a9800 1886 1886 1887 1887 const struct regmap_config wm5102_spi_regmap = { 1888 1888 .reg_bits = 32,
+36 -1
drivers/misc/ti-st/st_kim.c
··· 468 468 if (pdata->chip_enable) 469 469 pdata->chip_enable(kim_gdata); 470 470 471 + /* Configure BT nShutdown to HIGH state */ 472 + gpio_set_value(kim_gdata->nshutdown, GPIO_LOW); 473 + mdelay(5); /* FIXME: a proper toggle */ 474 + gpio_set_value(kim_gdata->nshutdown, GPIO_HIGH); 475 + mdelay(100); 471 476 /* re-initialize the completion */ 472 477 INIT_COMPLETION(kim_gdata->ldisc_installed); 473 478 /* send notification to UIM */ ··· 514 509 * (b) upon failure to either install ldisc or download firmware. 515 510 * The function is responsible to (a) notify UIM about un-installation, 516 511 * (b) flush UART if the ldisc was installed. 517 - * (c) invoke platform's chip disabling routine. 512 + * (c) reset BT_EN - pull down nshutdown at the end. 513 + * (d) invoke platform's chip disabling routine. 518 514 */ 519 515 long st_kim_stop(void *kim_data) 520 516 { ··· 546 540 pr_err(" timed out waiting for ldisc to be un-installed"); 547 541 err = -ETIMEDOUT; 548 542 } 543 + 544 + /* By default configure BT nShutdown to LOW state */ 545 + gpio_set_value(kim_gdata->nshutdown, GPIO_LOW); 546 + mdelay(1); 547 + gpio_set_value(kim_gdata->nshutdown, GPIO_HIGH); 548 + mdelay(1); 549 + gpio_set_value(kim_gdata->nshutdown, GPIO_LOW); 549 550 550 551 /* platform specific disable */ 551 552 if (pdata->chip_disable) ··· 746 733 /* refer to itself */ 747 734 kim_gdata->core_data->kim_data = kim_gdata; 748 735 736 + /* Claim the chip enable nShutdown gpio from the system */ 737 + kim_gdata->nshutdown = pdata->nshutdown_gpio; 738 + err = gpio_request(kim_gdata->nshutdown, "kim"); 739 + if (unlikely(err)) { 740 + pr_err(" gpio %ld request failed ", kim_gdata->nshutdown); 741 + return err; 742 + } 743 + 744 + /* Configure nShutdown GPIO as output=0 */ 745 + err = gpio_direction_output(kim_gdata->nshutdown, 0); 746 + if (unlikely(err)) { 747 + pr_err(" unable to configure gpio %ld", kim_gdata->nshutdown); 748 + return err; 749 + } 749 750 /* get reference of pdev for 
request_firmware 750 751 */ 751 752 kim_gdata->kim_pdev = pdev; ··· 806 779 807 780 static int kim_remove(struct platform_device *pdev) 808 781 { 782 + /* free the GPIOs requested */ 783 + struct ti_st_plat_data *pdata = pdev->dev.platform_data; 809 784 struct kim_data_s *kim_gdata; 810 785 811 786 kim_gdata = dev_get_drvdata(&pdev->dev); 787 + 788 + /* Free the Bluetooth/FM/GPIO 789 + * nShutdown gpio from the system 790 + */ 791 + gpio_free(pdata->nshutdown_gpio); 792 + pr_info("nshutdown GPIO Freed"); 812 793 813 794 debugfs_remove_recursive(kim_debugfs_dir); 814 795 sysfs_remove_group(&pdev->dev.kobj, &uim_attr_grp);
+30 -62
drivers/mmc/host/mvsdio.c
··· 50 50 struct timer_list timer; 51 51 struct mmc_host *mmc; 52 52 struct device *dev; 53 - struct resource *res; 54 - int irq; 55 53 struct clk *clk; 56 54 int gpio_card_detect; 57 55 int gpio_write_protect; ··· 716 718 if (!r || irq < 0 || !mvsd_data) 717 719 return -ENXIO; 718 720 719 - r = request_mem_region(r->start, SZ_1K, DRIVER_NAME); 720 - if (!r) 721 - return -EBUSY; 722 - 723 721 mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev); 724 722 if (!mmc) { 725 723 ret = -ENOMEM; ··· 725 731 host = mmc_priv(mmc); 726 732 host->mmc = mmc; 727 733 host->dev = &pdev->dev; 728 - host->res = r; 729 734 host->base_clock = mvsd_data->clock / 2; 735 + host->clk = ERR_PTR(-EINVAL); 730 736 731 737 mmc->ops = &mvsd_ops; 732 738 ··· 746 752 747 753 spin_lock_init(&host->lock); 748 754 749 - host->base = ioremap(r->start, SZ_4K); 755 + host->base = devm_request_and_ioremap(&pdev->dev, r); 750 756 if (!host->base) { 751 757 ret = -ENOMEM; 752 758 goto out; ··· 759 765 760 766 mvsd_power_down(host); 761 767 762 - ret = request_irq(irq, mvsd_irq, 0, DRIVER_NAME, host); 768 + ret = devm_request_irq(&pdev->dev, irq, mvsd_irq, 0, DRIVER_NAME, host); 763 769 if (ret) { 764 770 pr_err("%s: cannot assign irq %d\n", DRIVER_NAME, irq); 765 771 goto out; 766 - } else 767 - host->irq = irq; 772 + } 768 773 769 774 /* Not all platforms can gate the clock, so it is not 770 775 an error if the clock does not exists. 
*/ 771 - host->clk = clk_get(&pdev->dev, NULL); 772 - if (!IS_ERR(host->clk)) { 776 + host->clk = devm_clk_get(&pdev->dev, NULL); 777 + if (!IS_ERR(host->clk)) 773 778 clk_prepare_enable(host->clk); 774 - } 775 779 776 780 if (mvsd_data->gpio_card_detect) { 777 - ret = gpio_request(mvsd_data->gpio_card_detect, 778 - DRIVER_NAME " cd"); 781 + ret = devm_gpio_request_one(&pdev->dev, 782 + mvsd_data->gpio_card_detect, 783 + GPIOF_IN, DRIVER_NAME " cd"); 779 784 if (ret == 0) { 780 - gpio_direction_input(mvsd_data->gpio_card_detect); 781 785 irq = gpio_to_irq(mvsd_data->gpio_card_detect); 782 - ret = request_irq(irq, mvsd_card_detect_irq, 783 - IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING, 784 - DRIVER_NAME " cd", host); 786 + ret = devm_request_irq(&pdev->dev, irq, 787 + mvsd_card_detect_irq, 788 + IRQ_TYPE_EDGE_RISING | 789 + IRQ_TYPE_EDGE_FALLING, 790 + DRIVER_NAME " cd", host); 785 791 if (ret == 0) 786 792 host->gpio_card_detect = 787 793 mvsd_data->gpio_card_detect; 788 794 else 789 - gpio_free(mvsd_data->gpio_card_detect); 795 + devm_gpio_free(&pdev->dev, 796 + mvsd_data->gpio_card_detect); 790 797 } 791 798 } 792 799 if (!host->gpio_card_detect) 793 800 mmc->caps |= MMC_CAP_NEEDS_POLL; 794 801 795 802 if (mvsd_data->gpio_write_protect) { 796 - ret = gpio_request(mvsd_data->gpio_write_protect, 797 - DRIVER_NAME " wp"); 803 + ret = devm_gpio_request_one(&pdev->dev, 804 + mvsd_data->gpio_write_protect, 805 + GPIOF_IN, DRIVER_NAME " wp"); 798 806 if (ret == 0) { 799 - gpio_direction_input(mvsd_data->gpio_write_protect); 800 807 host->gpio_write_protect = 801 808 mvsd_data->gpio_write_protect; 802 809 } ··· 819 824 return 0; 820 825 821 826 out: 822 - if (host) { 823 - if (host->irq) 824 - free_irq(host->irq, host); 825 - if (host->gpio_card_detect) { 826 - free_irq(gpio_to_irq(host->gpio_card_detect), host); 827 - gpio_free(host->gpio_card_detect); 828 - } 829 - if (host->gpio_write_protect) 830 - gpio_free(host->gpio_write_protect); 831 - if (host->base) 832 - 
iounmap(host->base); 833 - } 834 - if (r) 835 - release_resource(r); 836 - if (mmc) 837 - if (!IS_ERR_OR_NULL(host->clk)) { 827 + if (mmc) { 828 + if (!IS_ERR(host->clk)) 838 829 clk_disable_unprepare(host->clk); 839 - clk_put(host->clk); 840 - } 841 830 mmc_free_host(mmc); 831 + } 842 832 843 833 return ret; 844 834 } ··· 832 852 { 833 853 struct mmc_host *mmc = platform_get_drvdata(pdev); 834 854 835 - if (mmc) { 836 - struct mvsd_host *host = mmc_priv(mmc); 855 + struct mvsd_host *host = mmc_priv(mmc); 837 856 838 - if (host->gpio_card_detect) { 839 - free_irq(gpio_to_irq(host->gpio_card_detect), host); 840 - gpio_free(host->gpio_card_detect); 841 - } 842 - mmc_remove_host(mmc); 843 - free_irq(host->irq, host); 844 - if (host->gpio_write_protect) 845 - gpio_free(host->gpio_write_protect); 846 - del_timer_sync(&host->timer); 847 - mvsd_power_down(host); 848 - iounmap(host->base); 849 - release_resource(host->res); 857 + mmc_remove_host(mmc); 858 + del_timer_sync(&host->timer); 859 + mvsd_power_down(host); 850 860 851 - if (!IS_ERR(host->clk)) { 852 - clk_disable_unprepare(host->clk); 853 - clk_put(host->clk); 854 - } 855 - mmc_free_host(mmc); 856 - } 861 + if (!IS_ERR(host->clk)) 862 + clk_disable_unprepare(host->clk); 863 + mmc_free_host(mmc); 864 + 857 865 platform_set_drvdata(pdev, NULL); 858 866 return 0; 859 867 }
+5 -25
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 1060 1060 return 0; 1061 1061 } 1062 1062 1063 - static int sd_change_bank_voltage(struct realtek_pci_sdmmc *host, u8 voltage) 1064 - { 1065 - struct rtsx_pcr *pcr = host->pcr; 1066 - int err; 1067 - 1068 - if (voltage == SD_IO_3V3) { 1069 - err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24); 1070 - if (err < 0) 1071 - return err; 1072 - } else if (voltage == SD_IO_1V8) { 1073 - err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24); 1074 - if (err < 0) 1075 - return err; 1076 - } else { 1077 - return -EINVAL; 1078 - } 1079 - 1080 - return 0; 1081 - } 1082 - 1083 1063 static int sdmmc_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios) 1084 1064 { 1085 1065 struct realtek_pci_sdmmc *host = mmc_priv(mmc); ··· 1078 1098 rtsx_pci_start_run(pcr); 1079 1099 1080 1100 if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330) 1081 - voltage = SD_IO_3V3; 1101 + voltage = OUTPUT_3V3; 1082 1102 else 1083 - voltage = SD_IO_1V8; 1103 + voltage = OUTPUT_1V8; 1084 1104 1085 - if (voltage == SD_IO_1V8) { 1105 + if (voltage == OUTPUT_1V8) { 1086 1106 err = rtsx_pci_write_register(pcr, 1087 1107 SD30_DRIVE_SEL, 0x07, DRIVER_TYPE_B); 1088 1108 if (err < 0) ··· 1093 1113 goto out; 1094 1114 } 1095 1115 1096 - err = sd_change_bank_voltage(host, voltage); 1116 + err = rtsx_pci_switch_output_voltage(pcr, voltage); 1097 1117 if (err < 0) 1098 1118 goto out; 1099 1119 1100 - if (voltage == SD_IO_1V8) { 1120 + if (voltage == OUTPUT_1V8) { 1101 1121 err = sd_wait_voltage_stable_2(host); 1102 1122 if (err < 0) 1103 1123 goto out;
+1
drivers/mtd/devices/Kconfig
··· 272 272 tristate "M-Systems Disk-On-Chip G3" 273 273 select BCH 274 274 select BCH_CONST_PARAMS 275 + select BITREVERSE 275 276 ---help--- 276 277 This provides an MTD device driver for the M-Systems DiskOnChip 277 278 G3 devices.
+1 -1
drivers/mtd/maps/physmap_of.c
··· 170 170 resource_size_t res_size; 171 171 struct mtd_part_parser_data ppdata; 172 172 bool map_indirect; 173 - const char *mtd_name; 173 + const char *mtd_name = NULL; 174 174 175 175 match = of_match_device(of_flash_match, &dev->dev); 176 176 if (!match)
+2 -2
drivers/mtd/nand/bcm47xxnflash/ops_bcm4706.c
··· 17 17 #include "bcm47xxnflash.h" 18 18 19 19 /* Broadcom uses 1'000'000 but it seems to be too many. Tests on WNDR4500 has 20 - * shown 164 retries as maxiumum. */ 21 - #define NFLASH_READY_RETRIES 1000 20 + * shown ~1000 retries as maximum. */ 21 + #define NFLASH_READY_RETRIES 10000 22 22 23 23 #define NFLASH_SECTOR_SIZE 512 24 24
+1 -1
drivers/mtd/nand/davinci_nand.c
··· 523 523 static const struct of_device_id davinci_nand_of_match[] = { 524 524 {.compatible = "ti,davinci-nand", }, 525 525 {}, 526 - } 526 + }; 527 527 MODULE_DEVICE_TABLE(of, davinci_nand_of_match); 528 528 529 529 static struct davinci_nand_pdata
+5 -2
drivers/mtd/nand/nand_base.c
··· 2857 2857 int i; 2858 2858 int val; 2859 2859 2860 - /* ONFI need to be probed in 8 bits mode */ 2861 - WARN_ON(chip->options & NAND_BUSWIDTH_16); 2860 + /* ONFI needs to be probed in 8 bits mode, and 16 bits should be selected with NAND_BUSWIDTH_AUTO */ 2861 + if (chip->options & NAND_BUSWIDTH_16) { 2862 + pr_err("Trying ONFI probe in 16 bits mode, aborting!\n"); 2863 + return 0; 2864 + } 2862 2865 /* Try ONFI for unknown chip or LP */ 2863 2866 chip->cmdfunc(mtd, NAND_CMD_READID, 0x20, -1); 2864 2867 if (chip->read_byte(mtd) != 'O' || chip->read_byte(mtd) != 'N' ||
+2 -2
drivers/net/can/c_can/c_can.c
··· 960 960 break; 961 961 case LEC_ACK_ERROR: 962 962 netdev_dbg(dev, "ack error\n"); 963 - cf->data[2] |= (CAN_ERR_PROT_LOC_ACK | 963 + cf->data[3] |= (CAN_ERR_PROT_LOC_ACK | 964 964 CAN_ERR_PROT_LOC_ACK_DEL); 965 965 break; 966 966 case LEC_BIT1_ERROR: ··· 973 973 break; 974 974 case LEC_CRC_ERROR: 975 975 netdev_dbg(dev, "CRC error\n"); 976 - cf->data[2] |= (CAN_ERR_PROT_LOC_CRC_SEQ | 976 + cf->data[3] |= (CAN_ERR_PROT_LOC_CRC_SEQ | 977 977 CAN_ERR_PROT_LOC_CRC_DEL); 978 978 break; 979 979 default:
+1 -1
drivers/net/can/pch_can.c
··· 560 560 stats->rx_errors++; 561 561 break; 562 562 case PCH_CRC_ERR: 563 - cf->data[2] |= CAN_ERR_PROT_LOC_CRC_SEQ | 563 + cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ | 564 564 CAN_ERR_PROT_LOC_CRC_DEL; 565 565 priv->can.can_stats.bus_error++; 566 566 stats->rx_errors++;
+2 -2
drivers/net/can/ti_hecc.c
··· 746 746 } 747 747 if (err_status & HECC_CANES_CRCE) { 748 748 hecc_set_bit(priv, HECC_CANES, HECC_CANES_CRCE); 749 - cf->data[2] |= CAN_ERR_PROT_LOC_CRC_SEQ | 749 + cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ | 750 750 CAN_ERR_PROT_LOC_CRC_DEL; 751 751 } 752 752 if (err_status & HECC_CANES_ACKE) { 753 753 hecc_set_bit(priv, HECC_CANES, HECC_CANES_ACKE); 754 - cf->data[2] |= CAN_ERR_PROT_LOC_ACK | 754 + cf->data[3] |= CAN_ERR_PROT_LOC_ACK | 755 755 CAN_ERR_PROT_LOC_ACK_DEL; 756 756 } 757 757 }
+1 -1
drivers/net/ethernet/3com/3c574_cs.c
··· 432 432 netdev_info(dev, "%s at io %#3lx, irq %d, hw_addr %pM\n", 433 433 cardname, dev->base_addr, dev->irq, dev->dev_addr); 434 434 netdev_info(dev, " %dK FIFO split %s Rx:Tx, %sMII interface.\n", 435 - 8 << config & Ram_size, 435 + 8 << (config & Ram_size), 436 436 ram_split[(config & Ram_split) >> Ram_split_shift], 437 437 config & Autoselect ? "autoselect " : ""); 438 438
+39 -23
drivers/net/ethernet/broadcom/tg3.c
··· 1283 1283 return tg3_writephy(tp, MII_TG3_AUX_CTRL, set | reg); 1284 1284 } 1285 1285 1286 - #define TG3_PHY_AUXCTL_SMDSP_ENABLE(tp) \ 1287 - tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \ 1288 - MII_TG3_AUXCTL_ACTL_SMDSP_ENA | \ 1289 - MII_TG3_AUXCTL_ACTL_TX_6DB) 1286 + static int tg3_phy_toggle_auxctl_smdsp(struct tg3 *tp, bool enable) 1287 + { 1288 + u32 val; 1289 + int err; 1290 1290 1291 - #define TG3_PHY_AUXCTL_SMDSP_DISABLE(tp) \ 1292 - tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \ 1293 - MII_TG3_AUXCTL_ACTL_TX_6DB); 1291 + err = tg3_phy_auxctl_read(tp, MII_TG3_AUXCTL_SHDWSEL_AUXCTL, &val); 1292 + 1293 + if (err) 1294 + return err; 1295 + if (enable) 1296 + 1297 + val |= MII_TG3_AUXCTL_ACTL_SMDSP_ENA; 1298 + else 1299 + val &= ~MII_TG3_AUXCTL_ACTL_SMDSP_ENA; 1300 + 1301 + err = tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, 1302 + val | MII_TG3_AUXCTL_ACTL_TX_6DB); 1303 + 1304 + return err; 1305 + } 1294 1306 1295 1307 static int tg3_bmcr_reset(struct tg3 *tp) 1296 1308 { ··· 2235 2223 2236 2224 otp = tp->phy_otp; 2237 2225 2238 - if (TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) 2226 + if (tg3_phy_toggle_auxctl_smdsp(tp, true)) 2239 2227 return; 2240 2228 2241 2229 phy = ((otp & TG3_OTP_AGCTGT_MASK) >> TG3_OTP_AGCTGT_SHIFT); ··· 2260 2248 ((otp & TG3_OTP_RCOFF_MASK) >> TG3_OTP_RCOFF_SHIFT); 2261 2249 tg3_phydsp_write(tp, MII_TG3_DSP_EXP97, phy); 2262 2250 2263 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2251 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2264 2252 } 2265 2253 2266 2254 static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up) ··· 2296 2284 2297 2285 if (!tp->setlpicnt) { 2298 2286 if (current_link_up == 1 && 2299 - !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) { 2287 + !tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2300 2288 tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, 0x0000); 2301 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2289 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2302 2290 } 2303 2291 2304 2292 val = tr32(TG3_CPMU_EEE_MODE); ··· 2314 
2302 (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 || 2315 2303 GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719 || 2316 2304 tg3_flag(tp, 57765_CLASS)) && 2317 - !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) { 2305 + !tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2318 2306 val = MII_TG3_DSP_TAP26_ALNOKO | 2319 2307 MII_TG3_DSP_TAP26_RMRXSTO; 2320 2308 tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, val); 2321 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2309 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2322 2310 } 2323 2311 2324 2312 val = tr32(TG3_CPMU_EEE_MODE); ··· 2462 2450 tg3_writephy(tp, MII_CTRL1000, 2463 2451 CTL1000_AS_MASTER | CTL1000_ENABLE_MASTER); 2464 2452 2465 - err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp); 2453 + err = tg3_phy_toggle_auxctl_smdsp(tp, true); 2466 2454 if (err) 2467 2455 return err; 2468 2456 ··· 2483 2471 tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x8200); 2484 2472 tg3_writephy(tp, MII_TG3_DSP_CONTROL, 0x0000); 2485 2473 2486 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2474 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2487 2475 2488 2476 tg3_writephy(tp, MII_CTRL1000, phy9_orig); 2489 2477 ··· 2584 2572 2585 2573 out: 2586 2574 if ((tp->phy_flags & TG3_PHYFLG_ADC_BUG) && 2587 - !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) { 2575 + !tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2588 2576 tg3_phydsp_write(tp, 0x201f, 0x2aaa); 2589 2577 tg3_phydsp_write(tp, 0x000a, 0x0323); 2590 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2578 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2591 2579 } 2592 2580 2593 2581 if (tp->phy_flags & TG3_PHYFLG_5704_A0_BUG) { ··· 2596 2584 } 2597 2585 2598 2586 if (tp->phy_flags & TG3_PHYFLG_BER_BUG) { 2599 - if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) { 2587 + if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2600 2588 tg3_phydsp_write(tp, 0x000a, 0x310b); 2601 2589 tg3_phydsp_write(tp, 0x201f, 0x9506); 2602 2590 tg3_phydsp_write(tp, 0x401f, 0x14e2); 2603 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2591 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2604 2592 } 2605 2593 } else if (tp->phy_flags & 
TG3_PHYFLG_JITTER_BUG) { 2606 - if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) { 2594 + if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2607 2595 tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x000a); 2608 2596 if (tp->phy_flags & TG3_PHYFLG_ADJUST_TRIM) { 2609 2597 tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x110b); ··· 2612 2600 } else 2613 2601 tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x010b); 2614 2602 2615 - TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 2603 + tg3_phy_toggle_auxctl_smdsp(tp, false); 2616 2604 } 2617 2605 } 2618 2606 ··· 4021 4009 tw32(TG3_CPMU_EEE_MODE, 4022 4010 tr32(TG3_CPMU_EEE_MODE) & ~TG3_CPMU_EEEMD_LPI_ENABLE); 4023 4011 4024 - err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp); 4012 + err = tg3_phy_toggle_auxctl_smdsp(tp, true); 4025 4013 if (!err) { 4026 4014 u32 err2; 4027 4015 ··· 4054 4042 MII_TG3_DSP_CH34TP2_HIBW01); 4055 4043 } 4056 4044 4057 - err2 = TG3_PHY_AUXCTL_SMDSP_DISABLE(tp); 4045 + err2 = tg3_phy_toggle_auxctl_smdsp(tp, false); 4058 4046 if (!err) 4059 4047 err = err2; 4060 4048 } ··· 6961 6949 { 6962 6950 int i; 6963 6951 struct tg3 *tp = netdev_priv(dev); 6952 + 6953 + if (tg3_irq_sync(tp)) 6954 + return; 6964 6955 6965 6956 for (i = 0; i < tp->irq_cnt; i++) 6966 6957 tg3_interrupt(tp->napi[i].irq_vec, &tp->napi[i]); ··· 16382 16367 tp->pm_cap = pm_cap; 16383 16368 tp->rx_mode = TG3_DEF_RX_MODE; 16384 16369 tp->tx_mode = TG3_DEF_TX_MODE; 16370 + tp->irq_sync = 1; 16385 16371 16386 16372 if (tg3_debug > 0) 16387 16373 tp->msg_enable = tg3_debug;
+4
drivers/net/ethernet/calxeda/xgmac.c
··· 548 548 return -1; 549 549 } 550 550 551 + /* All frames should fit into a single buffer */ 552 + if (!(status & RXDESC_FIRST_SEG) || !(status & RXDESC_LAST_SEG)) 553 + return -1; 554 + 551 555 /* Check if packet has checksum already */ 552 556 if ((status & RXDESC_FRAME_TYPE) && (status & RXDESC_EXT_STATUS) && 553 557 !(ext_status & RXDESC_IP_PAYLOAD_MASK))
+13 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 1994 1994 { 1995 1995 const struct port_info *pi = netdev_priv(dev); 1996 1996 struct adapter *adap = pi->adapter; 1997 + struct sge_rspq *q; 1998 + int i; 1999 + int r = 0; 1997 2000 1998 - return set_rxq_intr_params(adap, &adap->sge.ethrxq[pi->first_qset].rspq, 1999 - c->rx_coalesce_usecs, c->rx_max_coalesced_frames); 2001 + for (i = pi->first_qset; i < pi->first_qset + pi->nqsets; i++) { 2002 + q = &adap->sge.ethrxq[i].rspq; 2003 + r = set_rxq_intr_params(adap, q, c->rx_coalesce_usecs, 2004 + c->rx_max_coalesced_frames); 2005 + if (r) { 2006 + dev_err(&dev->dev, "failed to set coalesce %d\n", r); 2007 + break; 2008 + } 2009 + } 2010 + return r; 2000 2011 } 2001 2012 2002 2013 static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
+2 -1
drivers/net/ethernet/intel/ixgbe/Makefile
··· 32 32 33 33 obj-$(CONFIG_IXGBE) += ixgbe.o 34 34 35 - ixgbe-objs := ixgbe_main.o ixgbe_common.o ixgbe_ethtool.o ixgbe_debugfs.o\ 35 + ixgbe-objs := ixgbe_main.o ixgbe_common.o ixgbe_ethtool.o \ 36 36 ixgbe_82599.o ixgbe_82598.o ixgbe_phy.o ixgbe_sriov.o \ 37 37 ixgbe_mbx.o ixgbe_x540.o ixgbe_lib.o ixgbe_ptp.o 38 38 ··· 40 40 ixgbe_dcb_82599.o ixgbe_dcb_nl.o 41 41 42 42 ixgbe-$(CONFIG_IXGBE_HWMON) += ixgbe_sysfs.o 43 + ixgbe-$(CONFIG_DEBUG_FS) += ixgbe_debugfs.o 43 44 ixgbe-$(CONFIG_FCOE:m=y) += ixgbe_fcoe.o
-5
drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c
··· 24 24 Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 25 25 26 26 *******************************************************************************/ 27 - 28 - #ifdef CONFIG_DEBUG_FS 29 - 30 27 #include <linux/debugfs.h> 31 28 #include <linux/module.h> 32 29 ··· 274 277 { 275 278 debugfs_remove_recursive(ixgbe_dbg_root); 276 279 } 277 - 278 - #endif /* CONFIG_DEBUG_FS */
+2 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
··· 660 660 break; 661 661 case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: 662 662 tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1; 663 - tsync_rx_mtrl = IXGBE_RXMTRL_V1_SYNC_MSG; 663 + tsync_rx_mtrl |= IXGBE_RXMTRL_V1_SYNC_MSG; 664 664 break; 665 665 case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: 666 666 tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1; 667 - tsync_rx_mtrl = IXGBE_RXMTRL_V1_DELAY_REQ_MSG; 667 + tsync_rx_mtrl |= IXGBE_RXMTRL_V1_DELAY_REQ_MSG; 668 668 break; 669 669 case HWTSTAMP_FILTER_PTP_V2_EVENT: 670 670 case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+9 -4
drivers/net/ethernet/mellanox/mlx4/en_tx.c
··· 630 630 ring->tx_csum++; 631 631 } 632 632 633 - /* Copy dst mac address to wqe */ 634 - ethh = (struct ethhdr *)skb->data; 635 - tx_desc->ctrl.srcrb_flags16[0] = get_unaligned((__be16 *)ethh->h_dest); 636 - tx_desc->ctrl.imm = get_unaligned((__be32 *)(ethh->h_dest + 2)); 633 + if (mlx4_is_mfunc(mdev->dev) || priv->validate_loopback) { 634 + /* Copy dst mac address to wqe. This allows loopback in eSwitch, 635 + * so that VFs and PF can communicate with each other 636 + */ 637 + ethh = (struct ethhdr *)skb->data; 638 + tx_desc->ctrl.srcrb_flags16[0] = get_unaligned((__be16 *)ethh->h_dest); 639 + tx_desc->ctrl.imm = get_unaligned((__be32 *)(ethh->h_dest + 2)); 640 + } 641 + 637 642 /* Handle LSO (TSO) packets */ 638 643 if (lso_header_size) { 639 644 /* Mark opcode as LSO */
+2 -9
drivers/net/ethernet/mellanox/mlx4/main.c
··· 1790 1790 int i; 1791 1791 1792 1792 if (msi_x) { 1793 - /* In multifunction mode each function gets 2 msi-X vectors 1794 - * one for data path completions anf the other for asynch events 1795 - * or command completions */ 1796 - if (mlx4_is_mfunc(dev)) { 1797 - nreq = 2; 1798 - } else { 1799 - nreq = min_t(int, dev->caps.num_eqs - 1800 - dev->caps.reserved_eqs, nreq); 1801 - } 1793 + nreq = min_t(int, dev->caps.num_eqs - dev->caps.reserved_eqs, 1794 + nreq); 1802 1795 1803 1796 entries = kcalloc(nreq, sizeof *entries, GFP_KERNEL); 1804 1797 if (!entries)
+1 -1
drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c
··· 144 144 buffrag->length, PCI_DMA_TODEVICE); 145 145 buffrag->dma = 0ULL; 146 146 } 147 - for (j = 0; j < cmd_buf->frag_count; j++) { 147 + for (j = 1; j < cmd_buf->frag_count; j++) { 148 148 buffrag++; 149 149 if (buffrag->dma) { 150 150 pci_unmap_page(adapter->pdev, buffrag->dma,
+2
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 1963 1963 while (--i >= 0) { 1964 1964 nf = &pbuf->frag_array[i+1]; 1965 1965 pci_unmap_page(pdev, nf->dma, nf->length, PCI_DMA_TODEVICE); 1966 + nf->dma = 0ULL; 1966 1967 } 1967 1968 1968 1969 nf = &pbuf->frag_array[0]; 1969 1970 pci_unmap_single(pdev, nf->dma, skb_headlen(skb), PCI_DMA_TODEVICE); 1971 + nf->dma = 0ULL; 1970 1972 1971 1973 out_err: 1972 1974 return -ENOMEM;
+6 -15
drivers/net/ethernet/realtek/r8169.c
··· 1826 1826 1827 1827 if (opts2 & RxVlanTag) 1828 1828 __vlan_hwaccel_put_tag(skb, swab16(opts2 & 0xffff)); 1829 - 1830 - desc->opts2 = 0; 1831 1829 } 1832 1830 1833 1831 static int rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd) ··· 6062 6064 !(status & (RxRWT | RxFOVF)) && 6063 6065 (dev->features & NETIF_F_RXALL)) 6064 6066 goto process_pkt; 6065 - 6066 - rtl8169_mark_to_asic(desc, rx_buf_sz); 6067 6067 } else { 6068 6068 struct sk_buff *skb; 6069 6069 dma_addr_t addr; ··· 6082 6086 if (unlikely(rtl8169_fragmented_frame(status))) { 6083 6087 dev->stats.rx_dropped++; 6084 6088 dev->stats.rx_length_errors++; 6085 - rtl8169_mark_to_asic(desc, rx_buf_sz); 6086 - continue; 6089 + goto release_descriptor; 6087 6090 } 6088 6091 6089 6092 skb = rtl8169_try_rx_copy(tp->Rx_databuff[entry], 6090 6093 tp, pkt_size, addr); 6091 - rtl8169_mark_to_asic(desc, rx_buf_sz); 6092 6094 if (!skb) { 6093 6095 dev->stats.rx_dropped++; 6094 - continue; 6096 + goto release_descriptor; 6095 6097 } 6096 6098 6097 6099 rtl8169_rx_csum(skb, status); ··· 6105 6111 tp->rx_stats.bytes += pkt_size; 6106 6112 u64_stats_update_end(&tp->rx_stats.syncp); 6107 6113 } 6108 - 6109 - /* Work around for AMD plateform. */ 6110 - if ((desc->opts2 & cpu_to_le32(0xfffe000)) && 6111 - (tp->mac_version == RTL_GIGA_MAC_VER_05)) { 6112 - desc->opts2 = 0; 6113 - cur_rx++; 6114 - } 6114 + release_descriptor: 6115 + desc->opts2 = 0; 6116 + wmb(); 6117 + rtl8169_mark_to_asic(desc, rx_buf_sz); 6115 6118 } 6116 6119 6117 6120 count = cur_rx - tp->cur_rx;
+1 -1
drivers/net/hyperv/hyperv_net.h
··· 84 84 }; 85 85 86 86 struct netvsc_device_info { 87 - unsigned char mac_adr[6]; 87 + unsigned char mac_adr[ETH_ALEN]; 88 88 bool link_state; /* 0 - link up, 1 - link down */ 89 89 int ring_size; 90 90 };
+1 -1
drivers/net/hyperv/netvsc_drv.c
··· 349 349 struct net_device_context *ndevctx = netdev_priv(ndev); 350 350 struct hv_device *hdev = ndevctx->device_ctx; 351 351 struct sockaddr *addr = p; 352 - char save_adr[14]; 352 + char save_adr[ETH_ALEN]; 353 353 unsigned char save_aatype; 354 354 int err; 355 355
+5
drivers/net/loopback.c
··· 77 77 78 78 skb_orphan(skb); 79 79 80 + /* Before queueing this packet to netif_rx(), 81 + * make sure dst is refcounted. 82 + */ 83 + skb_dst_force(skb); 84 + 80 85 skb->protocol = eth_type_trans(skb, dev); 81 86 82 87 /* it's OK to use per_cpu_ptr() because BHs are off */
+4 -1
drivers/net/macvlan.c
··· 822 822 823 823 static size_t macvlan_get_size(const struct net_device *dev) 824 824 { 825 - return nla_total_size(4); 825 + return (0 826 + + nla_total_size(4) /* IFLA_MACVLAN_MODE */ 827 + + nla_total_size(2) /* IFLA_MACVLAN_FLAGS */ 828 + ); 826 829 } 827 830 828 831 static int macvlan_fill_info(struct sk_buff *skb,
+20 -9
drivers/net/phy/icplus.c
··· 36 36 37 37 /* IP101A/G - IP1001 */ 38 38 #define IP10XX_SPEC_CTRL_STATUS 16 /* Spec. Control Register */ 39 + #define IP1001_RXPHASE_SEL (1<<0) /* Add delay on RX_CLK */ 40 + #define IP1001_TXPHASE_SEL (1<<1) /* Add delay on TX_CLK */ 39 41 #define IP1001_SPEC_CTRL_STATUS_2 20 /* IP1001 Spec. Control Reg 2 */ 40 - #define IP1001_PHASE_SEL_MASK 3 /* IP1001 RX/TXPHASE_SEL */ 41 42 #define IP1001_APS_ON 11 /* IP1001 APS Mode bit */ 42 43 #define IP101A_G_APS_ON 2 /* IP101A/G APS Mode bit */ 43 44 #define IP101A_G_IRQ_CONF_STATUS 0x11 /* Conf Info IRQ & Status Reg */ ··· 139 138 if (c < 0) 140 139 return c; 141 140 142 - /* INTR pin used: speed/link/duplex will cause an interrupt */ 143 - c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT); 144 - if (c < 0) 145 - return c; 141 + if ((phydev->interface == PHY_INTERFACE_MODE_RGMII) || 142 + (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID) || 143 + (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) || 144 + (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)) { 146 145 147 - if (phydev->interface == PHY_INTERFACE_MODE_RGMII) { 148 - /* Additional delay (2ns) used to adjust RX clock phase 149 - * at RGMII interface */ 150 146 c = phy_read(phydev, IP10XX_SPEC_CTRL_STATUS); 151 147 if (c < 0) 152 148 return c; 153 149 154 - c |= IP1001_PHASE_SEL_MASK; 150 + c &= ~(IP1001_RXPHASE_SEL | IP1001_TXPHASE_SEL); 151 + 152 + if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID) 153 + c |= (IP1001_RXPHASE_SEL | IP1001_TXPHASE_SEL); 154 + else if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) 155 + c |= IP1001_RXPHASE_SEL; 156 + else if (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 157 + c |= IP1001_TXPHASE_SEL; 158 + 155 159 c = phy_write(phydev, IP10XX_SPEC_CTRL_STATUS, c); 156 160 if (c < 0) 157 161 return c; ··· 170 164 int c; 171 165 172 166 c = ip1xx_reset(phydev); 167 + if (c < 0) 168 + return c; 169 + 170 + /* INTR pin used: speed/link/duplex will cause an interrupt */ 171 + c = 
phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT); 173 172 if (c < 0) 174 173 return c; 175 174
-9
drivers/net/phy/marvell.c
··· 353 353 int err; 354 354 int temp; 355 355 356 - /* Enable Fiber/Copper auto selection */ 357 - temp = phy_read(phydev, MII_M1111_PHY_EXT_SR); 358 - temp &= ~MII_M1111_HWCFG_FIBER_COPPER_AUTO; 359 - phy_write(phydev, MII_M1111_PHY_EXT_SR, temp); 360 - 361 - temp = phy_read(phydev, MII_BMCR); 362 - temp |= BMCR_RESET; 363 - phy_write(phydev, MII_BMCR, temp); 364 - 365 356 if ((phydev->interface == PHY_INTERFACE_MODE_RGMII) || 366 357 (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID) || 367 358 (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) ||
+32 -13
drivers/net/tun.c
··· 109 109 unsigned char addr[FLT_EXACT_COUNT][ETH_ALEN]; 110 110 }; 111 111 112 - /* 1024 is probably a high enough limit: modern hypervisors seem to support on 113 - * the order of 100-200 CPUs so this leaves us some breathing space if we want 114 - * to match a queue per guest CPU. 115 - */ 116 - #define MAX_TAP_QUEUES 1024 112 + /* DEFAULT_MAX_NUM_RSS_QUEUES were choosed to let the rx/tx queues allocated for 113 + * the netdevice to be fit in one page. So we can make sure the success of 114 + * memory allocation. TODO: increase the limit. */ 115 + #define MAX_TAP_QUEUES DEFAULT_MAX_NUM_RSS_QUEUES 116 + #define MAX_TAP_FLOWS 4096 117 117 118 118 #define TUN_FLOW_EXPIRE (3 * HZ) 119 119 ··· 185 185 unsigned long ageing_time; 186 186 unsigned int numdisabled; 187 187 struct list_head disabled; 188 + void *security; 189 + u32 flow_count; 188 190 }; 189 191 190 192 static inline u32 tun_hashfn(u32 rxhash) ··· 220 218 e->queue_index = queue_index; 221 219 e->tun = tun; 222 220 hlist_add_head_rcu(&e->hash_link, head); 221 + ++tun->flow_count; 223 222 } 224 223 return e; 225 224 } ··· 231 228 e->rxhash, e->queue_index); 232 229 hlist_del_rcu(&e->hash_link); 233 230 kfree_rcu(e, rcu); 231 + --tun->flow_count; 234 232 } 235 233 236 234 static void tun_flow_flush(struct tun_struct *tun) ··· 321 317 e->updated = jiffies; 322 318 } else { 323 319 spin_lock_bh(&tun->lock); 324 - if (!tun_flow_find(head, rxhash)) 320 + if (!tun_flow_find(head, rxhash) && 321 + tun->flow_count < MAX_TAP_FLOWS) 325 322 tun_flow_create(tun, head, rxhash, queue_index); 326 323 327 324 if (!timer_pending(&tun->flow_gc_timer)) ··· 494 489 { 495 490 struct tun_file *tfile = file->private_data; 496 491 int err; 492 + 493 + err = security_tun_dev_attach(tfile->socket.sk, tun->security); 494 + if (err < 0) 495 + goto out; 497 496 498 497 err = -EINVAL; 499 498 if (rtnl_dereference(tfile->tun)) ··· 1382 1373 1383 1374 BUG_ON(!(list_empty(&tun->disabled))); 1384 1375 tun_flow_uninit(tun); 1376 + 
security_tun_dev_free_security(tun->security); 1385 1377 free_netdev(dev); 1386 1378 } 1387 1379 ··· 1572 1562 1573 1563 if (tun_not_capable(tun)) 1574 1564 return -EPERM; 1575 - err = security_tun_dev_attach(tfile->socket.sk); 1565 + err = security_tun_dev_open(tun->security); 1576 1566 if (err < 0) 1577 1567 return err; 1578 1568 ··· 1587 1577 else { 1588 1578 char *name; 1589 1579 unsigned long flags = 0; 1580 + int queues = ifr->ifr_flags & IFF_MULTI_QUEUE ? 1581 + MAX_TAP_QUEUES : 1; 1590 1582 1591 1583 if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) 1592 1584 return -EPERM; ··· 1612 1600 name = ifr->ifr_name; 1613 1601 1614 1602 dev = alloc_netdev_mqs(sizeof(struct tun_struct), name, 1615 - tun_setup, 1616 - MAX_TAP_QUEUES, MAX_TAP_QUEUES); 1603 + tun_setup, queues, queues); 1604 + 1617 1605 if (!dev) 1618 1606 return -ENOMEM; 1619 1607 ··· 1631 1619 1632 1620 spin_lock_init(&tun->lock); 1633 1621 1634 - security_tun_dev_post_create(&tfile->sk); 1622 + err = security_tun_dev_alloc_security(&tun->security); 1623 + if (err < 0) 1624 + goto err_free_dev; 1635 1625 1636 1626 tun_net_init(dev); 1637 1627 ··· 1803 1789 1804 1790 if (ifr->ifr_flags & IFF_ATTACH_QUEUE) { 1805 1791 tun = tfile->detached; 1806 - if (!tun) 1792 + if (!tun) { 1807 1793 ret = -EINVAL; 1808 - else 1809 - ret = tun_attach(tun, file); 1794 + goto unlock; 1795 + } 1796 + ret = security_tun_dev_attach_queue(tun->security); 1797 + if (ret < 0) 1798 + goto unlock; 1799 + ret = tun_attach(tun, file); 1810 1800 } else if (ifr->ifr_flags & IFF_DETACH_QUEUE) { 1811 1801 tun = rtnl_dereference(tfile->tun); 1812 1802 if (!tun || !(tun->flags & TUN_TAP_MQ)) ··· 1820 1802 } else 1821 1803 ret = -EINVAL; 1822 1804 1805 + unlock: 1823 1806 rtnl_unlock(); 1824 1807 return ret; 1825 1808 }
+19
drivers/net/usb/cdc_mbim.c
··· 374 374 .tx_fixup = cdc_mbim_tx_fixup, 375 375 }; 376 376 377 + /* MBIM and NCM devices should not need a ZLP after NTBs with 378 + * dwNtbOutMaxSize length. This driver_info is for the exceptional 379 + * devices requiring it anyway, allowing them to be supported without 380 + * forcing the performance penalty on all the sane devices. 381 + */ 382 + static const struct driver_info cdc_mbim_info_zlp = { 383 + .description = "CDC MBIM", 384 + .flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN | FLAG_SEND_ZLP, 385 + .bind = cdc_mbim_bind, 386 + .unbind = cdc_mbim_unbind, 387 + .manage_power = cdc_mbim_manage_power, 388 + .rx_fixup = cdc_mbim_rx_fixup, 389 + .tx_fixup = cdc_mbim_tx_fixup, 390 + }; 391 + 377 392 static const struct usb_device_id mbim_devs[] = { 378 393 /* This duplicate NCM entry is intentional. MBIM devices can 379 394 * be disguised as NCM by default, and this is necessary to ··· 399 384 */ 400 385 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE), 401 386 .driver_info = (unsigned long)&cdc_mbim_info, 387 + }, 388 + /* Sierra Wireless MC7710 need ZLPs */ 389 + { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68a2, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), 390 + .driver_info = (unsigned long)&cdc_mbim_info_zlp, 402 391 }, 403 392 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), 404 393 .driver_info = (unsigned long)&cdc_mbim_info,
+30 -1
drivers/net/usb/cdc_ncm.c
··· 435 435 len -= temp; 436 436 } 437 437 438 + /* some buggy devices have an IAD but no CDC Union */ 439 + if (!ctx->union_desc && intf->intf_assoc && intf->intf_assoc->bInterfaceCount == 2) { 440 + ctx->control = intf; 441 + ctx->data = usb_ifnum_to_if(dev->udev, intf->cur_altsetting->desc.bInterfaceNumber + 1); 442 + dev_dbg(&intf->dev, "CDC Union missing - got slave from IAD\n"); 443 + } 444 + 438 445 /* check if we got everything */ 439 446 if ((ctx->control == NULL) || (ctx->data == NULL) || 440 447 ((!ctx->mbim_desc) && ((ctx->ether_desc == NULL) || (ctx->control != intf)))) ··· 504 497 error2: 505 498 usb_set_intfdata(ctx->control, NULL); 506 499 usb_set_intfdata(ctx->data, NULL); 507 - usb_driver_release_interface(driver, ctx->data); 500 + if (ctx->data != ctx->control) 501 + usb_driver_release_interface(driver, ctx->data); 508 502 error: 509 503 cdc_ncm_free((struct cdc_ncm_ctx *)dev->data[0]); 510 504 dev->data[0] = 0; ··· 1163 1155 .tx_fixup = cdc_ncm_tx_fixup, 1164 1156 }; 1165 1157 1158 + /* Same as wwan_info, but with FLAG_NOARP */ 1159 + static const struct driver_info wwan_noarp_info = { 1160 + .description = "Mobile Broadband Network Device (NO ARP)", 1161 + .flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT | FLAG_MULTI_PACKET 1162 + | FLAG_WWAN | FLAG_NOARP, 1163 + .bind = cdc_ncm_bind, 1164 + .unbind = cdc_ncm_unbind, 1165 + .check_connect = cdc_ncm_check_connect, 1166 + .manage_power = usbnet_manage_power, 1167 + .status = cdc_ncm_status, 1168 + .rx_fixup = cdc_ncm_rx_fixup, 1169 + .tx_fixup = cdc_ncm_tx_fixup, 1170 + }; 1171 + 1166 1172 static const struct usb_device_id cdc_devs[] = { 1167 1173 /* Ericsson MBM devices like F5521gw */ 1168 1174 { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO ··· 1214 1192 }, 1215 1193 { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x46), 1216 1194 .driver_info = (unsigned long)&wwan_info, 1195 + }, 1196 + 1197 + /* Infineon(now Intel) HSPA Modem platform */ 1198 + { USB_DEVICE_AND_INTERFACE_INFO(0x1519, 0x0443, 
1199 + USB_CLASS_COMM, 1200 + USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE), 1201 + .driver_info = (unsigned long)&wwan_noarp_info, 1217 1202 }, 1218 1203 1219 1204 /* Generic CDC-NCM devices */
+35 -17
drivers/net/usb/dm9601.c
··· 45 45 #define DM_MCAST_ADDR 0x16 /* 8 bytes */ 46 46 #define DM_GPR_CTRL 0x1e 47 47 #define DM_GPR_DATA 0x1f 48 + #define DM_CHIP_ID 0x2c 49 + #define DM_MODE_CTRL 0x91 /* only on dm9620 */ 50 + 51 + /* chip id values */ 52 + #define ID_DM9601 0 53 + #define ID_DM9620 1 48 54 49 55 #define DM_MAX_MCAST 64 50 56 #define DM_MCAST_SIZE 8 ··· 58 52 #define DM_TX_OVERHEAD 2 /* 2 byte header */ 59 53 #define DM_RX_OVERHEAD 7 /* 3 byte header + 4 byte crc tail */ 60 54 #define DM_TIMEOUT 1000 61 - 62 55 63 56 static int dm_read(struct usbnet *dev, u8 reg, u16 length, void *data) 64 57 { ··· 89 84 90 85 static int dm_write_reg(struct usbnet *dev, u8 reg, u8 value) 91 86 { 92 - return usbnet_write_cmd(dev, DM_WRITE_REGS, 87 + return usbnet_write_cmd(dev, DM_WRITE_REG, 93 88 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 94 89 value, reg, NULL, 0); 95 90 } 96 91 97 - static void dm_write_async_helper(struct usbnet *dev, u8 reg, u8 value, 98 - u16 length, void *data) 92 + static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data) 99 93 { 100 94 usbnet_write_cmd_async(dev, DM_WRITE_REGS, 101 95 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 102 - value, reg, data, length); 103 - } 104 - 105 - static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data) 106 - { 107 - netdev_dbg(dev->net, "dm_write_async() reg=0x%02x length=%d\n", reg, length); 108 - 109 - dm_write_async_helper(dev, reg, 0, length, data); 96 + 0, reg, data, length); 110 97 } 111 98 112 99 static void dm_write_reg_async(struct usbnet *dev, u8 reg, u8 value) 113 100 { 114 - netdev_dbg(dev->net, "dm_write_reg_async() reg=0x%02x value=0x%02x\n", 115 - reg, value); 116 - 117 - dm_write_async_helper(dev, reg, value, 0, NULL); 101 + usbnet_write_cmd_async(dev, DM_WRITE_REG, 102 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 103 + value, reg, NULL, 0); 118 104 } 119 105 120 106 static int dm_read_shared_word(struct usbnet *dev, int phy, u8 reg, __le16 *value) ··· 354 
358 static int dm9601_bind(struct usbnet *dev, struct usb_interface *intf) 355 359 { 356 360 int ret; 357 - u8 mac[ETH_ALEN]; 361 + u8 mac[ETH_ALEN], id; 358 362 359 363 ret = usbnet_get_endpoints(dev, intf); 360 364 if (ret) ··· 393 397 "dm9601: No valid MAC address in EEPROM, using %pM\n", 394 398 dev->net->dev_addr); 395 399 __dm9601_set_mac_address(dev); 400 + } 401 + 402 + if (dm_read_reg(dev, DM_CHIP_ID, &id) < 0) { 403 + netdev_err(dev->net, "Error reading chip ID\n"); 404 + ret = -ENODEV; 405 + goto out; 406 + } 407 + 408 + /* put dm9620 devices in dm9601 mode */ 409 + if (id == ID_DM9620) { 410 + u8 mode; 411 + 412 + if (dm_read_reg(dev, DM_MODE_CTRL, &mode) < 0) { 413 + netdev_err(dev->net, "Error reading MODE_CTRL\n"); 414 + ret = -ENODEV; 415 + goto out; 416 + } 417 + dm_write_reg(dev, DM_MODE_CTRL, mode & 0x7f); 396 418 } 397 419 398 420 /* power up phy */ ··· 593 579 }, 594 580 { 595 581 USB_DEVICE(0x0a46, 0x9000), /* DM9000E */ 582 + .driver_info = (unsigned long)&dm9601_info, 583 + }, 584 + { 585 + USB_DEVICE(0x0a46, 0x9620), /* DM9620 USB to Fast Ethernet Adapter */ 596 586 .driver_info = (unsigned long)&dm9601_info, 597 587 }, 598 588 {}, // END
+2
drivers/net/usb/qmi_wwan.c
··· 433 433 {QMI_FIXED_INTF(0x19d2, 0x0199, 1)}, /* ZTE MF820S */ 434 434 {QMI_FIXED_INTF(0x19d2, 0x0200, 1)}, 435 435 {QMI_FIXED_INTF(0x19d2, 0x0257, 3)}, /* ZTE MF821 */ 436 + {QMI_FIXED_INTF(0x19d2, 0x0265, 4)}, /* ONDA MT8205 4G LTE */ 436 437 {QMI_FIXED_INTF(0x19d2, 0x0284, 4)}, /* ZTE MF880 */ 437 438 {QMI_FIXED_INTF(0x19d2, 0x0326, 4)}, /* ZTE MF821D */ 438 439 {QMI_FIXED_INTF(0x19d2, 0x1008, 4)}, /* ZTE (Vodafone) K3570-Z */ ··· 460 459 {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ 461 460 {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ 462 461 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ 462 + {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 463 463 464 464 /* 4. Gobi 1000 devices */ 465 465 {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+4
drivers/net/usb/usbnet.c
··· 1448 1448 if ((dev->driver_info->flags & FLAG_WWAN) != 0) 1449 1449 strcpy(net->name, "wwan%d"); 1450 1450 1451 + /* devices that cannot do ARP */ 1452 + if ((dev->driver_info->flags & FLAG_NOARP) != 0) 1453 + net->flags |= IFF_NOARP; 1454 + 1451 1455 /* maybe the remote can't receive an Ethernet MTU */ 1452 1456 if (net->mtu > (dev->hard_mtu - net->hard_header_len)) 1453 1457 net->mtu = dev->hard_mtu - net->hard_header_len;
+98 -20
drivers/net/virtio_net.c
··· 26 26 #include <linux/scatterlist.h> 27 27 #include <linux/if_vlan.h> 28 28 #include <linux/slab.h> 29 + #include <linux/cpu.h> 29 30 30 31 static int napi_weight = 128; 31 32 module_param(napi_weight, int, 0444); ··· 124 123 125 124 /* Does the affinity hint is set for virtqueues? */ 126 125 bool affinity_hint_set; 126 + 127 + /* Per-cpu variable to show the mapping from CPU to virtqueue */ 128 + int __percpu *vq_index; 129 + 130 + /* CPU hot plug notifier */ 131 + struct notifier_block nb; 127 132 }; 128 133 129 134 struct skb_vnet_hdr { ··· 1020 1013 return 0; 1021 1014 } 1022 1015 1023 - static void virtnet_set_affinity(struct virtnet_info *vi, bool set) 1016 + static void virtnet_clean_affinity(struct virtnet_info *vi, long hcpu) 1024 1017 { 1025 1018 int i; 1019 + int cpu; 1020 + 1021 + if (vi->affinity_hint_set) { 1022 + for (i = 0; i < vi->max_queue_pairs; i++) { 1023 + virtqueue_set_affinity(vi->rq[i].vq, -1); 1024 + virtqueue_set_affinity(vi->sq[i].vq, -1); 1025 + } 1026 + 1027 + vi->affinity_hint_set = false; 1028 + } 1029 + 1030 + i = 0; 1031 + for_each_online_cpu(cpu) { 1032 + if (cpu == hcpu) { 1033 + *per_cpu_ptr(vi->vq_index, cpu) = -1; 1034 + } else { 1035 + *per_cpu_ptr(vi->vq_index, cpu) = 1036 + ++i % vi->curr_queue_pairs; 1037 + } 1038 + } 1039 + } 1040 + 1041 + static void virtnet_set_affinity(struct virtnet_info *vi) 1042 + { 1043 + int i; 1044 + int cpu; 1026 1045 1027 1046 /* In multiqueue mode, when the number of cpu is equal to the number of 1028 1047 * queue pairs, we let the queue pairs to be private to one cpu by 1029 1048 * setting the affinity hint to eliminate the contention. 
1030 1049 */ 1031 - if ((vi->curr_queue_pairs == 1 || 1032 - vi->max_queue_pairs != num_online_cpus()) && set) { 1033 - if (vi->affinity_hint_set) 1034 - set = false; 1035 - else 1036 - return; 1050 + if (vi->curr_queue_pairs == 1 || 1051 + vi->max_queue_pairs != num_online_cpus()) { 1052 + virtnet_clean_affinity(vi, -1); 1053 + return; 1037 1054 } 1038 1055 1039 - for (i = 0; i < vi->max_queue_pairs; i++) { 1040 - int cpu = set ? i : -1; 1056 + i = 0; 1057 + for_each_online_cpu(cpu) { 1041 1058 virtqueue_set_affinity(vi->rq[i].vq, cpu); 1042 1059 virtqueue_set_affinity(vi->sq[i].vq, cpu); 1060 + *per_cpu_ptr(vi->vq_index, cpu) = i; 1061 + i++; 1043 1062 } 1044 1063 1045 - if (set) 1046 - vi->affinity_hint_set = true; 1047 - else 1048 - vi->affinity_hint_set = false; 1064 + vi->affinity_hint_set = true; 1065 + } 1066 + 1067 + static int virtnet_cpu_callback(struct notifier_block *nfb, 1068 + unsigned long action, void *hcpu) 1069 + { 1070 + struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb); 1071 + 1072 + switch(action & ~CPU_TASKS_FROZEN) { 1073 + case CPU_ONLINE: 1074 + case CPU_DOWN_FAILED: 1075 + case CPU_DEAD: 1076 + virtnet_set_affinity(vi); 1077 + break; 1078 + case CPU_DOWN_PREPARE: 1079 + virtnet_clean_affinity(vi, (long)hcpu); 1080 + break; 1081 + default: 1082 + break; 1083 + } 1084 + return NOTIFY_OK; 1049 1085 } 1050 1086 1051 1087 static void virtnet_get_ringparam(struct net_device *dev, ··· 1132 1082 if (queue_pairs > vi->max_queue_pairs) 1133 1083 return -EINVAL; 1134 1084 1085 + get_online_cpus(); 1135 1086 err = virtnet_set_queues(vi, queue_pairs); 1136 1087 if (!err) { 1137 1088 netif_set_real_num_tx_queues(dev, queue_pairs); 1138 1089 netif_set_real_num_rx_queues(dev, queue_pairs); 1139 1090 1140 - virtnet_set_affinity(vi, true); 1091 + virtnet_set_affinity(vi); 1141 1092 } 1093 + put_online_cpus(); 1142 1094 1143 1095 return err; 1144 1096 } ··· 1179 1127 1180 1128 /* To avoid contending a lock hold by a vcpu who would exit to 
host, select the 1181 1129 * txq based on the processor id. 1182 - * TODO: handle cpu hotplug. 1183 1130 */ 1184 1131 static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb) 1185 1132 { 1186 - int txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 1187 - smp_processor_id(); 1133 + int txq; 1134 + struct virtnet_info *vi = netdev_priv(dev); 1135 + 1136 + if (skb_rx_queue_recorded(skb)) { 1137 + txq = skb_get_rx_queue(skb); 1138 + } else { 1139 + txq = *__this_cpu_ptr(vi->vq_index); 1140 + if (txq == -1) 1141 + txq = 0; 1142 + } 1188 1143 1189 1144 while (unlikely(txq >= dev->real_num_tx_queues)) 1190 1145 txq -= dev->real_num_tx_queues; ··· 1307 1248 { 1308 1249 struct virtio_device *vdev = vi->vdev; 1309 1250 1310 - virtnet_set_affinity(vi, false); 1251 + virtnet_clean_affinity(vi, -1); 1311 1252 1312 1253 vdev->config->del_vqs(vdev); 1313 1254 ··· 1430 1371 if (ret) 1431 1372 goto err_free; 1432 1373 1433 - virtnet_set_affinity(vi, true); 1374 + get_online_cpus(); 1375 + virtnet_set_affinity(vi); 1376 + put_online_cpus(); 1377 + 1434 1378 return 0; 1435 1379 1436 1380 err_free: ··· 1515 1453 if (vi->stats == NULL) 1516 1454 goto free; 1517 1455 1456 + vi->vq_index = alloc_percpu(int); 1457 + if (vi->vq_index == NULL) 1458 + goto free_stats; 1459 + 1518 1460 mutex_init(&vi->config_lock); 1519 1461 vi->config_enable = true; 1520 1462 INIT_WORK(&vi->config_work, virtnet_config_changed_work); ··· 1542 1476 /* Allocate/initialize the rx/tx queues, and invoke find_vqs */ 1543 1477 err = init_vqs(vi); 1544 1478 if (err) 1545 - goto free_stats; 1479 + goto free_index; 1546 1480 1547 1481 netif_set_real_num_tx_queues(dev, 1); 1548 1482 netif_set_real_num_rx_queues(dev, 1); ··· 1563 1497 err = -ENOMEM; 1564 1498 goto free_recv_bufs; 1565 1499 } 1500 + } 1501 + 1502 + vi->nb.notifier_call = &virtnet_cpu_callback; 1503 + err = register_hotcpu_notifier(&vi->nb); 1504 + if (err) { 1505 + pr_debug("virtio_net: registering cpu notifier failed\n"); 
1506 + goto free_recv_bufs; 1566 1507 } 1567 1508 1568 1509 /* Assume link up if device can't report link status, ··· 1593 1520 free_vqs: 1594 1521 cancel_delayed_work_sync(&vi->refill); 1595 1522 virtnet_del_vqs(vi); 1523 + free_index: 1524 + free_percpu(vi->vq_index); 1596 1525 free_stats: 1597 1526 free_percpu(vi->stats); 1598 1527 free: ··· 1618 1543 { 1619 1544 struct virtnet_info *vi = vdev->priv; 1620 1545 1546 + unregister_hotcpu_notifier(&vi->nb); 1547 + 1621 1548 /* Prevent config work handler from accessing the device. */ 1622 1549 mutex_lock(&vi->config_lock); 1623 1550 vi->config_enable = false; ··· 1631 1554 1632 1555 flush_work(&vi->config_work); 1633 1556 1557 + free_percpu(vi->vq_index); 1634 1558 free_percpu(vi->stats); 1635 1559 free_netdev(vi->dev); 1636 1560 }
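The hunk above replaces the old `set`-flag dance with a per-cpu queue index, and `virtnet_select_queue()` folds that index back into the currently active queue range (the count can shrink after an ethtool channel change). A userspace sketch of that fold, with hypothetical names — this is a toy model, not the kernel code:

```c
#include <assert.h>

/* Toy model of the wraparound in virtnet_select_queue(): a recorded
 * or per-cpu queue index may exceed real_num_tx_queues (e.g. after
 * the channel count was reduced), so it is folded back into range
 * by repeated subtraction. -1 models an unassigned per-cpu index. */
static int select_txq(int hint, int real_num_tx_queues)
{
    int txq = hint;

    if (txq < 0)
        txq = 0;

    while (txq >= real_num_tx_queues)
        txq -= real_num_tx_queues;

    return txq;
}
```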
+2
drivers/net/wireless/ath/ath9k/ar9003_calib.c
··· 976 976 AR_PHY_CL_TAB_1, 977 977 AR_PHY_CL_TAB_2 }; 978 978 979 + ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask); 980 + 979 981 if (rtt) { 980 982 if (!ar9003_hw_rtt_restore(ah, chan)) 981 983 run_rtt_cal = true;
+7 -20
drivers/net/wireless/ath/ath9k/ar9003_phy.c
··· 586 586 ath9k_hw_synth_delay(ah, chan, synthDelay); 587 587 } 588 588 589 - static void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx) 589 + void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx) 590 590 { 591 - switch (rx) { 592 - case 0x5: 591 + if (ah->caps.tx_chainmask == 5 || ah->caps.rx_chainmask == 5) 593 592 REG_SET_BIT(ah, AR_PHY_ANALOG_SWAP, 594 593 AR_PHY_SWAP_ALT_CHAIN); 595 - case 0x3: 596 - case 0x1: 597 - case 0x2: 598 - case 0x7: 599 - REG_WRITE(ah, AR_PHY_RX_CHAINMASK, rx); 600 - REG_WRITE(ah, AR_PHY_CAL_CHAINMASK, rx); 601 - break; 602 - default: 603 - break; 604 - } 594 + 595 + REG_WRITE(ah, AR_PHY_RX_CHAINMASK, rx); 596 + REG_WRITE(ah, AR_PHY_CAL_CHAINMASK, rx); 605 597 606 598 if ((ah->caps.hw_caps & ATH9K_HW_CAP_APM) && (tx == 0x7)) 607 - REG_WRITE(ah, AR_SELFGEN_MASK, 0x3); 608 - else 609 - REG_WRITE(ah, AR_SELFGEN_MASK, tx); 599 + tx = 3; 610 600 611 - if (tx == 0x5) { 612 - REG_SET_BIT(ah, AR_PHY_ANALOG_SWAP, 613 - AR_PHY_SWAP_ALT_CHAIN); 614 - } 601 + REG_WRITE(ah, AR_SELFGEN_MASK, tx); 615 602 } 616 603 617 604 /*
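The rework above collapses the old switch into two unconditional chainmask writes plus one quirk: on APM-capable hardware a three-chain tx mask (0x7) is reduced to 0x3 for self-generated frames before being written to `AR_SELFGEN_MASK`. A standalone sketch of that mask computation (toy function, assumed semantics):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the AR_SELFGEN_MASK computation in the patched
 * ar9003_hw_set_chain_masks(): with the APM capability set, a tx
 * chainmask of 0x7 (three chains) is written as 0x3; every other
 * mask is passed through unchanged. */
static unsigned selfgen_mask(bool has_apm, unsigned tx)
{
    if (has_apm && tx == 0x7)
        return 0x3;
    return tx;
}
```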
-3
drivers/net/wireless/ath/ath9k/ath9k.h
··· 317 317 u32 *rxlink; 318 318 u32 num_pkts; 319 319 unsigned int rxfilter; 320 - spinlock_t rxbuflock; 321 320 struct list_head rxbuf; 322 321 struct ath_descdma rxdma; 323 322 struct ath_buf *rx_bufptr; ··· 327 328 328 329 int ath_startrecv(struct ath_softc *sc); 329 330 bool ath_stoprecv(struct ath_softc *sc); 330 - void ath_flushrecv(struct ath_softc *sc); 331 331 u32 ath_calcrxfilter(struct ath_softc *sc); 332 332 int ath_rx_init(struct ath_softc *sc, int nbufs); 333 333 void ath_rx_cleanup(struct ath_softc *sc); ··· 644 646 enum sc_op_flags { 645 647 SC_OP_INVALID, 646 648 SC_OP_BEACONS, 647 - SC_OP_RXFLUSH, 648 649 SC_OP_ANI_RUN, 649 650 SC_OP_PRIM_STA_VIF, 650 651 SC_OP_HW_RESET,
+1 -1
drivers/net/wireless/ath/ath9k/beacon.c
··· 147 147 skb->len, DMA_TO_DEVICE); 148 148 dev_kfree_skb_any(skb); 149 149 bf->bf_buf_addr = 0; 150 + bf->bf_mpdu = NULL; 150 151 } 151 152 152 153 skb = ieee80211_beacon_get(hw, vif); ··· 360 359 return; 361 360 362 361 bf = ath9k_beacon_generate(sc->hw, vif); 363 - WARN_ON(!bf); 364 362 365 363 if (sc->beacon.bmisscnt != 0) { 366 364 ath_dbg(common, BSTUCK, "resume beacon xmit after %u misses\n",
-1
drivers/net/wireless/ath/ath9k/debug.c
··· 861 861 RXS_ERR("RX-LENGTH-ERR", rx_len_err); 862 862 RXS_ERR("RX-OOM-ERR", rx_oom_err); 863 863 RXS_ERR("RX-RATE-ERR", rx_rate_err); 864 - RXS_ERR("RX-DROP-RXFLUSH", rx_drop_rxflush); 865 864 RXS_ERR("RX-TOO-MANY-FRAGS", rx_too_many_frags_err); 866 865 867 866 PHY_ERR("UNDERRUN ERR", ATH9K_PHYERR_UNDERRUN);
-2
drivers/net/wireless/ath/ath9k/debug.h
··· 216 216 * @rx_oom_err: No. of frames dropped due to OOM issues. 217 217 * @rx_rate_err: No. of frames dropped due to rate errors. 218 218 * @rx_too_many_frags_err: Frames dropped due to too-many-frags received. 219 - * @rx_drop_rxflush: No. of frames dropped due to RX-FLUSH. 220 219 * @rx_beacons: No. of beacons received. 221 220 * @rx_frags: No. of rx-fragements received. 222 221 */ ··· 234 235 u32 rx_oom_err; 235 236 u32 rx_rate_err; 236 237 u32 rx_too_many_frags_err; 237 - u32 rx_drop_rxflush; 238 238 u32 rx_beacons; 239 239 u32 rx_frags; 240 240 };
+2
drivers/net/wireless/ath/ath9k/htc_hst.c
··· 344 344 endpoint->ep_callbacks.tx(endpoint->ep_callbacks.priv, 345 345 skb, htc_hdr->endpoint_id, 346 346 txok); 347 + } else { 348 + kfree_skb(skb); 347 349 } 348 350 } 349 351
+1
drivers/net/wireless/ath/ath9k/hw.h
··· 1066 1066 int ar9003_paprd_init_table(struct ath_hw *ah); 1067 1067 bool ar9003_paprd_is_done(struct ath_hw *ah); 1068 1068 bool ar9003_is_paprd_enabled(struct ath_hw *ah); 1069 + void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx); 1069 1070 1070 1071 /* Hardware family op attach helpers */ 1071 1072 void ar5008_hw_attach_phy_ops(struct ath_hw *ah);
+9 -13
drivers/net/wireless/ath/ath9k/main.c
··· 182 182 ath_start_ani(sc); 183 183 } 184 184 185 - static bool ath_prepare_reset(struct ath_softc *sc, bool retry_tx, bool flush) 185 + static bool ath_prepare_reset(struct ath_softc *sc, bool retry_tx) 186 186 { 187 187 struct ath_hw *ah = sc->sc_ah; 188 188 bool ret = true; ··· 201 201 202 202 if (!ath_drain_all_txq(sc, retry_tx)) 203 203 ret = false; 204 - 205 - if (!flush) { 206 - if (ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) 207 - ath_rx_tasklet(sc, 1, true); 208 - ath_rx_tasklet(sc, 1, false); 209 - } else { 210 - ath_flushrecv(sc); 211 - } 212 204 213 205 return ret; 214 206 } ··· 254 262 struct ath_common *common = ath9k_hw_common(ah); 255 263 struct ath9k_hw_cal_data *caldata = NULL; 256 264 bool fastcc = true; 257 - bool flush = false; 258 265 int r; 259 266 260 267 __ath_cancel_work(sc); 261 268 269 + tasklet_disable(&sc->intr_tq); 262 270 spin_lock_bh(&sc->sc_pcu_lock); 263 271 264 272 if (!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)) { ··· 268 276 269 277 if (!hchan) { 270 278 fastcc = false; 271 - flush = true; 272 279 hchan = ah->curchan; 273 280 } 274 281 275 - if (!ath_prepare_reset(sc, retry_tx, flush)) 282 + if (!ath_prepare_reset(sc, retry_tx)) 276 283 fastcc = false; 277 284 278 285 ath_dbg(common, CONFIG, "Reset to %u MHz, HT40: %d fastcc: %d\n", ··· 293 302 294 303 out: 295 304 spin_unlock_bh(&sc->sc_pcu_lock); 305 + tasklet_enable(&sc->intr_tq); 306 + 296 307 return r; 297 308 } 298 309 ··· 797 804 ath9k_hw_cfg_gpio_input(ah, ah->led_pin); 798 805 } 799 806 800 - ath_prepare_reset(sc, false, true); 807 + ath_prepare_reset(sc, false); 801 808 802 809 if (sc->rx.frag) { 803 810 dev_kfree_skb_any(sc->rx.frag); ··· 1826 1833 1827 1834 static bool validate_antenna_mask(struct ath_hw *ah, u32 val) 1828 1835 { 1836 + if (AR_SREV_9300_20_OR_LATER(ah)) 1837 + return true; 1838 + 1829 1839 switch (val & 0x7) { 1830 1840 case 0x1: 1831 1841 case 0x3:
+15 -39
drivers/net/wireless/ath/ath9k/recv.c
··· 254 254 255 255 static void ath_edma_start_recv(struct ath_softc *sc) 256 256 { 257 - spin_lock_bh(&sc->rx.rxbuflock); 258 - 259 257 ath9k_hw_rxena(sc->sc_ah); 260 258 261 259 ath_rx_addbuffer_edma(sc, ATH9K_RX_QUEUE_HP, ··· 265 267 ath_opmode_init(sc); 266 268 267 269 ath9k_hw_startpcureceive(sc->sc_ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)); 268 - 269 - spin_unlock_bh(&sc->rx.rxbuflock); 270 270 } 271 271 272 272 static void ath_edma_stop_recv(struct ath_softc *sc) ··· 281 285 int error = 0; 282 286 283 287 spin_lock_init(&sc->sc_pcu_lock); 284 - spin_lock_init(&sc->rx.rxbuflock); 285 - clear_bit(SC_OP_RXFLUSH, &sc->sc_flags); 286 288 287 289 common->rx_bufsize = IEEE80211_MAX_MPDU_LEN / 2 + 288 290 sc->sc_ah->caps.rx_status_len; ··· 441 447 return 0; 442 448 } 443 449 444 - spin_lock_bh(&sc->rx.rxbuflock); 445 450 if (list_empty(&sc->rx.rxbuf)) 446 451 goto start_recv; 447 452 ··· 461 468 ath_opmode_init(sc); 462 469 ath9k_hw_startpcureceive(ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)); 463 470 464 - spin_unlock_bh(&sc->rx.rxbuflock); 465 - 466 471 return 0; 472 + } 473 + 474 + static void ath_flushrecv(struct ath_softc *sc) 475 + { 476 + if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) 477 + ath_rx_tasklet(sc, 1, true); 478 + ath_rx_tasklet(sc, 1, false); 467 479 } 468 480 469 481 bool ath_stoprecv(struct ath_softc *sc) ··· 476 478 struct ath_hw *ah = sc->sc_ah; 477 479 bool stopped, reset = false; 478 480 479 - spin_lock_bh(&sc->rx.rxbuflock); 480 481 ath9k_hw_abortpcurecv(ah); 481 482 ath9k_hw_setrxfilter(ah, 0); 482 483 stopped = ath9k_hw_stopdmarecv(ah, &reset); 484 + 485 + ath_flushrecv(sc); 483 486 484 487 if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) 485 488 ath_edma_stop_recv(sc); 486 489 else 487 490 sc->rx.rxlink = NULL; 488 - spin_unlock_bh(&sc->rx.rxbuflock); 489 491 490 492 if (!(ah->ah_flags & AH_UNPLUGGED) && 491 493 unlikely(!stopped)) { ··· 495 497 ATH_DBG_WARN_ON_ONCE(!stopped); 496 498 } 497 499 return stopped 
&& !reset; 498 - } 499 - 500 - void ath_flushrecv(struct ath_softc *sc) 501 - { 502 - set_bit(SC_OP_RXFLUSH, &sc->sc_flags); 503 - if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) 504 - ath_rx_tasklet(sc, 1, true); 505 - ath_rx_tasklet(sc, 1, false); 506 - clear_bit(SC_OP_RXFLUSH, &sc->sc_flags); 507 500 } 508 501 509 502 static bool ath_beacon_dtim_pending_cab(struct sk_buff *skb) ··· 733 744 return NULL; 734 745 } 735 746 747 + list_del(&bf->list); 736 748 if (!bf->bf_mpdu) 737 749 return bf; 738 750 ··· 1049 1059 dma_type = DMA_FROM_DEVICE; 1050 1060 1051 1061 qtype = hp ? ATH9K_RX_QUEUE_HP : ATH9K_RX_QUEUE_LP; 1052 - spin_lock_bh(&sc->rx.rxbuflock); 1053 1062 1054 1063 tsf = ath9k_hw_gettsf64(ah); 1055 1064 tsf_lower = tsf & 0xffffffff; 1056 1065 1057 1066 do { 1058 1067 bool decrypt_error = false; 1059 - /* If handling rx interrupt and flush is in progress => exit */ 1060 - if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags) && (flush == 0)) 1061 - break; 1062 1068 1063 1069 memset(&rs, 0, sizeof(rs)); 1064 1070 if (edma) ··· 1096 1110 sc->rx.num_pkts++; 1097 1111 1098 1112 ath_debug_stat_rx(sc, &rs); 1099 - 1100 - /* 1101 - * If we're asked to flush receive queue, directly 1102 - * chain it back at the queue without processing it. 
1103 - */ 1104 - if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags)) { 1105 - RX_STAT_INC(rx_drop_rxflush); 1106 - goto requeue_drop_frag; 1107 - } 1108 1113 1109 1114 memset(rxs, 0, sizeof(struct ieee80211_rx_status)); 1110 1115 ··· 1231 1254 sc->rx.frag = NULL; 1232 1255 } 1233 1256 requeue: 1257 + list_add_tail(&bf->list, &sc->rx.rxbuf); 1258 + if (flush) 1259 + continue; 1260 + 1234 1261 if (edma) { 1235 - list_add_tail(&bf->list, &sc->rx.rxbuf); 1236 1262 ath_rx_edma_buf_link(sc, qtype); 1237 1263 } else { 1238 - list_move_tail(&bf->list, &sc->rx.rxbuf); 1239 1264 ath_rx_buf_link(sc, bf); 1240 - if (!flush) 1241 - ath9k_hw_rxena(ah); 1265 + ath9k_hw_rxena(ah); 1242 1266 } 1243 1267 } while (1); 1244 - 1245 - spin_unlock_bh(&sc->rx.rxbuflock); 1246 1268 1247 1269 if (!(ah->imask & ATH9K_INT_RXEOL)) { 1248 1270 ah->imask |= (ATH9K_INT_RXEOL | ATH9K_INT_RXORN);
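With the `SC_OP_RXFLUSH` bit and `rxbuflock` gone, the reworked tasklet loop above always requeues a processed buffer, but during a flush it `continue`s past the re-link/re-arm step so no new DMA is started. A toy model of that control flow (stand-in names, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the reworked ath_rx_tasklet() requeue path: every
 * buffer goes back on the free list, but while flushing the
 * hardware is never re-armed for further DMA. */
static int rearm_count;

static void toy_rearm_rx(void)
{
    rearm_count++;
}

static int toy_rx_loop(int nbufs, bool flush)
{
    int requeued = 0;

    rearm_count = 0;
    for (int i = 0; i < nbufs; i++) {
        requeued++;          /* list_add_tail(&bf->list, &sc->rx.rxbuf) */
        if (flush)
            continue;        /* flushing: requeue only, don't re-arm */
        toy_rearm_rx();      /* normal path: hand buffer back to DMA */
    }
    return requeued;
}
```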
+4 -3
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c
··· 1407 1407 #endif 1408 1408 t->ms = ms; 1409 1409 t->periodic = (bool) periodic; 1410 - t->set = true; 1411 - 1412 - atomic_inc(&t->wl->callbacks); 1410 + if (!t->set) { 1411 + t->set = true; 1412 + atomic_inc(&t->wl->callbacks); 1413 + } 1413 1414 1414 1415 ieee80211_queue_delayed_work(hw, &t->dly_wrk, msecs_to_jiffies(ms)); 1415 1416 }
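The brcmsmac fix above guards the `atomic_inc(&t->wl->callbacks)` so that re-arming an already-armed timer no longer bumps the reference count a second time, which previously left it permanently unbalanced. The guarded increment can be modeled with simplified stand-in types:

```c
#include <assert.h>

struct toy_timer {
    int set;        /* already armed? */
    int *callbacks; /* shared pending-callback counter */
};

/* Only the first arming takes a reference on the callback counter;
 * re-arming an already-armed timer leaves it untouched. */
static void arm_timer(struct toy_timer *t)
{
    if (!t->set) {
        t->set = 1;
        (*t->callbacks)++;
    }
    /* ...queue the delayed work here... */
}

/* Arm the same timer n times and report the resulting count. */
static int armed_count_after(int n)
{
    int cb = 0;
    struct toy_timer t = { 0, &cb };

    while (n-- > 0)
        arm_timer(&t);
    return cb;
}
```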
+14 -21
drivers/net/wireless/iwlegacy/common.c
··· 3958 3958 3959 3959 memset(&il->staging, 0, sizeof(il->staging)); 3960 3960 3961 - if (!il->vif) { 3961 + switch (il->iw_mode) { 3962 + case NL80211_IFTYPE_UNSPECIFIED: 3962 3963 il->staging.dev_type = RXON_DEV_TYPE_ESS; 3963 - } else if (il->vif->type == NL80211_IFTYPE_STATION) { 3964 + break; 3965 + case NL80211_IFTYPE_STATION: 3964 3966 il->staging.dev_type = RXON_DEV_TYPE_ESS; 3965 3967 il->staging.filter_flags = RXON_FILTER_ACCEPT_GRP_MSK; 3966 - } else if (il->vif->type == NL80211_IFTYPE_ADHOC) { 3968 + break; 3969 + case NL80211_IFTYPE_ADHOC: 3967 3970 il->staging.dev_type = RXON_DEV_TYPE_IBSS; 3968 3971 il->staging.flags = RXON_FLG_SHORT_PREAMBLE_MSK; 3969 3972 il->staging.filter_flags = 3970 3973 RXON_FILTER_BCON_AWARE_MSK | RXON_FILTER_ACCEPT_GRP_MSK; 3971 - } else { 3974 + break; 3975 + default: 3972 3976 IL_ERR("Unsupported interface type %d\n", il->vif->type); 3973 3977 return; 3974 3978 } ··· 4554 4550 EXPORT_SYMBOL(il_mac_add_interface); 4555 4551 4556 4552 static void 4557 - il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif, 4558 - bool mode_change) 4553 + il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif) 4559 4554 { 4560 4555 lockdep_assert_held(&il->mutex); 4561 4556 ··· 4563 4560 il_force_scan_end(il); 4564 4561 } 4565 4562 4566 - if (!mode_change) 4567 - il_set_mode(il); 4568 - 4563 + il_set_mode(il); 4569 4564 } 4570 4565 4571 4566 void ··· 4576 4575 4577 4576 WARN_ON(il->vif != vif); 4578 4577 il->vif = NULL; 4579 - 4580 - il_teardown_interface(il, vif, false); 4578 + il->iw_mode = NL80211_IFTYPE_UNSPECIFIED; 4579 + il_teardown_interface(il, vif); 4581 4580 memset(il->bssid, 0, ETH_ALEN); 4582 4581 4583 4582 D_MAC80211("leave\n"); ··· 4686 4685 } 4687 4686 4688 4687 /* success */ 4689 - il_teardown_interface(il, vif, true); 4690 4688 vif->type = newtype; 4691 4689 vif->p2p = false; 4692 - err = il_set_mode(il); 4693 - WARN_ON(err); 4694 - /* 4695 - * We've switched internally, but submitting to the 4696 
- * device may have failed for some reason. Mask this 4697 - * error, because otherwise mac80211 will not switch 4698 - * (and set the interface type back) and we'll be 4699 - * out of sync with it. 4700 - */ 4690 + il->iw_mode = newtype; 4691 + il_teardown_interface(il, vif); 4701 4692 err = 0; 4702 4693 4703 4694 out:
+2
drivers/net/wireless/iwlwifi/dvm/tx.c
··· 1079 1079 { 1080 1080 u16 status = le16_to_cpu(tx_resp->status.status); 1081 1081 1082 + info->flags &= ~IEEE80211_TX_CTL_AMPDU; 1083 + 1082 1084 info->status.rates[0].count = tx_resp->failure_frame + 1; 1083 1085 info->flags |= iwl_tx_status_to_mac80211(status); 1084 1086 iwlagn_hwrate_to_tx_control(priv, le32_to_cpu(tx_resp->rate_n_flags),
+2 -15
drivers/net/wireless/mwifiex/cfg80211.c
··· 1459 1459 struct cfg80211_ssid req_ssid; 1460 1460 int ret, auth_type = 0; 1461 1461 struct cfg80211_bss *bss = NULL; 1462 - u8 is_scanning_required = 0, config_bands = 0; 1462 + u8 is_scanning_required = 0; 1463 1463 1464 1464 memset(&req_ssid, 0, sizeof(struct cfg80211_ssid)); 1465 1465 ··· 1477 1477 1478 1478 /* disconnect before try to associate */ 1479 1479 mwifiex_deauthenticate(priv, NULL); 1480 - 1481 - if (channel) { 1482 - if (mode == NL80211_IFTYPE_STATION) { 1483 - if (channel->band == IEEE80211_BAND_2GHZ) 1484 - config_bands = BAND_B | BAND_G | BAND_GN; 1485 - else 1486 - config_bands = BAND_A | BAND_AN; 1487 - 1488 - if (!((config_bands | priv->adapter->fw_bands) & 1489 - ~priv->adapter->fw_bands)) 1490 - priv->adapter->config_bands = config_bands; 1491 - } 1492 - } 1493 1480 1494 1481 /* As this is new association, clear locally stored 1495 1482 * keys and security related flags */ ··· 1694 1707 1695 1708 if (cfg80211_get_chandef_type(&params->chandef) != 1696 1709 NL80211_CHAN_NO_HT) 1697 - config_bands |= BAND_GN; 1710 + config_bands |= BAND_G | BAND_GN; 1698 1711 } else { 1699 1712 if (cfg80211_get_chandef_type(&params->chandef) == 1700 1713 NL80211_CHAN_NO_HT)
+1 -1
drivers/net/wireless/mwifiex/pcie.c
··· 161 161 162 162 if (pdev) { 163 163 card = (struct pcie_service_card *) pci_get_drvdata(pdev); 164 - if (!card || card->adapter) { 164 + if (!card || !card->adapter) { 165 165 pr_err("Card or adapter structure is not valid\n"); 166 166 return 0; 167 167 }
+14
drivers/net/wireless/mwifiex/sta_ioctl.c
··· 283 283 if (ret) 284 284 goto done; 285 285 286 + if (bss_desc) { 287 + u8 config_bands = 0; 288 + 289 + if (mwifiex_band_to_radio_type((u8) bss_desc->bss_band) 290 + == HostCmd_SCAN_RADIO_TYPE_BG) 291 + config_bands = BAND_B | BAND_G | BAND_GN; 292 + else 293 + config_bands = BAND_A | BAND_AN; 294 + 295 + if (!((config_bands | adapter->fw_bands) & 296 + ~adapter->fw_bands)) 297 + adapter->config_bands = config_bands; 298 + } 299 + 286 300 ret = mwifiex_check_network_compatibility(priv, bss_desc); 287 301 if (ret) 288 302 goto done;
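The band-selection logic moved here only adopts `config_bands` when the firmware supports every requested band. The test `!((config_bands | adapter->fw_bands) & ~adapter->fw_bands)` is a bitmask subset check — `(config | fw) & ~fw` reduces to `config & ~fw`. A sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* True when every bit in config_bands is also set in fw_bands,
 * i.e. the requested bands are a subset of what the firmware
 * advertises. (config | fw) & ~fw simplifies to config & ~fw. */
static bool bands_supported(unsigned config_bands, unsigned fw_bands)
{
    return !((config_bands | fw_bands) & ~fw_bands);
}
```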
+2 -2
drivers/net/wireless/rtlwifi/Kconfig
··· 57 57 58 58 config RTLWIFI 59 59 tristate 60 - depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE 60 + depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE || RTL8723AE 61 61 default m 62 62 63 63 config RTLWIFI_DEBUG 64 64 bool "Additional debugging output" 65 - depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE 65 + depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE || RTL8723AE 66 66 default y 67 67 68 68 config RTL8192C_COMMON
+1 -1
drivers/pci/hotplug/pciehp.h
··· 44 44 extern int pciehp_poll_time; 45 45 extern bool pciehp_debug; 46 46 extern bool pciehp_force; 47 - extern struct workqueue_struct *pciehp_wq; 48 47 49 48 #define dbg(format, arg...) \ 50 49 do { \ ··· 77 78 struct hotplug_slot *hotplug_slot; 78 79 struct delayed_work work; /* work for button event */ 79 80 struct mutex lock; 81 + struct workqueue_struct *wq; 80 82 }; 81 83 82 84 struct event_info {
+2 -9
drivers/pci/hotplug/pciehp_core.c
··· 42 42 bool pciehp_poll_mode; 43 43 int pciehp_poll_time; 44 44 bool pciehp_force; 45 - struct workqueue_struct *pciehp_wq; 46 45 47 46 #define DRIVER_VERSION "0.4" 48 47 #define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>" ··· 339 340 { 340 341 int retval = 0; 341 342 342 - pciehp_wq = alloc_workqueue("pciehp", 0, 0); 343 - if (!pciehp_wq) 344 - return -ENOMEM; 345 - 346 343 pciehp_firmware_init(); 347 344 retval = pcie_port_service_register(&hpdriver_portdrv); 348 345 dbg("pcie_port_service_register = %d\n", retval); 349 346 info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); 350 - if (retval) { 351 - destroy_workqueue(pciehp_wq); 347 + if (retval) 352 348 dbg("Failure to register service\n"); 353 - } 349 + 354 350 return retval; 355 351 } 356 352 ··· 353 359 { 354 360 dbg("unload_pciehpd()\n"); 355 361 pcie_port_service_unregister(&hpdriver_portdrv); 356 - destroy_workqueue(pciehp_wq); 357 362 info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n"); 358 363 } 359 364
+4 -4
drivers/pci/hotplug/pciehp_ctrl.c
··· 49 49 info->p_slot = p_slot; 50 50 INIT_WORK(&info->work, interrupt_event_handler); 51 51 52 - queue_work(pciehp_wq, &info->work); 52 + queue_work(p_slot->wq, &info->work); 53 53 54 54 return 0; 55 55 } ··· 344 344 kfree(info); 345 345 goto out; 346 346 } 347 - queue_work(pciehp_wq, &info->work); 347 + queue_work(p_slot->wq, &info->work); 348 348 out: 349 349 mutex_unlock(&p_slot->lock); 350 350 } ··· 377 377 if (ATTN_LED(ctrl)) 378 378 pciehp_set_attention_status(p_slot, 0); 379 379 380 - queue_delayed_work(pciehp_wq, &p_slot->work, 5*HZ); 380 + queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ); 381 381 break; 382 382 case BLINKINGOFF_STATE: 383 383 case BLINKINGON_STATE: ··· 439 439 else 440 440 p_slot->state = POWERON_STATE; 441 441 442 - queue_work(pciehp_wq, &info->work); 442 + queue_work(p_slot->wq, &info->work); 443 443 } 444 444 445 445 static void interrupt_event_handler(struct work_struct *work)
+10 -1
drivers/pci/hotplug/pciehp_hpc.c
··· 773 773 static int pcie_init_slot(struct controller *ctrl) 774 774 { 775 775 struct slot *slot; 776 + char name[32]; 776 777 777 778 slot = kzalloc(sizeof(*slot), GFP_KERNEL); 778 779 if (!slot) 779 780 return -ENOMEM; 781 + 782 + snprintf(name, sizeof(name), "pciehp-%u", PSN(ctrl)); 783 + slot->wq = alloc_workqueue(name, 0, 0); 784 + if (!slot->wq) 785 + goto abort; 780 786 781 787 slot->ctrl = ctrl; 782 788 mutex_init(&slot->lock); 783 789 INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work); 784 790 ctrl->slot = slot; 785 791 return 0; 792 + abort: 793 + kfree(slot); 794 + return -ENOMEM; 786 795 } 787 796 788 797 static void pcie_cleanup_slot(struct controller *ctrl) 789 798 { 790 799 struct slot *slot = ctrl->slot; 791 800 cancel_delayed_work(&slot->work); 792 - flush_workqueue(pciehp_wq); 801 + destroy_workqueue(slot->wq); 793 802 kfree(slot); 794 803 } 795 804
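`pcie_init_slot()` now creates a per-slot workqueue (named after the controller's physical slot number) and, on failure, unwinds by freeing the partially initialised slot rather than leaking it. The constructor/error-path shape can be sketched with userspace stand-ins for the allocators — hypothetical names throughout:

```c
#include <assert.h>
#include <stdlib.h>

#define ENOMEM 12

struct toy_slot {
    void *wq;
};

/* Stand-in allocator that can be forced to fail, to exercise the
 * error path of a pcie_init_slot()-style constructor. */
static int fail_wq_alloc;

static void *toy_alloc_wq(void)
{
    return fail_wq_alloc ? NULL : malloc(1);
}

/* Mirrors the patched pcie_init_slot(): if the per-slot workqueue
 * cannot be created, the partially initialised slot is freed and
 * -ENOMEM returned instead of leaking it. */
static int toy_init_slot(struct toy_slot **out)
{
    struct toy_slot *slot = calloc(1, sizeof(*slot));

    if (!slot)
        return -ENOMEM;

    slot->wq = toy_alloc_wq();
    if (!slot->wq) {
        free(slot);          /* unwind: nothing else was set up yet */
        return -ENOMEM;
    }

    *out = slot;
    return 0;
}
```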
+1 -2
drivers/pci/hotplug/shpchp.h
··· 46 46 extern bool shpchp_poll_mode; 47 47 extern int shpchp_poll_time; 48 48 extern bool shpchp_debug; 49 - extern struct workqueue_struct *shpchp_wq; 50 - extern struct workqueue_struct *shpchp_ordered_wq; 51 49 52 50 #define dbg(format, arg...) \ 53 51 do { \ ··· 89 91 struct list_head slot_list; 90 92 struct delayed_work work; /* work for button event */ 91 93 struct mutex lock; 94 + struct workqueue_struct *wq; 92 95 u8 hp_slot; 93 96 }; 94 97
+14 -22
drivers/pci/hotplug/shpchp_core.c
··· 39 39 bool shpchp_debug; 40 40 bool shpchp_poll_mode; 41 41 int shpchp_poll_time; 42 - struct workqueue_struct *shpchp_wq; 43 - struct workqueue_struct *shpchp_ordered_wq; 44 42 45 43 #define DRIVER_VERSION "0.4" 46 44 #define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>" ··· 127 129 slot->device = ctrl->slot_device_offset + i; 128 130 slot->hpc_ops = ctrl->hpc_ops; 129 131 slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i); 132 + 133 + snprintf(name, sizeof(name), "shpchp-%d", slot->number); 134 + slot->wq = alloc_workqueue(name, 0, 0); 135 + if (!slot->wq) { 136 + retval = -ENOMEM; 137 + goto error_info; 138 + } 139 + 130 140 mutex_init(&slot->lock); 131 141 INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work); 132 142 ··· 154 148 if (retval) { 155 149 ctrl_err(ctrl, "pci_hp_register failed with error %d\n", 156 150 retval); 157 - goto error_info; 151 + goto error_slotwq; 158 152 } 159 153 160 154 get_power_status(hotplug_slot, &info->power_status); ··· 166 160 } 167 161 168 162 return 0; 163 + error_slotwq: 164 + destroy_workqueue(slot->wq); 169 165 error_info: 170 166 kfree(info); 171 167 error_hpslot: ··· 188 180 slot = list_entry(tmp, struct slot, slot_list); 189 181 list_del(&slot->slot_list); 190 182 cancel_delayed_work(&slot->work); 191 - flush_workqueue(shpchp_wq); 192 - flush_workqueue(shpchp_ordered_wq); 183 + destroy_workqueue(slot->wq); 193 184 pci_hp_deregister(slot->hotplug_slot); 194 185 } 195 186 } ··· 371 364 372 365 static int __init shpcd_init(void) 373 366 { 374 - int retval = 0; 375 - 376 - shpchp_wq = alloc_ordered_workqueue("shpchp", 0); 377 - if (!shpchp_wq) 378 - return -ENOMEM; 379 - 380 - shpchp_ordered_wq = alloc_ordered_workqueue("shpchp_ordered", 0); 381 - if (!shpchp_ordered_wq) { 382 - destroy_workqueue(shpchp_wq); 383 - return -ENOMEM; 384 - } 367 + int retval; 385 368 386 369 retval = pci_register_driver(&shpc_driver); 387 370 dbg("%s: 
pci_register_driver = %d\n", __func__, retval); 388 371 info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); 389 - if (retval) { 390 - destroy_workqueue(shpchp_ordered_wq); 391 - destroy_workqueue(shpchp_wq); 392 - } 372 + 393 373 return retval; 394 374 } 395 375 ··· 384 390 { 385 391 dbg("unload_shpchpd()\n"); 386 392 pci_unregister_driver(&shpc_driver); 387 - destroy_workqueue(shpchp_ordered_wq); 388 - destroy_workqueue(shpchp_wq); 389 393 info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n"); 390 394 } 391 395
+3 -3
drivers/pci/hotplug/shpchp_ctrl.c
··· 51 51 info->p_slot = p_slot; 52 52 INIT_WORK(&info->work, interrupt_event_handler); 53 53 54 - queue_work(shpchp_wq, &info->work); 54 + queue_work(p_slot->wq, &info->work); 55 55 56 56 return 0; 57 57 } ··· 453 453 kfree(info); 454 454 goto out; 455 455 } 456 - queue_work(shpchp_ordered_wq, &info->work); 456 + queue_work(p_slot->wq, &info->work); 457 457 out: 458 458 mutex_unlock(&p_slot->lock); 459 459 } ··· 501 501 p_slot->hpc_ops->green_led_blink(p_slot); 502 502 p_slot->hpc_ops->set_attention_status(p_slot, 0); 503 503 504 - queue_delayed_work(shpchp_wq, &p_slot->work, 5*HZ); 504 + queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ); 505 505 break; 506 506 case BLINKINGOFF_STATE: 507 507 case BLINKINGON_STATE:
+1 -1
drivers/pci/pcie/Kconfig
··· 82 82 83 83 config PCIE_PME 84 84 def_bool y 85 - depends on PCIEPORTBUS && PM_RUNTIME && EXPERIMENTAL && ACPI 85 + depends on PCIEPORTBUS && PM_RUNTIME && ACPI
+1
drivers/pci/pcie/aer/aerdrv_core.c
··· 630 630 continue; 631 631 } 632 632 do_recovery(pdev, entry.severity); 633 + pci_dev_put(pdev); 633 634 } 634 635 } 635 636 #endif
+3
drivers/pci/pcie/aspm.c
··· 771 771 { 772 772 struct pci_dev *child; 773 773 774 + if (aspm_force) 775 + return; 776 + 774 777 /* 775 778 * Clear any ASPM setup that the firmware has carried out on this bus 776 779 */
-1
drivers/pinctrl/Kconfig
··· 181 181 182 182 config PINCTRL_SAMSUNG 183 183 bool 184 - depends on OF && GPIOLIB 185 184 select PINMUX 186 185 select PINCONF 187 186
+1 -1
drivers/pinctrl/mvebu/pinctrl-dove.c
··· 588 588 { 589 589 const struct of_device_id *match = 590 590 of_match_device(dove_pinctrl_of_match, &pdev->dev); 591 - pdev->dev.platform_data = match->data; 591 + pdev->dev.platform_data = (void *)match->data; 592 592 593 593 /* 594 594 * General MPP Configuration Register is part of pdma registers.
+4 -4
drivers/pinctrl/mvebu/pinctrl-kirkwood.c
··· 66 66 MPP_VAR_FUNCTION(0x5, "sata0", "act", V(0, 1, 1, 1, 1, 0)), 67 67 MPP_VAR_FUNCTION(0xb, "lcd", "vsync", V(0, 0, 0, 0, 1, 0))), 68 68 MPP_MODE(6, 69 - MPP_VAR_FUNCTION(0x0, "sysrst", "out", V(1, 1, 1, 1, 1, 1)), 70 - MPP_VAR_FUNCTION(0x1, "spi", "mosi", V(1, 1, 1, 1, 1, 1)), 71 - MPP_VAR_FUNCTION(0x2, "ptp", "trig", V(1, 1, 1, 1, 0, 0))), 69 + MPP_VAR_FUNCTION(0x1, "sysrst", "out", V(1, 1, 1, 1, 1, 1)), 70 + MPP_VAR_FUNCTION(0x2, "spi", "mosi", V(1, 1, 1, 1, 1, 1)), 71 + MPP_VAR_FUNCTION(0x3, "ptp", "trig", V(1, 1, 1, 1, 0, 0))), 72 72 MPP_MODE(7, 73 73 MPP_VAR_FUNCTION(0x0, "gpo", NULL, V(1, 1, 1, 1, 1, 1)), 74 74 MPP_VAR_FUNCTION(0x1, "pex", "rsto", V(1, 1, 1, 1, 0, 1)), ··· 458 458 { 459 459 const struct of_device_id *match = 460 460 of_match_device(kirkwood_pinctrl_of_match, &pdev->dev); 461 - pdev->dev.platform_data = match->data; 461 + pdev->dev.platform_data = (void *)match->data; 462 462 return mvebu_pinctrl_probe(pdev); 463 463 } 464 464
+5 -5
drivers/pinctrl/pinctrl-exynos5440.c
··· 599 599 } 600 600 601 601 /* parse the pin numbers listed in the 'samsung,exynos5440-pins' property */ 602 - static int __init exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev, 602 + static int exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev, 603 603 struct device_node *cfg_np, unsigned int **pin_list, 604 604 unsigned int *npins) 605 605 { ··· 630 630 * Parse the information about all the available pin groups and pin functions 631 631 * from device node of the pin-controller. 632 632 */ 633 - static int __init exynos5440_pinctrl_parse_dt(struct platform_device *pdev, 633 + static int exynos5440_pinctrl_parse_dt(struct platform_device *pdev, 634 634 struct exynos5440_pinctrl_priv_data *priv) 635 635 { 636 636 struct device *dev = &pdev->dev; ··· 723 723 } 724 724 725 725 /* register the pinctrl interface with the pinctrl subsystem */ 726 - static int __init exynos5440_pinctrl_register(struct platform_device *pdev, 726 + static int exynos5440_pinctrl_register(struct platform_device *pdev, 727 727 struct exynos5440_pinctrl_priv_data *priv) 728 728 { 729 729 struct device *dev = &pdev->dev; ··· 798 798 } 799 799 800 800 /* register the gpiolib interface with the gpiolib subsystem */ 801 - static int __init exynos5440_gpiolib_register(struct platform_device *pdev, 801 + static int exynos5440_gpiolib_register(struct platform_device *pdev, 802 802 struct exynos5440_pinctrl_priv_data *priv) 803 803 { 804 804 struct gpio_chip *gc; ··· 831 831 } 832 832 833 833 /* unregister the gpiolib interface with the gpiolib subsystem */ 834 - static int __init exynos5440_gpiolib_unregister(struct platform_device *pdev, 834 + static int exynos5440_gpiolib_unregister(struct platform_device *pdev, 835 835 struct exynos5440_pinctrl_priv_data *priv) 836 836 { 837 837 int ret = gpiochip_remove(priv->gc);
+4 -5
drivers/pinctrl/pinctrl-mxs.c
··· 146 146 static void mxs_dt_free_map(struct pinctrl_dev *pctldev, 147 147 struct pinctrl_map *map, unsigned num_maps) 148 148 { 149 - int i; 149 + u32 i; 150 150 151 151 for (i = 0; i < num_maps; i++) { 152 152 if (map[i].type == PIN_MAP_TYPE_MUX_GROUP) ··· 203 203 void __iomem *reg; 204 204 u8 bank, shift; 205 205 u16 pin; 206 - int i; 206 + u32 i; 207 207 208 208 for (i = 0; i < g->npins; i++) { 209 209 bank = PINID_TO_BANK(g->pins[i]); ··· 256 256 void __iomem *reg; 257 257 u8 ma, vol, pull, bank, shift; 258 258 u16 pin; 259 - int i; 259 + u32 i; 260 260 261 261 ma = CONFIG_TO_MA(config); 262 262 vol = CONFIG_TO_VOL(config); ··· 345 345 const char *propname = "fsl,pinmux-ids"; 346 346 char *group; 347 347 int length = strlen(np->name) + SUFFIX_LEN; 348 - int i; 349 - u32 val; 348 + u32 val, i; 350 349 351 350 group = devm_kzalloc(&pdev->dev, length, GFP_KERNEL); 352 351 if (!group)
+1 -1
drivers/pinctrl/pinctrl-nomadik.c
··· 676 676 } 677 677 EXPORT_SYMBOL(nmk_gpio_set_mode); 678 678 679 - static int nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio) 679 + static int __maybe_unused nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio) 680 680 { 681 681 int i; 682 682 u16 reg;
+2 -77
drivers/pinctrl/pinctrl-single.c
··· 30 30 #define PCS_MUX_BITS_NAME "pinctrl-single,bits" 31 31 #define PCS_REG_NAME_LEN ((sizeof(unsigned long) * 2) + 1) 32 32 #define PCS_OFF_DISABLED ~0U 33 - #define PCS_MAX_GPIO_VALUES 2 34 33 35 34 /** 36 35 * struct pcs_pingroup - pingroups for a function ··· 74 75 const char **pgnames; 75 76 int npgnames; 76 77 struct list_head node; 77 - }; 78 - 79 - /** 80 - * struct pcs_gpio_range - pinctrl gpio range 81 - * @range: subrange of the GPIO number space 82 - * @gpio_func: gpio function value in the pinmux register 83 - */ 84 - struct pcs_gpio_range { 85 - struct pinctrl_gpio_range range; 86 - int gpio_func; 87 78 }; 88 79 89 80 /** ··· 403 414 } 404 415 405 416 static int pcs_request_gpio(struct pinctrl_dev *pctldev, 406 - struct pinctrl_gpio_range *range, unsigned pin) 417 + struct pinctrl_gpio_range *range, unsigned offset) 407 418 { 408 - struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev); 409 - struct pcs_gpio_range *gpio = NULL; 410 - int end, mux_bytes; 411 - unsigned data; 412 - 413 - gpio = container_of(range, struct pcs_gpio_range, range); 414 - end = range->pin_base + range->npins - 1; 415 - if (pin < range->pin_base || pin > end) { 416 - dev_err(pctldev->dev, 417 - "pin %d isn't in the range of %d to %d\n", 418 - pin, range->pin_base, end); 419 - return -EINVAL; 420 - } 421 - mux_bytes = pcs->width / BITS_PER_BYTE; 422 - data = pcs->read(pcs->base + pin * mux_bytes) & ~pcs->fmask; 423 - data |= gpio->gpio_func; 424 - pcs->write(data, pcs->base + pin * mux_bytes); 425 - return 0; 419 + return -ENOTSUPP; 426 420 } 427 421 428 422 static struct pinmux_ops pcs_pinmux_ops = { ··· 879 907 880 908 static struct of_device_id pcs_of_match[]; 881 909 882 - static int pcs_add_gpio_range(struct device_node *node, struct pcs_device *pcs) 883 - { 884 - struct pcs_gpio_range *gpio; 885 - struct device_node *child; 886 - struct resource r; 887 - const char name[] = "pinctrl-single"; 888 - u32 gpiores[PCS_MAX_GPIO_VALUES]; 889 - int ret, i = 0, mux_bytes = 
0; 890 - 891 - for_each_child_of_node(node, child) { 892 - ret = of_address_to_resource(child, 0, &r); 893 - if (ret < 0) 894 - continue; 895 - memset(gpiores, 0, sizeof(u32) * PCS_MAX_GPIO_VALUES); 896 - ret = of_property_read_u32_array(child, "pinctrl-single,gpio", 897 - gpiores, PCS_MAX_GPIO_VALUES); 898 - if (ret < 0) 899 - continue; 900 - gpio = devm_kzalloc(pcs->dev, sizeof(*gpio), GFP_KERNEL); 901 - if (!gpio) { 902 - dev_err(pcs->dev, "failed to allocate pcs gpio\n"); 903 - return -ENOMEM; 904 - } 905 - gpio->range.name = devm_kzalloc(pcs->dev, sizeof(name), 906 - GFP_KERNEL); 907 - if (!gpio->range.name) { 908 - dev_err(pcs->dev, "failed to allocate range name\n"); 909 - return -ENOMEM; 910 - } 911 - memcpy((char *)gpio->range.name, name, sizeof(name)); 912 - 913 - gpio->range.id = i++; 914 - gpio->range.base = gpiores[0]; 915 - gpio->gpio_func = gpiores[1]; 916 - mux_bytes = pcs->width / BITS_PER_BYTE; 917 - gpio->range.pin_base = (r.start - pcs->res->start) / mux_bytes; 918 - gpio->range.npins = (r.end - r.start) / mux_bytes + 1; 919 - 920 - pinctrl_add_gpio_range(pcs->pctl, &gpio->range); 921 - } 922 - return 0; 923 - } 924 - 925 910 static int pcs_probe(struct platform_device *pdev) 926 911 { 927 912 struct device_node *np = pdev->dev.of_node; ··· 974 1045 ret = -EINVAL; 975 1046 goto free; 976 1047 } 977 - 978 - ret = pcs_add_gpio_range(np, pcs); 979 - if (ret < 0) 980 - goto free; 981 1048 982 1049 dev_info(pcs->dev, "%i pins at pa %p size %u\n", 983 1050 pcs->desc.npins, pcs->base, pcs->size);
+1 -1
drivers/platform/x86/ibm_rtl.c
··· 244 244 if (force) 245 245 pr_warn("module loaded by force\n"); 246 246 /* first ensure that we are running on IBM HW */ 247 - else if (efi_enabled || !dmi_check_system(ibm_rtl_dmi_table)) 247 + else if (efi_enabled(EFI_BOOT) || !dmi_check_system(ibm_rtl_dmi_table)) 248 248 return -ENODEV; 249 249 250 250 /* Get the address for the Extended BIOS Data Area */
+4
drivers/platform/x86/samsung-laptop.c
··· 26 26 #include <linux/seq_file.h> 27 27 #include <linux/debugfs.h> 28 28 #include <linux/ctype.h> 29 + #include <linux/efi.h> 29 30 #include <acpi/video.h> 30 31 31 32 /* ··· 1544 1543 { 1545 1544 struct samsung_laptop *samsung; 1546 1545 int ret; 1546 + 1547 + if (efi_enabled(EFI_BOOT)) 1548 + return -ENODEV; 1547 1549 1548 1550 quirks = &samsung_unknown; 1549 1551 if (!force && !dmi_check_system(samsung_dmi_table))
+1
drivers/regulator/dbx500-prcmu.c
··· 14 14 #include <linux/debugfs.h> 15 15 #include <linux/seq_file.h> 16 16 #include <linux/slab.h> 17 + #include <linux/module.h> 17 18 18 19 #include "dbx500-prcmu.h" 19 20
+1 -1
drivers/regulator/tps80031-regulator.c
··· 728 728 } 729 729 } 730 730 rdev = regulator_register(&ri->rinfo->desc, &config); 731 - if (IS_ERR_OR_NULL(rdev)) { 731 + if (IS_ERR(rdev)) { 732 732 dev_err(&pdev->dev, 733 733 "register regulator failed %s\n", 734 734 ri->rinfo->desc.name);
+1 -1
drivers/scsi/isci/init.c
··· 633 633 return -ENOMEM; 634 634 pci_set_drvdata(pdev, pci_info); 635 635 636 - if (efi_enabled) 636 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 637 637 orom = isci_get_efi_var(pdev); 638 638 639 639 if (!orom)
+1 -1
drivers/staging/iio/adc/mxs-lradc.c
··· 239 239 struct mxs_lradc *lradc = iio_priv(iio); 240 240 const uint32_t chan_value = LRADC_CH_ACCUMULATE | 241 241 ((LRADC_DELAY_TIMER_LOOP - 1) << LRADC_CH_NUM_SAMPLES_OFFSET); 242 - int i, j = 0; 242 + unsigned int i, j = 0; 243 243 244 244 for_each_set_bit(i, iio->active_scan_mask, iio->masklength) { 245 245 lradc->buffer[j] = readl(lradc->base + LRADC_CH(j));
+1 -1
drivers/staging/iio/gyro/adis16080_core.c
··· 69 69 ret = spi_read(st->us, st->buf, 2); 70 70 71 71 if (ret == 0) 72 - *val = ((st->buf[0] & 0xF) << 8) | st->buf[1]; 72 + *val = sign_extend32(((st->buf[0] & 0xF) << 8) | st->buf[1], 11); 73 73 mutex_unlock(&st->buf_lock); 74 74 75 75 return ret;
+1 -1
drivers/staging/sb105x/sb_pci_mp.c
··· 3054 3054 sbdev->nr_ports = ((portnum_hex/16)*10) + (portnum_hex % 16); 3055 3055 } 3056 3056 break; 3057 - #ifdef CONFIG_PARPORT 3057 + #ifdef CONFIG_PARPORT_PC 3058 3058 case PCI_DEVICE_ID_MP2S1P : 3059 3059 sbdev->nr_ports = 2; 3060 3060
-1
drivers/staging/vt6656/bssdb.h
··· 90 90 } SRSNCapObject, *PSRSNCapObject; 91 91 92 92 // BSS info(AP) 93 - #pragma pack(1) 94 93 typedef struct tagKnownBSS { 95 94 // BSS info 96 95 BOOL bActive;
-1
drivers/staging/vt6656/int.h
··· 34 34 #include "device.h" 35 35 36 36 /*--------------------- Export Definitions -------------------------*/ 37 - #pragma pack(1) 38 37 typedef struct tagSINTData { 39 38 BYTE byTSR0; 40 39 BYTE byPkt0;
+16 -17
drivers/staging/vt6656/iocmd.h
··· 95 95 // Ioctl interface structure 96 96 // Command structure 97 97 // 98 - #pragma pack(1) 99 98 typedef struct tagSCmdRequest { 100 99 u8 name[16]; 101 100 void *data; 102 101 u16 wResult; 103 102 u16 wCmdCode; 104 - } SCmdRequest, *PSCmdRequest; 103 + } __packed SCmdRequest, *PSCmdRequest; 105 104 106 105 // 107 106 // Scan ··· 110 111 111 112 u8 ssid[SSID_MAXLEN + 2]; 112 113 113 - } SCmdScan, *PSCmdScan; 114 + } __packed SCmdScan, *PSCmdScan; 114 115 115 116 // 116 117 // BSS Join ··· 125 126 BOOL bPSEnable; 126 127 BOOL bShareKeyAuth; 127 128 128 - } SCmdBSSJoin, *PSCmdBSSJoin; 129 + } __packed SCmdBSSJoin, *PSCmdBSSJoin; 129 130 130 131 // 131 132 // Zonetype Setting ··· 136 137 BOOL bWrite; 137 138 WZONETYPE ZoneType; 138 139 139 - } SCmdZoneTypeSet, *PSCmdZoneTypeSet; 140 + } __packed SCmdZoneTypeSet, *PSCmdZoneTypeSet; 140 141 141 142 typedef struct tagSWPAResult { 142 143 char ifname[100]; ··· 144 145 u8 key_mgmt; 145 146 u8 eap_type; 146 147 BOOL authenticated; 147 - } SWPAResult, *PSWPAResult; 148 + } __packed SWPAResult, *PSWPAResult; 148 149 149 150 typedef struct tagSCmdStartAP { 150 151 ··· 156 157 BOOL bShareKeyAuth; 157 158 u8 byBasicRate; 158 159 159 - } SCmdStartAP, *PSCmdStartAP; 160 + } __packed SCmdStartAP, *PSCmdStartAP; 160 161 161 162 typedef struct tagSCmdSetWEP { 162 163 ··· 166 167 BOOL bWepKeyAvailable[WEP_NKEYS]; 167 168 u32 auWepKeyLength[WEP_NKEYS]; 168 169 169 - } SCmdSetWEP, *PSCmdSetWEP; 170 + } __packed SCmdSetWEP, *PSCmdSetWEP; 170 171 171 172 typedef struct tagSBSSIDItem { 172 173 ··· 179 180 BOOL bWEPOn; 180 181 u32 uRSSI; 181 182 182 - } SBSSIDItem; 183 + } __packed SBSSIDItem; 183 184 184 185 185 186 typedef struct tagSBSSIDList { 186 187 187 188 u32 uItem; 188 189 SBSSIDItem sBSSIDList[0]; 189 - } SBSSIDList, *PSBSSIDList; 190 + } __packed SBSSIDList, *PSBSSIDList; 190 191 191 192 192 193 typedef struct tagSNodeItem { ··· 207 208 u32 uTxAttempts; 208 209 u16 wFailureRatio; 209 210 210 - } SNodeItem; 211 + } __packed 
SNodeItem; 211 212 212 213 213 214 typedef struct tagSNodeList { ··· 215 216 u32 uItem; 216 217 SNodeItem sNodeList[0]; 217 218 218 - } SNodeList, *PSNodeList; 219 + } __packed SNodeList, *PSNodeList; 219 220 220 221 221 222 typedef struct tagSCmdLinkStatus { ··· 228 229 u32 uChannel; 229 230 u32 uLinkRate; 230 231 231 - } SCmdLinkStatus, *PSCmdLinkStatus; 232 + } __packed SCmdLinkStatus, *PSCmdLinkStatus; 232 233 233 234 // 234 235 // 802.11 counter ··· 246 247 u32 ReceivedFragmentCount; 247 248 u32 MulticastReceivedFrameCount; 248 249 u32 FCSErrorCount; 249 - } SDot11MIBCount, *PSDot11MIBCount; 250 + } __packed SDot11MIBCount, *PSDot11MIBCount; 250 251 251 252 252 253 ··· 354 355 u32 ullTxBroadcastBytes[2]; 355 356 u32 ullTxMulticastBytes[2]; 356 357 u32 ullTxDirectedBytes[2]; 357 - } SStatMIBCount, *PSStatMIBCount; 358 + } __packed SStatMIBCount, *PSStatMIBCount; 358 359 359 360 typedef struct tagSCmdValue { 360 361 361 362 u32 dwValue; 362 363 363 - } SCmdValue, *PSCmdValue; 364 + } __packed SCmdValue, *PSCmdValue; 364 365 365 366 // 366 367 // hostapd & viawget ioctl related ··· 430 431 u8 ssid[32]; 431 432 } scan_req; 432 433 } u; 433 - }; 434 + } __packed; 434 435 435 436 /*--------------------- Export Classes ----------------------------*/ 436 437
+3 -5
drivers/staging/vt6656/iowpa.h
··· 67 67 68 68 69 69 70 - #pragma pack(1) 71 70 typedef struct viawget_wpa_header { 72 71 u8 type; 73 72 u16 req_ie_len; 74 73 u16 resp_ie_len; 75 - } viawget_wpa_header; 74 + } __packed viawget_wpa_header; 76 75 77 76 struct viawget_wpa_param { 78 77 u32 cmd; ··· 112 113 u8 *buf; 113 114 } scan_results; 114 115 } u; 115 - }; 116 + } __packed; 116 117 117 - #pragma pack(1) 118 118 struct viawget_scan_result { 119 119 u8 bssid[6]; 120 120 u8 ssid[32]; ··· 128 130 int noise; 129 131 int level; 130 132 int maxrate; 131 - }; 133 + } __packed; 132 134 133 135 /*--------------------- Export Classes ----------------------------*/ 134 136
+1 -1
drivers/staging/wlan-ng/prism2mgmt.c
··· 406 406 /* SSID */ 407 407 req->ssid.status = P80211ENUM_msgitem_status_data_ok; 408 408 req->ssid.data.len = le16_to_cpu(item->ssid.len); 409 - req->ssid.data.len = min_t(u16, req->ssid.data.len, WLAN_BSSID_LEN); 409 + req->ssid.data.len = min_t(u16, req->ssid.data.len, WLAN_SSID_MAXLEN); 410 410 memcpy(req->ssid.data.data, item->ssid.data, req->ssid.data.len); 411 411 412 412 /* supported rates */
+7 -1
drivers/target/target_core_device.c
··· 941 941 942 942 int se_dev_set_fabric_max_sectors(struct se_device *dev, u32 fabric_max_sectors) 943 943 { 944 + int block_size = dev->dev_attrib.block_size; 945 + 944 946 if (dev->export_count) { 945 947 pr_err("dev[%p]: Unable to change SE Device" 946 948 " fabric_max_sectors while export_count is %d\n", ··· 980 978 /* 981 979 * Align max_sectors down to PAGE_SIZE to follow transport_allocate_data_tasks() 982 980 */ 981 + if (!block_size) { 982 + block_size = 512; 983 + pr_warn("Defaulting to 512 for zero block_size\n"); 984 + } 983 985 fabric_max_sectors = se_dev_align_max_sectors(fabric_max_sectors, 984 - dev->dev_attrib.block_size); 986 + block_size); 985 987 986 988 dev->dev_attrib.fabric_max_sectors = fabric_max_sectors; 987 989 pr_debug("dev[%p]: SE Device max_sectors changed to %u\n",
+5
drivers/target/target_core_fabric_configfs.c
··· 754 754 return -EFAULT; 755 755 } 756 756 757 + if (!(dev->dev_flags & DF_CONFIGURED)) { 758 + pr_err("se_device not configured yet, cannot port link\n"); 759 + return -ENODEV; 760 + } 761 + 757 762 tpg_ci = &lun_ci->ci_parent->ci_group->cg_item; 758 763 se_tpg = container_of(to_config_group(tpg_ci), 759 764 struct se_portal_group, tpg_group);
+8 -10
drivers/target/target_core_sbc.c
··· 58 58 buf[7] = dev->dev_attrib.block_size & 0xff; 59 59 60 60 rbuf = transport_kmap_data_sg(cmd); 61 - if (!rbuf) 62 - return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 63 - 64 - memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 65 - transport_kunmap_data_sg(cmd); 61 + if (rbuf) { 62 + memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 63 + transport_kunmap_data_sg(cmd); 64 + } 66 65 67 66 target_complete_cmd(cmd, GOOD); 68 67 return 0; ··· 96 97 buf[14] = 0x80; 97 98 98 99 rbuf = transport_kmap_data_sg(cmd); 99 - if (!rbuf) 100 - return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 101 - 102 - memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 103 - transport_kunmap_data_sg(cmd); 100 + if (rbuf) { 101 + memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 102 + transport_kunmap_data_sg(cmd); 103 + } 104 104 105 105 target_complete_cmd(cmd, GOOD); 106 106 return 0;
+11 -33
drivers/target/target_core_spc.c
··· 641 641 642 642 out: 643 643 rbuf = transport_kmap_data_sg(cmd); 644 - if (!rbuf) 645 - return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 646 - 647 - memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 648 - transport_kunmap_data_sg(cmd); 644 + if (rbuf) { 645 + memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length)); 646 + transport_kunmap_data_sg(cmd); 647 + } 649 648 650 649 if (!ret) 651 650 target_complete_cmd(cmd, GOOD); ··· 850 851 { 851 852 struct se_device *dev = cmd->se_dev; 852 853 char *cdb = cmd->t_task_cdb; 853 - unsigned char *buf, *map_buf; 854 + unsigned char buf[SE_MODE_PAGE_BUF], *rbuf; 854 855 int type = dev->transport->get_device_type(dev); 855 856 int ten = (cmd->t_task_cdb[0] == MODE_SENSE_10); 856 857 bool dbd = !!(cdb[1] & 0x08); ··· 862 863 int ret; 863 864 int i; 864 865 865 - map_buf = transport_kmap_data_sg(cmd); 866 - if (!map_buf) 867 - return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 868 - /* 869 - * If SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is not set, then we 870 - * know we actually allocated a full page. Otherwise, if the 871 - * data buffer is too small, allocate a temporary buffer so we 872 - * don't have to worry about overruns in all our INQUIRY 873 - * emulation handling. 874 - */ 875 - if (cmd->data_length < SE_MODE_PAGE_BUF && 876 - (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC)) { 877 - buf = kzalloc(SE_MODE_PAGE_BUF, GFP_KERNEL); 878 - if (!buf) { 879 - transport_kunmap_data_sg(cmd); 880 - return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 881 - } 882 - } else { 883 - buf = map_buf; 884 - } 866 + memset(buf, 0, SE_MODE_PAGE_BUF); 867 + 885 868 /* 886 869 * Skip over MODE DATA LENGTH + MEDIUM TYPE fields to byte 3 for 887 870 * MODE_SENSE_10 and byte 2 for MODE_SENSE (6). 
··· 915 934 if (page == 0x3f) { 916 935 if (subpage != 0x00 && subpage != 0xff) { 917 936 pr_warn("MODE_SENSE: Invalid subpage code: 0x%02x\n", subpage); 918 - kfree(buf); 919 - transport_kunmap_data_sg(cmd); 920 937 return TCM_INVALID_CDB_FIELD; 921 938 } 922 939 ··· 951 972 pr_err("MODE SENSE: unimplemented page/subpage: 0x%02x/0x%02x\n", 952 973 page, subpage); 953 974 954 - transport_kunmap_data_sg(cmd); 955 975 return TCM_UNKNOWN_MODE_PAGE; 956 976 957 977 set_length: ··· 959 981 else 960 982 buf[0] = length - 1; 961 983 962 - if (buf != map_buf) { 963 - memcpy(map_buf, buf, cmd->data_length); 964 - kfree(buf); 984 + rbuf = transport_kmap_data_sg(cmd); 985 + if (rbuf) { 986 + memcpy(rbuf, buf, min_t(u32, SE_MODE_PAGE_BUF, cmd->data_length)); 987 + transport_kunmap_data_sg(cmd); 965 988 } 966 989 967 - transport_kunmap_data_sg(cmd); 968 990 target_complete_cmd(cmd, GOOD); 969 991 return 0; 970 992 }
+2
drivers/tty/pty.c
··· 441 441 return pty_get_pktmode(tty, (int __user *)arg); 442 442 case TIOCSIG: /* Send signal to other side of pty */ 443 443 return pty_signal(tty, (int) arg); 444 + case TIOCGPTN: /* TTY returns ENOTTY, but glibc expects EINVAL here */ 445 + return -EINVAL; 444 446 } 445 447 return -ENOIOCTLCMD; 446 448 }
+11
drivers/tty/serial/8250/8250.c
··· 300 300 UART_FCR_R_TRIG_00 | UART_FCR_T_TRIG_00, 301 301 .flags = UART_CAP_FIFO, 302 302 }, 303 + [PORT_BRCM_TRUMANAGE] = { 304 + .name = "TruManage", 305 + .fifo_size = 1, 306 + .tx_loadsz = 1024, 307 + .flags = UART_CAP_HFIFO, 308 + }, 303 309 [PORT_8250_CIR] = { 304 310 .name = "CIR port" 305 311 } ··· 1496 1490 port->icount.tx++; 1497 1491 if (uart_circ_empty(xmit)) 1498 1492 break; 1493 + if (up->capabilities & UART_CAP_HFIFO) { 1494 + if ((serial_port_in(port, UART_LSR) & BOTH_EMPTY) != 1495 + BOTH_EMPTY) 1496 + break; 1497 + } 1499 1498 } while (--count > 0); 1500 1499 1501 1500 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+1
drivers/tty/serial/8250/8250.h
··· 40 40 #define UART_CAP_AFE (1 << 11) /* MCR-based hw flow control */ 41 41 #define UART_CAP_UUE (1 << 12) /* UART needs IER bit 6 set (Xscale) */ 42 42 #define UART_CAP_RTOIE (1 << 13) /* UART needs IER bit 4 set (Xscale, Tegra) */ 43 + #define UART_CAP_HFIFO (1 << 14) /* UART has a "hidden" FIFO */ 43 44 44 45 #define UART_BUG_QUOT (1 << 0) /* UART has buggy quot LSB */ 45 46 #define UART_BUG_TXEN (1 << 1) /* UART has buggy TX IIR status */
+1 -1
drivers/tty/serial/8250/8250_dw.c
··· 79 79 } else if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) { 80 80 /* Clear the USR and write the LCR again. */ 81 81 (void)p->serial_in(p, UART_USR); 82 - p->serial_out(p, d->last_lcr, UART_LCR); 82 + p->serial_out(p, UART_LCR, d->last_lcr); 83 83 84 84 return 1; 85 85 }
+40 -2
drivers/tty/serial/8250/8250_pci.c
··· 1085 1085 return setup_port(priv, port, 2, idx * 8, 0); 1086 1086 } 1087 1087 1088 + static int 1089 + pci_brcm_trumanage_setup(struct serial_private *priv, 1090 + const struct pciserial_board *board, 1091 + struct uart_8250_port *port, int idx) 1092 + { 1093 + int ret = pci_default_setup(priv, board, port, idx); 1094 + 1095 + port->port.type = PORT_BRCM_TRUMANAGE; 1096 + port->port.flags = (port->port.flags | UPF_FIXED_PORT | UPF_FIXED_TYPE); 1097 + return ret; 1098 + } 1099 + 1088 1100 static int skip_tx_en_setup(struct serial_private *priv, 1089 1101 const struct pciserial_board *board, 1090 1102 struct uart_8250_port *port, int idx) ··· 1313 1301 #define PCI_VENDOR_ID_AGESTAR 0x5372 1314 1302 #define PCI_DEVICE_ID_AGESTAR_9375 0x6872 1315 1303 #define PCI_VENDOR_ID_ASIX 0x9710 1316 - #define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0019 1317 1304 #define PCI_DEVICE_ID_COMMTECH_4224PCIE 0x0020 1318 1305 #define PCI_DEVICE_ID_COMMTECH_4228PCIE 0x0021 1306 + #define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0022 1307 + #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a 1319 1308 1320 1309 1321 1310 /* Unknown vendors/cards - this should not be in linux/pci_ids.h */ ··· 1967 1954 .setup = pci_xr17v35x_setup, 1968 1955 }, 1969 1956 /* 1957 + * Broadcom TruManage (NetXtreme) 1958 + */ 1959 + { 1960 + .vendor = PCI_VENDOR_ID_BROADCOM, 1961 + .device = PCI_DEVICE_ID_BROADCOM_TRUMANAGE, 1962 + .subvendor = PCI_ANY_ID, 1963 + .subdevice = PCI_ANY_ID, 1964 + .setup = pci_brcm_trumanage_setup, 1965 + }, 1966 + 1967 + /* 1970 1968 * Default "match everything" terminator entry 1971 1969 */ 1972 1970 { ··· 2172 2148 pbn_ce4100_1_115200, 2173 2149 pbn_omegapci, 2174 2150 pbn_NETMOS9900_2s_115200, 2151 + pbn_brcm_trumanage, 2175 2152 }; 2176 2153 2177 2154 /* ··· 2271 2246 2272 2247 [pbn_b0_8_1152000_200] = { 2273 2248 .flags = FL_BASE0, 2274 - .num_ports = 2, 2249 + .num_ports = 8, 2275 2250 .base_baud = 1152000, 2276 2251 .uart_offset = 0x200, 2277 2252 }, ··· 2915 2890 
[pbn_NETMOS9900_2s_115200] = { 2916 2891 .flags = FL_BASE0, 2917 2892 .num_ports = 2, 2893 + .base_baud = 115200, 2894 + }, 2895 + [pbn_brcm_trumanage] = { 2896 + .flags = FL_BASE0, 2897 + .num_ports = 1, 2898 + .reg_shift = 2, 2918 2899 .base_baud = 115200, 2919 2900 }, 2920 2901 }; ··· 4500 4469 { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_CRONYX_OMEGA, 4501 4470 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4502 4471 pbn_omegapci }, 4472 + 4473 + /* 4474 + * Broadcom TruManage 4475 + */ 4476 + { PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_BROADCOM_TRUMANAGE, 4477 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4478 + pbn_brcm_trumanage }, 4503 4479 4504 4480 /* 4505 4481 * AgeStar as-prs2-009
+3 -1
drivers/tty/serial/ifx6x60.c
··· 637 637 638 638 clear_bit(IFX_SPI_STATE_IO_AVAILABLE, &ifx_dev->flags); 639 639 mrdy_set_low(ifx_dev); 640 + del_timer(&ifx_dev->spi_timer); 640 641 clear_bit(IFX_SPI_STATE_TIMER_PENDING, &ifx_dev->flags); 641 642 tasklet_kill(&ifx_dev->io_work_tasklet); 642 643 } ··· 811 810 ifx_dev->spi_xfer.cs_change = 0; 812 811 ifx_dev->spi_xfer.speed_hz = ifx_dev->spi_dev->max_speed_hz; 813 812 /* ifx_dev->spi_xfer.speed_hz = 390625; */ 814 - ifx_dev->spi_xfer.bits_per_word = spi_bpw; 813 + ifx_dev->spi_xfer.bits_per_word = 814 + ifx_dev->spi_dev->bits_per_word; 815 815 816 816 ifx_dev->spi_xfer.tx_buf = ifx_dev->tx_buffer; 817 817 ifx_dev->spi_xfer.rx_buf = ifx_dev->rx_buffer;
+4 -2
drivers/tty/serial/mxs-auart.c
··· 253 253 struct circ_buf *xmit = &s->port.state->xmit; 254 254 255 255 if (auart_dma_enabled(s)) { 256 - int i = 0; 256 + u32 i = 0; 257 257 int size; 258 258 void *buffer = s->tx_dma_buf; 259 259 ··· 412 412 413 413 u32 ctrl = readl(u->membase + AUART_CTRL2); 414 414 415 - ctrl &= ~AUART_CTRL2_RTSEN; 415 + ctrl &= ~(AUART_CTRL2_RTSEN | AUART_CTRL2_RTS); 416 416 if (mctrl & TIOCM_RTS) { 417 417 if (tty_port_cts_enabled(&u->state->port)) 418 418 ctrl |= AUART_CTRL2_RTSEN; 419 + else 420 + ctrl |= AUART_CTRL2_RTS; 419 421 } 420 422 421 423 s->ctrl = mctrl;
-1
drivers/tty/serial/samsung.c
··· 1006 1006 1007 1007 ucon &= ucon_mask; 1008 1008 wr_regl(port, S3C2410_UCON, ucon | cfg->ucon); 1009 - wr_regl(port, S3C2410_ULCON, cfg->ulcon); 1010 1009 1011 1010 /* reset both fifos */ 1012 1011 wr_regl(port, S3C2410_UFCON, cfg->ufcon | S3C2410_UFCON_RESETBOTH);
+1 -1
drivers/tty/serial/vt8500_serial.c
··· 604 604 vt8500_port->uart.flags = UPF_IOREMAP | UPF_BOOT_AUTOCONF; 605 605 606 606 vt8500_port->clk = of_clk_get(pdev->dev.of_node, 0); 607 - if (vt8500_port->clk) { 607 + if (!IS_ERR(vt8500_port->clk)) { 608 608 vt8500_port->uart.uartclk = clk_get_rate(vt8500_port->clk); 609 609 } else { 610 610 /* use the default of 24Mhz if not specified and warn */
+97 -37
drivers/tty/vt/vt.c
··· 2987 2987 2988 2988 static struct class *vtconsole_class; 2989 2989 2990 - static int bind_con_driver(const struct consw *csw, int first, int last, 2990 + static int do_bind_con_driver(const struct consw *csw, int first, int last, 2991 2991 int deflt) 2992 2992 { 2993 2993 struct module *owner = csw->owner; ··· 2998 2998 if (!try_module_get(owner)) 2999 2999 return -ENODEV; 3000 3000 3001 - console_lock(); 3001 + WARN_CONSOLE_UNLOCKED(); 3002 3002 3003 3003 /* check if driver is registered */ 3004 3004 for (i = 0; i < MAX_NR_CON_DRIVER; i++) { ··· 3083 3083 3084 3084 retval = 0; 3085 3085 err: 3086 - console_unlock(); 3087 3086 module_put(owner); 3088 3087 return retval; 3089 3088 }; 3089 + 3090 + 3091 + static int bind_con_driver(const struct consw *csw, int first, int last, 3092 + int deflt) 3093 + { 3094 + int ret; 3095 + 3096 + console_lock(); 3097 + ret = do_bind_con_driver(csw, first, last, deflt); 3098 + console_unlock(); 3099 + return ret; 3100 + } 3090 3101 3091 3102 #ifdef CONFIG_VT_HW_CONSOLE_BINDING 3092 3103 static int con_is_graphics(const struct consw *csw, int first, int last) ··· 3135 3124 */ 3136 3125 int unbind_con_driver(const struct consw *csw, int first, int last, int deflt) 3137 3126 { 3127 + int retval; 3128 + 3129 + console_lock(); 3130 + retval = do_unbind_con_driver(csw, first, last, deflt); 3131 + console_unlock(); 3132 + return retval; 3133 + } 3134 + EXPORT_SYMBOL(unbind_con_driver); 3135 + 3136 + /* unlocked version of unbind_con_driver() */ 3137 + int do_unbind_con_driver(const struct consw *csw, int first, int last, int deflt) 3138 + { 3138 3139 struct module *owner = csw->owner; 3139 3140 const struct consw *defcsw = NULL; 3140 3141 struct con_driver *con_driver = NULL, *con_back = NULL; ··· 3155 3132 if (!try_module_get(owner)) 3156 3133 return -ENODEV; 3157 3134 3158 - console_lock(); 3135 + WARN_CONSOLE_UNLOCKED(); 3159 3136 3160 3137 /* check if driver is registered and if it is unbindable */ 3161 3138 for (i = 0; i < 
MAX_NR_CON_DRIVER; i++) { ··· 3168 3145 } 3169 3146 } 3170 3147 3171 - if (retval) { 3172 - console_unlock(); 3148 + if (retval) 3173 3149 goto err; 3174 - } 3175 3150 3176 3151 retval = -ENODEV; 3177 3152 ··· 3185 3164 } 3186 3165 } 3187 3166 3188 - if (retval) { 3189 - console_unlock(); 3167 + if (retval) 3190 3168 goto err; 3191 - } 3192 3169 3193 - if (!con_is_bound(csw)) { 3194 - console_unlock(); 3170 + if (!con_is_bound(csw)) 3195 3171 goto err; 3196 - } 3197 3172 3198 3173 first = max(first, con_driver->first); 3199 3174 last = min(last, con_driver->last); ··· 3216 3199 if (!con_is_bound(csw)) 3217 3200 con_driver->flag &= ~CON_DRIVER_FLAG_INIT; 3218 3201 3219 - console_unlock(); 3220 3202 /* ignore return value, binding should not fail */ 3221 - bind_con_driver(defcsw, first, last, deflt); 3203 + do_bind_con_driver(defcsw, first, last, deflt); 3222 3204 err: 3223 3205 module_put(owner); 3224 3206 return retval; 3225 3207 3226 3208 } 3227 - EXPORT_SYMBOL(unbind_con_driver); 3209 + EXPORT_SYMBOL_GPL(do_unbind_con_driver); 3228 3210 3229 3211 static int vt_bind(struct con_driver *con) 3230 3212 { ··· 3508 3492 } 3509 3493 EXPORT_SYMBOL_GPL(con_debug_leave); 3510 3494 3511 - /** 3512 - * register_con_driver - register console driver to console layer 3513 - * @csw: console driver 3514 - * @first: the first console to take over, minimum value is 0 3515 - * @last: the last console to take over, maximum value is MAX_NR_CONSOLES -1 3516 - * 3517 - * DESCRIPTION: This function registers a console driver which can later 3518 - * bind to a range of consoles specified by @first and @last. It will 3519 - * also initialize the console driver by calling con_startup(). 
3520 - */ 3521 - int register_con_driver(const struct consw *csw, int first, int last) 3495 + static int do_register_con_driver(const struct consw *csw, int first, int last) 3522 3496 { 3523 3497 struct module *owner = csw->owner; 3524 3498 struct con_driver *con_driver; 3525 3499 const char *desc; 3526 3500 int i, retval = 0; 3527 3501 3502 + WARN_CONSOLE_UNLOCKED(); 3503 + 3528 3504 if (!try_module_get(owner)) 3529 3505 return -ENODEV; 3530 - 3531 - console_lock(); 3532 3506 3533 3507 for (i = 0; i < MAX_NR_CON_DRIVER; i++) { 3534 3508 con_driver = &registered_con_driver[i]; ··· 3572 3566 } 3573 3567 3574 3568 err: 3575 - console_unlock(); 3576 3569 module_put(owner); 3570 + return retval; 3571 + } 3572 + 3573 + /** 3574 + * register_con_driver - register console driver to console layer 3575 + * @csw: console driver 3576 + * @first: the first console to take over, minimum value is 0 3577 + * @last: the last console to take over, maximum value is MAX_NR_CONSOLES -1 3578 + * 3579 + * DESCRIPTION: This function registers a console driver which can later 3580 + * bind to a range of consoles specified by @first and @last. It will 3581 + * also initialize the console driver by calling con_startup(). 
3582 + */ 3583 + int register_con_driver(const struct consw *csw, int first, int last) 3584 + { 3585 + int retval; 3586 + 3587 + console_lock(); 3588 + retval = do_register_con_driver(csw, first, last); 3589 + console_unlock(); 3577 3590 return retval; 3578 3591 } 3579 3592 EXPORT_SYMBOL(register_con_driver); ··· 3610 3585 */ 3611 3586 int unregister_con_driver(const struct consw *csw) 3612 3587 { 3613 - int i, retval = -ENODEV; 3588 + int retval; 3614 3589 3615 3590 console_lock(); 3591 + retval = do_unregister_con_driver(csw); 3592 + console_unlock(); 3593 + return retval; 3594 + } 3595 + EXPORT_SYMBOL(unregister_con_driver); 3596 + 3597 + int do_unregister_con_driver(const struct consw *csw) 3598 + { 3599 + int i, retval = -ENODEV; 3616 3600 3617 3601 /* cannot unregister a bound driver */ 3618 3602 if (con_is_bound(csw)) ··· 3647 3613 } 3648 3614 } 3649 3615 err: 3650 - console_unlock(); 3651 3616 return retval; 3652 3617 } 3653 - EXPORT_SYMBOL(unregister_con_driver); 3618 + EXPORT_SYMBOL_GPL(do_unregister_con_driver); 3654 3619 3655 3620 /* 3656 3621 * If we support more console drivers, this function is used 3657 3622 * when a driver wants to take over some existing consoles 3658 3623 * and become default driver for newly opened ones. 3659 3624 * 3660 - * take_over_console is basically a register followed by unbind 3625 + * take_over_console is basically a register followed by unbind 3626 + */ 3627 + int do_take_over_console(const struct consw *csw, int first, int last, int deflt) 3628 + { 3629 + int err; 3630 + 3631 + err = do_register_con_driver(csw, first, last); 3632 + /* 3633 + * If we get an busy error we still want to bind the console driver 3634 + * and return success, as we may have unbound the console driver 3635 + * but not unregistered it. 
3636 + */ 3637 + if (err == -EBUSY) 3638 + err = 0; 3639 + if (!err) 3640 + do_bind_con_driver(csw, first, last, deflt); 3641 + 3642 + return err; 3643 + } 3644 + EXPORT_SYMBOL_GPL(do_take_over_console); 3645 + 3646 + /* 3647 + * If we support more console drivers, this function is used 3648 + * when a driver wants to take over some existing consoles 3649 + * and become default driver for newly opened ones. 3650 + * 3651 + * take_over_console is basically a register followed by unbind 3661 3652 */ 3662 3653 int take_over_console(const struct consw *csw, int first, int last, int deflt) 3663 3654 { 3664 3655 int err; 3665 3656 3666 3657 err = register_con_driver(csw, first, last); 3667 - /* if we get an busy error we still want to bind the console driver 3658 + /* 3659 + * If we get an busy error we still want to bind the console driver 3668 3660 * and return success, as we may have unbound the console driver 3669 -  * but not unregistered it. 3670 - */ 3661 + * but not unregistered it. 3662 + */ 3671 3663 if (err == -EBUSY) 3672 3664 err = 0; 3673 3665 if (!err)
+1
drivers/usb/dwc3/gadget.c
··· 1605 1605 1606 1606 if (epnum == 0 || epnum == 1) { 1607 1607 dep->endpoint.maxpacket = 512; 1608 + dep->endpoint.maxburst = 1; 1608 1609 dep->endpoint.ops = &dwc3_gadget_ep0_ops; 1609 1610 if (!epnum) 1610 1611 dwc->gadget.ep0 = &dep->endpoint;
+3 -3
drivers/usb/gadget/f_fs.c
··· 1153 1153 pr_err("%s: unmapped value: %lu\n", opts, value); 1154 1154 return -EINVAL; 1155 1155 } 1156 - } 1157 - else if (!memcmp(opts, "gid", 3)) 1156 + } else if (!memcmp(opts, "gid", 3)) { 1158 1157 data->perms.gid = make_kgid(current_user_ns(), value); 1159 1158 if (!gid_valid(data->perms.gid)) { 1160 1159 pr_err("%s: unmapped value: %lu\n", opts, value); 1161 1160 return -EINVAL; 1162 1161 } 1163 - else 1162 + } else { 1164 1163 goto invalid; 1164 + } 1165 1165 break; 1166 1166 1167 1167 default:
+26 -14
drivers/usb/gadget/fsl_mxc_udc.c
··· 18 18 #include <linux/platform_device.h> 19 19 #include <linux/io.h> 20 20 21 - #include <mach/hardware.h> 22 - 23 21 static struct clk *mxc_ahb_clk; 24 22 static struct clk *mxc_per_clk; 25 23 static struct clk *mxc_ipg_clk; 26 24 27 25 /* workaround ENGcm09152 for i.MX35 */ 28 - #define USBPHYCTRL_OTGBASE_OFFSET 0x608 26 + #define MX35_USBPHYCTRL_OFFSET 0x600 27 + #define USBPHYCTRL_OTGBASE_OFFSET 0x8 29 28 #define USBPHYCTRL_EVDO (1 << 23) 30 29 31 30 int fsl_udc_clk_init(struct platform_device *pdev) ··· 58 59 clk_prepare_enable(mxc_per_clk); 59 60 60 61 /* make sure USB_CLK is running at 60 MHz +/- 1000 Hz */ 61 - if (!cpu_is_mx51()) { 62 + if (!strcmp(pdev->id_entry->name, "imx-udc-mx27")) { 62 63 freq = clk_get_rate(mxc_per_clk); 63 64 if (pdata->phy_mode != FSL_USB2_PHY_ULPI && 64 65 (freq < 59999000 || freq > 60001000)) { ··· 78 79 return ret; 79 80 } 80 81 81 - void fsl_udc_clk_finalize(struct platform_device *pdev) 82 + int fsl_udc_clk_finalize(struct platform_device *pdev) 82 83 { 83 84 struct fsl_usb2_platform_data *pdata = pdev->dev.platform_data; 84 - if (cpu_is_mx35()) { 85 - unsigned int v; 85 + int ret = 0; 86 86 87 - /* workaround ENGcm09152 for i.MX35 */ 88 - if (pdata->workaround & FLS_USB2_WORKAROUND_ENGCM09152) { 89 - v = readl(MX35_IO_ADDRESS(MX35_USB_BASE_ADDR + 90 - USBPHYCTRL_OTGBASE_OFFSET)); 91 - writel(v | USBPHYCTRL_EVDO, 92 - MX35_IO_ADDRESS(MX35_USB_BASE_ADDR + 93 - USBPHYCTRL_OTGBASE_OFFSET)); 87 + /* workaround ENGcm09152 for i.MX35 */ 88 + if (pdata->workaround & FLS_USB2_WORKAROUND_ENGCM09152) { 89 + unsigned int v; 90 + struct resource *res = platform_get_resource 91 + (pdev, IORESOURCE_MEM, 0); 92 + void __iomem *phy_regs = ioremap(res->start + 93 + MX35_USBPHYCTRL_OFFSET, 512); 94 + if (!phy_regs) { 95 + dev_err(&pdev->dev, "ioremap for phy address fails\n"); 96 + ret = -EINVAL; 97 + goto ioremap_err; 94 98 } 99 + 100 + v = readl(phy_regs + USBPHYCTRL_OTGBASE_OFFSET); 101 + writel(v | USBPHYCTRL_EVDO, 102 + phy_regs + 
USBPHYCTRL_OTGBASE_OFFSET); 103 + 104 + iounmap(phy_regs); 95 105 } 96 106 107 + 108 + ioremap_err: 97 109 /* ULPI transceivers don't need usbpll */ 98 110 if (pdata->phy_mode == FSL_USB2_PHY_ULPI) { 99 111 clk_disable_unprepare(mxc_per_clk); 100 112 mxc_per_clk = NULL; 101 113 } 114 + 115 + return ret; 102 116 } 103 117 104 118 void fsl_udc_clk_release(void)
+25 -17
drivers/usb/gadget/fsl_udc_core.c
··· 41 41 #include <linux/fsl_devices.h> 42 42 #include <linux/dmapool.h> 43 43 #include <linux/delay.h> 44 + #include <linux/of_device.h> 44 45 45 46 #include <asm/byteorder.h> 46 47 #include <asm/io.h> ··· 2439 2438 unsigned int i; 2440 2439 u32 dccparams; 2441 2440 2442 - if (strcmp(pdev->name, driver_name)) { 2443 - VDBG("Wrong device"); 2444 - return -ENODEV; 2445 - } 2446 - 2447 2441 udc_controller = kzalloc(sizeof(struct fsl_udc), GFP_KERNEL); 2448 2442 if (udc_controller == NULL) { 2449 2443 ERR("malloc udc failed\n"); ··· 2543 2547 dr_controller_setup(udc_controller); 2544 2548 } 2545 2549 2546 - fsl_udc_clk_finalize(pdev); 2550 + ret = fsl_udc_clk_finalize(pdev); 2551 + if (ret) 2552 + goto err_free_irq; 2547 2553 2548 2554 /* Setup gadget structure */ 2549 2555 udc_controller->gadget.ops = &fsl_gadget_ops; ··· 2754 2756 2755 2757 return fsl_udc_resume(NULL); 2756 2758 } 2757 - 2758 2759 /*------------------------------------------------------------------------- 2759 2760 Register entry point for the peripheral controller driver 2760 2761 --------------------------------------------------------------------------*/ 2761 - 2762 + static const struct platform_device_id fsl_udc_devtype[] = { 2763 + { 2764 + .name = "imx-udc-mx27", 2765 + }, { 2766 + .name = "imx-udc-mx51", 2767 + }, { 2768 + /* sentinel */ 2769 + } 2770 + }; 2771 + MODULE_DEVICE_TABLE(platform, fsl_udc_devtype); 2762 2772 static struct platform_driver udc_driver = { 2763 - .remove = __exit_p(fsl_udc_remove), 2773 + .remove = __exit_p(fsl_udc_remove), 2774 + /* Just for FSL i.mx SoC currently */ 2775 + .id_table = fsl_udc_devtype, 2764 2776 /* these suspend and resume are not usb suspend and resume */ 2765 - .suspend = fsl_udc_suspend, 2766 - .resume = fsl_udc_resume, 2767 - .driver = { 2768 - .name = (char *)driver_name, 2769 - .owner = THIS_MODULE, 2770 - /* udc suspend/resume called from OTG driver */ 2771 - .suspend = fsl_udc_otg_suspend, 2772 - .resume = fsl_udc_otg_resume, 2777 + 
.suspend = fsl_udc_suspend, 2778 + .resume = fsl_udc_resume, 2779 + .driver = { 2780 + .name = (char *)driver_name, 2781 + .owner = THIS_MODULE, 2782 + /* udc suspend/resume called from OTG driver */ 2783 + .suspend = fsl_udc_otg_suspend, 2784 + .resume = fsl_udc_otg_resume, 2773 2785 }, 2774 2786 }; 2775 2787
+3 -2
drivers/usb/gadget/fsl_usb2_udc.h
··· 592 592 struct platform_device; 593 593 #ifdef CONFIG_ARCH_MXC 594 594 int fsl_udc_clk_init(struct platform_device *pdev); 595 - void fsl_udc_clk_finalize(struct platform_device *pdev); 595 + int fsl_udc_clk_finalize(struct platform_device *pdev); 596 596 void fsl_udc_clk_release(void); 597 597 #else 598 598 static inline int fsl_udc_clk_init(struct platform_device *pdev) 599 599 { 600 600 return 0; 601 601 } 602 - static inline void fsl_udc_clk_finalize(struct platform_device *pdev) 602 + static inline int fsl_udc_clk_finalize(struct platform_device *pdev) 603 603 { 604 + return 0; 604 605 } 605 606 static inline void fsl_udc_clk_release(void) 606 607 {
+1 -1
drivers/usb/host/Kconfig
··· 148 148 Variation of ARC USB block used in some Freescale chips. 149 149 150 150 config USB_EHCI_MXC 151 - bool "Support for Freescale i.MX on-chip EHCI USB controller" 151 + tristate "Support for Freescale i.MX on-chip EHCI USB controller" 152 152 depends on USB_EHCI_HCD && ARCH_MXC 153 153 select USB_EHCI_ROOT_HUB_TT 154 154 ---help---
+1
drivers/usb/host/Makefile
··· 26 26 obj-$(CONFIG_USB_EHCI_HCD) += ehci-hcd.o 27 27 obj-$(CONFIG_USB_EHCI_PCI) += ehci-pci.o 28 28 obj-$(CONFIG_USB_EHCI_HCD_PLATFORM) += ehci-platform.o 29 + obj-$(CONFIG_USB_EHCI_MXC) += ehci-mxc.o 29 30 30 31 obj-$(CONFIG_USB_OXU210HP_HCD) += oxu210hp-hcd.o 31 32 obj-$(CONFIG_USB_ISP116X_HCD) += isp116x-hcd.o
+2 -10
drivers/usb/host/ehci-hcd.c
··· 74 74 #undef VERBOSE_DEBUG 75 75 #undef EHCI_URB_TRACE 76 76 77 - #ifdef DEBUG 78 - #define EHCI_STATS 79 - #endif 80 - 81 77 /* magic numbers that can affect system performance */ 82 78 #define EHCI_TUNE_CERR 3 /* 0-3 qtd retries; 0 == don't stop */ 83 79 #define EHCI_TUNE_RL_HS 4 /* nak throttle; see 4.9 */ ··· 1246 1250 #define PLATFORM_DRIVER ehci_fsl_driver 1247 1251 #endif 1248 1252 1249 - #ifdef CONFIG_USB_EHCI_MXC 1250 - #include "ehci-mxc.c" 1251 - #define PLATFORM_DRIVER ehci_mxc_driver 1252 - #endif 1253 - 1254 1253 #ifdef CONFIG_USB_EHCI_SH 1255 1254 #include "ehci-sh.c" 1256 1255 #define PLATFORM_DRIVER ehci_hcd_sh_driver ··· 1343 1352 1344 1353 #if !IS_ENABLED(CONFIG_USB_EHCI_PCI) && \ 1345 1354 !IS_ENABLED(CONFIG_USB_EHCI_HCD_PLATFORM) && \ 1346 - !defined(CONFIG_USB_CHIPIDEA_HOST) && \ 1355 + !IS_ENABLED(CONFIG_USB_CHIPIDEA_HOST) && \ 1356 + !IS_ENABLED(CONFIG_USB_EHCI_MXC) && \ 1347 1357 !defined(PLATFORM_DRIVER) && \ 1348 1358 !defined(PS3_SYSTEM_BUS_DRIVER) && \ 1349 1359 !defined(OF_PLATFORM_DRIVER) && \
+50 -70
drivers/usb/host/ehci-mxc.c
··· 17 17 * Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 18 */ 19 19 20 + #include <linux/kernel.h> 21 + #include <linux/module.h> 22 + #include <linux/io.h> 20 23 #include <linux/platform_device.h> 21 24 #include <linux/clk.h> 22 25 #include <linux/delay.h> 23 26 #include <linux/usb/otg.h> 24 27 #include <linux/usb/ulpi.h> 25 28 #include <linux/slab.h> 29 + #include <linux/usb.h> 30 + #include <linux/usb/hcd.h> 26 31 27 32 #include <linux/platform_data/usb-ehci-mxc.h> 28 33 29 34 #include <asm/mach-types.h> 30 35 36 + #include "ehci.h" 37 + 38 + #define DRIVER_DESC "Freescale On-Chip EHCI Host driver" 39 + 40 + static const char hcd_name[] = "ehci-mxc"; 41 + 31 42 #define ULPI_VIEWPORT_OFFSET 0x170 32 43 33 44 struct ehci_mxc_priv { 34 45 struct clk *usbclk, *ahbclk, *phyclk; 35 - struct usb_hcd *hcd; 36 46 }; 37 47 38 - /* called during probe() after chip reset completes */ 39 - static int ehci_mxc_setup(struct usb_hcd *hcd) 40 - { 41 - hcd->has_tt = 1; 48 + static struct hc_driver __read_mostly ehci_mxc_hc_driver; 42 49 43 - return ehci_setup(hcd); 44 - } 45 - 46 - static const struct hc_driver ehci_mxc_hc_driver = { 47 - .description = hcd_name, 48 - .product_desc = "Freescale On-Chip EHCI Host Controller", 49 - .hcd_priv_size = sizeof(struct ehci_hcd), 50 - 51 - /* 52 - * generic hardware linkage 53 - */ 54 - .irq = ehci_irq, 55 - .flags = HCD_USB2 | HCD_MEMORY, 56 - 57 - /* 58 - * basic lifecycle operations 59 - */ 60 - .reset = ehci_mxc_setup, 61 - .start = ehci_run, 62 - .stop = ehci_stop, 63 - .shutdown = ehci_shutdown, 64 - 65 - /* 66 - * managing i/o requests and associated device resources 67 - */ 68 - .urb_enqueue = ehci_urb_enqueue, 69 - .urb_dequeue = ehci_urb_dequeue, 70 - .endpoint_disable = ehci_endpoint_disable, 71 - .endpoint_reset = ehci_endpoint_reset, 72 - 73 - /* 74 - * scheduling support 75 - */ 76 - .get_frame_number = ehci_get_frame, 77 - 78 - /* 79 - * root hub support 80 - */ 81 - .hub_status_data = ehci_hub_status_data, 82 - .hub_control = ehci_hub_control, 83 - .bus_suspend = ehci_bus_suspend, 84 - .bus_resume = ehci_bus_resume, 85 - .relinquish_port = ehci_relinquish_port, 86 - .port_handed_over = ehci_port_handed_over, 87 - 88 - .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 50 + static const struct ehci_driver_overrides ehci_mxc_overrides __initdata = { 51 + .extra_priv_size = sizeof(struct ehci_mxc_priv), 89 52 }; 90 53 91 54 static int ehci_mxc_drv_probe(struct platform_device *pdev) ··· 75 112 if (!hcd) 76 113 return -ENOMEM; 77 114 78 - priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 79 - if (!priv) { 80 - ret = -ENOMEM; 81 - goto err_alloc; 82 - } 83 - 84 115 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 85 116 if (!res) { 86 117 dev_err(dev, "Found HC with no register addr. Check setup!\n"); ··· 91 134 ret = -EFAULT; 92 135 goto err_alloc; 93 136 } 137 + 138 + hcd->has_tt = 1; 139 + ehci = hcd_to_ehci(hcd); 140 + priv = (struct ehci_mxc_priv *) ehci->priv; 94 141 95 142 /* enable clocks */ 96 143 priv->usbclk = devm_clk_get(&pdev->dev, "ipg"); ··· 130 169 mdelay(10); 131 170 } 132 171 133 - ehci = hcd_to_ehci(hcd); 134 - 135 172 /* EHCI registers start at offset 0x100 */ 136 173 ehci->caps = hcd->regs + 0x100; 137 174 ehci->regs = hcd->regs + 0x100 + ··· 157 198 } 158 199 } 159 200 160 - priv->hcd = hcd; 161 - platform_set_drvdata(pdev, priv); 201 + platform_set_drvdata(pdev, hcd); 162 202 163 203 ret = usb_add_hcd(hcd, irq, IRQF_SHARED); 164 204 if (ret) ··· 202 244 static int __exit ehci_mxc_drv_remove(struct platform_device *pdev) 203 245 { 204 246 struct mxc_usbh_platform_data *pdata = pdev->dev.platform_data; 205 - struct ehci_mxc_priv *priv = platform_get_drvdata(pdev); 206 - struct usb_hcd *hcd = priv->hcd; 247 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 248 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 249 + struct ehci_mxc_priv *priv = (struct ehci_mxc_priv *) ehci->priv; 250 + 251 + usb_remove_hcd(hcd); 207 252 208 253 if (pdata && pdata->exit) 209 254 pdata->exit(pdev); ··· 214 253 if (pdata->otg) 215 254 usb_phy_shutdown(pdata->otg); 216 255 217 - usb_remove_hcd(hcd); 218 - usb_put_hcd(hcd); 219 - platform_set_drvdata(pdev, NULL); 220 - 221 256 clk_disable_unprepare(priv->usbclk); 222 257 clk_disable_unprepare(priv->ahbclk); 223 258 224 259 if (priv->phyclk) 225 260 clk_disable_unprepare(priv->phyclk); 226 261 262 + usb_put_hcd(hcd); 263 + platform_set_drvdata(pdev, NULL); 227 264 return 0; 228 265 } 229 266 230 267 static void ehci_mxc_drv_shutdown(struct platform_device *pdev) 231 268 { 232 - struct ehci_mxc_priv *priv = platform_get_drvdata(pdev); 233 - struct usb_hcd *hcd = priv->hcd; 269 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 234 270 235 271 if (hcd->driver->shutdown) 236 272 hcd->driver->shutdown(hcd); ··· 237 279 238 280 static struct platform_driver ehci_mxc_driver = { 239 281 .probe = ehci_mxc_drv_probe, 240 - .remove = __exit_p(ehci_mxc_drv_remove), 282 + .remove = ehci_mxc_drv_remove, 241 283 .shutdown = ehci_mxc_drv_shutdown, 242 284 .driver = { 243 285 .name = "mxc-ehci", 244 286 }, 245 287 }; 288 + 289 + static int __init ehci_mxc_init(void) 290 + { 291 + if (usb_disabled()) 292 + return -ENODEV; 293 + 294 + pr_info("%s: " DRIVER_DESC "\n", hcd_name); 295 + 296 + ehci_init_driver(&ehci_mxc_hc_driver, &ehci_mxc_overrides); 297 + return platform_driver_register(&ehci_mxc_driver); 298 + } 299 + module_init(ehci_mxc_init); 300 + 301 + static void __exit ehci_mxc_cleanup(void) 302 + { 303 + platform_driver_unregister(&ehci_mxc_driver); 304 + } 305 + module_exit(ehci_mxc_cleanup); 306 + 307 + MODULE_DESCRIPTION(DRIVER_DESC); 308 + MODULE_AUTHOR("Sascha Hauer"); 309 + MODULE_LICENSE("GPL");
+7
drivers/usb/host/ehci.h
··· 38 38 #endif 39 39 40 40 /* statistics can be kept for tuning/monitoring */ 41 + #ifdef DEBUG 42 + #define EHCI_STATS 43 + #endif 44 + 41 45 struct ehci_stats { 42 46 /* irq usage */ 43 47 unsigned long normal; ··· 225 221 #ifdef DEBUG 226 222 struct dentry *debug_dir; 227 223 #endif 224 + 225 + /* platform-specific data -- must come last */ 226 + unsigned long priv[0] __aligned(sizeof(s64)); 228 227 }; 229 228 230 229 /* convert between an HCD pointer and the corresponding EHCI_HCD */
+9 -6
drivers/usb/host/uhci-hcd.c
··· 447 447 return IRQ_NONE; 448 448 uhci_writew(uhci, status, USBSTS); /* Clear it */ 449 449 450 + spin_lock(&uhci->lock); 451 + if (unlikely(!uhci->is_initialized)) /* not yet configured */ 452 + goto done; 453 + 450 454 if (status & ~(USBSTS_USBINT | USBSTS_ERROR | USBSTS_RD)) { 451 455 if (status & USBSTS_HSE) 452 456 dev_err(uhci_dev(uhci), "host system error, " ··· 459 455 dev_err(uhci_dev(uhci), "host controller process " 460 456 "error, something bad happened!\n"); 461 457 if (status & USBSTS_HCH) { 462 - spin_lock(&uhci->lock); 463 458 if (uhci->rh_state >= UHCI_RH_RUNNING) { 464 459 dev_err(uhci_dev(uhci), 465 460 "host controller halted, " ··· 476 473 * pending unlinks */ 477 474 mod_timer(&hcd->rh_timer, jiffies); 478 475 } 479 - spin_unlock(&uhci->lock); 480 476 } 481 477 } 482 478 483 - if (status & USBSTS_RD) 479 + if (status & USBSTS_RD) { 480 + spin_unlock(&uhci->lock); 484 481 usb_hcd_poll_rh_status(hcd); 485 - else { 486 - spin_lock(&uhci->lock); 482 + } else { 487 483 uhci_scan_schedule(uhci); 484 + done: 488 485 spin_unlock(&uhci->lock); 489 486 } 490 487 ··· 665 662 */ 666 663 mb(); 667 664 665 + spin_lock_irq(&uhci->lock); 668 666 configure_hc(uhci); 669 667 uhci->is_initialized = 1; 670 - spin_lock_irq(&uhci->lock); 671 668 start_rh(uhci); 672 669 spin_unlock_irq(&uhci->lock); 673 670 return 0;
+2 -2
drivers/usb/musb/cppi_dma.c
··· 105 105 musb_writel(&tx->tx_complete, 0, ptr); 106 106 } 107 107 108 - static void __init cppi_pool_init(struct cppi *cppi, struct cppi_channel *c) 108 + static void cppi_pool_init(struct cppi *cppi, struct cppi_channel *c) 109 109 { 110 110 int j; 111 111 ··· 150 150 c->last_processed = NULL; 151 151 } 152 152 153 - static int __init cppi_controller_start(struct dma_controller *c) 153 + static int cppi_controller_start(struct dma_controller *c) 154 154 { 155 155 struct cppi *controller; 156 156 void __iomem *tibase;
+3
drivers/usb/serial/io_ti.c
··· 530 530 wait_queue_t wait; 531 531 unsigned long flags; 532 532 533 + if (!tty) 534 + return; 535 + 533 536 if (!timeout) 534 537 timeout = (HZ * EDGE_CLOSING_WAIT)/100; 535 538
+8 -1
drivers/usb/serial/option.c
··· 449 449 #define PETATEL_VENDOR_ID 0x1ff4 450 450 #define PETATEL_PRODUCT_NP10T 0x600e 451 451 452 + /* TP-LINK Incorporated products */ 453 + #define TPLINK_VENDOR_ID 0x2357 454 + #define TPLINK_PRODUCT_MA180 0x0201 455 + 452 456 /* some devices interfaces need special handling due to a number of reasons */ 453 457 enum option_blacklist_reason { 454 458 OPTION_BLACKLIST_NONE = 0, ··· 934 930 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0254, 0xff, 0xff, 0xff) }, 935 931 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0257, 0xff, 0xff, 0xff), /* ZTE MF821 */ 936 932 .driver_info = (kernel_ulong_t)&net_intf3_blacklist }, 937 - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0265, 0xff, 0xff, 0xff) }, 933 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0265, 0xff, 0xff, 0xff), /* ONDA MT8205 */ 934 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 938 935 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0284, 0xff, 0xff, 0xff), /* ZTE MF880 */ 939 936 .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 940 937 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0317, 0xff, 0xff, 0xff) }, ··· 1316 1311 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x00, 0x00) }, 1317 1312 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) }, 1318 1313 { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T) }, 1314 + { USB_DEVICE(TPLINK_VENDOR_ID, TPLINK_PRODUCT_MA180), 1315 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 1319 1316 { } /* Terminating entry */ 1320 1317 }; 1321 1318 MODULE_DEVICE_TABLE(usb, option_ids);
+2 -2
drivers/vfio/pci/vfio_pci_rdwr.c
··· 240 240 filled = 1; 241 241 } else { 242 242 /* Drop writes, fill reads with FF */ 243 + filled = min((size_t)(x_end - pos), count); 243 244 if (!iswrite) { 244 245 char val = 0xFF; 245 246 size_t i; 246 247 247 - for (i = 0; i < x_end - pos; i++) { 248 + for (i = 0; i < filled; i++) { 248 249 if (put_user(val, buf + i)) 249 250 goto out; 250 251 } 251 252 } 252 253 253 - filled = x_end - pos; 254 254 } 255 255 256 256 count -= filled;
+1 -3
drivers/vhost/tcm_vhost.c
··· 575 575 576 576 /* Must use ioctl VHOST_SCSI_SET_ENDPOINT */ 577 577 tv_tpg = vs->vs_tpg; 578 - if (unlikely(!tv_tpg)) { 579 - pr_err("%s endpoint not set\n", __func__); 578 + if (unlikely(!tv_tpg)) 580 579 return; 581 - } 582 580 583 581 mutex_lock(&vq->mutex); 584 582 vhost_disable_notify(&vs->dev, vq);
+38 -6
drivers/video/console/fbcon.c
··· 529 529 return retval; 530 530 } 531 531 532 + static int do_fbcon_takeover(int show_logo) 533 + { 534 + int err, i; 535 + 536 + if (!num_registered_fb) 537 + return -ENODEV; 538 + 539 + if (!show_logo) 540 + logo_shown = FBCON_LOGO_DONTSHOW; 541 + 542 + for (i = first_fb_vc; i <= last_fb_vc; i++) 543 + con2fb_map[i] = info_idx; 544 + 545 + err = do_take_over_console(&fb_con, first_fb_vc, last_fb_vc, 546 + fbcon_is_default); 547 + 548 + if (err) { 549 + for (i = first_fb_vc; i <= last_fb_vc; i++) 550 + con2fb_map[i] = -1; 551 + info_idx = -1; 552 + } else { 553 + fbcon_has_console_bind = 1; 554 + } 555 + 556 + return err; 557 + } 558 + 532 559 static int fbcon_takeover(int show_logo) 533 560 { 534 561 int err, i; ··· 842 815 * 843 816 * Maps a virtual console @unit to a frame buffer device 844 817 * @newidx. 818 + * 819 + * This should be called with the console lock held. 845 820 */ 846 821 static int set_con2fb_map(int unit, int newidx, int user) 847 822 { ··· 861 832 862 833 if (!search_for_mapped_con() || !con_is_bound(&fb_con)) { 863 834 info_idx = newidx; 864 - return fbcon_takeover(0); 835 + return do_fbcon_takeover(0); 865 836 } 866 837 867 838 if (oldidx != -1) ··· 869 840 870 841 found = search_fb_in_map(newidx); 871 842 872 - console_lock(); 873 843 con2fb_map[unit] = newidx; 874 844 if (!err && !found) 875 845 err = con2fb_acquire_newinfo(vc, info, unit, oldidx); ··· 895 867 if (!search_fb_in_map(info_idx)) 896 868 info_idx = newidx; 897 869 898 - console_unlock(); 899 870 return err; 900 871 } 901 872 ··· 3004 2977 { 3005 2978 int ret; 3006 2979 3007 - ret = unbind_con_driver(&fb_con, first_fb_vc, last_fb_vc, 2980 + ret = do_unbind_con_driver(&fb_con, first_fb_vc, last_fb_vc, 3008 2981 fbcon_is_default); 3009 2982 3010 2983 if (!ret) ··· 3019 2992 } 3020 2993 #endif /* CONFIG_VT_HW_CONSOLE_BINDING */ 3021 2994 2995 + /* called with console_lock held */ 3022 2996 static int fbcon_fb_unbind(int idx) 3023 2997 { 3024 2998 int i, new_idx = -1, ret = 0; 
··· 3046 3018 return ret; 3047 3019 } 3048 3020 3021 + /* called with console_lock held */ 3049 3022 static int fbcon_fb_unregistered(struct fb_info *info) 3050 3023 { 3051 3024 int i, idx; ··· 3079 3050 primary_device = -1; 3080 3051 3081 3052 if (!num_registered_fb) 3082 - unregister_con_driver(&fb_con); 3053 + do_unregister_con_driver(&fb_con); 3083 3054 3084 3055 return 0; 3085 3056 } 3086 3057 3058 + /* called with console_lock held */ 3087 3059 static void fbcon_remap_all(int idx) 3088 3060 { 3089 3061 int i; ··· 3129 3099 } 3130 3100 #endif /* CONFIG_FRAMEBUFFER_DETECT_PRIMARY */ 3131 3101 3102 + /* called with console_lock held */ 3132 3103 static int fbcon_fb_registered(struct fb_info *info) 3133 3104 { 3134 3105 int ret = 0, i, idx; ··· 3146 3115 } 3147 3116 3148 3117 if (info_idx != -1) 3149 - ret = fbcon_takeover(1); 3118 + ret = do_fbcon_takeover(1); 3150 3119 } else { 3151 3120 for (i = first_fb_vc; i <= last_fb_vc; i++) { 3152 3121 if (con2fb_map_boot[i] == idx) ··· 3282 3251 ret = fbcon_fb_unregistered(info); 3283 3252 break; 3284 3253 case FB_EVENT_SET_CONSOLE_MAP: 3254 + /* called with console lock held */ 3285 3255 con2fb = event->data; 3286 3256 ret = set_con2fb_map(con2fb->console - 1, 3287 3257 con2fb->framebuffer, 1);
+8 -3
drivers/video/fbmem.c
··· 1177 1177 event.data = &con2fb; 1178 1178 if (!lock_fb_info(info)) 1179 1179 return -ENODEV; 1180 + console_lock(); 1180 1181 event.info = info; 1181 1182 ret = fb_notifier_call_chain(FB_EVENT_SET_CONSOLE_MAP, &event); 1183 + console_unlock(); 1182 1184 unlock_fb_info(info); 1183 1185 break; 1184 1186 case FBIOBLANK: ··· 1652 1650 event.info = fb_info; 1653 1651 if (!lock_fb_info(fb_info)) 1654 1652 return -ENODEV; 1653 + console_lock(); 1655 1654 fb_notifier_call_chain(FB_EVENT_FB_REGISTERED, &event); 1655 + console_unlock(); 1656 1656 unlock_fb_info(fb_info); 1657 1657 return 0; 1658 1658 } ··· 1670 1666 1671 1667 if (!lock_fb_info(fb_info)) 1672 1668 return -ENODEV; 1669 + console_lock(); 1673 1670 event.info = fb_info; 1674 1671 ret = fb_notifier_call_chain(FB_EVENT_FB_UNBIND, &event); 1672 + console_unlock(); 1675 1673 unlock_fb_info(fb_info); 1676 1674 1677 1675 if (ret) ··· 1688 1682 num_registered_fb--; 1689 1683 fb_cleanup_device(fb_info); 1690 1684 event.info = fb_info; 1685 + console_lock(); 1691 1686 fb_notifier_call_chain(FB_EVENT_FB_UNREGISTERED, &event); 1687 + console_unlock(); 1692 1688 1693 1689 /* this may free fb info */ 1694 1690 put_fb_info(fb_info); ··· 1861 1853 err = 1; 1862 1854 1863 1855 if (!list_empty(&info->modelist)) { 1864 - if (!lock_fb_info(info)) 1865 - return -ENODEV; 1866 1856 event.info = info; 1867 1857 err = fb_notifier_call_chain(FB_EVENT_NEW_MODELIST, &event); 1868 - unlock_fb_info(info); 1869 1858 } 1870 1859 1871 1860 return err;
+3
drivers/video/fbsysfs.c
··· 177 177 if (i * sizeof(struct fb_videomode) != count) 178 178 return -EINVAL; 179 179 180 + if (!lock_fb_info(fb_info)) 181 + return -ENODEV; 180 182 console_lock(); 181 183 list_splice(&fb_info->modelist, &old_list); 182 184 fb_videomode_to_modelist((const struct fb_videomode *)buf, i, ··· 190 188 fb_destroy_modelist(&old_list); 191 189 192 190 console_unlock(); 191 + unlock_fb_info(fb_info); 193 192 194 193 return 0; 195 194 }
+12 -1
drivers/video/imxfb.c
··· 139 139 struct clk *clk_ahb; 140 140 struct clk *clk_per; 141 141 enum imxfb_type devtype; 142 + bool enabled; 142 143 143 144 /* 144 145 * These are the addresses we mapped ··· 537 536 538 537 static void imxfb_enable_controller(struct imxfb_info *fbi) 539 538 { 539 + 540 + if (fbi->enabled) 541 + return; 542 + 540 543 pr_debug("Enabling LCD controller\n"); 541 544 542 545 writel(fbi->screen_dma, fbi->regs + LCDC_SSA); ··· 561 556 clk_prepare_enable(fbi->clk_ipg); 562 557 clk_prepare_enable(fbi->clk_ahb); 563 558 clk_prepare_enable(fbi->clk_per); 559 + fbi->enabled = true; 564 560 565 561 if (fbi->backlight_power) 566 562 fbi->backlight_power(1); ··· 571 565 572 566 static void imxfb_disable_controller(struct imxfb_info *fbi) 573 567 { 568 + if (!fbi->enabled) 569 + return; 570 + 574 571 pr_debug("Disabling LCD controller\n"); 575 572 576 573 if (fbi->backlight_power) ··· 584 575 clk_disable_unprepare(fbi->clk_per); 585 576 clk_disable_unprepare(fbi->clk_ipg); 586 577 clk_disable_unprepare(fbi->clk_ahb); 578 + fbi->enabled = false; 587 579 588 580 writel(0, fbi->regs + LCDC_RMCR); 589 581 } ··· 739 729 740 730 memset(fbi, 0, sizeof(struct imxfb_info)); 741 731 732 + fbi->devtype = pdev->id_entry->driver_data; 733 + 742 734 strlcpy(info->fix.id, IMX_NAME, sizeof(info->fix.id)); 743 735 744 736 info->fix.type = FB_TYPE_PACKED_PIXELS; ··· 801 789 return -ENOMEM; 802 790 803 791 fbi = info->par; 804 - fbi->devtype = pdev->id_entry->driver_data; 805 792 806 793 if (!fb_mode) 807 794 fb_mode = pdata->mode[0].mode.name;
+2 -2
drivers/xen/cpu_hotplug.c
··· 25 25 static int vcpu_online(unsigned int cpu) 26 26 { 27 27 int err; 28 - char dir[32], state[32]; 28 + char dir[16], state[16]; 29 29 30 30 sprintf(dir, "cpu/%u", cpu); 31 - err = xenbus_scanf(XBT_NIL, dir, "availability", "%s", state); 31 + err = xenbus_scanf(XBT_NIL, dir, "availability", "%15s", state); 32 32 if (err != 1) { 33 33 if (!xen_initial_domain()) 34 34 printk(KERN_ERR "XENBUS: Unable to read cpu state\n");
+88 -42
drivers/xen/gntdev.c
··· 56 56 static atomic_t pages_mapped = ATOMIC_INIT(0); 57 57 58 58 static int use_ptemod; 59 + #define populate_freeable_maps use_ptemod 59 60 60 61 struct gntdev_priv { 62 + /* maps with visible offsets in the file descriptor */ 61 63 struct list_head maps; 62 - /* lock protects maps from concurrent changes */ 64 + /* maps that are not visible; will be freed on munmap. 65 + * Only populated if populate_freeable_maps == 1 */ 66 + struct list_head freeable_maps; 67 + /* lock protects maps and freeable_maps */ 63 68 spinlock_t lock; 64 69 struct mm_struct *mm; 65 70 struct mmu_notifier mn; ··· 198 193 return NULL; 199 194 } 200 195 201 - static void gntdev_put_map(struct grant_map *map) 196 + static void gntdev_put_map(struct gntdev_priv *priv, struct grant_map *map) 202 197 { 203 198 if (!map) 204 199 return; ··· 211 206 if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) { 212 207 notify_remote_via_evtchn(map->notify.event); 213 208 evtchn_put(map->notify.event); 209 + } 210 + 211 + if (populate_freeable_maps && priv) { 212 + spin_lock(&priv->lock); 213 + list_del(&map->next); 214 + spin_unlock(&priv->lock); 214 215 } 215 216 216 217 if (map->pages && !use_ptemod) ··· 312 301 313 302 if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { 314 303 int pgno = (map->notify.addr >> PAGE_SHIFT); 315 - if (pgno >= offset && pgno < offset + pages && use_ptemod) { 316 - void __user *tmp = (void __user *) 317 - map->vma->vm_start + map->notify.addr; 318 - err = copy_to_user(tmp, &err, 1); 319 - if (err) 320 - return -EFAULT; 321 - map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE; 322 - } else if (pgno >= offset && pgno < offset + pages) { 323 - uint8_t *tmp = kmap(map->pages[pgno]); 304 + if (pgno >= offset && pgno < offset + pages) { 305 + /* No need for kmap, pages are in lowmem */ 306 + uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno])); 324 307 tmp[map->notify.addr & (PAGE_SIZE-1)] = 0; 325 - kunmap(map->pages[pgno]); 326 308 map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE; 327 309 } 328 310 } ··· 380 376 static void gntdev_vma_close(struct vm_area_struct *vma) 381 377 { 382 378 struct grant_map *map = vma->vm_private_data; 379 + struct file *file = vma->vm_file; 380 + struct gntdev_priv *priv = file->private_data; 383 381 384 382 pr_debug("gntdev_vma_close %p\n", vma); 385 - map->vma = NULL; 383 + if (use_ptemod) { 384 + /* It is possible that an mmu notifier could be running 385 + * concurrently, so take priv->lock to ensure that the vma won't 386 + * vanishing during the unmap_grant_pages call, since we will 387 + * spin here until that completes. Such a concurrent call will 388 + * not do any unmapping, since that has been done prior to 389 + * closing the vma, but it may still iterate the unmap_ops list. 390 + */ 391 + spin_lock(&priv->lock); 392 + map->vma = NULL; 393 + spin_unlock(&priv->lock); 394 + } 386 395 vma->vm_private_data = NULL; 387 - gntdev_put_map(map); 396 + gntdev_put_map(priv, map); 388 397 } 389 398 390 399 static struct vm_operations_struct gntdev_vmops = { ··· 407 390 408 391 /* ------------------------------------------------------------------ */ 409 392 393 + static void unmap_if_in_range(struct grant_map *map, 394 + unsigned long start, unsigned long end) 395 + { 396 + unsigned long mstart, mend; 397 + int err; 398 + 399 + if (!map->vma) 400 + return; 401 + if (map->vma->vm_start >= end) 402 + return; 403 + if (map->vma->vm_end <= start) 404 + return; 405 + mstart = max(start, map->vma->vm_start); 406 + mend = min(end, map->vma->vm_end); 407 + pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n", 408 + map->index, map->count, 409 + map->vma->vm_start, map->vma->vm_end, 410 + start, end, mstart, mend); 411 + err = unmap_grant_pages(map, 412 + (mstart - map->vma->vm_start) >> PAGE_SHIFT, 413 + (mend - mstart) >> PAGE_SHIFT); 414 + WARN_ON(err); 415 + } 416 + 410 417 static void mn_invl_range_start(struct mmu_notifier *mn, 411 418 struct mm_struct *mm, 412 419 unsigned long start, unsigned long end) 413 420 { 414 421 struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn); 415 422 struct grant_map *map; 416 - unsigned long mstart, mend; 417 - int err; 418 423 419 424 spin_lock(&priv->lock); 420 425 list_for_each_entry(map, &priv->maps, next) { 421 - if (!map->vma) 422 - continue; 423 - if (map->vma->vm_start >= end) 424 - continue; 425 - if (map->vma->vm_end <= start) 426 - continue; 427 - mstart = max(start, map->vma->vm_start); 428 - mend = min(end, map->vma->vm_end); 429 - pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n", 430 - map->index, map->count, 431 - map->vma->vm_start, map->vma->vm_end, 432 - start, end, mstart, mend); 433 - err = unmap_grant_pages(map, 434 - (mstart - map->vma->vm_start) >> PAGE_SHIFT, 435 - (mend - mstart) >> PAGE_SHIFT); 436 - WARN_ON(err); 426 + unmap_if_in_range(map, start, end); 427 + } 428 + list_for_each_entry(map, &priv->freeable_maps, next) { 429 + unmap_if_in_range(map, start, end); 437 430 } 438 431 spin_unlock(&priv->lock); 439 432 } ··· 464 437 465 438 spin_lock(&priv->lock); 466 439 list_for_each_entry(map, &priv->maps, next) { 440 + if (!map->vma) 441 + continue; 442 + pr_debug("map %d+%d (%lx %lx)\n", 443 + map->index, map->count, 444 + map->vma->vm_start, map->vma->vm_end); 445 + err = unmap_grant_pages(map, /* offset */ 0, map->count); 446 + WARN_ON(err); 447 + } 448 + list_for_each_entry(map, &priv->freeable_maps, next) { 467 449 if (!map->vma) 468 450 continue; 469 451 pr_debug("map %d+%d (%lx %lx)\n", ··· 502 466 return -ENOMEM; 503 467 504 468 INIT_LIST_HEAD(&priv->maps); 469 + INIT_LIST_HEAD(&priv->freeable_maps); 505 470 spin_lock_init(&priv->lock); 506 471 507 472 if (use_ptemod) { ··· 537 500 while (!list_empty(&priv->maps)) { 538 501 map = list_entry(priv->maps.next, struct grant_map, next); 539 502 list_del(&map->next); 540 - gntdev_put_map(map); 503 + gntdev_put_map(NULL /* already removed */, map); 541 504 } 505 + WARN_ON(!list_empty(&priv->freeable_maps)); 542 506 543 507 if (use_ptemod) 544 508 mmu_notifier_unregister(&priv->mn, priv->mm); ··· 567 529 568 530 if (unlikely(atomic_add_return(op.count, &pages_mapped) > limit)) { 569 531 pr_debug("can't map: over limit\n"); 570 - gntdev_put_map(map); 532 + gntdev_put_map(NULL, map); 571 533 return err; 572 534 } 573 535 574 536 if (copy_from_user(map->grants, &u->refs, 575 537 sizeof(map->grants[0]) * op.count) != 0) { 576 - gntdev_put_map(map); 538 + gntdev_put_map(NULL, map); 577 - return err; 539 + return -EFAULT; 578 540 } 579 541 580 542 spin_lock(&priv->lock); ··· 603 565 map = gntdev_find_map_index(priv, op.index >> PAGE_SHIFT, op.count); 604 566 if (map) { 605 567 list_del(&map->next); 568 + if (populate_freeable_maps) 569 + list_add_tail(&map->next, &priv->freeable_maps); 606 570 err = 0; 607 571 } 608 572 spin_unlock(&priv->lock); 609 573 if (map) 610 - gntdev_put_map(map); 574 + gntdev_put_map(priv, map); 611 575 return err; 612 576 } 613 577 ··· 619 579 struct ioctl_gntdev_get_offset_for_vaddr op; 620 580 struct vm_area_struct *vma; 621 581 struct grant_map *map; 582 + int rv = -EINVAL; 622 583 623 584 if (copy_from_user(&op, u, sizeof(op)) != 0) 624 585 return -EFAULT; 625 586 pr_debug("priv %p, offset for vaddr %lx\n", priv, (unsigned long)op.vaddr); 626 587 588 + down_read(&current->mm->mmap_sem); 627 589 vma = find_vma(current->mm, op.vaddr); 628 590 if (!vma || vma->vm_ops != &gntdev_vmops) 629 - return -EINVAL; 591 + goto out_unlock; 630 592 631 593 map = vma->vm_private_data; 632 594 if (!map) 633 - return -EINVAL; 595 + goto out_unlock; 634 596 635 597 op.offset = map->index << PAGE_SHIFT; 636 598 op.count = map->count; 599 + rv = 0; 637 600 638 - if (copy_to_user(u, &op, sizeof(op)) != 0) 601 + out_unlock: 602 + up_read(&current->mm->mmap_sem); 603 + 604 + if (rv == 0 && copy_to_user(u, &op, sizeof(op)) != 0) 639 605 return -EFAULT; 640 - return 0; 606 + return rv; 641 607 } 642 608 643 609 static long gntdev_ioctl_notify(struct gntdev_priv *priv, void __user *u) ··· 824 778 out_put_map: 825 779 if (use_ptemod) 826 780 map->vma = NULL; 827 - gntdev_put_map(map); 781 + gntdev_put_map(priv, map); 828 782 return err; 829 783 } 830 784
+30 -20
drivers/xen/grant-table.c
··· 56 56 /* External tools reserve first few grant table entries. */ 57 57 #define NR_RESERVED_ENTRIES 8 58 58 #define GNTTAB_LIST_END 0xffffffff 59 - #define GREFS_PER_GRANT_FRAME \ 60 - (grant_table_version == 1 ? \ 61 - (PAGE_SIZE / sizeof(struct grant_entry_v1)) : \ 62 - (PAGE_SIZE / sizeof(union grant_entry_v2))) 63 59 64 60 static grant_ref_t **gnttab_list; 65 61 static unsigned int nr_grant_frames; ··· 150 154 static grant_status_t *grstatus; 151 155 152 156 static int grant_table_version; 157 + static int grefs_per_grant_frame; 153 158 154 159 static struct gnttab_free_callback *gnttab_free_callback_list; 155 160 ··· 764 767 unsigned int new_nr_grant_frames, extra_entries, i; 765 768 unsigned int nr_glist_frames, new_nr_glist_frames; 766 769 767 - new_nr_grant_frames = nr_grant_frames + more_frames; 768 - extra_entries = more_frames * GREFS_PER_GRANT_FRAME; 770 + BUG_ON(grefs_per_grant_frame == 0); 769 771 770 - nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP; 772 + new_nr_grant_frames = nr_grant_frames + more_frames; 773 + extra_entries = more_frames * grefs_per_grant_frame; 774 + 775 + nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP; 771 776 new_nr_glist_frames = 772 - (new_nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP; 777 + (new_nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP; 773 778 for (i = nr_glist_frames; i < new_nr_glist_frames; i++) { 774 779 gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_ATOMIC); 775 780 if (!gnttab_list[i]) ··· 779 780 } 780 781 781 782 782 - for (i = GREFS_PER_GRANT_FRAME * nr_grant_frames; 783 - i < GREFS_PER_GRANT_FRAME * new_nr_grant_frames - 1; i++) 783 + for (i = grefs_per_grant_frame * nr_grant_frames; 784 + i < grefs_per_grant_frame * new_nr_grant_frames - 1; i++) 784 785 gnttab_entry(i) = i + 1; 785 786 786 787 gnttab_entry(i) = gnttab_free_head; 787 - gnttab_free_head = grefs_per_grant_frame * nr_grant_frames; 788 789 gnttab_free_count += extra_entries; 789 790 790 791 nr_grant_frames = new_nr_grant_frames; ··· 956 957 957 958 static unsigned nr_status_frames(unsigned nr_grant_frames) 958 959 { 959 - return (nr_grant_frames * GREFS_PER_GRANT_FRAME + SPP - 1) / SPP; 960 + BUG_ON(grefs_per_grant_frame == 0); 961 + return (nr_grant_frames * grefs_per_grant_frame + SPP - 1) / SPP; 960 962 } 961 963 962 964 static int gnttab_map_frames_v1(xen_pfn_t *frames, unsigned int nr_gframes) ··· 1115 1115 rc = HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1); 1116 1116 if (rc == 0 && gsv.version == 2) { 1117 1117 grant_table_version = 2; 1118 + grefs_per_grant_frame = PAGE_SIZE / sizeof(union grant_entry_v2); 1118 1119 gnttab_interface = &gnttab_v2_ops; 1119 1120 } else if (grant_table_version == 2) { 1120 1121 /* ··· 1128 1127 panic("we need grant tables version 2, but only version 1 is available"); 1129 1128 } else { 1130 1129 grant_table_version = 1; 1130 + grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1); 1131 1131 gnttab_interface = &gnttab_v1_ops; 1132 1132 } 1133 1133 printk(KERN_INFO "Grant tables using version %d layout.\n", 1134 1134 grant_table_version); 1135 1135 } 1136 1136 1137 - int gnttab_resume(void) 1137 + static int gnttab_setup(void) 1138 1138 { 1139 1139 unsigned int max_nr_gframes; 1140 1140 1141 - gnttab_request_version(); 1142 1141 max_nr_gframes = gnttab_max_grant_frames(); 1143 1142 if (max_nr_gframes < nr_grant_frames) 1144 1143 return -ENOSYS; ··· 1161 1160 return 0; 1162 1161 } 1163 1162 1163 + int gnttab_resume(void) 1164 + { 1165 + gnttab_request_version(); 1166 + return gnttab_setup(); 1167 + } 1168 + 1164 1169 int gnttab_suspend(void) 1165 1170 { 1166 1171 gnttab_interface->unmap_frames(); ··· 1178 1171 int rc; 1179 1172 unsigned int cur, extra; 1180 1173 1174 + BUG_ON(grefs_per_grant_frame == 0); 1181 1175 cur = nr_grant_frames; 1182 - extra = ((req_entries + (GREFS_PER_GRANT_FRAME-1)) / 1183 - GREFS_PER_GRANT_FRAME); 1176 + extra = ((req_entries + (grefs_per_grant_frame-1)) / 1177 + grefs_per_grant_frame); 1184 1178 if (cur + extra > gnttab_max_grant_frames()) 1185 1179 return -ENOSPC; 1186 1180 ··· 1199 1191 unsigned int nr_init_grefs; 1200 1192 int ret; 1201 1193 1194 + gnttab_request_version(); 1202 1195 nr_grant_frames = 1; 1203 1196 boot_max_nr_grant_frames = __max_nr_grant_frames(); 1204 1197 1205 1198 /* Determine the maximum number of frames required for the 1206 1199 * grant reference free list on the current hypervisor. 1207 1200 */ 1201 + BUG_ON(grefs_per_grant_frame == 0); 1208 1202 max_nr_glist_frames = (boot_max_nr_grant_frames * 1209 - GREFS_PER_GRANT_FRAME / RPP); 1203 + grefs_per_grant_frame / RPP); 1210 1204 1211 1205 gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *), 1212 1206 GFP_KERNEL); 1213 1207 if (gnttab_list == NULL) 1214 1208 return -ENOMEM; 1215 1209 1216 - nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP; 1210 + nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP; 1217 1211 for (i = 0; i < nr_glist_frames; i++) { 1218 1212 gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_KERNEL); 1219 1213 if (gnttab_list[i] == NULL) { ··· 1224 1214 } 1225 1215 } 1226 1216 1227 - if (gnttab_resume() < 0) { 1217 + if (gnttab_setup() < 0) { 1228 1218 ret = -ENODEV; 1229 1219 goto ini_nomem; 1230 1220 } 1231 1221 1232 1222 nr_init_grefs = nr_grant_frames * grefs_per_grant_frame; 1233 1223 1234 1224 for (i = NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++) 1235 1225 gnttab_entry(i) = i + 1;
+47 -42
drivers/xen/privcmd.c
··· 199 199 LIST_HEAD(pagelist); 200 200 struct mmap_mfn_state state; 201 201 202 - if (!xen_initial_domain()) 203 - return -EPERM; 204 - 205 202 /* We only support privcmd_ioctl_mmap_batch for auto translated. */ 206 203 if (xen_feature(XENFEAT_auto_translated_physmap)) 207 204 return -ENOSYS; ··· 258 261 * -ENOENT if at least 1 -ENOENT has happened. 259 262 */ 260 263 int global_error; 261 - /* An array for individual errors */ 262 - int *err; 264 + int version; 263 265 264 266 /* User-space mfn array to store errors in the second pass for V1. */ 265 267 xen_pfn_t __user *user_mfn; 268 + /* User-space int array to store errors in the second pass for V2. */ 269 + int __user *user_err; 266 270 }; 267 271 268 272 /* auto translated dom0 note: if domU being created is PV, then mfn is ··· 286 288 &cur_page); 287 289 288 290 /* Store error code for second pass. */ 289 - *(st->err++) = ret; 291 + if (st->version == 1) { 292 + if (ret < 0) { 293 + /* 294 + * V1 encodes the error codes in the 32bit top nibble of the 295 + * mfn (with its known limitations vis-a-vis 64 bit callers). 296 + */ 297 + *mfnp |= (ret == -ENOENT) ? 298 + PRIVCMD_MMAPBATCH_PAGED_ERROR : 299 + PRIVCMD_MMAPBATCH_MFN_ERROR; 300 + } 301 + } else { /* st->version == 2 */ 302 + *((int *) mfnp) = ret; 303 + } 290 304 291 305 /* And see if it affects the global_error. */ 292 306 if (ret < 0) { ··· 315 305 return 0; 316 306 } 317 307 318 - static int mmap_return_errors_v1(void *data, void *state) 308 + static int mmap_return_errors(void *data, void *state) 319 309 { 320 - xen_pfn_t *mfnp = data; 321 310 struct mmap_batch_state *st = state; 322 - int err = *(st->err++); 323 311 324 - /* 325 - * V1 encodes the error codes in the 32bit top nibble of the 326 - * mfn (with its known limitations vis-a-vis 64 bit callers). 327 - */ 328 - *mfnp |= (err == -ENOENT) ? 
329 - PRIVCMD_MMAPBATCH_PAGED_ERROR : 330 - PRIVCMD_MMAPBATCH_MFN_ERROR; 331 - return __put_user(*mfnp, st->user_mfn++); 312 + if (st->version == 1) { 313 + xen_pfn_t mfnp = *((xen_pfn_t *) data); 314 + if (mfnp & PRIVCMD_MMAPBATCH_MFN_ERROR) 315 + return __put_user(mfnp, st->user_mfn++); 316 + else 317 + st->user_mfn++; 318 + } else { /* st->version == 2 */ 319 + int err = *((int *) data); 320 + if (err) 321 + return __put_user(err, st->user_err++); 322 + else 323 + st->user_err++; 324 + } 325 + 326 + return 0; 332 327 } 333 328 334 329 /* Allocate pfns that are then mapped with gmfns from foreign domid. Update ··· 372 357 struct vm_area_struct *vma; 373 358 unsigned long nr_pages; 374 359 LIST_HEAD(pagelist); 375 - int *err_array = NULL; 376 360 struct mmap_batch_state state; 377 - 378 - if (!xen_initial_domain()) 379 - return -EPERM; 380 361 381 362 switch (version) { 382 363 case 1: ··· 407 396 goto out; 408 397 } 409 398 410 - err_array = kcalloc(m.num, sizeof(int), GFP_KERNEL); 411 - if (err_array == NULL) { 412 - ret = -ENOMEM; 413 - goto out; 399 + if (version == 2) { 400 + /* Zero error array now to only copy back actual errors. */ 401 + if (clear_user(m.err, sizeof(int) * m.num)) { 402 + ret = -EFAULT; 403 + goto out; 404 + } 414 405 } 415 406 416 407 down_write(&mm->mmap_sem); ··· 440 427 state.va = m.addr; 441 428 state.index = 0; 442 429 state.global_error = 0; 443 - state.err = err_array; 430 + state.version = version; 444 431 445 432 /* mmap_batch_fn guarantees ret == 0 */ 446 433 BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t), ··· 448 435 449 436 up_write(&mm->mmap_sem); 450 437 451 - if (version == 1) { 452 - if (state.global_error) { 453 - /* Write back errors in second pass. 
*/ 454 - state.user_mfn = (xen_pfn_t *)m.arr; 455 - state.err = err_array; 456 - ret = traverse_pages(m.num, sizeof(xen_pfn_t), 457 - &pagelist, mmap_return_errors_v1, &state); 458 - } else 459 - ret = 0; 460 - 461 - } else if (version == 2) { 462 - ret = __copy_to_user(m.err, err_array, m.num * sizeof(int)); 463 - if (ret) 464 - ret = -EFAULT; 465 - } 438 + if (state.global_error) { 439 + /* Write back errors in second pass. */ 440 + state.user_mfn = (xen_pfn_t *)m.arr; 441 + state.user_err = m.err; 442 + ret = traverse_pages(m.num, sizeof(xen_pfn_t), 443 + &pagelist, mmap_return_errors, &state); 444 + } else 445 + ret = 0; 466 446 467 447 /* If we have not had any EFAULT-like global errors then set the global 468 448 * error to -ENOENT if necessary. */ ··· 463 457 ret = -ENOENT; 464 458 465 459 out: 466 - kfree(err_array); 467 460 free_page_list(&pagelist); 468 461 469 462 return ret;
+1 -1
drivers/xen/xen-pciback/pciback.h
··· 124 124 static inline void xen_pcibk_release_pci_dev(struct xen_pcibk_device *pdev, 125 125 struct pci_dev *dev) 126 126 { 127 - if (xen_pcibk_backend && xen_pcibk_backend->free) 127 + if (xen_pcibk_backend && xen_pcibk_backend->release) 128 128 return xen_pcibk_backend->release(pdev, dev); 129 129 } 130 130
-10
fs/Kconfig
··· 68 68 source "fs/autofs4/Kconfig" 69 69 source "fs/fuse/Kconfig" 70 70 71 - config CUSE 72 - tristate "Character device in Userspace support" 73 - depends on FUSE_FS 74 - help 75 - This FUSE extension allows character devices to be 76 - implemented in userspace. 77 - 78 - If you want to develop or use userspace character device 79 - based on CUSE, answer Y or M. 80 - 81 71 config GENERIC_ACL 82 72 bool 83 73 select FS_POSIX_ACL
+4 -2
fs/btrfs/extent-tree.c
··· 3997 3997 * We make the other tasks wait for the flush only when we can flush 3998 3998 * all things. 3999 3999 */ 4000 - if (ret && flush == BTRFS_RESERVE_FLUSH_ALL) { 4000 + if (ret && flush != BTRFS_RESERVE_NO_FLUSH) { 4001 4001 flushing = true; 4002 4002 space_info->flush = 1; 4003 4003 } ··· 5560 5560 int empty_cluster = 2 * 1024 * 1024; 5561 5561 struct btrfs_space_info *space_info; 5562 5562 int loop = 0; 5563 - int index = 0; 5563 + int index = __get_raid_index(data); 5564 5564 int alloc_type = (data & BTRFS_BLOCK_GROUP_DATA) ? 5565 5565 RESERVE_ALLOC_NO_ACCOUNT : RESERVE_ALLOC; 5566 5566 bool found_uncached_bg = false; ··· 6788 6788 &wc->flags[level]); 6789 6789 if (ret < 0) { 6790 6790 btrfs_tree_unlock_rw(eb, path->locks[level]); 6791 + path->locks[level] = 0; 6791 6792 return ret; 6792 6793 } 6793 6794 BUG_ON(wc->refs[level] == 0); 6794 6795 if (wc->refs[level] == 1) { 6795 6796 btrfs_tree_unlock_rw(eb, path->locks[level]); 6797 + path->locks[level] = 0; 6796 6798 return 1; 6797 6799 } 6798 6800 }
+12 -1
fs/btrfs/extent_map.c
··· 171 171 if (test_bit(EXTENT_FLAG_COMPRESSED, &prev->flags)) 172 172 return 0; 173 173 174 + if (test_bit(EXTENT_FLAG_LOGGING, &prev->flags) || 175 + test_bit(EXTENT_FLAG_LOGGING, &next->flags)) 176 + return 0; 177 + 174 178 if (extent_map_end(prev) == next->start && 175 179 prev->flags == next->flags && 176 180 prev->bdev == next->bdev && ··· 259 255 if (!em) 260 256 goto out; 261 257 262 - list_move(&em->list, &tree->modified_extents); 258 + if (!test_bit(EXTENT_FLAG_LOGGING, &em->flags)) 259 + list_move(&em->list, &tree->modified_extents); 263 260 em->generation = gen; 264 261 clear_bit(EXTENT_FLAG_PINNED, &em->flags); 265 262 em->mod_start = em->start; ··· 283 278 write_unlock(&tree->lock); 284 279 return ret; 285 280 281 + } 282 + 283 + void clear_em_logging(struct extent_map_tree *tree, struct extent_map *em) 284 + { 285 + clear_bit(EXTENT_FLAG_LOGGING, &em->flags); 286 + try_merge_map(tree, em); 286 287 } 287 288 288 289 /**
+1
fs/btrfs/extent_map.h
··· 69 69 int __init extent_map_init(void); 70 70 void extent_map_exit(void); 71 71 int unpin_extent_cache(struct extent_map_tree *tree, u64 start, u64 len, u64 gen); 72 + void clear_em_logging(struct extent_map_tree *tree, struct extent_map *em); 72 73 struct extent_map *search_extent_mapping(struct extent_map_tree *tree, 73 74 u64 start, u64 len); 74 75 #endif
+2 -2
fs/btrfs/file-item.c
··· 460 460 if (!contig) 461 461 offset = page_offset(bvec->bv_page) + bvec->bv_offset; 462 462 463 - if (!contig && (offset >= ordered->file_offset + ordered->len || 464 - offset < ordered->file_offset)) { 463 + if (offset >= ordered->file_offset + ordered->len || 464 + offset < ordered->file_offset) { 465 465 unsigned long bytes_left; 466 466 sums->len = this_sum_bytes; 467 467 this_sum_bytes = 0;
+7 -3
fs/btrfs/file.c
··· 2241 2241 if (lockend <= lockstart) 2242 2242 lockend = lockstart + root->sectorsize; 2243 2243 2244 + lockend--; 2244 2245 len = lockend - lockstart + 1; 2245 2246 2246 2247 len = max_t(u64, len, root->sectorsize); ··· 2308 2307 } 2309 2308 } 2310 2309 2311 - *offset = start; 2312 - free_extent_map(em); 2313 - break; 2310 + if (!test_bit(EXTENT_FLAG_PREALLOC, 2311 + &em->flags)) { 2312 + *offset = start; 2313 + free_extent_map(em); 2314 + break; 2315 + } 2314 2316 } 2315 2317 } 2316 2318
+12 -8
fs/btrfs/free-space-cache.c
··· 1862 1862 { 1863 1863 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; 1864 1864 struct btrfs_free_space *info; 1865 - int ret = 0; 1865 + int ret; 1866 + bool re_search = false; 1866 1867 1867 1868 spin_lock(&ctl->tree_lock); 1868 1869 1869 1870 again: 1871 + ret = 0; 1870 1872 if (!bytes) 1871 1873 goto out_lock; 1872 1874 ··· 1881 1879 info = tree_search_offset(ctl, offset_to_bitmap(ctl, offset), 1882 1880 1, 0); 1883 1881 if (!info) { 1884 - /* the tree logging code might be calling us before we 1885 - * have fully loaded the free space rbtree for this 1886 - * block group. So it is possible the entry won't 1887 - * be in the rbtree yet at all. The caching code 1888 - * will make sure not to put it in the rbtree if 1889 - * the logging code has pinned it. 1882 + /* 1883 + * If we found a partial bit of our free space in a 1884 + * bitmap but then couldn't find the other part this may 1885 + * be a problem, so WARN about it. 1890 1886 */ 1887 + WARN_ON(re_search); 1891 1888 goto out_lock; 1892 1889 } 1893 1890 } 1894 1891 1892 + re_search = false; 1895 1893 if (!info->bitmap) { 1896 1894 unlink_free_space(ctl, info); 1897 1895 if (offset == info->offset) { ··· 1937 1935 } 1938 1936 1939 1937 ret = remove_from_bitmap(ctl, info, &offset, &bytes); 1940 - if (ret == -EAGAIN) 1938 + if (ret == -EAGAIN) { 1939 + re_search = true; 1941 1940 goto again; 1941 + } 1942 1942 BUG_ON(ret); /* logic error */ 1943 1943 out_lock: 1944 1944 spin_unlock(&ctl->tree_lock);
+102 -35
fs/btrfs/inode.c
··· 88 88 [S_IFLNK >> S_SHIFT] = BTRFS_FT_SYMLINK, 89 89 }; 90 90 91 - static int btrfs_setsize(struct inode *inode, loff_t newsize); 91 + static int btrfs_setsize(struct inode *inode, struct iattr *attr); 92 92 static int btrfs_truncate(struct inode *inode); 93 93 static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent); 94 94 static noinline int cow_file_range(struct inode *inode, ··· 2478 2478 continue; 2479 2479 } 2480 2480 nr_truncate++; 2481 + 2482 + /* 1 for the orphan item deletion. */ 2483 + trans = btrfs_start_transaction(root, 1); 2484 + if (IS_ERR(trans)) { 2485 + ret = PTR_ERR(trans); 2486 + goto out; 2487 + } 2488 + ret = btrfs_orphan_add(trans, inode); 2489 + btrfs_end_transaction(trans, root); 2490 + if (ret) 2491 + goto out; 2492 + 2481 2493 ret = btrfs_truncate(inode); 2482 2494 } else { 2483 2495 nr_unlink++; ··· 3677 3665 block_end - cur_offset, 0); 3678 3666 if (IS_ERR(em)) { 3679 3667 err = PTR_ERR(em); 3668 + em = NULL; 3680 3669 break; 3681 3670 } 3682 3671 last_byte = min(extent_map_end(em), block_end); ··· 3761 3748 return err; 3762 3749 } 3763 3750 3764 - static int btrfs_setsize(struct inode *inode, loff_t newsize) 3751 + static int btrfs_setsize(struct inode *inode, struct iattr *attr) 3765 3752 { 3766 3753 struct btrfs_root *root = BTRFS_I(inode)->root; 3767 3754 struct btrfs_trans_handle *trans; 3768 3755 loff_t oldsize = i_size_read(inode); 3756 + loff_t newsize = attr->ia_size; 3757 + int mask = attr->ia_valid; 3769 3758 int ret; 3770 3759 3771 3760 if (newsize == oldsize) 3772 3761 return 0; 3762 + 3763 + /* 3764 + * The regular truncate() case without ATTR_CTIME and ATTR_MTIME is a 3765 + * special case where we need to update the times despite not having 3766 + * these flags set. For all other operations the VFS set these flags 3767 + * explicitly if it wants a timestamp update. 
3768 + */ 3769 + if (newsize != oldsize && (!(mask & (ATTR_CTIME | ATTR_MTIME)))) 3770 + inode->i_ctime = inode->i_mtime = current_fs_time(inode->i_sb); 3773 3771 3774 3772 if (newsize > oldsize) { 3775 3773 truncate_pagecache(inode, oldsize, newsize); ··· 3807 3783 set_bit(BTRFS_INODE_ORDERED_DATA_CLOSE, 3808 3784 &BTRFS_I(inode)->runtime_flags); 3809 3785 3786 + /* 3787 + * 1 for the orphan item we're going to add 3788 + * 1 for the orphan item deletion. 3789 + */ 3790 + trans = btrfs_start_transaction(root, 2); 3791 + if (IS_ERR(trans)) 3792 + return PTR_ERR(trans); 3793 + 3794 + /* 3795 + * We need to do this in case we fail at _any_ point during the 3796 + * actual truncate. Once we do the truncate_setsize we could 3797 + * invalidate pages which forces any outstanding ordered io to 3798 + * be instantly completed which will give us extents that need 3799 + * to be truncated. If we fail to get an orphan inode down we 3800 + * could have left over extents that were never meant to live, 3801 + * so we need to garuntee from this point on that everything 3802 + * will be consistent. 
3803 + */ 3804 + ret = btrfs_orphan_add(trans, inode); 3805 + btrfs_end_transaction(trans, root); 3806 + if (ret) 3807 + return ret; 3808 + 3810 3809 /* we don't support swapfiles, so vmtruncate shouldn't fail */ 3811 3810 truncate_setsize(inode, newsize); 3812 3811 ret = btrfs_truncate(inode); 3812 + if (ret && inode->i_nlink) 3813 + btrfs_orphan_del(NULL, inode); 3813 3814 } 3814 3815 3815 3816 return ret; ··· 3854 3805 return err; 3855 3806 3856 3807 if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) { 3857 - err = btrfs_setsize(inode, attr->ia_size); 3808 + err = btrfs_setsize(inode, attr); 3858 3809 if (err) 3859 3810 return err; 3860 3811 } ··· 5621 5572 return em; 5622 5573 if (em) { 5623 5574 /* 5624 - * if our em maps to a hole, there might 5625 - * actually be delalloc bytes behind it 5575 + * if our em maps to 5576 + * - a hole or 5577 + * - a pre-alloc extent, 5578 + * there might actually be delalloc bytes behind it. 5626 5579 */ 5627 - if (em->block_start != EXTENT_MAP_HOLE) 5580 + if (em->block_start != EXTENT_MAP_HOLE && 5581 + !test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) 5628 5582 return em; 5629 5583 else 5630 5584 hole_em = em; ··· 5709 5657 */ 5710 5658 em->block_start = hole_em->block_start; 5711 5659 em->block_len = hole_len; 5660 + if (test_bit(EXTENT_FLAG_PREALLOC, &hole_em->flags)) 5661 + set_bit(EXTENT_FLAG_PREALLOC, &em->flags); 5712 5662 } else { 5713 5663 em->start = range_start; 5714 5664 em->len = found; ··· 6969 6915 6970 6916 /* 6971 6917 * 1 for the truncate slack space 6972 - * 1 for the orphan item we're going to add 6973 - * 1 for the orphan item deletion 6974 6918 * 1 for updating the inode. 
6975 6919 */ 6976 - trans = btrfs_start_transaction(root, 4); 6920 + trans = btrfs_start_transaction(root, 2); 6977 6921 if (IS_ERR(trans)) { 6978 6922 err = PTR_ERR(trans); 6979 6923 goto out; ··· 6981 6929 ret = btrfs_block_rsv_migrate(&root->fs_info->trans_block_rsv, rsv, 6982 6930 min_size); 6983 6931 BUG_ON(ret); 6984 - 6985 - ret = btrfs_orphan_add(trans, inode); 6986 - if (ret) { 6987 - btrfs_end_transaction(trans, root); 6988 - goto out; 6989 - } 6990 6932 6991 6933 /* 6992 6934 * setattr is responsible for setting the ordered_data_close flag, ··· 7050 7004 ret = btrfs_orphan_del(trans, inode); 7051 7005 if (ret) 7052 7006 err = ret; 7053 - } else if (ret && inode->i_nlink > 0) { 7054 - /* 7055 - * Failed to do the truncate, remove us from the in memory 7056 - * orphan list. 7057 - */ 7058 - ret = btrfs_orphan_del(NULL, inode); 7059 7007 } 7060 7008 7061 7009 if (trans) { ··· 7571 7531 */ 7572 7532 int btrfs_start_delalloc_inodes(struct btrfs_root *root, int delay_iput) 7573 7533 { 7574 - struct list_head *head = &root->fs_info->delalloc_inodes; 7575 7534 struct btrfs_inode *binode; 7576 7535 struct inode *inode; 7577 7536 struct btrfs_delalloc_work *work, *next; 7578 7537 struct list_head works; 7538 + struct list_head splice; 7579 7539 int ret = 0; 7580 7540 7581 7541 if (root->fs_info->sb->s_flags & MS_RDONLY) 7582 7542 return -EROFS; 7583 7543 7584 7544 INIT_LIST_HEAD(&works); 7585 - 7545 + INIT_LIST_HEAD(&splice); 7546 + again: 7586 7547 spin_lock(&root->fs_info->delalloc_lock); 7587 - while (!list_empty(head)) { 7588 - binode = list_entry(head->next, struct btrfs_inode, 7548 + list_splice_init(&root->fs_info->delalloc_inodes, &splice); 7549 + while (!list_empty(&splice)) { 7550 + binode = list_entry(splice.next, struct btrfs_inode, 7589 7551 delalloc_inodes); 7552 + 7553 + list_del_init(&binode->delalloc_inodes); 7554 + 7590 7555 inode = igrab(&binode->vfs_inode); 7591 7556 if (!inode) 7592 - list_del_init(&binode->delalloc_inodes); 7557 + continue; 
7558 + 7559 + list_add_tail(&binode->delalloc_inodes, 7560 + &root->fs_info->delalloc_inodes); 7593 7561 spin_unlock(&root->fs_info->delalloc_lock); 7594 - if (inode) { 7595 - work = btrfs_alloc_delalloc_work(inode, 0, delay_iput); 7596 - if (!work) { 7597 - ret = -ENOMEM; 7598 - goto out; 7599 - } 7600 - list_add_tail(&work->list, &works); 7601 - btrfs_queue_worker(&root->fs_info->flush_workers, 7602 - &work->work); 7562 + 7563 + work = btrfs_alloc_delalloc_work(inode, 0, delay_iput); 7564 + if (unlikely(!work)) { 7565 + ret = -ENOMEM; 7566 + goto out; 7603 7567 } 7568 + list_add_tail(&work->list, &works); 7569 + btrfs_queue_worker(&root->fs_info->flush_workers, 7570 + &work->work); 7571 + 7604 7572 cond_resched(); 7605 7573 spin_lock(&root->fs_info->delalloc_lock); 7574 + } 7575 + spin_unlock(&root->fs_info->delalloc_lock); 7576 + 7577 + list_for_each_entry_safe(work, next, &works, list) { 7578 + list_del_init(&work->list); 7579 + btrfs_wait_and_free_delalloc_work(work); 7580 + } 7581 + 7582 + spin_lock(&root->fs_info->delalloc_lock); 7583 + if (!list_empty(&root->fs_info->delalloc_inodes)) { 7584 + spin_unlock(&root->fs_info->delalloc_lock); 7585 + goto again; 7606 7586 } 7607 7587 spin_unlock(&root->fs_info->delalloc_lock); 7608 7588 ··· 7638 7578 atomic_read(&root->fs_info->async_delalloc_pages) == 0)); 7639 7579 } 7640 7580 atomic_dec(&root->fs_info->async_submit_draining); 7581 + return 0; 7641 7582 out: 7642 7583 list_for_each_entry_safe(work, next, &works, list) { 7643 7584 list_del_init(&work->list); 7644 7585 btrfs_wait_and_free_delalloc_work(work); 7586 + } 7587 + 7588 + if (!list_empty_careful(&splice)) { 7589 + spin_lock(&root->fs_info->delalloc_lock); 7590 + list_splice_tail(&splice, &root->fs_info->delalloc_inodes); 7591 + spin_unlock(&root->fs_info->delalloc_lock); 7645 7592 } 7646 7593 return ret; 7647 7594 }
+94 -35
fs/btrfs/ioctl.c
··· 1339 1339 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, 1340 1340 1)) { 1341 1341 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 1342 - return -EINPROGRESS; 1342 + mnt_drop_write_file(file); 1343 + return -EINVAL; 1343 1344 } 1344 1345 1345 1346 mutex_lock(&root->fs_info->volume_mutex); ··· 1363 1362 printk(KERN_INFO "btrfs: resizing devid %llu\n", 1364 1363 (unsigned long long)devid); 1365 1364 } 1365 + 1366 1366 device = btrfs_find_device(root->fs_info, devid, NULL, NULL); 1367 1367 if (!device) { 1368 1368 printk(KERN_INFO "btrfs: resizer unable to find device %llu\n", ··· 1371 1369 ret = -EINVAL; 1372 1370 goto out_free; 1373 1371 } 1374 - if (device->fs_devices && device->fs_devices->seeding) { 1372 + 1373 + if (!device->writeable) { 1375 1374 printk(KERN_INFO "btrfs: resizer unable to apply on " 1376 - "seeding device %llu\n", 1375 + "readonly device %llu\n", 1377 1376 (unsigned long long)devid); 1378 1377 ret = -EINVAL; 1379 1378 goto out_free; ··· 1446 1443 kfree(vol_args); 1447 1444 out: 1448 1445 mutex_unlock(&root->fs_info->volume_mutex); 1449 - mnt_drop_write_file(file); 1450 1446 atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0); 1447 + mnt_drop_write_file(file); 1451 1448 return ret; 1452 1449 } 1453 1450 ··· 2098 2095 err = inode_permission(inode, MAY_WRITE | MAY_EXEC); 2099 2096 if (err) 2100 2097 goto out_dput; 2101 - 2102 - /* check if subvolume may be deleted by a non-root user */ 2103 - err = btrfs_may_delete(dir, dentry, 1); 2104 - if (err) 2105 - goto out_dput; 2106 2098 } 2099 + 2100 + /* check if subvolume may be deleted by a user */ 2101 + err = btrfs_may_delete(dir, dentry, 1); 2102 + if (err) 2103 + goto out_dput; 2107 2104 2108 2105 if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID) { 2109 2106 err = -EINVAL; ··· 2186 2183 struct btrfs_ioctl_defrag_range_args *range; 2187 2184 int ret; 2188 2185 2189 - if (btrfs_root_readonly(root)) 2190 - return -EROFS; 2186 
+ ret = mnt_want_write_file(file); 2187 + if (ret) 2188 + return ret; 2191 2189 2192 2190 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, 2193 2191 1)) { 2194 2192 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 2195 - return -EINPROGRESS; 2193 + mnt_drop_write_file(file); 2194 + return -EINVAL; 2196 2195 } 2197 - ret = mnt_want_write_file(file); 2198 - if (ret) { 2199 - atomic_set(&root->fs_info->mutually_exclusive_operation_running, 2200 - 0); 2201 - return ret; 2196 + 2197 + if (btrfs_root_readonly(root)) { 2198 + ret = -EROFS; 2199 + goto out; 2202 2200 } 2203 2201 2204 2202 switch (inode->i_mode & S_IFMT) { ··· 2251 2247 ret = -EINVAL; 2252 2248 } 2253 2249 out: 2254 - mnt_drop_write_file(file); 2255 2250 atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0); 2251 + mnt_drop_write_file(file); 2256 2252 return ret; 2257 2253 } 2258 2254 ··· 2267 2263 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, 2268 2264 1)) { 2269 2265 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 2270 - return -EINPROGRESS; 2266 + return -EINVAL; 2271 2267 } 2272 2268 2273 2269 mutex_lock(&root->fs_info->volume_mutex); ··· 2304 2300 1)) { 2305 2301 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 2306 2302 mnt_drop_write_file(file); 2307 - return -EINPROGRESS; 2303 + return -EINVAL; 2308 2304 } 2309 2305 2310 2306 mutex_lock(&root->fs_info->volume_mutex); ··· 2320 2316 kfree(vol_args); 2321 2317 out: 2322 2318 mutex_unlock(&root->fs_info->volume_mutex); 2323 - mnt_drop_write_file(file); 2324 2319 atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0); 2320 + mnt_drop_write_file(file); 2325 2321 return ret; 2326 2322 } 2327 2323 ··· 3441 3437 struct btrfs_fs_info *fs_info = root->fs_info; 3442 3438 struct btrfs_ioctl_balance_args *bargs; 3443 3439 struct btrfs_balance_control *bctl; 3440 + bool need_unlock; /* for mut. 
excl. ops lock */ 3444 3441 int ret; 3445 - int need_to_clear_lock = 0; 3446 3442 3447 3443 if (!capable(CAP_SYS_ADMIN)) 3448 3444 return -EPERM; ··· 3451 3447 if (ret) 3452 3448 return ret; 3453 3449 3454 - mutex_lock(&fs_info->volume_mutex); 3450 + again: 3451 + if (!atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1)) { 3452 + mutex_lock(&fs_info->volume_mutex); 3453 + mutex_lock(&fs_info->balance_mutex); 3454 + need_unlock = true; 3455 + goto locked; 3456 + } 3457 + 3458 + /* 3459 + * mut. excl. ops lock is locked. Three possibilites: 3460 + * (1) some other op is running 3461 + * (2) balance is running 3462 + * (3) balance is paused -- special case (think resume) 3463 + */ 3455 3464 mutex_lock(&fs_info->balance_mutex); 3465 + if (fs_info->balance_ctl) { 3466 + /* this is either (2) or (3) */ 3467 + if (!atomic_read(&fs_info->balance_running)) { 3468 + mutex_unlock(&fs_info->balance_mutex); 3469 + if (!mutex_trylock(&fs_info->volume_mutex)) 3470 + goto again; 3471 + mutex_lock(&fs_info->balance_mutex); 3472 + 3473 + if (fs_info->balance_ctl && 3474 + !atomic_read(&fs_info->balance_running)) { 3475 + /* this is (3) */ 3476 + need_unlock = false; 3477 + goto locked; 3478 + } 3479 + 3480 + mutex_unlock(&fs_info->balance_mutex); 3481 + mutex_unlock(&fs_info->volume_mutex); 3482 + goto again; 3483 + } else { 3484 + /* this is (2) */ 3485 + mutex_unlock(&fs_info->balance_mutex); 3486 + ret = -EINPROGRESS; 3487 + goto out; 3488 + } 3489 + } else { 3490 + /* this is (1) */ 3491 + mutex_unlock(&fs_info->balance_mutex); 3492 + pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 3493 + ret = -EINVAL; 3494 + goto out; 3495 + } 3496 + 3497 + locked: 3498 + BUG_ON(!atomic_read(&fs_info->mutually_exclusive_operation_running)); 3456 3499 3457 3500 if (arg) { 3458 3501 bargs = memdup_user(arg, sizeof(*bargs)); 3459 3502 if (IS_ERR(bargs)) { 3460 3503 ret = PTR_ERR(bargs); 3461 - goto out; 3504 + goto out_unlock; 3462 3505 } 3463 3506 3464 
3507 if (bargs->flags & BTRFS_BALANCE_RESUME) { ··· 3525 3474 bargs = NULL; 3526 3475 } 3527 3476 3528 - if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, 3529 - 1)) { 3530 - pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 3477 + if (fs_info->balance_ctl) { 3531 3478 ret = -EINPROGRESS; 3532 3479 goto out_bargs; 3533 3480 } 3534 - need_to_clear_lock = 1; 3535 3481 3536 3482 bctl = kzalloc(sizeof(*bctl), GFP_NOFS); 3537 3483 if (!bctl) { ··· 3549 3501 } 3550 3502 3551 3503 do_balance: 3552 - ret = btrfs_balance(bctl, bargs); 3553 3504 /* 3554 - * bctl is freed in __cancel_balance or in free_fs_info if 3555 - * restriper was paused all the way until unmount 3505 + * Ownership of bctl and mutually_exclusive_operation_running 3506 + * goes to to btrfs_balance. bctl is freed in __cancel_balance, 3507 + * or, if restriper was paused all the way until unmount, in 3508 + * free_fs_info. mutually_exclusive_operation_running is 3509 + * cleared in __cancel_balance. 3556 3510 */ 3511 + need_unlock = false; 3512 + 3513 + ret = btrfs_balance(bctl, bargs); 3514 + 3557 3515 if (arg) { 3558 3516 if (copy_to_user(arg, bargs, sizeof(*bargs))) 3559 3517 ret = -EFAULT; ··· 3567 3513 3568 3514 out_bargs: 3569 3515 kfree(bargs); 3570 - out: 3571 - if (need_to_clear_lock) 3572 - atomic_set(&root->fs_info->mutually_exclusive_operation_running, 3573 - 0); 3516 + out_unlock: 3574 3517 mutex_unlock(&fs_info->balance_mutex); 3575 3518 mutex_unlock(&fs_info->volume_mutex); 3519 + if (need_unlock) 3520 + atomic_set(&fs_info->mutually_exclusive_operation_running, 0); 3521 + out: 3576 3522 mnt_drop_write_file(file); 3577 3523 return ret; 3578 3524 } ··· 3750 3696 if (IS_ERR(sa)) { 3751 3697 ret = PTR_ERR(sa); 3752 3698 goto drop_write; 3699 + } 3700 + 3701 + if (!sa->qgroupid) { 3702 + ret = -EINVAL; 3703 + goto out; 3753 3704 } 3754 3705 3755 3706 trans = btrfs_join_transaction(root);
+19 -1
fs/btrfs/qgroup.c
··· 379 379 380 380 ret = add_relation_rb(fs_info, found_key.objectid, 381 381 found_key.offset); 382 + if (ret == -ENOENT) { 383 + printk(KERN_WARNING 384 + "btrfs: orphan qgroup relation 0x%llx->0x%llx\n", 385 + (unsigned long long)found_key.objectid, 386 + (unsigned long long)found_key.offset); 387 + ret = 0; /* ignore the error */ 388 + } 382 389 if (ret) 383 390 goto out; 384 391 next2: ··· 963 956 struct btrfs_fs_info *fs_info, u64 qgroupid) 964 957 { 965 958 struct btrfs_root *quota_root; 959 + struct btrfs_qgroup *qgroup; 966 960 int ret = 0; 967 961 968 962 quota_root = fs_info->quota_root; 969 963 if (!quota_root) 970 964 return -EINVAL; 971 965 966 + /* check if there are no relations to this qgroup */ 967 + spin_lock(&fs_info->qgroup_lock); 968 + qgroup = find_qgroup_rb(fs_info, qgroupid); 969 + if (qgroup) { 970 + if (!list_empty(&qgroup->groups) || !list_empty(&qgroup->members)) { 971 + spin_unlock(&fs_info->qgroup_lock); 972 + return -EBUSY; 973 + } 974 + } 975 + spin_unlock(&fs_info->qgroup_lock); 976 + 972 977 ret = del_qgroup_item(trans, quota_root, qgroupid); 973 978 974 979 spin_lock(&fs_info->qgroup_lock); 975 980 del_qgroup_rb(quota_root->fs_info, qgroupid); 976 - 977 981 spin_unlock(&fs_info->qgroup_lock); 978 982 979 983 return ret;
+3 -1
fs/btrfs/send.c
··· 1814 1814 (unsigned long)nce->ino); 1815 1815 if (!nce_head) { 1816 1816 nce_head = kmalloc(sizeof(*nce_head), GFP_NOFS); 1817 - if (!nce_head) 1817 + if (!nce_head) { 1818 + kfree(nce); 1818 1819 return -ENOMEM; 1820 + } 1819 1821 INIT_LIST_HEAD(nce_head); 1820 1822 1821 1823 ret = radix_tree_insert(&sctx->name_cache, nce->ino, nce_head);
+1 -1
fs/btrfs/super.c
··· 267 267 function, line, errstr); 268 268 return; 269 269 } 270 - trans->transaction->aborted = errno; 270 + ACCESS_ONCE(trans->transaction->aborted) = errno; 271 271 __btrfs_std_error(root->fs_info, function, line, errno, NULL); 272 272 } 273 273 /*
+18 -1
fs/btrfs/transaction.c
··· 1468 1468 goto cleanup_transaction; 1469 1469 } 1470 1470 1471 - if (cur_trans->aborted) { 1471 + /* Stop the commit early if ->aborted is set */ 1472 + if (unlikely(ACCESS_ONCE(cur_trans->aborted))) { 1472 1473 ret = cur_trans->aborted; 1473 1474 goto cleanup_transaction; 1474 1475 } ··· 1575 1574 wait_event(cur_trans->writer_wait, 1576 1575 atomic_read(&cur_trans->num_writers) == 1); 1577 1576 1577 + /* ->aborted might be set after the previous check, so check it */ 1578 + if (unlikely(ACCESS_ONCE(cur_trans->aborted))) { 1579 + ret = cur_trans->aborted; 1580 + goto cleanup_transaction; 1581 + } 1578 1582 /* 1579 1583 * the reloc mutex makes sure that we stop 1580 1584 * the balancing code from coming in and moving ··· 1658 1652 1659 1653 ret = commit_cowonly_roots(trans, root); 1660 1654 if (ret) { 1655 + mutex_unlock(&root->fs_info->tree_log_mutex); 1656 + mutex_unlock(&root->fs_info->reloc_mutex); 1657 + goto cleanup_transaction; 1658 + } 1659 + 1660 + /* 1661 + * The tasks which save the space cache and inode cache may also 1662 + * update ->aborted, check it. 1663 + */ 1664 + if (unlikely(ACCESS_ONCE(cur_trans->aborted))) { 1665 + ret = cur_trans->aborted; 1661 1666 mutex_unlock(&root->fs_info->tree_log_mutex); 1662 1667 mutex_unlock(&root->fs_info->reloc_mutex); 1663 1668 goto cleanup_transaction;
+8 -2
fs/btrfs/tree-log.c
··· 3357 3357 if (skip_csum) 3358 3358 return 0; 3359 3359 3360 + if (em->compress_type) { 3361 + csum_offset = 0; 3362 + csum_len = block_len; 3363 + } 3364 + 3360 3365 /* block start is already adjusted for the file extent offset. */ 3361 3366 ret = btrfs_lookup_csums_range(log->fs_info->csum_root, 3362 3367 em->block_start + csum_offset, ··· 3415 3410 em = list_entry(extents.next, struct extent_map, list); 3416 3411 3417 3412 list_del_init(&em->list); 3418 - clear_bit(EXTENT_FLAG_LOGGING, &em->flags); 3419 3413 3420 3414 /* 3421 3415 * If we had an error we just need to delete everybody from our 3422 3416 * private list. 3423 3417 */ 3424 3418 if (ret) { 3419 + clear_em_logging(tree, em); 3425 3420 free_extent_map(em); 3426 3421 continue; 3427 3422 } ··· 3429 3424 write_unlock(&tree->lock); 3430 3425 3431 3426 ret = log_one_extent(trans, inode, root, em, path); 3432 - free_extent_map(em); 3433 3427 write_lock(&tree->lock); 3428 + clear_em_logging(tree, em); 3429 + free_extent_map(em); 3434 3430 } 3435 3431 WARN_ON(!list_empty(&extents)); 3436 3432 write_unlock(&tree->lock);
+17 -6
fs/btrfs/volumes.c
··· 1431 1431 } 1432 1432 } else { 1433 1433 ret = btrfs_get_bdev_and_sb(device_path, 1434 - FMODE_READ | FMODE_EXCL, 1434 + FMODE_WRITE | FMODE_EXCL, 1435 1435 root->fs_info->bdev_holder, 0, 1436 1436 &bdev, &bh); 1437 1437 if (ret) ··· 2614 2614 cache = btrfs_lookup_block_group(fs_info, chunk_offset); 2615 2615 chunk_used = btrfs_block_group_used(&cache->item); 2616 2616 2617 - user_thresh = div_factor_fine(cache->key.offset, bargs->usage); 2617 + if (bargs->usage == 0) 2618 + user_thresh = 0; 2619 + else if (bargs->usage > 100) 2620 + user_thresh = cache->key.offset; 2621 + else 2622 + user_thresh = div_factor_fine(cache->key.offset, 2623 + bargs->usage); 2624 + 2618 2625 if (chunk_used < user_thresh) 2619 2626 ret = 0; 2620 2627 ··· 2966 2959 unset_balance_control(fs_info); 2967 2960 ret = del_balance_item(fs_info->tree_root); 2968 2961 BUG_ON(ret); 2962 + 2963 + atomic_set(&fs_info->mutually_exclusive_operation_running, 0); 2969 2964 } 2970 2965 2971 2966 void update_ioctl_balance_args(struct btrfs_fs_info *fs_info, int lock, ··· 3147 3138 out: 3148 3139 if (bctl->flags & BTRFS_BALANCE_RESUME) 3149 3140 __cancel_balance(fs_info); 3150 - else 3141 + else { 3151 3142 kfree(bctl); 3143 + atomic_set(&fs_info->mutually_exclusive_operation_running, 0); 3144 + } 3152 3145 return ret; 3153 3146 } 3154 3147 ··· 3167 3156 ret = btrfs_balance(fs_info->balance_ctl, NULL); 3168 3157 } 3169 3158 3170 - atomic_set(&fs_info->mutually_exclusive_operation_running, 0); 3171 3159 mutex_unlock(&fs_info->balance_mutex); 3172 3160 mutex_unlock(&fs_info->volume_mutex); 3173 3161 ··· 3189 3179 return 0; 3190 3180 } 3191 3181 3192 - WARN_ON(atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1)); 3193 3182 tsk = kthread_run(balance_kthread, fs_info, "btrfs-balance"); 3194 3183 if (IS_ERR(tsk)) 3195 3184 return PTR_ERR(tsk); ··· 3241 3232 btrfs_disk_balance_args_to_cpu(&bctl->meta, &disk_bargs); 3242 3233 btrfs_balance_sys(leaf, item, &disk_bargs); 3243 3234 
btrfs_disk_balance_args_to_cpu(&bctl->sys, &disk_bargs); 3235 + 3236 + WARN_ON(atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1)); 3244 3237 3245 3238 mutex_lock(&fs_info->volume_mutex); 3246 3239 mutex_lock(&fs_info->balance_mutex); ··· 3507 3496 { 1, 1, 2, 2, 2, 2 /* raid1 */ }, 3508 3497 { 1, 2, 1, 1, 1, 2 /* dup */ }, 3509 3498 { 1, 1, 0, 2, 1, 1 /* raid0 */ }, 3510 - { 1, 1, 0, 1, 1, 1 /* single */ }, 3499 + { 1, 1, 1, 1, 1, 1 /* single */ }, 3511 3500 }; 3512 3501 3513 3502 static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+2
fs/cifs/cifs_dfs_ref.c
··· 226 226 compose_mount_options_err: 227 227 kfree(mountdata); 228 228 mountdata = ERR_PTR(rc); 229 + kfree(*devname); 230 + *devname = NULL; 229 231 goto compose_mount_options_out; 230 232 } 231 233
+1 -1
fs/cifs/connect.c
··· 1917 1917 } 1918 1918 case AF_INET6: { 1919 1919 struct sockaddr_in6 *saddr6 = (struct sockaddr_in6 *)srcaddr; 1920 - struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)&rhs; 1920 + struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)rhs; 1921 1921 return ipv6_addr_equal(&saddr6->sin6_addr, &vaddr6->sin6_addr); 1922 1922 } 1923 1923 default:
+6 -7
fs/f2fs/acl.c
··· 191 191 retval = f2fs_getxattr(inode, name_index, "", value, retval); 192 192 } 193 193 194 - if (retval < 0) { 195 - if (retval == -ENODATA) 196 - acl = NULL; 197 - else 198 - acl = ERR_PTR(retval); 199 - } else { 194 + if (retval > 0) 200 195 acl = f2fs_acl_from_disk(value, retval); 201 - } 196 + else if (retval == -ENODATA) 197 + acl = NULL; 198 + else 199 + acl = ERR_PTR(retval); 202 200 kfree(value); 201 + 203 202 if (!IS_ERR(acl)) 204 203 set_cached_acl(inode, type, acl); 205 204
+1 -2
fs/f2fs/checkpoint.c
··· 214 214 goto retry; 215 215 } 216 216 new->ino = ino; 217 - INIT_LIST_HEAD(&new->list); 218 217 219 218 /* add new_oentry into list which is sorted by inode number */ 220 219 if (orphan) { ··· 771 772 sbi->n_orphans = 0; 772 773 } 773 774 774 - int create_checkpoint_caches(void) 775 + int __init create_checkpoint_caches(void) 775 776 { 776 777 orphan_entry_slab = f2fs_kmem_cache_create("f2fs_orphan_entry", 777 778 sizeof(struct orphan_inode_entry), NULL);
+16 -1
fs/f2fs/data.c
··· 547 547 548 548 #define MAX_DESIRED_PAGES_WP 4096 549 549 550 + static int __f2fs_writepage(struct page *page, struct writeback_control *wbc, 551 + void *data) 552 + { 553 + struct address_space *mapping = data; 554 + int ret = mapping->a_ops->writepage(page, wbc); 555 + mapping_set_error(mapping, ret); 556 + return ret; 557 + } 558 + 550 559 static int f2fs_write_data_pages(struct address_space *mapping, 551 560 struct writeback_control *wbc) 552 561 { ··· 572 563 573 564 if (!S_ISDIR(inode->i_mode)) 574 565 mutex_lock(&sbi->writepages); 575 - ret = generic_writepages(mapping, wbc); 566 + ret = write_cache_pages(mapping, wbc, __f2fs_writepage, mapping); 576 567 if (!S_ISDIR(inode->i_mode)) 577 568 mutex_unlock(&sbi->writepages); 578 569 f2fs_submit_bio(sbi, DATA, (wbc->sync_mode == WB_SYNC_ALL)); ··· 698 689 return 0; 699 690 } 700 691 692 + static sector_t f2fs_bmap(struct address_space *mapping, sector_t block) 693 + { 694 + return generic_block_bmap(mapping, block, get_data_block_ro); 695 + } 696 + 701 697 const struct address_space_operations f2fs_dblock_aops = { 702 698 .readpage = f2fs_read_data_page, 703 699 .readpages = f2fs_read_data_pages, ··· 714 700 .invalidatepage = f2fs_invalidate_data_page, 715 701 .releasepage = f2fs_release_data_page, 716 702 .direct_IO = f2fs_direct_IO, 703 + .bmap = f2fs_bmap, 717 704 };
+21 -29
fs/f2fs/debug.c
··· 26 26 27 27 static LIST_HEAD(f2fs_stat_list); 28 28 static struct dentry *debugfs_root; 29 + static DEFINE_MUTEX(f2fs_stat_mutex); 29 30 30 31 static void update_general_status(struct f2fs_sb_info *sbi) 31 32 { ··· 181 180 int i = 0; 182 181 int j; 183 182 183 + mutex_lock(&f2fs_stat_mutex); 184 184 list_for_each_entry_safe(si, next, &f2fs_stat_list, stat_list) { 185 185 186 - mutex_lock(&si->stat_lock); 187 - if (!si->sbi) { 188 - mutex_unlock(&si->stat_lock); 189 - continue; 190 - } 191 186 update_general_status(si->sbi); 192 187 193 188 seq_printf(s, "\n=====[ partition info. #%d ]=====\n", i++); 194 - seq_printf(s, "[SB: 1] [CP: 2] [NAT: %d] [SIT: %d] ", 195 - si->nat_area_segs, si->sit_area_segs); 189 + seq_printf(s, "[SB: 1] [CP: 2] [SIT: %d] [NAT: %d] ", 190 + si->sit_area_segs, si->nat_area_segs); 196 191 seq_printf(s, "[SSA: %d] [MAIN: %d", 197 192 si->ssa_area_segs, si->main_area_segs); 198 193 seq_printf(s, "(OverProv:%d Resv:%d)]\n\n", ··· 283 286 seq_printf(s, "\nMemory: %u KB = static: %u + cached: %u\n", 284 287 (si->base_mem + si->cache_mem) >> 10, 285 288 si->base_mem >> 10, si->cache_mem >> 10); 286 - mutex_unlock(&si->stat_lock); 287 289 } 290 + mutex_unlock(&f2fs_stat_mutex); 288 291 return 0; 289 292 } 290 293 ··· 300 303 .release = single_release, 301 304 }; 302 305 303 - static int init_stats(struct f2fs_sb_info *sbi) 306 + int f2fs_build_stats(struct f2fs_sb_info *sbi) 304 307 { 305 308 struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); 306 309 struct f2fs_stat_info *si; ··· 310 313 return -ENOMEM; 311 314 312 315 si = sbi->stat_info; 313 - mutex_init(&si->stat_lock); 314 - list_add_tail(&si->stat_list, &f2fs_stat_list); 315 - 316 316 si->all_area_segs = le32_to_cpu(raw_super->segment_count); 317 317 si->sit_area_segs = le32_to_cpu(raw_super->segment_count_sit); 318 318 si->nat_area_segs = le32_to_cpu(raw_super->segment_count_nat); ··· 319 325 si->main_area_zones = si->main_area_sections / 320 326 
le32_to_cpu(raw_super->secs_per_zone); 321 327 si->sbi = sbi; 322 - return 0; 323 - } 324 328 325 - int f2fs_build_stats(struct f2fs_sb_info *sbi) 326 - { 327 - int retval; 329 + mutex_lock(&f2fs_stat_mutex); 330 + list_add_tail(&si->stat_list, &f2fs_stat_list); 331 + mutex_unlock(&f2fs_stat_mutex); 328 332 329 - retval = init_stats(sbi); 330 - if (retval) 331 - return retval; 332 - 333 - if (!debugfs_root) 334 - debugfs_root = debugfs_create_dir("f2fs", NULL); 335 - 336 - debugfs_create_file("status", S_IRUGO, debugfs_root, NULL, &stat_fops); 337 333 return 0; 338 334 } 339 335 ··· 331 347 { 332 348 struct f2fs_stat_info *si = sbi->stat_info; 333 349 350 + mutex_lock(&f2fs_stat_mutex); 334 351 list_del(&si->stat_list); 335 - mutex_lock(&si->stat_lock); 336 - si->sbi = NULL; 337 - mutex_unlock(&si->stat_lock); 352 + mutex_unlock(&f2fs_stat_mutex); 353 + 338 354 kfree(sbi->stat_info); 339 355 } 340 356 341 - void destroy_root_stats(void) 357 + void __init f2fs_create_root_stats(void) 358 + { 359 + debugfs_root = debugfs_create_dir("f2fs", NULL); 360 + if (debugfs_root) 361 + debugfs_create_file("status", S_IRUGO, debugfs_root, 362 + NULL, &stat_fops); 363 + } 364 + 365 + void f2fs_destroy_root_stats(void) 342 366 { 343 367 debugfs_remove_recursive(debugfs_root); 344 368 debugfs_root = NULL;
+1 -1
fs/f2fs/dir.c
··· 503 503 } 504 504 505 505 if (inode) { 506 - inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME; 506 + inode->i_ctime = CURRENT_TIME; 507 507 drop_nlink(inode); 508 508 if (S_ISDIR(inode->i_mode)) { 509 509 drop_nlink(inode);
+11 -7
fs/f2fs/f2fs.h
··· 211 211 static inline void set_new_dnode(struct dnode_of_data *dn, struct inode *inode, 212 212 struct page *ipage, struct page *npage, nid_t nid) 213 213 { 214 + memset(dn, 0, sizeof(*dn)); 214 215 dn->inode = inode; 215 216 dn->inode_page = ipage; 216 217 dn->node_page = npage; 217 218 dn->nid = nid; 218 - dn->inode_page_locked = 0; 219 219 } 220 220 221 221 /* ··· 877 877 * super.c 878 878 */ 879 879 int f2fs_sync_fs(struct super_block *, int); 880 + extern __printf(3, 4) 881 + void f2fs_msg(struct super_block *, const char *, const char *, ...); 880 882 881 883 /* 882 884 * hash.c ··· 914 912 void flush_nat_entries(struct f2fs_sb_info *); 915 913 int build_node_manager(struct f2fs_sb_info *); 916 914 void destroy_node_manager(struct f2fs_sb_info *); 917 - int create_node_manager_caches(void); 915 + int __init create_node_manager_caches(void); 918 916 void destroy_node_manager_caches(void); 919 917 920 918 /* ··· 966 964 void block_operations(struct f2fs_sb_info *); 967 965 void write_checkpoint(struct f2fs_sb_info *, bool, bool); 968 966 void init_orphan_info(struct f2fs_sb_info *); 969 - int create_checkpoint_caches(void); 967 + int __init create_checkpoint_caches(void); 970 968 void destroy_checkpoint_caches(void); 971 969 972 970 /* ··· 986 984 int start_gc_thread(struct f2fs_sb_info *); 987 985 void stop_gc_thread(struct f2fs_sb_info *); 988 986 block_t start_bidx_of_node(unsigned int); 989 - int f2fs_gc(struct f2fs_sb_info *, int); 987 + int f2fs_gc(struct f2fs_sb_info *); 990 988 void build_gc_manager(struct f2fs_sb_info *); 991 - int create_gc_caches(void); 989 + int __init create_gc_caches(void); 992 990 void destroy_gc_caches(void); 993 991 994 992 /* ··· 1060 1058 1061 1059 int f2fs_build_stats(struct f2fs_sb_info *); 1062 1060 void f2fs_destroy_stats(struct f2fs_sb_info *); 1063 - void destroy_root_stats(void); 1061 + void __init f2fs_create_root_stats(void); 1062 + void f2fs_destroy_root_stats(void); 1064 1063 #else 1065 1064 #define 
stat_inc_call_count(si) 1066 1065 #define stat_inc_seg_count(si, type) ··· 1071 1068 1072 1069 static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; } 1073 1070 static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { } 1074 - static inline void destroy_root_stats(void) { } 1071 + static inline void __init f2fs_create_root_stats(void) { } 1072 + static inline void f2fs_destroy_root_stats(void) { } 1075 1073 #endif 1076 1074 1077 1075 extern const struct file_operations f2fs_dir_operations;
+12 -4
fs/f2fs/file.c
··· 96 96 } 97 97 98 98 static const struct vm_operations_struct f2fs_file_vm_ops = { 99 - .fault = filemap_fault, 100 - .page_mkwrite = f2fs_vm_page_mkwrite, 99 + .fault = filemap_fault, 100 + .page_mkwrite = f2fs_vm_page_mkwrite, 101 + .remap_pages = generic_file_remap_pages, 101 102 }; 102 103 103 104 static int need_to_sync_dir(struct f2fs_sb_info *sbi, struct inode *inode) ··· 137 136 ret = filemap_write_and_wait_range(inode->i_mapping, start, end); 138 137 if (ret) 139 138 return ret; 139 + 140 + /* guarantee free sections for fsync */ 141 + f2fs_balance_fs(sbi); 140 142 141 143 mutex_lock(&inode->i_mutex); 142 144 ··· 411 407 struct dnode_of_data dn; 412 408 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); 413 409 410 + f2fs_balance_fs(sbi); 411 + 414 412 mutex_lock_op(sbi, DATA_TRUNC); 415 413 set_new_dnode(&dn, inode, NULL, NULL, 0); 416 414 err = get_dnode_of_data(&dn, index, RDONLY_NODE); ··· 540 534 loff_t offset, loff_t len) 541 535 { 542 536 struct inode *inode = file->f_path.dentry->d_inode; 543 - struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); 544 537 long ret; 545 538 546 539 if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE)) ··· 550 545 else 551 546 ret = expand_inode_data(inode, offset, len, mode); 552 547 553 - f2fs_balance_fs(sbi); 548 + if (!ret) { 549 + inode->i_mtime = inode->i_ctime = CURRENT_TIME; 550 + mark_inode_dirty(inode); 551 + } 554 552 return ret; 555 553 } 556 554
+27 -41
fs/f2fs/gc.c
··· 78 78 
79 79 sbi->bg_gc++; 
80 80 
81 - if (f2fs_gc(sbi, 1) == GC_NONE) 81 + if (f2fs_gc(sbi) == GC_NONE) 82 82 wait_ms = GC_THREAD_NOGC_SLEEP_TIME; 83 83 else if (wait_ms == GC_THREAD_NOGC_SLEEP_TIME) 84 84 wait_ms = GC_THREAD_MAX_SLEEP_TIME; ··· 424 424 } 425 425 426 426 /* 427 - * Calculate start block index that this node page contains 427 + * Calculate the start block index indicated by the given node offset. 428 + * Be careful: the caller should pass only offsets of direct node 429 + * blocks. If a node offset pointing to another type of node block, such 430 + * as an indirect or double indirect node block, is given, it is a caller's 431 + * bug. 428 432 */ 429 433 block_t start_bidx_of_node(unsigned int node_ofs) 430 434 { ··· 655 651 return ret; 656 652 } 657 653 658 - int f2fs_gc(struct f2fs_sb_info *sbi, int nGC) 654 + int f2fs_gc(struct f2fs_sb_info *sbi) 659 655 { 660 - unsigned int segno; 661 - int old_free_secs, cur_free_secs; 662 - int gc_status, nfree; 663 656 struct list_head ilist; 657 + unsigned int segno, i; 664 658 int gc_type = BG_GC; 659 + int gc_status = GC_NONE; 665 660 666 661 INIT_LIST_HEAD(&ilist); 667 662 gc_more: 668 - nfree = 0; 669 - gc_status = GC_NONE; 663 + if (!(sbi->sb->s_flags & MS_ACTIVE)) 664 + goto stop; 670 665 671 666 if (has_not_enough_free_secs(sbi)) 672 - old_free_secs = reserved_sections(sbi); 673 - else 674 - old_free_secs = free_sections(sbi); 667 + gc_type = FG_GC; 675 668 676 - while (sbi->sb->s_flags & MS_ACTIVE) { 677 - int i; 678 - if (has_not_enough_free_secs(sbi)) 679 - gc_type = FG_GC; 669 + if (!__get_victim(sbi, &segno, gc_type, NO_CHECK_TYPE)) 670 + goto stop; 680 671 681 - cur_free_secs = free_sections(sbi) + nfree; 682 - 683 - /* We got free space successfully. */ 684 - if (nGC < cur_free_secs - old_free_secs) 672 + for (i = 0; i < sbi->segs_per_sec; i++) { 673 + /* 674 + * do_garbage_collect will give us three gc_status: 675 + * GC_ERROR, GC_DONE, and GC_BLOCKED. 
676 + * If GC is finished uncleanly, we have to return 677 + * the victim to dirty segment list. 678 + */ 679 + gc_status = do_garbage_collect(sbi, segno + i, &ilist, gc_type); 680 + if (gc_status != GC_DONE) 685 681 break; 686 - 687 - if (!__get_victim(sbi, &segno, gc_type, NO_CHECK_TYPE)) 688 - break; 689 - 690 - for (i = 0; i < sbi->segs_per_sec; i++) { 691 - /* 692 - * do_garbage_collect will give us three gc_status: 693 - * GC_ERROR, GC_DONE, and GC_BLOCKED. 694 - * If GC is finished uncleanly, we have to return 695 - * the victim to dirty segment list. 696 - */ 697 - gc_status = do_garbage_collect(sbi, segno + i, 698 - &ilist, gc_type); 699 - if (gc_status != GC_DONE) 700 - goto stop; 701 - nfree++; 702 - } 703 682 } 704 - stop: 705 - if (has_not_enough_free_secs(sbi) || gc_status == GC_BLOCKED) { 683 + if (has_not_enough_free_secs(sbi)) { 706 684 write_checkpoint(sbi, (gc_status == GC_BLOCKED), false); 707 - if (nfree) 685 + if (has_not_enough_free_secs(sbi)) 708 686 goto gc_more; 709 687 } 688 + stop: 710 689 mutex_unlock(&sbi->gc_mutex); 711 690 712 691 put_gc_inode(&ilist); 713 - BUG_ON(!list_empty(&ilist)); 714 692 return gc_status; 715 693 } 716 694 ··· 701 715 DIRTY_I(sbi)->v_ops = &default_v_ops; 702 716 } 703 717 704 - int create_gc_caches(void) 718 + int __init create_gc_caches(void) 705 719 { 706 720 winode_slab = f2fs_kmem_cache_create("f2fs_gc_inodes", 707 721 sizeof(struct inode_entry), NULL);
+3
fs/f2fs/inode.c
··· 217 217 inode->i_ino == F2FS_META_INO(sbi)) 218 218 return 0; 219 219 220 + if (wbc) 221 + f2fs_balance_fs(sbi); 222 + 220 223 node_page = get_node_page(sbi, inode->i_ino); 221 224 if (IS_ERR(node_page)) 222 225 return PTR_ERR(node_page);
+12 -7
fs/f2fs/node.c
··· 1124 1124 return 0; 1125 1125 } 1126 1126 1127 + /* 1128 + * It is very important to gather dirty pages and write them at once, so that 1129 + * we can submit a big bio without interfering with other data writes. 1130 + * By default, 512 pages (2MB), a segment size, is quite reasonable. 1131 + */ 1132 + #define COLLECT_DIRTY_NODES 512 1127 1133 static int f2fs_write_node_pages(struct address_space *mapping, 1128 1134 struct writeback_control *wbc) 1129 1135 { ··· 1137 1131 struct block_device *bdev = sbi->sb->s_bdev; 1138 1132 long nr_to_write = wbc->nr_to_write; 1139 1133 1140 - if (wbc->for_kupdate) 1141 - return 0; 1142 - 1143 - if (get_pages(sbi, F2FS_DIRTY_NODES) == 0) 1144 - return 0; 1145 - 1134 + /* First, balance the cached NAT entries */ 1146 1135 if (try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK)) { 1147 1136 write_checkpoint(sbi, false, false); 1148 1137 return 0; 1149 1138 } 1139 + 1140 + /* collect a number of dirty node pages and write them together */ 1141 + if (get_pages(sbi, F2FS_DIRTY_NODES) < COLLECT_DIRTY_NODES) 1142 + return 0; 1150 1143 1151 1144 /* if mounting is failed, skip writing node pages */ 1152 1145 wbc->nr_to_write = bio_get_nr_vecs(bdev); ··· 1737 1732 kfree(nm_i); 1738 1733 } 1739 1734 1740 - int create_node_manager_caches(void) 1735 + int __init create_node_manager_caches(void) 1741 1736 { 1742 1737 nat_entry_slab = f2fs_kmem_cache_create("nat_entry", 1743 1738 sizeof(struct nat_entry), NULL);
+4 -6
fs/f2fs/recovery.c
··· 67 67 kunmap(page); 68 68 f2fs_put_page(page, 0); 69 69 } else { 70 - f2fs_add_link(&dent, inode); 70 + err = f2fs_add_link(&dent, inode); 71 71 } 72 72 iput(dir); 73 73 out: ··· 151 151 goto out; 152 152 } 153 153 154 - INIT_LIST_HEAD(&entry->list); 155 154 list_add_tail(&entry->list, head); 156 155 entry->blkaddr = blkaddr; 157 156 } ··· 173 174 static void destroy_fsync_dnodes(struct f2fs_sb_info *sbi, 174 175 struct list_head *head) 175 176 { 176 - struct list_head *this; 177 - struct fsync_inode_entry *entry; 178 - list_for_each(this, head) { 179 - entry = list_entry(this, struct fsync_inode_entry, list); 177 + struct fsync_inode_entry *entry, *tmp; 178 + 179 + list_for_each_entry_safe(entry, tmp, head, list) { 180 180 iput(entry->inode); 181 181 list_del(&entry->list); 182 182 kmem_cache_free(fsync_entry_slab, entry);
+1 -1
fs/f2fs/segment.c
··· 31 31 */ 32 32 if (has_not_enough_free_secs(sbi)) { 33 33 mutex_lock(&sbi->gc_mutex); 34 - f2fs_gc(sbi, 1); 34 + f2fs_gc(sbi); 35 35 } 36 36 } 37 37
+72 -25
fs/f2fs/super.c
··· 53 53 {Opt_err, NULL}, 54 54 }; 55 55 56 + void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...) 57 + { 58 + struct va_format vaf; 59 + va_list args; 60 + 61 + va_start(args, fmt); 62 + vaf.fmt = fmt; 63 + vaf.va = &args; 64 + printk("%sF2FS-fs (%s): %pV\n", level, sb->s_id, &vaf); 65 + va_end(args); 66 + } 67 + 56 68 static void init_once(void *foo) 57 69 { 58 70 struct f2fs_inode_info *fi = (struct f2fs_inode_info *) foo; ··· 137 125 138 126 if (sync) 139 127 write_checkpoint(sbi, false, false); 128 + else 129 + f2fs_balance_fs(sbi); 140 130 141 131 return 0; 142 132 } ··· 261 247 .get_parent = f2fs_get_parent, 262 248 }; 263 249 264 - static int parse_options(struct f2fs_sb_info *sbi, char *options) 250 + static int parse_options(struct super_block *sb, struct f2fs_sb_info *sbi, 251 + char *options) 265 252 { 266 253 substring_t args[MAX_OPT_ARGS]; 267 254 char *p; ··· 301 286 break; 302 287 #else 303 288 case Opt_nouser_xattr: 304 - pr_info("nouser_xattr options not supported\n"); 289 + f2fs_msg(sb, KERN_INFO, 290 + "nouser_xattr options not supported"); 305 291 break; 306 292 #endif 307 293 #ifdef CONFIG_F2FS_FS_POSIX_ACL ··· 311 295 break; 312 296 #else 313 297 case Opt_noacl: 314 - pr_info("noacl options not supported\n"); 298 + f2fs_msg(sb, KERN_INFO, "noacl options not supported"); 315 299 break; 316 300 #endif 317 301 case Opt_active_logs: ··· 325 309 set_opt(sbi, DISABLE_EXT_IDENTIFY); 326 310 break; 327 311 default: 328 - pr_err("Unrecognized mount option \"%s\" or missing value\n", 329 - p); 312 + f2fs_msg(sb, KERN_ERR, 313 + "Unrecognized mount option \"%s\" or missing value", 314 + p); 330 315 return -EINVAL; 331 316 } 332 317 } ··· 354 337 return result; 355 338 } 356 339 357 - static int sanity_check_raw_super(struct f2fs_super_block *raw_super) 340 + static int sanity_check_raw_super(struct super_block *sb, 341 + struct f2fs_super_block *raw_super) 358 342 { 359 343 unsigned int blocksize; 360 344 361 - if 
(F2FS_SUPER_MAGIC != le32_to_cpu(raw_super->magic)) 345 + if (F2FS_SUPER_MAGIC != le32_to_cpu(raw_super->magic)) { 346 + f2fs_msg(sb, KERN_INFO, 347 + "Magic Mismatch, valid(0x%x) - read(0x%x)", 348 + F2FS_SUPER_MAGIC, le32_to_cpu(raw_super->magic)); 362 349 return 1; 350 + } 363 351 364 352 /* Currently, support only 4KB block size */ 365 353 blocksize = 1 << le32_to_cpu(raw_super->log_blocksize); 366 - if (blocksize != PAGE_CACHE_SIZE) 354 + if (blocksize != PAGE_CACHE_SIZE) { 355 + f2fs_msg(sb, KERN_INFO, 356 + "Invalid blocksize (%u), supports only 4KB\n", 357 + blocksize); 367 358 return 1; 359 + } 368 360 if (le32_to_cpu(raw_super->log_sectorsize) != 369 - F2FS_LOG_SECTOR_SIZE) 361 + F2FS_LOG_SECTOR_SIZE) { 362 + f2fs_msg(sb, KERN_INFO, "Invalid log sectorsize"); 370 363 return 1; 364 + } 371 365 if (le32_to_cpu(raw_super->log_sectors_per_block) != 372 - F2FS_LOG_SECTORS_PER_BLOCK) 366 + F2FS_LOG_SECTORS_PER_BLOCK) { 367 + f2fs_msg(sb, KERN_INFO, "Invalid log sectors per block"); 373 368 return 1; 369 + } 374 370 return 0; 375 371 } 376 372 ··· 443 413 if (!sbi) 444 414 return -ENOMEM; 445 415 446 - /* set a temporary block size */ 447 - if (!sb_set_blocksize(sb, F2FS_BLKSIZE)) 416 + /* set a block size */ 417 + if (!sb_set_blocksize(sb, F2FS_BLKSIZE)) { 418 + f2fs_msg(sb, KERN_ERR, "unable to set blocksize"); 448 419 goto free_sbi; 420 + } 449 421 450 422 /* read f2fs raw super block */ 451 423 raw_super_buf = sb_bread(sb, 0); 452 424 if (!raw_super_buf) { 453 425 err = -EIO; 426 + f2fs_msg(sb, KERN_ERR, "unable to read superblock"); 454 427 goto free_sbi; 455 428 } 456 429 raw_super = (struct f2fs_super_block *) ··· 471 438 set_opt(sbi, POSIX_ACL); 472 439 #endif 473 440 /* parse mount options */ 474 - if (parse_options(sbi, (char *)data)) 441 + if (parse_options(sb, sbi, (char *)data)) 475 442 goto free_sb_buf; 476 443 477 444 /* sanity checking of raw super */ 478 - if (sanity_check_raw_super(raw_super)) 445 + if (sanity_check_raw_super(sb, raw_super)) { 
446 + f2fs_msg(sb, KERN_ERR, "Can't find a valid F2FS filesystem"); 479 447 goto free_sb_buf; 448 + } 480 449 481 450 sb->s_maxbytes = max_file_size(le32_to_cpu(raw_super->log_blocksize)); 482 451 sb->s_max_links = F2FS_LINK_MAX; ··· 512 477 /* get an inode for meta space */ 513 478 sbi->meta_inode = f2fs_iget(sb, F2FS_META_INO(sbi)); 514 479 if (IS_ERR(sbi->meta_inode)) { 480 + f2fs_msg(sb, KERN_ERR, "Failed to read F2FS meta data inode"); 515 481 err = PTR_ERR(sbi->meta_inode); 516 482 goto free_sb_buf; 517 483 } 518 484 519 485 err = get_valid_checkpoint(sbi); 520 - if (err) 486 + if (err) { 487 + f2fs_msg(sb, KERN_ERR, "Failed to get valid F2FS checkpoint"); 521 488 goto free_meta_inode; 489 + } 522 490 523 491 /* sanity checking of checkpoint */ 524 492 err = -EINVAL; 525 - if (sanity_check_ckpt(raw_super, sbi->ckpt)) 493 + if (sanity_check_ckpt(raw_super, sbi->ckpt)) { 494 + f2fs_msg(sb, KERN_ERR, "Invalid F2FS checkpoint"); 526 495 goto free_cp; 496 + } 527 497 528 498 sbi->total_valid_node_count = 529 499 le32_to_cpu(sbi->ckpt->valid_node_count); ··· 542 502 INIT_LIST_HEAD(&sbi->dir_inode_list); 543 503 spin_lock_init(&sbi->dir_inode_lock); 544 504 545 - /* init super block */ 546 - if (!sb_set_blocksize(sb, sbi->blocksize)) 547 - goto free_cp; 548 - 549 505 init_orphan_info(sbi); 550 506 551 507 /* setup f2fs internal modules */ 552 508 err = build_segment_manager(sbi); 553 - if (err) 509 + if (err) { 510 + f2fs_msg(sb, KERN_ERR, 511 + "Failed to initialize F2FS segment manager"); 554 512 goto free_sm; 513 + } 555 514 err = build_node_manager(sbi); 556 - if (err) 515 + if (err) { 516 + f2fs_msg(sb, KERN_ERR, 517 + "Failed to initialize F2FS node manager"); 557 518 goto free_nm; 519 + } 558 520 559 521 build_gc_manager(sbi); 560 522 561 523 /* get an inode for node space */ 562 524 sbi->node_inode = f2fs_iget(sb, F2FS_NODE_INO(sbi)); 563 525 if (IS_ERR(sbi->node_inode)) { 526 + f2fs_msg(sb, KERN_ERR, "Failed to read node inode"); 564 527 err = 
PTR_ERR(sbi->node_inode); 565 528 goto free_nm; 566 529 } ··· 576 533 /* read root inode and dentry */ 577 534 root = f2fs_iget(sb, F2FS_ROOT_INO(sbi)); 578 535 if (IS_ERR(root)) { 536 + f2fs_msg(sb, KERN_ERR, "Failed to read root inode"); 579 537 err = PTR_ERR(root); 580 538 goto free_node_inode; 581 539 } ··· 640 596 .fs_flags = FS_REQUIRES_DEV, 641 597 }; 642 598 643 - static int init_inodecache(void) 599 + static int __init init_inodecache(void) 644 600 { 645 601 f2fs_inode_cachep = f2fs_kmem_cache_create("f2fs_inode_cache", 646 602 sizeof(struct f2fs_inode_info), NULL); ··· 675 631 err = create_checkpoint_caches(); 676 632 if (err) 677 633 goto fail; 678 - return register_filesystem(&f2fs_fs_type); 634 + err = register_filesystem(&f2fs_fs_type); 635 + if (err) 636 + goto fail; 637 + f2fs_create_root_stats(); 679 638 fail: 680 639 return err; 681 640 } 682 641 683 642 static void __exit exit_f2fs_fs(void) 684 643 { 685 - destroy_root_stats(); 644 + f2fs_destroy_root_stats(); 686 645 unregister_filesystem(&f2fs_fs_type); 687 646 destroy_checkpoint_caches(); 688 647 destroy_gc_caches();
+2
fs/f2fs/xattr.c
··· 318 318 if (name_len > 255 || value_len > MAX_VALUE_LEN) 319 319 return -ERANGE; 320 320 321 + f2fs_balance_fs(sbi); 322 + 321 323 mutex_lock_op(sbi, NODE_NEW); 322 324 if (!fi->i_xattr_nid) { 323 325 /* Allocate new attribute block */
+14 -2
fs/fuse/Kconfig
··· 4 4 With FUSE it is possible to implement a fully functional filesystem 5 5 in a userspace program. 6 6 7 - There's also companion library: libfuse. This library along with 8 - utilities is available from the FUSE homepage: 7 + There's also a companion library: libfuse2. This library is available 8 + from the FUSE homepage: 9 9 <http://fuse.sourceforge.net/> 10 + although chances are your distribution already has that library 11 + installed if you've installed the "fuse" package itself. 10 12 11 13 See <file:Documentation/filesystems/fuse.txt> for more information. 12 14 See <file:Documentation/Changes> for needed library/utility version. 13 15 14 16 If you want to develop a userspace FS, or if you want to use 15 17 a filesystem based on FUSE, answer Y or M. 18 + 19 + config CUSE 20 + tristate "Character device in Userspace support" 21 + depends on FUSE_FS 22 + help 23 + This FUSE extension allows character devices to be 24 + implemented in userspace. 25 + 26 + If you want to develop or use a userspace character device 27 + based on CUSE, answer Y or M.
+22 -14
fs/fuse/cuse.c
··· 45 45 #include <linux/miscdevice.h> 46 46 #include <linux/mutex.h> 47 47 #include <linux/slab.h> 48 - #include <linux/spinlock.h> 49 48 #include <linux/stat.h> 50 49 #include <linux/module.h> 51 50 ··· 62 63 bool unrestricted_ioctl; 63 64 }; 64 65 65 - static DEFINE_SPINLOCK(cuse_lock); /* protects cuse_conntbl */ 66 + static DEFINE_MUTEX(cuse_lock); /* protects registration */ 66 67 static struct list_head cuse_conntbl[CUSE_CONNTBL_LEN]; 67 68 static struct class *cuse_class; 68 69 ··· 113 114 int rc; 114 115 115 116 /* look up and get the connection */ 116 - spin_lock(&cuse_lock); 117 + mutex_lock(&cuse_lock); 117 118 list_for_each_entry(pos, cuse_conntbl_head(devt), list) 118 119 if (pos->dev->devt == devt) { 119 120 fuse_conn_get(&pos->fc); 120 121 cc = pos; 121 122 break; 122 123 } 123 - spin_unlock(&cuse_lock); 124 + mutex_unlock(&cuse_lock); 124 125 125 126 /* dead? */ 126 127 if (!cc) ··· 266 267 static int cuse_parse_devinfo(char *p, size_t len, struct cuse_devinfo *devinfo) 267 268 { 268 269 char *end = p + len; 269 - char *key, *val; 270 + char *uninitialized_var(key), *uninitialized_var(val); 270 271 int rc; 271 272 272 273 while (true) { ··· 304 305 */ 305 306 static void cuse_process_init_reply(struct fuse_conn *fc, struct fuse_req *req) 306 307 { 307 - struct cuse_conn *cc = fc_to_cc(fc); 308 + struct cuse_conn *cc = fc_to_cc(fc), *pos; 308 309 struct cuse_init_out *arg = req->out.args[0].value; 309 310 struct page *page = req->pages[0]; 310 311 struct cuse_devinfo devinfo = { }; 311 312 struct device *dev; 312 313 struct cdev *cdev; 313 314 dev_t devt; 314 - int rc; 315 + int rc, i; 315 316 316 317 if (req->out.h.error || 317 318 arg->major != FUSE_KERNEL_VERSION || arg->minor < 11) { ··· 355 356 dev_set_drvdata(dev, cc); 356 357 dev_set_name(dev, "%s", devinfo.name); 357 358 359 + mutex_lock(&cuse_lock); 360 + 361 + /* make sure the device-name is unique */ 362 + for (i = 0; i < CUSE_CONNTBL_LEN; ++i) { 363 + list_for_each_entry(pos, 
&cuse_conntbl[i], list) 364 + if (!strcmp(dev_name(pos->dev), dev_name(dev))) 365 + goto err_unlock; 366 + } 367 + 358 368 rc = device_add(dev); 359 369 if (rc) 360 - goto err_device; 370 + goto err_unlock; 361 371 362 372 /* register cdev */ 363 373 rc = -ENOMEM; 364 374 cdev = cdev_alloc(); 365 375 if (!cdev) 366 - goto err_device; 376 + goto err_unlock; 367 377 368 378 cdev->owner = THIS_MODULE; 369 379 cdev->ops = &cuse_frontend_fops; ··· 385 377 cc->cdev = cdev; 386 378 387 379 /* make the device available */ 388 - spin_lock(&cuse_lock); 389 380 list_add(&cc->list, cuse_conntbl_head(devt)); 390 - spin_unlock(&cuse_lock); 381 + mutex_unlock(&cuse_lock); 391 382 392 383 /* announce device availability */ 393 384 dev_set_uevent_suppress(dev, 0); ··· 398 391 399 392 err_cdev: 400 393 cdev_del(cdev); 401 - err_device: 394 + err_unlock: 395 + mutex_unlock(&cuse_lock); 402 396 put_device(dev); 403 397 err_region: 404 398 unregister_chrdev_region(devt, 1); ··· 528 520 int rc; 529 521 530 522 /* remove from the conntbl, no more access from this point on */ 531 - spin_lock(&cuse_lock); 523 + mutex_lock(&cuse_lock); 532 524 list_del_init(&cc->list); 533 - spin_unlock(&cuse_lock); 525 + mutex_unlock(&cuse_lock); 534 526 535 527 /* remove device */ 536 528 if (cc->dev)
-5
fs/fuse/dev.c
··· 692 692 struct page *oldpage = *pagep; 693 693 struct page *newpage; 694 694 struct pipe_buffer *buf = cs->pipebufs; 695 - struct address_space *mapping; 696 - pgoff_t index; 697 695 698 696 unlock_request(cs->fc, cs->req); 699 697 fuse_copy_finish(cs); ··· 721 723 722 724 if (fuse_check_page(newpage) != 0) 723 725 goto out_fallback_unlock; 724 - 725 - mapping = oldpage->mapping; 726 - index = oldpage->index; 727 726 728 727 /* 729 728 * This is a new and locked page, it shouldn't be mapped or
+2 -3
fs/fuse/file.c
··· 2177 2177 return ret; 2178 2178 } 2179 2179 2180 - long fuse_file_fallocate(struct file *file, int mode, loff_t offset, 2181 - loff_t length) 2180 + static long fuse_file_fallocate(struct file *file, int mode, loff_t offset, 2181 + loff_t length) 2182 2182 { 2183 2183 struct fuse_file *ff = file->private_data; 2184 2184 struct fuse_conn *fc = ff->fc; ··· 2213 2213 2214 2214 return err; 2215 2215 } 2216 - EXPORT_SYMBOL_GPL(fuse_file_fallocate); 2217 2216 2218 2217 static const struct file_operations fuse_file_operations = { 2219 2218 .llseek = fuse_file_llseek,
+6 -1
fs/gfs2/lock_dlm.c
··· 281 281 { 282 282 struct gfs2_sbd *sdp = gl->gl_sbd; 283 283 struct lm_lockstruct *ls = &sdp->sd_lockstruct; 284 + int lvb_needs_unlock = 0; 284 285 int error; 285 286 286 287 if (gl->gl_lksb.sb_lkid == 0) { ··· 295 294 gfs2_update_request_times(gl); 296 295 297 296 /* don't want to skip dlm_unlock writing the lvb when lock is ex */ 297 + 298 + if (gl->gl_lksb.sb_lvbptr && (gl->gl_state == LM_ST_EXCLUSIVE)) 299 + lvb_needs_unlock = 1; 300 + 298 301 if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) && 299 - gl->gl_lksb.sb_lvbptr && (gl->gl_state != LM_ST_EXCLUSIVE)) { 302 + !lvb_needs_unlock) { 300 303 gfs2_glock_free(gl); 301 304 return; 302 305 }
+20
fs/nfs/namespace.c
··· 177 177 return mnt; 178 178 } 179 179 180 + static int 181 + nfs_namespace_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat) 182 + { 183 + if (NFS_FH(dentry->d_inode)->size != 0) 184 + return nfs_getattr(mnt, dentry, stat); 185 + generic_fillattr(dentry->d_inode, stat); 186 + return 0; 187 + } 188 + 189 + static int 190 + nfs_namespace_setattr(struct dentry *dentry, struct iattr *attr) 191 + { 192 + if (NFS_FH(dentry->d_inode)->size != 0) 193 + return nfs_setattr(dentry, attr); 194 + return -EACCES; 195 + } 196 + 180 197 const struct inode_operations nfs_mountpoint_inode_operations = { 181 198 .getattr = nfs_getattr, 199 + .setattr = nfs_setattr, 182 200 }; 183 201 184 202 const struct inode_operations nfs_referral_inode_operations = { 203 + .getattr = nfs_namespace_getattr, 204 + .setattr = nfs_namespace_setattr, 185 205 }; 186 206 187 207 static void nfs_expire_automounts(struct work_struct *work)
+26 -36
fs/nfs/nfs4client.c
··· 236 236 error = nfs4_discover_server_trunking(clp, &old); 237 237 if (error < 0) 238 238 goto error; 239 + nfs_put_client(clp); 239 240 if (clp != old) { 240 241 clp->cl_preserve_clid = true; 241 - nfs_put_client(clp); 242 242 clp = old; 243 - atomic_inc(&clp->cl_count); 244 243 } 245 244 246 245 return clp; ··· 305 306 .clientid = new->cl_clientid, 306 307 .confirm = new->cl_confirm, 307 308 }; 308 - int status; 309 + int status = -NFS4ERR_STALE_CLIENTID; 309 310 310 311 spin_lock(&nn->nfs_client_lock); 311 312 list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) { ··· 331 332 332 333 if (prev) 333 334 nfs_put_client(prev); 335 + prev = pos; 334 336 335 337 status = nfs4_proc_setclientid_confirm(pos, &clid, cred); 336 - if (status == 0) { 338 + switch (status) { 339 + case -NFS4ERR_STALE_CLIENTID: 340 + break; 341 + case 0: 337 342 nfs4_swap_callback_idents(pos, new); 338 343 339 - nfs_put_client(pos); 344 + prev = NULL; 340 345 *result = pos; 341 346 dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n", 342 347 __func__, pos, atomic_read(&pos->cl_count)); 343 - return 0; 344 - } 345 - if (status != -NFS4ERR_STALE_CLIENTID) { 346 - nfs_put_client(pos); 347 - dprintk("NFS: <-- %s status = %d, no result\n", 348 - __func__, status); 349 - return status; 348 + default: 349 + goto out; 350 350 } 351 351 352 352 spin_lock(&nn->nfs_client_lock); 353 - prev = pos; 354 353 } 354 + spin_unlock(&nn->nfs_client_lock); 355 355 356 - /* 357 - * No matching nfs_client found. This should be impossible, 358 - * because the new nfs_client has already been added to 359 - * nfs_client_list by nfs_get_client(). 360 - * 361 - * Don't BUG(), since the caller is holding a mutex. 362 - */ 356 + /* No match found. The server lost our clientid */ 357 + out: 363 358 if (prev) 364 359 nfs_put_client(prev); 365 - spin_unlock(&nn->nfs_client_lock); 366 - pr_err("NFS: %s Error: no matching nfs_client found\n", __func__); 367 - return -NFS4ERR_STALE_CLIENTID; 360 + dprintk("NFS: <-- %s status = %d\n", __func__, status); 361 + return status; 368 362 } 369 363 370 364 #ifdef CONFIG_NFS_V4_1 ··· 424 432 { 425 433 struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id); 426 434 struct nfs_client *pos, *n, *prev = NULL; 427 - int error; 435 + int status = -NFS4ERR_STALE_CLIENTID; 428 436 429 437 spin_lock(&nn->nfs_client_lock); 430 438 list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) { ··· 440 448 nfs_put_client(prev); 441 449 prev = pos; 442 450 443 - error = nfs_wait_client_init_complete(pos); 444 - if (error < 0) { 451 + nfs4_schedule_lease_recovery(pos); 452 + status = nfs_wait_client_init_complete(pos); 453 + if (status < 0) { 445 454 nfs_put_client(pos); 446 455 spin_lock(&nn->nfs_client_lock); 447 456 continue; 448 457 } 449 - 458 + status = pos->cl_cons_state; 450 459 spin_lock(&nn->nfs_client_lock); 460 + if (status < 0) 461 + continue; 451 462 } 452 463 453 464 if (pos->rpc_ops != new->rpc_ops) ··· 468 473 if (!nfs4_match_serverowners(pos, new)) 469 474 continue; 470 475 476 + atomic_inc(&pos->cl_count); 471 477 spin_unlock(&nn->nfs_client_lock); 472 478 dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n", 473 479 __func__, pos, atomic_read(&pos->cl_count)); ··· 477 481 return 0; 478 482 } 479 483 480 - /* 481 - * No matching nfs_client found. This should be impossible, 482 - * because the new nfs_client has already been added to 483 - * nfs_client_list by nfs_get_client(). 484 - * 485 - * Don't BUG(), since the caller is holding a mutex. 486 - */ 484 + /* No matching nfs_client found. */ 487 485 spin_unlock(&nn->nfs_client_lock); 488 - pr_err("NFS: %s Error: no matching nfs_client found\n", __func__); 489 - return -NFS4ERR_STALE_CLIENTID; 486 + dprintk("NFS: <-- %s status = %d\n", __func__, status); 487 + return status; 490 488 } 491 489 #endif /* CONFIG_NFS_V4_1 */ 492 490
+14 -8
fs/nfs/nfs4state.c
··· 136 136 clp->cl_confirm = clid.confirm; 137 137 138 138 status = nfs40_walk_client_list(clp, result, cred); 139 - switch (status) { 140 - case -NFS4ERR_STALE_CLIENTID: 141 - set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); 142 - case 0: 139 + if (status == 0) { 143 140 /* Sustain the lease, even if it's empty. If the clientid4 144 141 * goes stale it's of no use for trunking discovery. */ 145 142 nfs4_schedule_state_renewal(*result); 146 - break; 147 143 } 148 - 149 144 out: 150 145 return status; 151 146 } ··· 1858 1863 case -ETIMEDOUT: 1859 1864 case -EAGAIN: 1860 1865 ssleep(1); 1866 + case -NFS4ERR_STALE_CLIENTID: 1861 1867 dprintk("NFS: %s after status %d, retrying\n", 1862 1868 __func__, status); 1863 1869 goto again; ··· 2018 2022 nfs4_begin_drain_session(clp); 2019 2023 cred = nfs4_get_exchange_id_cred(clp); 2020 2024 status = nfs4_proc_destroy_session(clp->cl_session, cred); 2021 - if (status && status != -NFS4ERR_BADSESSION && 2022 - status != -NFS4ERR_DEADSESSION) { 2025 + switch (status) { 2026 + case 0: 2027 + case -NFS4ERR_BADSESSION: 2028 + case -NFS4ERR_DEADSESSION: 2029 + break; 2030 + case -NFS4ERR_BACK_CHAN_BUSY: 2031 + case -NFS4ERR_DELAY: 2032 + set_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state); 2033 + status = 0; 2034 + ssleep(1); 2035 + goto out; 2036 + default: 2023 2037 status = nfs4_recovery_handle_error(clp, status); 2024 2038 goto out; 2025 2039 }
+9 -13
fs/nfs/super.c
··· 2589 2589 struct nfs_server *server; 2590 2590 struct dentry *mntroot = ERR_PTR(-ENOMEM); 2591 2591 struct nfs_subversion *nfs_mod = NFS_SB(data->sb)->nfs_client->cl_nfs_mod; 2592 - int error; 2593 2592 2594 - dprintk("--> nfs_xdev_mount_common()\n"); 2593 + dprintk("--> nfs_xdev_mount()\n"); 2595 2594 2596 2595 mount_info.mntfh = mount_info.cloned->fh; 2597 2596 2598 2597 /* create a new volume representation */ 2599 2598 server = nfs_mod->rpc_ops->clone_server(NFS_SB(data->sb), data->fh, data->fattr, data->authflavor); 2600 - if (IS_ERR(server)) { 2601 - error = PTR_ERR(server); 2602 - goto out_err; 2603 - } 2604 2599 2605 - mntroot = nfs_fs_mount_common(server, flags, dev_name, &mount_info, nfs_mod); 2606 - dprintk("<-- nfs_xdev_mount_common() = 0\n"); 2607 - out: 2600 + if (IS_ERR(server)) 2601 + mntroot = ERR_CAST(server); 2602 + else 2603 + mntroot = nfs_fs_mount_common(server, flags, 2604 + dev_name, &mount_info, nfs_mod); 2605 + 2606 + dprintk("<-- nfs_xdev_mount() = %ld\n", 2607 + IS_ERR(mntroot) ? PTR_ERR(mntroot) : 0L); 2608 2608 return mntroot; 2609 - 2610 - out_err: 2611 - dprintk("<-- nfs_xdev_mount_common() = %d [error]\n", error); 2612 - goto out; 2613 2609 } 2614 2610 2615 2611 #if IS_ENABLED(CONFIG_NFS_V4)
+1 -1
fs/xfs/xfs_aops.c
··· 86 86 } 87 87 88 88 if (ioend->io_iocb) { 89 + inode_dio_done(ioend->io_inode); 89 90 if (ioend->io_isasync) { 90 91 aio_complete(ioend->io_iocb, ioend->io_error ? 91 92 ioend->io_error : ioend->io_result, 0); 92 93 } 93 - inode_dio_done(ioend->io_inode); 94 94 } 95 95 96 96 mempool_free(ioend, xfs_ioend_pool);
+3 -3
fs/xfs/xfs_bmap.c
··· 4680 4680 return error; 4681 4681 } 4682 4682 4683 - if (bma->flags & XFS_BMAPI_STACK_SWITCH) 4684 - bma->stack_switch = 1; 4685 - 4686 4683 error = xfs_bmap_alloc(bma); 4687 4684 if (error) 4688 4685 return error; ··· 4952 4955 bma.userdata = 0; 4953 4956 bma.flist = flist; 4954 4957 bma.firstblock = firstblock; 4958 + 4959 + if (flags & XFS_BMAPI_STACK_SWITCH) 4960 + bma.stack_switch = 1; 4955 4961 4956 4962 while (bno < end && n < *nmap) { 4957 4963 inhole = eof || bma.got.br_startoff > bno;
+20
fs/xfs/xfs_buf.c
··· 487 487 struct rb_node *parent; 488 488 xfs_buf_t *bp; 489 489 xfs_daddr_t blkno = map[0].bm_bn; 490 + xfs_daddr_t eofs; 490 491 int numblks = 0; 491 492 int i; 492 493 ··· 498 497 /* Check for IOs smaller than the sector size / not sector aligned */ 499 498 ASSERT(!(numbytes < (1 << btp->bt_sshift))); 500 499 ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_smask)); 500 + 501 + /* 502 + * Corrupted block numbers can get through to here, unfortunately, so we 503 + * have to check that the buffer falls within the filesystem bounds. 504 + */ 505 + eofs = XFS_FSB_TO_BB(btp->bt_mount, btp->bt_mount->m_sb.sb_dblocks); 506 + if (blkno >= eofs) { 507 + /* 508 + * XXX (dgc): we should really be returning EFSCORRUPTED here, 509 + * but none of the higher level infrastructure supports 510 + * returning a specific error on buffer lookup failures. 511 + */ 512 + xfs_alert(btp->bt_mount, 513 + "%s: Block out of range: block 0x%llx, EOFS 0x%llx ", 514 + __func__, blkno, eofs); 515 + return NULL; 516 + } 501 517 502 518 /* get tree root */ 503 519 pag = xfs_perag_get(btp->bt_mount, ··· 1505 1487 while (!list_empty(&btp->bt_lru)) { 1506 1488 bp = list_first_entry(&btp->bt_lru, struct xfs_buf, b_lru); 1507 1489 if (atomic_read(&bp->b_hold) > 1) { 1490 + trace_xfs_buf_wait_buftarg(bp, _RET_IP_); 1491 + list_move_tail(&bp->b_lru, &btp->bt_lru); 1508 1492 spin_unlock(&btp->bt_lru_lock); 1509 1493 delay(100); 1510 1494 goto restart;
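The xfs_buf.c hunk above adds a guard so a corrupted block number fails the lookup instead of walking the buffer rbtree with a bogus key. A sketch of just the bounds predicate, with the FSB-to-BB conversion and the mount structure elided (`blkno_in_range` is a hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t xfs_daddr_t;

/* A basic-block number at or beyond the end of the filesystem (eofs)
 * must be rejected; the real code then prints an alert and returns
 * NULL from the lookup rather than EFSCORRUPTED, as the comment in
 * the hunk notes. */
static int blkno_in_range(xfs_daddr_t blkno, xfs_daddr_t eofs)
{
	return blkno >= 0 && blkno < eofs;
}
```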
+10 -2
fs/xfs/xfs_buf_item.c
··· 652 652 653 653 /* 654 654 * If the buf item isn't tracking any data, free it, otherwise drop the 655 - * reference we hold to it. 655 + * reference we hold to it. If we are aborting the transaction, this may 656 + * be the only reference to the buf item, so we free it anyway 657 + * regardless of whether it is dirty or not. A dirty abort implies a 658 + * shutdown, anyway. 656 659 */ 657 660 clean = 1; 658 661 for (i = 0; i < bip->bli_format_count; i++) { ··· 667 664 } 668 665 if (clean) 669 666 xfs_buf_item_relse(bp); 670 - else 667 + else if (aborted) { 668 + if (atomic_dec_and_test(&bip->bli_refcount)) { 669 + ASSERT(XFS_FORCED_SHUTDOWN(lip->li_mountp)); 670 + xfs_buf_item_relse(bp); 671 + } 672 + } else 671 673 atomic_dec(&bip->bli_refcount); 672 674 673 675 if (!hold)
+2 -2
fs/xfs/xfs_dfrag.c
··· 246 246 goto out_unlock; 247 247 } 248 248 249 - error = -filemap_write_and_wait(VFS_I(ip)->i_mapping); 249 + error = -filemap_write_and_wait(VFS_I(tip)->i_mapping); 250 250 if (error) 251 251 goto out_unlock; 252 - truncate_pagecache_range(VFS_I(ip), 0, -1); 252 + truncate_pagecache_range(VFS_I(tip), 0, -1); 253 253 254 254 /* Verify O_DIRECT for ftmp */ 255 255 if (VN_CACHED(VFS_I(tip)) != 0) {
+9
fs/xfs/xfs_iomap.c
··· 351 351 } 352 352 if (shift) 353 353 alloc_blocks >>= shift; 354 + 355 + /* 356 + * If we are still trying to allocate more space than is 357 + * available, squash the prealloc hard. This can happen if we 358 + * have a large file on a small filesystem and the above 359 + * lowspace thresholds are smaller than MAXEXTLEN. 360 + */ 361 + while (alloc_blocks >= freesp) 362 + alloc_blocks >>= 4; 354 363 } 355 364 356 365 if (alloc_blocks < mp->m_writeio_blocks)
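The loop added above keeps quartering the speculative preallocation until it fits in the remaining free space, for the large-file-on-small-filesystem case the comment describes. A standalone sketch of that arithmetic (simplified types; the caller guarantees `freesp` is nonzero, which this sketch assumes too):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the added squash loop: shift right by 4 (divide by 16)
 * until the prealloc size drops below the free block count.
 * Assumes freesp > 0, as in the kernel caller. */
static uint64_t squash_prealloc(uint64_t alloc_blocks, uint64_t freesp)
{
	while (alloc_blocks >= freesp)
		alloc_blocks >>= 4;
	return alloc_blocks;
}
```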
+1 -1
fs/xfs/xfs_mount.c
··· 658 658 return; 659 659 } 660 660 /* quietly fail */ 661 - xfs_buf_ioerror(bp, EFSCORRUPTED); 661 + xfs_buf_ioerror(bp, EWRONGFS); 662 662 } 663 663 664 664 static void
+1
fs/xfs/xfs_trace.h
··· 341 341 DEFINE_BUF_EVENT(xfs_buf_item_iodone); 342 342 DEFINE_BUF_EVENT(xfs_buf_item_iodone_async); 343 343 DEFINE_BUF_EVENT(xfs_buf_error_relse); 344 + DEFINE_BUF_EVENT(xfs_buf_wait_buftarg); 344 345 DEFINE_BUF_EVENT(xfs_trans_read_buf_io); 345 346 DEFINE_BUF_EVENT(xfs_trans_read_buf_shut); 346 347
+16
include/asm-generic/dma-mapping-broken.h
··· 16 16 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, 17 17 dma_addr_t dma_handle); 18 18 19 + static inline void *dma_alloc_attrs(struct device *dev, size_t size, 20 + dma_addr_t *dma_handle, gfp_t flag, 21 + struct dma_attrs *attrs) 22 + { 23 + /* attrs is not supported and ignored */ 24 + return dma_alloc_coherent(dev, size, dma_handle, flag); 25 + } 26 + 27 + static inline void dma_free_attrs(struct device *dev, size_t size, 28 + void *cpu_addr, dma_addr_t dma_handle, 29 + struct dma_attrs *attrs) 30 + { 31 + /* attrs is not supported and ignored */ 32 + dma_free_coherent(dev, size, cpu_addr, dma_handle); 33 + } 34 + 19 35 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f) 20 36 #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h) 21 37
+2 -4
include/asm-generic/pgtable.h
··· 461 461 return offset_from_zero_pfn <= (zero_page_mask >> PAGE_SHIFT); 462 462 } 463 463 464 - static inline unsigned long my_zero_pfn(unsigned long addr) 465 - { 466 - return page_to_pfn(ZERO_PAGE(addr)); 467 - } 464 + #define my_zero_pfn(addr) page_to_pfn(ZERO_PAGE(addr)) 465 + 468 466 #else 469 467 static inline int is_zero_pfn(unsigned long pfn) 470 468 {
+2
include/asm-generic/syscalls.h
··· 21 21 unsigned long fd, off_t pgoff); 22 22 #endif 23 23 24 + #ifndef CONFIG_GENERIC_SIGALTSTACK 24 25 #ifndef sys_sigaltstack 25 26 asmlinkage long sys_sigaltstack(const stack_t __user *, stack_t __user *, 26 27 struct pt_regs *); 28 + #endif 27 29 #endif 28 30 29 31 #ifndef sys_rt_sigreturn
+5 -3
include/linux/ata.h
··· 297 297 ATA_LOG_SATA_NCQ = 0x10, 298 298 ATA_LOG_SATA_ID_DEV_DATA = 0x30, 299 299 ATA_LOG_SATA_SETTINGS = 0x08, 300 - ATA_LOG_DEVSLP_MDAT = 0x30, 300 + ATA_LOG_DEVSLP_OFFSET = 0x30, 301 + ATA_LOG_DEVSLP_SIZE = 0x08, 302 + ATA_LOG_DEVSLP_MDAT = 0x00, 301 303 ATA_LOG_DEVSLP_MDAT_MASK = 0x1F, 302 - ATA_LOG_DEVSLP_DETO = 0x31, 303 - ATA_LOG_DEVSLP_VALID = 0x37, 304 + ATA_LOG_DEVSLP_DETO = 0x01, 305 + ATA_LOG_DEVSLP_VALID = 0x07, 304 306 ATA_LOG_DEVSLP_VALID_MASK = 0x80, 305 307 306 308 /* READ/WRITE LONG (obsolete) */
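The ata.h hunk above turns the DEVSLP constants into byte offsets within an 8-byte timing section (at offset 0x30 of the SATA Settings log page) rather than absolute log offsets. A sketch of consuming a buffer laid out that way (`devslp_mdat` is a hypothetical helper, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Offsets within the DEVSLP timing section, per the hunk above. */
#define ATA_LOG_DEVSLP_MDAT       0x00
#define ATA_LOG_DEVSLP_MDAT_MASK  0x1F
#define ATA_LOG_DEVSLP_DETO       0x01
#define ATA_LOG_DEVSLP_VALID      0x07
#define ATA_LOG_DEVSLP_VALID_MASK 0x80
#define ATA_LOG_DEVSLP_SIZE       0x08

/* Extract the minimum DEVSLP assertion time if the section's valid
 * bit is set; return -1 otherwise. */
static int devslp_mdat(const uint8_t buf[ATA_LOG_DEVSLP_SIZE])
{
	if (!(buf[ATA_LOG_DEVSLP_VALID] & ATA_LOG_DEVSLP_VALID_MASK))
		return -1;
	return buf[ATA_LOG_DEVSLP_MDAT] & ATA_LOG_DEVSLP_MDAT_MASK;
}
```

This pairs with the libata.h change below, which shrinks the cached copy from a full sector to `devslp_timing[ATA_LOG_DEVSLP_SIZE]`.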
+2
include/linux/console.h
··· 77 77 int con_is_bound(const struct consw *csw); 78 78 int register_con_driver(const struct consw *csw, int first, int last); 79 79 int unregister_con_driver(const struct consw *csw); 80 + int do_unregister_con_driver(const struct consw *csw); 80 81 int take_over_console(const struct consw *sw, int first, int last, int deflt); 82 + int do_take_over_console(const struct consw *sw, int first, int last, int deflt); 81 83 void give_up_console(const struct consw *sw); 82 84 #ifdef CONFIG_HW_CONSOLE 83 85 int con_debug_enter(struct vc_data *vc);
+18 -6
include/linux/efi.h
··· 618 618 #endif 619 619 620 620 /* 621 - * We play games with efi_enabled so that the compiler will, if possible, remove 622 - * EFI-related code altogether. 621 + * We play games with efi_enabled so that the compiler will, if 622 + * possible, remove EFI-related code altogether. 623 623 */ 624 + #define EFI_BOOT 0 /* Were we booted from EFI? */ 625 + #define EFI_SYSTEM_TABLES 1 /* Can we use EFI system tables? */ 626 + #define EFI_CONFIG_TABLES 2 /* Can we use EFI config tables? */ 627 + #define EFI_RUNTIME_SERVICES 3 /* Can we use runtime services? */ 628 + #define EFI_MEMMAP 4 /* Can we use EFI memory map? */ 629 + #define EFI_64BIT 5 /* Is the firmware 64-bit? */ 630 + 624 631 #ifdef CONFIG_EFI 625 632 # ifdef CONFIG_X86 626 - extern int efi_enabled; 627 - extern bool efi_64bit; 633 + extern int efi_enabled(int facility); 628 634 # else 629 - # define efi_enabled 1 635 + static inline int efi_enabled(int facility) 636 + { 637 + return 1; 638 + } 630 639 # endif 631 640 #else 632 - # define efi_enabled 0 641 + static inline int efi_enabled(int facility) 642 + { 643 + return 0; 644 + } 633 645 #endif 634 646 635 647 /*
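The efi.h hunk above replaces the single `efi_enabled` flag with a per-facility query, so callers ask about one capability (boot, runtime services, memmap, ...) at a time. A userspace sketch of the bitmask-backed shape this takes on x86 (the mask variable and setter are illustrative; the real x86 implementation tests bits of an arch-private facility word):

```c
#include <assert.h>

/* Facility numbers from the hunk above. */
#define EFI_BOOT             0
#define EFI_RUNTIME_SERVICES 3
#define EFI_64BIT            5

/* One bit per discovered facility. */
static unsigned long efi_facilities;

static int efi_enabled(int facility)
{
	return (efi_facilities >> facility) & 1UL;
}

static void efi_set_facility(int facility)
{
	efi_facilities |= 1UL << facility;
}
```

This is why init/main.c below now tests `efi_enabled(EFI_RUNTIME_SERVICES)` instead of the old global: a machine can be EFI-booted without usable runtime services.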
+2 -2
include/linux/libata.h
··· 652 652 u32 gscr[SATA_PMP_GSCR_DWORDS]; /* PMP GSCR block */ 653 653 }; 654 654 655 - /* Identify Device Data Log (30h), SATA Settings (page 08h) */ 656 - u8 sata_settings[ATA_SECT_SIZE]; 655 + /* DEVSLP Timing Variables from Identify Device Data Log */ 656 + u8 devslp_timing[ATA_LOG_DEVSLP_SIZE]; 657 657 658 658 /* error history */ 659 659 int spdn_cnt;
-2
include/linux/mfd/abx500.h
··· 272 272 const struct abx500_fg_parameters *fg_params; 273 273 }; 274 274 275 - extern struct abx500_bm_data ab8500_bm_data; 276 - 277 275 enum { 278 276 NTC_EXTERNAL = 0, 279 277 NTC_INTERNAL,
+4 -25
include/linux/mfd/abx500/ab8500-bm.h
··· 422 422 struct ab8500_btemp; 423 423 struct ab8500_gpadc; 424 424 struct ab8500_fg; 425 + 425 426 #ifdef CONFIG_AB8500_BM 427 + extern struct abx500_bm_data ab8500_bm_data; 428 + 426 429 void ab8500_fg_reinit(void); 427 430 void ab8500_charger_usb_state_changed(u8 bm_usb_state, u16 mA); 428 431 struct ab8500_btemp *ab8500_btemp_get(void); ··· 437 434 int ab8500_fg_inst_curr_done(struct ab8500_fg *di); 438 435 439 436 #else 440 - int ab8500_fg_inst_curr_done(struct ab8500_fg *di) 441 - { 442 - } 443 - static void ab8500_fg_reinit(void) 444 - { 445 - } 446 - static void ab8500_charger_usb_state_changed(u8 bm_usb_state, u16 mA) 447 - { 448 - } 449 - static struct ab8500_btemp *ab8500_btemp_get(void) 450 - { 451 - return NULL; 452 - } 453 - static int ab8500_btemp_get_batctrl_temp(struct ab8500_btemp *btemp) 454 - { 455 - return 0; 456 - } 457 - struct ab8500_fg *ab8500_fg_get(void) 458 - { 459 - return NULL; 460 - } 461 - static int ab8500_fg_inst_curr_blocking(struct ab8500_fg *dev) 462 - { 463 - return -ENODEV; 464 - } 437 + static struct abx500_bm_data ab8500_bm_data; 465 438 466 439 static inline int ab8500_fg_inst_curr_start(struct ab8500_fg *di) 467 440 {
+62 -4
include/linux/mfd/da9052/da9052.h
··· 99 99 u8 chip_id; 100 100 101 101 int chip_irq; 102 + 103 + /* SOC I/O transfer related fixes for DA9052/53 */ 104 + int (*fix_io) (struct da9052 *da9052, unsigned char reg); 102 105 }; 103 106 104 107 /* ADC API */ ··· 116 113 ret = regmap_read(da9052->regmap, reg, &val); 117 114 if (ret < 0) 118 115 return ret; 116 + 117 + if (da9052->fix_io) { 118 + ret = da9052->fix_io(da9052, reg); 119 + if (ret < 0) 120 + return ret; 121 + } 122 + 119 123 return val; 120 124 } 121 125 122 126 static inline int da9052_reg_write(struct da9052 *da9052, unsigned char reg, 123 127 unsigned char val) 124 128 { 125 - return regmap_write(da9052->regmap, reg, val); 129 + int ret; 130 + 131 + ret = regmap_write(da9052->regmap, reg, val); 132 + if (ret < 0) 133 + return ret; 134 + 135 + if (da9052->fix_io) { 136 + ret = da9052->fix_io(da9052, reg); 137 + if (ret < 0) 138 + return ret; 139 + } 140 + 141 + return ret; 126 142 } 127 143 128 144 static inline int da9052_group_read(struct da9052 *da9052, unsigned char reg, 129 145 unsigned reg_cnt, unsigned char *val) 130 146 { 131 - return regmap_bulk_read(da9052->regmap, reg, val, reg_cnt); 147 + int ret; 148 + 149 + ret = regmap_bulk_read(da9052->regmap, reg, val, reg_cnt); 150 + if (ret < 0) 151 + return ret; 152 + 153 + if (da9052->fix_io) { 154 + ret = da9052->fix_io(da9052, reg); 155 + if (ret < 0) 156 + return ret; 157 + } 158 + 159 + return ret; 132 160 } 133 161 134 162 static inline int da9052_group_write(struct da9052 *da9052, unsigned char reg, 135 163 unsigned reg_cnt, unsigned char *val) 136 164 { 137 - return regmap_raw_write(da9052->regmap, reg, val, reg_cnt); 165 + int ret; 166 + 167 + ret = regmap_raw_write(da9052->regmap, reg, val, reg_cnt); 168 + if (ret < 0) 169 + return ret; 170 + 171 + if (da9052->fix_io) { 172 + ret = da9052->fix_io(da9052, reg); 173 + if (ret < 0) 174 + return ret; 175 + } 176 + 177 + return ret; 138 178 } 139 179 140 180 static inline int da9052_reg_update(struct da9052 *da9052, unsigned char reg, 141 181 unsigned char bit_mask, 142 182 unsigned char reg_val) 143 183 { 144 - return regmap_update_bits(da9052->regmap, reg, bit_mask, reg_val); 184 + int ret; 185 + 186 + ret = regmap_update_bits(da9052->regmap, reg, bit_mask, reg_val); 187 + if (ret < 0) 188 + return ret; 189 + 190 + if (da9052->fix_io) { 191 + ret = da9052->fix_io(da9052, reg); 192 + if (ret < 0) 193 + return ret; 194 + } 195 + 196 + return ret; 145 197 } 146 198 147 199 int da9052_device_init(struct da9052 *da9052, u8 chip_id);
+3
include/linux/mfd/da9052/reg.h
··· 34 34 #define DA9052_STATUS_C_REG 3 35 35 #define DA9052_STATUS_D_REG 4 36 36 37 + /* PARK REGISTER */ 38 + #define DA9052_PARK_REGISTER DA9052_STATUS_D_REG 39 + 37 40 /* EVENT REGISTERS */ 38 41 #define DA9052_EVENT_A_REG 5 39 42 #define DA9052_EVENT_B_REG 6
+3
include/linux/mfd/rtsx_common.h
··· 38 38 #define RTSX_SD_CARD 0 39 39 #define RTSX_MS_CARD 1 40 40 41 + #define CLK_TO_DIV_N 0 42 + #define DIV_N_TO_CLK 1 43 + 41 44 struct platform_device; 42 45 43 46 struct rtsx_slot {
+21 -4
include/linux/mfd/rtsx_pci.h
··· 158 158 #define SG_TRANS_DATA (0x02 << 4) 159 159 #define SG_LINK_DESC (0x03 << 4) 160 160 161 - /* SD bank voltage */ 162 - #define SD_IO_3V3 0 163 - #define SD_IO_1V8 1 164 - 161 + /* Output voltage */ 162 + #define OUTPUT_3V3 0 163 + #define OUTPUT_1V8 1 165 164 166 165 /* Card Clock Enable Register */ 167 166 #define SD_CLK_EN 0x04 ··· 200 201 #define CHANGE_CLK 0x01 201 202 202 203 /* LDO_CTL */ 204 + #define BPP_ASIC_1V7 0x00 205 + #define BPP_ASIC_1V8 0x01 206 + #define BPP_ASIC_1V9 0x02 207 + #define BPP_ASIC_2V0 0x03 208 + #define BPP_ASIC_2V7 0x04 209 + #define BPP_ASIC_2V8 0x05 210 + #define BPP_ASIC_3V2 0x06 211 + #define BPP_ASIC_3V3 0x07 212 + #define BPP_REG_TUNED18 0x07 213 + #define BPP_TUNED18_SHIFT_8402 5 214 + #define BPP_TUNED18_SHIFT_8411 4 215 + #define BPP_PAD_MASK 0x04 216 + #define BPP_PAD_3V3 0x04 217 + #define BPP_PAD_1V8 0x00 203 218 #define BPP_LDO_POWB 0x03 204 219 #define BPP_LDO_ON 0x00 205 220 #define BPP_LDO_SUSPEND 0x02 ··· 701 688 int (*disable_auto_blink)(struct rtsx_pcr *pcr); 702 689 int (*card_power_on)(struct rtsx_pcr *pcr, int card); 703 690 int (*card_power_off)(struct rtsx_pcr *pcr, int card); 691 + int (*switch_output_voltage)(struct rtsx_pcr *pcr, 692 + u8 voltage); 704 693 unsigned int (*cd_deglitch)(struct rtsx_pcr *pcr); 694 + int (*conv_clk_and_div_n)(int clk, int dir); 705 695 }; 706 696 707 697 enum PDEV_STAT {PDEV_STAT_IDLE, PDEV_STAT_RUN}; ··· 799 783 u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk); 800 784 int rtsx_pci_card_power_on(struct rtsx_pcr *pcr, int card); 801 785 int rtsx_pci_card_power_off(struct rtsx_pcr *pcr, int card); 786 + int rtsx_pci_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage); 802 787 unsigned int rtsx_pci_card_exist(struct rtsx_pcr *pcr); 803 788 void rtsx_pci_complete_unfinished_transfer(struct rtsx_pcr *pcr); 804 789
+5 -5
include/linux/module.h
··· 199 199 struct module *source, *target; 200 200 }; 201 201 202 - enum module_state 203 - { 204 - MODULE_STATE_LIVE, 205 - MODULE_STATE_COMING, 206 - MODULE_STATE_GOING, 202 + enum module_state { 203 + MODULE_STATE_LIVE, /* Normal state. */ 204 + MODULE_STATE_COMING, /* Full formed, running module_init. */ 205 + MODULE_STATE_GOING, /* Going away. */ 206 + MODULE_STATE_UNFORMED, /* Still setting it up. */ 207 207 }; 208 208 209 209 /**
-1
include/linux/ptrace.h
··· 45 45 extern int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst, int len); 46 46 extern int ptrace_writedata(struct task_struct *tsk, char __user *src, unsigned long dst, int len); 47 47 extern void ptrace_disable(struct task_struct *); 48 - extern int ptrace_check_attach(struct task_struct *task, bool ignore_state); 49 48 extern int ptrace_request(struct task_struct *child, long request, 50 49 unsigned long addr, unsigned long data); 51 50 extern void ptrace_notify(int exit_code);
+10 -1
include/linux/sched.h
··· 2714 2714 extern void recalc_sigpending_and_wake(struct task_struct *t); 2715 2715 extern void recalc_sigpending(void); 2716 2716 2717 - extern void signal_wake_up(struct task_struct *t, int resume_stopped); 2717 + extern void signal_wake_up_state(struct task_struct *t, unsigned int state); 2718 + 2719 + static inline void signal_wake_up(struct task_struct *t, bool resume) 2720 + { 2721 + signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0); 2722 + } 2723 + static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume) 2724 + { 2725 + signal_wake_up_state(t, resume ? __TASK_TRACED : 0); 2726 + } 2718 2727 2719 2728 /* 2720 2729 * Wrappers for p->thread_info->cpu access. No-op on UP.
+46 -13
include/linux/security.h
··· 989 989 * tells the LSM to decrement the number of secmark labeling rules loaded 990 990 * @req_classify_flow: 991 991 * Sets the flow's sid to the openreq sid. 992 + * @tun_dev_alloc_security: 993 + * This hook allows a module to allocate a security structure for a TUN 994 + * device. 995 + * @security pointer to a security structure pointer. 996 + * Returns a zero on success, negative values on failure. 997 + * @tun_dev_free_security: 998 + * This hook allows a module to free the security structure for a TUN 999 + * device. 1000 + * @security pointer to the TUN device's security structure 992 1001 * @tun_dev_create: 993 1002 * Check permissions prior to creating a new TUN device. 994 - * @tun_dev_post_create: 995 - * This hook allows a module to update or allocate a per-socket security 996 - * structure. 997 - * @sk contains the newly created sock structure. 1003 + * @tun_dev_attach_queue: 1004 + * Check permissions prior to attaching to a TUN device queue. 1005 + * @security pointer to the TUN device's security structure. 998 1006 * @tun_dev_attach: 999 - * Check permissions prior to attaching to a persistent TUN device. This 1000 - * hook can also be used by the module to update any security state 1007 + * This hook can be used by the module to update any security state 1001 1008 * associated with the TUN device's sock structure. 1002 1009 * @sk contains the existing sock structure. 1010 + * @security pointer to the TUN device's security structure. 1011 + * @tun_dev_open: 1012 + * This hook can be used by the module to update any security state 1013 + * associated with the TUN device's security structure. 1014 + * @security pointer to the TUN devices's security structure. 1003 1015 * 1004 1016 * Security hooks for XFRM operations. 1005 1017 * ··· 1632 1620 void (*secmark_refcount_inc) (void); 1633 1621 void (*secmark_refcount_dec) (void); 1634 1622 void (*req_classify_flow) (const struct request_sock *req, struct flowi *fl); 1635 - int (*tun_dev_create)(void); 1636 - void (*tun_dev_post_create)(struct sock *sk); 1637 - int (*tun_dev_attach)(struct sock *sk); 1623 + int (*tun_dev_alloc_security) (void **security); 1624 + void (*tun_dev_free_security) (void *security); 1625 + int (*tun_dev_create) (void); 1626 + int (*tun_dev_attach_queue) (void *security); 1627 + int (*tun_dev_attach) (struct sock *sk, void *security); 1628 + int (*tun_dev_open) (void *security); 1638 1629 #endif /* CONFIG_SECURITY_NETWORK */ 1639 1630 1640 1631 #ifdef CONFIG_SECURITY_NETWORK_XFRM ··· 2581 2566 int security_secmark_relabel_packet(u32 secid); 2582 2567 void security_secmark_refcount_inc(void); 2583 2568 void security_secmark_refcount_dec(void); 2569 + int security_tun_dev_alloc_security(void **security); 2570 + void security_tun_dev_free_security(void *security); 2584 2571 int security_tun_dev_create(void); 2585 - void security_tun_dev_post_create(struct sock *sk); 2586 - int security_tun_dev_attach(struct sock *sk); 2572 + int security_tun_dev_attach_queue(void *security); 2573 + int security_tun_dev_attach(struct sock *sk, void *security); 2574 + int security_tun_dev_open(void *security); 2587 2575 2588 2576 #else /* CONFIG_SECURITY_NETWORK */ 2589 2577 static inline int security_unix_stream_connect(struct sock *sock, ··· 2751 2733 { 2752 2734 } 2753 2735 2736 + static inline int security_tun_dev_alloc_security(void **security) 2737 + { 2738 + return 0; 2739 + } 2740 + 2741 + static inline void security_tun_dev_free_security(void *security) 2742 + { 2743 + } 2744 + 2754 2745 static inline int security_tun_dev_create(void) 2755 2746 { 2756 2747 return 0; 2757 2748 } 2758 2749 2759 - static inline void security_tun_dev_post_create(struct sock *sk) 2750 + static inline int security_tun_dev_attach_queue(void *security) 2760 2751 { 2752 + return 0; 2761 2753 } 2762 2754 2763 - static inline int security_tun_dev_attach(struct sock *sk) 2755 + static inline int security_tun_dev_attach(struct sock *sk, void *security) 2756 + { 2757 + return 0; 2758 + } 2759 + 2760 + static inline int security_tun_dev_open(void *security) 2764 2761 { 2765 2762 return 0; 2766 2763 }
+1
include/linux/usb/usbnet.h
··· 100 100 #define FLAG_LINK_INTR 0x0800 /* updates link (carrier) status */ 101 101 102 102 #define FLAG_POINTTOPOINT 0x1000 /* possibly use "usb%d" names */ 103 + #define FLAG_NOARP 0x2000 /* device can't do ARP */ 103 104 104 105 /* 105 106 * Indicates to usbnet, that USB driver accumulates multiple IP packets.
+2
include/linux/vt_kern.h
··· 130 130 int vt_waitactive(int n); 131 131 void change_console(struct vc_data *new_vc); 132 132 void reset_vc(struct vc_data *vc); 133 + extern int do_unbind_con_driver(const struct consw *csw, int first, int last, 134 + int deflt); 133 135 extern int unbind_con_driver(const struct consw *csw, int first, int last, 134 136 int deflt); 135 137 int vty_init(const struct file_operations *console_fops);
+2
include/net/ip.h
··· 143 143 extern int ip4_datagram_connect(struct sock *sk, 144 144 struct sockaddr *uaddr, int addr_len); 145 145 146 + extern void ip4_datagram_release_cb(struct sock *sk); 147 + 146 148 struct ip_reply_arg { 147 149 struct kvec iov[1]; 148 150 int flags;
+2
include/net/netfilter/nf_conntrack_core.h
··· 31 31 extern int nf_conntrack_proto_init(struct net *net); 32 32 extern void nf_conntrack_proto_fini(struct net *net); 33 33 34 + extern void nf_conntrack_cleanup_end(void); 35 + 34 36 extern bool 35 37 nf_ct_get_tuple(const struct sk_buff *skb, 36 38 unsigned int nhoff,
+2 -1
include/uapi/linux/serial_core.h
··· 50 50 #define PORT_LPC3220 22 /* NXP LPC32xx SoC "Standard" UART */ 51 51 #define PORT_8250_CIR 23 /* CIR infrared port, has its own driver */ 52 52 #define PORT_XR17V35X 24 /* Exar XR17V35x UARTs */ 53 - #define PORT_MAX_8250 24 /* max port ID */ 53 + #define PORT_BRCM_TRUMANAGE 24 54 + #define PORT_MAX_8250 25 /* max port ID */ 54 55 55 56 /* 56 57 * ARM specific type numbers. These are not currently guaranteed
+4
init/do_mounts_initrd.c
··· 36 36 static int init_linuxrc(struct subprocess_info *info, struct cred *new) 37 37 { 38 38 sys_unshare(CLONE_FS | CLONE_FILES); 39 + /* stdin/stdout/stderr for /linuxrc */ 40 + sys_open("/dev/console", O_RDWR, 0); 41 + sys_dup(0); 42 + sys_dup(0); 39 43 /* move initrd over / and chdir/chroot in initrd root */ 40 44 sys_chdir("/root"); 41 45 sys_mount(".", "/", NULL, MS_MOVE, NULL);
+4 -4
init/main.c
··· 604 604 pidmap_init(); 605 605 anon_vma_init(); 606 606 #ifdef CONFIG_X86 607 - if (efi_enabled) 607 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 608 608 efi_enter_virtual_mode(); 609 609 #endif 610 610 thread_info_cache_init(); ··· 632 632 acpi_early_init(); /* before LAPIC and SMP init */ 633 633 sfi_init_late(); 634 634 635 - if (efi_enabled) { 635 + if (efi_enabled(EFI_RUNTIME_SERVICES)) { 636 636 efi_late_init(); 637 637 efi_free_boot_services(); 638 638 } ··· 802 802 (const char __user *const __user *)envp_init); 803 803 } 804 804 805 - static void __init kernel_init_freeable(void); 805 + static noinline void __init kernel_init_freeable(void); 806 806 807 807 static int __ref kernel_init(void *unused) 808 808 { ··· 845 845 "See Linux Documentation/init.txt for guidance."); 846 846 } 847 847 848 - static void __init kernel_init_freeable(void) 848 + static noinline void __init kernel_init_freeable(void) 849 849 { 850 850 /* 851 851 * Wait until kthreadd is all set-up.
+20 -7
kernel/async.c
··· 86 86 */ 87 87 static async_cookie_t __lowest_in_progress(struct async_domain *running) 88 88 { 89 + async_cookie_t first_running = next_cookie; /* infinity value */ 90 + async_cookie_t first_pending = next_cookie; /* ditto */ 89 91 struct async_entry *entry; 90 92 93 + /* 94 + * Both running and pending lists are sorted but not disjoint. 95 + * Take the first cookies from both and return the min. 96 + */ 91 97 if (!list_empty(&running->domain)) { 92 98 entry = list_first_entry(&running->domain, typeof(*entry), list); 93 - return entry->cookie; 99 + first_running = entry->cookie; 94 100 } 95 101 96 - list_for_each_entry(entry, &async_pending, list) 97 - if (entry->running == running) 98 - return entry->cookie; 102 + list_for_each_entry(entry, &async_pending, list) { 103 + if (entry->running == running) { 104 + first_pending = entry->cookie; 105 + break; 106 + } 107 + } 99 108 100 - return next_cookie; /* "infinity" value */ 109 + return min(first_running, first_pending); 101 110 } 102 111 103 112 static async_cookie_t lowest_in_progress(struct async_domain *running) ··· 127 118 { 128 119 struct async_entry *entry = 129 120 container_of(work, struct async_entry, work); 121 + struct async_entry *pos; 130 122 unsigned long flags; 131 123 ktime_t uninitialized_var(calltime), delta, rettime; 132 124 struct async_domain *running = entry->running; 133 125 134 - /* 1) move self to the running queue */ 126 + /* 1) move self to the running queue, make sure it stays sorted */ 135 127 spin_lock_irqsave(&async_lock, flags); 136 - list_move_tail(&entry->list, &running->domain); 128 + list_for_each_entry_reverse(pos, &running->domain, list) 129 + if (entry->cookie < pos->cookie) 130 + break; 131 + list_move_tail(&entry->list, &pos->list); 137 132 spin_unlock_irqrestore(&async_lock, flags); 138 133 139 134 /* 2) run (and print duration) */
+15 -8
kernel/compat.c
··· 535 535 return 0; 536 536 } 537 537 538 - asmlinkage long 539 - compat_sys_wait4(compat_pid_t pid, compat_uint_t __user *stat_addr, int options, 540 - struct compat_rusage __user *ru) 538 + COMPAT_SYSCALL_DEFINE4(wait4, 539 + compat_pid_t, pid, 540 + compat_uint_t __user *, stat_addr, 541 + int, options, 542 + struct compat_rusage __user *, ru) 541 543 { 542 544 if (!ru) { 543 545 return sys_wait4(pid, stat_addr, options, NULL); ··· 566 564 } 567 565 } 568 566 569 - asmlinkage long compat_sys_waitid(int which, compat_pid_t pid, 570 - struct compat_siginfo __user *uinfo, int options, 571 - struct compat_rusage __user *uru) 567 + COMPAT_SYSCALL_DEFINE5(waitid, 568 + int, which, compat_pid_t, pid, 569 + struct compat_siginfo __user *, uinfo, int, options, 570 + struct compat_rusage __user *, uru) 572 571 { 573 572 siginfo_t info; 574 573 struct rusage ru; ··· 587 584 return ret; 588 585 589 586 if (uru) { 590 - ret = put_compat_rusage(&ru, uru); 587 + /* sys_waitid() overwrites everything in ru */ 588 + if (COMPAT_USE_64BIT_TIME) 589 + ret = copy_to_user(uru, &ru, sizeof(ru)); 590 + else 591 + ret = put_compat_rusage(&ru, uru); 591 592 if (ret) 592 593 return ret; 593 594 } ··· 1001 994 sigset_from_compat(&s, &s32); 1002 995 1003 996 if (uts) { 1004 - if (get_compat_timespec(&t, uts)) 997 + if (compat_get_timespec(&t, uts)) 1005 998 return -EFAULT; 1006 999 } 1007 1000
+2
kernel/debug/kdb/kdb_main.c
··· 1970 1970 1971 1971 kdb_printf("Module Size modstruct Used by\n"); 1972 1972 list_for_each_entry(mod, kdb_modules, list) { 1973 + if (mod->state == MODULE_STATE_UNFORMED) 1974 + continue; 1973 1975 1974 1976 kdb_printf("%-20s%8u 0x%p ", mod->name, 1975 1977 mod->core_size, (void *)mod);
+18 -2
kernel/events/core.c
··· 908 908 } 909 909 910 910 /* 911 + * Initialize event state based on the perf_event_attr::disabled. 912 + */ 913 + static inline void perf_event__state_init(struct perf_event *event) 914 + { 915 + event->state = event->attr.disabled ? PERF_EVENT_STATE_OFF : 916 + PERF_EVENT_STATE_INACTIVE; 917 + } 918 + 919 + /* 911 920 * Called at perf_event creation and when events are attached/detached from a 912 921 * group. 913 922 */ ··· 6188 6179 event->overflow_handler = overflow_handler; 6189 6180 event->overflow_handler_context = context; 6190 6181 6191 - if (attr->disabled) 6192 - event->state = PERF_EVENT_STATE_OFF; 6182 + perf_event__state_init(event); 6193 6183 6194 6184 pmu = NULL; 6195 6185 ··· 6617 6609 6618 6610 mutex_lock(&gctx->mutex); 6619 6611 perf_remove_from_context(group_leader); 6612 + 6613 + /* 6614 + * Removing from the context ends up with disabled 6615 + * event. What we want here is event in the initial 6616 + * startup state, ready to be add into new context. 6617 + */ 6618 + perf_event__state_init(group_leader); 6620 6619 list_for_each_entry(sibling, &group_leader->sibling_list, 6621 6620 group_entry) { 6622 6621 perf_remove_from_context(sibling); 6622 + perf_event__state_init(sibling); 6623 6623 put_ctx(gctx); 6624 6624 } 6625 6625 mutex_unlock(&gctx->mutex);
+4 -2
kernel/fork.c
··· 1668 1668 int, tls_val) 1669 1669 #endif 1670 1670 { 1671 - return do_fork(clone_flags, newsp, 0, 1672 - parent_tidptr, child_tidptr); 1671 + long ret = do_fork(clone_flags, newsp, 0, parent_tidptr, child_tidptr); 1672 + asmlinkage_protect(5, ret, clone_flags, newsp, 1673 + parent_tidptr, child_tidptr, tls_val); 1674 + return ret; 1673 1675 } 1674 1676 #endif 1675 1677
+108 -46
kernel/module.c
··· 188 188 ongoing or failed initialization etc. */ 189 189 static inline int strong_try_module_get(struct module *mod) 190 190 { 191 + BUG_ON(mod && mod->state == MODULE_STATE_UNFORMED); 191 192 if (mod && mod->state == MODULE_STATE_COMING) 192 193 return -EBUSY; 193 194 if (try_module_get(mod)) ··· 344 343 #endif 345 344 }; 346 345 346 + if (mod->state == MODULE_STATE_UNFORMED) 347 + continue; 348 + 347 349 if (each_symbol_in_section(arr, ARRAY_SIZE(arr), mod, fn, data)) 348 350 return true; 349 351 } ··· 454 450 EXPORT_SYMBOL_GPL(find_symbol); 455 451 456 452 /* Search for module by name: must hold module_mutex. */ 457 - struct module *find_module(const char *name) 453 + static struct module *find_module_all(const char *name, 454 + bool even_unformed) 458 455 { 459 456 struct module *mod; 460 457 461 458 list_for_each_entry(mod, &modules, list) { 459 + if (!even_unformed && mod->state == MODULE_STATE_UNFORMED) 460 + continue; 462 461 if (strcmp(mod->name, name) == 0) 463 462 return mod; 464 463 } 465 464 return NULL; 465 + } 466 + 467 + struct module *find_module(const char *name) 468 + { 469 + return find_module_all(name, false); 466 470 } 467 471 EXPORT_SYMBOL_GPL(find_module); 468 472 ··· 537 525 preempt_disable(); 538 526 539 527 list_for_each_entry_rcu(mod, &modules, list) { 528 + if (mod->state == MODULE_STATE_UNFORMED) 529 + continue; 540 530 if (!mod->percpu_size) 541 531 continue; 542 532 for_each_possible_cpu(cpu) { ··· 1062 1048 case MODULE_STATE_GOING: 1063 1049 state = "going"; 1064 1050 break; 1051 + default: 1052 + BUG(); 1065 1053 } 1066 1054 return sprintf(buffer, "%s\n", state); 1067 1055 } ··· 1802 1786 1803 1787 mutex_lock(&module_mutex); 1804 1788 list_for_each_entry_rcu(mod, &modules, list) { 1789 + if (mod->state == MODULE_STATE_UNFORMED) 1790 + continue; 1805 1791 if ((mod->module_core) && (mod->core_text_size)) { 1806 1792 set_page_attributes(mod->module_core, 1807 1793 mod->module_core + mod->core_text_size, ··· 1825 1807 1826 1808
mutex_lock(&module_mutex); 1827 1809 list_for_each_entry_rcu(mod, &modules, list) { 1810 + if (mod->state == MODULE_STATE_UNFORMED) 1811 + continue; 1828 1812 if ((mod->module_core) && (mod->core_text_size)) { 1829 1813 set_page_attributes(mod->module_core, 1830 1814 mod->module_core + mod->core_text_size, ··· 2547 2527 err = -EFBIG; 2548 2528 goto out; 2549 2529 } 2530 + 2531 + /* Don't hand 0 to vmalloc, it whines. */ 2532 + if (stat.size == 0) { 2533 + err = -EINVAL; 2534 + goto out; 2535 + } 2536 + 2550 2537 info->hdr = vmalloc(stat.size); 2551 2538 if (!info->hdr) { 2552 2539 err = -ENOMEM; ··· 3017 2990 bool ret; 3018 2991 3019 2992 mutex_lock(&module_mutex); 3020 - mod = find_module(name); 3021 - ret = !mod || mod->state != MODULE_STATE_COMING; 2993 + mod = find_module_all(name, true); 2994 + ret = !mod || mod->state == MODULE_STATE_LIVE 2995 + || mod->state == MODULE_STATE_GOING; 3022 2996 mutex_unlock(&module_mutex); 3023 2997 3024 2998 return ret; ··· 3164 3136 goto free_copy; 3165 3137 } 3166 3138 3139 + /* 3140 + * We try to place it in the list now to make sure it's unique 3141 + * before we dedicate too many resources. In particular, 3142 + * temporary percpu memory exhaustion. 3143 + */ 3144 + mod->state = MODULE_STATE_UNFORMED; 3145 + again: 3146 + mutex_lock(&module_mutex); 3147 + if ((old = find_module_all(mod->name, true)) != NULL) { 3148 + if (old->state == MODULE_STATE_COMING 3149 + || old->state == MODULE_STATE_UNFORMED) { 3150 + /* Wait in case it fails to load.
*/ 3151 + mutex_unlock(&module_mutex); 3152 + err = wait_event_interruptible(module_wq, 3153 + finished_loading(mod->name)); 3154 + if (err) 3155 + goto free_module; 3156 + goto again; 3157 + } 3158 + err = -EEXIST; 3159 + mutex_unlock(&module_mutex); 3160 + goto free_module; 3161 + } 3162 + list_add_rcu(&mod->list, &modules); 3163 + mutex_unlock(&module_mutex); 3164 + 3167 3165 #ifdef CONFIG_MODULE_SIG 3168 3166 mod->sig_ok = info->sig_ok; 3169 3167 if (!mod->sig_ok) ··· 3199 3145 /* Now module is in final location, initialize linked lists, etc. */ 3200 3146 err = module_unload_init(mod); 3201 3147 if (err) 3202 - goto free_module; 3148 + goto unlink_mod; 3203 3149 3204 3150 /* Now we've got everything in the final locations, we can 3205 3151 * find optional sections. */ ··· 3234 3180 goto free_arch_cleanup; 3235 3181 } 3236 3182 3237 - /* Mark state as coming so strong_try_module_get() ignores us. */ 3238 - mod->state = MODULE_STATE_COMING; 3239 - 3240 - /* Now sew it into the lists so we can get lockdep and oops 3241 - * info during argument parsing. No one should access us, since 3242 - * strong_try_module_get() will fail. 3243 - * lockdep/oops can run asynchronous, so use the RCU list insertion 3244 - * function to insert in a way safe to concurrent readers. 3245 - * The mutex protects against concurrent writers. 3246 - */ 3247 - again: 3248 - mutex_lock(&module_mutex); 3249 - if ((old = find_module(mod->name)) != NULL) { 3250 - if (old->state == MODULE_STATE_COMING) { 3251 - /* Wait in case it fails to load. */ 3252 - mutex_unlock(&module_mutex); 3253 - err = wait_event_interruptible(module_wq, 3254 - finished_loading(mod->name)); 3255 - if (err) 3256 - goto free_arch_cleanup; 3257 - goto again; 3258 - } 3259 - err = -EEXIST; 3260 - goto unlock; 3261 - } 3262 - 3263 - /* This has to be done once we're sure module name is unique.
*/ 3264 3183 dynamic_debug_setup(info->debug, info->num_debug); 3265 3184 3266 - /* Find duplicate symbols */ 3185 + mutex_lock(&module_mutex); 3186 + /* Find duplicate symbols (must be called under lock). */ 3267 3187 err = verify_export_symbols(mod); 3268 3188 if (err < 0) 3269 - goto ddebug; 3189 + goto ddebug_cleanup; 3270 3190 3191 + /* This relies on module_mutex for list integrity. */ 3271 3192 module_bug_finalize(info->hdr, info->sechdrs, mod); 3272 - list_add_rcu(&mod->list, &modules); 3193 + 3194 + /* Mark state as coming so strong_try_module_get() ignores us, 3195 + * but kallsyms etc. can see us. */ 3196 + mod->state = MODULE_STATE_COMING; 3197 + 3273 3198 mutex_unlock(&module_mutex); 3274 3199 3275 3200 /* Module is ready to execute: parsing args may do that. */ 3276 3201 err = parse_args(mod->name, mod->args, mod->kp, mod->num_kp, 3277 3202 -32768, 32767, &ddebug_dyndbg_module_param_cb); 3278 3203 if (err < 0) 3279 - goto unlink; 3204 + goto bug_cleanup; 3280 3205 3281 3206 /* Link in to syfs. */ 3282 3207 err = mod_sysfs_setup(mod, info, mod->kp, mod->num_kp); 3283 3208 if (err < 0) 3284 - goto unlink; 3209 + goto bug_cleanup; 3285 3210 3286 3211 /* Get rid of temporary copy. */ 3287 3212 free_copy(info); ··· 3270 3237 3271 3238 return do_init_module(mod); 3272 3239 3273 - unlink: 3240 + bug_cleanup: 3241 + /* module_bug_cleanup needs module_mutex protection */ 3274 3242 mutex_lock(&module_mutex); 3275 - /* Unlink carefully: kallsyms could be walking list.
*/ 3276 - list_del_rcu(&mod->list); 3277 3243 module_bug_cleanup(mod); 3278 - wake_up_all(&module_wq); 3279 - ddebug: 3280 - dynamic_debug_remove(info->debug); 3281 - unlock: 3244 + ddebug_cleanup: 3282 3245 mutex_unlock(&module_mutex); 3246 + dynamic_debug_remove(info->debug); 3283 3247 synchronize_sched(); 3284 3248 kfree(mod->args); 3285 3249 free_arch_cleanup: ··· 3285 3255 free_modinfo(mod); 3286 3256 free_unload: 3287 3257 module_unload_free(mod); 3258 + unlink_mod: 3259 + mutex_lock(&module_mutex); 3260 + /* Unlink carefully: kallsyms could be walking list. */ 3261 + list_del_rcu(&mod->list); 3262 + wake_up_all(&module_wq); 3263 + mutex_unlock(&module_mutex); 3288 3264 free_module: 3289 3265 module_deallocate(mod, info); 3290 3266 free_copy: ··· 3413 3377 3414 3378 preempt_disable(); 3415 3379 list_for_each_entry_rcu(mod, &modules, list) { 3380 + if (mod->state == MODULE_STATE_UNFORMED) 3381 + continue; 3416 3382 if (within_module_init(addr, mod) || 3417 3383 within_module_core(addr, mod)) { 3418 3384 if (modname) ··· 3438 3400 3439 3401 preempt_disable(); 3440 3402 list_for_each_entry_rcu(mod, &modules, list) { 3403 + if (mod->state == MODULE_STATE_UNFORMED) 3404 + continue; 3441 3405 if (within_module_init(addr, mod) || 3442 3406 within_module_core(addr, mod)) { 3443 3407 const char *sym; ··· 3464 3424 3465 3425 preempt_disable(); 3466 3426 list_for_each_entry_rcu(mod, &modules, list) { 3427 + if (mod->state == MODULE_STATE_UNFORMED) 3428 + continue; 3467 3429 if (within_module_init(addr, mod) || 3468 3430 within_module_core(addr, mod)) { 3469 3431 const char *sym; ··· 3493 3451 3494 3452 preempt_disable(); 3495 3453 list_for_each_entry_rcu(mod, &modules, list) { 3454 + if (mod->state == MODULE_STATE_UNFORMED) 3455 + continue; 3496 3456 if (symnum < mod->num_symtab) { 3497 3457 *value = mod->symtab[symnum].st_value; 3498 3458 *type = mod->symtab[symnum].st_info; ··· 3537 3493 ret = mod_find_symname(mod, colon+1); 3538 3494 *colon = ':'; 3539 3495 } else {
3540 - list_for_each_entry_rcu(mod, &modules, list) 3496 + list_for_each_entry_rcu(mod, &modules, list) { 3497 + if (mod->state == MODULE_STATE_UNFORMED) 3498 + continue; 3541 3499 if ((ret = mod_find_symname(mod, name)) != 0) 3542 3500 break; 3501 + } 3543 3502 } 3544 3503 preempt_enable(); 3545 3504 return ret; ··· 3557 3510 int ret; 3558 3511 3559 3512 list_for_each_entry(mod, &modules, list) { 3513 + if (mod->state == MODULE_STATE_UNFORMED) 3514 + continue; 3560 3515 for (i = 0; i < mod->num_symtab; i++) { 3561 3516 ret = fn(data, mod->strtab + mod->symtab[i].st_name, 3562 3517 mod, mod->symtab[i].st_value); ··· 3574 3525 { 3575 3526 int bx = 0; 3576 3527 3528 + BUG_ON(mod->state == MODULE_STATE_UNFORMED); 3577 3529 if (mod->taints || 3578 3530 mod->state == MODULE_STATE_GOING || 3579 3531 mod->state == MODULE_STATE_COMING) { ··· 3615 3565 { 3616 3566 struct module *mod = list_entry(p, struct module, list); 3617 3567 char buf[8]; 3568 + 3569 + /* We always ignore unformed modules. */ 3570 + if (mod->state == MODULE_STATE_UNFORMED) 3571 + return 0; 3618 3572 3619 3573 seq_printf(m, "%s %u", 3620 3574 mod->name, mod->init_size + mod->core_size); ··· 3680 3626 3681 3627 preempt_disable(); 3682 3628 list_for_each_entry_rcu(mod, &modules, list) { 3629 + if (mod->state == MODULE_STATE_UNFORMED) 3630 + continue; 3683 3631 if (mod->num_exentries == 0) 3684 3632 continue; 3685 3633 ··· 3730 3674 if (addr < module_addr_min || addr > module_addr_max) 3731 3675 return NULL; 3732 3676 3733 - list_for_each_entry_rcu(mod, &modules, list) 3677 + list_for_each_entry_rcu(mod, &modules, list) { 3678 + if (mod->state == MODULE_STATE_UNFORMED) 3679 + continue; 3734 3680 if (within_module_core(addr, mod) 3735 3681 || within_module_init(addr, mod)) 3736 3682 return mod; 3683 + } 3737 3684 return NULL; 3738 3685 } 3739 3686 EXPORT_SYMBOL_GPL(__module_address); ··· 3789 3730 printk(KERN_DEFAULT "Modules linked in:"); 3790 3731 /* Most callers should already have preempt disabled, but
make sure */ 3791 3732 preempt_disable(); 3792 - list_for_each_entry_rcu(mod, &modules, list) 3733 + list_for_each_entry_rcu(mod, &modules, list) { 3734 + if (mod->state == MODULE_STATE_UNFORMED) 3735 + continue; 3793 3736 printk(" %s%s", mod->name, module_flags(mod, buf)); 3737 + } 3794 3738 preempt_enable(); 3795 3739 if (last_unloaded_module[0]) 3796 3740 printk(" [last unloaded: %s]", last_unloaded_module);
+59 -15
kernel/ptrace.c
··· 117 117 * TASK_KILLABLE sleeps. 118 118 */ 119 119 if (child->jobctl & JOBCTL_STOP_PENDING || task_is_traced(child)) 120 - signal_wake_up(child, task_is_traced(child)); 120 + ptrace_signal_wake_up(child, true); 121 121 122 122 spin_unlock(&child->sighand->siglock); 123 + } 124 + 125 + /* Ensure that nothing can wake it up, even SIGKILL */ 126 + static bool ptrace_freeze_traced(struct task_struct *task) 127 + { 128 + bool ret = false; 129 + 130 + /* Lockless, nobody but us can set this flag */ 131 + if (task->jobctl & JOBCTL_LISTENING) 132 + return ret; 133 + 134 + spin_lock_irq(&task->sighand->siglock); 135 + if (task_is_traced(task) && !__fatal_signal_pending(task)) { 136 + task->state = __TASK_TRACED; 137 + ret = true; 138 + } 139 + spin_unlock_irq(&task->sighand->siglock); 140 + 141 + return ret; 142 + } 143 + 144 + static void ptrace_unfreeze_traced(struct task_struct *task) 145 + { 146 + if (task->state != __TASK_TRACED) 147 + return; 148 + 149 + WARN_ON(!task->ptrace || task->parent != current); 150 + 151 + spin_lock_irq(&task->sighand->siglock); 152 + if (__fatal_signal_pending(task)) 153 + wake_up_state(task, __TASK_TRACED); 154 + else 155 + task->state = TASK_TRACED; 156 + spin_unlock_irq(&task->sighand->siglock); 123 157 } 124 158 125 159 /** ··· 173 139 * RETURNS: 174 140 * 0 on success, -ESRCH if %child is not ready. 175 141 */ 176 - int ptrace_check_attach(struct task_struct *child, bool ignore_state) 142 + static int ptrace_check_attach(struct task_struct *child, bool ignore_state) 177 143 { 178 144 int ret = -ESRCH; 179 145 ··· 185 151 * be changed by us so it's not changing right after this. 186 152 */ 187 153 read_lock(&tasklist_lock); 188 - if ((child->ptrace & PT_PTRACED) && child->parent == current) { 154 + if (child->ptrace && child->parent == current) { 155 + WARN_ON(child->state == __TASK_TRACED); 189 156 /* 190 157 * child->sighand can't be NULL, release_task() 191 158 * does ptrace_unlink() before __exit_signal().
192 159 */ 193 - spin_lock_irq(&child->sighand->siglock); 194 - WARN_ON_ONCE(task_is_stopped(child)); 195 - if (ignore_state || (task_is_traced(child) && 196 - !(child->jobctl & JOBCTL_LISTENING))) 160 + if (ignore_state || ptrace_freeze_traced(child)) 197 161 ret = 0; 198 - spin_unlock_irq(&child->sighand->siglock); 199 162 } 200 163 read_unlock(&tasklist_lock); 201 164 202 - if (!ret && !ignore_state) 203 - ret = wait_task_inactive(child, TASK_TRACED) ? 0 : -ESRCH; 165 + if (!ret && !ignore_state) { 166 + if (!wait_task_inactive(child, __TASK_TRACED)) { 167 + /* 168 + * This can only happen if may_ptrace_stop() fails and 169 + * ptrace_stop() changes ->state back to TASK_RUNNING, 170 + * so we should not worry about leaking __TASK_TRACED. 171 + */ 172 + WARN_ON(child->state == __TASK_TRACED); 173 + ret = -ESRCH; 174 + } 175 + } 204 176 205 - /* All systems go.. */ 206 177 return ret; 207 178 } 208 179 ··· 356 317 */ 357 318 if (task_is_stopped(task) && 358 319 task_set_jobctl_pending(task, JOBCTL_TRAP_STOP | JOBCTL_TRAPPING)) 359 - signal_wake_up(task, 1); 320 + signal_wake_up_state(task, __TASK_STOPPED); 360 321 361 322 spin_unlock(&task->sighand->siglock); 362 323 ··· 776 737 * tracee into STOP. 777 738 */ 778 739 if (likely(task_set_jobctl_pending(child, JOBCTL_TRAP_STOP))) 779 - signal_wake_up(child, child->jobctl & JOBCTL_LISTENING); 740 + ptrace_signal_wake_up(child, child->jobctl & JOBCTL_LISTENING); 780 741 781 742 unlock_task_sighand(child, &flags); 782 743 ret = 0; ··· 802 763 * start of this trap and now. Trigger re-trap. 
803 764 */ 804 765 if (child->jobctl & JOBCTL_TRAP_NOTIFY) 805 - signal_wake_up(child, true); 766 + ptrace_signal_wake_up(child, true); 806 767 ret = 0; 807 768 } 808 769 unlock_task_sighand(child, &flags); ··· 939 900 goto out_put_task_struct; 940 901 941 902 ret = arch_ptrace(child, request, addr, data); 903 + if (ret || request != PTRACE_DETACH) 904 + ptrace_unfreeze_traced(child); 942 905 943 906 out_put_task_struct: 944 907 put_task_struct(child); ··· 1080 1039 1081 1040 ret = ptrace_check_attach(child, request == PTRACE_KILL || 1082 1041 request == PTRACE_INTERRUPT); 1083 - if (!ret) 1042 + if (!ret) { 1084 1043 ret = compat_arch_ptrace(child, request, addr, data); 1044 + if (ret || request != PTRACE_DETACH) 1045 + ptrace_unfreeze_traced(child); 1046 + } 1085 1047 1086 1048 out_put_task_struct: 1087 1049 put_task_struct(child);
+10 -3
kernel/rcutree_plugin.h
··· 40 40 #ifdef CONFIG_RCU_NOCB_CPU 41 41 static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ 42 42 static bool have_rcu_nocb_mask; /* Was rcu_nocb_mask allocated? */ 43 - static bool rcu_nocb_poll; /* Offload kthread are to poll. */ 44 - module_param(rcu_nocb_poll, bool, 0444); 43 + static bool __read_mostly rcu_nocb_poll; /* Offload kthread are to poll. */ 45 44 static char __initdata nocb_buf[NR_CPUS * 5]; 46 45 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */ 47 46 ··· 2158 2159 } 2159 2160 __setup("rcu_nocbs=", rcu_nocb_setup); 2160 2161 2162 + static int __init parse_rcu_nocb_poll(char *arg) 2163 + { 2164 + rcu_nocb_poll = 1; 2165 + return 0; 2166 + } 2167 + early_param("rcu_nocb_poll", parse_rcu_nocb_poll); 2168 + 2161 2169 /* Is the specified CPU a no-CPUs CPU? */ 2162 2170 static bool is_nocb_cpu(int cpu) 2163 2171 { ··· 2372 2366 for (;;) { 2373 2367 /* If not polling, wait for next batch of callbacks. */ 2374 2368 if (!rcu_nocb_poll) 2375 - wait_event(rdp->nocb_wq, rdp->nocb_head); 2369 + wait_event_interruptible(rdp->nocb_wq, rdp->nocb_head); 2376 2370 list = ACCESS_ONCE(rdp->nocb_head); 2377 2371 if (!list) { 2378 2372 schedule_timeout_interruptible(1); 2373 + flush_signals(current); 2379 2374 continue; 2380 2375 } 2381 2376
+2 -1
kernel/sched/core.c
··· 1523 1523 */ 1524 1524 int wake_up_process(struct task_struct *p) 1525 1525 { 1526 - return try_to_wake_up(p, TASK_ALL, 0); 1526 + WARN_ON(task_is_stopped_or_traced(p)); 1527 + return try_to_wake_up(p, TASK_NORMAL, 0); 1527 1528 } 1528 1529 EXPORT_SYMBOL(wake_up_process); 1529 1530
+2 -2
kernel/sched/debug.c
··· 222 222 cfs_rq->runnable_load_avg); 223 223 SEQ_printf(m, " .%-30s: %lld\n", "blocked_load_avg", 224 224 cfs_rq->blocked_load_avg); 225 - SEQ_printf(m, " .%-30s: %ld\n", "tg_load_avg", 226 - atomic64_read(&cfs_rq->tg->load_avg)); 225 + SEQ_printf(m, " .%-30s: %lld\n", "tg_load_avg", 226 + (unsigned long long)atomic64_read(&cfs_rq->tg->load_avg)); 227 227 SEQ_printf(m, " .%-30s: %lld\n", "tg_load_contrib", 228 228 cfs_rq->tg_load_contrib); 229 229 SEQ_printf(m, " .%-30s: %d\n", "tg_runnable_contrib",
+1 -1
kernel/sched/fair.c
··· 2663 2663 hrtimer_cancel(&cfs_b->slack_timer); 2664 2664 } 2665 2665 2666 - static void unthrottle_offline_cfs_rqs(struct rq *rq) 2666 + static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq) 2667 2667 { 2668 2668 struct cfs_rq *cfs_rq; 2669 2669
+1 -1
kernel/sched/rt.c
··· 566 566 static int do_balance_runtime(struct rt_rq *rt_rq) 567 567 { 568 568 struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq); 569 - struct root_domain *rd = cpu_rq(smp_processor_id())->rd; 569 + struct root_domain *rd = rq_of_rt_rq(rt_rq)->rd; 570 570 int i, weight, more = 0; 571 571 u64 rt_period; 572 572
+12 -12
kernel/signal.c
··· 680 680 * No need to set need_resched since signal event passing 681 681 * goes through ->blocked 682 682 */ 683 - void signal_wake_up(struct task_struct *t, int resume) 683 + void signal_wake_up_state(struct task_struct *t, unsigned int state) 684 684 { 685 - unsigned int mask; 686 - 687 685 set_tsk_thread_flag(t, TIF_SIGPENDING); 688 - 689 686 /* 690 - * For SIGKILL, we want to wake it up in the stopped/traced/killable 687 + * TASK_WAKEKILL also means wake it up in the stopped/traced/killable 691 688 * case. We don't check t->state here because there is a race with it 692 689 * executing another processor and just now entering stopped state. 693 690 * By using wake_up_state, we ensure the process will wake up and 694 691 * handle its death signal. 695 692 */ 696 - mask = TASK_INTERRUPTIBLE; 697 - if (resume) 698 - mask |= TASK_WAKEKILL; 699 - if (!wake_up_state(t, mask)) 693 + if (!wake_up_state(t, state | TASK_INTERRUPTIBLE)) 700 694 kick_process(t); 701 695 } 702 696 ··· 838 844 assert_spin_locked(&t->sighand->siglock); 839 845 840 846 task_set_jobctl_pending(t, JOBCTL_TRAP_NOTIFY); 841 - signal_wake_up(t, t->jobctl & JOBCTL_LISTENING); 847 + ptrace_signal_wake_up(t, t->jobctl & JOBCTL_LISTENING); 842 848 } 843 849 844 850 /* ··· 1794 1800 * If SIGKILL was already sent before the caller unlocked 1795 1801 * ->siglock we must see ->core_state != NULL. Otherwise it 1796 1802 * is safe to enter schedule(). 1803 + * 1804 + * This is almost outdated, a task with the pending SIGKILL can't 1805 + * block in TASK_TRACED. But PTRACE_EVENT_EXIT can be reported 1806 + * after SIGKILL was already dequeued.
1797 1807 */ 1798 1808 if (unlikely(current->mm->core_state) && 1799 1809 unlikely(current->mm == current->parent->mm)) ··· 1923 1925 if (gstop_done) 1924 1926 do_notify_parent_cldstop(current, false, why); 1925 1927 1928 + /* tasklist protects us from ptrace_freeze_traced() */ 1926 1929 __set_current_state(TASK_RUNNING); 1927 1930 if (clear_code) 1928 1931 current->exit_code = 0; ··· 3115 3116 3116 3117 #ifdef CONFIG_COMPAT 3117 3118 #ifdef CONFIG_GENERIC_SIGALTSTACK 3118 - asmlinkage long compat_sys_sigaltstack(const compat_stack_t __user *uss_ptr, 3119 - compat_stack_t __user *uoss_ptr) 3119 + COMPAT_SYSCALL_DEFINE2(sigaltstack, 3120 + const compat_stack_t __user *, uss_ptr, 3121 + compat_stack_t __user *, uoss_ptr) 3120 3122 { 3121 3123 stack_t uss, uoss; 3122 3124 int ret;
+12 -1
kernel/smp.c
··· 33 33 struct call_single_data csd; 34 34 atomic_t refs; 35 35 cpumask_var_t cpumask; 36 + cpumask_var_t cpumask_ipi; 36 37 }; 37 38 38 39 static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data); ··· 57 56 if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL, 58 57 cpu_to_node(cpu))) 59 58 return notifier_from_errno(-ENOMEM); 59 + if (!zalloc_cpumask_var_node(&cfd->cpumask_ipi, GFP_KERNEL, 60 + cpu_to_node(cpu))) 61 + return notifier_from_errno(-ENOMEM); 60 62 break; 61 63 62 64 #ifdef CONFIG_HOTPLUG_CPU ··· 69 65 case CPU_DEAD: 70 66 case CPU_DEAD_FROZEN: 71 67 free_cpumask_var(cfd->cpumask); 68 + free_cpumask_var(cfd->cpumask_ipi); 72 69 break; 73 70 #endif 74 71 }; ··· 531 526 return; 532 527 } 533 528 529 + /* 530 + * After we put an entry into the list, data->cpumask 531 + * may be cleared again when another CPU sends another IPI for 532 + * a SMP function call, so data->cpumask will be zero. 533 + */ 534 + cpumask_copy(data->cpumask_ipi, data->cpumask); 534 535 raw_spin_lock_irqsave(&call_function.lock, flags); 535 536 /* 536 537 * Place entry at the _HEAD_ of the list, so that any cpu still ··· 560 549 smp_mb(); 561 550 562 551 /* Send a message to all CPUs in the map */ 563 - arch_send_call_function_ipi_mask(data->cpumask); 552 + arch_send_call_function_ipi_mask(data->cpumask_ipi); 564 553 565 554 /* Optionally wait for the CPUs to complete */ 566 555 if (wait)
+1 -1
kernel/trace/ftrace.c
··· 3998 3998 3999 3999 struct notifier_block ftrace_module_nb = { 4000 4000 .notifier_call = ftrace_module_notify, 4001 - .priority = 0, 4001 + .priority = INT_MAX, /* Run before anything that can use kprobes */ 4002 4002 }; 4003 4003 4004 4004 extern unsigned long __start_mcount_loc[];
+1
lib/bug.c
··· 55 55 } 56 56 57 57 #ifdef CONFIG_MODULES 58 + /* Updates are protected by module mutex */ 58 59 static LIST_HEAD(module_bug_list); 59 60 60 61 static const struct bug_entry *module_find_bug(unsigned long bugaddr)
+2
lib/digsig.c
··· 162 162 memset(out1, 0, head); 163 163 memcpy(out1 + head, p, l); 164 164 165 + kfree(p); 166 + 165 167 err = pkcs_1_v1_5_decode_emsa(out1, len, mblen, out2, &len); 166 168 if (err) 167 169 goto err;
+18 -1
net/batman-adv/distributed-arp-table.c
··· 738 738 struct arphdr *arphdr; 739 739 struct ethhdr *ethhdr; 740 740 __be32 ip_src, ip_dst; 741 + uint8_t *hw_src, *hw_dst; 741 742 uint16_t type = 0; 742 743 743 744 /* pull the ethernet header */ ··· 778 777 ip_src = batadv_arp_ip_src(skb, hdr_size); 779 778 ip_dst = batadv_arp_ip_dst(skb, hdr_size); 780 779 if (ipv4_is_loopback(ip_src) || ipv4_is_multicast(ip_src) || 781 - ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst)) 780 + ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst) || 781 + ipv4_is_zeronet(ip_src) || ipv4_is_lbcast(ip_src) || 782 + ipv4_is_zeronet(ip_dst) || ipv4_is_lbcast(ip_dst)) 782 783 goto out; 784 + 785 + hw_src = batadv_arp_hw_src(skb, hdr_size); 786 + if (is_zero_ether_addr(hw_src) || is_multicast_ether_addr(hw_src)) 787 + goto out; 788 + 789 + /* we don't care about the destination MAC address in ARP requests */ 790 + if (arphdr->ar_op != htons(ARPOP_REQUEST)) { 791 + hw_dst = batadv_arp_hw_dst(skb, hdr_size); 792 + if (is_zero_ether_addr(hw_dst) || 793 + is_multicast_ether_addr(hw_dst)) 794 + goto out; 795 + } 783 796 784 797 type = ntohs(arphdr->ar_op); 785 798 out: ··· 1027 1012 */ 1028 1013 ret = !batadv_is_my_client(bat_priv, hw_dst); 1029 1014 out: 1015 + if (ret) 1016 + kfree_skb(skb); 1030 1017 /* if ret == false -> packet has to be delivered to the interface */ 1031 1018 return ret; 1032 1019 }
-8
net/bluetooth/hci_core.c
··· 2810 2810 if (conn) { 2811 2811 hci_conn_enter_active_mode(conn, BT_POWER_FORCE_ACTIVE_OFF); 2812 2812 2813 - hci_dev_lock(hdev); 2814 - if (test_bit(HCI_MGMT, &hdev->dev_flags) && 2815 - !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) 2816 - mgmt_device_connected(hdev, &conn->dst, conn->type, 2817 - conn->dst_type, 0, NULL, 0, 2818 - conn->dev_class); 2819 - hci_dev_unlock(hdev); 2820 - 2821 2813 /* Send to upper protocol */ 2822 2814 l2cap_recv_acldata(conn, skb, flags); 2823 2815 return;
+1 -1
net/bluetooth/hci_event.c
··· 2688 2688 if (ev->opcode != HCI_OP_NOP) 2689 2689 del_timer(&hdev->cmd_timer); 2690 2690 2691 - if (ev->ncmd) { 2691 + if (ev->ncmd && !test_bit(HCI_RESET, &hdev->flags)) { 2692 2692 atomic_set(&hdev->cmd_cnt, 1); 2693 2693 if (!skb_queue_empty(&hdev->cmd_q)) 2694 2694 queue_work(hdev->workqueue, &hdev->cmd_work);
+1 -1
net/bluetooth/hidp/core.c
··· 931 931 hid->version = req->version; 932 932 hid->country = req->country; 933 933 934 - strncpy(hid->name, req->name, 128); 934 + strncpy(hid->name, req->name, sizeof(req->name) - 1); 935 935 936 936 snprintf(hid->phys, sizeof(hid->phys), "%pMR", 937 937 &bt_sk(session->ctrl_sock->sk)->src);
+11
net/bluetooth/l2cap_core.c
··· 3727 3727 static int l2cap_connect_req(struct l2cap_conn *conn, 3728 3728 struct l2cap_cmd_hdr *cmd, u8 *data) 3729 3729 { 3730 + struct hci_dev *hdev = conn->hcon->hdev; 3731 + struct hci_conn *hcon = conn->hcon; 3732 + 3733 + hci_dev_lock(hdev); 3734 + if (test_bit(HCI_MGMT, &hdev->dev_flags) && 3735 + !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &hcon->flags)) 3736 + mgmt_device_connected(hdev, &hcon->dst, hcon->type, 3737 + hcon->dst_type, 0, NULL, 0, 3738 + hcon->dev_class); 3739 + hci_dev_unlock(hdev); 3740 + 3730 3741 l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP, 0); 3731 3742 return 0; 3732 3743 }
+1 -1
net/bluetooth/sco.c
··· 352 352 353 353 case BT_CONNECTED: 354 354 case BT_CONFIG: 355 - if (sco_pi(sk)->conn) { 355 + if (sco_pi(sk)->conn->hcon) { 356 356 sk->sk_state = BT_DISCONN; 357 357 sco_sock_set_timer(sk, SCO_DISCONN_TIMEOUT); 358 358 hci_conn_put(sco_pi(sk)->conn->hcon);
-2
net/core/request_sock.c
··· 186 186 struct fastopen_queue *fastopenq = 187 187 inet_csk(lsk)->icsk_accept_queue.fastopenq; 188 188 189 - BUG_ON(!spin_is_locked(&sk->sk_lock.slock) && !sock_owned_by_user(sk)); 190 - 191 189 tcp_sk(sk)->fastopen_rsk = NULL; 192 190 spin_lock_bh(&fastopenq->lock); 193 191 fastopenq->qlen--;
+4 -1
net/core/scm.c
··· 35 35 #include <net/sock.h> 36 36 #include <net/compat.h> 37 37 #include <net/scm.h> 38 + #include <net/cls_cgroup.h> 38 39 39 40 40 41 /* ··· 303 302 } 304 303 /* Bump the usage count and install the file. */ 305 304 sock = sock_from_file(fp[i], &err); 306 - if (sock) 305 + if (sock) { 307 306 sock_update_netprioidx(sock->sk, current); 307 + sock_update_classid(sock->sk, current); 308 + } 308 309 fd_install(new_fd, get_file(fp[i])); 309 310 } 310 311
+13 -31
net/core/skbuff.c
··· 1649 1649
1650 1650 static struct page *linear_to_page(struct page *page, unsigned int *len,
1651 1651 unsigned int *offset,
1652 - struct sk_buff *skb, struct sock *sk)
1652 + struct sock *sk)
1653 1653 {
1654 1654 struct page_frag *pfrag = sk_page_frag(sk);
1655 1655
··· 1682 1682 static bool spd_fill_page(struct splice_pipe_desc *spd,
1683 1683 struct pipe_inode_info *pipe, struct page *page,
1684 1684 unsigned int *len, unsigned int offset,
1685 - struct sk_buff *skb, bool linear,
1685 + bool linear,
1686 1686 struct sock *sk)
1687 1687 {
1688 1688 if (unlikely(spd->nr_pages == MAX_SKB_FRAGS))
1689 1689 return true;
1690 1690
1691 1691 if (linear) {
1692 - page = linear_to_page(page, len, &offset, skb, sk);
1692 + page = linear_to_page(page, len, &offset, sk);
1693 1693 if (!page)
1694 1694 return true;
1695 1695 }
··· 1706 1706 return false;
1707 1707 }
1708 1708
1709 - static inline void __segment_seek(struct page **page, unsigned int *poff,
1710 - unsigned int *plen, unsigned int off)
1711 - {
1712 - unsigned long n;
1713 -
1714 - *poff += off;
1715 - n = *poff / PAGE_SIZE;
1716 - if (n)
1717 - *page = nth_page(*page, n);
1718 -
1719 - *poff = *poff % PAGE_SIZE;
1720 - *plen -= off;
1721 - }
1722 -
1723 1709 static bool __splice_segment(struct page *page, unsigned int poff,
1724 1710 unsigned int plen, unsigned int *off,
1725 - unsigned int *len, struct sk_buff *skb,
1711 + unsigned int *len,
1726 1712 struct splice_pipe_desc *spd, bool linear,
1727 1713 struct sock *sk,
1728 1714 struct pipe_inode_info *pipe)
··· 1723 1737 }
1724 1738
1725 1739 /* ignore any bits we already processed */
1726 - if (*off) {
1727 - __segment_seek(&page, &poff, &plen, *off);
1728 - *off = 0;
1729 - }
1740 + poff += *off;
1741 + plen -= *off;
1742 + *off = 0;
1730 1743
1731 1744 do {
1732 1745 unsigned int flen = min(*len, plen);
1733 1746
1734 - /* the linear region may spread across several pages */
1735 - flen = min_t(unsigned int, flen, PAGE_SIZE - poff);
1736 -
1737 - if (spd_fill_page(spd, pipe, page, &flen, poff, skb, linear, sk))
1747 + if (spd_fill_page(spd, pipe, page, &flen, poff,
1748 + linear, sk))
1738 1749 return true;
1739 -
1740 - __segment_seek(&page, &poff, &plen, flen);
1750 + poff += flen;
1751 + plen -= flen;
1741 1752 *len -= flen;
1742 -
1743 1753 } while (*len && plen);
1744 1754
1745 1755 return false;
··· 1759 1777 if (__splice_segment(virt_to_page(skb->data),
1760 1778 (unsigned long) skb->data & (PAGE_SIZE - 1),
1761 1779 skb_headlen(skb),
1762 - offset, len, skb, spd,
1780 + offset, len, spd,
1763 1781 skb_head_is_locked(skb),
1764 1782 sk, pipe))
1765 1783 return true;
··· 1772 1790
1773 1791 if (__splice_segment(skb_frag_page(f),
1774 1792 f->page_offset, skb_frag_size(f),
1775 - offset, len, skb, spd, false, sk, pipe))
1793 + offset, len, spd, false, sk, pipe))
1776 1794 return true;
1777 1795 }
+14 -4
net/ipv4/ah4.c
··· 269 269 skb->network_header += ah_hlen; 270 270 memcpy(skb_network_header(skb), work_iph, ihl); 271 271 __skb_pull(skb, ah_hlen + ihl); 272 - skb_set_transport_header(skb, -ihl); 272 + 273 + if (x->props.mode == XFRM_MODE_TUNNEL) 274 + skb_reset_transport_header(skb); 275 + else 276 + skb_set_transport_header(skb, -ihl); 273 277 out: 274 278 kfree(AH_SKB_CB(skb)->tmp); 275 279 xfrm_input_resume(skb, err); ··· 385 381 skb->network_header += ah_hlen; 386 382 memcpy(skb_network_header(skb), work_iph, ihl); 387 383 __skb_pull(skb, ah_hlen + ihl); 388 - skb_set_transport_header(skb, -ihl); 384 + if (x->props.mode == XFRM_MODE_TUNNEL) 385 + skb_reset_transport_header(skb); 386 + else 387 + skb_set_transport_header(skb, -ihl); 389 388 390 389 err = nexthdr; 391 390 ··· 420 413 if (!x) 421 414 return; 422 415 423 - if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) 416 + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) { 417 + atomic_inc(&flow_cache_genid); 418 + rt_genid_bump(net); 419 + 424 420 ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_AH, 0); 425 - else 421 + } else 426 422 ipv4_redirect(skb, net, 0, 0, IPPROTO_AH, 0); 427 423 xfrm_state_put(x); 428 424 }
+25
net/ipv4/datagram.c
··· 85 85 return err; 86 86 } 87 87 EXPORT_SYMBOL(ip4_datagram_connect); 88 + 89 + void ip4_datagram_release_cb(struct sock *sk) 90 + { 91 + const struct inet_sock *inet = inet_sk(sk); 92 + const struct ip_options_rcu *inet_opt; 93 + __be32 daddr = inet->inet_daddr; 94 + struct flowi4 fl4; 95 + struct rtable *rt; 96 + 97 + if (! __sk_dst_get(sk) || __sk_dst_check(sk, 0)) 98 + return; 99 + 100 + rcu_read_lock(); 101 + inet_opt = rcu_dereference(inet->inet_opt); 102 + if (inet_opt && inet_opt->opt.srr) 103 + daddr = inet_opt->opt.faddr; 104 + rt = ip_route_output_ports(sock_net(sk), &fl4, sk, daddr, 105 + inet->inet_saddr, inet->inet_dport, 106 + inet->inet_sport, sk->sk_protocol, 107 + RT_CONN_FLAGS(sk), sk->sk_bound_dev_if); 108 + if (!IS_ERR(rt)) 109 + __sk_dst_set(sk, &rt->dst); 110 + rcu_read_unlock(); 111 + } 112 + EXPORT_SYMBOL_GPL(ip4_datagram_release_cb);
+9 -3
net/ipv4/esp4.c
··· 346 346 347 347 pskb_trim(skb, skb->len - alen - padlen - 2); 348 348 __skb_pull(skb, hlen); 349 - skb_set_transport_header(skb, -ihl); 349 + if (x->props.mode == XFRM_MODE_TUNNEL) 350 + skb_reset_transport_header(skb); 351 + else 352 + skb_set_transport_header(skb, -ihl); 350 353 351 354 err = nexthdr[1]; 352 355 ··· 502 499 if (!x) 503 500 return; 504 501 505 - if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) 502 + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) { 503 + atomic_inc(&flow_cache_genid); 504 + rt_genid_bump(net); 505 + 506 506 ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_ESP, 0); 507 - else 507 + } else 508 508 ipv4_redirect(skb, net, 0, 0, IPPROTO_ESP, 0); 509 509 xfrm_state_put(x); 510 510 }
+5 -1
net/ipv4/ip_gre.c
··· 963 963 ptr--; 964 964 } 965 965 if (tunnel->parms.o_flags&GRE_CSUM) { 966 + int offset = skb_transport_offset(skb); 967 + 966 968 *ptr = 0; 967 - *(__sum16 *)ptr = ip_compute_csum((void *)(iph+1), skb->len - sizeof(struct iphdr)); 969 + *(__sum16 *)ptr = csum_fold(skb_checksum(skb, offset, 970 + skb->len - offset, 971 + 0)); 968 972 } 969 973 } 970 974
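The GRE change above computes the checksum with `skb_checksum()` over the whole skb, which walks paged fragments, and folds the result with `csum_fold()`, rather than running `ip_compute_csum()` over the linear area only. The fold step is just a ones'-complement reduction of a 32-bit partial sum; a standalone sketch of that step (not the kernel's implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit partial ones'-complement sum down to 16 bits and
 * invert it, as csum_fold() does conceptually. */
static uint16_t fold_csum(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);  /* may produce a carry */
	sum = (sum & 0xffff) + (sum >> 16);  /* fold the carry back in */
	return (uint16_t)~sum;
}
```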
+5 -2
net/ipv4/ipcomp.c
··· 47 47 if (!x) 48 48 return; 49 49 50 - if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) 50 + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) { 51 + atomic_inc(&flow_cache_genid); 52 + rt_genid_bump(net); 53 + 51 54 ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_COMP, 0); 52 - else 55 + } else 53 56 ipv4_redirect(skb, net, 0, 0, IPPROTO_COMP, 0); 54 57 xfrm_state_put(x); 55 58 }
+1
net/ipv4/ping.c
··· 738 738 .recvmsg = ping_recvmsg, 739 739 .bind = ping_bind, 740 740 .backlog_rcv = ping_queue_rcv_skb, 741 + .release_cb = ip4_datagram_release_cb, 741 742 .hash = ping_v4_hash, 742 743 .unhash = ping_v4_unhash, 743 744 .get_port = ping_v4_get_port,
+1
net/ipv4/raw.c
··· 894 894 .recvmsg = raw_recvmsg, 895 895 .bind = raw_bind, 896 896 .backlog_rcv = raw_rcv_skb, 897 + .release_cb = ip4_datagram_release_cb, 897 898 .hash = raw_hash_sk, 898 899 .unhash = raw_unhash_sk, 899 900 .obj_size = sizeof(struct raw_sock),
+52 -2
net/ipv4/route.c
··· 912 912 struct dst_entry *dst = &rt->dst;
913 913 struct fib_result res;
914 914
915 + if (dst_metric_locked(dst, RTAX_MTU))
916 + return;
917 +
915 918 if (dst->dev->mtu < mtu)
916 919 return;
917 920
··· 965 962 }
966 963 EXPORT_SYMBOL_GPL(ipv4_update_pmtu);
967 964
968 - void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
965 + static void __ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
969 966 {
970 967 const struct iphdr *iph = (const struct iphdr *) skb->data;
971 968 struct flowi4 fl4;
··· 977 974 __ip_rt_update_pmtu(rt, &fl4, mtu);
978 975 ip_rt_put(rt);
979 976 }
977 + }
978 +
979 + void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
980 + {
981 + const struct iphdr *iph = (const struct iphdr *) skb->data;
982 + struct flowi4 fl4;
983 + struct rtable *rt;
984 + struct dst_entry *dst;
985 + bool new = false;
986 +
987 + bh_lock_sock(sk);
988 + rt = (struct rtable *) __sk_dst_get(sk);
989 +
990 + if (sock_owned_by_user(sk) || !rt) {
991 + __ipv4_sk_update_pmtu(skb, sk, mtu);
992 + goto out;
993 + }
994 +
995 + __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0);
996 +
997 + if (!__sk_dst_check(sk, 0)) {
998 + rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
999 + if (IS_ERR(rt))
1000 + goto out;
1001 +
1002 + new = true;
1003 + }
1004 +
1005 + __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu);
1006 +
1007 + dst = dst_check(&rt->dst, 0);
1008 + if (!dst) {
1009 + if (new)
1010 + dst_release(&rt->dst);
1011 +
1012 + rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
1013 + if (IS_ERR(rt))
1014 + goto out;
1015 +
1016 + new = true;
1017 + }
1018 +
1019 + if (new)
1020 + __sk_dst_set(sk, &rt->dst);
1021 +
1022 + out:
1023 + bh_unlock_sock(sk);
980 1024 }
981 1025 EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu);
982 1026
··· 1170 1120 if (!mtu || time_after_eq(jiffies, rt->dst.expires))
1171 1121 mtu = dst_metric_raw(dst, RTAX_MTU);
1172 1122
1173 - if (mtu && rt_is_output_route(rt))
1123 + if (mtu)
1174 1124 return mtu;
1175 1125
1176 1126 mtu = dst->dev->mtu;
+4 -5
net/ipv4/tcp_ipv4.c
··· 369 369 * We do take care of PMTU discovery (RFC1191) special case : 370 370 * we can receive locally generated ICMP messages while socket is held. 371 371 */ 372 - if (sock_owned_by_user(sk) && 373 - type != ICMP_DEST_UNREACH && 374 - code != ICMP_FRAG_NEEDED) 375 - NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS); 376 - 372 + if (sock_owned_by_user(sk)) { 373 + if (!(type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED)) 374 + NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS); 375 + } 377 376 if (sk->sk_state == TCP_CLOSE) 378 377 goto out; 379 378
+1
net/ipv4/udp.c
··· 1952 1952 .recvmsg = udp_recvmsg, 1953 1953 .sendpage = udp_sendpage, 1954 1954 .backlog_rcv = __udp_queue_rcv_skb, 1955 + .release_cb = ip4_datagram_release_cb, 1955 1956 .hash = udp_lib_hash, 1956 1957 .unhash = udp_lib_unhash, 1957 1958 .rehash = udp_v4_rehash,
+9 -2
net/ipv6/ah6.c
··· 472 472 skb->network_header += ah_hlen; 473 473 memcpy(skb_network_header(skb), work_iph, hdr_len); 474 474 __skb_pull(skb, ah_hlen + hdr_len); 475 - skb_set_transport_header(skb, -hdr_len); 475 + if (x->props.mode == XFRM_MODE_TUNNEL) 476 + skb_reset_transport_header(skb); 477 + else 478 + skb_set_transport_header(skb, -hdr_len); 476 479 out: 477 480 kfree(AH_SKB_CB(skb)->tmp); 478 481 xfrm_input_resume(skb, err); ··· 596 593 597 594 skb->network_header += ah_hlen; 598 595 memcpy(skb_network_header(skb), work_iph, hdr_len); 599 - skb->transport_header = skb->network_header; 600 596 __skb_pull(skb, ah_hlen + hdr_len); 597 + 598 + if (x->props.mode == XFRM_MODE_TUNNEL) 599 + skb_reset_transport_header(skb); 600 + else 601 + skb_set_transport_header(skb, -hdr_len); 601 602 602 603 err = nexthdr; 603 604
+4 -1
net/ipv6/esp6.c
··· 300 300 301 301 pskb_trim(skb, skb->len - alen - padlen - 2); 302 302 __skb_pull(skb, hlen); 303 - skb_set_transport_header(skb, -hdr_len); 303 + if (x->props.mode == XFRM_MODE_TUNNEL) 304 + skb_reset_transport_header(skb); 305 + else 306 + skb_set_transport_header(skb, -hdr_len); 304 307 305 308 err = nexthdr[1]; 306 309
+12
net/ipv6/icmp.c
··· 81 81 return net->ipv6.icmp_sk[smp_processor_id()]; 82 82 } 83 83 84 + static void icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, 85 + u8 type, u8 code, int offset, __be32 info) 86 + { 87 + struct net *net = dev_net(skb->dev); 88 + 89 + if (type == ICMPV6_PKT_TOOBIG) 90 + ip6_update_pmtu(skb, net, info, 0, 0); 91 + else if (type == NDISC_REDIRECT) 92 + ip6_redirect(skb, net, 0, 0); 93 + } 94 + 84 95 static int icmpv6_rcv(struct sk_buff *skb); 85 96 86 97 static const struct inet6_protocol icmpv6_protocol = { 87 98 .handler = icmpv6_rcv, 99 + .err_handler = icmpv6_err, 88 100 .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL, 89 101 }; 90 102
+2 -2
net/ipv6/ip6_output.c
··· 1213 1213 if (dst_allfrag(rt->dst.path)) 1214 1214 cork->flags |= IPCORK_ALLFRAG; 1215 1215 cork->length = 0; 1216 - exthdrlen = (opt ? opt->opt_flen : 0) - rt->rt6i_nfheader_len; 1216 + exthdrlen = (opt ? opt->opt_flen : 0); 1217 1217 length += exthdrlen; 1218 1218 transhdrlen += exthdrlen; 1219 - dst_exthdrlen = rt->dst.header_len; 1219 + dst_exthdrlen = rt->dst.header_len - rt->rt6i_nfheader_len; 1220 1220 } else { 1221 1221 rt = (struct rt6_info *)cork->dst; 1222 1222 fl6 = &inet->cork.fl.u.ip6;
+3
net/ipv6/ip6mr.c
··· 1710 1710 return -EINVAL; 1711 1711 if (get_user(v, (u32 __user *)optval)) 1712 1712 return -EFAULT; 1713 + /* "pim6reg%u" should not exceed 16 bytes (IFNAMSIZ) */ 1714 + if (v != RT_TABLE_DEFAULT && v >= 100000000) 1715 + return -EINVAL; 1713 1716 if (sk == mrt->mroute6_sk) 1714 1717 return -EBUSY; 1715 1718
+11 -1
net/mac80211/cfg.c
··· 164 164 sta = sta_info_get(sdata, mac_addr); 165 165 else 166 166 sta = sta_info_get_bss(sdata, mac_addr); 167 - if (!sta) { 167 + /* 168 + * The ASSOC test makes sure the driver is ready to 169 + * receive the key. When wpa_supplicant has roamed 170 + * using FT, it attempts to set the key before 171 + * association has completed, this rejects that attempt 172 + * so it will set the key again after assocation. 173 + * 174 + * TODO: accept the key if we have a station entry and 175 + * add it to the device after the station. 176 + */ 177 + if (!sta || !test_sta_flag(sta, WLAN_STA_ASSOC)) { 168 178 ieee80211_key_free(sdata->local, key); 169 179 err = -ENOENT; 170 180 goto out_unlock;
+2 -4
net/mac80211/ieee80211_i.h
··· 1358 1358 void ieee80211_sched_scan_stopped_work(struct work_struct *work); 1359 1359 1360 1360 /* off-channel helpers */ 1361 - void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local, 1362 - bool offchannel_ps_enable); 1363 - void ieee80211_offchannel_return(struct ieee80211_local *local, 1364 - bool offchannel_ps_disable); 1361 + void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local); 1362 + void ieee80211_offchannel_return(struct ieee80211_local *local); 1365 1363 void ieee80211_roc_setup(struct ieee80211_local *local); 1366 1364 void ieee80211_start_next_roc(struct ieee80211_local *local); 1367 1365 void ieee80211_roc_purge(struct ieee80211_sub_if_data *sdata);
+4 -1
net/mac80211/mesh_hwmp.c
··· 215 215 skb->priority = 7; 216 216 217 217 info->control.vif = &sdata->vif; 218 + info->flags |= IEEE80211_TX_INTFL_NEED_TXPROCESSING; 218 219 ieee80211_set_qos_hdr(sdata, skb); 219 220 } 220 221 ··· 247 246 return -EAGAIN; 248 247 249 248 skb = dev_alloc_skb(local->tx_headroom + 249 + IEEE80211_ENCRYPT_HEADROOM + 250 + IEEE80211_ENCRYPT_TAILROOM + 250 251 hdr_len + 251 252 2 + 15 /* PERR IE */); 252 253 if (!skb) 253 254 return -1; 254 - skb_reserve(skb, local->tx_headroom); 255 + skb_reserve(skb, local->tx_headroom + IEEE80211_ENCRYPT_HEADROOM); 255 256 mgmt = (struct ieee80211_mgmt *) skb_put(skb, hdr_len); 256 257 memset(mgmt, 0, hdr_len); 257 258 mgmt->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
+7 -12
net/mac80211/offchannel.c
··· 102 102 ieee80211_sta_reset_conn_monitor(sdata); 103 103 } 104 104 105 - void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local, 106 - bool offchannel_ps_enable) 105 + void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local) 107 106 { 108 107 struct ieee80211_sub_if_data *sdata; 109 108 ··· 133 134 134 135 if (sdata->vif.type != NL80211_IFTYPE_MONITOR) { 135 136 netif_tx_stop_all_queues(sdata->dev); 136 - if (offchannel_ps_enable && 137 - (sdata->vif.type == NL80211_IFTYPE_STATION) && 137 + if (sdata->vif.type == NL80211_IFTYPE_STATION && 138 138 sdata->u.mgd.associated) 139 139 ieee80211_offchannel_ps_enable(sdata); 140 140 } ··· 141 143 mutex_unlock(&local->iflist_mtx); 142 144 } 143 145 144 - void ieee80211_offchannel_return(struct ieee80211_local *local, 145 - bool offchannel_ps_disable) 146 + void ieee80211_offchannel_return(struct ieee80211_local *local) 146 147 { 147 148 struct ieee80211_sub_if_data *sdata; 148 149 ··· 160 163 continue; 161 164 162 165 /* Tell AP we're back */ 163 - if (offchannel_ps_disable && 164 - sdata->vif.type == NL80211_IFTYPE_STATION) { 165 - if (sdata->u.mgd.associated) 166 - ieee80211_offchannel_ps_disable(sdata); 167 - } 166 + if (sdata->vif.type == NL80211_IFTYPE_STATION && 167 + sdata->u.mgd.associated) 168 + ieee80211_offchannel_ps_disable(sdata); 168 169 169 170 if (sdata->vif.type != NL80211_IFTYPE_MONITOR) { 170 171 /* ··· 380 385 local->tmp_channel = NULL; 381 386 ieee80211_hw_config(local, 0); 382 387 383 - ieee80211_offchannel_return(local, true); 388 + ieee80211_offchannel_return(local); 384 389 } 385 390 386 391 ieee80211_recalc_idle(local);
+5 -10
net/mac80211/scan.c
··· 292 292 if (!was_hw_scan) { 293 293 ieee80211_configure_filter(local); 294 294 drv_sw_scan_complete(local); 295 - ieee80211_offchannel_return(local, true); 295 + ieee80211_offchannel_return(local); 296 296 } 297 297 298 298 ieee80211_recalc_idle(local); ··· 341 341 local->next_scan_state = SCAN_DECISION; 342 342 local->scan_channel_idx = 0; 343 343 344 - ieee80211_offchannel_stop_vifs(local, true); 344 + ieee80211_offchannel_stop_vifs(local); 345 345 346 346 ieee80211_configure_filter(local); 347 347 ··· 678 678 local->scan_channel = NULL; 679 679 ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL); 680 680 681 - /* 682 - * Re-enable vifs and beaconing. Leave PS 683 - * in off-channel state..will put that back 684 - * on-channel at the end of scanning. 685 - */ 686 - ieee80211_offchannel_return(local, false); 681 + /* disable PS */ 682 + ieee80211_offchannel_return(local); 687 683 688 684 *next_delay = HZ / 5; 689 685 /* afterwards, resume scan & go to next channel */ ··· 689 693 static void ieee80211_scan_state_resume(struct ieee80211_local *local, 690 694 unsigned long *next_delay) 691 695 { 692 - /* PS already is in off-channel mode */ 693 - ieee80211_offchannel_stop_vifs(local, false); 696 + ieee80211_offchannel_stop_vifs(local); 694 697 695 698 if (local->ops->flush) { 696 699 drv_flush(local, false);
+6 -3
net/mac80211/tx.c
··· 1673 1673 chanctx_conf = 1674 1674 rcu_dereference(tmp_sdata->vif.chanctx_conf); 1675 1675 } 1676 - if (!chanctx_conf) 1677 - goto fail_rcu; 1678 1676 1679 - chan = chanctx_conf->def.chan; 1677 + if (chanctx_conf) 1678 + chan = chanctx_conf->def.chan; 1679 + else if (!local->use_chanctx) 1680 + chan = local->_oper_channel; 1681 + else 1682 + goto fail_rcu; 1680 1683 1681 1684 /* 1682 1685 * Frame injection is not allowed if beaconing is not allowed
+5 -4
net/netfilter/nf_conntrack_core.c
··· 1376 1376 synchronize_net(); 1377 1377 nf_conntrack_proto_fini(net); 1378 1378 nf_conntrack_cleanup_net(net); 1379 + } 1379 1380 1380 - if (net_eq(net, &init_net)) { 1381 - RCU_INIT_POINTER(nf_ct_destroy, NULL); 1382 - nf_conntrack_cleanup_init_net(); 1383 - } 1381 + void nf_conntrack_cleanup_end(void) 1382 + { 1383 + RCU_INIT_POINTER(nf_ct_destroy, NULL); 1384 + nf_conntrack_cleanup_init_net(); 1384 1385 } 1385 1386 1386 1387 void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
+1
net/netfilter/nf_conntrack_standalone.c
··· 575 575 static void __exit nf_conntrack_standalone_fini(void) 576 576 { 577 577 unregister_pernet_subsys(&nf_conntrack_net_ops); 578 + nf_conntrack_cleanup_end(); 578 579 } 579 580 580 581 module_init(nf_conntrack_standalone_init);
+20 -8
net/netfilter/x_tables.c
··· 345 345 }
346 346 EXPORT_SYMBOL_GPL(xt_find_revision);
347 347
348 - static char *textify_hooks(char *buf, size_t size, unsigned int mask)
348 + static char *
349 + textify_hooks(char *buf, size_t size, unsigned int mask, uint8_t nfproto)
349 350 {
350 - static const char *const names[] = {
351 + static const char *const inetbr_names[] = {
351 352 "PREROUTING", "INPUT", "FORWARD",
352 353 "OUTPUT", "POSTROUTING", "BROUTING",
353 354 };
354 - unsigned int i;
355 + static const char *const arp_names[] = {
356 + "INPUT", "FORWARD", "OUTPUT",
357 + };
358 + const char *const *names;
359 + unsigned int i, max;
355 360 char *p = buf;
356 361 bool np = false;
357 362 int res;
358 363
364 + names = (nfproto == NFPROTO_ARP) ? arp_names : inetbr_names;
365 + max = (nfproto == NFPROTO_ARP) ? ARRAY_SIZE(arp_names) :
366 + ARRAY_SIZE(inetbr_names);
359 367 *p = '\0';
360 - for (i = 0; i < ARRAY_SIZE(names); ++i) {
368 + for (i = 0; i < max; ++i) {
361 369 if (!(mask & (1 << i)))
362 370 continue;
363 371 res = snprintf(p, size, "%s%s", np ? "/" : "", names[i]);
··· 410 402 pr_err("%s_tables: %s match: used from hooks %s, but only "
411 403 "valid from %s\n",
412 404 xt_prefix[par->family], par->match->name,
413 - textify_hooks(used, sizeof(used), par->hook_mask),
414 - textify_hooks(allow, sizeof(allow), par->match->hooks));
405 + textify_hooks(used, sizeof(used), par->hook_mask,
406 + par->family),
407 + textify_hooks(allow, sizeof(allow), par->match->hooks,
408 + par->family));
415 409 return -EINVAL;
416 410 }
417 411 if (par->match->proto && (par->match->proto != proto || inv_proto)) {
··· 585 575 pr_err("%s_tables: %s target: used from hooks %s, but only "
586 576 "usable from %s\n",
587 577 xt_prefix[par->family], par->target->name,
588 - textify_hooks(used, sizeof(used), par->hook_mask),
589 - textify_hooks(allow, sizeof(allow), par->target->hooks));
578 + textify_hooks(used, sizeof(used), par->hook_mask,
579 + par->family),
580 + textify_hooks(allow, sizeof(allow), par->target->hooks,
581 + par->family));
590 582 return -EINVAL;
591 583 }
592 584 if (par->target->proto && (par->target->proto != proto || inv_proto)) {
+2 -2
net/netfilter/xt_CT.c
··· 109 109 struct xt_ct_target_info *info = par->targinfo; 110 110 struct nf_conntrack_tuple t; 111 111 struct nf_conn *ct; 112 - int ret; 112 + int ret = -EOPNOTSUPP; 113 113 114 114 if (info->flags & ~XT_CT_NOTRACK) 115 115 return -EINVAL; ··· 247 247 struct xt_ct_target_info_v1 *info = par->targinfo; 248 248 struct nf_conntrack_tuple t; 249 249 struct nf_conn *ct; 250 - int ret; 250 + int ret = -EOPNOTSUPP; 251 251 252 252 if (info->flags & ~XT_CT_NOTRACK) 253 253 return -EINVAL;
+8 -4
net/sctp/outqueue.c
··· 224 224 225 225 /* Free the outqueue structure and any related pending chunks. 226 226 */ 227 - void sctp_outq_teardown(struct sctp_outq *q) 227 + static void __sctp_outq_teardown(struct sctp_outq *q) 228 228 { 229 229 struct sctp_transport *transport; 230 230 struct list_head *lchunk, *temp; ··· 277 277 sctp_chunk_free(chunk); 278 278 } 279 279 280 - q->error = 0; 281 - 282 280 /* Throw away any leftover control chunks. */ 283 281 list_for_each_entry_safe(chunk, tmp, &q->control_chunk_list, list) { 284 282 list_del_init(&chunk->list); ··· 284 286 } 285 287 } 286 288 289 + void sctp_outq_teardown(struct sctp_outq *q) 290 + { 291 + __sctp_outq_teardown(q); 292 + sctp_outq_init(q->asoc, q); 293 + } 294 + 287 295 /* Free the outqueue structure and any related pending chunks. */ 288 296 void sctp_outq_free(struct sctp_outq *q) 289 297 { 290 298 /* Throw away leftover chunks. */ 291 - sctp_outq_teardown(q); 299 + __sctp_outq_teardown(q); 292 300 293 301 /* If we were kmalloc()'d, free the memory. */ 294 302 if (q->malloced)
+3 -1
net/sctp/sm_statefuns.c
··· 1779 1779 1780 1780 /* Update the content of current association. */ 1781 1781 sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); 1782 - sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); 1783 1782 sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); 1783 + sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, 1784 + SCTP_STATE(SCTP_STATE_ESTABLISHED)); 1785 + sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); 1784 1786 return SCTP_DISPOSITION_CONSUME; 1785 1787 1786 1788 nomem_ev:
+4
net/sctp/sysctl.c
··· 366 366 367 367 void sctp_sysctl_net_unregister(struct net *net) 368 368 { 369 + struct ctl_table *table; 370 + 371 + table = net->sctp.sysctl_header->ctl_table_arg; 369 372 unregister_net_sysctl_table(net->sctp.sysctl_header); 373 + kfree(table); 370 374 } 371 375 372 376 static struct ctl_table_header * sctp_sysctl_header;
+17 -1
net/sunrpc/sched.c
··· 98 98 list_add(&task->u.tk_wait.timer_list, &queue->timer_list.list); 99 99 } 100 100 101 + static void rpc_rotate_queue_owner(struct rpc_wait_queue *queue) 102 + { 103 + struct list_head *q = &queue->tasks[queue->priority]; 104 + struct rpc_task *task; 105 + 106 + if (!list_empty(q)) { 107 + task = list_first_entry(q, struct rpc_task, u.tk_wait.list); 108 + if (task->tk_owner == queue->owner) 109 + list_move_tail(&task->u.tk_wait.list, q); 110 + } 111 + } 112 + 101 113 static void rpc_set_waitqueue_priority(struct rpc_wait_queue *queue, int priority) 102 114 { 103 - queue->priority = priority; 115 + if (queue->priority != priority) { 116 + /* Fairness: rotate the list when changing priority */ 117 + rpc_rotate_queue_owner(queue); 118 + queue->priority = priority; 119 + } 104 120 } 105 121 106 122 static void rpc_set_waitqueue_owner(struct rpc_wait_queue *queue, pid_t pid)
+1 -1
net/xfrm/xfrm_policy.c
··· 2656 2656 WARN_ON(!hlist_empty(&net->xfrm.policy_inexact[dir])); 2657 2657 2658 2658 htab = &net->xfrm.policy_bydst[dir]; 2659 - sz = (htab->hmask + 1); 2659 + sz = (htab->hmask + 1) * sizeof(struct hlist_head); 2660 2660 WARN_ON(!hlist_empty(htab->table)); 2661 2661 xfrm_hash_free(htab->table, sz); 2662 2662 }
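The xfrm_policy fix above is a units bug: `hmask` encodes the bucket count minus one, so the size passed to `xfrm_hash_free()` must be scaled to bytes by `sizeof(struct hlist_head)`, not left as a bare entry count. A reduced sketch of the size computation (illustrative names):

```c
#include <assert.h>
#include <stddef.h>

struct hlist_head { void *first; };

/* hmask is (number of hash buckets - 1); allocation and free helpers
 * both expect the table size in bytes, not in entries. */
static size_t bydst_table_bytes(unsigned int hmask)
{
	return (size_t)(hmask + 1) * sizeof(struct hlist_head);
}
```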
+3 -1
net/xfrm/xfrm_replay.c
··· 242 242 u32 diff; 243 243 struct xfrm_replay_state_esn *replay_esn = x->replay_esn; 244 244 u32 seq = ntohl(net_seq); 245 - u32 pos = (replay_esn->seq - 1) % replay_esn->replay_window; 245 + u32 pos; 246 246 247 247 if (!replay_esn->replay_window) 248 248 return; 249 + 250 + pos = (replay_esn->seq - 1) % replay_esn->replay_window; 249 251 250 252 if (seq > replay_esn->seq) { 251 253 diff = seq - replay_esn->seq;
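The reordering above matters because `replay_esn->replay_window` may be zero when replay checking is disabled, and `% 0` is undefined behaviour in C, so the bitmap position must only be computed after the early return. A reduced sketch of the control flow (names illustrative):

```c
#include <assert.h>

/* Compute a replay-bitmap slot only when the window is non-zero; a
 * zero window means replay checking is disabled and there is no slot. */
static int replay_pos(unsigned int seq, unsigned int window)
{
	if (window == 0)
		return -1;	/* disabled: never reach the modulo */
	return (int)((seq - 1) % window);
}
```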
+21 -3
security/capability.c
··· 709 709 { 710 710 } 711 711 712 + static int cap_tun_dev_alloc_security(void **security) 713 + { 714 + return 0; 715 + } 716 + 717 + static void cap_tun_dev_free_security(void *security) 718 + { 719 + } 720 + 712 721 static int cap_tun_dev_create(void) 713 722 { 714 723 return 0; 715 724 } 716 725 717 - static void cap_tun_dev_post_create(struct sock *sk) 726 + static int cap_tun_dev_attach_queue(void *security) 718 727 { 728 + return 0; 719 729 } 720 730 721 - static int cap_tun_dev_attach(struct sock *sk) 731 + static int cap_tun_dev_attach(struct sock *sk, void *security) 732 + { 733 + return 0; 734 + } 735 + 736 + static int cap_tun_dev_open(void *security) 722 737 { 723 738 return 0; 724 739 } ··· 1065 1050 set_to_cap_if_null(ops, secmark_refcount_inc); 1066 1051 set_to_cap_if_null(ops, secmark_refcount_dec); 1067 1052 set_to_cap_if_null(ops, req_classify_flow); 1053 + set_to_cap_if_null(ops, tun_dev_alloc_security); 1054 + set_to_cap_if_null(ops, tun_dev_free_security); 1068 1055 set_to_cap_if_null(ops, tun_dev_create); 1069 - set_to_cap_if_null(ops, tun_dev_post_create); 1056 + set_to_cap_if_null(ops, tun_dev_open); 1057 + set_to_cap_if_null(ops, tun_dev_attach_queue); 1070 1058 set_to_cap_if_null(ops, tun_dev_attach); 1071 1059 #endif /* CONFIG_SECURITY_NETWORK */ 1072 1060 #ifdef CONFIG_SECURITY_NETWORK_XFRM
+2
security/device_cgroup.c
··· 215 215 struct dev_cgroup *dev_cgroup; 216 216 217 217 dev_cgroup = cgroup_to_devcgroup(cgroup); 218 + mutex_lock(&devcgroup_mutex); 218 219 dev_exception_clean(dev_cgroup); 220 + mutex_unlock(&devcgroup_mutex); 219 221 kfree(dev_cgroup); 220 222 } 221 223
+2 -2
security/integrity/evm/evm_crypto.c
··· 205 205 rc = __vfs_setxattr_noperm(dentry, XATTR_NAME_EVM, 206 206 &xattr_data, 207 207 sizeof(xattr_data), 0); 208 - } 209 - else if (rc == -ENODATA) 208 + } else if (rc == -ENODATA && inode->i_op->removexattr) { 210 209 rc = inode->i_op->removexattr(dentry, XATTR_NAME_EVM); 210 + } 211 211 return rc; 212 212 } 213 213
+23 -5
security/security.c
··· 1254 1254 } 1255 1255 EXPORT_SYMBOL(security_secmark_refcount_dec); 1256 1256 1257 + int security_tun_dev_alloc_security(void **security) 1258 + { 1259 + return security_ops->tun_dev_alloc_security(security); 1260 + } 1261 + EXPORT_SYMBOL(security_tun_dev_alloc_security); 1262 + 1263 + void security_tun_dev_free_security(void *security) 1264 + { 1265 + security_ops->tun_dev_free_security(security); 1266 + } 1267 + EXPORT_SYMBOL(security_tun_dev_free_security); 1268 + 1257 1269 int security_tun_dev_create(void) 1258 1270 { 1259 1271 return security_ops->tun_dev_create(); 1260 1272 } 1261 1273 EXPORT_SYMBOL(security_tun_dev_create); 1262 1274 1263 - void security_tun_dev_post_create(struct sock *sk) 1275 + int security_tun_dev_attach_queue(void *security) 1264 1276 { 1265 - return security_ops->tun_dev_post_create(sk); 1277 + return security_ops->tun_dev_attach_queue(security); 1266 1278 } 1267 - EXPORT_SYMBOL(security_tun_dev_post_create); 1279 + EXPORT_SYMBOL(security_tun_dev_attach_queue); 1268 1280 1269 - int security_tun_dev_attach(struct sock *sk) 1281 + int security_tun_dev_attach(struct sock *sk, void *security) 1270 1282 { 1271 - return security_ops->tun_dev_attach(sk); 1283 + return security_ops->tun_dev_attach(sk, security); 1272 1284 } 1273 1285 EXPORT_SYMBOL(security_tun_dev_attach); 1286 + 1287 + int security_tun_dev_open(void *security) 1288 + { 1289 + return security_ops->tun_dev_open(security); 1290 + } 1291 + EXPORT_SYMBOL(security_tun_dev_open); 1274 1292 1275 1293 #endif /* CONFIG_SECURITY_NETWORK */ 1276 1294
+39 -11
security/selinux/hooks.c
··· 4399 4399 fl->flowi_secid = req->secid; 4400 4400 } 4401 4401 4402 + static int selinux_tun_dev_alloc_security(void **security) 4403 + { 4404 + struct tun_security_struct *tunsec; 4405 + 4406 + tunsec = kzalloc(sizeof(*tunsec), GFP_KERNEL); 4407 + if (!tunsec) 4408 + return -ENOMEM; 4409 + tunsec->sid = current_sid(); 4410 + 4411 + *security = tunsec; 4412 + return 0; 4413 + } 4414 + 4415 + static void selinux_tun_dev_free_security(void *security) 4416 + { 4417 + kfree(security); 4418 + } 4419 + 4402 4420 static int selinux_tun_dev_create(void) 4403 4421 { 4404 4422 u32 sid = current_sid(); ··· 4432 4414 NULL); 4433 4415 } 4434 4416 4435 - static void selinux_tun_dev_post_create(struct sock *sk) 4417 + static int selinux_tun_dev_attach_queue(void *security) 4436 4418 { 4419 + struct tun_security_struct *tunsec = security; 4420 + 4421 + return avc_has_perm(current_sid(), tunsec->sid, SECCLASS_TUN_SOCKET, 4422 + TUN_SOCKET__ATTACH_QUEUE, NULL); 4423 + } 4424 + 4425 + static int selinux_tun_dev_attach(struct sock *sk, void *security) 4426 + { 4427 + struct tun_security_struct *tunsec = security; 4437 4428 struct sk_security_struct *sksec = sk->sk_security; 4438 4429 4439 4430 /* we don't currently perform any NetLabel based labeling here and it ··· 4452 4425 * cause confusion to the TUN user that had no idea network labeling 4453 4426 * protocols were being used */ 4454 4427 4455 - /* see the comments in selinux_tun_dev_create() about why we don't use 4456 - * the sockcreate SID here */ 4457 - 4458 - sksec->sid = current_sid(); 4428 + sksec->sid = tunsec->sid; 4459 4429 sksec->sclass = SECCLASS_TUN_SOCKET; 4430 + 4431 + return 0; 4460 4432 } 4461 4433 4462 - static int selinux_tun_dev_attach(struct sock *sk) 4434 + static int selinux_tun_dev_open(void *security) 4463 4435 { 4464 - struct sk_security_struct *sksec = sk->sk_security; 4436 + struct tun_security_struct *tunsec = security; 4465 4437 u32 sid = current_sid(); 4466 4438 int err; 4467 4439 4468 - err = avc_has_perm(sid, sksec->sid, SECCLASS_TUN_SOCKET, 4440 + err = avc_has_perm(sid, tunsec->sid, SECCLASS_TUN_SOCKET, 4469 4441 TUN_SOCKET__RELABELFROM, NULL); 4470 4442 if (err) 4471 4443 return err; ··· 4472 4446 TUN_SOCKET__RELABELTO, NULL); 4473 4447 if (err) 4474 4448 return err; 4475 - 4476 - sksec->sid = sid; 4449 + tunsec->sid = sid; 4477 4450 4478 4451 return 0; 4479 4452 } ··· 5667 5642 .secmark_refcount_inc = selinux_secmark_refcount_inc, 5668 5643 .secmark_refcount_dec = selinux_secmark_refcount_dec, 5669 5644 .req_classify_flow = selinux_req_classify_flow, 5645 + .tun_dev_alloc_security = selinux_tun_dev_alloc_security, 5646 + .tun_dev_free_security = selinux_tun_dev_free_security, 5670 5647 .tun_dev_create = selinux_tun_dev_create, 5671 - .tun_dev_post_create = selinux_tun_dev_post_create, 5648 + .tun_dev_attach_queue = selinux_tun_dev_attach_queue, 5672 5649 .tun_dev_attach = selinux_tun_dev_attach, 5650 + .tun_dev_open = selinux_tun_dev_open, 5673 5651 5674 5652 #ifdef CONFIG_SECURITY_NETWORK_XFRM 5675 5653 .xfrm_policy_alloc_security = selinux_xfrm_policy_alloc,
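The SELinux hunk above moves the TUN socket's label from "SID of whoever attaches" to a per-device security blob allocated at creation time. A userspace sketch of that lifecycle (struct names mirror the patch; `current_sid()` is stubbed and the SID values are illustrative, not real SELinux API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t u32;
typedef uint16_t u16;

struct tun_security_struct { u32 sid; };            /* per-device blob */
struct sk_security_struct  { u32 sid; u16 sclass; };

static u32 mock_current_sid = 42;                   /* stand-in for current_sid() */
static u32 current_sid(void) { return mock_current_sid; }

/* Capture the creating task's SID once, at device allocation time. */
static int tun_dev_alloc_security(void **security)
{
    struct tun_security_struct *tunsec = calloc(1, sizeof(*tunsec));
    if (!tunsec)
        return -12;                                 /* -ENOMEM */
    tunsec->sid = current_sid();
    *security = tunsec;
    return 0;
}

/* Each attached queue's socket inherits the device SID, not the
 * SID of the task doing the attach. */
static int tun_dev_attach(struct sk_security_struct *sksec, void *security)
{
    struct tun_security_struct *tunsec = security;

    sksec->sid = tunsec->sid;
    return 0;
}
```

This is why multiqueue attach (the new `attach_queue` permission) can be checked against a stable device label even when queues are attached from a different context.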
+1 -1
security/selinux/include/classmap.h
··· 150 150 NULL } }, 151 151 { "kernel_service", { "use_as_override", "create_files_as", NULL } }, 152 152 { "tun_socket", 153 - { COMMON_SOCK_PERMS, NULL } }, 153 + { COMMON_SOCK_PERMS, "attach_queue", NULL } }, 154 154 { NULL } 155 155 };
+4
security/selinux/include/objsec.h
··· 110 110 u16 sclass; /* sock security class */ 111 111 }; 112 112 113 + struct tun_security_struct { 114 + u32 sid; /* SID for the tun device sockets */ 115 + }; 116 + 113 117 struct key_security_struct { 114 118 u32 sid; /* SID of key */ 115 119 };
+2 -3
sound/pci/hda/hda_codec.c
··· 3654 3654 hda_set_power_state(codec, AC_PWRST_D0); 3655 3655 restore_shutup_pins(codec); 3656 3656 hda_exec_init_verbs(codec); 3657 + snd_hda_jack_set_dirty_all(codec); 3657 3658 if (codec->patch_ops.resume) 3658 3659 codec->patch_ops.resume(codec); 3659 3660 else { ··· 3666 3665 3667 3666 if (codec->jackpoll_interval) 3668 3667 hda_jackpoll_work(&codec->jackpoll_work.work); 3669 - else { 3670 - snd_hda_jack_set_dirty_all(codec); 3668 + else 3671 3669 snd_hda_jack_report_sync(codec); 3672 - } 3673 3670 3674 3671 codec->in_pm = 0; 3675 3672 snd_hda_power_down(codec); /* flag down before returning */
+30 -19
sound/pci/hda/hda_intel.c
··· 656 656 #define get_azx_dev(substream) (substream->runtime->private_data) 657 657 658 658 #ifdef CONFIG_X86 659 - static void __mark_pages_wc(struct azx *chip, void *addr, size_t size, bool on) 659 + static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool on) 660 660 { 661 + int pages; 662 + 661 663 if (azx_snoop(chip)) 662 664 return; 663 - if (addr && size) { 664 - int pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; 665 + if (!dmab || !dmab->area || !dmab->bytes) 666 + return; 667 + 668 + #ifdef CONFIG_SND_DMA_SGBUF 669 + if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) { 670 + struct snd_sg_buf *sgbuf = dmab->private_data; 665 671 if (on) 666 - set_memory_wc((unsigned long)addr, pages); 672 + set_pages_array_wc(sgbuf->page_table, sgbuf->pages); 667 673 else 668 - set_memory_wb((unsigned long)addr, pages); 674 + set_pages_array_wb(sgbuf->page_table, sgbuf->pages); 675 + return; 669 676 } 677 + #endif 678 + 679 + pages = (dmab->bytes + PAGE_SIZE - 1) >> PAGE_SHIFT; 680 + if (on) 681 + set_memory_wc((unsigned long)dmab->area, pages); 682 + else 683 + set_memory_wb((unsigned long)dmab->area, pages); 670 684 } 671 685 672 686 static inline void mark_pages_wc(struct azx *chip, struct snd_dma_buffer *buf, 673 687 bool on) 674 688 { 675 - __mark_pages_wc(chip, buf->area, buf->bytes, on); 689 + __mark_pages_wc(chip, buf, on); 676 690 } 677 691 static inline void mark_runtime_wc(struct azx *chip, struct azx_dev *azx_dev, 678 - struct snd_pcm_runtime *runtime, bool on) 692 + struct snd_pcm_substream *substream, bool on) 679 693 { 680 694 if (azx_dev->wc_marked != on) { 681 - __mark_pages_wc(chip, runtime->dma_area, runtime->dma_bytes, on); 695 + __mark_pages_wc(chip, snd_pcm_get_dma_buf(substream), on); 682 696 azx_dev->wc_marked = on; 683 697 } 684 698 } ··· 703 689 { 704 690 } 705 691 static inline void mark_runtime_wc(struct azx *chip, struct azx_dev *azx_dev, 706 - struct snd_pcm_runtime *runtime, bool on) 692 + struct snd_pcm_substream *substream, bool on) 707 693 { 708 694 } 709 695 #endif ··· 1982 1968 { 1983 1969 struct azx_pcm *apcm = snd_pcm_substream_chip(substream); 1984 1970 struct azx *chip = apcm->chip; 1985 - struct snd_pcm_runtime *runtime = substream->runtime; 1986 1971 struct azx_dev *azx_dev = get_azx_dev(substream); 1987 1972 int ret; 1988 1973 1989 - mark_runtime_wc(chip, azx_dev, runtime, false); 1974 + mark_runtime_wc(chip, azx_dev, substream, false); 1990 1975 azx_dev->bufsize = 0; 1991 1976 azx_dev->period_bytes = 0; 1992 1977 azx_dev->format_val = 0; ··· 1993 1980 params_buffer_bytes(hw_params)); 1994 1981 if (ret < 0) 1995 1982 return ret; 1996 - mark_runtime_wc(chip, azx_dev, runtime, true); 1983 + mark_runtime_wc(chip, azx_dev, substream, true); 1997 1984 return ret; 1998 1985 } ··· 2002 1989 struct azx_pcm *apcm = snd_pcm_substream_chip(substream); 2003 1990 struct azx_dev *azx_dev = get_azx_dev(substream); 2004 1991 struct azx *chip = apcm->chip; 2005 - struct snd_pcm_runtime *runtime = substream->runtime; 2006 1992 struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream]; 2007 1993 2008 1994 /* reset BDL address */ ··· 2014 2002 2015 2003 snd_hda_codec_cleanup(apcm->codec, hinfo, substream); 2016 2004 2017 - mark_runtime_wc(chip, azx_dev, runtime, false); 2005 + mark_runtime_wc(chip, azx_dev, substream, false); 2018 2006 return snd_pcm_lib_free_pages(substream); 2019 2007 } ··· 3625 3613 /* 5 Series/3400 */ 3626 3614 { PCI_DEVICE(0x8086, 0x3b56), 3627 3615 .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH }, 3628 - /* SCH */ 3616 + /* Poulsbo */ 3629 3617 { PCI_DEVICE(0x8086, 0x811b), 3630 - .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_SCH_SNOOP | 3631 - AZX_DCAPS_BUFSIZE | AZX_DCAPS_POSFIX_LPIB }, /* Poulsbo */ 3618 + .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM }, 3619 + /* Oaktrail */ 3632 3620 { PCI_DEVICE(0x8086, 0x080a), 3633 - .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_SCH_SNOOP | 3634 - AZX_DCAPS_BUFSIZE | AZX_DCAPS_POSFIX_LPIB }, /* Oaktrail */ 3621 + .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM }, 3635 3622 /* ICH */ 3636 3623 { PCI_DEVICE(0x8086, 0x2668), 3637 3624 .driver_data = AZX_DRIVER_ICH | AZX_DCAPS_OLD_SSYNC |
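The hda_intel rework above passes the whole `snd_dma_buffer` into `__mark_pages_wc()` so SG buffers can be flipped page-by-page, while the contiguous path keeps rounding the byte count up to whole pages before calling `set_memory_wc()`/`set_memory_wb()`. A minimal sketch of that rounding, with `PAGE_SHIFT` pinned to the usual x86 4 KiB purely for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Round a byte count up to the number of pages covering it,
 * as __mark_pages_wc() does for the contiguous-buffer case. */
static unsigned long bytes_to_pages(size_t bytes)
{
    return (bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```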
+9
sound/pci/hda/patch_conexant.c
··· 4636 4636 .patch = patch_conexant_auto }, 4637 4637 { .id = 0x14f15111, .name = "CX20753/4", 4638 4638 .patch = patch_conexant_auto }, 4639 + { .id = 0x14f15113, .name = "CX20755", 4640 + .patch = patch_conexant_auto }, 4641 + { .id = 0x14f15114, .name = "CX20756", 4642 + .patch = patch_conexant_auto }, 4643 + { .id = 0x14f15115, .name = "CX20757", 4644 + .patch = patch_conexant_auto }, 4639 4645 {} /* terminator */ 4640 4646 }; 4641 4647 ··· 4665 4659 MODULE_ALIAS("snd-hda-codec-id:14f1510f"); 4666 4660 MODULE_ALIAS("snd-hda-codec-id:14f15110"); 4667 4661 MODULE_ALIAS("snd-hda-codec-id:14f15111"); 4662 + MODULE_ALIAS("snd-hda-codec-id:14f15113"); 4663 + MODULE_ALIAS("snd-hda-codec-id:14f15114"); 4664 + MODULE_ALIAS("snd-hda-codec-id:14f15115"); 4668 4665 4669 4666 MODULE_LICENSE("GPL"); 4670 4667 MODULE_DESCRIPTION("Conexant HD-audio codec");
+4
sound/pci/hda/patch_realtek.c
··· 4694 4694 SND_PCI_QUIRK(0x1584, 0x9077, "Uniwill P53", ALC880_FIXUP_VOL_KNOB), 4695 4695 SND_PCI_QUIRK(0x161f, 0x203d, "W810", ALC880_FIXUP_W810), 4696 4696 SND_PCI_QUIRK(0x161f, 0x205d, "Medion Rim 2150", ALC880_FIXUP_MEDION_RIM), 4697 + SND_PCI_QUIRK(0x1631, 0xe011, "PB 13201056", ALC880_FIXUP_6ST), 4697 4698 SND_PCI_QUIRK(0x1734, 0x107c, "FSC F1734", ALC880_FIXUP_F1734), 4698 4699 SND_PCI_QUIRK(0x1734, 0x1094, "FSC Amilo M1451G", ALC880_FIXUP_FUJITSU), 4699 4700 SND_PCI_QUIRK(0x1734, 0x10ac, "FSC AMILO Xi 1526", ALC880_FIXUP_F1734), ··· 5709 5708 }; 5710 5709 5711 5710 static const struct snd_pci_quirk alc268_fixup_tbl[] = { 5711 + SND_PCI_QUIRK(0x1025, 0x015b, "Acer AOA 150 (ZG5)", ALC268_FIXUP_INV_DMIC), 5712 5712 /* below is codec SSID since multiple Toshiba laptops have the 5713 5713 * same PCI SSID 1179:ff00 5714 5714 */ ··· 6253 6251 SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC), 6254 6252 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_MIC2_MUTE_LED), 6255 6253 SND_PCI_QUIRK(0x103c, 0x1972, "HP Pavilion 17", ALC269_FIXUP_MIC1_MUTE_LED), 6254 + SND_PCI_QUIRK(0x103c, 0x1977, "HP Pavilion 14", ALC269_FIXUP_MIC1_MUTE_LED), 6256 6255 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC), 6257 6256 SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_DMIC), 6258 6257 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), ··· 6268 6265 SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ), 6269 6266 SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO), 6270 6267 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), 6268 + SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK), 6271 6269 SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK), 6272 6270 SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC), 6273 6271 SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
+4 -1
sound/soc/codecs/arizona.c
··· 685 685 } 686 686 sr_val = i; 687 687 688 - lrclk = snd_soc_params_to_bclk(params) / params_rate(params); 688 + lrclk = rates[bclk] / params_rate(params); 689 689 690 690 arizona_aif_dbg(dai, "BCLK %dHz LRCLK %dHz\n", 691 691 rates[bclk], rates[bclk] / lrclk); ··· 1081 1081 dev_err(arizona->dev, "Failed to get FLL%d clock OK IRQ: %d\n", 1082 1082 id, ret); 1083 1083 } 1084 + 1085 + regmap_update_bits(arizona->regmap, fll->base + 1, 1086 + ARIZONA_FLL1_FREERUN, 0); 1084 1087 1085 1088 return 0; 1086 1089 }
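The arizona.c fix above derives the LRCLK divider from the BCLK rate the driver actually selected from its rate table, rather than from a BCLK recomputed out of the stream parameters. A sketch of the corrected arithmetic (the two-entry table is hypothetical, not the real Arizona rate table):

```c
#include <assert.h>

static const int rates[] = { 1536000, 3072000 };  /* hypothetical BCLK table */

/* lrclk = rates[bclk] / params_rate(params), per the fixed hunk. */
static int lrclk_divider(int bclk_idx, int sample_rate)
{
    return rates[bclk_idx] / sample_rate;
}
```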
-3
sound/soc/codecs/wm2200.c
··· 1019 1019 "EQR", 1020 1020 "LHPF1", 1021 1021 "LHPF2", 1022 - "LHPF3", 1023 - "LHPF4", 1024 1022 "DSP1.1", 1025 1023 "DSP1.2", 1026 1024 "DSP1.3", ··· 1051 1053 0x25, 1052 1054 0x50, /* EQ */ 1053 1055 0x51, 1054 - 0x52, 1055 1056 0x60, /* LHPF1 */ 1056 1057 0x61, /* LHPF2 */ 1057 1058 0x68, /* DSP1 */
+1 -2
sound/soc/codecs/wm5102.c
··· 896 896 897 897 static const struct soc_enum wm5102_aec_loopback = 898 898 SOC_VALUE_ENUM_SINGLE(ARIZONA_DAC_AEC_CONTROL_1, 899 - ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 900 - ARIZONA_AEC_LOOPBACK_SRC_MASK, 899 + ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 0xf, 901 900 ARRAY_SIZE(wm5102_aec_loopback_texts), 902 901 wm5102_aec_loopback_texts, 903 902 wm5102_aec_loopback_values);
+1 -2
sound/soc/codecs/wm5110.c
··· 344 344 345 345 static const struct soc_enum wm5110_aec_loopback = 346 346 SOC_VALUE_ENUM_SINGLE(ARIZONA_DAC_AEC_CONTROL_1, 347 - ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 348 - ARIZONA_AEC_LOOPBACK_SRC_MASK, 347 + ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 0xf, 349 348 ARRAY_SIZE(wm5110_aec_loopback_texts), 350 349 wm5110_aec_loopback_texts, 351 350 wm5110_aec_loopback_values);
+3 -3
sound/soc/codecs/wm_adsp.c
··· 324 324 325 325 if (reg) { 326 326 buf = kmemdup(region->data, le32_to_cpu(region->len), 327 - GFP_KERNEL); 327 + GFP_KERNEL | GFP_DMA); 328 328 if (!buf) { 329 329 adsp_err(dsp, "Out of memory\n"); 330 330 return -ENOMEM; ··· 396 396 hdr = (void*)&firmware->data[0]; 397 397 if (memcmp(hdr->magic, "WMDR", 4) != 0) { 398 398 adsp_err(dsp, "%s: invalid magic\n", file); 399 - return -EINVAL; 399 + goto out_fw; 400 400 } 401 401 402 402 adsp_dbg(dsp, "%s: v%d.%d.%d\n", file, ··· 439 439 440 440 if (reg) { 441 441 buf = kmemdup(blk->data, le32_to_cpu(blk->len), 442 - GFP_KERNEL); 442 + GFP_KERNEL | GFP_DMA); 443 443 if (!buf) { 444 444 adsp_err(dsp, "Out of memory\n"); 445 445 return -ENOMEM;
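The `return -EINVAL` → `goto out_fw` change in wm_adsp.c above is a classic error-path fix: bailing out directly on a bad "WMDR" magic skipped the shared cleanup and leaked the requested firmware. A userspace sketch of the pattern (function name, the `-22` errno value, and the release counter are illustrative only):

```c
#include <assert.h>
#include <string.h>

static int releases;                        /* observable cleanup counter */

static void release_firmware_stub(void)
{
    releases++;
}

static int load_coeff(const char *data)
{
    int ret = 0;

    if (memcmp(data, "WMDR", 4) != 0) {
        ret = -22;                          /* -EINVAL */
        goto out_fw;                        /* was: return -EINVAL (leak) */
    }
    /* ... parse coefficient blocks ... */
out_fw:
    release_firmware_stub();                /* now runs on every exit path */
    return ret;
}
```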
+2 -7
sound/soc/fsl/Kconfig
··· 108 108 config SND_SOC_IMX_SSI 109 109 tristate 110 110 111 - config SND_SOC_IMX_PCM 112 - tristate 113 - 114 111 config SND_SOC_IMX_PCM_FIQ 115 - bool 112 + tristate 116 113 select FIQ 117 - select SND_SOC_IMX_PCM 118 114 119 115 config SND_SOC_IMX_PCM_DMA 120 - bool 116 + tristate 121 117 select SND_SOC_DMAENGINE_PCM 122 - select SND_SOC_IMX_PCM 123 118 124 119 config SND_SOC_IMX_AUDMUX 125 120 tristate
+4 -1
sound/soc/fsl/Makefile
··· 41 41 obj-$(CONFIG_SND_SOC_IMX_SSI) += snd-soc-imx-ssi.o 42 42 obj-$(CONFIG_SND_SOC_IMX_AUDMUX) += snd-soc-imx-audmux.o 43 43 44 - obj-$(CONFIG_SND_SOC_IMX_PCM) += snd-soc-imx-pcm.o 44 + obj-$(CONFIG_SND_SOC_IMX_PCM_FIQ) += snd-soc-imx-pcm-fiq.o 45 + snd-soc-imx-pcm-fiq-y := imx-pcm-fiq.o imx-pcm.o 46 + obj-$(CONFIG_SND_SOC_IMX_PCM_DMA) += snd-soc-imx-pcm-dma.o 47 + snd-soc-imx-pcm-dma-y := imx-pcm-dma.o imx-pcm.o 45 48 46 49 # i.MX Machine Support 47 50 snd-soc-eukrea-tlv320-objs := eukrea-tlv320.o
-3
sound/soc/fsl/imx-pcm.c
··· 31 31 runtime->dma_bytes); 32 32 return ret; 33 33 } 34 - EXPORT_SYMBOL_GPL(snd_imx_pcm_mmap); 35 34 36 35 static int imx_pcm_preallocate_dma_buffer(struct snd_pcm *pcm, int stream) 37 36 { ··· 79 80 out: 80 81 return ret; 81 82 } 82 - EXPORT_SYMBOL_GPL(imx_pcm_new); 83 83 84 84 void imx_pcm_free(struct snd_pcm *pcm) 85 85 { ··· 100 102 buf->area = NULL; 101 103 } 102 104 } 103 - EXPORT_SYMBOL_GPL(imx_pcm_free); 104 105 105 106 MODULE_DESCRIPTION("Freescale i.MX PCM driver"); 106 107 MODULE_AUTHOR("Sascha Hauer <s.hauer@pengutronix.de>");
+10 -2
sound/soc/soc-dapm.c
··· 1023 1023 1024 1024 if (SND_SOC_DAPM_EVENT_ON(event)) { 1025 1025 if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) { 1026 - ret = regulator_allow_bypass(w->regulator, true); 1026 + ret = regulator_allow_bypass(w->regulator, false); 1027 1027 if (ret != 0) 1028 1028 dev_warn(w->dapm->dev, 1029 1029 "ASoC: Failed to bypass %s: %d\n", ··· 1033 1033 return regulator_enable(w->regulator); 1034 1034 } else { 1035 1035 if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) { 1036 - ret = regulator_allow_bypass(w->regulator, false); 1036 + ret = regulator_allow_bypass(w->regulator, true); 1037 1037 if (ret != 0) 1038 1038 dev_warn(w->dapm->dev, 1039 1039 "ASoC: Failed to unbypass %s: %d\n", ··· 3038 3038 dev_err(dapm->dev, "ASoC: Failed to request %s: %d\n", 3039 3039 w->name, ret); 3040 3040 return NULL; 3041 + } 3042 + 3043 + if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) { 3044 + ret = regulator_allow_bypass(w->regulator, true); 3045 + if (ret != 0) 3046 + dev_warn(w->dapm->dev, 3047 + "ASoC: Failed to unbypass %s: %d\n", 3048 + w->name, ret); 3041 3049 } 3042 3050 break; 3043 3051 case snd_soc_dapm_clock_supply:
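The soc-dapm.c hunks above invert the bypass polarity: a widget flagged `SND_SOC_DAPM_REGULATOR_BYPASS` should leave bypass when it powers up and may sit in bypass while off, including immediately after creation (the new hunk in widget setup). A sketch of the corrected sense (the flag value and helper are illustrative, not the DAPM API):

```c
#include <assert.h>
#include <stdbool.h>

#define SND_SOC_DAPM_REGULATOR_BYPASS 0x1   /* illustrative flag value */

/* The allow-bypass argument handed to the regulator for a power event. */
static bool bypass_allowed(unsigned int invert, bool powering_on)
{
    if (!(invert & SND_SOC_DAPM_REGULATOR_BYPASS))
        return false;                       /* flag unset: bypass never toggled */
    return !powering_on;                    /* bypass only while powered off */
}
```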
+12 -5
sound/usb/mixer.c
··· 1331 1331 } 1332 1332 channels = (hdr->bLength - 7) / csize - 1; 1333 1333 bmaControls = hdr->bmaControls; 1334 + if (hdr->bLength < 7 + csize) { 1335 + snd_printk(KERN_ERR "usbaudio: unit %u: " 1336 + "invalid UAC_FEATURE_UNIT descriptor\n", 1337 + unitid); 1338 + return -EINVAL; 1339 + } 1334 1340 } else { 1335 1341 struct uac2_feature_unit_descriptor *ftr = _ftr; 1336 1342 csize = 4; 1337 1343 channels = (hdr->bLength - 6) / 4 - 1; 1338 1344 bmaControls = ftr->bmaControls; 1339 - } 1340 - 1341 - if (hdr->bLength < 7 || !csize || hdr->bLength < 7 + csize) { 1342 - snd_printk(KERN_ERR "usbaudio: unit %u: invalid UAC_FEATURE_UNIT descriptor\n", unitid); 1343 - return -EINVAL; 1345 + if (hdr->bLength < 6 + csize) { 1346 + snd_printk(KERN_ERR "usbaudio: unit %u: " 1347 + "invalid UAC_FEATURE_UNIT descriptor\n", 1348 + unitid); 1349 + return -EINVAL; 1350 + } 1344 1351 } 1345 1352 1346 1353 /* parse the source unit */
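The mixer.c fix above splits the feature-unit length check per UAC version: a UAC1 descriptor carries 7 fixed bytes plus `bControlSize`-sized controls, a UAC2 descriptor 6 fixed bytes plus fixed 4-byte controls, so each branch must enforce its own minimum before the channel count derived from `bLength` is trusted. A sketch of that validation (function name and the `-22`/`-EINVAL` return are illustrative):

```c
#include <assert.h>

static int uac_feature_channels(int uac_version, int bLength, int csize)
{
    if (uac_version == 1) {
        if (csize == 0 || bLength < 7 + csize)
            return -22;                    /* descriptor too short */
        return (bLength - 7) / csize - 1;  /* minus the master channel */
    }
    /* UAC2: control size is fixed at 4 bytes */
    if (bLength < 6 + 4)
        return -22;
    return (bLength - 6) / 4 - 1;
}
```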
+10
tools/perf/MANIFEST
··· 11 11 include/linux/swab.h 12 12 arch/*/include/asm/unistd*.h 13 13 arch/*/include/asm/perf_regs.h 14 + arch/*/include/uapi/asm/unistd*.h 15 + arch/*/include/uapi/asm/perf_regs.h 14 16 arch/*/lib/memcpy*.S 15 17 arch/*/lib/memset*.S 16 18 include/linux/poison.h 17 19 include/linux/magic.h 18 20 include/linux/hw_breakpoint.h 21 + include/linux/rbtree_augmented.h 22 + include/uapi/linux/perf_event.h 23 + include/uapi/linux/const.h 24 + include/uapi/linux/swab.h 25 + include/uapi/linux/hw_breakpoint.h 19 26 arch/x86/include/asm/svm.h 20 27 arch/x86/include/asm/vmx.h 21 28 arch/x86/include/asm/kvm_host.h 29 + arch/x86/include/uapi/asm/svm.h 30 + arch/x86/include/uapi/asm/vmx.h 31 + arch/x86/include/uapi/asm/kvm.h
+1 -1
tools/perf/Makefile
··· 58 58 -e s/arm.*/arm/ -e s/sa110/arm/ \ 59 59 -e s/s390x/s390/ -e s/parisc64/parisc/ \ 60 60 -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ 61 - -e s/sh[234].*/sh/ ) 61 + -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) 62 62 NO_PERF_REGS := 1 63 63 64 64 CC = $(CROSS_COMPILE)gcc