Merge tag 'omap-for-v3.13/intc-ldp-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

From Tony Lindgren:
Fix a regression that left some devices with wrong interrupt numbers
after the sparse IRQ conversion, fix DRA7 console output for
earlyprintk, and fix the LDP LCD backlight when DSS is built into the
kernel rather than as a loadable module.

* tag 'omap-for-v3.13/intc-ldp-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP2+: Fix LCD panel backlight regression for LDP legacy booting
ARM: OMAP2+: hwmod_data: fix missing OMAP_INTC_START in irq data
ARM: DRA7: hwmod: Fix boot crash with DEBUG_LL
+ v3.13-rc5

Signed-off-by: Olof Johansson <olof@lixom.net>

Changed files (+2040 -1097), by directory:

  Documentation
  arch
  drivers
    acpi/apei
    clk
    clocksource
    dma
    firewire
    firmware
    gpio
    gpu
    iio
    infiniband/ulp/isert
    net/phy
    pinctrl/sh-pfc
    regulator
    scsi/qla2xxx
    staging
      comedi
      iio/magnetometer
      imx-drm
    target
    tty
    usb
    xen
  fs
  include
  init
  kernel
  mm
  net
    core
    ipv4
    ipv6
    netfilter
    sctp
    unix
  sound
  tools/power/cpupower
+240
Documentation/module-signing.txt
(new file)

==============================
KERNEL MODULE SIGNING FACILITY
==============================

CONTENTS

 - Overview.
 - Configuring module signing.
 - Generating signing keys.
 - Public keys in the kernel.
 - Manually signing modules.
 - Signed modules and stripping.
 - Loading signed modules.
 - Non-valid signatures and unsigned modules.
 - Administering/protecting the private key.


========
OVERVIEW
========

The kernel module signing facility cryptographically signs modules during
installation and then checks the signature upon loading the module.  This
allows increased kernel security by disallowing the loading of unsigned modules
or modules signed with an invalid key.  Module signing increases security by
making it harder to load a malicious module into the kernel.  The module
signature checking is done by the kernel so that it is not necessary to have
trusted userspace bits.

This facility uses X.509 ITU-T standard certificates to encode the public keys
involved.  The signatures are not themselves encoded in any industrial standard
type.  The facility currently only supports the RSA public key encryption
standard (though it is pluggable and permits others to be used).  The possible
hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
SHA-512 (the algorithm is selected by data in the signature).


==========================
CONFIGURING MODULE SIGNING
==========================

The module signing facility is enabled by going to the "Enable Loadable Module
Support" section of the kernel configuration and turning on

	CONFIG_MODULE_SIG	"Module signature verification"

This has a number of options available:

 (1) "Require modules to be validly signed" (CONFIG_MODULE_SIG_FORCE)

     This specifies how the kernel should deal with a module that has a
     signature for which the key is not known or a module that is unsigned.

     If this is off (ie. "permissive"), then modules for which the key is not
     available and modules that are unsigned are permitted, but the kernel will
     be marked as being tainted.

     If this is on (ie. "restrictive"), only modules that have a valid
     signature that can be verified by a public key in the kernel's possession
     will be loaded.  All other modules will generate an error.

     Irrespective of the setting here, if the module has a signature block that
     cannot be parsed, it will be rejected out of hand.


 (2) "Automatically sign all modules" (CONFIG_MODULE_SIG_ALL)

     If this is on then modules will be automatically signed during the
     modules_install phase of a build.  If this is off, then the modules must
     be signed manually using:

	scripts/sign-file


 (3) "Which hash algorithm should modules be signed with?"

     This presents a choice of which hash algorithm the installation phase will
     sign the modules with:

	CONFIG_SIG_SHA1		"Sign modules with SHA-1"
	CONFIG_SIG_SHA224	"Sign modules with SHA-224"
	CONFIG_SIG_SHA256	"Sign modules with SHA-256"
	CONFIG_SIG_SHA384	"Sign modules with SHA-384"
	CONFIG_SIG_SHA512	"Sign modules with SHA-512"

     The algorithm selected here will also be built into the kernel (rather
     than being a module) so that modules signed with that algorithm can have
     their signatures checked without causing a dependency loop.


=======================
GENERATING SIGNING KEYS
=======================

Cryptographic keypairs are required to generate and check signatures.  A
private key is used to generate a signature and the corresponding public key is
used to check it.  The private key is only needed during the build, after which
it can be deleted or stored securely.  The public key gets built into the
kernel so that it can be used to check the signatures as the modules are
loaded.

Under normal conditions, the kernel build will automatically generate a new
keypair using openssl if one does not exist in the files:

	signing_key.priv
	signing_key.x509

during the building of vmlinux (the public part of the key needs to be built
into vmlinux) using parameters in the:

	x509.genkey

file (which is also generated if it does not already exist).

It is strongly recommended that you provide your own x509.genkey file.

Most notably, in the x509.genkey file, the req_distinguished_name section
should be altered from the default:

	[ req_distinguished_name ]
	O = Magrathea
	CN = Glacier signing key
	emailAddress = slartibartfast@magrathea.h2g2

The generated RSA key size can also be set with:

	[ req ]
	default_bits = 4096


It is also possible to manually generate the key private/public files using the
x509.genkey key generation configuration file in the root node of the Linux
kernel sources tree and the openssl command.  The following is an example to
generate the public/private key files:

	openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
	   -config x509.genkey -outform DER -out signing_key.x509 \
	   -keyout signing_key.priv


=========================
PUBLIC KEYS IN THE KERNEL
=========================

The kernel contains a ring of public keys that can be viewed by root.  They're
in a keyring called ".system_keyring" that can be seen by:

	[root@deneb ~]# cat /proc/keys
	...
	223c7853 I------     1 perm 1f030000     0     0 keyring   .system_keyring: 1
	302d2d52 I------     1 perm 1f010000     0     0 asymmetri Fedora kernel signing key: d69a84e6bce3d216b979e9505b3e3ef9a7118079: X509.RSA a7118079 []
	...

Beyond the public key generated specifically for module signing, any file
placed in the kernel source root directory or the kernel build root directory
whose name is suffixed with ".x509" will be assumed to be an X.509 public key
and will be added to the keyring.

Further, the architecture code may take public keys from a hardware store and
add those in also (e.g. from the UEFI key database).

Finally, it is possible to add additional public keys by doing:

	keyctl padd asymmetric "" [.system_keyring-ID] <[key-file]

e.g.:

	keyctl padd asymmetric "" 0x223c7853 <my_public_key.x509

Note, however, that the kernel will only permit keys to be added to
.system_keyring _if_ the new key's X.509 wrapper is validly signed by a key
that is already resident in the .system_keyring at the time the key was added.


=========================
MANUALLY SIGNING MODULES
=========================

To manually sign a module, use the scripts/sign-file tool available in
the Linux kernel source tree.  The script requires 4 arguments:

	1.  The hash algorithm (e.g., sha256)
	2.  The private key filename
	3.  The public key filename
	4.  The kernel module to be signed

The following is an example to sign a kernel module:

	scripts/sign-file sha512 kernel-signkey.priv \
		kernel-signkey.x509 module.ko

The hash algorithm used does not have to match the one configured, but if it
doesn't, you should make sure that hash algorithm is either built into the
kernel or can be loaded without requiring itself.


============================
SIGNED MODULES AND STRIPPING
============================

A signed module has a digital signature simply appended at the end.  The string
"~Module signature appended~." at the end of the module's file confirms that a
signature is present but it does not confirm that the signature is valid!

Signed modules are BRITTLE as the signature is outside of the defined ELF
container.  Thus they MAY NOT be stripped once the signature is computed and
attached.  Note the entire module is the signed payload, including any and all
debug information present at the time of signing.


======================
LOADING SIGNED MODULES
======================

Modules are loaded with insmod, modprobe, init_module() or finit_module(),
exactly as for unsigned modules as no processing is done in userspace.  The
signature checking is all done within the kernel.


=========================================
NON-VALID SIGNATURES AND UNSIGNED MODULES
=========================================

If CONFIG_MODULE_SIG_FORCE is enabled or enforcemodulesig=1 is supplied on
the kernel command line, the kernel will only load validly signed modules
for which it has a public key.  Otherwise, it will also load modules that are
unsigned.  Any module for which the kernel has a key, but which proves to have
a signature mismatch will not be permitted to load.

Any module that has an unparseable signature will be rejected.


=========================================
ADMINISTERING/PROTECTING THE PRIVATE KEY
=========================================

Since the private key is used to sign modules, viruses and malware could use
the private key to sign modules and compromise the operating system.  The
private key must be either destroyed or moved to a secure location and not kept
in the root node of the kernel source tree.
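The stripping section above notes that the signature is simply appended after
the ELF payload and flagged by a marker string. Purely as an illustration (not
part of this merge; the marker constant matches MODULE_SIG_STRING in the 3.13
sources), a userspace sketch that checks whether a .ko carries that marker --
which only shows a signature block is attached, not that it verifies:

	/*
	 * Illustrative only: look for the marker the kernel appends after the
	 * signature metadata ("~Module signature appended~\n" in 3.13).
	 */
	#include <stdio.h>
	#include <string.h>

	static const char magic[] = "~Module signature appended~\n";

	int main(int argc, char **argv)
	{
		char tail[sizeof(magic) - 1];
		long len = sizeof(tail);
		FILE *f;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <module.ko>\n", argv[0]);
			return 2;
		}
		f = fopen(argv[1], "rb");
		if (!f || fseek(f, -len, SEEK_END) != 0 ||
		    fread(tail, 1, len, f) != (size_t)len) {
			perror(argv[1]);
			return 2;
		}
		fclose(f);
		if (memcmp(tail, magic, len) == 0)
			printf("%s: signature marker present\n", argv[1]);
		else
			printf("%s: no signature marker\n", argv[1]);
		return 0;
	}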
+6 -2
Documentation/networking/ip-sysctl.txt
···
 	Default: 64 (as recommended by RFC1700)
 
 ip_no_pmtu_disc - BOOLEAN
-	Disable Path MTU Discovery.
-	default FALSE
+	Disable Path MTU Discovery. If enabled and a
+	fragmentation-required ICMP is received, the PMTU to this
+	destination will be set to min_pmtu (see below). You will need
+	to raise min_pmtu to the smallest interface MTU on your system
+	manually if you want to avoid locally generated fragments.
+	Default: FALSE
 
 min_pmtu - INTEGER
 	default 552 - minimum discovered Path MTU
+21 -4
MAINTAINERS
··· 3763 3763 3764 3764 GPIO SUBSYSTEM 3765 3765 M: Linus Walleij <linus.walleij@linaro.org> 3766 - S: Maintained 3766 + M: Alexandre Courbot <gnurou@gmail.com> 3767 3767 L: linux-gpio@vger.kernel.org 3768 - F: Documentation/gpio.txt 3768 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git 3769 + S: Maintained 3770 + F: Documentation/gpio/ 3769 3771 F: drivers/gpio/ 3770 3772 F: include/linux/gpio* 3771 3773 F: include/asm-generic/gpio.h ··· 3834 3832 T: git git://linuxtv.org/media_tree.git 3835 3833 S: Maintained 3836 3834 F: drivers/media/usb/gspca/ 3835 + 3836 + GUID PARTITION TABLE (GPT) 3837 + M: Davidlohr Bueso <davidlohr@hp.com> 3838 + L: linux-efi@vger.kernel.org 3839 + S: Maintained 3840 + F: block/partitions/efi.* 3837 3841 3838 3842 STK1160 USB VIDEO CAPTURE DRIVER 3839 3843 M: Ezequiel Garcia <elezegarcia@gmail.com> ··· 5921 5913 M: Herbert Xu <herbert@gondor.apana.org.au> 5922 5914 M: "David S. Miller" <davem@davemloft.net> 5923 5915 L: netdev@vger.kernel.org 5924 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git 5916 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git 5917 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git 5925 5918 S: Maintained 5926 5919 F: net/xfrm/ 5927 5920 F: net/key/ 5928 5921 F: net/ipv4/xfrm* 5922 + F: net/ipv4/esp4.c 5923 + F: net/ipv4/ah4.c 5924 + F: net/ipv4/ipcomp.c 5925 + F: net/ipv4/ip_vti.c 5929 5926 F: net/ipv6/xfrm* 5927 + F: net/ipv6/esp6.c 5928 + F: net/ipv6/ah6.c 5929 + F: net/ipv6/ipcomp6.c 5930 + F: net/ipv6/ip6_vti.c 5930 5931 F: include/uapi/linux/xfrm.h 5931 5932 F: include/net/xfrm.h 5932 5933 ··· 9590 9573 9591 9574 XFS FILESYSTEM 9592 9575 P: Silicon Graphics Inc 9593 - M: Dave Chinner <dchinner@fromorbit.com> 9576 + M: Dave Chinner <david@fromorbit.com> 9594 9577 M: Ben Myers <bpm@sgi.com> 9595 9578 M: xfs@oss.sgi.com 9596 9579 L: xfs@oss.sgi.com
+10 -14
Makefile
···
 VERSION = 3
 PATCHLEVEL = 13
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = One Giant Leap for Frogkind
 
 # *DOCUMENTATION*
···
 # Select initial ramdisk compression format, default is gzip(1).
 # This shall be used by the dracut(8) tool while creating an initramfs image.
 #
-INITRD_COMPRESS=gzip
-ifeq ($(CONFIG_RD_BZIP2), y)
-INITRD_COMPRESS=bzip2
-else ifeq ($(CONFIG_RD_LZMA), y)
-INITRD_COMPRESS=lzma
-else ifeq ($(CONFIG_RD_XZ), y)
-INITRD_COMPRESS=xz
-else ifeq ($(CONFIG_RD_LZO), y)
-INITRD_COMPRESS=lzo
-else ifeq ($(CONFIG_RD_LZ4), y)
-INITRD_COMPRESS=lz4
-endif
-export INITRD_COMPRESS
+INITRD_COMPRESS-y                  := gzip
+INITRD_COMPRESS-$(CONFIG_RD_BZIP2) := bzip2
+INITRD_COMPRESS-$(CONFIG_RD_LZMA)  := lzma
+INITRD_COMPRESS-$(CONFIG_RD_XZ)    := xz
+INITRD_COMPRESS-$(CONFIG_RD_LZO)   := lzo
+INITRD_COMPRESS-$(CONFIG_RD_LZ4)   := lz4
+# do not export INITRD_COMPRESS, since we didn't actually
+# choose a sane default compression above.
+# export INITRD_COMPRESS := $(INITRD_COMPRESS-y)
 
 ifdef CONFIG_MODULE_SIG_ALL
 MODSECKEY = ./signing_key.priv
+7 -1
arch/arc/include/uapi/asm/unistd.h
···
 
 /******** no-legacy-syscalls-ABI *******/
 
-#ifndef _UAPI_ASM_ARC_UNISTD_H
+/*
+ * Non-typical guard macro to enable inclusion twice in ARCH sys.c
+ * That is how the Generic syscall wrapper generator works
+ */
+#if !defined(_UAPI_ASM_ARC_UNISTD_H) || defined(__SYSCALL)
 #define _UAPI_ASM_ARC_UNISTD_H
 
 #define __ARCH_WANT_SYS_EXECVE
···
 /* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */
 #define __NR_sysfs		(__NR_arch_specific_syscall + 3)
 __SYSCALL(__NR_sysfs, sys_sysfs)
+
+#undef __SYSCALL
 
 #endif
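The new guard lets the header be pulled in twice: once in the normal way to get
the __NR_* numbers, and a second time with __SYSCALL defined so the arch sys.c
can expand the very same list into its syscall table. A minimal sketch of that
pattern with hypothetical names (my_unistd.h, sys_read/sys_write stubs -- not
the ARC code itself), split across two files:

	/* my_unistd.h - hypothetical header reusing the double-include guard */
	#if !defined(_MY_UNISTD_H) || defined(__SYSCALL)
	#define _MY_UNISTD_H

	#ifndef __SYSCALL
	#define __SYSCALL(nr, call)	/* first pass: only the numbers matter */
	#endif

	#define __NR_read	0
	__SYSCALL(__NR_read, sys_read)
	#define __NR_write	1
	__SYSCALL(__NR_write, sys_write)

	#undef __SYSCALL
	#endif

	/* sys_table.c - includes the header twice, as the wrapper generator does */
	#include <stdio.h>

	static void sys_read(void)  { puts("sys_read");  }
	static void sys_write(void) { puts("sys_write"); }

	#include "my_unistd.h"			/* pass 1: pick up __NR_* */

	#define __SYSCALL(nr, call)	[nr] = call,
	static void (*const sys_call_table[])(void) = {
	#include "my_unistd.h"			/* pass 2: emit table entries */
	};

	int main(void)
	{
		sys_call_table[__NR_write]();	/* prints "sys_write" */
		return 0;
	}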
+6 -1
arch/arm/mach-omap2/board-ldp.c
···
 
 static int ldp_twl_gpio_setup(struct device *dev, unsigned gpio, unsigned ngpio)
 {
+	int res;
+
 	/* LCD enable GPIO */
 	ldp_lcd_pdata.enable_gpio = gpio + 7;
 
 	/* Backlight enable GPIO */
 	ldp_lcd_pdata.backlight_gpio = gpio + 15;
+
+	res = platform_device_register(&ldp_lcd_device);
+	if (res)
+		pr_err("Unable to register LCD: %d\n", res);
 
 	return 0;
 }
···
 
 static struct platform_device *ldp_devices[] __initdata = {
 	&ldp_gpio_keys_device,
-	&ldp_lcd_device,
 };
 
 #ifdef CONFIG_OMAP_MUX
+2 -2
arch/arm/mach-omap2/omap_hwmod_2xxx_ipblock_data.c
···
 
 /* gpmc */
 static struct omap_hwmod_irq_info omap2xxx_gpmc_irqs[] = {
-	{ .irq = 20 },
+	{ .irq = 20 + OMAP_INTC_START, },
 	{ .irq = -1 }
 };
 
···
 };
 
 static struct omap_hwmod_irq_info omap2_rng_mpu_irqs[] = {
-	{ .irq = 52 },
+	{ .irq = 52 + OMAP_INTC_START, },
 	{ .irq = -1 }
 };
 
+3 -3
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
···
 };
 
 static struct omap_hwmod_irq_info omap3xxx_gpmc_irqs[] = {
-	{ .irq = 20 },
+	{ .irq = 20 + OMAP_INTC_START, },
 	{ .irq = -1 }
 };
···
 
 static struct omap_hwmod omap3xxx_mmu_isp_hwmod;
 static struct omap_hwmod_irq_info omap3xxx_mmu_isp_irqs[] = {
-	{ .irq = 24 },
+	{ .irq = 24 + OMAP_INTC_START, },
 	{ .irq = -1 }
 };
···
 
 static struct omap_hwmod omap3xxx_mmu_iva_hwmod;
 static struct omap_hwmod_irq_info omap3xxx_mmu_iva_irqs[] = {
-	{ .irq = 28 },
+	{ .irq = 28 + OMAP_INTC_START, },
 	{ .irq = -1 }
 };
+1 -1
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
···
 	.class		= &dra7xx_uart_hwmod_class,
 	.clkdm_name	= "l4per_clkdm",
 	.main_clk	= "uart1_gfclk_mux",
-	.flags		= HWMOD_SWSUP_SIDLE_ACT,
+	.flags		= HWMOD_SWSUP_SIDLE_ACT | DEBUG_OMAP2UART1_FLAGS,
 	.prcm = {
 		.omap4 = {
 			.clkctrl_offs = DRA7XX_CM_L4PER_UART1_CLKCTRL_OFFSET,
+3 -3
arch/arm/xen/enlighten.c
···
 	struct remap_data *info = data;
 	struct page *page = info->pages[info->index++];
 	unsigned long pfn = page_to_pfn(page);
-	pte_t pte = pfn_pte(pfn, info->prot);
+	pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot));
 
 	if (map_foreign_page(pfn, info->fgmfn, info->domid))
 		return -EFAULT;
···
 	}
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
-	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
+	xen_hvm_resume_frames = res.start;
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
-		version, xen_events_irq, xen_hvm_resume_frames);
+		version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
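The second hunk makes xen_hvm_resume_frames hold the physical address from the
device-tree resource, shifting down to a frame number only where one is
actually wanted (the printout). A tiny hypothetical demo of keeping the address
as the stored unit and converting at the use site (made-up address and helper
names, not Xen code):

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	12

	static uint64_t phys_to_pfn(uint64_t phys) { return phys >> PAGE_SHIFT; }

	int main(void)
	{
		/* made-up grant-table resource address */
		uint64_t res_start = 0xb0000000ULL;

		/* store the address itself ... */
		uint64_t resume_frames = res_start;

		/* ... and derive the frame number only when printing it */
		printf("gnttab_frame_pfn=%llx (address %llx)\n",
		       (unsigned long long)phys_to_pfn(resume_frames),
		       (unsigned long long)resume_frames);
		return 0;
	}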
-4
arch/arm64/include/asm/xen/page-coherent.h
··· 23 23 unsigned long offset, size_t size, enum dma_data_direction dir, 24 24 struct dma_attrs *attrs) 25 25 { 26 - __generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs); 27 26 } 28 27 29 28 static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 30 29 size_t size, enum dma_data_direction dir, 31 30 struct dma_attrs *attrs) 32 31 { 33 - __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs); 34 32 } 35 33 36 34 static inline void xen_dma_sync_single_for_cpu(struct device *hwdev, 37 35 dma_addr_t handle, size_t size, enum dma_data_direction dir) 38 36 { 39 - __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir); 40 37 } 41 38 42 39 static inline void xen_dma_sync_single_for_device(struct device *hwdev, 43 40 dma_addr_t handle, size_t size, enum dma_data_direction dir) 44 41 { 45 - __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir); 46 42 } 47 43 #endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
+17 -19
arch/arm64/kernel/ptrace.c
··· 214 214 { 215 215 int err, len, type, disabled = !ctrl.enabled; 216 216 217 - if (disabled) { 218 - len = 0; 219 - type = HW_BREAKPOINT_EMPTY; 220 - } else { 221 - err = arch_bp_generic_fields(ctrl, &len, &type); 222 - if (err) 223 - return err; 217 + attr->disabled = disabled; 218 + if (disabled) 219 + return 0; 224 220 225 - switch (note_type) { 226 - case NT_ARM_HW_BREAK: 227 - if ((type & HW_BREAKPOINT_X) != type) 228 - return -EINVAL; 229 - break; 230 - case NT_ARM_HW_WATCH: 231 - if ((type & HW_BREAKPOINT_RW) != type) 232 - return -EINVAL; 233 - break; 234 - default: 221 + err = arch_bp_generic_fields(ctrl, &len, &type); 222 + if (err) 223 + return err; 224 + 225 + switch (note_type) { 226 + case NT_ARM_HW_BREAK: 227 + if ((type & HW_BREAKPOINT_X) != type) 235 228 return -EINVAL; 236 - } 229 + break; 230 + case NT_ARM_HW_WATCH: 231 + if ((type & HW_BREAKPOINT_RW) != type) 232 + return -EINVAL; 233 + break; 234 + default: 235 + return -EINVAL; 237 236 } 238 237 239 238 attr->bp_len = len; 240 239 attr->bp_type = type; 241 - attr->disabled = disabled; 242 240 243 241 return 0; 244 242 }
+4
arch/powerpc/include/asm/kvm_book3s.h
··· 192 192 extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst); 193 193 extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst); 194 194 extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd); 195 + extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu, 196 + struct kvm_vcpu *vcpu); 197 + extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, 198 + struct kvmppc_book3s_shadow_vcpu *svcpu); 195 199 196 200 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) 197 201 {
+2
arch/powerpc/include/asm/kvm_book3s_asm.h
··· 79 79 ulong vmhandler; 80 80 ulong scratch0; 81 81 ulong scratch1; 82 + ulong scratch2; 82 83 u8 in_guest; 83 84 u8 restore_hid5; 84 85 u8 napping; ··· 107 106 }; 108 107 109 108 struct kvmppc_book3s_shadow_vcpu { 109 + bool in_use; 110 110 ulong gpr[14]; 111 111 u32 cr; 112 112 u32 xer;
+2 -2
arch/powerpc/include/asm/opal.h
··· 720 720 int64_t opal_pci_poll(uint64_t phb_id); 721 721 int64_t opal_return_cpu(void); 722 722 723 - int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, uint64_t *val); 723 + int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, __be64 *val); 724 724 int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val); 725 725 726 726 int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type, 727 727 uint32_t addr, uint32_t data, uint32_t sz); 728 728 int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type, 729 - uint32_t addr, uint32_t *data, uint32_t sz); 729 + uint32_t addr, __be32 *data, uint32_t sz); 730 730 int64_t opal_validate_flash(uint64_t buffer, uint32_t *size, uint32_t *result); 731 731 int64_t opal_manage_flash(uint8_t op); 732 732 int64_t opal_update_flash(uint64_t blk_list);
+1 -1
arch/powerpc/include/asm/switch_to.h
··· 35 35 extern void enable_kernel_spe(void); 36 36 extern void giveup_spe(struct task_struct *); 37 37 extern void load_up_spe(struct task_struct *); 38 - extern void switch_booke_debug_regs(struct thread_struct *new_thread); 38 + extern void switch_booke_debug_regs(struct debug_reg *new_debug); 39 39 40 40 #ifndef CONFIG_SMP 41 41 extern void discard_lazy_cpu_state(void);
+1
arch/powerpc/kernel/asm-offsets.c
··· 576 576 HSTATE_FIELD(HSTATE_VMHANDLER, vmhandler); 577 577 HSTATE_FIELD(HSTATE_SCRATCH0, scratch0); 578 578 HSTATE_FIELD(HSTATE_SCRATCH1, scratch1); 579 + HSTATE_FIELD(HSTATE_SCRATCH2, scratch2); 579 580 HSTATE_FIELD(HSTATE_IN_GUEST, in_guest); 580 581 HSTATE_FIELD(HSTATE_RESTORE_HID5, restore_hid5); 581 582 HSTATE_FIELD(HSTATE_NAPPING, napping);
+3 -3
arch/powerpc/kernel/crash_dump.c
···
 void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
 {
 	unsigned long addr;
-	const u32 *basep, *sizep;
+	const __be32 *basep, *sizep;
 	unsigned int rtas_start = 0, rtas_end = 0;
 
 	basep = of_get_property(rtas.dev, "linux,rtas-base", NULL);
 	sizep = of_get_property(rtas.dev, "rtas-size", NULL);
 
 	if (basep && sizep) {
-		rtas_start = *basep;
-		rtas_end = *basep + *sizep;
+		rtas_start = be32_to_cpup(basep);
+		rtas_end = rtas_start + be32_to_cpup(sizep);
 	}
 
 	for (addr = begin; addr < end; addr += PAGE_SIZE) {
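Flattened device-tree properties are stored big-endian, which is why the raw
*basep dereferences above become be32_to_cpup() calls on little-endian hosts.
A standalone illustration of the same conversion done by hand -- dt_read_be32()
is a hypothetical stand-in for the kernel helper:

	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>

	/* Hypothetical stand-in for the kernel's be32_to_cpup() */
	static uint32_t dt_read_be32(const uint8_t *p)
	{
		return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
		       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
	}

	int main(void)
	{
		/* e.g. "linux,rtas-base" = 0x1f000000 as it sits in the FDT blob */
		const uint8_t prop[4] = { 0x1f, 0x00, 0x00, 0x00 };
		uint32_t raw;

		memcpy(&raw, prop, sizeof(raw));	/* native load: endian-dependent */
		printf("native load : 0x%08x\n", (unsigned)raw);
		printf("byte-swapped: 0x%08x\n", (unsigned)dt_read_be32(prop));
		return 0;
	}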
+16 -16
arch/powerpc/kernel/process.c
··· 339 339 #endif 340 340 } 341 341 342 - static void prime_debug_regs(struct thread_struct *thread) 342 + static void prime_debug_regs(struct debug_reg *debug) 343 343 { 344 344 /* 345 345 * We could have inherited MSR_DE from userspace, since ··· 348 348 */ 349 349 mtmsr(mfmsr() & ~MSR_DE); 350 350 351 - mtspr(SPRN_IAC1, thread->debug.iac1); 352 - mtspr(SPRN_IAC2, thread->debug.iac2); 351 + mtspr(SPRN_IAC1, debug->iac1); 352 + mtspr(SPRN_IAC2, debug->iac2); 353 353 #if CONFIG_PPC_ADV_DEBUG_IACS > 2 354 - mtspr(SPRN_IAC3, thread->debug.iac3); 355 - mtspr(SPRN_IAC4, thread->debug.iac4); 354 + mtspr(SPRN_IAC3, debug->iac3); 355 + mtspr(SPRN_IAC4, debug->iac4); 356 356 #endif 357 - mtspr(SPRN_DAC1, thread->debug.dac1); 358 - mtspr(SPRN_DAC2, thread->debug.dac2); 357 + mtspr(SPRN_DAC1, debug->dac1); 358 + mtspr(SPRN_DAC2, debug->dac2); 359 359 #if CONFIG_PPC_ADV_DEBUG_DVCS > 0 360 - mtspr(SPRN_DVC1, thread->debug.dvc1); 361 - mtspr(SPRN_DVC2, thread->debug.dvc2); 360 + mtspr(SPRN_DVC1, debug->dvc1); 361 + mtspr(SPRN_DVC2, debug->dvc2); 362 362 #endif 363 - mtspr(SPRN_DBCR0, thread->debug.dbcr0); 364 - mtspr(SPRN_DBCR1, thread->debug.dbcr1); 363 + mtspr(SPRN_DBCR0, debug->dbcr0); 364 + mtspr(SPRN_DBCR1, debug->dbcr1); 365 365 #ifdef CONFIG_BOOKE 366 - mtspr(SPRN_DBCR2, thread->debug.dbcr2); 366 + mtspr(SPRN_DBCR2, debug->dbcr2); 367 367 #endif 368 368 } 369 369 /* ··· 371 371 * debug registers, set the debug registers from the values 372 372 * stored in the new thread. 373 373 */ 374 - void switch_booke_debug_regs(struct thread_struct *new_thread) 374 + void switch_booke_debug_regs(struct debug_reg *new_debug) 375 375 { 376 376 if ((current->thread.debug.dbcr0 & DBCR0_IDM) 377 - || (new_thread->debug.dbcr0 & DBCR0_IDM)) 378 - prime_debug_regs(new_thread); 377 + || (new_debug->dbcr0 & DBCR0_IDM)) 378 + prime_debug_regs(new_debug); 379 379 } 380 380 EXPORT_SYMBOL_GPL(switch_booke_debug_regs); 381 381 #else /* !CONFIG_PPC_ADV_DEBUG_REGS */ ··· 683 683 #endif /* CONFIG_SMP */ 684 684 685 685 #ifdef CONFIG_PPC_ADV_DEBUG_REGS 686 - switch_booke_debug_regs(&new->thread); 686 + switch_booke_debug_regs(&new->thread.debug); 687 687 #else 688 688 /* 689 689 * For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
+2 -2
arch/powerpc/kernel/ptrace.c
··· 1555 1555 1556 1556 flush_fp_to_thread(child); 1557 1557 if (fpidx < (PT_FPSCR - PT_FPR0)) 1558 - memcpy(&tmp, &child->thread.fp_state.fpr, 1558 + memcpy(&tmp, &child->thread.TS_FPR(fpidx), 1559 1559 sizeof(long)); 1560 1560 else 1561 1561 tmp = child->thread.fp_state.fpscr; ··· 1588 1588 1589 1589 flush_fp_to_thread(child); 1590 1590 if (fpidx < (PT_FPSCR - PT_FPR0)) 1591 - memcpy(&child->thread.fp_state.fpr, &data, 1591 + memcpy(&child->thread.TS_FPR(fpidx), &data, 1592 1592 sizeof(long)); 1593 1593 else 1594 1594 child->thread.fp_state.fpscr = data;
+2 -2
arch/powerpc/kernel/setup-common.c
··· 479 479 if (machine_is(pseries) && firmware_has_feature(FW_FEATURE_LPAR) && 480 480 (dn = of_find_node_by_path("/rtas"))) { 481 481 int num_addr_cell, num_size_cell, maxcpus; 482 - const unsigned int *ireg; 482 + const __be32 *ireg; 483 483 484 484 num_addr_cell = of_n_addr_cells(dn); 485 485 num_size_cell = of_n_size_cells(dn); ··· 489 489 if (!ireg) 490 490 goto out; 491 491 492 - maxcpus = ireg[num_addr_cell + num_size_cell]; 492 + maxcpus = be32_to_cpup(ireg + num_addr_cell + num_size_cell); 493 493 494 494 /* Double maxcpus for processors which have SMT capability */ 495 495 if (cpu_has_feature(CPU_FTR_SMT))
+2 -2
arch/powerpc/kernel/smp.c
··· 580 580 int cpu_to_core_id(int cpu) 581 581 { 582 582 struct device_node *np; 583 - const int *reg; 583 + const __be32 *reg; 584 584 int id = -1; 585 585 586 586 np = of_get_cpu_node(cpu, NULL); ··· 591 591 if (!reg) 592 592 goto out; 593 593 594 - id = *reg; 594 + id = be32_to_cpup(reg); 595 595 out: 596 596 of_node_put(np); 597 597 return id;
+14 -4
arch/powerpc/kvm/book3s_64_mmu_hv.c
··· 469 469 slb_v = vcpu->kvm->arch.vrma_slb_v; 470 470 } 471 471 472 + preempt_disable(); 472 473 /* Find the HPTE in the hash table */ 473 474 index = kvmppc_hv_find_lock_hpte(kvm, eaddr, slb_v, 474 475 HPTE_V_VALID | HPTE_V_ABSENT); 475 - if (index < 0) 476 + if (index < 0) { 477 + preempt_enable(); 476 478 return -ENOENT; 479 + } 477 480 hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4)); 478 481 v = hptep[0] & ~HPTE_V_HVLOCK; 479 482 gr = kvm->arch.revmap[index].guest_rpte; ··· 484 481 /* Unlock the HPTE */ 485 482 asm volatile("lwsync" : : : "memory"); 486 483 hptep[0] = v; 484 + preempt_enable(); 487 485 488 486 gpte->eaddr = eaddr; 489 487 gpte->vpage = ((v & HPTE_V_AVPN) << 4) | ((eaddr >> 12) & 0xfff); ··· 669 665 return -EFAULT; 670 666 } else { 671 667 page = pages[0]; 668 + pfn = page_to_pfn(page); 672 669 if (PageHuge(page)) { 673 670 page = compound_head(page); 674 671 pte_size <<= compound_order(page); ··· 694 689 } 695 690 rcu_read_unlock_sched(); 696 691 } 697 - pfn = page_to_pfn(page); 698 692 } 699 693 700 694 ret = -EFAULT; ··· 711 707 r = (r & ~(HPTE_R_W|HPTE_R_I|HPTE_R_G)) | HPTE_R_M; 712 708 } 713 709 714 - /* Set the HPTE to point to pfn */ 715 - r = (r & ~(HPTE_R_PP0 - pte_size)) | (pfn << PAGE_SHIFT); 710 + /* 711 + * Set the HPTE to point to pfn. 712 + * Since the pfn is at PAGE_SIZE granularity, make sure we 713 + * don't mask out lower-order bits if psize < PAGE_SIZE. 714 + */ 715 + if (psize < PAGE_SIZE) 716 + psize = PAGE_SIZE; 717 + r = (r & ~(HPTE_R_PP0 - psize)) | ((pfn << PAGE_SHIFT) & ~(psize - 1)); 716 718 if (hpte_is_writable(r) && !write_ok) 717 719 r = hpte_make_readonly(r); 718 720 ret = RESUME_GUEST;
+14 -10
arch/powerpc/kvm/book3s_hv.c
··· 131 131 static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) 132 132 { 133 133 struct kvmppc_vcore *vc = vcpu->arch.vcore; 134 + unsigned long flags; 134 135 135 - spin_lock(&vcpu->arch.tbacct_lock); 136 + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); 136 137 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE && 137 138 vc->preempt_tb != TB_NIL) { 138 139 vc->stolen_tb += mftb() - vc->preempt_tb; ··· 144 143 vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; 145 144 vcpu->arch.busy_preempt = TB_NIL; 146 145 } 147 - spin_unlock(&vcpu->arch.tbacct_lock); 146 + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); 148 147 } 149 148 150 149 static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) 151 150 { 152 151 struct kvmppc_vcore *vc = vcpu->arch.vcore; 152 + unsigned long flags; 153 153 154 - spin_lock(&vcpu->arch.tbacct_lock); 154 + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); 155 155 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE) 156 156 vc->preempt_tb = mftb(); 157 157 if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) 158 158 vcpu->arch.busy_preempt = mftb(); 159 - spin_unlock(&vcpu->arch.tbacct_lock); 159 + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); 160 160 } 161 161 162 162 static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr) ··· 488 486 */ 489 487 if (vc->vcore_state != VCORE_INACTIVE && 490 488 vc->runner->arch.run_task != current) { 491 - spin_lock(&vc->runner->arch.tbacct_lock); 489 + spin_lock_irq(&vc->runner->arch.tbacct_lock); 492 490 p = vc->stolen_tb; 493 491 if (vc->preempt_tb != TB_NIL) 494 492 p += now - vc->preempt_tb; 495 - spin_unlock(&vc->runner->arch.tbacct_lock); 493 + spin_unlock_irq(&vc->runner->arch.tbacct_lock); 496 494 } else { 497 495 p = vc->stolen_tb; 498 496 } ··· 514 512 core_stolen = vcore_stolen_time(vc, now); 515 513 stolen = core_stolen - vcpu->arch.stolen_logged; 516 514 vcpu->arch.stolen_logged = core_stolen; 517 - spin_lock(&vcpu->arch.tbacct_lock); 515 + spin_lock_irq(&vcpu->arch.tbacct_lock); 518 516 stolen += vcpu->arch.busy_stolen; 519 517 vcpu->arch.busy_stolen = 0; 520 - spin_unlock(&vcpu->arch.tbacct_lock); 518 + spin_unlock_irq(&vcpu->arch.tbacct_lock); 521 519 if (!dt || !vpa) 522 520 return; 523 521 memset(dt, 0, sizeof(struct dtl_entry)); ··· 591 589 if (list_empty(&vcpu->kvm->arch.rtas_tokens)) 592 590 return RESUME_HOST; 593 591 592 + idx = srcu_read_lock(&vcpu->kvm->srcu); 594 593 rc = kvmppc_rtas_hcall(vcpu); 594 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 595 595 596 596 if (rc == -ENOENT) 597 597 return RESUME_HOST; ··· 1119 1115 1120 1116 if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) 1121 1117 return; 1122 - spin_lock(&vcpu->arch.tbacct_lock); 1118 + spin_lock_irq(&vcpu->arch.tbacct_lock); 1123 1119 now = mftb(); 1124 1120 vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - 1125 1121 vcpu->arch.stolen_logged; 1126 1122 vcpu->arch.busy_preempt = now; 1127 1123 vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; 1128 - spin_unlock(&vcpu->arch.tbacct_lock); 1124 + spin_unlock_irq(&vcpu->arch.tbacct_lock); 1129 1125 --vc->n_runnable; 1130 1126 list_del(&vcpu->arch.run_list); 1131 1127 }
+7 -2
arch/powerpc/kvm/book3s_hv_rm_mmu.c
··· 225 225 is_io = pa & (HPTE_R_I | HPTE_R_W); 226 226 pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK); 227 227 pa &= PAGE_MASK; 228 + pa |= gpa & ~PAGE_MASK; 228 229 } else { 229 230 /* Translate to host virtual address */ 230 231 hva = __gfn_to_hva_memslot(memslot, gfn); ··· 239 238 ptel = hpte_make_readonly(ptel); 240 239 is_io = hpte_cache_bits(pte_val(pte)); 241 240 pa = pte_pfn(pte) << PAGE_SHIFT; 241 + pa |= hva & (pte_size - 1); 242 + pa |= gpa & ~PAGE_MASK; 242 243 } 243 244 } 244 245 245 246 if (pte_size < psize) 246 247 return H_PARAMETER; 247 - if (pa && pte_size > psize) 248 - pa |= gpa & (pte_size - 1); 249 248 250 249 ptel &= ~(HPTE_R_PP0 - psize); 251 250 ptel |= pa; ··· 750 749 20, /* 1M, unsupported */ 751 750 }; 752 751 752 + /* When called from virtmode, this func should be protected by 753 + * preempt_disable(), otherwise, the holding of HPTE_V_HVLOCK 754 + * can trigger deadlock issue. 755 + */ 753 756 long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v, 754 757 unsigned long valid) 755 758 {
+13 -10
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 153 153 154 154 13: b machine_check_fwnmi 155 155 156 - 157 156 /* 158 157 * We come in here when wakened from nap mode on a secondary hw thread. 159 158 * Relocation is off and most register values are lost. ··· 223 224 /* Clear our vcpu pointer so we don't come back in early */ 224 225 li r0, 0 225 226 std r0, HSTATE_KVM_VCPU(r13) 227 + /* 228 + * Make sure we clear HSTATE_KVM_VCPU(r13) before incrementing 229 + * the nap_count, because once the increment to nap_count is 230 + * visible we could be given another vcpu. 231 + */ 226 232 lwsync 227 233 /* Clear any pending IPI - we're an offline thread */ 228 234 ld r5, HSTATE_XICS_PHYS(r13) ··· 245 241 /* increment the nap count and then go to nap mode */ 246 242 ld r4, HSTATE_KVM_VCORE(r13) 247 243 addi r4, r4, VCORE_NAP_COUNT 248 - lwsync /* make previous updates visible */ 249 244 51: lwarx r3, 0, r4 250 245 addi r3, r3, 1 251 246 stwcx. r3, 0, r4 ··· 754 751 * guest CR, R12 saved in shadow VCPU SCRATCH1/0 755 752 * guest R13 saved in SPRN_SCRATCH0 756 753 */ 757 - /* abuse host_r2 as third scratch area; we get r2 from PACATOC(r13) */ 758 - std r9, HSTATE_HOST_R2(r13) 754 + std r9, HSTATE_SCRATCH2(r13) 759 755 760 756 lbz r9, HSTATE_IN_GUEST(r13) 761 757 cmpwi r9, KVM_GUEST_MODE_HOST_HV 762 758 beq kvmppc_bad_host_intr 763 759 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 764 760 cmpwi r9, KVM_GUEST_MODE_GUEST 765 - ld r9, HSTATE_HOST_R2(r13) 761 + ld r9, HSTATE_SCRATCH2(r13) 766 762 beq kvmppc_interrupt_pr 767 763 #endif 768 764 /* We're now back in the host but in guest MMU context */ ··· 781 779 std r6, VCPU_GPR(R6)(r9) 782 780 std r7, VCPU_GPR(R7)(r9) 783 781 std r8, VCPU_GPR(R8)(r9) 784 - ld r0, HSTATE_HOST_R2(r13) 782 + ld r0, HSTATE_SCRATCH2(r13) 785 783 std r0, VCPU_GPR(R9)(r9) 786 784 std r10, VCPU_GPR(R10)(r9) 787 785 std r11, VCPU_GPR(R11)(r9) ··· 992 990 */ 993 991 /* Increment the threads-exiting-guest count in the 0xff00 994 992 bits of vcore->entry_exit_count */ 995 - lwsync 996 993 ld r5,HSTATE_KVM_VCORE(r13) 997 994 addi r6,r5,VCORE_ENTRY_EXIT 998 995 41: lwarx r3,0,r6 999 996 addi r0,r3,0x100 1000 997 stwcx. r0,0,r6 1001 998 bne 41b 1002 - lwsync 999 + isync /* order stwcx. vs. reading napping_threads */ 1003 1000 1004 1001 /* 1005 1002 * At this point we have an interrupt that we have to pass ··· 1031 1030 sld r0,r0,r4 1032 1031 andc. r3,r3,r0 /* no sense IPI'ing ourselves */ 1033 1032 beq 43f 1033 + /* Order entry/exit update vs. IPIs */ 1034 + sync 1034 1035 mulli r4,r4,PACA_SIZE /* get paca for thread 0 */ 1035 1036 subf r6,r4,r13 1036 1037 42: andi. r0,r3,1 ··· 1641 1638 bge kvm_cede_exit 1642 1639 stwcx. r4,0,r6 1643 1640 bne 31b 1641 + /* order napping_threads update vs testing entry_exit_count */ 1642 + isync 1644 1643 li r0,1 1645 1644 stb r0,HSTATE_NAPPING(r13) 1646 - /* order napping_threads update vs testing entry_exit_count */ 1647 - lwsync 1648 1645 mr r4,r3 1649 1646 lwz r7,VCORE_ENTRY_EXIT(r5) 1650 1647 cmpwi r7,0x100
+11 -8
arch/powerpc/kvm/book3s_interrupts.S
··· 129 129 * R12 = exit handler id 130 130 * R13 = PACA 131 131 * SVCPU.* = guest * 132 + * MSR.EE = 1 132 133 * 133 134 */ 134 135 136 + PPC_LL r3, GPR4(r1) /* vcpu pointer */ 137 + 138 + /* 139 + * kvmppc_copy_from_svcpu can clobber volatile registers, save 140 + * the exit handler id to the vcpu and restore it from there later. 141 + */ 142 + stw r12, VCPU_TRAP(r3) 143 + 135 144 /* Transfer reg values from shadow vcpu back to vcpu struct */ 136 145 /* On 64-bit, interrupts are still off at this point */ 137 - PPC_LL r3, GPR4(r1) /* vcpu pointer */ 146 + 138 147 GET_SHADOW_VCPU(r4) 139 148 bl FUNC(kvmppc_copy_from_svcpu) 140 149 nop 141 150 142 151 #ifdef CONFIG_PPC_BOOK3S_64 143 - /* Re-enable interrupts */ 144 - ld r3, HSTATE_HOST_MSR(r13) 145 - ori r3, r3, MSR_EE 146 - MTMSR_EERI(r3) 147 - 148 152 /* 149 153 * Reload kernel SPRG3 value. 150 154 * No need to save guest value as usermode can't modify SPRG3. 151 155 */ 152 156 ld r3, PACA_SPRG3(r13) 153 157 mtspr SPRN_SPRG3, r3 154 - 155 158 #endif /* CONFIG_PPC_BOOK3S_64 */ 156 159 157 160 /* R7 = vcpu */ ··· 180 177 PPC_STL r31, VCPU_GPR(R31)(r7) 181 178 182 179 /* Pass the exit number as 3rd argument to kvmppc_handle_exit */ 183 - mr r5, r12 180 + lwz r5, VCPU_TRAP(r7) 184 181 185 182 /* Restore r3 (kvm_run) and r4 (vcpu) */ 186 183 REST_2GPRS(3, r1)
+22
arch/powerpc/kvm/book3s_pr.c
··· 66 66 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 67 67 memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb)); 68 68 svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max; 69 + svcpu->in_use = 0; 69 70 svcpu_put(svcpu); 70 71 #endif 71 72 vcpu->cpu = smp_processor_id(); ··· 79 78 { 80 79 #ifdef CONFIG_PPC_BOOK3S_64 81 80 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 81 + if (svcpu->in_use) { 82 + kvmppc_copy_from_svcpu(vcpu, svcpu); 83 + } 82 84 memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb)); 83 85 to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max; 84 86 svcpu_put(svcpu); ··· 114 110 svcpu->ctr = vcpu->arch.ctr; 115 111 svcpu->lr = vcpu->arch.lr; 116 112 svcpu->pc = vcpu->arch.pc; 113 + svcpu->in_use = true; 117 114 } 118 115 119 116 /* Copy data touched by real-mode code from shadow vcpu back to vcpu */ 120 117 void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, 121 118 struct kvmppc_book3s_shadow_vcpu *svcpu) 122 119 { 120 + /* 121 + * vcpu_put would just call us again because in_use hasn't 122 + * been updated yet. 123 + */ 124 + preempt_disable(); 125 + 126 + /* 127 + * Maybe we were already preempted and synced the svcpu from 128 + * our preempt notifiers. Don't bother touching this svcpu then. 129 + */ 130 + if (!svcpu->in_use) 131 + goto out; 132 + 123 133 vcpu->arch.gpr[0] = svcpu->gpr[0]; 124 134 vcpu->arch.gpr[1] = svcpu->gpr[1]; 125 135 vcpu->arch.gpr[2] = svcpu->gpr[2]; ··· 157 139 vcpu->arch.fault_dar = svcpu->fault_dar; 158 140 vcpu->arch.fault_dsisr = svcpu->fault_dsisr; 159 141 vcpu->arch.last_inst = svcpu->last_inst; 142 + svcpu->in_use = false; 143 + 144 + out: 145 + preempt_enable(); 160 146 } 161 147 162 148 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
+1 -5
arch/powerpc/kvm/book3s_rmhandlers.S
··· 153 153 154 154 li r6, MSR_IR | MSR_DR 155 155 andc r6, r5, r6 /* Clear DR and IR in MSR value */ 156 - #ifdef CONFIG_PPC_BOOK3S_32 157 156 /* 158 157 * Set EE in HOST_MSR so that it's enabled when we get into our 159 - * C exit handler function. On 64-bit we delay enabling 160 - * interrupts until we have finished transferring stuff 161 - * to or from the PACA. 158 + * C exit handler function. 162 159 */ 163 160 ori r5, r5, MSR_EE 164 - #endif 165 161 mtsrr0 r7 166 162 mtsrr1 r6 167 163 RFI
+6 -6
arch/powerpc/kvm/booke.c
··· 681 681 int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) 682 682 { 683 683 int ret, s; 684 - struct thread_struct thread; 684 + struct debug_reg debug; 685 685 #ifdef CONFIG_PPC_FPU 686 686 struct thread_fp_state fp; 687 687 int fpexc_mode; ··· 723 723 #endif 724 724 725 725 /* Switch to guest debug context */ 726 - thread.debug = vcpu->arch.shadow_dbg_reg; 727 - switch_booke_debug_regs(&thread); 728 - thread.debug = current->thread.debug; 726 + debug = vcpu->arch.shadow_dbg_reg; 727 + switch_booke_debug_regs(&debug); 728 + debug = current->thread.debug; 729 729 current->thread.debug = vcpu->arch.shadow_dbg_reg; 730 730 731 731 kvmppc_fix_ee_before_entry(); ··· 736 736 We also get here with interrupts enabled. */ 737 737 738 738 /* Switch back to user space debug context */ 739 - switch_booke_debug_regs(&thread); 740 - current->thread.debug = thread.debug; 739 + switch_booke_debug_regs(&debug); 740 + current->thread.debug = debug; 741 741 742 742 #ifdef CONFIG_PPC_FPU 743 743 kvmppc_save_guest_fp(vcpu);
+6 -6
arch/powerpc/platforms/powernv/opal-lpc.c
··· 24 24 static u8 opal_lpc_inb(unsigned long port) 25 25 { 26 26 int64_t rc; 27 - uint32_t data; 27 + __be32 data; 28 28 29 29 if (opal_lpc_chip_id < 0 || port > 0xffff) 30 30 return 0xff; 31 31 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 1); 32 - return rc ? 0xff : data; 32 + return rc ? 0xff : be32_to_cpu(data); 33 33 } 34 34 35 35 static __le16 __opal_lpc_inw(unsigned long port) 36 36 { 37 37 int64_t rc; 38 - uint32_t data; 38 + __be32 data; 39 39 40 40 if (opal_lpc_chip_id < 0 || port > 0xfffe) 41 41 return 0xffff; 42 42 if (port & 1) 43 43 return (__le16)opal_lpc_inb(port) << 8 | opal_lpc_inb(port + 1); 44 44 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 2); 45 - return rc ? 0xffff : data; 45 + return rc ? 0xffff : be32_to_cpu(data); 46 46 } 47 47 static u16 opal_lpc_inw(unsigned long port) 48 48 { ··· 52 52 static __le32 __opal_lpc_inl(unsigned long port) 53 53 { 54 54 int64_t rc; 55 - uint32_t data; 55 + __be32 data; 56 56 57 57 if (opal_lpc_chip_id < 0 || port > 0xfffc) 58 58 return 0xffffffff; ··· 62 62 (__le32)opal_lpc_inb(port + 2) << 8 | 63 63 opal_lpc_inb(port + 3); 64 64 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 4); 65 - return rc ? 0xffffffff : data; 65 + return rc ? 0xffffffff : be32_to_cpu(data); 66 66 } 67 67 68 68 static u32 opal_lpc_inl(unsigned long port)
+3 -1
arch/powerpc/platforms/powernv/opal-xscom.c
··· 96 96 { 97 97 struct opal_scom_map *m = map; 98 98 int64_t rc; 99 + __be64 v; 99 100 100 101 reg = opal_scom_unmangle(reg); 101 - rc = opal_xscom_read(m->chip, m->addr + reg, (uint64_t *)__pa(value)); 102 + rc = opal_xscom_read(m->chip, m->addr + reg, (__be64 *)__pa(&v)); 103 + *value = be64_to_cpu(v); 102 104 return opal_xscom_err_xlate(rc); 103 105 } 104 106
+6 -6
arch/powerpc/platforms/pseries/lparcfg.c
··· 157 157 { 158 158 struct hvcall_ppp_data ppp_data; 159 159 struct device_node *root; 160 - const int *perf_level; 160 + const __be32 *perf_level; 161 161 int rc; 162 162 163 163 rc = h_get_ppp(&ppp_data); ··· 201 201 perf_level = of_get_property(root, 202 202 "ibm,partition-performance-parameters-level", 203 203 NULL); 204 - if (perf_level && (*perf_level >= 1)) { 204 + if (perf_level && (be32_to_cpup(perf_level) >= 1)) { 205 205 seq_printf(m, 206 206 "physical_procs_allocated_to_virtualization=%d\n", 207 207 ppp_data.phys_platform_procs); ··· 435 435 int partition_potential_processors; 436 436 int partition_active_processors; 437 437 struct device_node *rtas_node; 438 - const int *lrdrp = NULL; 438 + const __be32 *lrdrp = NULL; 439 439 440 440 rtas_node = of_find_node_by_path("/rtas"); 441 441 if (rtas_node) ··· 444 444 if (lrdrp == NULL) { 445 445 partition_potential_processors = vdso_data->processorCount; 446 446 } else { 447 - partition_potential_processors = *(lrdrp + 4); 447 + partition_potential_processors = be32_to_cpup(lrdrp + 4); 448 448 } 449 449 of_node_put(rtas_node); 450 450 ··· 654 654 const char *model = ""; 655 655 const char *system_id = ""; 656 656 const char *tmp; 657 - const unsigned int *lp_index_ptr; 657 + const __be32 *lp_index_ptr; 658 658 unsigned int lp_index = 0; 659 659 660 660 seq_printf(m, "%s %s\n", MODULE_NAME, MODULE_VERS); ··· 670 670 lp_index_ptr = of_get_property(rootdn, "ibm,partition-no", 671 671 NULL); 672 672 if (lp_index_ptr) 673 - lp_index = *lp_index_ptr; 673 + lp_index = be32_to_cpup(lp_index_ptr); 674 674 of_node_put(rootdn); 675 675 } 676 676 seq_printf(m, "serial_number=%s\n", system_id);
+15 -13
arch/powerpc/platforms/pseries/msi.c
··· 130 130 { 131 131 struct device_node *dn; 132 132 struct pci_dn *pdn; 133 - const u32 *req_msi; 133 + const __be32 *p; 134 + u32 req_msi; 134 135 135 136 pdn = pci_get_pdn(pdev); 136 137 if (!pdn) ··· 139 138 140 139 dn = pdn->node; 141 140 142 - req_msi = of_get_property(dn, prop_name, NULL); 143 - if (!req_msi) { 141 + p = of_get_property(dn, prop_name, NULL); 142 + if (!p) { 144 143 pr_debug("rtas_msi: No %s on %s\n", prop_name, dn->full_name); 145 144 return -ENOENT; 146 145 } 147 146 148 - if (*req_msi < nvec) { 147 + req_msi = be32_to_cpup(p); 148 + if (req_msi < nvec) { 149 149 pr_debug("rtas_msi: %s requests < %d MSIs\n", prop_name, nvec); 150 150 151 - if (*req_msi == 0) /* Be paranoid */ 151 + if (req_msi == 0) /* Be paranoid */ 152 152 return -ENOSPC; 153 153 154 - return *req_msi; 154 + return req_msi; 155 155 } 156 156 157 157 return 0; ··· 173 171 static struct device_node *find_pe_total_msi(struct pci_dev *dev, int *total) 174 172 { 175 173 struct device_node *dn; 176 - const u32 *p; 174 + const __be32 *p; 177 175 178 176 dn = of_node_get(pci_device_to_OF_node(dev)); 179 177 while (dn) { ··· 181 179 if (p) { 182 180 pr_debug("rtas_msi: found prop on dn %s\n", 183 181 dn->full_name); 184 - *total = *p; 182 + *total = be32_to_cpup(p); 185 183 return dn; 186 184 } 187 185 ··· 234 232 static void *count_non_bridge_devices(struct device_node *dn, void *data) 235 233 { 236 234 struct msi_counts *counts = data; 237 - const u32 *p; 235 + const __be32 *p; 238 236 u32 class; 239 237 240 238 pr_debug("rtas_msi: counting %s\n", dn->full_name); 241 239 242 240 p = of_get_property(dn, "class-code", NULL); 243 - class = p ? *p : 0; 241 + class = p ? be32_to_cpup(p) : 0; 244 242 245 243 if ((class >> 8) != PCI_CLASS_BRIDGE_PCI) 246 244 counts->num_devices++; ··· 251 249 static void *count_spare_msis(struct device_node *dn, void *data) 252 250 { 253 251 struct msi_counts *counts = data; 254 - const u32 *p; 252 + const __be32 *p; 255 253 int req; 256 254 257 255 if (dn == counts->requestor) ··· 262 260 req = 0; 263 261 p = of_get_property(dn, "ibm,req#msi", NULL); 264 262 if (p) 265 - req = *p; 263 + req = be32_to_cpup(p); 266 264 267 265 p = of_get_property(dn, "ibm,req#msi-x", NULL); 268 266 if (p) 269 - req = max(req, (int)*p); 267 + req = max(req, (int)be32_to_cpup(p)); 270 268 } 271 269 272 270 if (req < counts->quota)
+23 -23
arch/powerpc/platforms/pseries/nvram.c
··· 43 43 static DEFINE_SPINLOCK(nvram_lock); 44 44 45 45 struct err_log_info { 46 - int error_type; 47 - unsigned int seq_num; 46 + __be32 error_type; 47 + __be32 seq_num; 48 48 }; 49 49 50 50 struct nvram_os_partition { ··· 79 79 }; 80 80 81 81 struct oops_log_info { 82 - u16 version; 83 - u16 report_length; 84 - u64 timestamp; 82 + __be16 version; 83 + __be16 report_length; 84 + __be64 timestamp; 85 85 } __attribute__((packed)); 86 86 87 87 static void oops_to_nvram(struct kmsg_dumper *dumper, ··· 291 291 length = part->size; 292 292 } 293 293 294 - info.error_type = err_type; 295 - info.seq_num = error_log_cnt; 294 + info.error_type = cpu_to_be32(err_type); 295 + info.seq_num = cpu_to_be32(error_log_cnt); 296 296 297 297 tmp_index = part->index; 298 298 ··· 364 364 } 365 365 366 366 if (part->os_partition) { 367 - *error_log_cnt = info.seq_num; 368 - *err_type = info.error_type; 367 + *error_log_cnt = be32_to_cpu(info.seq_num); 368 + *err_type = be32_to_cpu(info.error_type); 369 369 } 370 370 371 371 return 0; ··· 529 529 pr_err("nvram: logging uncompressed oops/panic report\n"); 530 530 return -1; 531 531 } 532 - oops_hdr->version = OOPS_HDR_VERSION; 533 - oops_hdr->report_length = (u16) zipped_len; 534 - oops_hdr->timestamp = get_seconds(); 532 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 533 + oops_hdr->report_length = cpu_to_be16(zipped_len); 534 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 535 535 return 0; 536 536 } 537 537 ··· 574 574 clobbering_unread_rtas_event()) 575 575 return -1; 576 576 577 - oops_hdr->version = OOPS_HDR_VERSION; 578 - oops_hdr->report_length = (u16) size; 579 - oops_hdr->timestamp = get_seconds(); 577 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 578 + oops_hdr->report_length = cpu_to_be16(size); 579 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 580 580 581 581 if (compressed) 582 582 err_type = ERR_TYPE_KERNEL_PANIC_GZ; ··· 670 670 size_t length, hdr_size; 671 671 672 672 oops_hdr = (struct oops_log_info *)buff; 673 - if (oops_hdr->version < OOPS_HDR_VERSION) { 673 + if (be16_to_cpu(oops_hdr->version) < OOPS_HDR_VERSION) { 674 674 /* Old format oops header had 2-byte record size */ 675 675 hdr_size = sizeof(u16); 676 - length = oops_hdr->version; 676 + length = be16_to_cpu(oops_hdr->version); 677 677 time->tv_sec = 0; 678 678 time->tv_nsec = 0; 679 679 } else { 680 680 hdr_size = sizeof(*oops_hdr); 681 - length = oops_hdr->report_length; 682 - time->tv_sec = oops_hdr->timestamp; 681 + length = be16_to_cpu(oops_hdr->report_length); 682 + time->tv_sec = be64_to_cpu(oops_hdr->timestamp); 683 683 time->tv_nsec = 0; 684 684 } 685 685 *buf = kmalloc(length, GFP_KERNEL); ··· 889 889 kmsg_dump_get_buffer(dumper, false, 890 890 oops_data, oops_data_sz, &text_len); 891 891 err_type = ERR_TYPE_KERNEL_PANIC; 892 - oops_hdr->version = OOPS_HDR_VERSION; 893 - oops_hdr->report_length = (u16) text_len; 894 - oops_hdr->timestamp = get_seconds(); 892 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 893 + oops_hdr->report_length = cpu_to_be16(text_len); 894 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 895 895 } 896 896 897 897 (void) nvram_write_os_partition(&oops_log_partition, oops_buf, 898 - (int) (sizeof(*oops_hdr) + oops_hdr->report_length), err_type, 898 + (int) (sizeof(*oops_hdr) + text_len), err_type, 899 899 ++oops_count); 900 900 901 901 spin_unlock_irqrestore(&lock, flags);
+4 -4
arch/powerpc/platforms/pseries/pci.c
··· 113 113 { 114 114 struct device_node *dn, *pdn; 115 115 struct pci_bus *bus; 116 - const uint32_t *pcie_link_speed_stats; 116 + const __be32 *pcie_link_speed_stats; 117 117 118 118 bus = bridge->bus; 119 119 ··· 122 122 return 0; 123 123 124 124 for (pdn = dn; pdn != NULL; pdn = of_get_next_parent(pdn)) { 125 - pcie_link_speed_stats = (const uint32_t *) of_get_property(pdn, 125 + pcie_link_speed_stats = of_get_property(pdn, 126 126 "ibm,pcie-link-speed-stats", NULL); 127 127 if (pcie_link_speed_stats) 128 128 break; ··· 135 135 return 0; 136 136 } 137 137 138 - switch (pcie_link_speed_stats[0]) { 138 + switch (be32_to_cpup(pcie_link_speed_stats)) { 139 139 case 0x01: 140 140 bus->max_bus_speed = PCIE_SPEED_2_5GT; 141 141 break; ··· 147 147 break; 148 148 } 149 149 150 - switch (pcie_link_speed_stats[1]) { 150 + switch (be32_to_cpup(pcie_link_speed_stats)) { 151 151 case 0x01: 152 152 bus->cur_bus_speed = PCIE_SPEED_2_5GT; 153 153 break;
+1 -1
arch/sh/lib/Makefile
···
 		 checksum.o strlen.o div64.o div64-generic.o
 
 # Extracted from libgcc
-lib-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
+obj-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
 	 ashlsi3.o ashrsi3.o ashiftrt.o lshrsi3.o \
 	 udiv_qrnnd.o
 
+2 -2
arch/sparc/include/asm/pgtable_64.h
··· 619 619 } 620 620 621 621 #define pte_accessible pte_accessible 622 - static inline unsigned long pte_accessible(pte_t a) 622 + static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a) 623 623 { 624 624 return pte_val(a) & _PAGE_VALID; 625 625 } ··· 847 847 * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U 848 848 * and SUN4V pte layout, so this inline test is fine. 849 849 */ 850 - if (likely(mm != &init_mm) && pte_accessible(orig)) 850 + if (likely(mm != &init_mm) && pte_accessible(mm, orig)) 851 851 tlb_batch_add(mm, addr, ptep, orig, fullmm); 852 852 } 853 853
+1
arch/x86/Kconfig
··· 26 26 select HAVE_AOUT if X86_32 27 27 select HAVE_UNSTABLE_SCHED_CLOCK 28 28 select ARCH_SUPPORTS_NUMA_BALANCING 29 + select ARCH_SUPPORTS_INT128 if X86_64 29 30 select ARCH_WANTS_PROT_NUMA_PROT_NONE 30 31 select HAVE_IDE 31 32 select HAVE_OPROFILE
+9 -2
arch/x86/include/asm/pgtable.h
··· 452 452 } 453 453 454 454 #define pte_accessible pte_accessible 455 - static inline int pte_accessible(pte_t a) 455 + static inline bool pte_accessible(struct mm_struct *mm, pte_t a) 456 456 { 457 - return pte_flags(a) & _PAGE_PRESENT; 457 + if (pte_flags(a) & _PAGE_PRESENT) 458 + return true; 459 + 460 + if ((pte_flags(a) & (_PAGE_PROTNONE | _PAGE_NUMA)) && 461 + mm_tlb_flush_pending(mm)) 462 + return true; 463 + 464 + return false; 458 465 } 459 466 460 467 static inline int pte_hidden(pte_t pte)
+11
arch/x86/include/asm/preempt.h
···
 DECLARE_PER_CPU(int, __preempt_count);
 
 /*
+ * We use the PREEMPT_NEED_RESCHED bit as an inverted NEED_RESCHED such
+ * that a decrement hitting 0 means we can and should reschedule.
+ */
+#define PREEMPT_ENABLED	(0 + PREEMPT_NEED_RESCHED)
+
+/*
  * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users
  * that think a non-zero value indicates we cannot preempt.
  */
···
 	__this_cpu_add_4(__preempt_count, -val);
 }
 
+/*
+ * Because we keep PREEMPT_NEED_RESCHED set when we do _not_ need to reschedule
+ * a decrement which hits zero means we have no preempt_count and should
+ * reschedule.
+ */
 static __always_inline bool __preempt_count_dec_and_test(void)
 {
 	GEN_UNARY_RMWcc("decl", __preempt_count, __percpu_arg(0), "e");
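The new comments describe the inverted-bit scheme: PREEMPT_NEED_RESCHED stays
set while no reschedule is pending, so a single decrement that reaches zero
means both "the preempt count dropped to zero" and "a reschedule is wanted". A
rough userspace simulation of that bookkeeping (hypothetical names; the real
code operates on a per-CPU word with asm helpers):

	#include <stdio.h>
	#include <stdbool.h>

	#define NEED_RESCHED_BIT	0x80000000u
	#define PREEMPT_ENABLED		(0u + NEED_RESCHED_BIT)	/* no nesting, no resched */

	static unsigned int preempt_count = PREEMPT_ENABLED;

	static void preempt_disable(void)    { preempt_count += 1; }
	static void set_need_resched(void)   { preempt_count &= ~NEED_RESCHED_BIT; }
	static void clear_need_resched(void) { preempt_count |= NEED_RESCHED_BIT; }

	/* Mirrors __preempt_count_dec_and_test(): one decrement, one test for zero. */
	static bool preempt_enable_should_resched(void)
	{
		return --preempt_count == 0;
	}

	int main(void)
	{
		preempt_disable();
		set_need_resched();	/* a wakeup arrives inside the critical section */

		if (preempt_enable_should_resched()) {
			printf("count hit zero -> reschedule\n");
			clear_need_resched();	/* back to PREEMPT_ENABLED */
		}
		return 0;
	}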
+12 -3
arch/x86/kernel/cpu/perf_event.h
···
 	__EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
 			  HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW)
 
-#define EVENT_CONSTRAINT_END		\
-	EVENT_CONSTRAINT(0, 0, 0)
+/*
+ * We define the end marker as having a weight of -1
+ * to enable blacklisting of events using a counter bitmask
+ * of zero and thus a weight of zero.
+ * The end marker has a weight that cannot possibly be
+ * obtained from counting the bits in the bitmask.
+ */
+#define EVENT_CONSTRAINT_END { .weight = -1 }
 
+/*
+ * Check for end marker with weight == -1
+ */
 #define for_each_event_constraint(e, c)	\
-	for ((e) = (c); (e)->weight; (e)++)
+	for ((e) = (c); (e)->weight != -1; (e)++)
 
 /*
  * Extra registers for specific events.
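Because a weight of zero is now a legitimate way to blacklist an event (no
usable counters), the constraint table can no longer be terminated by weight
== 0; the sentinel becomes weight == -1 and the iterator tests for it
explicitly. A minimal sketch of such a sentinel-terminated table with invented
event names (not the perf_event code itself):

	#include <stdio.h>

	struct event_constraint {
		const char *name;
		int weight;		/* 0 is now meaningful: "no counters allowed" */
	};

	#define EVENT_CONSTRAINT_END	{ .weight = -1 }

	#define for_each_event_constraint(e, c) \
		for ((e) = (c); (e)->weight != -1; (e)++)

	static const struct event_constraint constraints[] = {
		{ "cycles",      4 },
		{ "blacklisted", 0 },	/* would have ended the old weight==0 loop */
		{ "mem-loads",   1 },
		EVENT_CONSTRAINT_END
	};

	int main(void)
	{
		const struct event_constraint *c;

		for_each_event_constraint(c, constraints)
			printf("%-12s weight=%d\n", c->name, c->weight);
		return 0;
	}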
+13
arch/x86/mm/gup.c
··· 83 83 pte_t pte = gup_get_pte(ptep); 84 84 struct page *page; 85 85 86 + /* Similar to the PMD case, NUMA hinting must take slow path */ 87 + if (pte_numa(pte)) { 88 + pte_unmap(ptep); 89 + return 0; 90 + } 91 + 86 92 if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) { 87 93 pte_unmap(ptep); 88 94 return 0; ··· 173 167 if (pmd_none(pmd) || pmd_trans_splitting(pmd)) 174 168 return 0; 175 169 if (unlikely(pmd_large(pmd))) { 170 + /* 171 + * NUMA hinting faults need to be handled in the GUP 172 + * slowpath for accounting purposes and so that they 173 + * can be serialised against THP migration. 174 + */ 175 + if (pmd_numa(pmd)) 176 + return 0; 176 177 if (!gup_huge_pmd(pmd, addr, next, write, pages, nr)) 177 178 return 0; 178 179 } else {
+1
drivers/acpi/apei/erst.c
··· 942 942 static struct pstore_info erst_info = { 943 943 .owner = THIS_MODULE, 944 944 .name = "erst", 945 + .flags = PSTORE_FLAGS_FRAGILE, 945 946 .open = erst_open_pstore, 946 947 .close = erst_close_pstore, 947 948 .read = erst_reader,
+3 -3
drivers/clk/clk-s2mps11.c
··· 60 60 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 61 61 int ret; 62 62 63 - ret = regmap_update_bits(s2mps11->iodev->regmap, 63 + ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, 64 64 S2MPS11_REG_RTC_CTRL, 65 65 s2mps11->mask, s2mps11->mask); 66 66 if (!ret) ··· 74 74 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 75 75 int ret; 76 76 77 - ret = regmap_update_bits(s2mps11->iodev->regmap, S2MPS11_REG_RTC_CTRL, 77 + ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, S2MPS11_REG_RTC_CTRL, 78 78 s2mps11->mask, ~s2mps11->mask); 79 79 80 80 if (!ret) ··· 174 174 s2mps11_clk->hw.init = &s2mps11_clks_init[i]; 175 175 s2mps11_clk->mask = 1 << i; 176 176 177 - ret = regmap_read(s2mps11_clk->iodev->regmap, 177 + ret = regmap_read(s2mps11_clk->iodev->regmap_pmic, 178 178 S2MPS11_REG_RTC_CTRL, &val); 179 179 if (ret < 0) 180 180 goto err_reg;
+1
drivers/clocksource/Kconfig
··· 75 75 config CLKSRC_EFM32 76 76 bool "Clocksource for Energy Micro's EFM32 SoCs" if !ARCH_EFM32 77 77 depends on OF && ARM && (ARCH_EFM32 || COMPILE_TEST) 78 + select CLKSRC_MMIO 78 79 default ARCH_EFM32 79 80 help 80 81 Support to use the timers of EFM32 SoCs as clock source and clock
-1
drivers/clocksource/clksrc-of.c
··· 35 35 36 36 init_func = match->data; 37 37 init_func(np); 38 - of_node_put(np); 39 38 } 40 39 }
+4 -3
drivers/clocksource/dw_apb_timer_of.c
··· 108 108 109 109 static u64 read_sched_clock(void) 110 110 { 111 - return __raw_readl(sched_io_base); 111 + return ~__raw_readl(sched_io_base); 112 112 } 113 113 114 114 static const struct of_device_id sptimer_ids[] __initconst = { 115 115 { .compatible = "picochip,pc3x2-rtc" }, 116 - { .compatible = "snps,dw-apb-timer-sp" }, 117 116 { /* Sentinel */ }, 118 117 }; 119 118 ··· 150 151 num_called++; 151 152 } 152 153 CLOCKSOURCE_OF_DECLARE(pc3x2_timer, "picochip,pc3x2-timer", dw_apb_timer_init); 153 - CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer-osc", dw_apb_timer_init); 154 + CLOCKSOURCE_OF_DECLARE(apb_timer_osc, "snps,dw-apb-timer-osc", dw_apb_timer_init); 155 + CLOCKSOURCE_OF_DECLARE(apb_timer_sp, "snps,dw-apb-timer-sp", dw_apb_timer_init); 156 + CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer", dw_apb_timer_init);
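Note: the sched_clock read is complemented, presumably because this timer counts down; inverting the register turns it into a value that increases over time, which is what sched_clock needs. Illustrative userspace sketch only (the variable below stands in for the hardware register, it is not the driver code):

#include <stdint.h>
#include <stdio.h>

static uint32_t fake_downcounter = 0xfffffff0; /* pretend hardware counting toward zero */

static uint64_t read_sched_clock_demo(void)
{
    return (uint32_t)~fake_downcounter;        /* invert so later reads are larger */
}

int main(void)
{
    uint64_t a = read_sched_clock_demo();
    fake_downcounter -= 100;                   /* time passes, counter decreases */
    uint64_t b = read_sched_clock_demo();

    printf("%llu -> %llu (monotonic: %d)\n",
           (unsigned long long)a, (unsigned long long)b, b > a);
    return 0;
}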
+3
drivers/clocksource/sun4i_timer.c
··· 179 179 writel(TIMER_CTL_CLK_SRC(TIMER_CTL_CLK_SRC_OSC24M), 180 180 timer_base + TIMER_CTL_REG(0)); 181 181 182 + /* Make sure timer is stopped before playing with interrupts */ 183 + sun4i_clkevt_time_stop(0); 184 + 182 185 ret = setup_irq(irq, &sun4i_timer_irq); 183 186 if (ret) 184 187 pr_warn("failed to setup irq %d\n", irq);
+5 -5
drivers/clocksource/time-armada-370-xp.c
··· 256 256 ticks_per_jiffy = (timer_clk + HZ / 2) / HZ; 257 257 258 258 /* 259 - * Set scale and timer for sched_clock. 260 - */ 261 - sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk); 262 - 263 - /* 264 259 * Setup free-running clocksource timer (interrupts 265 260 * disabled). 266 261 */ ··· 264 269 265 270 timer_ctrl_clrset(0, TIMER0_EN | TIMER0_RELOAD_EN | 266 271 TIMER0_DIV(TIMER_DIVIDER_SHIFT)); 272 + 273 + /* 274 + * Set scale and timer for sched_clock. 275 + */ 276 + sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk); 267 277 268 278 clocksource_mmio_init(timer_base + TIMER0_VAL_OFF, 269 279 "armada_370_xp_clocksource",
+7
drivers/dma/Kconfig
··· 62 62 tristate "Intel I/OAT DMA support" 63 63 depends on PCI && X86 64 64 select DMA_ENGINE 65 + select DMA_ENGINE_RAID 65 66 select DCA 66 67 help 67 68 Enable support for the Intel(R) I/OAT DMA engine present ··· 113 112 bool "Marvell XOR engine support" 114 113 depends on PLAT_ORION 115 114 select DMA_ENGINE 115 + select DMA_ENGINE_RAID 116 116 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 117 117 ---help--- 118 118 Enable support for the Marvell XOR engine. ··· 189 187 tristate "AMCC PPC440SPe ADMA support" 190 188 depends on 440SPe || 440SP 191 189 select DMA_ENGINE 190 + select DMA_ENGINE_RAID 192 191 select ARCH_HAS_ASYNC_TX_FIND_CHANNEL 193 192 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 194 193 help ··· 355 352 bool "Network: TCP receive copy offload" 356 353 depends on DMA_ENGINE && NET 357 354 default (INTEL_IOATDMA || FSL_DMA) 355 + depends on BROKEN 358 356 help 359 357 This enables the use of DMA engines in the network stack to 360 358 offload receive copy-to-user operations, freeing CPU cycles. ··· 380 376 help 381 377 Simple DMA test client. Say N unless you're debugging a 382 378 DMA Device driver. 379 + 380 + config DMA_ENGINE_RAID 381 + bool 383 382 384 383 endif
-4
drivers/dma/at_hdmac_regs.h
··· 347 347 { 348 348 return &chan->dev->device; 349 349 } 350 - static struct device *chan2parent(struct dma_chan *chan) 351 - { 352 - return chan->dev->device.parent; 353 - } 354 350 355 351 #if defined(VERBOSE_DEBUG) 356 352 static void vdbg_dump_regs(struct at_dma_chan *atchan)
+2 -2
drivers/dma/dmaengine.c
··· 912 912 #define __UNMAP_POOL(x) { .size = x, .name = "dmaengine-unmap-" __stringify(x) } 913 913 static struct dmaengine_unmap_pool unmap_pool[] = { 914 914 __UNMAP_POOL(2), 915 - #if IS_ENABLED(CONFIG_ASYNC_TX_DMA) 915 + #if IS_ENABLED(CONFIG_DMA_ENGINE_RAID) 916 916 __UNMAP_POOL(16), 917 917 __UNMAP_POOL(128), 918 918 __UNMAP_POOL(256), ··· 1054 1054 dma_cookie_t cookie; 1055 1055 unsigned long flags; 1056 1056 1057 - unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOIO); 1057 + unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOWAIT); 1058 1058 if (!unmap) 1059 1059 return -ENOMEM; 1060 1060
+4 -4
drivers/dma/dmatest.c
··· 539 539 540 540 um->len = params->buf_size; 541 541 for (i = 0; i < src_cnt; i++) { 542 - unsigned long buf = (unsigned long) thread->srcs[i]; 542 + void *buf = thread->srcs[i]; 543 543 struct page *pg = virt_to_page(buf); 544 - unsigned pg_off = buf & ~PAGE_MASK; 544 + unsigned pg_off = (unsigned long) buf & ~PAGE_MASK; 545 545 546 546 um->addr[i] = dma_map_page(dev->dev, pg, pg_off, 547 547 um->len, DMA_TO_DEVICE); ··· 559 559 /* map with DMA_BIDIRECTIONAL to force writeback/invalidate */ 560 560 dsts = &um->addr[src_cnt]; 561 561 for (i = 0; i < dst_cnt; i++) { 562 - unsigned long buf = (unsigned long) thread->dsts[i]; 562 + void *buf = thread->dsts[i]; 563 563 struct page *pg = virt_to_page(buf); 564 - unsigned pg_off = buf & ~PAGE_MASK; 564 + unsigned pg_off = (unsigned long) buf & ~PAGE_MASK; 565 565 566 566 dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len, 567 567 DMA_BIDIRECTIONAL);
+1 -30
drivers/dma/fsldma.c
··· 86 86 hw->count = CPU_TO_DMA(chan, count, 32); 87 87 } 88 88 89 - static u32 get_desc_cnt(struct fsldma_chan *chan, struct fsl_desc_sw *desc) 90 - { 91 - return DMA_TO_CPU(chan, desc->hw.count, 32); 92 - } 93 - 94 89 static void set_desc_src(struct fsldma_chan *chan, 95 90 struct fsl_dma_ld_hw *hw, dma_addr_t src) 96 91 { ··· 96 101 hw->src_addr = CPU_TO_DMA(chan, snoop_bits | src, 64); 97 102 } 98 103 99 - static dma_addr_t get_desc_src(struct fsldma_chan *chan, 100 - struct fsl_desc_sw *desc) 101 - { 102 - u64 snoop_bits; 103 - 104 - snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 105 - ? ((u64)FSL_DMA_SATR_SREADTYPE_SNOOP_READ << 32) : 0; 106 - return DMA_TO_CPU(chan, desc->hw.src_addr, 64) & ~snoop_bits; 107 - } 108 - 109 104 static void set_desc_dst(struct fsldma_chan *chan, 110 105 struct fsl_dma_ld_hw *hw, dma_addr_t dst) 111 106 { ··· 104 119 snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 105 120 ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0; 106 121 hw->dst_addr = CPU_TO_DMA(chan, snoop_bits | dst, 64); 107 - } 108 - 109 - static dma_addr_t get_desc_dst(struct fsldma_chan *chan, 110 - struct fsl_desc_sw *desc) 111 - { 112 - u64 snoop_bits; 113 - 114 - snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 115 - ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0; 116 - return DMA_TO_CPU(chan, desc->hw.dst_addr, 64) & ~snoop_bits; 117 122 } 118 123 119 124 static void set_desc_next(struct fsldma_chan *chan, ··· 383 408 struct fsl_desc_sw *desc = tx_to_fsl_desc(tx); 384 409 struct fsl_desc_sw *child; 385 410 unsigned long flags; 386 - dma_cookie_t cookie; 411 + dma_cookie_t cookie = -EINVAL; 387 412 388 413 spin_lock_irqsave(&chan->desc_lock, flags); 389 414 ··· 829 854 struct fsl_desc_sw *desc) 830 855 { 831 856 struct dma_async_tx_descriptor *txd = &desc->async_tx; 832 - struct device *dev = chan->common.device->dev; 833 - dma_addr_t src = get_desc_src(chan, desc); 834 - dma_addr_t dst = get_desc_dst(chan, desc); 835 - u32 len = get_desc_cnt(chan, desc); 836 857 837 858 /* Run the link descriptor callback function */ 838 859 if (txd->callback) {
+64 -39
drivers/dma/mv_xor.c
··· 54 54 hw_desc->desc_command = (1 << 31); 55 55 } 56 56 57 - static u32 mv_desc_get_dest_addr(struct mv_xor_desc_slot *desc) 58 - { 59 - struct mv_xor_desc *hw_desc = desc->hw_desc; 60 - return hw_desc->phy_dest_addr; 61 - } 62 - 63 57 static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc, 64 58 u32 byte_count) 65 59 { ··· 781 787 /* 782 788 * Perform a transaction to verify the HW works. 783 789 */ 784 - #define MV_XOR_TEST_SIZE 2000 785 790 786 791 static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan) 787 792 { ··· 790 797 struct dma_chan *dma_chan; 791 798 dma_cookie_t cookie; 792 799 struct dma_async_tx_descriptor *tx; 800 + struct dmaengine_unmap_data *unmap; 793 801 int err = 0; 794 802 795 - src = kmalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL); 803 + src = kmalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL); 796 804 if (!src) 797 805 return -ENOMEM; 798 806 799 - dest = kzalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL); 807 + dest = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL); 800 808 if (!dest) { 801 809 kfree(src); 802 810 return -ENOMEM; 803 811 } 804 812 805 813 /* Fill in src buffer */ 806 - for (i = 0; i < MV_XOR_TEST_SIZE; i++) 814 + for (i = 0; i < PAGE_SIZE; i++) 807 815 ((u8 *) src)[i] = (u8)i; 808 816 809 817 dma_chan = &mv_chan->dmachan; ··· 813 819 goto out; 814 820 } 815 821 816 - dest_dma = dma_map_single(dma_chan->device->dev, dest, 817 - MV_XOR_TEST_SIZE, DMA_FROM_DEVICE); 822 + unmap = dmaengine_get_unmap_data(dma_chan->device->dev, 2, GFP_KERNEL); 823 + if (!unmap) { 824 + err = -ENOMEM; 825 + goto free_resources; 826 + } 818 827 819 - src_dma = dma_map_single(dma_chan->device->dev, src, 820 - MV_XOR_TEST_SIZE, DMA_TO_DEVICE); 828 + src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 0, 829 + PAGE_SIZE, DMA_TO_DEVICE); 830 + unmap->to_cnt = 1; 831 + unmap->addr[0] = src_dma; 832 + 833 + dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 0, 834 + PAGE_SIZE, DMA_FROM_DEVICE); 835 + unmap->from_cnt = 1; 836 + unmap->addr[1] = dest_dma; 837 + 838 + unmap->len = PAGE_SIZE; 821 839 822 840 tx = mv_xor_prep_dma_memcpy(dma_chan, dest_dma, src_dma, 823 - MV_XOR_TEST_SIZE, 0); 841 + PAGE_SIZE, 0); 824 842 cookie = mv_xor_tx_submit(tx); 825 843 mv_xor_issue_pending(dma_chan); 826 844 async_tx_ack(tx); ··· 847 841 } 848 842 849 843 dma_sync_single_for_cpu(dma_chan->device->dev, dest_dma, 850 - MV_XOR_TEST_SIZE, DMA_FROM_DEVICE); 851 - if (memcmp(src, dest, MV_XOR_TEST_SIZE)) { 844 + PAGE_SIZE, DMA_FROM_DEVICE); 845 + if (memcmp(src, dest, PAGE_SIZE)) { 852 846 dev_err(dma_chan->device->dev, 853 847 "Self-test copy failed compare, disabling\n"); 854 848 err = -ENODEV; ··· 856 850 } 857 851 858 852 free_resources: 853 + dmaengine_unmap_put(unmap); 859 854 mv_xor_free_chan_resources(dma_chan); 860 855 out: 861 856 kfree(src); ··· 874 867 dma_addr_t dma_srcs[MV_XOR_NUM_SRC_TEST]; 875 868 dma_addr_t dest_dma; 876 869 struct dma_async_tx_descriptor *tx; 870 + struct dmaengine_unmap_data *unmap; 877 871 struct dma_chan *dma_chan; 878 872 dma_cookie_t cookie; 879 873 u8 cmp_byte = 0; 880 874 u32 cmp_word; 881 875 int err = 0; 876 + int src_count = MV_XOR_NUM_SRC_TEST; 882 877 883 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { 878 + for (src_idx = 0; src_idx < src_count; src_idx++) { 884 879 xor_srcs[src_idx] = alloc_page(GFP_KERNEL); 885 880 if (!xor_srcs[src_idx]) { 886 881 while (src_idx--) ··· 899 890 } 900 891 901 892 /* Fill in src buffers */ 902 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { 893 
+ for (src_idx = 0; src_idx < src_count; src_idx++) { 903 894 u8 *ptr = page_address(xor_srcs[src_idx]); 904 895 for (i = 0; i < PAGE_SIZE; i++) 905 896 ptr[i] = (1 << src_idx); 906 897 } 907 898 908 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) 899 + for (src_idx = 0; src_idx < src_count; src_idx++) 909 900 cmp_byte ^= (u8) (1 << src_idx); 910 901 911 902 cmp_word = (cmp_byte << 24) | (cmp_byte << 16) | ··· 919 910 goto out; 920 911 } 921 912 922 - /* test xor */ 923 - dest_dma = dma_map_page(dma_chan->device->dev, dest, 0, PAGE_SIZE, 924 - DMA_FROM_DEVICE); 913 + unmap = dmaengine_get_unmap_data(dma_chan->device->dev, src_count + 1, 914 + GFP_KERNEL); 915 + if (!unmap) { 916 + err = -ENOMEM; 917 + goto free_resources; 918 + } 925 919 926 - for (i = 0; i < MV_XOR_NUM_SRC_TEST; i++) 927 - dma_srcs[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i], 928 - 0, PAGE_SIZE, DMA_TO_DEVICE); 920 + /* test xor */ 921 + for (i = 0; i < src_count; i++) { 922 + unmap->addr[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i], 923 + 0, PAGE_SIZE, DMA_TO_DEVICE); 924 + dma_srcs[i] = unmap->addr[i]; 925 + unmap->to_cnt++; 926 + } 927 + 928 + unmap->addr[src_count] = dma_map_page(dma_chan->device->dev, dest, 0, PAGE_SIZE, 929 + DMA_FROM_DEVICE); 930 + dest_dma = unmap->addr[src_count]; 931 + unmap->from_cnt = 1; 932 + unmap->len = PAGE_SIZE; 929 933 930 934 tx = mv_xor_prep_dma_xor(dma_chan, dest_dma, dma_srcs, 931 - MV_XOR_NUM_SRC_TEST, PAGE_SIZE, 0); 935 + src_count, PAGE_SIZE, 0); 932 936 933 937 cookie = mv_xor_tx_submit(tx); 934 938 mv_xor_issue_pending(dma_chan); ··· 970 948 } 971 949 972 950 free_resources: 951 + dmaengine_unmap_put(unmap); 973 952 mv_xor_free_chan_resources(dma_chan); 974 953 out: 975 - src_idx = MV_XOR_NUM_SRC_TEST; 954 + src_idx = src_count; 976 955 while (src_idx--) 977 956 __free_page(xor_srcs[src_idx]); 978 957 __free_page(dest); ··· 1199 1176 int i = 0; 1200 1177 1201 1178 for_each_child_of_node(pdev->dev.of_node, np) { 1179 + struct mv_xor_chan *chan; 1202 1180 dma_cap_mask_t cap_mask; 1203 1181 int irq; 1204 1182 ··· 1217 1193 goto err_channel_add; 1218 1194 } 1219 1195 1220 - xordev->channels[i] = 1221 - mv_xor_channel_add(xordev, pdev, i, 1222 - cap_mask, irq); 1223 - if (IS_ERR(xordev->channels[i])) { 1224 - ret = PTR_ERR(xordev->channels[i]); 1225 - xordev->channels[i] = NULL; 1196 + chan = mv_xor_channel_add(xordev, pdev, i, 1197 + cap_mask, irq); 1198 + if (IS_ERR(chan)) { 1199 + ret = PTR_ERR(chan); 1226 1200 irq_dispose_mapping(irq); 1227 1201 goto err_channel_add; 1228 1202 } 1229 1203 1204 + xordev->channels[i] = chan; 1230 1205 i++; 1231 1206 } 1232 1207 } else if (pdata && pdata->channels) { 1233 1208 for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) { 1234 1209 struct mv_xor_channel_data *cd; 1210 + struct mv_xor_chan *chan; 1235 1211 int irq; 1236 1212 1237 1213 cd = &pdata->channels[i]; ··· 1246 1222 goto err_channel_add; 1247 1223 } 1248 1224 1249 - xordev->channels[i] = 1250 - mv_xor_channel_add(xordev, pdev, i, 1251 - cd->cap_mask, irq); 1252 - if (IS_ERR(xordev->channels[i])) { 1253 - ret = PTR_ERR(xordev->channels[i]); 1225 + chan = mv_xor_channel_add(xordev, pdev, i, 1226 + cd->cap_mask, irq); 1227 + if (IS_ERR(chan)) { 1228 + ret = PTR_ERR(chan); 1254 1229 goto err_channel_add; 1255 1230 } 1231 + 1232 + xordev->channels[i] = chan; 1256 1233 } 1257 1234 } 1258 1235
+1 -4
drivers/dma/pl330.c
··· 2492 2492 2493 2493 static inline void _init_desc(struct dma_pl330_desc *desc) 2494 2494 { 2495 - desc->pchan = NULL; 2496 2495 desc->req.x = &desc->px; 2497 2496 desc->req.token = desc; 2498 2497 desc->rqcfg.swap = SWAP_NO; 2499 - desc->rqcfg.privileged = 0; 2500 - desc->rqcfg.insnaccess = 0; 2501 2498 desc->rqcfg.scctl = SCCTRL0; 2502 2499 desc->rqcfg.dcctl = DCCTRL0; 2503 2500 desc->req.cfg = &desc->rqcfg; ··· 2514 2517 if (!pdmac) 2515 2518 return 0; 2516 2519 2517 - desc = kmalloc(count * sizeof(*desc), flg); 2520 + desc = kcalloc(count, sizeof(*desc), flg); 2518 2521 if (!desc) 2519 2522 return 0; 2520 2523
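Note on the kcalloc() switch: kcalloc zero-fills the whole array and checks the count * size multiplication for overflow, which is presumably also why _init_desc() above can drop the explicit NULL/zero assignments. Userspace analogue with calloc() and a simplified descriptor struct (field names chosen only to echo the ones removed above):

#include <stdio.h>
#include <stdlib.h>

struct desc { void *pchan; int privileged; int insnaccess; };

int main(void)
{
    size_t count = 4;

    struct desc *d = calloc(count, sizeof(*d)); /* like kcalloc(count, sizeof(*desc), flg) */
    if (!d)
        return 1;

    /* every field already reads as zero, no per-field initialisation needed */
    printf("pchan=%p privileged=%d insnaccess=%d\n",
           d[0].pchan, d[0].privileged, d[0].insnaccess);

    free(d);
    return 0;
}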
+1 -26
drivers/dma/ppc4xx/adma.c
··· 533 533 } 534 534 535 535 /** 536 - * ppc440spe_desc_init_memset - initialize the descriptor for MEMSET operation 537 - */ 538 - static void ppc440spe_desc_init_memset(struct ppc440spe_adma_desc_slot *desc, 539 - int value, unsigned long flags) 540 - { 541 - struct dma_cdb *hw_desc = desc->hw_desc; 542 - 543 - memset(desc->hw_desc, 0, sizeof(struct dma_cdb)); 544 - desc->hw_next = NULL; 545 - desc->src_cnt = 1; 546 - desc->dst_cnt = 1; 547 - 548 - if (flags & DMA_PREP_INTERRUPT) 549 - set_bit(PPC440SPE_DESC_INT, &desc->flags); 550 - else 551 - clear_bit(PPC440SPE_DESC_INT, &desc->flags); 552 - 553 - hw_desc->sg1u = hw_desc->sg1l = cpu_to_le32((u32)value); 554 - hw_desc->sg3u = hw_desc->sg3l = cpu_to_le32((u32)value); 555 - hw_desc->opc = DMA_CDB_OPC_DFILL128; 556 - } 557 - 558 - /** 559 536 * ppc440spe_desc_set_src_addr - set source address into the descriptor 560 537 */ 561 538 static void ppc440spe_desc_set_src_addr(struct ppc440spe_adma_desc_slot *desc, ··· 1481 1504 struct ppc440spe_adma_chan *chan, 1482 1505 dma_cookie_t cookie) 1483 1506 { 1484 - int i; 1485 - 1486 1507 BUG_ON(desc->async_tx.cookie < 0); 1487 1508 if (desc->async_tx.cookie > 0) { 1488 1509 cookie = desc->async_tx.cookie; ··· 3873 3898 ppc440spe_adma_prep_dma_interrupt; 3874 3899 } 3875 3900 pr_info("%s: AMCC(R) PPC440SP(E) ADMA Engine: " 3876 - "( %s%s%s%s%s%s%s)\n", 3901 + "( %s%s%s%s%s%s)\n", 3877 3902 dev_name(adev->dev), 3878 3903 dma_has_cap(DMA_PQ, adev->common.cap_mask) ? "pq " : "", 3879 3904 dma_has_cap(DMA_PQ_VAL, adev->common.cap_mask) ? "pq_val " : "",
-1
drivers/dma/txx9dmac.c
··· 406 406 dma_async_tx_callback callback; 407 407 void *param; 408 408 struct dma_async_tx_descriptor *txd = &desc->txd; 409 - struct txx9dmac_slave *ds = dc->chan.private; 410 409 411 410 dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n", 412 411 txd->cookie, desc);
-1
drivers/firewire/sbp2.c
··· 1623 1623 .cmd_per_lun = 1, 1624 1624 .can_queue = 1, 1625 1625 .sdev_attrs = sbp2_scsi_sysfs_attrs, 1626 - .no_write_same = 1, 1627 1626 }; 1628 1627 1629 1628 MODULE_AUTHOR("Kristian Hoegsberg <krh@bitplanet.net>");
+1
drivers/firmware/efi/efi-pstore.c
··· 356 356 static struct pstore_info efi_pstore_info = { 357 357 .owner = THIS_MODULE, 358 358 .name = "efi", 359 + .flags = PSTORE_FLAGS_FRAGILE, 359 360 .open = efi_pstore_open, 360 361 .close = efi_pstore_close, 361 362 .read = efi_pstore_read,
+2 -2
drivers/gpio/gpio-msm-v2.c
··· 252 252 253 253 spin_lock_irqsave(&tlmm_lock, irq_flags); 254 254 writel(TARGET_PROC_NONE, GPIO_INTR_CFG_SU(gpio)); 255 - clear_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio)); 255 + clear_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio)); 256 256 __clear_bit(gpio, msm_gpio.enabled_irqs); 257 257 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 258 258 } ··· 264 264 265 265 spin_lock_irqsave(&tlmm_lock, irq_flags); 266 266 __set_bit(gpio, msm_gpio.enabled_irqs); 267 - set_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio)); 267 + set_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio)); 268 268 writel(TARGET_PROC_SCORPION, GPIO_INTR_CFG_SU(gpio)); 269 269 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 270 270 }
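Note: the fix is about passing bit masks rather than raw bit indices to the set/clear helpers. Quick sketch of the difference; the field positions below are hypothetical, only the BIT() usage matters:

#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1u << (n))

/* made-up positions standing in for INTR_RAW_STATUS_EN / INTR_ENABLE */
#define DEMO_RAW_STATUS_EN 3
#define DEMO_ENABLE        4

static uint32_t reg;

static void set_bits_demo(uint32_t mask) { reg |= mask; }

int main(void)
{
    set_bits_demo(DEMO_RAW_STATUS_EN | DEMO_ENABLE);           /* 3 | 4 == 7: wrong, touches bits 0-2 */
    printf("raw indices: reg=%#x\n", reg);

    reg = 0;
    set_bits_demo(BIT(DEMO_RAW_STATUS_EN) | BIT(DEMO_ENABLE)); /* bits 3 and 4, as intended */
    printf("BIT() masks:  reg=%#x\n", reg);
    return 0;
}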
+2 -1
drivers/gpio/gpio-rcar.c
··· 169 169 u32 pending; 170 170 unsigned int offset, irqs_handled = 0; 171 171 172 - while ((pending = gpio_rcar_read(p, INTDT))) { 172 + while ((pending = gpio_rcar_read(p, INTDT) & 173 + gpio_rcar_read(p, INTMSK))) { 173 174 offset = __ffs(pending); 174 175 gpio_rcar_write(p, INTCLR, BIT(offset)); 175 176 generic_handle_irq(irq_find_mapping(p->irq_domain, offset));
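Note: with the extra read of INTMSK, the loop only services sources that are both pending and currently unmasked. Minimal sketch of that loop shape in plain C (__builtin_ctz standing in for __ffs(), register values made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t intdt  = 0x0000000a;  /* pending: bits 1 and 3 */
    uint32_t intmsk = 0x00000008;  /* enabled: bit 3 only   */
    uint32_t pending;

    while ((pending = intdt & intmsk)) {
        unsigned int offset = (unsigned int)__builtin_ctz(pending); /* like __ffs() */

        printf("handling irq bit %u\n", offset);
        intdt &= ~(1u << offset);                                   /* ack, like the INTCLR write */
    }
    return 0;                      /* bit 1 stays pending but masked, so it is left alone */
}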
+12 -3
drivers/gpio/gpio-twl4030.c
··· 300 300 if (offset < TWL4030_GPIO_MAX) 301 301 ret = twl4030_set_gpio_direction(offset, 1); 302 302 else 303 - ret = -EINVAL; 303 + ret = -EINVAL; /* LED outputs can't be set as input */ 304 304 305 305 if (!ret) 306 306 priv->direction &= ~BIT(offset); ··· 354 354 static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value) 355 355 { 356 356 struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip); 357 - int ret = -EINVAL; 357 + int ret = 0; 358 358 359 359 mutex_lock(&priv->mutex); 360 - if (offset < TWL4030_GPIO_MAX) 360 + if (offset < TWL4030_GPIO_MAX) { 361 361 ret = twl4030_set_gpio_direction(offset, 0); 362 + if (ret) { 363 + mutex_unlock(&priv->mutex); 364 + return ret; 365 + } 366 + } 367 + 368 + /* 369 + * LED gpios i.e. offset >= TWL4030_GPIO_MAX are always output 370 + */ 362 371 363 372 priv->direction |= BIT(offset); 364 373 mutex_unlock(&priv->mutex);
+1
drivers/gpu/drm/armada/armada_drm.h
··· 103 103 extern const struct drm_mode_config_funcs armada_drm_mode_config_funcs; 104 104 105 105 int armada_fbdev_init(struct drm_device *); 106 + void armada_fbdev_lastclose(struct drm_device *); 106 107 void armada_fbdev_fini(struct drm_device *); 107 108 108 109 int armada_overlay_plane_create(struct drm_device *, unsigned long);
+6 -1
drivers/gpu/drm/armada/armada_drv.c
··· 321 321 DRM_UNLOCKED), 322 322 }; 323 323 324 + static void armada_drm_lastclose(struct drm_device *dev) 325 + { 326 + armada_fbdev_lastclose(dev); 327 + } 328 + 324 329 static const struct file_operations armada_drm_fops = { 325 330 .owner = THIS_MODULE, 326 331 .llseek = no_llseek, ··· 342 337 .open = NULL, 343 338 .preclose = NULL, 344 339 .postclose = NULL, 345 - .lastclose = NULL, 340 + .lastclose = armada_drm_lastclose, 346 341 .unload = armada_drm_unload, 347 342 .get_vblank_counter = drm_vblank_count, 348 343 .enable_vblank = armada_drm_enable_vblank,
+15 -5
drivers/gpu/drm/armada/armada_fbdev.c
··· 105 105 drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth); 106 106 drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height); 107 107 108 - DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08x\n", 109 - dfb->fb.width, dfb->fb.height, 110 - dfb->fb.bits_per_pixel, obj->phys_addr); 108 + DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n", 109 + dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel, 110 + (unsigned long long)obj->phys_addr); 111 111 112 112 return 0; 113 113 ··· 177 177 return ret; 178 178 } 179 179 180 + void armada_fbdev_lastclose(struct drm_device *dev) 181 + { 182 + struct armada_private *priv = dev->dev_private; 183 + 184 + drm_modeset_lock_all(dev); 185 + if (priv->fbdev) 186 + drm_fb_helper_restore_fbdev_mode(priv->fbdev); 187 + drm_modeset_unlock_all(dev); 188 + } 189 + 180 190 void armada_fbdev_fini(struct drm_device *dev) 181 191 { 182 192 struct armada_private *priv = dev->dev_private; ··· 202 192 framebuffer_release(info); 203 193 } 204 194 195 + drm_fb_helper_fini(fbh); 196 + 205 197 if (fbh->fb) 206 198 fbh->fb->funcs->destroy(fbh->fb); 207 - 208 - drm_fb_helper_fini(fbh); 209 199 210 200 priv->fbdev = NULL; 211 201 }
+4 -3
drivers/gpu/drm/armada/armada_gem.c
··· 172 172 obj->dev_addr = obj->linear->start; 173 173 } 174 174 175 - DRM_DEBUG_DRIVER("obj %p phys %#x dev %#x\n", 176 - obj, obj->phys_addr, obj->dev_addr); 175 + DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj, 176 + (unsigned long long)obj->phys_addr, 177 + (unsigned long long)obj->dev_addr); 177 178 178 179 return 0; 179 180 } ··· 558 557 * refcount on the gem object itself. 559 558 */ 560 559 drm_gem_object_reference(obj); 561 - dma_buf_put(buf); 562 560 return obj; 563 561 } 564 562 } ··· 573 573 } 574 574 575 575 dobj->obj.import_attach = attach; 576 + get_dma_buf(buf); 576 577 577 578 /* 578 579 * Don't call dma_buf_map_attachment() here - it maps the
+8
drivers/gpu/drm/drm_edid.c
··· 68 68 #define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6) 69 69 /* Force reduced-blanking timings for detailed modes */ 70 70 #define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7) 71 + /* Force 8bpc */ 72 + #define EDID_QUIRK_FORCE_8BPC (1 << 8) 71 73 72 74 struct detailed_mode_closure { 73 75 struct drm_connector *connector; ··· 130 128 131 129 /* Medion MD 30217 PG */ 132 130 { "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 }, 131 + 132 + /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */ 133 + { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC }, 133 134 }; 134 135 135 136 /* ··· 3439 3434 edid_fixup_preferred(connector, quirks); 3440 3435 3441 3436 drm_add_display_info(edid, &connector->display_info); 3437 + 3438 + if (quirks & EDID_QUIRK_FORCE_8BPC) 3439 + connector->display_info.bpc = 8; 3442 3440 3443 3441 return num_modes; 3444 3442 }
+3 -3
drivers/gpu/drm/drm_stub.c
··· 566 566 if (dev->driver->unload) 567 567 dev->driver->unload(dev); 568 568 err_primary_node: 569 - drm_put_minor(dev->primary); 569 + drm_unplug_minor(dev->primary); 570 570 err_render_node: 571 - drm_put_minor(dev->render); 571 + drm_unplug_minor(dev->render); 572 572 err_control_node: 573 - drm_put_minor(dev->control); 573 + drm_unplug_minor(dev->control); 574 574 err_agp: 575 575 if (dev->driver->bus->agp_destroy) 576 576 dev->driver->bus->agp_destroy(dev);
+11 -9
drivers/gpu/drm/i915/i915_dma.c
··· 83 83 drm_i915_private_t *dev_priv = dev->dev_private; 84 84 struct drm_i915_master_private *master_priv; 85 85 86 + /* 87 + * The dri breadcrumb update races against the drm master disappearing. 88 + * Instead of trying to fix this (this is by far not the only ums issue) 89 + * just don't do the update in kms mode. 90 + */ 91 + if (drm_core_check_feature(dev, DRIVER_MODESET)) 92 + return; 93 + 86 94 if (dev->primary->master) { 87 95 master_priv = dev->primary->master->driver_priv; 88 96 if (master_priv->sarea_priv) ··· 1498 1490 spin_lock_init(&dev_priv->uncore.lock); 1499 1491 spin_lock_init(&dev_priv->mm.object_stat_lock); 1500 1492 mutex_init(&dev_priv->dpio_lock); 1501 - mutex_init(&dev_priv->rps.hw_lock); 1502 1493 mutex_init(&dev_priv->modeset_restore_lock); 1503 1494 1504 - mutex_init(&dev_priv->pc8.lock); 1505 - dev_priv->pc8.requirements_met = false; 1506 - dev_priv->pc8.gpu_idle = false; 1507 - dev_priv->pc8.irqs_disabled = false; 1508 - dev_priv->pc8.enabled = false; 1509 - dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */ 1510 - INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work); 1495 + intel_pm_setup(dev); 1511 1496 1512 1497 intel_display_crc_init(dev); 1513 1498 ··· 1604 1603 } 1605 1604 1606 1605 intel_irq_init(dev); 1607 - intel_pm_init(dev); 1608 1606 intel_uncore_sanitize(dev); 1609 1607 1610 1608 /* Try to make sure MCHBAR is enabled before poking at it */ ··· 1848 1848 1849 1849 void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) 1850 1850 { 1851 + mutex_lock(&dev->struct_mutex); 1851 1852 i915_gem_context_close(dev, file_priv); 1852 1853 i915_gem_release(dev, file_priv); 1854 + mutex_unlock(&dev->struct_mutex); 1853 1855 } 1854 1856 1855 1857 void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
+1
drivers/gpu/drm/i915/i915_drv.c
··· 651 651 intel_modeset_init_hw(dev); 652 652 653 653 drm_modeset_lock_all(dev); 654 + drm_mode_config_reset(dev); 654 655 intel_modeset_setup_hw_state(dev, true); 655 656 drm_modeset_unlock_all(dev); 656 657
+6 -3
drivers/gpu/drm/i915/i915_drv.h
··· 1755 1755 #define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile) 1756 1756 #define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \ 1757 1757 ((dev)->pdev->device & 0xFF00) == 0x0C00) 1758 - #define IS_ULT(dev) (IS_HASWELL(dev) && \ 1758 + #define IS_BDW_ULT(dev) (IS_BROADWELL(dev) && \ 1759 + (((dev)->pdev->device & 0xf) == 0x2 || \ 1760 + ((dev)->pdev->device & 0xf) == 0x6 || \ 1761 + ((dev)->pdev->device & 0xf) == 0xe)) 1762 + #define IS_HSW_ULT(dev) (IS_HASWELL(dev) && \ 1759 1763 ((dev)->pdev->device & 0xFF00) == 0x0A00) 1764 + #define IS_ULT(dev) (IS_HSW_ULT(dev) || IS_BDW_ULT(dev)) 1760 1765 #define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \ 1761 1766 ((dev)->pdev->device & 0x00F0) == 0x0020) 1762 1767 #define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary) ··· 1906 1901 void i915_handle_error(struct drm_device *dev, bool wedged); 1907 1902 1908 1903 extern void intel_irq_init(struct drm_device *dev); 1909 - extern void intel_pm_init(struct drm_device *dev); 1910 1904 extern void intel_hpd_init(struct drm_device *dev); 1911 - extern void intel_pm_init(struct drm_device *dev); 1912 1905 1913 1906 extern void intel_uncore_sanitize(struct drm_device *dev); 1914 1907 extern void intel_uncore_early_sanitize(struct drm_device *dev);
+12 -4
drivers/gpu/drm/i915/i915_gem_context.c
··· 347 347 { 348 348 struct drm_i915_file_private *file_priv = file->driver_priv; 349 349 350 - mutex_lock(&dev->struct_mutex); 351 350 idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL); 352 351 idr_destroy(&file_priv->context_idr); 353 - mutex_unlock(&dev->struct_mutex); 354 352 } 355 353 356 354 static struct i915_hw_context * ··· 421 423 if (ret) 422 424 return ret; 423 425 424 - /* Clear this page out of any CPU caches for coherent swap-in/out. Note 426 + /* 427 + * Pin can switch back to the default context if we end up calling into 428 + * evict_everything - as a last ditch gtt defrag effort that also 429 + * switches to the default context. Hence we need to reload from here. 430 + */ 431 + from = ring->last_context; 432 + 433 + /* 434 + * Clear this page out of any CPU caches for coherent swap-in/out. Note 425 435 * that thanks to write = false in this call and us not setting any gpu 426 436 * write domains when putting a context object onto the active list 427 437 * (when switching away from it), this won't block. 428 - * XXX: We need a real interface to do this instead of trickery. */ 438 + * 439 + * XXX: We need a real interface to do this instead of trickery. 440 + */ 429 441 ret = i915_gem_object_set_to_gtt_domain(to->obj, false); 430 442 if (ret) { 431 443 i915_gem_object_unpin(to->obj);
+11 -3
drivers/gpu/drm/i915/i915_gem_evict.c
··· 88 88 } else 89 89 drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level); 90 90 91 + search_again: 91 92 /* First see if there is a large enough contiguous idle region... */ 92 93 list_for_each_entry(vma, &vm->inactive_list, mm_list) { 93 94 if (mark_free(vma, &unwind_list)) ··· 116 115 list_del_init(&vma->exec_list); 117 116 } 118 117 119 - /* We expect the caller to unpin, evict all and try again, or give up. 120 - * So calling i915_gem_evict_vm() is unnecessary. 118 + /* Can we unpin some objects such as idle hw contents, 119 + * or pending flips? 121 120 */ 122 - return -ENOSPC; 121 + ret = nonblocking ? -ENOSPC : i915_gpu_idle(dev); 122 + if (ret) 123 + return ret; 124 + 125 + /* Only idle the GPU and repeat the search once */ 126 + i915_gem_retire_requests(dev); 127 + nonblocking = true; 128 + goto search_again; 123 129 124 130 found: 125 131 /* drm_mm doesn't allow any other other operations while
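Note: instead of returning -ENOSPC on the first miss, the eviction path now idles the GPU, retires requests and repeats the search exactly once when the caller allows blocking. A generic sketch of that retry-once shape (names and helpers are illustrative, not the i915 code):

#include <stdbool.h>
#include <stdio.h>

static int idle_count;

static bool try_find_space(void)  { return idle_count > 0; } /* fails until we idled once */
static void idle_and_retire(void) { idle_count++; }

static int evict_something(bool nonblocking)
{
search_again:
    if (try_find_space())
        return 0;

    if (nonblocking)
        return -1;          /* caller said "don't wait", give up: -ENOSPC */

    idle_and_retire();      /* expensive fallback, done at most once */
    nonblocking = true;
    goto search_again;
}

int main(void)
{
    printf("nonblocking: %d\n", evict_something(true));  /* -1 */
    printf("blocking:    %d\n", evict_something(false)); /*  0 after one retry */
    return 0;
}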
+7 -2
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 337 337 kfree(ppgtt->gen8_pt_dma_addr[i]); 338 338 } 339 339 340 - __free_pages(ppgtt->gen8_pt_pages, ppgtt->num_pt_pages << PAGE_SHIFT); 341 - __free_pages(ppgtt->pd_pages, ppgtt->num_pd_pages << PAGE_SHIFT); 340 + __free_pages(ppgtt->gen8_pt_pages, get_order(ppgtt->num_pt_pages << PAGE_SHIFT)); 341 + __free_pages(ppgtt->pd_pages, get_order(ppgtt->num_pd_pages << PAGE_SHIFT)); 342 342 } 343 343 344 344 /** ··· 1241 1241 bdw_gmch_ctl &= BDW_GMCH_GGMS_MASK; 1242 1242 if (bdw_gmch_ctl) 1243 1243 bdw_gmch_ctl = 1 << bdw_gmch_ctl; 1244 + if (bdw_gmch_ctl > 4) { 1245 + WARN_ON(!i915_preliminary_hw_support); 1246 + return 4<<20; 1247 + } 1248 + 1244 1249 return bdw_gmch_ctl << 20; 1245 1250 } 1246 1251
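Note: __free_pages() takes an allocation order (log2 of the number of pages), not a byte count, hence the get_order() wrappers around the shifted sizes. Small sketch of the arithmetic with a simplified stand-in for get_order():

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* stand-in for the kernel's get_order(): smallest order whose pages cover size */
static int get_order_demo(unsigned long size)
{
    int order = 0;

    size = (size - 1) >> PAGE_SHIFT;
    while (size) {
        order++;
        size >>= 1;
    }
    return order;
}

int main(void)
{
    unsigned long num_pt_pages = 4;
    unsigned long bytes = num_pt_pages << PAGE_SHIFT;                     /* 16 KiB */

    printf("bytes passed before the fix = %lu\n", bytes);
    printf("order __free_pages() wants  = %d\n", get_order_demo(bytes)); /* 2 -> 4 pages */
    return 0;
}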
+4 -3
drivers/gpu/drm/i915/intel_display.c
··· 9135 9135 if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5) 9136 9136 PIPE_CONF_CHECK_I(pipe_bpp); 9137 9137 9138 - if (!IS_HASWELL(dev)) { 9138 + if (!HAS_DDI(dev)) { 9139 9139 PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.crtc_clock); 9140 9140 PIPE_CONF_CHECK_CLOCK_FUZZY(port_clock); 9141 9141 } ··· 11036 11036 } 11037 11037 11038 11038 intel_modeset_check_state(dev); 11039 - 11040 - drm_mode_config_reset(dev); 11041 11039 } 11042 11040 11043 11041 void intel_modeset_gem_init(struct drm_device *dev) ··· 11044 11046 11045 11047 intel_setup_overlay(dev); 11046 11048 11049 + drm_modeset_lock_all(dev); 11050 + drm_mode_config_reset(dev); 11047 11051 intel_modeset_setup_hw_state(dev, false); 11052 + drm_modeset_unlock_all(dev); 11048 11053 } 11049 11054 11050 11055 void intel_modeset_cleanup(struct drm_device *dev)
+1
drivers/gpu/drm/i915/intel_drv.h
··· 821 821 uint32_t sprite_width, int pixel_size, 822 822 bool enabled, bool scaled); 823 823 void intel_init_pm(struct drm_device *dev); 824 + void intel_pm_setup(struct drm_device *dev); 824 825 bool intel_fbc_enabled(struct drm_device *dev); 825 826 void intel_update_fbc(struct drm_device *dev); 826 827 void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
+23 -3
drivers/gpu/drm/i915/intel_panel.c
··· 451 451 452 452 spin_lock_irqsave(&dev_priv->backlight.lock, flags); 453 453 454 - if (HAS_PCH_SPLIT(dev)) { 454 + if (IS_BROADWELL(dev)) { 455 + val = I915_READ(BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK; 456 + } else if (HAS_PCH_SPLIT(dev)) { 455 457 val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; 456 458 } else { 457 459 if (IS_VALLEYVIEW(dev)) ··· 481 479 return val; 482 480 } 483 481 482 + static void intel_bdw_panel_set_backlight(struct drm_device *dev, u32 level) 483 + { 484 + struct drm_i915_private *dev_priv = dev->dev_private; 485 + u32 val = I915_READ(BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK; 486 + I915_WRITE(BLC_PWM_PCH_CTL2, val | level); 487 + } 488 + 484 489 static void intel_pch_panel_set_backlight(struct drm_device *dev, u32 level) 485 490 { 486 491 struct drm_i915_private *dev_priv = dev->dev_private; ··· 505 496 DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level); 506 497 level = intel_panel_compute_brightness(dev, pipe, level); 507 498 508 - if (HAS_PCH_SPLIT(dev)) 499 + if (IS_BROADWELL(dev)) 500 + return intel_bdw_panel_set_backlight(dev, level); 501 + else if (HAS_PCH_SPLIT(dev)) 509 502 return intel_pch_panel_set_backlight(dev, level); 510 503 511 504 if (is_backlight_combination_mode(dev)) { ··· 677 666 POSTING_READ(reg); 678 667 I915_WRITE(reg, tmp | BLM_PWM_ENABLE); 679 668 680 - if (HAS_PCH_SPLIT(dev) && 669 + if (IS_BROADWELL(dev)) { 670 + /* 671 + * Broadwell requires PCH override to drive the PCH 672 + * backlight pin. The above will configure the CPU 673 + * backlight pin, which we don't plan to use. 674 + */ 675 + tmp = I915_READ(BLC_PWM_PCH_CTL1); 676 + tmp |= BLM_PCH_OVERRIDE_ENABLE | BLM_PCH_PWM_ENABLE; 677 + I915_WRITE(BLC_PWM_PCH_CTL1, tmp); 678 + } else if (HAS_PCH_SPLIT(dev) && 681 679 !(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) { 682 680 tmp = I915_READ(BLC_PWM_PCH_CTL1); 683 681 tmp |= BLM_PCH_PWM_ENABLE;
+27 -2
drivers/gpu/drm/i915/intel_pm.c
··· 5685 5685 { 5686 5686 struct drm_i915_private *dev_priv = dev->dev_private; 5687 5687 bool is_enabled, enable_requested; 5688 + unsigned long irqflags; 5688 5689 uint32_t tmp; 5689 5690 5690 5691 tmp = I915_READ(HSW_PWR_WELL_DRIVER); ··· 5703 5702 HSW_PWR_WELL_STATE_ENABLED), 20)) 5704 5703 DRM_ERROR("Timeout enabling power well\n"); 5705 5704 } 5705 + 5706 + if (IS_BROADWELL(dev)) { 5707 + spin_lock_irqsave(&dev_priv->irq_lock, irqflags); 5708 + I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_B), 5709 + dev_priv->de_irq_mask[PIPE_B]); 5710 + I915_WRITE(GEN8_DE_PIPE_IER(PIPE_B), 5711 + ~dev_priv->de_irq_mask[PIPE_B] | 5712 + GEN8_PIPE_VBLANK); 5713 + I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_C), 5714 + dev_priv->de_irq_mask[PIPE_C]); 5715 + I915_WRITE(GEN8_DE_PIPE_IER(PIPE_C), 5716 + ~dev_priv->de_irq_mask[PIPE_C] | 5717 + GEN8_PIPE_VBLANK); 5718 + POSTING_READ(GEN8_DE_PIPE_IER(PIPE_C)); 5719 + spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); 5720 + } 5706 5721 } else { 5707 5722 if (enable_requested) { 5708 - unsigned long irqflags; 5709 5723 enum pipe p; 5710 5724 5711 5725 I915_WRITE(HSW_PWR_WELL_DRIVER, 0); ··· 6146 6130 return val; 6147 6131 } 6148 6132 6149 - void intel_pm_init(struct drm_device *dev) 6133 + void intel_pm_setup(struct drm_device *dev) 6150 6134 { 6151 6135 struct drm_i915_private *dev_priv = dev->dev_private; 6152 6136 6137 + mutex_init(&dev_priv->rps.hw_lock); 6138 + 6139 + mutex_init(&dev_priv->pc8.lock); 6140 + dev_priv->pc8.requirements_met = false; 6141 + dev_priv->pc8.gpu_idle = false; 6142 + dev_priv->pc8.irqs_disabled = false; 6143 + dev_priv->pc8.enabled = false; 6144 + dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */ 6145 + INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work); 6153 6146 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work, 6154 6147 intel_gen6_powersave_work); 6155 6148 }
+1
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 965 965 } else if (IS_GEN6(ring->dev)) { 966 966 mmio = RING_HWS_PGA_GEN6(ring->mmio_base); 967 967 } else { 968 + /* XXX: gen8 returns to sanity */ 968 969 mmio = RING_HWS_PGA(ring->mmio_base); 969 970 } 970 971
+1
drivers/gpu/drm/i915/intel_uncore.c
··· 784 784 int intel_gpu_reset(struct drm_device *dev) 785 785 { 786 786 switch (INTEL_INFO(dev)->gen) { 787 + case 8: 787 788 case 7: 788 789 case 6: return gen6_do_reset(dev); 789 790 case 5: return ironlake_do_reset(dev);
+6
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 858 858 if (nouveau_runtime_pm == 0) 859 859 return -EINVAL; 860 860 861 + /* are we optimus enabled? */ 862 + if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) { 863 + DRM_DEBUG_DRIVER("failing to power off - not optimus\n"); 864 + return -EINVAL; 865 + } 866 + 861 867 nv_debug_level(SILENT); 862 868 drm_kms_helper_poll_disable(drm_dev); 863 869 vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
+3 -1
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1196 1196 } else if ((rdev->family == CHIP_TAHITI) || 1197 1197 (rdev->family == CHIP_PITCAIRN)) 1198 1198 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P8_32x32_8x16); 1199 - else if (rdev->family == CHIP_VERDE) 1199 + else if ((rdev->family == CHIP_VERDE) || 1200 + (rdev->family == CHIP_OLAND) || 1201 + (rdev->family == CHIP_HAINAN)) /* for completeness. HAINAN has no display hw */ 1200 1202 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P4_8x16); 1201 1203 1202 1204 switch (radeon_crtc->crtc_id) {
+1 -1
drivers/gpu/drm/radeon/cik_sdma.c
··· 458 458 radeon_ring_write(ring, 0); /* src/dst endian swap */ 459 459 radeon_ring_write(ring, src_offset & 0xffffffff); 460 460 radeon_ring_write(ring, upper_32_bits(src_offset) & 0xffffffff); 461 - radeon_ring_write(ring, dst_offset & 0xfffffffc); 461 + radeon_ring_write(ring, dst_offset & 0xffffffff); 462 462 radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xffffffff); 463 463 src_offset += cur_size_in_bytes; 464 464 dst_offset += cur_size_in_bytes;
+2 -2
drivers/gpu/drm/radeon/radeon_asic.c
··· 2021 2021 .hdmi_setmode = &evergreen_hdmi_setmode, 2022 2022 }, 2023 2023 .copy = { 2024 - .blit = NULL, 2024 + .blit = &cik_copy_cpdma, 2025 2025 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2026 2026 .dma = &cik_copy_dma, 2027 2027 .dma_ring_index = R600_RING_TYPE_DMA_INDEX, ··· 2122 2122 .hdmi_setmode = &evergreen_hdmi_setmode, 2123 2123 }, 2124 2124 .copy = { 2125 - .blit = NULL, 2125 + .blit = &cik_copy_cpdma, 2126 2126 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2127 2127 .dma = &cik_copy_dma, 2128 2128 .dma_ring_index = R600_RING_TYPE_DMA_INDEX,
-10
drivers/gpu/drm/radeon/radeon_drv.c
··· 508 508 #endif 509 509 }; 510 510 511 - 512 - static void 513 - radeon_pci_shutdown(struct pci_dev *pdev) 514 - { 515 - struct drm_device *dev = pci_get_drvdata(pdev); 516 - 517 - radeon_driver_unload_kms(dev); 518 - } 519 - 520 511 static struct drm_driver kms_driver = { 521 512 .driver_features = 522 513 DRIVER_USE_AGP | ··· 577 586 .probe = radeon_pci_probe, 578 587 .remove = radeon_pci_remove, 579 588 .driver.pm = &radeon_pm_ops, 580 - .shutdown = radeon_pci_shutdown, 581 589 }; 582 590 583 591 static int __init radeon_init(void)
+10
drivers/gpu/drm/radeon/rs690.c
··· 162 162 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 163 163 base = G_000100_MC_FB_START(base) << 16; 164 164 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 165 + /* Some boards seem to be configured for 128MB of sideport memory, 166 + * but really only have 64MB. Just skip the sideport and use 167 + * UMA memory. 168 + */ 169 + if (rdev->mc.igp_sideport_enabled && 170 + (rdev->mc.real_vram_size == (384 * 1024 * 1024))) { 171 + base += 128 * 1024 * 1024; 172 + rdev->mc.real_vram_size -= 128 * 1024 * 1024; 173 + rdev->mc.mc_vram_size = rdev->mc.real_vram_size; 174 + } 165 175 166 176 /* Use K8 direct mapping for fast fb access. */ 167 177 rdev->fastfb_working = false;
+3 -3
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 169 169 } 170 170 171 171 page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) + 172 - drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff; 173 - page_last = vma_pages(vma) + 174 - drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff; 172 + vma->vm_pgoff - drm_vma_node_start(&bo->vma_node); 173 + page_last = vma_pages(vma) + vma->vm_pgoff - 174 + drm_vma_node_start(&bo->vma_node); 175 175 176 176 if (unlikely(page_offset >= bo->num_pages)) { 177 177 retval = VM_FAULT_SIGBUS;
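Note: vm_pgoff and the object's drm_vma node start are page indices in the same mmap offset space, so the page where the mapping begins inside the buffer object is vm_pgoff - node_start; the old code had the operands the other way round, which only works out when the mapping covers the object from its start. Worked example with made-up numbers:

#include <stdio.h>

int main(void)
{
    long node_start = 1000;  /* BO begins at page index 1000 in the mmap space */
    long vm_pgoff   = 1004;  /* this mapping starts 4 pages into the BO */
    long fault_page = 2;     /* the fault lands 2 pages into the VMA */

    long old_calc = fault_page + node_start - vm_pgoff; /* -2: bogus offset */
    long new_calc = fault_page + vm_pgoff - node_start; /*  6: page 6 of the BO */

    printf("old: %ld  new: %ld\n", old_calc, new_calc);
    return 0;
}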
+3
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
··· 68 68 SVGA_FIFO_3D_HWVERSION)); 69 69 break; 70 70 } 71 + case DRM_VMW_PARAM_MAX_SURF_MEMORY: 72 + param->value = dev_priv->memory_size; 73 + break; 71 74 default: 72 75 DRM_ERROR("Illegal vmwgfx get param request: %d\n", 73 76 param->param);
+14 -2
drivers/iio/adc/ad7887.c
··· 200 200 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 201 201 .address = 1, 202 202 .scan_index = 1, 203 - .scan_type = IIO_ST('u', 12, 16, 0), 203 + .scan_type = { 204 + .sign = 'u', 205 + .realbits = 12, 206 + .storagebits = 16, 207 + .shift = 0, 208 + .endianness = IIO_BE, 209 + }, 204 210 }, 205 211 .channel[1] = { 206 212 .type = IIO_VOLTAGE, ··· 216 210 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 217 211 .address = 0, 218 212 .scan_index = 0, 219 - .scan_type = IIO_ST('u', 12, 16, 0), 213 + .scan_type = { 214 + .sign = 'u', 215 + .realbits = 12, 216 + .storagebits = 16, 217 + .shift = 0, 218 + .endianness = IIO_BE, 219 + }, 220 220 }, 221 221 .channel[2] = IIO_CHAN_SOFT_TIMESTAMP(2), 222 222 .int_vref_mv = 2500,
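Note: the positional IIO_ST() initializer is replaced with a designated initializer that spells out every field, including the endianness being set here. Simplified sketch of that initializer style (the struct below is a stand-in, not the IIO definition):

#include <stdio.h>

enum demo_endian { DEMO_ENDIAN_CPU, DEMO_ENDIAN_BE };

struct demo_scan_type {
    char sign;
    int  realbits;
    int  storagebits;
    int  shift;
    enum demo_endian endianness;
};

int main(void)
{
    struct demo_scan_type st = {
        .sign        = 'u',
        .realbits    = 12,
        .storagebits = 16,
        .shift       = 0,
        .endianness  = DEMO_ENDIAN_BE, /* explicit and visible at the declaration site */
    };

    printf("%c%d/%d shift=%d be=%d\n", st.sign, st.realbits,
           st.storagebits, st.shift, st.endianness == DEMO_ENDIAN_BE);
    return 0;
}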
+6 -1
drivers/iio/imu/adis16400_core.c
··· 651 651 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 652 652 .address = ADIS16448_BARO_OUT, 653 653 .scan_index = ADIS16400_SCAN_BARO, 654 - .scan_type = IIO_ST('s', 16, 16, 0), 654 + .scan_type = { 655 + .sign = 's', 656 + .realbits = 16, 657 + .storagebits = 16, 658 + .endianness = IIO_BE, 659 + }, 655 660 }, 656 661 ADIS16400_TEMP_CHAN(ADIS16448_TEMP_OUT, 12), 657 662 IIO_CHAN_SOFT_TIMESTAMP(11)
+1 -1
drivers/iio/light/cm36651.c
··· 387 387 return -EINVAL; 388 388 } 389 389 390 - return IIO_VAL_INT_PLUS_MICRO; 390 + return IIO_VAL_INT; 391 391 } 392 392 393 393 static int cm36651_write_int_time(struct cm36651_data *cm36651,
+18 -8
drivers/infiniband/ulp/isert/ib_isert.c
··· 207 207 isert_conn->conn_rx_descs = NULL; 208 208 } 209 209 210 + static void isert_cq_tx_work(struct work_struct *); 210 211 static void isert_cq_tx_callback(struct ib_cq *, void *); 212 + static void isert_cq_rx_work(struct work_struct *); 211 213 static void isert_cq_rx_callback(struct ib_cq *, void *); 212 214 213 215 static int ··· 261 259 cq_desc[i].device = device; 262 260 cq_desc[i].cq_index = i; 263 261 262 + INIT_WORK(&cq_desc[i].cq_rx_work, isert_cq_rx_work); 264 263 device->dev_rx_cq[i] = ib_create_cq(device->ib_device, 265 264 isert_cq_rx_callback, 266 265 isert_cq_event_callback, 267 266 (void *)&cq_desc[i], 268 267 ISER_MAX_RX_CQ_LEN, i); 269 - if (IS_ERR(device->dev_rx_cq[i])) 268 + if (IS_ERR(device->dev_rx_cq[i])) { 269 + ret = PTR_ERR(device->dev_rx_cq[i]); 270 + device->dev_rx_cq[i] = NULL; 270 271 goto out_cq; 272 + } 271 273 274 + INIT_WORK(&cq_desc[i].cq_tx_work, isert_cq_tx_work); 272 275 device->dev_tx_cq[i] = ib_create_cq(device->ib_device, 273 276 isert_cq_tx_callback, 274 277 isert_cq_event_callback, 275 278 (void *)&cq_desc[i], 276 279 ISER_MAX_TX_CQ_LEN, i); 277 - if (IS_ERR(device->dev_tx_cq[i])) 280 + if (IS_ERR(device->dev_tx_cq[i])) { 281 + ret = PTR_ERR(device->dev_tx_cq[i]); 282 + device->dev_tx_cq[i] = NULL; 283 + goto out_cq; 284 + } 285 + 286 + ret = ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP); 287 + if (ret) 278 288 goto out_cq; 279 289 280 - if (ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP)) 281 - goto out_cq; 282 - 283 - if (ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP)) 290 + ret = ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP); 291 + if (ret) 284 292 goto out_cq; 285 293 } 286 294 ··· 1736 1724 { 1737 1725 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1738 1726 1739 - INIT_WORK(&cq_desc->cq_tx_work, isert_cq_tx_work); 1740 1727 queue_work(isert_comp_wq, &cq_desc->cq_tx_work); 1741 1728 } 1742 1729 ··· 1779 1768 { 1780 1769 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1781 1770 1782 - INIT_WORK(&cq_desc->cq_rx_work, isert_cq_rx_work); 1783 1771 queue_work(isert_rx_wq, &cq_desc->cq_rx_work); 1784 1772 } 1785 1773
+2 -1
drivers/net/can/usb/ems_usb.c
··· 625 625 usb_unanchor_urb(urb); 626 626 usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf, 627 627 urb->transfer_dma); 628 + usb_free_urb(urb); 628 629 break; 629 630 } 630 631 ··· 799 798 * allowed (MAX_TX_URBS). 800 799 */ 801 800 if (!context) { 802 - usb_unanchor_urb(urb); 803 801 usb_free_coherent(dev->udev, size, buf, urb->transfer_dma); 802 + usb_free_urb(urb); 804 803 805 804 netdev_warn(netdev, "couldn't find free context\n"); 806 805
+3
drivers/net/can/usb/peak_usb/pcan_usb_pro.c
··· 927 927 /* set LED in default state (end of init phase) */ 928 928 pcan_usb_pro_set_led(dev, 0, 1); 929 929 930 + kfree(bi); 931 + kfree(fi); 932 + 930 933 return 0; 931 934 932 935 err_out:
+21 -26
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 447 447 448 448 qlcnic_83xx_poll_process_aen(adapter); 449 449 450 - if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) { 451 - ahw->diag_cnt++; 450 + if (ahw->diag_test) { 451 + if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) 452 + ahw->diag_cnt++; 452 453 qlcnic_83xx_enable_legacy_msix_mbx_intr(adapter); 453 454 return IRQ_HANDLED; 454 455 } ··· 1346 1345 } 1347 1346 1348 1347 if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) { 1349 - /* disable and free mailbox interrupt */ 1350 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) { 1351 - qlcnic_83xx_enable_mbx_poll(adapter); 1352 - qlcnic_83xx_free_mbx_intr(adapter); 1353 - } 1354 1348 adapter->ahw->loopback_state = 0; 1355 1349 adapter->ahw->hw_ops->setup_link_event(adapter, 1); 1356 1350 } ··· 1359 1363 { 1360 1364 struct qlcnic_adapter *adapter = netdev_priv(netdev); 1361 1365 struct qlcnic_host_sds_ring *sds_ring; 1362 - int ring, err; 1366 + int ring; 1363 1367 1364 1368 clear_bit(__QLCNIC_DEV_UP, &adapter->state); 1365 1369 if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { 1366 1370 for (ring = 0; ring < adapter->drv_sds_rings; ring++) { 1367 1371 sds_ring = &adapter->recv_ctx->sds_rings[ring]; 1368 - qlcnic_83xx_disable_intr(adapter, sds_ring); 1369 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) 1370 - qlcnic_83xx_enable_mbx_poll(adapter); 1372 + if (adapter->flags & QLCNIC_MSIX_ENABLED) 1373 + qlcnic_83xx_disable_intr(adapter, sds_ring); 1371 1374 } 1372 1375 } 1373 1376 1374 1377 qlcnic_fw_destroy_ctx(adapter); 1375 1378 qlcnic_detach(adapter); 1376 1379 1377 - if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) { 1378 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) { 1379 - err = qlcnic_83xx_setup_mbx_intr(adapter); 1380 - qlcnic_83xx_disable_mbx_poll(adapter); 1381 - if (err) { 1382 - dev_err(&adapter->pdev->dev, 1383 - "%s: failed to setup mbx interrupt\n", 1384 - __func__); 1385 - goto out; 1386 - } 1387 - } 1388 - } 1389 1380 adapter->ahw->diag_test = 0; 1390 1381 adapter->drv_sds_rings = drv_sds_rings; 1391 1382 ··· 1382 1399 if (netif_running(netdev)) 1383 1400 __qlcnic_up(adapter, netdev); 1384 1401 1385 - if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST && 1386 - !(adapter->flags & QLCNIC_MSIX_ENABLED)) 1387 - qlcnic_83xx_disable_mbx_poll(adapter); 1388 1402 out: 1389 1403 netif_device_attach(netdev); 1390 1404 } ··· 3734 3754 return; 3735 3755 } 3736 3756 3757 + static inline void qlcnic_dump_mailbox_registers(struct qlcnic_adapter *adapter) 3758 + { 3759 + struct qlcnic_hardware_context *ahw = adapter->ahw; 3760 + u32 offset; 3761 + 3762 + offset = QLCRDX(ahw, QLCNIC_DEF_INT_MASK); 3763 + dev_info(&adapter->pdev->dev, "Mbx interrupt mask=0x%x, Mbx interrupt enable=0x%x, Host mbx control=0x%x, Fw mbx control=0x%x", 3764 + readl(ahw->pci_base0 + offset), 3765 + QLCRDX(ahw, QLCNIC_MBX_INTR_ENBL), 3766 + QLCRDX(ahw, QLCNIC_HOST_MBX_CTRL), 3767 + QLCRDX(ahw, QLCNIC_FW_MBX_CTRL)); 3768 + } 3769 + 3737 3770 static void qlcnic_83xx_mailbox_worker(struct work_struct *work) 3738 3771 { 3739 3772 struct qlcnic_mailbox *mbx = container_of(work, struct qlcnic_mailbox, ··· 3791 3798 __func__, cmd->cmd_op, cmd->type, ahw->pci_func, 3792 3799 ahw->op_mode); 3793 3800 clear_bit(QLC_83XX_MBX_READY, &mbx->status); 3801 + qlcnic_dump_mailbox_registers(adapter); 3802 + qlcnic_83xx_get_mbx_data(adapter, cmd); 3794 3803 qlcnic_dump_mbx(adapter, cmd); 3795 3804 qlcnic_83xx_idc_request_reset(adapter, 3796 3805 QLCNIC_FORCE_FW_DUMP_KEY);
+1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
··· 662 662 pci_channel_state_t); 663 663 pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *); 664 664 void qlcnic_83xx_io_resume(struct pci_dev *); 665 + void qlcnic_83xx_stop_hw(struct qlcnic_adapter *); 665 666 #endif
+41 -26
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 740 740 adapter->ahw->idc.err_code = -EIO; 741 741 dev_err(&adapter->pdev->dev, 742 742 "%s: Device in unknown state\n", __func__); 743 + clear_bit(__QLCNIC_RESETTING, &adapter->state); 743 744 return 0; 744 745 } 745 746 ··· 819 818 struct qlcnic_hardware_context *ahw = adapter->ahw; 820 819 struct qlcnic_mailbox *mbx = ahw->mailbox; 821 820 int ret = 0; 822 - u32 owner; 823 821 u32 val; 824 822 825 823 /* Perform NIC configuration based ready state entry actions */ ··· 848 848 set_bit(__QLCNIC_RESETTING, &adapter->state); 849 849 qlcnic_83xx_idc_enter_need_reset_state(adapter, 1); 850 850 } else { 851 - owner = qlcnic_83xx_idc_find_reset_owner_id(adapter); 852 - if (ahw->pci_func == owner) 853 - qlcnic_dump_fw(adapter); 851 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 852 + __func__); 853 + qlcnic_83xx_idc_enter_failed_state(adapter, 1); 854 854 } 855 855 return -EIO; 856 856 } ··· 948 948 return 0; 949 949 } 950 950 951 - static int qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter) 951 + static void qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter) 952 952 { 953 - dev_err(&adapter->pdev->dev, "%s: please restart!!\n", __func__); 954 - clear_bit(__QLCNIC_RESETTING, &adapter->state); 955 - adapter->ahw->idc.err_code = -EIO; 953 + struct qlcnic_hardware_context *ahw = adapter->ahw; 954 + u32 val, owner; 956 955 957 - return 0; 956 + val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 957 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 958 + owner = qlcnic_83xx_idc_find_reset_owner_id(adapter); 959 + if (ahw->pci_func == owner) { 960 + qlcnic_83xx_stop_hw(adapter); 961 + qlcnic_dump_fw(adapter); 962 + } 963 + } 964 + 965 + netdev_warn(adapter->netdev, "%s: Reboot will be required to recover the adapter!!\n", 966 + __func__); 967 + clear_bit(__QLCNIC_RESETTING, &adapter->state); 968 + ahw->idc.err_code = -EIO; 969 + 970 + return; 958 971 } 959 972 960 973 static int qlcnic_83xx_idc_quiesce_state(struct qlcnic_adapter *adapter) ··· 1075 1062 } 1076 1063 adapter->ahw->idc.prev_state = adapter->ahw->idc.curr_state; 1077 1064 qlcnic_83xx_periodic_tasks(adapter); 1078 - 1079 - /* Do not reschedule if firmaware is in hanged state and auto 1080 - * recovery is disabled 1081 - */ 1082 - if ((adapter->flags & QLCNIC_FW_HANG) && !qlcnic_auto_fw_reset) 1083 - return; 1084 1065 1085 1066 /* Re-schedule the function */ 1086 1067 if (test_bit(QLC_83XX_MODULE_LOADED, &adapter->ahw->idc.status)) ··· 1226 1219 } 1227 1220 1228 1221 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 1229 - if ((val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) || 1230 - !qlcnic_auto_fw_reset) { 1231 - dev_err(&adapter->pdev->dev, 1232 - "%s:failed, device in non reset mode\n", __func__); 1222 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 1223 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 1224 + __func__); 1225 + qlcnic_83xx_idc_enter_failed_state(adapter, 0); 1233 1226 qlcnic_83xx_unlock_driver(adapter); 1234 1227 return; 1235 1228 } ··· 1261 1254 if (size & 0xF) 1262 1255 size = (size + 16) & ~0xF; 1263 1256 1264 - p_cache = kzalloc(size, GFP_KERNEL); 1257 + p_cache = vzalloc(size); 1265 1258 if (p_cache == NULL) 1266 1259 return -ENOMEM; 1267 1260 1268 1261 ret = qlcnic_83xx_lockless_flash_read32(adapter, src, p_cache, 1269 1262 size / sizeof(u32)); 1270 1263 if (ret) { 1271 - kfree(p_cache); 1264 + vfree(p_cache); 1272 1265 return ret; 1273 1266 } 1274 1267 /* 16 byte write to MS memory */ 1275 1268 ret = 
qlcnic_83xx_ms_mem_write128(adapter, dest, (u32 *)p_cache, 1276 1269 size / 16); 1277 1270 if (ret) { 1278 - kfree(p_cache); 1271 + vfree(p_cache); 1279 1272 return ret; 1280 1273 } 1281 - kfree(p_cache); 1274 + vfree(p_cache); 1282 1275 1283 1276 return ret; 1284 1277 } ··· 1946 1939 p_dev->ahw->reset.seq_index = index; 1947 1940 } 1948 1941 1949 - static void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev) 1942 + void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev) 1950 1943 { 1951 1944 p_dev->ahw->reset.seq_index = 0; 1952 1945 ··· 2001 1994 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 2002 1995 if (!(val & QLC_83XX_IDC_GRACEFULL_RESET)) 2003 1996 qlcnic_dump_fw(adapter); 1997 + 1998 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 1999 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 2000 + __func__); 2001 + qlcnic_83xx_idc_enter_failed_state(adapter, 1); 2002 + return err; 2003 + } 2004 + 2004 2005 qlcnic_83xx_init_hw(adapter); 2005 2006 2006 2007 if (qlcnic_83xx_copy_bootloader(adapter)) ··· 2088 2073 ahw->nic_mode = QLCNIC_DEFAULT_MODE; 2089 2074 adapter->nic_ops->init_driver = qlcnic_83xx_init_default_driver; 2090 2075 ahw->idc.state_entry = qlcnic_83xx_idc_ready_state_entry; 2091 - adapter->max_sds_rings = ahw->max_rx_ques; 2092 - adapter->max_tx_rings = ahw->max_tx_ques; 2076 + adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS; 2077 + adapter->max_tx_rings = QLCNIC_MAX_TX_RINGS; 2093 2078 } else { 2094 2079 return -EIO; 2095 2080 }
+8 -11
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 667 667 static int qlcnic_validate_ring_count(struct qlcnic_adapter *adapter, 668 668 u8 rx_ring, u8 tx_ring) 669 669 { 670 + if (rx_ring == 0 || tx_ring == 0) 671 + return -EINVAL; 672 + 670 673 if (rx_ring != 0) { 671 674 if (rx_ring > adapter->max_sds_rings) { 672 - netdev_err(adapter->netdev, "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n", 675 + netdev_err(adapter->netdev, 676 + "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n", 673 677 rx_ring, adapter->max_sds_rings); 674 678 return -EINVAL; 675 679 } 676 680 } 677 681 678 682 if (tx_ring != 0) { 679 - if (qlcnic_82xx_check(adapter) && 680 - (tx_ring > adapter->max_tx_rings)) { 683 + if (tx_ring > adapter->max_tx_rings) { 681 684 netdev_err(adapter->netdev, 682 685 "Invalid ring count, Tx ring count %d should not be greater than max %d driver Tx rings.\n", 683 686 tx_ring, adapter->max_tx_rings); 684 687 return -EINVAL; 685 - } 686 - 687 - if (qlcnic_83xx_check(adapter) && 688 - (tx_ring > QLCNIC_SINGLE_RING)) { 689 - netdev_err(adapter->netdev, 690 - "Invalid ring count, Tx ring count %d should not be greater than %d driver Tx rings.\n", 691 - tx_ring, QLCNIC_SINGLE_RING); 692 - return -EINVAL; 693 688 } 694 689 } 695 690 ··· 943 948 struct qlcnic_hardware_context *ahw = adapter->ahw; 944 949 struct qlcnic_cmd_args cmd; 945 950 int ret, drv_sds_rings = adapter->drv_sds_rings; 951 + int drv_tx_rings = adapter->drv_tx_rings; 946 952 947 953 if (qlcnic_83xx_check(adapter)) 948 954 return qlcnic_83xx_interrupt_test(netdev); ··· 976 980 977 981 clear_diag_irq: 978 982 adapter->drv_sds_rings = drv_sds_rings; 983 + adapter->drv_tx_rings = drv_tx_rings; 979 984 clear_bit(__QLCNIC_RESETTING, &adapter->state); 980 985 981 986 return ret;
+2 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
··· 687 687 if (adapter->ahw->linkup && !linkup) { 688 688 netdev_info(netdev, "NIC Link is down\n"); 689 689 adapter->ahw->linkup = 0; 690 - if (netif_running(netdev)) { 691 - netif_carrier_off(netdev); 692 - netif_tx_stop_all_queues(netdev); 693 - } 690 + netif_carrier_off(netdev); 694 691 } else if (!adapter->ahw->linkup && linkup) { 695 692 netdev_info(netdev, "NIC Link is up\n"); 696 693 adapter->ahw->linkup = 1; 697 - if (netif_running(netdev)) { 698 - netif_carrier_on(netdev); 699 - netif_wake_queue(netdev); 700 - } 694 + netif_carrier_on(netdev); 701 695 } 702 696 } 703 697
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 1178 1178 } else {
1179 1179 adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE;
1180 1180 adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS;
1181 + adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS;
1181 1182 adapter->flags &= ~QLCNIC_ESWITCH_ENABLED;
1182 1183 }
1183 1184
··· 1941 1940 qlcnic_detach(adapter);
1942 1941
1943 1942 adapter->drv_sds_rings = QLCNIC_SINGLE_RING;
1944 - adapter->drv_tx_rings = QLCNIC_SINGLE_RING;
1945 1943 adapter->ahw->diag_test = test;
1946 1944 adapter->ahw->linkup = 0;
1947 1945
-1
drivers/net/hyperv/netvsc_drv.c
··· 327 327 return -EINVAL;
328 328
329 329 nvdev->start_remove = true;
330 - cancel_delayed_work_sync(&ndevctx->dwork);
331 330 cancel_work_sync(&ndevctx->work);
332 331 netif_tx_disable(ndev);
333 332 rndis_filter_device_remove(hdev);
+3
drivers/net/xen-netback/netback.c
··· 1197 1197
1198 1198 err = -EPROTO;
1199 1199
1200 + if (fragment)
1201 + goto out;
1202 +
1200 1203 switch (ip_hdr(skb)->protocol) {
1201 1204 case IPPROTO_TCP:
1202 1205 err = maybe_pull_tail(skb,
+2 -2
drivers/phy/Kconfig
··· 24 24 config OMAP_USB2 25 25 tristate "OMAP USB2 PHY Driver" 26 26 depends on ARCH_OMAP2PLUS 27 + depends on USB_PHY 27 28 select GENERIC_PHY 28 - select USB_PHY 29 29 select OMAP_CONTROL_USB 30 30 help 31 31 Enable this to support the transceiver that is part of SOC. This ··· 36 36 config TWL4030_USB 37 37 tristate "TWL4030 USB Transceiver Driver" 38 38 depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS 39 + depends on USB_PHY 39 40 select GENERIC_PHY 40 - select USB_PHY 41 41 help 42 42 Enable this to support the USB OTG transceiver on TWL4030 43 43 family chips (including the TWL5030 and TPS659x0 devices).
+10 -16
drivers/phy/phy-core.c
··· 437 437 int id; 438 438 struct phy *phy; 439 439 440 - if (!dev) { 441 - dev_WARN(dev, "no device provided for PHY\n"); 442 - ret = -EINVAL; 443 - goto err0; 444 - } 440 + if (WARN_ON(!dev)) 441 + return ERR_PTR(-EINVAL); 445 442 446 443 phy = kzalloc(sizeof(*phy), GFP_KERNEL); 447 - if (!phy) { 448 - ret = -ENOMEM; 449 - goto err0; 450 - } 444 + if (!phy) 445 + return ERR_PTR(-ENOMEM); 451 446 452 447 id = ida_simple_get(&phy_ida, 0, 0, GFP_KERNEL); 453 448 if (id < 0) { 454 449 dev_err(dev, "unable to get id\n"); 455 450 ret = id; 456 - goto err0; 451 + goto free_phy; 457 452 } 458 453 459 454 device_initialize(&phy->dev); ··· 463 468 464 469 ret = dev_set_name(&phy->dev, "phy-%s.%d", dev_name(dev), id); 465 470 if (ret) 466 - goto err1; 471 + goto put_dev; 467 472 468 473 ret = device_add(&phy->dev); 469 474 if (ret) 470 - goto err1; 475 + goto put_dev; 471 476 472 477 if (pm_runtime_enabled(dev)) { 473 478 pm_runtime_enable(&phy->dev); ··· 476 481 477 482 return phy; 478 483 479 - err1: 480 - ida_remove(&phy_ida, phy->id); 484 + put_dev: 481 485 put_device(&phy->dev); 486 + ida_remove(&phy_ida, phy->id); 487 + free_phy: 482 488 kfree(phy); 483 - 484 - err0: 485 489 return ERR_PTR(ret); 486 490 } 487 491 EXPORT_SYMBOL_GPL(phy_create);
+1 -1
drivers/pinctrl/sh-pfc/sh_pfc.h
··· 254 254 #define PINMUX_GPIO(_pin) \
255 255 [GPIO_##_pin] = { \
256 256 .pin = (u16)-1, \
257 - .name = __stringify(name), \
257 + .name = __stringify(GPIO_##_pin), \
258 258 .enum_id = _pin##_DATA, \
259 259 }
260 260
+1 -1
drivers/regulator/s2mps11.c
··· 438 438 platform_set_drvdata(pdev, s2mps11);
439 439
440 440 config.dev = &pdev->dev;
441 - config.regmap = iodev->regmap;
441 + config.regmap = iodev->regmap_pmic;
442 442 config.driver_data = s2mps11;
443 443 for (i = 0; i < S2MPS11_REGULATOR_MAX; i++) {
444 444 if (!reg_np) {
+6 -4
drivers/scsi/qla2xxx/qla_target.c
··· 471 471 schedule_delayed_work(&tgt->sess_del_work, 0); 472 472 else 473 473 schedule_delayed_work(&tgt->sess_del_work, 474 - jiffies - sess->expires); 474 + sess->expires - jiffies); 475 475 } 476 476 477 477 /* ha->hardware_lock supposed to be held on entry */ ··· 550 550 struct scsi_qla_host *vha = tgt->vha; 551 551 struct qla_hw_data *ha = vha->hw; 552 552 struct qla_tgt_sess *sess; 553 - unsigned long flags; 553 + unsigned long flags, elapsed; 554 554 555 555 spin_lock_irqsave(&ha->hardware_lock, flags); 556 556 while (!list_empty(&tgt->del_sess_list)) { 557 557 sess = list_entry(tgt->del_sess_list.next, typeof(*sess), 558 558 del_list_entry); 559 - if (time_after_eq(jiffies, sess->expires)) { 559 + elapsed = jiffies; 560 + if (time_after_eq(elapsed, sess->expires)) { 560 561 qlt_undelete_sess(sess); 561 562 562 563 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf004, ··· 567 566 ha->tgt.tgt_ops->put_sess(sess); 568 567 } else { 569 568 schedule_delayed_work(&tgt->sess_del_work, 570 - jiffies - sess->expires); 569 + sess->expires - elapsed); 571 570 break; 572 571 } 573 572 } ··· 4291 4290 if (rc != 0) { 4292 4291 ha->tgt.tgt_ops = NULL; 4293 4292 ha->tgt.target_lport_ptr = NULL; 4293 + scsi_host_put(host); 4294 4294 } 4295 4295 mutex_unlock(&qla_tgt_mutex); 4296 4296 return rc;
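The qla_target.c change above fixes the delay handed to schedule_delayed_work(): the argument must be the time remaining until the deadline (sess->expires - jiffies), not the time already elapsed. A minimal stand-alone sketch of the pattern, using a hypothetical foo_session structure rather than the driver's own types:

        #include <linux/jiffies.h>
        #include <linux/workqueue.h>

        struct foo_session {
                unsigned long expires;          /* absolute deadline, in jiffies */
                struct delayed_work work;       /* assumed initialised elsewhere */
        };

        /* Re-arm the work item so it runs once the deadline has passed. */
        static void foo_schedule_expiry(struct foo_session *sess)
        {
                unsigned long now = jiffies;

                if (time_after_eq(now, sess->expires))
                        schedule_delayed_work(&sess->work, 0); /* already expired */
                else
                        /* remaining time, not elapsed time */
                        schedule_delayed_work(&sess->work, sess->expires - now);
        }

Passing jiffies - expires for a session that has not yet expired would wrap around as an unsigned value and arm the work with an enormous delay.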
+1 -1
drivers/staging/comedi/drivers.c
··· 446 446 release_firmware(fw);
447 447 }
448 448
449 - return ret;
449 + return ret < 0 ? ret : 0;
450 450 }
451 451 EXPORT_SYMBOL_GPL(comedi_load_firmware);
452 452
+12 -3
drivers/staging/comedi/drivers/8255_pci.c
··· 63 63 BOARD_ADLINK_PCI7296, 64 64 BOARD_CB_PCIDIO24, 65 65 BOARD_CB_PCIDIO24H, 66 - BOARD_CB_PCIDIO48H, 66 + BOARD_CB_PCIDIO48H_OLD, 67 + BOARD_CB_PCIDIO48H_NEW, 67 68 BOARD_CB_PCIDIO96H, 68 69 BOARD_NI_PCIDIO96, 69 70 BOARD_NI_PCIDIO96B, ··· 107 106 .dio_badr = 2, 108 107 .n_8255 = 1, 109 108 }, 110 - [BOARD_CB_PCIDIO48H] = { 109 + [BOARD_CB_PCIDIO48H_OLD] = { 111 110 .name = "cb_pci-dio48h", 112 111 .dio_badr = 1, 112 + .n_8255 = 2, 113 + }, 114 + [BOARD_CB_PCIDIO48H_NEW] = { 115 + .name = "cb_pci-dio48h", 116 + .dio_badr = 2, 113 117 .n_8255 = 2, 114 118 }, 115 119 [BOARD_CB_PCIDIO96H] = { ··· 269 263 { PCI_VDEVICE(ADLINK, 0x7296), BOARD_ADLINK_PCI7296 }, 270 264 { PCI_VDEVICE(CB, 0x0028), BOARD_CB_PCIDIO24 }, 271 265 { PCI_VDEVICE(CB, 0x0014), BOARD_CB_PCIDIO24H }, 272 - { PCI_VDEVICE(CB, 0x000b), BOARD_CB_PCIDIO48H }, 266 + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, 0x0000, 0x0000), 267 + .driver_data = BOARD_CB_PCIDIO48H_OLD }, 268 + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, PCI_VENDOR_ID_CB, 0x000b), 269 + .driver_data = BOARD_CB_PCIDIO48H_NEW }, 273 270 { PCI_VDEVICE(CB, 0x0017), BOARD_CB_PCIDIO96H }, 274 271 { PCI_VDEVICE(NI, 0x0160), BOARD_NI_PCIDIO96 }, 275 272 { PCI_VDEVICE(NI, 0x1630), BOARD_NI_PCIDIO96B },
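The 8255_pci.c change above tells two revisions of the same board apart by their PCI subsystem IDs rather than by vendor/device ID alone. A minimal sketch of that matching style with made-up IDs (the FOO_* names are placeholders, not Comedi's):

        #include <linux/module.h>
        #include <linux/pci.h>

        #define FOO_VENDOR_ID   0x1234
        #define FOO_DEVICE_ID   0x000b

        static const struct pci_device_id foo_pci_table[] = {
                /* old revision: subsystem IDs left at zero */
                { PCI_DEVICE_SUB(FOO_VENDOR_ID, FOO_DEVICE_ID, 0x0000, 0x0000),
                  .driver_data = 0 },
                /* new revision: subsystem IDs mirror the main vendor/device IDs */
                { PCI_DEVICE_SUB(FOO_VENDOR_ID, FOO_DEVICE_ID,
                                 FOO_VENDOR_ID, FOO_DEVICE_ID),
                  .driver_data = 1 },
                { }
        };
        MODULE_DEVICE_TABLE(pci, foo_pci_table);

The driver_data field then selects the corresponding board description at probe time.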
+6 -1
drivers/staging/iio/magnetometer/hmc5843.c
··· 451 451 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \ 452 452 BIT(IIO_CHAN_INFO_SAMP_FREQ), \ 453 453 .scan_index = idx, \ 454 - .scan_type = IIO_ST('s', 16, 16, IIO_BE), \ 454 + .scan_type = { \ 455 + .sign = 's', \ 456 + .realbits = 16, \ 457 + .storagebits = 16, \ 458 + .endianness = IIO_BE, \ 459 + }, \ 455 460 } 456 461 457 462 static const struct iio_chan_spec hmc5843_channels[] = {
+29 -10
drivers/staging/imx-drm/imx-drm-core.c
··· 88 88 89 89 imx_drm_device_put(); 90 90 91 - drm_mode_config_cleanup(imxdrm->drm); 91 + drm_vblank_cleanup(imxdrm->drm); 92 92 drm_kms_helper_poll_fini(imxdrm->drm); 93 + drm_mode_config_cleanup(imxdrm->drm); 93 94 94 95 return 0; 95 96 } ··· 200 199 if (!file->is_master) 201 200 return; 202 201 203 - for (i = 0; i < 4; i++) 204 - imx_drm_disable_vblank(drm , i); 202 + for (i = 0; i < MAX_CRTC; i++) 203 + imx_drm_disable_vblank(drm, i); 205 204 } 206 205 207 206 static const struct file_operations imx_drm_driver_fops = { ··· 377 376 struct imx_drm_device *imxdrm = __imx_drm_device(); 378 377 int ret; 379 378 380 - drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc, 381 - imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs); 382 379 ret = drm_mode_crtc_set_gamma_size(imx_drm_crtc->crtc, 256); 383 380 if (ret) 384 381 return ret; 385 382 386 383 drm_crtc_helper_add(imx_drm_crtc->crtc, 387 384 imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs); 385 + 386 + drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc, 387 + imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs); 388 388 389 389 drm_mode_group_reinit(imxdrm->drm); 390 390 ··· 430 428 ret = drm_mode_group_init_legacy_group(imxdrm->drm, 431 429 &imxdrm->drm->primary->mode_group); 432 430 if (ret) 433 - goto err_init; 431 + goto err_kms; 434 432 435 433 ret = drm_vblank_init(imxdrm->drm, MAX_CRTC); 436 434 if (ret) 437 - goto err_init; 435 + goto err_kms; 438 436 439 437 /* 440 438 * with vblank_disable_allowed = true, vblank interrupt will be disabled ··· 443 441 */ 444 442 imxdrm->drm->vblank_disable_allowed = true; 445 443 446 - if (!imx_drm_device_get()) 444 + if (!imx_drm_device_get()) { 447 445 ret = -EINVAL; 446 + goto err_vblank; 447 + } 448 448 449 - ret = 0; 449 + mutex_unlock(&imxdrm->mutex); 450 + return 0; 450 451 451 - err_init: 452 + err_vblank: 453 + drm_vblank_cleanup(drm); 454 + err_kms: 455 + drm_kms_helper_poll_fini(drm); 456 + drm_mode_config_cleanup(drm); 452 457 mutex_unlock(&imxdrm->mutex); 453 458 454 459 return ret; ··· 501 492 502 493 mutex_lock(&imxdrm->mutex); 503 494 495 + /* 496 + * The vblank arrays are dimensioned by MAX_CRTC - we can't 497 + * pass IDs greater than this to those functions. 498 + */ 499 + if (imxdrm->pipes >= MAX_CRTC) { 500 + ret = -EINVAL; 501 + goto err_busy; 502 + } 503 + 504 504 if (imxdrm->drm->open_count) { 505 505 ret = -EBUSY; 506 506 goto err_busy; ··· 546 528 return 0; 547 529 548 530 err_register: 531 + list_del(&imx_drm_crtc->list); 549 532 kfree(imx_drm_crtc); 550 533 err_alloc: 551 534 err_busy:
-9
drivers/staging/imx-drm/imx-tve.c
··· 114 114 struct drm_encoder encoder; 115 115 struct imx_drm_encoder *imx_drm_encoder; 116 116 struct device *dev; 117 - spinlock_t enable_lock; /* serializes tve_enable/disable */ 118 117 spinlock_t lock; /* register lock */ 119 118 bool enabled; 120 119 int mode; ··· 145 146 146 147 static void tve_enable(struct imx_tve *tve) 147 148 { 148 - unsigned long flags; 149 149 int ret; 150 150 151 - spin_lock_irqsave(&tve->enable_lock, flags); 152 151 if (!tve->enabled) { 153 152 tve->enabled = true; 154 153 clk_prepare_enable(tve->clk); ··· 166 169 TVE_CD_SM_IEN | 167 170 TVE_CD_LM_IEN | 168 171 TVE_CD_MON_END_IEN); 169 - 170 - spin_unlock_irqrestore(&tve->enable_lock, flags); 171 172 } 172 173 173 174 static void tve_disable(struct imx_tve *tve) 174 175 { 175 - unsigned long flags; 176 176 int ret; 177 177 178 - spin_lock_irqsave(&tve->enable_lock, flags); 179 178 if (tve->enabled) { 180 179 tve->enabled = false; 181 180 ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, 182 181 TVE_IPU_CLK_EN | TVE_EN, 0); 183 182 clk_disable_unprepare(tve->clk); 184 183 } 185 - spin_unlock_irqrestore(&tve->enable_lock, flags); 186 184 } 187 185 188 186 static int tve_setup_tvout(struct imx_tve *tve) ··· 593 601 594 602 tve->dev = &pdev->dev; 595 603 spin_lock_init(&tve->lock); 596 - spin_lock_init(&tve->enable_lock); 597 604 598 605 ddc_node = of_parse_phandle(np, "ddc", 0); 599 606 if (ddc_node) {
+16 -16
drivers/staging/imx-drm/ipu-v3/ipu-common.c
··· 996 996 }, 997 997 }; 998 998 999 + static DEFINE_MUTEX(ipu_client_id_mutex); 999 1000 static int ipu_client_id; 1000 - 1001 - static int ipu_add_subdevice_pdata(struct device *dev, 1002 - const struct ipu_platform_reg *reg) 1003 - { 1004 - struct platform_device *pdev; 1005 - 1006 - pdev = platform_device_register_data(dev, reg->name, ipu_client_id++, 1007 - &reg->pdata, sizeof(struct ipu_platform_reg)); 1008 - 1009 - return PTR_ERR_OR_ZERO(pdev); 1010 - } 1011 1001 1012 1002 static int ipu_add_client_devices(struct ipu_soc *ipu) 1013 1003 { 1014 - int ret; 1015 - int i; 1004 + struct device *dev = ipu->dev; 1005 + unsigned i; 1006 + int id, ret; 1007 + 1008 + mutex_lock(&ipu_client_id_mutex); 1009 + id = ipu_client_id; 1010 + ipu_client_id += ARRAY_SIZE(client_reg); 1011 + mutex_unlock(&ipu_client_id_mutex); 1016 1012 1017 1013 for (i = 0; i < ARRAY_SIZE(client_reg); i++) { 1018 1014 const struct ipu_platform_reg *reg = &client_reg[i]; 1019 - ret = ipu_add_subdevice_pdata(ipu->dev, reg); 1020 - if (ret) 1015 + struct platform_device *pdev; 1016 + 1017 + pdev = platform_device_register_data(dev, reg->name, 1018 + id++, &reg->pdata, sizeof(reg->pdata)); 1019 + 1020 + if (IS_ERR(pdev)) 1021 1021 goto err_register; 1022 1022 } 1023 1023 1024 1024 return 0; 1025 1025 1026 1026 err_register: 1027 - platform_device_unregister_children(to_platform_device(ipu->dev)); 1027 + platform_device_unregister_children(to_platform_device(dev)); 1028 1028 1029 1029 return ret; 1030 1030 }
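The ipu-common.c rework above takes a mutex once and reserves a whole block of client IDs, instead of bumping a shared counter for every sub-device it registers. The reservation pattern on its own, as a sketch with placeholder foo_* names:

        #include <linux/mutex.h>

        static DEFINE_MUTEX(foo_id_mutex);
        static int foo_next_id;

        /* Reserve @count consecutive IDs and return the first one. */
        static int foo_reserve_ids(unsigned int count)
        {
                int first;

                mutex_lock(&foo_id_mutex);
                first = foo_next_id;
                foo_next_id += count;
                mutex_unlock(&foo_id_mutex);

                return first;
        }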
+13 -14
drivers/target/iscsi/iscsi_target.c
··· 465 465 */ 466 466 send_sig(SIGINT, np->np_thread, 1); 467 467 kthread_stop(np->np_thread); 468 + np->np_thread = NULL; 468 469 } 469 470 470 471 np->np_transport->iscsit_free_np(np); ··· 824 823 if (((hdr->flags & ISCSI_FLAG_CMD_READ) || 825 824 (hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) { 826 825 /* 827 - * Vmware ESX v3.0 uses a modified Cisco Initiator (v3.4.2) 828 - * that adds support for RESERVE/RELEASE. There is a bug 829 - * add with this new functionality that sets R/W bits when 830 - * neither CDB carries any READ or WRITE datapayloads. 826 + * From RFC-3720 Section 10.3.1: 827 + * 828 + * "Either or both of R and W MAY be 1 when either the 829 + * Expected Data Transfer Length and/or Bidirectional Read 830 + * Expected Data Transfer Length are 0" 831 + * 832 + * For this case, go ahead and clear the unnecssary bits 833 + * to avoid any confusion with ->data_direction. 831 834 */ 832 - if ((hdr->cdb[0] == 0x16) || (hdr->cdb[0] == 0x17)) { 833 - hdr->flags &= ~ISCSI_FLAG_CMD_READ; 834 - hdr->flags &= ~ISCSI_FLAG_CMD_WRITE; 835 - goto done; 836 - } 835 + hdr->flags &= ~ISCSI_FLAG_CMD_READ; 836 + hdr->flags &= ~ISCSI_FLAG_CMD_WRITE; 837 837 838 - pr_err("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE" 838 + pr_warn("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE" 839 839 " set when Expected Data Transfer Length is 0 for" 840 - " CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]); 841 - return iscsit_add_reject_cmd(cmd, 842 - ISCSI_REASON_BOOKMARK_INVALID, buf); 840 + " CDB: 0x%02x, Fixing up flags\n", hdr->cdb[0]); 843 841 } 844 - done: 845 842 846 843 if (!(hdr->flags & ISCSI_FLAG_CMD_READ) && 847 844 !(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
+2 -1
drivers/target/iscsi/iscsi_target_configfs.c
··· 474 474 \
475 475 if (!capable(CAP_SYS_ADMIN)) \
476 476 return -EPERM; \
477 - \
477 + if (count >= sizeof(auth->name)) \
478 + return -EINVAL; \
478 479 snprintf(auth->name, sizeof(auth->name), "%s", page); \
479 480 if (!strncmp("NULL", auth->name, 4)) \
480 481 auth->naf_flags &= ~flags; \
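The extra length check above rejects an oversized value outright instead of letting snprintf() silently truncate it. The same guard in isolation, as a sketch with a generic destination buffer:

        #include <linux/errno.h>
        #include <linux/kernel.h>
        #include <linux/types.h>

        /* Copy a user-supplied string, refusing input that would not fit. */
        static ssize_t foo_store_string(char *dst, size_t dst_size,
                                        const char *page, size_t count)
        {
                if (count >= dst_size)          /* leave room for the NUL terminator */
                        return -EINVAL;

                snprintf(dst, dst_size, "%s", page);
                return count;
        }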
-6
drivers/target/iscsi/iscsi_target_login.c
··· 1403 1403 1404 1404 out: 1405 1405 stop = kthread_should_stop(); 1406 - if (!stop && signal_pending(current)) { 1407 - spin_lock_bh(&np->np_thread_lock); 1408 - stop = (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN); 1409 - spin_unlock_bh(&np->np_thread_lock); 1410 - } 1411 1406 /* Wait for another socket.. */ 1412 1407 if (!stop) 1413 1408 return 1; ··· 1410 1415 iscsi_stop_login_thread_timer(np); 1411 1416 spin_lock_bh(&np->np_thread_lock); 1412 1417 np->np_thread_state = ISCSI_NP_THREAD_EXIT; 1413 - np->np_thread = NULL; 1414 1418 spin_unlock_bh(&np->np_thread_lock); 1415 1419 1416 1420 return 0;
+5
drivers/target/target_core_device.c
··· 1106 1106 dev->dev_attrib.block_size = block_size;
1107 1107 pr_debug("dev[%p]: SE Device block_size changed to %u\n",
1108 1108 dev, block_size);
1109 +
1110 + if (dev->dev_attrib.max_bytes_per_io)
1111 + dev->dev_attrib.hw_max_sectors =
1112 + dev->dev_attrib.max_bytes_per_io / block_size;
1113 +
1109 1114 return 0;
1110 1115 }
1111 1116
+4 -4
drivers/target/target_core_file.c
··· 66 66 pr_debug("CORE_HBA[%d] - TCM FILEIO HBA Driver %s on Generic" 67 67 " Target Core Stack %s\n", hba->hba_id, FD_VERSION, 68 68 TARGET_CORE_MOD_VERSION); 69 - pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic" 70 - " MaxSectors: %u\n", 71 - hba->hba_id, fd_host->fd_host_id, FD_MAX_SECTORS); 69 + pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic\n", 70 + hba->hba_id, fd_host->fd_host_id); 72 71 73 72 return 0; 74 73 } ··· 219 220 } 220 221 221 222 dev->dev_attrib.hw_block_size = fd_dev->fd_block_size; 222 - dev->dev_attrib.hw_max_sectors = FD_MAX_SECTORS; 223 + dev->dev_attrib.max_bytes_per_io = FD_MAX_BYTES; 224 + dev->dev_attrib.hw_max_sectors = FD_MAX_BYTES / fd_dev->fd_block_size; 223 225 dev->dev_attrib.hw_queue_depth = FD_MAX_DEVICE_QUEUE_DEPTH; 224 226 225 227 if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+4 -1
drivers/target/target_core_file.h
··· 7 7 #define FD_DEVICE_QUEUE_DEPTH 32
8 8 #define FD_MAX_DEVICE_QUEUE_DEPTH 128
9 9 #define FD_BLOCKSIZE 512
10 - #define FD_MAX_SECTORS 2048
10 + /*
11 + * Limited by the number of iovecs (2048) per vfs_[writev,readv] call
12 + */
13 + #define FD_MAX_BYTES 8388608
11 14
12 15 #define RRF_EMULATE_CDB 0x01
13 16 #define RRF_GOT_LBA 0x02
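For scale, the new limit matches 2048 iovecs of one page each: assuming 4 KiB pages, 2048 × 4096 = 8388608 bytes (8 MiB), and with a 512-byte logical block size the hw_max_sectors value derived from it in target_core_file.c works out to 8388608 / 512 = 16384.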
+1 -9
drivers/target/target_core_tpg.c
··· 278 278 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 279 279 acl->se_tpg = tpg; 280 280 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 281 - spin_lock_init(&acl->stats_lock); 282 281 acl->dynamic_node_acl = 1; 283 282 284 283 tpg->se_tpg_tfo->set_default_node_attributes(acl); ··· 405 406 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 406 407 acl->se_tpg = tpg; 407 408 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 408 - spin_lock_init(&acl->stats_lock); 409 409 410 410 tpg->se_tpg_tfo->set_default_node_attributes(acl); 411 411 ··· 656 658 spin_lock_init(&lun->lun_sep_lock); 657 659 init_completion(&lun->lun_ref_comp); 658 660 659 - ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release); 661 + ret = core_tpg_post_addlun(se_tpg, lun, lun_access, dev); 660 662 if (ret < 0) 661 663 return ret; 662 - 663 - ret = core_tpg_post_addlun(se_tpg, lun, lun_access, dev); 664 - if (ret < 0) { 665 - percpu_ref_cancel_init(&lun->lun_ref); 666 - return ret; 667 - } 668 664 669 665 return 0; 670 666 }
+6 -1
drivers/tty/n_tty.c
··· 93 93 size_t canon_head; 94 94 size_t echo_head; 95 95 size_t echo_commit; 96 + size_t echo_mark; 96 97 DECLARE_BITMAP(char_map, 256); 97 98 98 99 /* private to n_tty_receive_overrun (single-threaded) */ ··· 337 336 { 338 337 ldata->read_head = ldata->canon_head = ldata->read_tail = 0; 339 338 ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0; 339 + ldata->echo_mark = 0; 340 340 ldata->line_start = 0; 341 341 342 342 ldata->erasing = 0; ··· 789 787 size_t head; 790 788 791 789 head = ldata->echo_head; 790 + ldata->echo_mark = head; 792 791 old = ldata->echo_commit - ldata->echo_tail; 793 792 794 793 /* Process committed echoes if the accumulated # of bytes ··· 814 811 size_t echoed; 815 812 816 813 if ((!L_ECHO(tty) && !L_ECHONL(tty)) || 817 - ldata->echo_commit == ldata->echo_tail) 814 + ldata->echo_mark == ldata->echo_tail) 818 815 return; 819 816 820 817 mutex_lock(&ldata->output_lock); 818 + ldata->echo_commit = ldata->echo_mark; 821 819 echoed = __process_echoes(tty); 822 820 mutex_unlock(&ldata->output_lock); 823 821 ··· 826 822 tty->ops->flush_chars(tty); 827 823 } 828 824 825 + /* NB: echo_mark and echo_head should be equivalent here */ 829 826 static void flush_echoes(struct tty_struct *tty) 830 827 { 831 828 struct n_tty_data *ldata = tty->disc_data;
+6 -2
drivers/tty/serial/8250/8250_dw.c
··· 96 96 if (offset == UART_LCR) { 97 97 int tries = 1000; 98 98 while (tries--) { 99 - if (value == p->serial_in(p, UART_LCR)) 99 + unsigned int lcr = p->serial_in(p, UART_LCR); 100 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 100 101 return; 101 102 dw8250_force_idle(p); 102 103 writeb(value, p->membase + (UART_LCR << p->regshift)); ··· 133 132 if (offset == UART_LCR) { 134 133 int tries = 1000; 135 134 while (tries--) { 136 - if (value == p->serial_in(p, UART_LCR)) 135 + unsigned int lcr = p->serial_in(p, UART_LCR); 136 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 137 137 return; 138 138 dw8250_force_idle(p); 139 139 writel(value, p->membase + (UART_LCR << p->regshift)); ··· 457 455 static const struct acpi_device_id dw8250_acpi_match[] = { 458 456 { "INT33C4", 0 }, 459 457 { "INT33C5", 0 }, 458 + { "INT3434", 0 }, 459 + { "INT3435", 0 }, 460 460 { "80860F0A", 0 }, 461 461 { }, 462 462 };
+2
drivers/tty/serial/xilinx_uartps.c
··· 240 240 continue;
241 241 }
242 242
243 + #ifdef SUPPORT_SYSRQ
243 244 /*
244 245 * uart_handle_sysrq_char() doesn't work if
245 246 * spinlocked, for some reason
··· 254 253 }
255 254 spin_lock(&port->lock);
256 255 }
256 + #endif
257 257
258 258 port->icount.rx++;
259 259
+13 -3
drivers/tty/tty_ldsem.c
··· 86 86 return atomic_long_add_return(delta, (atomic_long_t *)&sem->count); 87 87 } 88 88 89 + /* 90 + * ldsem_cmpxchg() updates @*old with the last-known sem->count value. 91 + * Returns 1 if count was successfully changed; @*old will have @new value. 92 + * Returns 0 if count was not changed; @*old will have most recent sem->count 93 + */ 89 94 static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem) 90 95 { 91 - long tmp = *old; 92 - *old = atomic_long_cmpxchg(&sem->count, *old, new); 93 - return *old == tmp; 96 + long tmp = atomic_long_cmpxchg(&sem->count, *old, new); 97 + if (tmp == *old) { 98 + *old = new; 99 + return 1; 100 + } else { 101 + *old = tmp; 102 + return 0; 103 + } 94 104 } 95 105 96 106 /*
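The rewritten ldsem_cmpxchg() above makes the failure path leave *old holding the value that was actually observed, so callers can retry without re-reading the counter. The same compare-and-exchange retry idiom in a stand-alone sketch (generic names, not the tty code):

        #include <linux/atomic.h>
        #include <linux/types.h>

        /*
         * Try to change *counter from *old to new.  On failure, *old is
         * refreshed with the value that was actually seen.
         */
        static bool foo_cmpxchg(atomic_long_t *counter, long *old, long new)
        {
                long seen = atomic_long_cmpxchg(counter, *old, new);

                if (seen == *old) {
                        *old = new;
                        return true;
                }
                *old = seen;
                return false;
        }

        static void foo_add_bias(atomic_long_t *counter, long bias)
        {
                long old = atomic_long_read(counter);

                /* each failed attempt leaves 'old' up to date, so just retry */
                while (!foo_cmpxchg(counter, &old, old + bias))
                        ;
        }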
+4
drivers/usb/chipidea/core.c
··· 642 642 : CI_ROLE_GADGET;
643 643 }
644 644
645 + /* only update vbus status for peripheral */
646 + if (ci->role == CI_ROLE_GADGET)
647 + ci_handle_vbus_change(ci);
648 +
645 649 ret = ci_role_start(ci, ci->role);
646 650 if (ret) {
647 651 dev_err(dev, "can't start %s role\n", ci_role(ci)->name);
+2 -1
drivers/usb/chipidea/host.c
··· 88 88 return ret;
89 89
90 90 disable_reg:
91 - regulator_disable(ci->platdata->reg_vbus);
91 + if (ci->platdata->reg_vbus)
92 + regulator_disable(ci->platdata->reg_vbus);
92 93
93 94 put_hcd:
94 95 usb_put_hcd(hcd);
-3
drivers/usb/chipidea/udc.c
··· 1795 1795 pm_runtime_no_callbacks(&ci->gadget.dev);
1796 1796 pm_runtime_enable(&ci->gadget.dev);
1797 1797
1798 - /* Update ci->vbus_active */
1799 - ci_handle_vbus_change(ci);
1800 -
1801 1798 return retval;
1802 1799
1803 1800 destroy_eps:
+3 -5
drivers/usb/class/cdc-wdm.c
··· 854 854 { 855 855 /* need autopm_get/put here to ensure the usbcore sees the new value */ 856 856 int rv = usb_autopm_get_interface(intf); 857 - if (rv < 0) 858 - goto err; 859 857 860 858 intf->needs_remote_wakeup = on; 861 - usb_autopm_put_interface(intf); 862 - err: 863 - return rv; 859 + if (!rv) 860 + usb_autopm_put_interface(intf); 861 + return 0; 864 862 } 865 863 866 864 static int wdm_probe(struct usb_interface *intf, const struct usb_device_id *id)
+5 -3
drivers/usb/dwc3/core.c
··· 455 455 if (IS_ERR(regs)) 456 456 return PTR_ERR(regs); 457 457 458 - usb_phy_set_suspend(dwc->usb2_phy, 0); 459 - usb_phy_set_suspend(dwc->usb3_phy, 0); 460 - 461 458 spin_lock_init(&dwc->lock); 462 459 platform_set_drvdata(pdev, dwc); 463 460 ··· 484 487 dev_err(dev, "failed to initialize core\n"); 485 488 goto err0; 486 489 } 490 + 491 + usb_phy_set_suspend(dwc->usb2_phy, 0); 492 + usb_phy_set_suspend(dwc->usb3_phy, 0); 487 493 488 494 ret = dwc3_event_buffers_setup(dwc); 489 495 if (ret) { ··· 569 569 dwc3_event_buffers_cleanup(dwc); 570 570 571 571 err1: 572 + usb_phy_set_suspend(dwc->usb2_phy, 1); 573 + usb_phy_set_suspend(dwc->usb3_phy, 1); 572 574 dwc3_core_exit(dwc); 573 575 574 576 err0:
+14 -10
drivers/usb/host/ohci-at91.c
··· 136 136 struct ohci_hcd *ohci; 137 137 int retval; 138 138 struct usb_hcd *hcd = NULL; 139 + struct device *dev = &pdev->dev; 140 + struct resource *res; 141 + int irq; 139 142 140 - if (pdev->num_resources != 2) { 141 - pr_debug("hcd probe: invalid num_resources"); 142 - return -ENODEV; 143 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 144 + if (!res) { 145 + dev_dbg(dev, "hcd probe: missing memory resource\n"); 146 + return -ENXIO; 143 147 } 144 148 145 - if ((pdev->resource[0].flags != IORESOURCE_MEM) 146 - || (pdev->resource[1].flags != IORESOURCE_IRQ)) { 147 - pr_debug("hcd probe: invalid resource type\n"); 148 - return -ENODEV; 149 + irq = platform_get_irq(pdev, 0); 150 + if (irq < 0) { 151 + dev_dbg(dev, "hcd probe: missing irq resource\n"); 152 + return irq; 149 153 } 150 154 151 155 hcd = usb_create_hcd(driver, &pdev->dev, "at91"); 152 156 if (!hcd) 153 157 return -ENOMEM; 154 - hcd->rsrc_start = pdev->resource[0].start; 155 - hcd->rsrc_len = resource_size(&pdev->resource[0]); 158 + hcd->rsrc_start = res->start; 159 + hcd->rsrc_len = resource_size(res); 156 160 157 161 if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) { 158 162 pr_debug("request_mem_region failed\n"); ··· 203 199 ohci->num_ports = board->ports; 204 200 at91_start_hc(pdev); 205 201 206 - retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED); 202 + retval = usb_add_hcd(hcd, irq, IRQF_SHARED); 207 203 if (retval == 0) 208 204 return retval; 209 205
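The ohci-at91 probe above switches from indexing pdev->resource[] directly to the platform_get_resource()/platform_get_irq() helpers, which also cope with resources appearing in a different order. A minimal probe skeleton built on the same calls (foo_probe is hypothetical):

        #include <linux/errno.h>
        #include <linux/ioport.h>
        #include <linux/platform_device.h>

        static int foo_probe(struct platform_device *pdev)
        {
                struct resource *res;
                int irq;

                res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
                if (!res)
                        return -ENXIO;

                irq = platform_get_irq(pdev, 0);
                if (irq < 0)
                        return irq;     /* propagates error codes such as -EPROBE_DEFER */

                /* ... map res->start and request irq here ... */

                return 0;
        }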
+6 -1
drivers/usb/host/xhci-pci.c
··· 128 128 * any other sleep) on Haswell machines with LPT and LPT-LP
129 129 * with the new Intel BIOS
130 130 */
131 - xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
131 + /* Limit the quirk to only known vendors, as this triggers
132 + * yet another BIOS bug on some other machines
133 + * https://bugzilla.kernel.org/show_bug.cgi?id=66171
134 + */
135 + if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP)
136 + xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
132 137 }
133 138 if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
134 139 pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
+3 -1
drivers/usb/phy/Kconfig
··· 19 19 in host mode, low speed. 20 20 21 21 config FSL_USB2_OTG 22 - bool "Freescale USB OTG Transceiver Driver" 22 + tristate "Freescale USB OTG Transceiver Driver" 23 23 depends on USB_EHCI_FSL && USB_FSL_USB2 && PM_RUNTIME 24 + depends on USB 24 25 select USB_OTG 25 26 select USB_PHY 26 27 help ··· 30 29 config ISP1301_OMAP 31 30 tristate "Philips ISP1301 with OMAP OTG" 32 31 depends on I2C && ARCH_OMAP_OTG 32 + depends on USB 33 33 select USB_PHY 34 34 help 35 35 If you say yes here you get support for the Philips ISP1301
+1 -1
drivers/usb/phy/phy-tegra-usb.c
··· 876 876
877 877 tegra_phy->pad_regs = devm_ioremap(&pdev->dev, res->start,
878 878 resource_size(res));
879 - if (!tegra_phy->regs) {
879 + if (!tegra_phy->pad_regs) {
880 880 dev_err(&pdev->dev, "Failed to remap UTMI Pad regs\n");
881 881 return -ENOMEM;
882 882 }
+2 -1
drivers/usb/phy/phy-twl6030-usb.c
··· 127 127
128 128 static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address)
129 129 {
130 - u8 data, ret = 0;
130 + u8 data;
131 + int ret;
131 132
132 133 ret = twl_i2c_read_u8(module, &data, address);
133 134 if (ret >= 0)
+2
drivers/usb/serial/option.c
··· 251 251 #define ZTE_PRODUCT_MF628 0x0015 252 252 #define ZTE_PRODUCT_MF626 0x0031 253 253 #define ZTE_PRODUCT_MC2718 0xffe8 254 + #define ZTE_PRODUCT_AC2726 0xfff1 254 255 255 256 #define BENQ_VENDOR_ID 0x04a5 256 257 #define BENQ_PRODUCT_H10 0x4068 ··· 1454 1453 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, 1455 1454 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, 1456 1455 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) }, 1456 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) }, 1457 1457 1458 1458 { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) }, 1459 1459 { USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
+1 -2
drivers/usb/serial/zte_ev.c
··· 281 281 { USB_DEVICE(0x19d2, 0xfffd) }, 282 282 { USB_DEVICE(0x19d2, 0xfffc) }, 283 283 { USB_DEVICE(0x19d2, 0xfffb) }, 284 - /* AC2726, AC8710_V3 */ 285 - { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfff1, 0xff, 0xff, 0xff) }, 284 + /* AC8710_V3 */ 286 285 { USB_DEVICE(0x19d2, 0xfff6) }, 287 286 { USB_DEVICE(0x19d2, 0xfff7) }, 288 287 { USB_DEVICE(0x19d2, 0xfff8) },
+34 -29
drivers/xen/balloon.c
··· 350 350 351 351 pfn = page_to_pfn(page); 352 352 353 - set_phys_to_machine(pfn, frame_list[i]); 354 - 355 353 #ifdef CONFIG_XEN_HAVE_PVMMU 356 - /* Link back into the page tables if not highmem. */ 357 - if (xen_pv_domain() && !PageHighMem(page)) { 358 - int ret; 359 - ret = HYPERVISOR_update_va_mapping( 360 - (unsigned long)__va(pfn << PAGE_SHIFT), 361 - mfn_pte(frame_list[i], PAGE_KERNEL), 362 - 0); 363 - BUG_ON(ret); 354 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 355 + set_phys_to_machine(pfn, frame_list[i]); 356 + 357 + /* Link back into the page tables if not highmem. */ 358 + if (!PageHighMem(page)) { 359 + int ret; 360 + ret = HYPERVISOR_update_va_mapping( 361 + (unsigned long)__va(pfn << PAGE_SHIFT), 362 + mfn_pte(frame_list[i], PAGE_KERNEL), 363 + 0); 364 + BUG_ON(ret); 365 + } 364 366 } 365 367 #endif 366 368 ··· 380 378 enum bp_state state = BP_DONE; 381 379 unsigned long pfn, i; 382 380 struct page *page; 383 - struct page *scratch_page; 384 381 int ret; 385 382 struct xen_memory_reservation reservation = { 386 383 .address_bits = 0, ··· 412 411 413 412 scrub_page(page); 414 413 414 + #ifdef CONFIG_XEN_HAVE_PVMMU 415 415 /* 416 416 * Ballooned out frames are effectively replaced with 417 417 * a scratch frame. Ensure direct mappings and the 418 418 * p2m are consistent. 419 419 */ 420 - scratch_page = get_balloon_scratch_page(); 421 - #ifdef CONFIG_XEN_HAVE_PVMMU 422 - if (xen_pv_domain() && !PageHighMem(page)) { 423 - ret = HYPERVISOR_update_va_mapping( 424 - (unsigned long)__va(pfn << PAGE_SHIFT), 425 - pfn_pte(page_to_pfn(scratch_page), 426 - PAGE_KERNEL_RO), 0); 427 - BUG_ON(ret); 428 - } 429 - #endif 430 420 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 431 421 unsigned long p; 422 + struct page *scratch_page = get_balloon_scratch_page(); 423 + 424 + if (!PageHighMem(page)) { 425 + ret = HYPERVISOR_update_va_mapping( 426 + (unsigned long)__va(pfn << PAGE_SHIFT), 427 + pfn_pte(page_to_pfn(scratch_page), 428 + PAGE_KERNEL_RO), 0); 429 + BUG_ON(ret); 430 + } 432 431 p = page_to_pfn(scratch_page); 433 432 __set_phys_to_machine(pfn, pfn_to_mfn(p)); 433 + 434 + put_balloon_scratch_page(); 434 435 } 435 - put_balloon_scratch_page(); 436 + #endif 436 437 437 438 balloon_append(pfn_to_page(pfn)); 438 439 } ··· 630 627 if (!xen_domain()) 631 628 return -ENODEV; 632 629 633 - for_each_online_cpu(cpu) 634 - { 635 - per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 636 - if (per_cpu(balloon_scratch_page, cpu) == NULL) { 637 - pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 638 - return -ENOMEM; 630 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 631 + for_each_online_cpu(cpu) 632 + { 633 + per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 634 + if (per_cpu(balloon_scratch_page, cpu) == NULL) { 635 + pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 636 + return -ENOMEM; 637 + } 639 638 } 639 + register_cpu_notifier(&balloon_cpu_notifier); 640 640 } 641 - register_cpu_notifier(&balloon_cpu_notifier); 642 641 643 642 pr_info("Initialising balloon driver\n"); 644 643
+2 -1
drivers/xen/grant-table.c
··· 1176 1176 gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
1177 1177 PAGE_SIZE * max_nr_gframes);
1178 1178 if (gnttab_shared.addr == NULL) {
1179 - pr_warn("Failed to ioremap gnttab share frames!\n");
1179 + pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
1180 + xen_hvm_resume_frames);
1180 1181 return -ENOMEM;
1181 1182 }
1182 1183 }
+7 -2
drivers/xen/privcmd.c
··· 533 533 {
534 534 struct page **pages = vma->vm_private_data;
535 535 int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
536 + int rc;
536 537
537 538 if (!xen_feature(XENFEAT_auto_translated_physmap) || !numpgs || !pages)
538 539 return;
539 540
540 - xen_unmap_domain_mfn_range(vma, numpgs, pages);
541 - free_xenballooned_pages(numpgs, pages);
541 + rc = xen_unmap_domain_mfn_range(vma, numpgs, pages);
542 + if (rc == 0)
543 + free_xenballooned_pages(numpgs, pages);
544 + else
545 + pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
546 + numpgs, rc);
542 547 kfree(pages);
543 548 }
544 549
+70 -45
fs/aio.c
··· 244 244 int i; 245 245 246 246 for (i = 0; i < ctx->nr_pages; i++) { 247 + struct page *page; 247 248 pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i, 248 249 page_count(ctx->ring_pages[i])); 249 - put_page(ctx->ring_pages[i]); 250 + page = ctx->ring_pages[i]; 251 + if (!page) 252 + continue; 253 + ctx->ring_pages[i] = NULL; 254 + put_page(page); 250 255 } 251 256 252 257 put_aio_ring_file(ctx); ··· 285 280 unsigned long flags; 286 281 int rc; 287 282 283 + rc = 0; 284 + 285 + /* Make sure the old page hasn't already been changed */ 286 + spin_lock(&mapping->private_lock); 287 + ctx = mapping->private_data; 288 + if (ctx) { 289 + pgoff_t idx; 290 + spin_lock_irqsave(&ctx->completion_lock, flags); 291 + idx = old->index; 292 + if (idx < (pgoff_t)ctx->nr_pages) { 293 + if (ctx->ring_pages[idx] != old) 294 + rc = -EAGAIN; 295 + } else 296 + rc = -EINVAL; 297 + spin_unlock_irqrestore(&ctx->completion_lock, flags); 298 + } else 299 + rc = -EINVAL; 300 + spin_unlock(&mapping->private_lock); 301 + 302 + if (rc != 0) 303 + return rc; 304 + 288 305 /* Writeback must be complete */ 289 306 BUG_ON(PageWriteback(old)); 290 - put_page(old); 307 + get_page(new); 291 308 292 - rc = migrate_page_move_mapping(mapping, new, old, NULL, mode); 309 + rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1); 293 310 if (rc != MIGRATEPAGE_SUCCESS) { 294 - get_page(old); 311 + put_page(new); 295 312 return rc; 296 313 } 297 - 298 - get_page(new); 299 314 300 315 /* We can potentially race against kioctx teardown here. Use the 301 316 * address_space's private data lock to protect the mapping's ··· 328 303 spin_lock_irqsave(&ctx->completion_lock, flags); 329 304 migrate_page_copy(new, old); 330 305 idx = old->index; 331 - if (idx < (pgoff_t)ctx->nr_pages) 332 - ctx->ring_pages[idx] = new; 306 + if (idx < (pgoff_t)ctx->nr_pages) { 307 + /* And only do the move if things haven't changed */ 308 + if (ctx->ring_pages[idx] == old) 309 + ctx->ring_pages[idx] = new; 310 + else 311 + rc = -EAGAIN; 312 + } else 313 + rc = -EINVAL; 333 314 spin_unlock_irqrestore(&ctx->completion_lock, flags); 334 315 } else 335 316 rc = -EBUSY; 336 317 spin_unlock(&mapping->private_lock); 318 + 319 + if (rc == MIGRATEPAGE_SUCCESS) 320 + put_page(old); 321 + else 322 + put_page(new); 337 323 338 324 return rc; 339 325 } ··· 362 326 struct aio_ring *ring; 363 327 unsigned nr_events = ctx->max_reqs; 364 328 struct mm_struct *mm = current->mm; 365 - unsigned long size, populate; 329 + unsigned long size, unused; 366 330 int nr_pages; 367 331 int i; 368 332 struct file *file; ··· 383 347 return -EAGAIN; 384 348 } 385 349 386 - for (i = 0; i < nr_pages; i++) { 387 - struct page *page; 388 - page = find_or_create_page(file->f_inode->i_mapping, 389 - i, GFP_HIGHUSER | __GFP_ZERO); 390 - if (!page) 391 - break; 392 - pr_debug("pid(%d) page[%d]->count=%d\n", 393 - current->pid, i, page_count(page)); 394 - SetPageUptodate(page); 395 - SetPageDirty(page); 396 - unlock_page(page); 397 - } 398 350 ctx->aio_ring_file = file; 399 351 nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) 400 352 / sizeof(struct io_event); ··· 397 373 } 398 374 } 399 375 376 + for (i = 0; i < nr_pages; i++) { 377 + struct page *page; 378 + page = find_or_create_page(file->f_inode->i_mapping, 379 + i, GFP_HIGHUSER | __GFP_ZERO); 380 + if (!page) 381 + break; 382 + pr_debug("pid(%d) page[%d]->count=%d\n", 383 + current->pid, i, page_count(page)); 384 + SetPageUptodate(page); 385 + SetPageDirty(page); 386 + unlock_page(page); 387 + 388 + 
ctx->ring_pages[i] = page; 389 + } 390 + ctx->nr_pages = i; 391 + 392 + if (unlikely(i != nr_pages)) { 393 + aio_free_ring(ctx); 394 + return -EAGAIN; 395 + } 396 + 400 397 ctx->mmap_size = nr_pages * PAGE_SIZE; 401 398 pr_debug("attempting mmap of %lu bytes\n", ctx->mmap_size); 402 399 403 400 down_write(&mm->mmap_sem); 404 401 ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size, 405 402 PROT_READ | PROT_WRITE, 406 - MAP_SHARED | MAP_POPULATE, 0, &populate); 403 + MAP_SHARED, 0, &unused); 404 + up_write(&mm->mmap_sem); 407 405 if (IS_ERR((void *)ctx->mmap_base)) { 408 - up_write(&mm->mmap_sem); 409 406 ctx->mmap_size = 0; 410 407 aio_free_ring(ctx); 411 408 return -EAGAIN; 412 409 } 413 410 414 411 pr_debug("mmap address: 0x%08lx\n", ctx->mmap_base); 415 - 416 - /* We must do this while still holding mmap_sem for write, as we 417 - * need to be protected against userspace attempting to mremap() 418 - * or munmap() the ring buffer. 419 - */ 420 - ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages, 421 - 1, 0, ctx->ring_pages, NULL); 422 - 423 - /* Dropping the reference here is safe as the page cache will hold 424 - * onto the pages for us. It is also required so that page migration 425 - * can unmap the pages and get the right reference count. 426 - */ 427 - for (i = 0; i < ctx->nr_pages; i++) 428 - put_page(ctx->ring_pages[i]); 429 - 430 - up_write(&mm->mmap_sem); 431 - 432 - if (unlikely(ctx->nr_pages != nr_pages)) { 433 - aio_free_ring(ctx); 434 - return -EAGAIN; 435 - } 436 412 437 413 ctx->user_id = ctx->mmap_base; 438 414 ctx->nr_events = nr_events; /* trusted copy */ ··· 676 652 aio_nr += ctx->max_reqs; 677 653 spin_unlock(&aio_nr_lock); 678 654 679 - percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */ 655 + percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */ 656 + percpu_ref_get(&ctx->reqs); /* free_ioctx_users() will drop this */ 680 657 681 658 err = ioctx_add_table(ctx, mm); 682 659 if (err)
+6 -2
fs/ceph/addr.c
··· 210 210 if (err < 0) {
211 211 SetPageError(page);
212 212 goto out;
213 - } else if (err < PAGE_CACHE_SIZE) {
213 + } else {
214 + if (err < PAGE_CACHE_SIZE) {
214 215 /* zero fill remainder of page */
215 - zero_user_segment(page, err, PAGE_CACHE_SIZE);
216 + zero_user_segment(page, err, PAGE_CACHE_SIZE);
217 + } else {
218 + flush_dcache_page(page);
219 + }
216 220 }
217 221 SetPageUptodate(page);
218 222
+58 -78
fs/ceph/inode.c
··· 978 978 struct ceph_mds_reply_inode *ininfo; 979 979 struct ceph_vino vino; 980 980 struct ceph_fs_client *fsc = ceph_sb_to_client(sb); 981 - int i = 0; 982 981 int err = 0; 983 982 984 983 dout("fill_trace %p is_dentry %d is_target %d\n", req, ··· 1035 1036 return err; 1036 1037 } else { 1037 1038 WARN_ON_ONCE(1); 1039 + } 1040 + } 1041 + 1042 + if (rinfo->head->is_target) { 1043 + vino.ino = le64_to_cpu(rinfo->targeti.in->ino); 1044 + vino.snap = le64_to_cpu(rinfo->targeti.in->snapid); 1045 + 1046 + in = ceph_get_inode(sb, vino); 1047 + if (IS_ERR(in)) { 1048 + err = PTR_ERR(in); 1049 + goto done; 1050 + } 1051 + req->r_target_inode = in; 1052 + 1053 + err = fill_inode(in, &rinfo->targeti, NULL, 1054 + session, req->r_request_started, 1055 + (le32_to_cpu(rinfo->head->result) == 0) ? 1056 + req->r_fmode : -1, 1057 + &req->r_caps_reservation); 1058 + if (err < 0) { 1059 + pr_err("fill_inode badness %p %llx.%llx\n", 1060 + in, ceph_vinop(in)); 1061 + goto done; 1038 1062 } 1039 1063 } 1040 1064 ··· 1130 1108 ceph_dentry(req->r_old_dentry)->offset); 1131 1109 1132 1110 dn = req->r_old_dentry; /* use old_dentry */ 1133 - in = dn->d_inode; 1134 1111 } 1135 1112 1136 1113 /* null dentry? */ ··· 1151 1130 } 1152 1131 1153 1132 /* attach proper inode */ 1154 - ininfo = rinfo->targeti.in; 1155 - vino.ino = le64_to_cpu(ininfo->ino); 1156 - vino.snap = le64_to_cpu(ininfo->snapid); 1157 - in = dn->d_inode; 1158 - if (!in) { 1159 - in = ceph_get_inode(sb, vino); 1160 - if (IS_ERR(in)) { 1161 - pr_err("fill_trace bad get_inode " 1162 - "%llx.%llx\n", vino.ino, vino.snap); 1163 - err = PTR_ERR(in); 1164 - d_drop(dn); 1165 - goto done; 1166 - } 1133 + if (!dn->d_inode) { 1134 + ihold(in); 1167 1135 dn = splice_dentry(dn, in, &have_lease, true); 1168 1136 if (IS_ERR(dn)) { 1169 1137 err = PTR_ERR(dn); 1170 1138 goto done; 1171 1139 } 1172 1140 req->r_dentry = dn; /* may have spliced */ 1173 - ihold(in); 1174 - } else if (ceph_ino(in) == vino.ino && 1175 - ceph_snap(in) == vino.snap) { 1176 - ihold(in); 1177 - } else { 1141 + } else if (dn->d_inode && dn->d_inode != in) { 1178 1142 dout(" %p links to %p %llx.%llx, not %llx.%llx\n", 1179 - dn, in, ceph_ino(in), ceph_snap(in), 1180 - vino.ino, vino.snap); 1143 + dn, dn->d_inode, ceph_vinop(dn->d_inode), 1144 + ceph_vinop(in)); 1181 1145 have_lease = false; 1182 - in = NULL; 1183 1146 } 1184 1147 1185 1148 if (have_lease) 1186 1149 update_dentry_lease(dn, rinfo->dlease, session, 1187 1150 req->r_request_started); 1188 1151 dout(" final dn %p\n", dn); 1189 - i++; 1190 - } else if ((req->r_op == CEPH_MDS_OP_LOOKUPSNAP || 1191 - req->r_op == CEPH_MDS_OP_MKSNAP) && !req->r_aborted) { 1152 + } else if (!req->r_aborted && 1153 + (req->r_op == CEPH_MDS_OP_LOOKUPSNAP || 1154 + req->r_op == CEPH_MDS_OP_MKSNAP)) { 1192 1155 struct dentry *dn = req->r_dentry; 1193 1156 1194 1157 /* fill out a snapdir LOOKUPSNAP dentry */ ··· 1182 1177 ininfo = rinfo->targeti.in; 1183 1178 vino.ino = le64_to_cpu(ininfo->ino); 1184 1179 vino.snap = le64_to_cpu(ininfo->snapid); 1185 - in = ceph_get_inode(sb, vino); 1186 - if (IS_ERR(in)) { 1187 - pr_err("fill_inode get_inode badness %llx.%llx\n", 1188 - vino.ino, vino.snap); 1189 - err = PTR_ERR(in); 1190 - d_delete(dn); 1191 - goto done; 1192 - } 1193 1180 dout(" linking snapped dir %p to dn %p\n", in, dn); 1181 + ihold(in); 1194 1182 dn = splice_dentry(dn, in, NULL, true); 1195 1183 if (IS_ERR(dn)) { 1196 1184 err = PTR_ERR(dn); 1197 1185 goto done; 1198 1186 } 1199 1187 req->r_dentry = dn; /* may have spliced */ 1200 - ihold(in); 
1201 - rinfo->head->is_dentry = 1; /* fool notrace handlers */ 1202 1188 } 1203 - 1204 - if (rinfo->head->is_target) { 1205 - vino.ino = le64_to_cpu(rinfo->targeti.in->ino); 1206 - vino.snap = le64_to_cpu(rinfo->targeti.in->snapid); 1207 - 1208 - if (in == NULL || ceph_ino(in) != vino.ino || 1209 - ceph_snap(in) != vino.snap) { 1210 - in = ceph_get_inode(sb, vino); 1211 - if (IS_ERR(in)) { 1212 - err = PTR_ERR(in); 1213 - goto done; 1214 - } 1215 - } 1216 - req->r_target_inode = in; 1217 - 1218 - err = fill_inode(in, 1219 - &rinfo->targeti, NULL, 1220 - session, req->r_request_started, 1221 - (le32_to_cpu(rinfo->head->result) == 0) ? 1222 - req->r_fmode : -1, 1223 - &req->r_caps_reservation); 1224 - if (err < 0) { 1225 - pr_err("fill_inode badness %p %llx.%llx\n", 1226 - in, ceph_vinop(in)); 1227 - goto done; 1228 - } 1229 - } 1230 - 1231 1189 done: 1232 1190 dout("fill_trace done err=%d\n", err); 1233 1191 return err; ··· 1240 1272 struct qstr dname; 1241 1273 struct dentry *dn; 1242 1274 struct inode *in; 1243 - int err = 0, i; 1275 + int err = 0, ret, i; 1244 1276 struct inode *snapdir = NULL; 1245 1277 struct ceph_mds_request_head *rhead = req->r_request->front.iov_base; 1246 1278 struct ceph_dentry_info *di; ··· 1273 1305 ceph_fill_dirfrag(parent->d_inode, rinfo->dir_dir); 1274 1306 } 1275 1307 1308 + /* FIXME: release caps/leases if error occurs */ 1276 1309 for (i = 0; i < rinfo->dir_nr; i++) { 1277 1310 struct ceph_vino vino; 1278 1311 ··· 1298 1329 err = -ENOMEM; 1299 1330 goto out; 1300 1331 } 1301 - err = ceph_init_dentry(dn); 1302 - if (err < 0) { 1332 + ret = ceph_init_dentry(dn); 1333 + if (ret < 0) { 1303 1334 dput(dn); 1335 + err = ret; 1304 1336 goto out; 1305 1337 } 1306 1338 } else if (dn->d_inode && ··· 1321 1351 spin_unlock(&parent->d_lock); 1322 1352 } 1323 1353 1324 - di = dn->d_fsdata; 1325 - di->offset = ceph_make_fpos(frag, i + r_readdir_offset); 1326 - 1327 1354 /* inode */ 1328 1355 if (dn->d_inode) { 1329 1356 in = dn->d_inode; ··· 1333 1366 err = PTR_ERR(in); 1334 1367 goto out; 1335 1368 } 1336 - dn = splice_dentry(dn, in, NULL, false); 1337 - if (IS_ERR(dn)) 1338 - dn = NULL; 1339 1369 } 1340 1370 1341 1371 if (fill_inode(in, &rinfo->dir_in[i], NULL, session, 1342 1372 req->r_request_started, -1, 1343 1373 &req->r_caps_reservation) < 0) { 1344 1374 pr_err("fill_inode badness on %p\n", in); 1375 + if (!dn->d_inode) 1376 + iput(in); 1377 + d_drop(dn); 1345 1378 goto next_item; 1346 1379 } 1347 - if (dn) 1348 - update_dentry_lease(dn, rinfo->dir_dlease[i], 1349 - req->r_session, 1350 - req->r_request_started); 1380 + 1381 + if (!dn->d_inode) { 1382 + dn = splice_dentry(dn, in, NULL, false); 1383 + if (IS_ERR(dn)) { 1384 + err = PTR_ERR(dn); 1385 + dn = NULL; 1386 + goto next_item; 1387 + } 1388 + } 1389 + 1390 + di = dn->d_fsdata; 1391 + di->offset = ceph_make_fpos(frag, i + r_readdir_offset); 1392 + 1393 + update_dentry_lease(dn, rinfo->dir_dlease[i], 1394 + req->r_session, 1395 + req->r_request_started); 1351 1396 next_item: 1352 1397 if (dn) 1353 1398 dput(dn); 1354 1399 } 1355 - req->r_did_prepopulate = true; 1400 + if (err == 0) 1401 + req->r_did_prepopulate = true; 1356 1402 1357 1403 out: 1358 1404 if (snapdir) {
+5 -2
fs/pstore/platform.c
··· 443 443 pstore_get_records(0);
444 444
445 445 kmsg_dump_register(&pstore_dumper);
446 - pstore_register_console();
447 - pstore_register_ftrace();
446 +
447 + if ((psi->flags & PSTORE_FLAGS_FRAGILE) == 0) {
448 + pstore_register_console();
449 + pstore_register_ftrace();
450 + }
448 451
449 452 if (pstore_update_ms >= 0) {
450 453 pstore_timer.expires = jiffies +
+3 -5
fs/sysfs/file.c
··· 609 609 struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata; 610 610 struct kobject *kobj = attr_sd->s_parent->s_dir.kobj; 611 611 struct sysfs_open_file *of; 612 - bool has_read, has_write, has_mmap; 612 + bool has_read, has_write; 613 613 int error = -EACCES; 614 614 615 615 /* need attr_sd for attr and ops, its parent for kobj */ ··· 621 621 622 622 has_read = battr->read || battr->mmap; 623 623 has_write = battr->write || battr->mmap; 624 - has_mmap = battr->mmap; 625 624 } else { 626 625 const struct sysfs_ops *ops = sysfs_file_ops(attr_sd); 627 626 ··· 632 633 633 634 has_read = ops->show; 634 635 has_write = ops->store; 635 - has_mmap = false; 636 636 } 637 637 638 638 /* check perms and supported operations */ ··· 659 661 * open file has a separate mutex, it's okay as long as those don't 660 662 * happen on the same file. At this point, we can't easily give 661 663 * each file a separate locking class. Let's differentiate on 662 - * whether the file has mmap or not for now. 664 + * whether the file is bin or not for now. 663 665 */ 664 - if (has_mmap) 666 + if (sysfs_is_bin(attr_sd)) 665 667 mutex_init(&of->mutex); 666 668 else 667 669 mutex_init(&of->mutex);
+24 -8
fs/xfs/xfs_bmap.c
··· 1635 1635 * blocks at the end of the file which do not start at the previous data block, 1636 1636 * we will try to align the new blocks at stripe unit boundaries. 1637 1637 * 1638 - * Returns 0 in bma->aeof if the file (fork) is empty as any new write will be 1638 + * Returns 1 in bma->aeof if the file (fork) is empty as any new write will be 1639 1639 * at, or past the EOF. 1640 1640 */ 1641 1641 STATIC int ··· 1650 1650 bma->aeof = 0; 1651 1651 error = xfs_bmap_last_extent(NULL, bma->ip, whichfork, &rec, 1652 1652 &is_empty); 1653 - if (error || is_empty) 1653 + if (error) 1654 1654 return error; 1655 + 1656 + if (is_empty) { 1657 + bma->aeof = 1; 1658 + return 0; 1659 + } 1655 1660 1656 1661 /* 1657 1662 * Check if we are allocation or past the last extent, or at least into ··· 3648 3643 int isaligned; 3649 3644 int tryagain; 3650 3645 int error; 3646 + int stripe_align; 3651 3647 3652 3648 ASSERT(ap->length); 3653 3649 3654 3650 mp = ap->ip->i_mount; 3651 + 3652 + /* stripe alignment for allocation is determined by mount parameters */ 3653 + stripe_align = 0; 3654 + if (mp->m_swidth && (mp->m_flags & XFS_MOUNT_SWALLOC)) 3655 + stripe_align = mp->m_swidth; 3656 + else if (mp->m_dalign) 3657 + stripe_align = mp->m_dalign; 3658 + 3655 3659 align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0; 3656 3660 if (unlikely(align)) { 3657 3661 error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev, ··· 3669 3655 ASSERT(!error); 3670 3656 ASSERT(ap->length); 3671 3657 } 3658 + 3659 + 3672 3660 nullfb = *ap->firstblock == NULLFSBLOCK; 3673 3661 fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock); 3674 3662 if (nullfb) { ··· 3746 3730 */ 3747 3731 if (!ap->flist->xbf_low && ap->aeof) { 3748 3732 if (!ap->offset) { 3749 - args.alignment = mp->m_dalign; 3733 + args.alignment = stripe_align; 3750 3734 atype = args.type; 3751 3735 isaligned = 1; 3752 3736 /* ··· 3771 3755 * of minlen+alignment+slop doesn't go up 3772 3756 * between the calls. 3773 3757 */ 3774 - if (blen > mp->m_dalign && blen <= args.maxlen) 3775 - nextminlen = blen - mp->m_dalign; 3758 + if (blen > stripe_align && blen <= args.maxlen) 3759 + nextminlen = blen - stripe_align; 3776 3760 else 3777 3761 nextminlen = args.minlen; 3778 - if (nextminlen + mp->m_dalign > args.minlen + 1) 3762 + if (nextminlen + stripe_align > args.minlen + 1) 3779 3763 args.minalignslop = 3780 - nextminlen + mp->m_dalign - 3764 + nextminlen + stripe_align - 3781 3765 args.minlen - 1; 3782 3766 else 3783 3767 args.minalignslop = 0; ··· 3799 3783 */ 3800 3784 args.type = atype; 3801 3785 args.fsbno = ap->blkno; 3802 - args.alignment = mp->m_dalign; 3786 + args.alignment = stripe_align; 3803 3787 args.minlen = nextminlen; 3804 3788 args.minalignslop = 0; 3805 3789 isaligned = 1;
+12 -2
fs/xfs/xfs_bmap_util.c
··· 1187 1187 XFS_BUF_UNWRITE(bp); 1188 1188 XFS_BUF_READ(bp); 1189 1189 XFS_BUF_SET_ADDR(bp, xfs_fsb_to_db(ip, imap.br_startblock)); 1190 - xfsbdstrat(mp, bp); 1190 + 1191 + if (XFS_FORCED_SHUTDOWN(mp)) { 1192 + error = XFS_ERROR(EIO); 1193 + break; 1194 + } 1195 + xfs_buf_iorequest(bp); 1191 1196 error = xfs_buf_iowait(bp); 1192 1197 if (error) { 1193 1198 xfs_buf_ioerror_alert(bp, ··· 1205 1200 XFS_BUF_UNDONE(bp); 1206 1201 XFS_BUF_UNREAD(bp); 1207 1202 XFS_BUF_WRITE(bp); 1208 - xfsbdstrat(mp, bp); 1203 + 1204 + if (XFS_FORCED_SHUTDOWN(mp)) { 1205 + error = XFS_ERROR(EIO); 1206 + break; 1207 + } 1208 + xfs_buf_iorequest(bp); 1209 1209 error = xfs_buf_iowait(bp); 1210 1210 if (error) { 1211 1211 xfs_buf_ioerror_alert(bp,
+14 -23
fs/xfs/xfs_buf.c
··· 698 698 bp->b_flags |= XBF_READ; 699 699 bp->b_ops = ops; 700 700 701 - xfsbdstrat(target->bt_mount, bp); 701 + if (XFS_FORCED_SHUTDOWN(target->bt_mount)) { 702 + xfs_buf_relse(bp); 703 + return NULL; 704 + } 705 + xfs_buf_iorequest(bp); 702 706 xfs_buf_iowait(bp); 703 707 return bp; 704 708 } ··· 1093 1089 * This is meant for userdata errors; metadata bufs come with 1094 1090 * iodone functions attached, so that we can track down errors. 1095 1091 */ 1096 - STATIC int 1092 + int 1097 1093 xfs_bioerror_relse( 1098 1094 struct xfs_buf *bp) 1099 1095 { ··· 1156 1152 ASSERT(xfs_buf_islocked(bp)); 1157 1153 1158 1154 bp->b_flags |= XBF_WRITE; 1159 - bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q); 1155 + bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q | XBF_WRITE_FAIL); 1160 1156 1161 1157 xfs_bdstrat_cb(bp); 1162 1158 ··· 1166 1162 SHUTDOWN_META_IO_ERROR); 1167 1163 } 1168 1164 return error; 1169 - } 1170 - 1171 - /* 1172 - * Wrapper around bdstrat so that we can stop data from going to disk in case 1173 - * we are shutting down the filesystem. Typically user data goes thru this 1174 - * path; one of the exceptions is the superblock. 1175 - */ 1176 - void 1177 - xfsbdstrat( 1178 - struct xfs_mount *mp, 1179 - struct xfs_buf *bp) 1180 - { 1181 - if (XFS_FORCED_SHUTDOWN(mp)) { 1182 - trace_xfs_bdstrat_shut(bp, _RET_IP_); 1183 - xfs_bioerror_relse(bp); 1184 - return; 1185 - } 1186 - 1187 - xfs_buf_iorequest(bp); 1188 1165 } 1189 1166 1190 1167 STATIC void ··· 1501 1516 struct xfs_buf *bp; 1502 1517 bp = list_first_entry(&dispose, struct xfs_buf, b_lru); 1503 1518 list_del_init(&bp->b_lru); 1519 + if (bp->b_flags & XBF_WRITE_FAIL) { 1520 + xfs_alert(btp->bt_mount, 1521 + "Corruption Alert: Buffer at block 0x%llx had permanent write failures!\n" 1522 + "Please run xfs_repair to determine the extent of the problem.", 1523 + (long long)bp->b_bn); 1524 + } 1504 1525 xfs_buf_rele(bp); 1505 1526 } 1506 1527 if (loop++ != 0) ··· 1790 1799 1791 1800 blk_start_plug(&plug); 1792 1801 list_for_each_entry_safe(bp, n, io_list, b_list) { 1793 - bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC); 1802 + bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL); 1794 1803 bp->b_flags |= XBF_WRITE; 1795 1804 1796 1805 if (!wait) {
+7 -4
fs/xfs/xfs_buf.h
··· 45 45 #define XBF_ASYNC (1 << 4) /* initiator will not wait for completion */ 46 46 #define XBF_DONE (1 << 5) /* all pages in the buffer uptodate */ 47 47 #define XBF_STALE (1 << 6) /* buffer has been staled, do not find it */ 48 + #define XBF_WRITE_FAIL (1 << 24)/* async writes have failed on this buffer */ 48 49 49 50 /* I/O hints for the BIO layer */ 50 51 #define XBF_SYNCIO (1 << 10)/* treat this buffer as synchronous I/O */ ··· 71 70 { XBF_ASYNC, "ASYNC" }, \ 72 71 { XBF_DONE, "DONE" }, \ 73 72 { XBF_STALE, "STALE" }, \ 73 + { XBF_WRITE_FAIL, "WRITE_FAIL" }, \ 74 74 { XBF_SYNCIO, "SYNCIO" }, \ 75 75 { XBF_FUA, "FUA" }, \ 76 76 { XBF_FLUSH, "FLUSH" }, \ ··· 81 79 { _XBF_KMEM, "KMEM" }, \ 82 80 { _XBF_DELWRI_Q, "DELWRI_Q" }, \ 83 81 { _XBF_COMPOUND, "COMPOUND" } 82 + 84 83 85 84 /* 86 85 * Internal state flags. ··· 272 269 273 270 /* Buffer Read and Write Routines */ 274 271 extern int xfs_bwrite(struct xfs_buf *bp); 275 - 276 - extern void xfsbdstrat(struct xfs_mount *, struct xfs_buf *); 277 - 278 272 extern void xfs_buf_ioend(xfs_buf_t *, int); 279 273 extern void xfs_buf_ioerror(xfs_buf_t *, int); 280 274 extern void xfs_buf_ioerror_alert(struct xfs_buf *, const char *func); ··· 281 281 xfs_buf_rw_t); 282 282 #define xfs_buf_zero(bp, off, len) \ 283 283 xfs_buf_iomove((bp), (off), (len), NULL, XBRW_ZERO) 284 + 285 + extern int xfs_bioerror_relse(struct xfs_buf *); 284 286 285 287 static inline int xfs_buf_geterror(xfs_buf_t *bp) 286 288 { ··· 303 301 304 302 #define XFS_BUF_ZEROFLAGS(bp) \ 305 303 ((bp)->b_flags &= ~(XBF_READ|XBF_WRITE|XBF_ASYNC| \ 306 - XBF_SYNCIO|XBF_FUA|XBF_FLUSH)) 304 + XBF_SYNCIO|XBF_FUA|XBF_FLUSH| \ 305 + XBF_WRITE_FAIL)) 307 306 308 307 void xfs_buf_stale(struct xfs_buf *bp); 309 308 #define XFS_BUF_UNSTALE(bp) ((bp)->b_flags &= ~XBF_STALE)
+19 -2
fs/xfs/xfs_buf_item.c
··· 496 496 } 497 497 } 498 498 499 + /* 500 + * Buffer IO error rate limiting. Limit it to no more than 10 messages per 30 501 + * seconds so as to not spam logs too much on repeated detection of the same 502 + * buffer being bad.. 503 + */ 504 + 505 + DEFINE_RATELIMIT_STATE(xfs_buf_write_fail_rl_state, 30 * HZ, 10); 506 + 499 507 STATIC uint 500 508 xfs_buf_item_push( 501 509 struct xfs_log_item *lip, ··· 531 523 ASSERT(!(bip->bli_flags & XFS_BLI_STALE)); 532 524 533 525 trace_xfs_buf_item_push(bip); 526 + 527 + /* has a previous flush failed due to IO errors? */ 528 + if ((bp->b_flags & XBF_WRITE_FAIL) && 529 + ___ratelimit(&xfs_buf_write_fail_rl_state, "XFS:")) { 530 + xfs_warn(bp->b_target->bt_mount, 531 + "Detected failing async write on buffer block 0x%llx. Retrying async write.\n", 532 + (long long)bp->b_bn); 533 + } 534 534 535 535 if (!xfs_buf_delwri_queue(bp, buffer_list)) 536 536 rval = XFS_ITEM_FLUSHING; ··· 1112 1096 1113 1097 xfs_buf_ioerror(bp, 0); /* errno of 0 unsets the flag */ 1114 1098 1115 - if (!XFS_BUF_ISSTALE(bp)) { 1116 - bp->b_flags |= XBF_WRITE | XBF_ASYNC | XBF_DONE; 1099 + if (!(bp->b_flags & (XBF_STALE|XBF_WRITE_FAIL))) { 1100 + bp->b_flags |= XBF_WRITE | XBF_ASYNC | 1101 + XBF_DONE | XBF_WRITE_FAIL; 1117 1102 xfs_buf_iorequest(bp); 1118 1103 } else { 1119 1104 xfs_buf_relse(bp);
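The xfs_buf_item.c hunk above rate-limits the "failing async write" warning to 10 messages per 30 seconds. The same DEFINE_RATELIMIT_STATE()/___ratelimit() pairing in a stand-alone sketch (foo_* names are placeholders):

        #include <linux/jiffies.h>
        #include <linux/printk.h>
        #include <linux/ratelimit.h>

        /* Allow at most 10 messages per 30 seconds. */
        static DEFINE_RATELIMIT_STATE(foo_write_fail_rl, 30 * HZ, 10);

        static void foo_report_write_failure(unsigned long long block)
        {
                if (___ratelimit(&foo_write_fail_rl, "foo:"))
                        pr_warn("retrying failed async write at block 0x%llx\n", block);
        }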
+13 -13
fs/xfs/xfs_dir2_node.c
··· 2067 2067 */ 2068 2068 int /* error */ 2069 2069 xfs_dir2_node_removename( 2070 - xfs_da_args_t *args) /* operation arguments */ 2070 + struct xfs_da_args *args) /* operation arguments */ 2071 2071 { 2072 - xfs_da_state_blk_t *blk; /* leaf block */ 2072 + struct xfs_da_state_blk *blk; /* leaf block */ 2073 2073 int error; /* error return value */ 2074 2074 int rval; /* operation return value */ 2075 - xfs_da_state_t *state; /* btree cursor */ 2075 + struct xfs_da_state *state; /* btree cursor */ 2076 2076 2077 2077 trace_xfs_dir2_node_removename(args); 2078 2078 ··· 2084 2084 state->mp = args->dp->i_mount; 2085 2085 state->blocksize = state->mp->m_dirblksize; 2086 2086 state->node_ents = state->mp->m_dir_node_ents; 2087 - /* 2088 - * Look up the entry we're deleting, set up the cursor. 2089 - */ 2087 + 2088 + /* Look up the entry we're deleting, set up the cursor. */ 2090 2089 error = xfs_da3_node_lookup_int(state, &rval); 2091 2090 if (error) 2092 - rval = error; 2093 - /* 2094 - * Didn't find it, upper layer screwed up. 2095 - */ 2091 + goto out_free; 2092 + 2093 + /* Didn't find it, upper layer screwed up. */ 2096 2094 if (rval != EEXIST) { 2097 - xfs_da_state_free(state); 2098 - return rval; 2095 + error = rval; 2096 + goto out_free; 2099 2097 } 2098 + 2100 2099 blk = &state->path.blk[state->path.active - 1]; 2101 2100 ASSERT(blk->magic == XFS_DIR2_LEAFN_MAGIC); 2102 2101 ASSERT(state->extravalid); ··· 2106 2107 error = xfs_dir2_leafn_remove(args, blk->bp, blk->index, 2107 2108 &state->extrablk, &rval); 2108 2109 if (error) 2109 - return error; 2110 + goto out_free; 2110 2111 /* 2111 2112 * Fix the hash values up the btree. 2112 2113 */ ··· 2121 2122 */ 2122 2123 if (!error) 2123 2124 error = xfs_dir2_node_to_leaf(state); 2125 + out_free: 2124 2126 xfs_da_state_free(state); 2125 2127 return error; 2126 2128 }
+2 -1
fs/xfs/xfs_iops.c
··· 618 618 } 619 619 if (!gid_eq(igid, gid)) { 620 620 if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) { 621 - ASSERT(!XFS_IS_PQUOTA_ON(mp)); 621 + ASSERT(xfs_sb_version_has_pquotino(&mp->m_sb) || 622 + !XFS_IS_PQUOTA_ON(mp)); 622 623 ASSERT(mask & ATTR_GID); 623 624 ASSERT(gdqp); 624 625 olddquot2 = xfs_qm_vop_chown(tp, ip,
+11 -2
fs/xfs/xfs_log_recover.c
··· 193 193 bp->b_io_length = nbblks; 194 194 bp->b_error = 0; 195 195 196 - xfsbdstrat(log->l_mp, bp); 196 + if (XFS_FORCED_SHUTDOWN(log->l_mp)) 197 + return XFS_ERROR(EIO); 198 + 199 + xfs_buf_iorequest(bp); 197 200 error = xfs_buf_iowait(bp); 198 201 if (error) 199 202 xfs_buf_ioerror_alert(bp, __func__); ··· 4400 4397 XFS_BUF_READ(bp); 4401 4398 XFS_BUF_UNASYNC(bp); 4402 4399 bp->b_ops = &xfs_sb_buf_ops; 4403 - xfsbdstrat(log->l_mp, bp); 4400 + 4401 + if (XFS_FORCED_SHUTDOWN(log->l_mp)) { 4402 + xfs_buf_relse(bp); 4403 + return XFS_ERROR(EIO); 4404 + } 4405 + 4406 + xfs_buf_iorequest(bp); 4404 4407 error = xfs_buf_iowait(bp); 4405 4408 if (error) { 4406 4409 xfs_buf_ioerror_alert(bp, __func__);
+53 -27
fs/xfs/xfs_qm.c
··· 134 134 { 135 135 struct xfs_mount *mp = dqp->q_mount; 136 136 struct xfs_quotainfo *qi = mp->m_quotainfo; 137 - struct xfs_dquot *gdqp = NULL; 138 - struct xfs_dquot *pdqp = NULL; 139 137 140 138 xfs_dqlock(dqp); 141 139 if ((dqp->dq_flags & XFS_DQ_FREEING) || dqp->q_nrefs != 0) { 142 140 xfs_dqunlock(dqp); 143 141 return EAGAIN; 144 - } 145 - 146 - /* 147 - * If this quota has a hint attached, prepare for releasing it now. 148 - */ 149 - gdqp = dqp->q_gdquot; 150 - if (gdqp) { 151 - xfs_dqlock(gdqp); 152 - dqp->q_gdquot = NULL; 153 - } 154 - 155 - pdqp = dqp->q_pdquot; 156 - if (pdqp) { 157 - xfs_dqlock(pdqp); 158 - dqp->q_pdquot = NULL; 159 142 } 160 143 161 144 dqp->dq_flags |= XFS_DQ_FREEING; ··· 189 206 XFS_STATS_DEC(xs_qm_dquot_unused); 190 207 191 208 xfs_qm_dqdestroy(dqp); 209 + return 0; 210 + } 211 + 212 + /* 213 + * Release the group or project dquot pointers the user dquots maybe carrying 214 + * around as a hint, and proceed to purge the user dquot cache if requested. 215 + */ 216 + STATIC int 217 + xfs_qm_dqpurge_hints( 218 + struct xfs_dquot *dqp, 219 + void *data) 220 + { 221 + struct xfs_dquot *gdqp = NULL; 222 + struct xfs_dquot *pdqp = NULL; 223 + uint flags = *((uint *)data); 224 + 225 + xfs_dqlock(dqp); 226 + if (dqp->dq_flags & XFS_DQ_FREEING) { 227 + xfs_dqunlock(dqp); 228 + return EAGAIN; 229 + } 230 + 231 + /* If this quota has a hint attached, prepare for releasing it now */ 232 + gdqp = dqp->q_gdquot; 233 + if (gdqp) 234 + dqp->q_gdquot = NULL; 235 + 236 + pdqp = dqp->q_pdquot; 237 + if (pdqp) 238 + dqp->q_pdquot = NULL; 239 + 240 + xfs_dqunlock(dqp); 192 241 193 242 if (gdqp) 194 - xfs_qm_dqput(gdqp); 243 + xfs_qm_dqrele(gdqp); 195 244 if (pdqp) 196 - xfs_qm_dqput(pdqp); 245 + xfs_qm_dqrele(pdqp); 246 + 247 + if (flags & XFS_QMOPT_UQUOTA) 248 + return xfs_qm_dqpurge(dqp, NULL); 249 + 197 250 return 0; 198 251 } 199 252 ··· 241 222 struct xfs_mount *mp, 242 223 uint flags) 243 224 { 244 - if (flags & XFS_QMOPT_UQUOTA) 245 - xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge, NULL); 225 + /* 226 + * We have to release group/project dquot hint(s) from the user dquot 227 + * at first if they are there, otherwise we would run into an infinite 228 + * loop while walking through radix tree to purge other type of dquots 229 + * since their refcount is not zero if the user dquot refers to them 230 + * as hint. 231 + * 232 + * Call the special xfs_qm_dqpurge_hints() will end up go through the 233 + * general xfs_qm_dqpurge() against user dquot cache if requested. 
234 + */ 235 + xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge_hints, &flags); 236 + 246 237 if (flags & XFS_QMOPT_GQUOTA) 247 238 xfs_qm_dquot_walk(mp, XFS_DQ_GROUP, xfs_qm_dqpurge, NULL); 248 239 if (flags & XFS_QMOPT_PQUOTA) ··· 2111 2082 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 2112 2083 ASSERT(XFS_IS_QUOTA_RUNNING(mp)); 2113 2084 2114 - if (udqp) { 2085 + if (udqp && XFS_IS_UQUOTA_ON(mp)) { 2115 2086 ASSERT(ip->i_udquot == NULL); 2116 - ASSERT(XFS_IS_UQUOTA_ON(mp)); 2117 2087 ASSERT(ip->i_d.di_uid == be32_to_cpu(udqp->q_core.d_id)); 2118 2088 2119 2089 ip->i_udquot = xfs_qm_dqhold(udqp); 2120 2090 xfs_trans_mod_dquot(tp, udqp, XFS_TRANS_DQ_ICOUNT, 1); 2121 2091 } 2122 - if (gdqp) { 2092 + if (gdqp && XFS_IS_GQUOTA_ON(mp)) { 2123 2093 ASSERT(ip->i_gdquot == NULL); 2124 - ASSERT(XFS_IS_GQUOTA_ON(mp)); 2125 2094 ASSERT(ip->i_d.di_gid == be32_to_cpu(gdqp->q_core.d_id)); 2126 2095 ip->i_gdquot = xfs_qm_dqhold(gdqp); 2127 2096 xfs_trans_mod_dquot(tp, gdqp, XFS_TRANS_DQ_ICOUNT, 1); 2128 2097 } 2129 - if (pdqp) { 2098 + if (pdqp && XFS_IS_PQUOTA_ON(mp)) { 2130 2099 ASSERT(ip->i_pdquot == NULL); 2131 - ASSERT(XFS_IS_PQUOTA_ON(mp)); 2132 2100 ASSERT(xfs_get_projid(ip) == be32_to_cpu(pdqp->q_core.d_id)); 2133 2101 2134 2102 ip->i_pdquot = xfs_qm_dqhold(pdqp);
+12 -1
fs/xfs/xfs_trans_buf.c
··· 314 314 ASSERT(bp->b_iodone == NULL); 315 315 XFS_BUF_READ(bp); 316 316 bp->b_ops = ops; 317 - xfsbdstrat(tp->t_mountp, bp); 317 + 318 + /* 319 + * XXX(hch): clean up the error handling here to be less 320 + * of a mess.. 321 + */ 322 + if (XFS_FORCED_SHUTDOWN(mp)) { 323 + trace_xfs_bdstrat_shut(bp, _RET_IP_); 324 + xfs_bioerror_relse(bp); 325 + } else { 326 + xfs_buf_iorequest(bp); 327 + } 328 + 318 329 error = xfs_buf_iowait(bp); 319 330 if (error) { 320 331 xfs_buf_ioerror_alert(bp, __func__);
+3 -4
include/asm-generic/pgtable.h
··· 217 217 #endif 218 218 219 219 #ifndef pte_accessible 220 - # define pte_accessible(pte) ((void)(pte),1) 220 + # define pte_accessible(mm, pte) ((void)(pte), 1) 221 221 #endif 222 222 223 223 #ifndef flush_tlb_fix_spurious_fault ··· 599 599 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 600 600 barrier(); 601 601 #endif 602 - if (pmd_none(pmdval)) 602 + if (pmd_none(pmdval) || pmd_trans_huge(pmdval)) 603 603 return 1; 604 604 if (unlikely(pmd_bad(pmdval))) { 605 - if (!pmd_trans_huge(pmdval)) 606 - pmd_clear_bad(pmd); 605 + pmd_clear_bad(pmd); 607 606 return 1; 608 607 } 609 608 return 0;
+11 -24
include/asm-generic/preempt.h
··· 3 3 4 4 #include <linux/thread_info.h> 5 5 6 - /* 7 - * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users 8 - * that think a non-zero value indicates we cannot preempt. 9 - */ 6 + #define PREEMPT_ENABLED (0) 7 + 10 8 static __always_inline int preempt_count(void) 11 9 { 12 - return current_thread_info()->preempt_count & ~PREEMPT_NEED_RESCHED; 10 + return current_thread_info()->preempt_count; 13 11 } 14 12 15 13 static __always_inline int *preempt_count_ptr(void) ··· 15 17 return &current_thread_info()->preempt_count; 16 18 } 17 19 18 - /* 19 - * We now loose PREEMPT_NEED_RESCHED and cause an extra reschedule; however the 20 - * alternative is loosing a reschedule. Better schedule too often -- also this 21 - * should be a very rare operation. 22 - */ 23 20 static __always_inline void preempt_count_set(int pc) 24 21 { 25 22 *preempt_count_ptr() = pc; ··· 34 41 task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \ 35 42 } while (0) 36 43 37 - /* 38 - * We fold the NEED_RESCHED bit into the preempt count such that 39 - * preempt_enable() can decrement and test for needing to reschedule with a 40 - * single instruction. 41 - * 42 - * We invert the actual bit, so that when the decrement hits 0 we know we both 43 - * need to resched (the bit is cleared) and can resched (no preempt count). 44 - */ 45 - 46 44 static __always_inline void set_preempt_need_resched(void) 47 45 { 48 - *preempt_count_ptr() &= ~PREEMPT_NEED_RESCHED; 49 46 } 50 47 51 48 static __always_inline void clear_preempt_need_resched(void) 52 49 { 53 - *preempt_count_ptr() |= PREEMPT_NEED_RESCHED; 54 50 } 55 51 56 52 static __always_inline bool test_preempt_need_resched(void) 57 53 { 58 - return !(*preempt_count_ptr() & PREEMPT_NEED_RESCHED); 54 + return false; 59 55 } 60 56 61 57 /* ··· 63 81 64 82 static __always_inline bool __preempt_count_dec_and_test(void) 65 83 { 66 - return !--*preempt_count_ptr(); 84 + /* 85 + * Because of load-store architectures cannot do per-cpu atomic 86 + * operations; we cannot use PREEMPT_NEED_RESCHED because it might get 87 + * lost. 88 + */ 89 + return !--*preempt_count_ptr() && tif_need_resched(); 67 90 } 68 91 69 92 /* ··· 76 89 */ 77 90 static __always_inline bool should_resched(void) 78 91 { 79 - return unlikely(!*preempt_count_ptr()); 92 + return unlikely(!preempt_count() && tif_need_resched()); 80 93 } 81 94 82 95 #ifdef CONFIG_PREEMPT
+1 -1
include/linux/lockref.h
··· 19 19 20 20 #define USE_CMPXCHG_LOCKREF \ 21 21 (IS_ENABLED(CONFIG_ARCH_USE_CMPXCHG_LOCKREF) && \ 22 - IS_ENABLED(CONFIG_SMP) && !BLOATED_SPINLOCKS) 22 + IS_ENABLED(CONFIG_SMP) && SPINLOCK_SIZE <= 4) 23 23 24 24 struct lockref { 25 25 union {
+30
include/linux/math64.h
··· 133 133 return ret; 134 134 } 135 135 136 + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) 137 + 138 + #ifndef mul_u64_u32_shr 139 + static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) 140 + { 141 + return (u64)(((unsigned __int128)a * mul) >> shift); 142 + } 143 + #endif /* mul_u64_u32_shr */ 144 + 145 + #else 146 + 147 + #ifndef mul_u64_u32_shr 148 + static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) 149 + { 150 + u32 ah, al; 151 + u64 ret; 152 + 153 + al = a; 154 + ah = a >> 32; 155 + 156 + ret = ((u64)al * mul) >> shift; 157 + if (ah) 158 + ret += ((u64)ah * mul) << (32 - shift); 159 + 160 + return ret; 161 + } 162 + #endif /* mul_u64_u32_shr */ 163 + 164 + #endif 165 + 136 166 #endif /* _LINUX_MATH64_H */
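mul_u64_u32_shr() computes (a * mul) >> shift without overflowing 64 bits, which is the usual way a mult/shift pair is applied, for example when converting clock cycles to nanoseconds. A userspace sketch of the non-__int128 fallback added above, with made-up mult/shift values chosen only for the example:

    #include <stdint.h>
    #include <stdio.h>

    /* Portable variant of the 32x32 fallback added to math64.h above. */
    static uint64_t mul_u64_u32_shr(uint64_t a, uint32_t mul, unsigned int shift)
    {
            uint32_t ah = a >> 32, al = (uint32_t)a;
            uint64_t ret;

            ret = ((uint64_t)al * mul) >> shift;
            if (ah)
                    ret += ((uint64_t)ah * mul) << (32 - shift);
            return ret;
    }

    int main(void)
    {
            /* Hypothetical clock: mult/shift chosen so 1 cycle ~= 10.5 ns */
            uint32_t mult = (10u << 24) | (1u << 23);       /* 10.5 in 8.24 fixed point */
            unsigned int shift = 24;
            uint64_t cycles = 1000000;

            printf("%llu cycles -> %llu ns\n",
                   (unsigned long long)cycles,
                   (unsigned long long)mul_u64_u32_shr(cycles, mult, shift));
            return 0;
    }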
+11 -1
include/linux/migrate.h
··· 55 55 struct page *newpage, struct page *page); 56 56 extern int migrate_page_move_mapping(struct address_space *mapping, 57 57 struct page *newpage, struct page *page, 58 - struct buffer_head *head, enum migrate_mode mode); 58 + struct buffer_head *head, enum migrate_mode mode, 59 + int extra_count); 59 60 #else 60 61 61 62 static inline void putback_lru_pages(struct list_head *l) {} ··· 91 90 #endif /* CONFIG_MIGRATION */ 92 91 93 92 #ifdef CONFIG_NUMA_BALANCING 93 + extern bool pmd_trans_migrating(pmd_t pmd); 94 + extern void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd); 94 95 extern int migrate_misplaced_page(struct page *page, 95 96 struct vm_area_struct *vma, int node); 96 97 extern bool migrate_ratelimited(int node); 97 98 #else 99 + static inline bool pmd_trans_migrating(pmd_t pmd) 100 + { 101 + return false; 102 + } 103 + static inline void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd) 104 + { 105 + } 98 106 static inline int migrate_misplaced_page(struct page *page, 99 107 struct vm_area_struct *vma, int node) 100 108 {
+3 -3
include/linux/mm.h
··· 1317 1317 #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */ 1318 1318 1319 1319 #if USE_SPLIT_PTE_PTLOCKS 1320 - #if BLOATED_SPINLOCKS 1320 + #if ALLOC_SPLIT_PTLOCKS 1321 1321 extern bool ptlock_alloc(struct page *page); 1322 1322 extern void ptlock_free(struct page *page); 1323 1323 ··· 1325 1325 { 1326 1326 return page->ptl; 1327 1327 } 1328 - #else /* BLOATED_SPINLOCKS */ 1328 + #else /* ALLOC_SPLIT_PTLOCKS */ 1329 1329 static inline bool ptlock_alloc(struct page *page) 1330 1330 { 1331 1331 return true; ··· 1339 1339 { 1340 1340 return &page->ptl; 1341 1341 } 1342 - #endif /* BLOATED_SPINLOCKS */ 1342 + #endif /* ALLOC_SPLIT_PTLOCKS */ 1343 1343 1344 1344 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) 1345 1345 {
+51 -1
include/linux/mm_types.h
··· 26 26 #define USE_SPLIT_PTE_PTLOCKS (NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS) 27 27 #define USE_SPLIT_PMD_PTLOCKS (USE_SPLIT_PTE_PTLOCKS && \ 28 28 IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK)) 29 + #define ALLOC_SPLIT_PTLOCKS (SPINLOCK_SIZE > BITS_PER_LONG/8) 29 30 30 31 /* 31 32 * Each physical page in the system has a struct page associated with ··· 156 155 * system if PG_buddy is set. 157 156 */ 158 157 #if USE_SPLIT_PTE_PTLOCKS 159 - #if BLOATED_SPINLOCKS 158 + #if ALLOC_SPLIT_PTLOCKS 160 159 spinlock_t *ptl; 161 160 #else 162 161 spinlock_t ptl; ··· 444 443 /* numa_scan_seq prevents two threads setting pte_numa */ 445 444 int numa_scan_seq; 446 445 #endif 446 + #if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION) 447 + /* 448 + * An operation with batched TLB flushing is going on. Anything that 449 + * can move process memory needs to flush the TLB when moving a 450 + * PROT_NONE or PROT_NUMA mapped page. 451 + */ 452 + bool tlb_flush_pending; 453 + #endif 447 454 struct uprobes_state uprobes_state; 448 455 }; 449 456 ··· 467 458 { 468 459 return mm->cpu_vm_mask_var; 469 460 } 461 + 462 + #if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION) 463 + /* 464 + * Memory barriers to keep this state in sync are graciously provided by 465 + * the page table locks, outside of which no page table modifications happen. 466 + * The barriers below prevent the compiler from re-ordering the instructions 467 + * around the memory barriers that are already present in the code. 468 + */ 469 + static inline bool mm_tlb_flush_pending(struct mm_struct *mm) 470 + { 471 + barrier(); 472 + return mm->tlb_flush_pending; 473 + } 474 + static inline void set_tlb_flush_pending(struct mm_struct *mm) 475 + { 476 + mm->tlb_flush_pending = true; 477 + 478 + /* 479 + * Guarantee that the tlb_flush_pending store does not leak into the 480 + * critical section updating the page tables 481 + */ 482 + smp_mb__before_spinlock(); 483 + } 484 + /* Clearing is done after a TLB flush, which also provides a barrier. */ 485 + static inline void clear_tlb_flush_pending(struct mm_struct *mm) 486 + { 487 + barrier(); 488 + mm->tlb_flush_pending = false; 489 + } 490 + #else 491 + static inline bool mm_tlb_flush_pending(struct mm_struct *mm) 492 + { 493 + return false; 494 + } 495 + static inline void set_tlb_flush_pending(struct mm_struct *mm) 496 + { 497 + } 498 + static inline void clear_tlb_flush_pending(struct mm_struct *mm) 499 + { 500 + } 501 + #endif 470 502 471 503 #endif /* _LINUX_MM_TYPES_H */
+3
include/linux/pstore.h
··· 51 51 char *buf; 52 52 size_t bufsize; 53 53 struct mutex read_mutex; /* serialize open/read/close */ 54 + int flags; 54 55 int (*open)(struct pstore_info *psi); 55 56 int (*close)(struct pstore_info *psi); 56 57 ssize_t (*read)(u64 *id, enum pstore_type_id *type, ··· 70 69 struct pstore_info *psi); 71 70 void *data; 72 71 }; 72 + 73 + #define PSTORE_FLAGS_FRAGILE 1 73 74 74 75 #ifdef CONFIG_PSTORE 75 76 extern int pstore_register(struct pstore_info *);
+1
include/linux/reboot.h
··· 43 43 * Architecture-specific implementations of sys_reboot commands. 44 44 */ 45 45 46 + extern void migrate_to_reboot_cpu(void); 46 47 extern void machine_restart(char *cmd); 47 48 extern void machine_halt(void); 48 49 extern void machine_power_off(void);
+2 -3
include/linux/sched.h
··· 440 440 .sum_exec_runtime = 0, \ 441 441 } 442 442 443 - #define PREEMPT_ENABLED (PREEMPT_NEED_RESCHED) 444 - 445 443 #ifdef CONFIG_PREEMPT_COUNT 446 444 #define PREEMPT_DISABLED (1 + PREEMPT_ENABLED) 447 445 #else ··· 930 932 struct uts_namespace; 931 933 932 934 struct load_weight { 933 - unsigned long weight, inv_weight; 935 + unsigned long weight; 936 + u32 inv_weight; 934 937 }; 935 938 936 939 struct sched_avg {
+1 -4
include/target/target_core_base.h
··· 517 517 u32 acl_index; 518 518 #define MAX_ACL_TAG_SIZE 64 519 519 char acl_tag[MAX_ACL_TAG_SIZE]; 520 - u64 num_cmds; 521 - u64 read_bytes; 522 - u64 write_bytes; 523 - spinlock_t stats_lock; 524 520 /* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */ 525 521 atomic_t acl_pr_ref_count; 526 522 struct se_dev_entry **device_list; ··· 620 624 u32 unmap_granularity; 621 625 u32 unmap_granularity_alignment; 622 626 u32 max_write_same_len; 627 + u32 max_bytes_per_io; 623 628 struct se_device *da_dev; 624 629 struct config_group da_group; 625 630 };
+1
include/uapi/drm/vmwgfx_drm.h
··· 75 75 #define DRM_VMW_PARAM_FIFO_CAPS 4 76 76 #define DRM_VMW_PARAM_MAX_FB_SIZE 5 77 77 #define DRM_VMW_PARAM_FIFO_HW_VERSION 6 78 + #define DRM_VMW_PARAM_MAX_SURF_MEMORY 7 78 79 79 80 /** 80 81 * struct drm_vmw_getparam_arg
+1
include/uapi/linux/perf_event.h
··· 679 679 * 680 680 * { u64 weight; } && PERF_SAMPLE_WEIGHT 681 681 * { u64 data_src; } && PERF_SAMPLE_DATA_SRC 682 + * { u64 transaction; } && PERF_SAMPLE_TRANSACTION 682 683 * }; 683 684 */ 684 685 PERF_RECORD_SAMPLE = 9,
+5 -5
include/xen/interface/io/blkif.h
··· 146 146 struct blkif_request_rw { 147 147 uint8_t nr_segments; /* number of segments */ 148 148 blkif_vdev_t handle; /* only for read/write requests */ 149 - #ifdef CONFIG_X86_64 149 + #ifndef CONFIG_X86_32 150 150 uint32_t _pad1; /* offsetof(blkif_request,u.rw.id) == 8 */ 151 151 #endif 152 152 uint64_t id; /* private guest value, echoed in resp */ ··· 163 163 uint8_t flag; /* BLKIF_DISCARD_SECURE or zero. */ 164 164 #define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */ 165 165 blkif_vdev_t _pad1; /* only for read/write requests */ 166 - #ifdef CONFIG_X86_64 166 + #ifndef CONFIG_X86_32 167 167 uint32_t _pad2; /* offsetof(blkif_req..,u.discard.id)==8*/ 168 168 #endif 169 169 uint64_t id; /* private guest value, echoed in resp */ ··· 175 175 struct blkif_request_other { 176 176 uint8_t _pad1; 177 177 blkif_vdev_t _pad2; /* only for read/write requests */ 178 - #ifdef CONFIG_X86_64 178 + #ifndef CONFIG_X86_32 179 179 uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/ 180 180 #endif 181 181 uint64_t id; /* private guest value, echoed in resp */ ··· 184 184 struct blkif_request_indirect { 185 185 uint8_t indirect_op; 186 186 uint16_t nr_segments; 187 - #ifdef CONFIG_X86_64 187 + #ifndef CONFIG_X86_32 188 188 uint32_t _pad1; /* offsetof(blkif_...,u.indirect.id) == 8 */ 189 189 #endif 190 190 uint64_t id; ··· 192 192 blkif_vdev_t handle; 193 193 uint16_t _pad2; 194 194 grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST]; 195 - #ifdef CONFIG_X86_64 195 + #ifndef CONFIG_X86_32 196 196 uint32_t _pad3; /* make it 64 byte aligned */ 197 197 #else 198 198 uint64_t _pad3; /* make it 64 byte aligned */
+6
init/Kconfig
··· 809 809 config ARCH_SUPPORTS_NUMA_BALANCING 810 810 bool 811 811 812 + # 813 + # For architectures that know their GCC __int128 support is sound 814 + # 815 + config ARCH_SUPPORTS_INT128 816 + bool 817 + 812 818 # For architectures that (ab)use NUMA to represent different memory regions 813 819 # all cpu-local but of different latencies, such as SuperH. 814 820 #
+4 -3
kernel/Makefile
··· 137 137 ############################################################################### 138 138 ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y) 139 139 X509_CERTIFICATES-y := $(wildcard *.x509) $(wildcard $(srctree)/*.x509) 140 - X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += signing_key.x509 141 - X509_CERTIFICATES := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \ 140 + X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += $(objtree)/signing_key.x509 141 + X509_CERTIFICATES-raw := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \ 142 142 $(or $(realpath $(CERT)),$(CERT)))) 143 + X509_CERTIFICATES := $(subst $(realpath $(objtree))/,,$(X509_CERTIFICATES-raw)) 143 144 144 145 ifeq ($(X509_CERTIFICATES),) 145 146 $(warning *** No X.509 certificates found ***) ··· 165 164 targets += $(obj)/.x509.list 166 165 $(obj)/.x509.list: 167 166 @echo $(X509_CERTIFICATES) >$@ 167 + endif 168 168 169 169 clean-files := x509_certificate_list .x509.list 170 - endif 171 170 172 171 ifeq ($(CONFIG_MODULE_SIG),y) 173 172 ###############################################################################
+1 -1
kernel/bounds.c
··· 22 22 #ifdef CONFIG_SMP 23 23 DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS)); 24 24 #endif 25 - DEFINE(BLOATED_SPINLOCKS, sizeof(spinlock_t) > sizeof(int)); 25 + DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t)); 26 26 /* End of constants */ 27 27 }
+18 -3
kernel/events/core.c
··· 1396 1396 if (event->state != PERF_EVENT_STATE_ACTIVE) 1397 1397 return; 1398 1398 1399 + perf_pmu_disable(event->pmu); 1400 + 1399 1401 event->state = PERF_EVENT_STATE_INACTIVE; 1400 1402 if (event->pending_disable) { 1401 1403 event->pending_disable = 0; ··· 1414 1412 ctx->nr_freq--; 1415 1413 if (event->attr.exclusive || !cpuctx->active_oncpu) 1416 1414 cpuctx->exclusive = 0; 1415 + 1416 + perf_pmu_enable(event->pmu); 1417 1417 } 1418 1418 1419 1419 static void ··· 1656 1652 struct perf_event_context *ctx) 1657 1653 { 1658 1654 u64 tstamp = perf_event_time(event); 1655 + int ret = 0; 1659 1656 1660 1657 if (event->state <= PERF_EVENT_STATE_OFF) 1661 1658 return 0; ··· 1679 1674 */ 1680 1675 smp_wmb(); 1681 1676 1677 + perf_pmu_disable(event->pmu); 1678 + 1682 1679 if (event->pmu->add(event, PERF_EF_START)) { 1683 1680 event->state = PERF_EVENT_STATE_INACTIVE; 1684 1681 event->oncpu = -1; 1685 - return -EAGAIN; 1682 + ret = -EAGAIN; 1683 + goto out; 1686 1684 } 1687 1685 1688 1686 event->tstamp_running += tstamp - event->tstamp_stopped; ··· 1701 1693 if (event->attr.exclusive) 1702 1694 cpuctx->exclusive = 1; 1703 1695 1704 - return 0; 1696 + out: 1697 + perf_pmu_enable(event->pmu); 1698 + 1699 + return ret; 1705 1700 } 1706 1701 1707 1702 static int ··· 2754 2743 if (!event_filter_match(event)) 2755 2744 continue; 2756 2745 2746 + perf_pmu_disable(event->pmu); 2747 + 2757 2748 hwc = &event->hw; 2758 2749 2759 2750 if (hwc->interrupts == MAX_INTERRUPTS) { ··· 2765 2752 } 2766 2753 2767 2754 if (!event->attr.freq || !event->attr.sample_freq) 2768 - continue; 2755 + goto next; 2769 2756 2770 2757 /* 2771 2758 * stop the event and update event->count ··· 2787 2774 perf_adjust_period(event, period, delta, false); 2788 2775 2789 2776 event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0); 2777 + next: 2778 + perf_pmu_enable(event->pmu); 2790 2779 } 2791 2780 2792 2781 perf_pmu_enable(ctx->pmu);
+1
kernel/fork.c
··· 537 537 spin_lock_init(&mm->page_table_lock); 538 538 mm_init_aio(mm); 539 539 mm_init_owner(mm, p); 540 + clear_tlb_flush_pending(mm); 540 541 541 542 if (likely(!mm_alloc_pgd(mm))) { 542 543 mm->def_flags = 0;
+1
kernel/kexec.c
··· 1680 1680 { 1681 1681 kexec_in_progress = true; 1682 1682 kernel_restart_prepare(NULL); 1683 + migrate_to_reboot_cpu(); 1683 1684 printk(KERN_EMERG "Starting new kernel\n"); 1684 1685 machine_shutdown(); 1685 1686 }
+1 -1
kernel/reboot.c
··· 104 104 } 105 105 EXPORT_SYMBOL(unregister_reboot_notifier); 106 106 107 - static void migrate_to_reboot_cpu(void) 107 + void migrate_to_reboot_cpu(void) 108 108 { 109 109 /* The boot cpu is always logical cpu 0 */ 110 110 int cpu = reboot_cpu;
+4 -2
kernel/sched/core.c
··· 4902 4902 static void update_top_cache_domain(int cpu) 4903 4903 { 4904 4904 struct sched_domain *sd; 4905 + struct sched_domain *busy_sd = NULL; 4905 4906 int id = cpu; 4906 4907 int size = 1; 4907 4908 ··· 4910 4909 if (sd) { 4911 4910 id = cpumask_first(sched_domain_span(sd)); 4912 4911 size = cpumask_weight(sched_domain_span(sd)); 4913 - sd = sd->parent; /* sd_busy */ 4912 + busy_sd = sd->parent; /* sd_busy */ 4914 4913 } 4915 - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd); 4914 + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd); 4916 4915 4917 4916 rcu_assign_pointer(per_cpu(sd_llc, cpu), sd); 4918 4917 per_cpu(sd_llc_size, cpu) = size; ··· 5113 5112 * die on a /0 trap. 5114 5113 */ 5115 5114 sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span); 5115 + sg->sgp->power_orig = sg->sgp->power; 5116 5116 5117 5117 /* 5118 5118 * Make sure the first group of this domain contains the
+72 -81
kernel/sched/fair.c
··· 178 178 update_sysctl(); 179 179 } 180 180 181 - #if BITS_PER_LONG == 32 182 - # define WMULT_CONST (~0UL) 183 - #else 184 - # define WMULT_CONST (1UL << 32) 185 - #endif 186 - 181 + #define WMULT_CONST (~0U) 187 182 #define WMULT_SHIFT 32 188 183 189 - /* 190 - * Shift right and round: 191 - */ 192 - #define SRR(x, y) (((x) + (1UL << ((y) - 1))) >> (y)) 193 - 194 - /* 195 - * delta *= weight / lw 196 - */ 197 - static unsigned long 198 - calc_delta_mine(unsigned long delta_exec, unsigned long weight, 199 - struct load_weight *lw) 184 + static void __update_inv_weight(struct load_weight *lw) 200 185 { 201 - u64 tmp; 186 + unsigned long w; 202 187 203 - /* 204 - * weight can be less than 2^SCHED_LOAD_RESOLUTION for task group sched 205 - * entities since MIN_SHARES = 2. Treat weight as 1 if less than 206 - * 2^SCHED_LOAD_RESOLUTION. 207 - */ 208 - if (likely(weight > (1UL << SCHED_LOAD_RESOLUTION))) 209 - tmp = (u64)delta_exec * scale_load_down(weight); 188 + if (likely(lw->inv_weight)) 189 + return; 190 + 191 + w = scale_load_down(lw->weight); 192 + 193 + if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST)) 194 + lw->inv_weight = 1; 195 + else if (unlikely(!w)) 196 + lw->inv_weight = WMULT_CONST; 210 197 else 211 - tmp = (u64)delta_exec; 198 + lw->inv_weight = WMULT_CONST / w; 199 + } 212 200 213 - if (!lw->inv_weight) { 214 - unsigned long w = scale_load_down(lw->weight); 201 + /* 202 + * delta_exec * weight / lw.weight 203 + * OR 204 + * (delta_exec * (weight * lw->inv_weight)) >> WMULT_SHIFT 205 + * 206 + * Either weight := NICE_0_LOAD and lw \e prio_to_wmult[], in which case 207 + * we're guaranteed shift stays positive because inv_weight is guaranteed to 208 + * fit 32 bits, and NICE_0_LOAD gives another 10 bits; therefore shift >= 22. 209 + * 210 + * Or, weight =< lw.weight (because lw.weight is the runqueue weight), thus 211 + * weight/lw.weight <= 1, and therefore our shift will also be positive. 
212 + */ 213 + static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw) 214 + { 215 + u64 fact = scale_load_down(weight); 216 + int shift = WMULT_SHIFT; 215 217 216 - if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST)) 217 - lw->inv_weight = 1; 218 - else if (unlikely(!w)) 219 - lw->inv_weight = WMULT_CONST; 220 - else 221 - lw->inv_weight = WMULT_CONST / w; 218 + __update_inv_weight(lw); 219 + 220 + if (unlikely(fact >> 32)) { 221 + while (fact >> 32) { 222 + fact >>= 1; 223 + shift--; 224 + } 222 225 } 223 226 224 - /* 225 - * Check whether we'd overflow the 64-bit multiplication: 226 - */ 227 - if (unlikely(tmp > WMULT_CONST)) 228 - tmp = SRR(SRR(tmp, WMULT_SHIFT/2) * lw->inv_weight, 229 - WMULT_SHIFT/2); 230 - else 231 - tmp = SRR(tmp * lw->inv_weight, WMULT_SHIFT); 227 + /* hint to use a 32x32->64 mul */ 228 + fact = (u64)(u32)fact * lw->inv_weight; 232 229 233 - return (unsigned long)min(tmp, (u64)(unsigned long)LONG_MAX); 230 + while (fact >> 32) { 231 + fact >>= 1; 232 + shift--; 233 + } 234 + 235 + return mul_u64_u32_shr(delta_exec, fact, shift); 234 236 } 235 237 236 238 ··· 445 443 #endif /* CONFIG_FAIR_GROUP_SCHED */ 446 444 447 445 static __always_inline 448 - void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec); 446 + void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec); 449 447 450 448 /************************************************************** 451 449 * Scheduling class tree data structure manipulation methods: ··· 614 612 /* 615 613 * delta /= w 616 614 */ 617 - static inline unsigned long 618 - calc_delta_fair(unsigned long delta, struct sched_entity *se) 615 + static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se) 619 616 { 620 617 if (unlikely(se->load.weight != NICE_0_LOAD)) 621 - delta = calc_delta_mine(delta, NICE_0_LOAD, &se->load); 618 + delta = __calc_delta(delta, NICE_0_LOAD, &se->load); 622 619 623 620 return delta; 624 621 } ··· 666 665 update_load_add(&lw, se->load.weight); 667 666 load = &lw; 668 667 } 669 - slice = calc_delta_mine(slice, se->load.weight, load); 668 + slice = __calc_delta(slice, se->load.weight, load); 670 669 } 671 670 return slice; 672 671 } ··· 704 703 #endif 705 704 706 705 /* 707 - * Update the current task's runtime statistics. Skip current tasks that 708 - * are not in our scheduling class. 706 + * Update the current task's runtime statistics. 
709 707 */ 710 - static inline void 711 - __update_curr(struct cfs_rq *cfs_rq, struct sched_entity *curr, 712 - unsigned long delta_exec) 713 - { 714 - unsigned long delta_exec_weighted; 715 - 716 - schedstat_set(curr->statistics.exec_max, 717 - max((u64)delta_exec, curr->statistics.exec_max)); 718 - 719 - curr->sum_exec_runtime += delta_exec; 720 - schedstat_add(cfs_rq, exec_clock, delta_exec); 721 - delta_exec_weighted = calc_delta_fair(delta_exec, curr); 722 - 723 - curr->vruntime += delta_exec_weighted; 724 - update_min_vruntime(cfs_rq); 725 - } 726 - 727 708 static void update_curr(struct cfs_rq *cfs_rq) 728 709 { 729 710 struct sched_entity *curr = cfs_rq->curr; 730 711 u64 now = rq_clock_task(rq_of(cfs_rq)); 731 - unsigned long delta_exec; 712 + u64 delta_exec; 732 713 733 714 if (unlikely(!curr)) 734 715 return; 735 716 736 - /* 737 - * Get the amount of time the current task was running 738 - * since the last time we changed load (this cannot 739 - * overflow on 32 bits): 740 - */ 741 - delta_exec = (unsigned long)(now - curr->exec_start); 742 - if (!delta_exec) 717 + delta_exec = now - curr->exec_start; 718 + if (unlikely((s64)delta_exec <= 0)) 743 719 return; 744 720 745 - __update_curr(cfs_rq, curr, delta_exec); 746 721 curr->exec_start = now; 722 + 723 + schedstat_set(curr->statistics.exec_max, 724 + max(delta_exec, curr->statistics.exec_max)); 725 + 726 + curr->sum_exec_runtime += delta_exec; 727 + schedstat_add(cfs_rq, exec_clock, delta_exec); 728 + 729 + curr->vruntime += calc_delta_fair(delta_exec, curr); 730 + update_min_vruntime(cfs_rq); 747 731 748 732 if (entity_is_task(curr)) { 749 733 struct task_struct *curtask = task_of(curr); ··· 1736 1750 */ 1737 1751 if (!vma->vm_mm || 1738 1752 (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ))) 1753 + continue; 1754 + 1755 + /* 1756 + * Skip inaccessible VMAs to avoid any confusion between 1757 + * PROT_NONE and NUMA hinting ptes 1758 + */ 1759 + if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))) 1739 1760 continue; 1740 1761 1741 1762 do { ··· 3008 3015 } 3009 3016 } 3010 3017 3011 - static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, 3012 - unsigned long delta_exec) 3018 + static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) 3013 3019 { 3014 3020 /* dock delta_exec before expiring quota (as it could span periods) */ 3015 3021 cfs_rq->runtime_remaining -= delta_exec; ··· 3026 3034 } 3027 3035 3028 3036 static __always_inline 3029 - void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec) 3037 + void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) 3030 3038 { 3031 3039 if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled) 3032 3040 return; ··· 3566 3574 return rq_clock_task(rq_of(cfs_rq)); 3567 3575 } 3568 3576 3569 - static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, 3570 - unsigned long delta_exec) {} 3577 + static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) {} 3571 3578 static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) {} 3572 3579 static void check_enqueue_throttle(struct cfs_rq *cfs_rq) {} 3573 3580 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
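The new __calc_delta() above evaluates delta_exec * weight / lw->weight entirely in fixed point: the divisor is replaced by a 32-bit inverse (roughly 2^32 / weight) and the division becomes a multiply-and-shift via mul_u64_u32_shr(). A self-contained sketch of that arithmetic, assuming a compiler with __int128 and leaving out scale_load_down() and the inv_weight caching of the real code; the nice-5 weight of 335 is taken from the scheduler's weight table:

    #include <stdint.h>
    #include <stdio.h>

    #define WMULT_CONST  (~0U)      /* about 2^32, as in the hunk above */
    #define WMULT_SHIFT  32
    #define NICE_0_LOAD  1024       /* load weight of a nice-0 task */

    /* assumes a compiler with __int128 (the ARCH_SUPPORTS_INT128 fast path) */
    static uint64_t mul_u64_u32_shr(uint64_t a, uint32_t mul, unsigned int shift)
    {
            return (uint64_t)(((unsigned __int128)a * mul) >> shift);
    }

    static uint32_t inv_weight(unsigned long w)
    {
            if (w >= WMULT_CONST)   /* clamp very large weights */
                    return 1;
            if (!w)
                    return WMULT_CONST;
            return WMULT_CONST / w;
    }

    /* delta_exec * weight / lw_weight, done as (delta * weight * inv) >> shift */
    static uint64_t calc_delta(uint64_t delta_exec, unsigned long weight,
                               unsigned long lw_weight)
    {
            uint64_t fact = weight;
            int shift = WMULT_SHIFT;

            /* squeeze the multiplier back into 32 bits, trading precision */
            while (fact >> 32) {
                    fact >>= 1;
                    shift--;
            }
            fact = (uint64_t)(uint32_t)fact * inv_weight(lw_weight);
            while (fact >> 32) {
                    fact >>= 1;
                    shift--;
            }
            return mul_u64_u32_shr(delta_exec, fact, shift);
    }

    int main(void)
    {
            /* 1 ms of runtime charged at the nice-5 weight (335): vruntime
             * advances roughly 1024/335, about 3x, faster than wall time. */
            uint64_t vdelta = calc_delta(1000000, NICE_0_LOAD, 335);

            printf("vruntime advances by %llu ns\n", (unsigned long long)vdelta);
            return 0;
    }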
+14
kernel/sched/rt.c
··· 901 901 { 902 902 struct rq *rq = rq_of_rt_rq(rt_rq); 903 903 904 + #ifdef CONFIG_RT_GROUP_SCHED 905 + /* 906 + * Change rq's cpupri only if rt_rq is the top queue. 907 + */ 908 + if (&rq->rt != rt_rq) 909 + return; 910 + #endif 904 911 if (rq->online && prio < prev_prio) 905 912 cpupri_set(&rq->rd->cpupri, rq->cpu, prio); 906 913 } ··· 917 910 { 918 911 struct rq *rq = rq_of_rt_rq(rt_rq); 919 912 913 + #ifdef CONFIG_RT_GROUP_SCHED 914 + /* 915 + * Change rq's cpupri only if rt_rq is the top queue. 916 + */ 917 + if (&rq->rt != rt_rq) 918 + return; 919 + #endif 920 920 if (rq->online && rt_rq->highest_prio.curr != prev_prio) 921 921 cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr); 922 922 }
+1 -1
kernel/trace/ftrace.c
··· 775 775 int cpu; 776 776 int ret = 0; 777 777 778 - for_each_online_cpu(cpu) { 778 + for_each_possible_cpu(cpu) { 779 779 ret = ftrace_profile_init_cpu(cpu); 780 780 if (ret) 781 781 break;
+3 -3
kernel/user.c
··· 51 51 .owner = GLOBAL_ROOT_UID, 52 52 .group = GLOBAL_ROOT_GID, 53 53 .proc_inum = PROC_USER_INIT_INO, 54 - #ifdef CONFIG_KEYS_KERBEROS_CACHE 55 - .krb_cache_register_sem = 56 - __RWSEM_INITIALIZER(init_user_ns.krb_cache_register_sem), 54 + #ifdef CONFIG_PERSISTENT_KEYRINGS 55 + .persistent_keyring_register_sem = 56 + __RWSEM_INITIALIZER(init_user_ns.persistent_keyring_register_sem), 57 57 #endif 58 58 }; 59 59 EXPORT_SYMBOL_GPL(init_user_ns);
+1 -1
mm/Kconfig
··· 543 543 544 544 config MEM_SOFT_DIRTY 545 545 bool "Track memory changes" 546 - depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY 546 + depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS 547 547 select PROC_PAGE_MONITOR 548 548 help 549 549 This option enables memory changes tracking by introducing a
+4
mm/compaction.c
··· 134 134 bool migrate_scanner) 135 135 { 136 136 struct zone *zone = cc->zone; 137 + 138 + if (cc->ignore_skip_hint) 139 + return; 140 + 137 141 if (!page) 138 142 return; 139 143
+36 -9
mm/huge_memory.c
··· 882 882 ret = 0; 883 883 goto out_unlock; 884 884 } 885 + 886 + /* mmap_sem prevents this happening but warn if that changes */ 887 + WARN_ON(pmd_trans_migrating(pmd)); 888 + 885 889 if (unlikely(pmd_trans_splitting(pmd))) { 886 890 /* split huge page running from under us */ 887 891 spin_unlock(src_ptl); ··· 1247 1243 if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd)) 1248 1244 return ERR_PTR(-EFAULT); 1249 1245 1246 + /* Full NUMA hinting faults to serialise migration in fault paths */ 1247 + if ((flags & FOLL_NUMA) && pmd_numa(*pmd)) 1248 + goto out; 1249 + 1250 1250 page = pmd_page(*pmd); 1251 1251 VM_BUG_ON(!PageHead(page)); 1252 1252 if (flags & FOLL_TOUCH) { ··· 1303 1295 if (unlikely(!pmd_same(pmd, *pmdp))) 1304 1296 goto out_unlock; 1305 1297 1298 + /* 1299 + * If there are potential migrations, wait for completion and retry 1300 + * without disrupting NUMA hinting information. Do not relock and 1301 + * check_same as the page may no longer be mapped. 1302 + */ 1303 + if (unlikely(pmd_trans_migrating(*pmdp))) { 1304 + spin_unlock(ptl); 1305 + wait_migrate_huge_page(vma->anon_vma, pmdp); 1306 + goto out; 1307 + } 1308 + 1306 1309 page = pmd_page(pmd); 1307 1310 BUG_ON(is_huge_zero_page(page)); 1308 1311 page_nid = page_to_nid(page); ··· 1342 1323 /* If the page was locked, there are no parallel migrations */ 1343 1324 if (page_locked) 1344 1325 goto clear_pmdnuma; 1326 + } 1345 1327 1346 - /* 1347 - * Otherwise wait for potential migrations and retry. We do 1348 - * relock and check_same as the page may no longer be mapped. 1349 - * As the fault is being retried, do not account for it. 1350 - */ 1328 + /* Migration could have started since the pmd_trans_migrating check */ 1329 + if (!page_locked) { 1351 1330 spin_unlock(ptl); 1352 1331 wait_on_page_locked(page); 1353 1332 page_nid = -1; 1354 1333 goto out; 1355 1334 } 1356 1335 1357 - /* Page is misplaced, serialise migrations and parallel THP splits */ 1336 + /* 1337 + * Page is misplaced. Page lock serialises migrations. Acquire anon_vma 1338 + * to serialises splits 1339 + */ 1358 1340 get_page(page); 1359 1341 spin_unlock(ptl); 1360 - if (!page_locked) 1361 - lock_page(page); 1362 1342 anon_vma = page_lock_anon_vma_read(page); 1363 1343 1364 1344 /* Confirm the PMD did not change while page_table_lock was released */ ··· 1367 1349 put_page(page); 1368 1350 page_nid = -1; 1369 1351 goto out_unlock; 1352 + } 1353 + 1354 + /* Bail if we fail to protect against THP splits for any reason */ 1355 + if (unlikely(!anon_vma)) { 1356 + put_page(page); 1357 + page_nid = -1; 1358 + goto clear_pmdnuma; 1370 1359 } 1371 1360 1372 1361 /* ··· 1542 1517 ret = 1; 1543 1518 if (!prot_numa) { 1544 1519 entry = pmdp_get_and_clear(mm, addr, pmd); 1520 + if (pmd_numa(entry)) 1521 + entry = pmd_mknonnuma(entry); 1545 1522 entry = pmd_modify(entry, newprot); 1546 1523 ret = HPAGE_PMD_NR; 1547 1524 BUG_ON(pmd_write(entry)); ··· 1558 1531 */ 1559 1532 if (!is_huge_zero_page(page) && 1560 1533 !pmd_numa(*pmd)) { 1561 - entry = pmdp_get_and_clear(mm, addr, pmd); 1534 + entry = *pmd; 1562 1535 entry = pmd_mknuma(entry); 1563 1536 ret = HPAGE_PMD_NR; 1564 1537 }
+10 -4
mm/memory-failure.c
··· 1505 1505 if (ret > 0) 1506 1506 ret = -EIO; 1507 1507 } else { 1508 - set_page_hwpoison_huge_page(hpage); 1509 - dequeue_hwpoisoned_huge_page(hpage); 1510 - atomic_long_add(1 << compound_order(hpage), 1511 - &num_poisoned_pages); 1508 + /* overcommit hugetlb page will be freed to buddy */ 1509 + if (PageHuge(page)) { 1510 + set_page_hwpoison_huge_page(hpage); 1511 + dequeue_hwpoisoned_huge_page(hpage); 1512 + atomic_long_add(1 << compound_order(hpage), 1513 + &num_poisoned_pages); 1514 + } else { 1515 + SetPageHWPoison(page); 1516 + atomic_long_inc(&num_poisoned_pages); 1517 + } 1512 1518 } 1513 1519 return ret; 1514 1520 }
+1 -1
mm/memory.c
··· 4271 4271 } 4272 4272 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */ 4273 4273 4274 - #if USE_SPLIT_PTE_PTLOCKS && BLOATED_SPINLOCKS 4274 + #if USE_SPLIT_PTE_PTLOCKS && ALLOC_SPLIT_PTLOCKS 4275 4275 bool ptlock_alloc(struct page *page) 4276 4276 { 4277 4277 spinlock_t *ptl;
+10 -8
mm/mempolicy.c
··· 1197 1197 break; 1198 1198 vma = vma->vm_next; 1199 1199 } 1200 - /* 1201 - * queue_pages_range() confirms that @page belongs to some vma, 1202 - * so vma shouldn't be NULL. 1203 - */ 1204 - BUG_ON(!vma); 1205 1200 1206 - if (PageHuge(page)) 1207 - return alloc_huge_page_noerr(vma, address, 1); 1201 + if (PageHuge(page)) { 1202 + if (vma) 1203 + return alloc_huge_page_noerr(vma, address, 1); 1204 + else 1205 + return NULL; 1206 + } 1207 + /* 1208 + * if !vma, alloc_page_vma() will use task or system default policy 1209 + */ 1208 1210 return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); 1209 1211 } 1210 1212 #else ··· 1320 1318 if (nr_failed && (flags & MPOL_MF_STRICT)) 1321 1319 err = -EIO; 1322 1320 } else 1323 - putback_lru_pages(&pagelist); 1321 + putback_movable_pages(&pagelist); 1324 1322 1325 1323 up_write(&mm->mmap_sem); 1326 1324 mpol_out:
+65 -17
mm/migrate.c
··· 36 36 #include <linux/hugetlb_cgroup.h> 37 37 #include <linux/gfp.h> 38 38 #include <linux/balloon_compaction.h> 39 + #include <linux/mmu_notifier.h> 39 40 40 41 #include <asm/tlbflush.h> 41 42 ··· 317 316 */ 318 317 int migrate_page_move_mapping(struct address_space *mapping, 319 318 struct page *newpage, struct page *page, 320 - struct buffer_head *head, enum migrate_mode mode) 319 + struct buffer_head *head, enum migrate_mode mode, 320 + int extra_count) 321 321 { 322 - int expected_count = 0; 322 + int expected_count = 1 + extra_count; 323 323 void **pslot; 324 324 325 325 if (!mapping) { 326 326 /* Anonymous page without mapping */ 327 - if (page_count(page) != 1) 327 + if (page_count(page) != expected_count) 328 328 return -EAGAIN; 329 329 return MIGRATEPAGE_SUCCESS; 330 330 } ··· 335 333 pslot = radix_tree_lookup_slot(&mapping->page_tree, 336 334 page_index(page)); 337 335 338 - expected_count = 2 + page_has_private(page); 336 + expected_count += 1 + page_has_private(page); 339 337 if (page_count(page) != expected_count || 340 338 radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) { 341 339 spin_unlock_irq(&mapping->tree_lock); ··· 585 583 586 584 BUG_ON(PageWriteback(page)); /* Writeback must be complete */ 587 585 588 - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode); 586 + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); 589 587 590 588 if (rc != MIGRATEPAGE_SUCCESS) 591 589 return rc; ··· 612 610 613 611 head = page_buffers(page); 614 612 615 - rc = migrate_page_move_mapping(mapping, newpage, page, head, mode); 613 + rc = migrate_page_move_mapping(mapping, newpage, page, head, mode, 0); 616 614 617 615 if (rc != MIGRATEPAGE_SUCCESS) 618 616 return rc; ··· 1656 1654 return 1; 1657 1655 } 1658 1656 1657 + bool pmd_trans_migrating(pmd_t pmd) 1658 + { 1659 + struct page *page = pmd_page(pmd); 1660 + return PageLocked(page); 1661 + } 1662 + 1663 + void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd) 1664 + { 1665 + struct page *page = pmd_page(*pmd); 1666 + wait_on_page_locked(page); 1667 + } 1668 + 1659 1669 /* 1660 1670 * Attempt to migrate a misplaced page to the specified destination 1661 1671 * node. Caller is expected to have an elevated reference count on ··· 1730 1716 struct page *page, int node) 1731 1717 { 1732 1718 spinlock_t *ptl; 1733 - unsigned long haddr = address & HPAGE_PMD_MASK; 1734 1719 pg_data_t *pgdat = NODE_DATA(node); 1735 1720 int isolated = 0; 1736 1721 struct page *new_page = NULL; 1737 1722 struct mem_cgroup *memcg = NULL; 1738 1723 int page_lru = page_is_file_cache(page); 1724 + unsigned long mmun_start = address & HPAGE_PMD_MASK; 1725 + unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE; 1726 + pmd_t orig_entry; 1739 1727 1740 1728 /* 1741 1729 * Rate-limit the amount of data that is being migrated to a node. 
··· 1760 1744 goto out_fail; 1761 1745 } 1762 1746 1747 + if (mm_tlb_flush_pending(mm)) 1748 + flush_tlb_range(vma, mmun_start, mmun_end); 1749 + 1763 1750 /* Prepare a page as a migration target */ 1764 1751 __set_page_locked(new_page); 1765 1752 SetPageSwapBacked(new_page); ··· 1774 1755 WARN_ON(PageLRU(new_page)); 1775 1756 1776 1757 /* Recheck the target PMD */ 1758 + mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 1777 1759 ptl = pmd_lock(mm, pmd); 1778 - if (unlikely(!pmd_same(*pmd, entry))) { 1760 + if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) { 1761 + fail_putback: 1779 1762 spin_unlock(ptl); 1763 + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1780 1764 1781 1765 /* Reverse changes made by migrate_page_copy() */ 1782 1766 if (TestClearPageActive(new_page)) ··· 1796 1774 putback_lru_page(page); 1797 1775 mod_zone_page_state(page_zone(page), 1798 1776 NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR); 1799 - goto out_fail; 1777 + 1778 + goto out_unlock; 1800 1779 } 1801 1780 1802 1781 /* ··· 1809 1786 */ 1810 1787 mem_cgroup_prepare_migration(page, new_page, &memcg); 1811 1788 1789 + orig_entry = *pmd; 1812 1790 entry = mk_pmd(new_page, vma->vm_page_prot); 1813 - entry = pmd_mknonnuma(entry); 1814 - entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1815 1791 entry = pmd_mkhuge(entry); 1792 + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1816 1793 1817 - pmdp_clear_flush(vma, haddr, pmd); 1818 - set_pmd_at(mm, haddr, pmd, entry); 1819 - page_add_new_anon_rmap(new_page, vma, haddr); 1794 + /* 1795 + * Clear the old entry under pagetable lock and establish the new PTE. 1796 + * Any parallel GUP will either observe the old page blocking on the 1797 + * page lock, block on the page table lock or observe the new page. 1798 + * The SetPageUptodate on the new page and page_add_new_anon_rmap 1799 + * guarantee the copy is visible before the pagetable update. 1800 + */ 1801 + flush_cache_range(vma, mmun_start, mmun_end); 1802 + page_add_new_anon_rmap(new_page, vma, mmun_start); 1803 + pmdp_clear_flush(vma, mmun_start, pmd); 1804 + set_pmd_at(mm, mmun_start, pmd, entry); 1805 + flush_tlb_range(vma, mmun_start, mmun_end); 1820 1806 update_mmu_cache_pmd(vma, address, &entry); 1807 + 1808 + if (page_count(page) != 2) { 1809 + set_pmd_at(mm, mmun_start, pmd, orig_entry); 1810 + flush_tlb_range(vma, mmun_start, mmun_end); 1811 + update_mmu_cache_pmd(vma, address, &entry); 1812 + page_remove_rmap(new_page); 1813 + goto fail_putback; 1814 + } 1815 + 1821 1816 page_remove_rmap(page); 1817 + 1822 1818 /* 1823 1819 * Finish the charge transaction under the page table lock to 1824 1820 * prevent split_huge_page() from dividing up the charge ··· 1845 1803 */ 1846 1804 mem_cgroup_end_migration(memcg, page, new_page, true); 1847 1805 spin_unlock(ptl); 1806 + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1848 1807 1849 1808 unlock_page(new_page); 1850 1809 unlock_page(page); ··· 1863 1820 out_fail: 1864 1821 count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR); 1865 1822 out_dropref: 1866 - entry = pmd_mknonnuma(entry); 1867 - set_pmd_at(mm, haddr, pmd, entry); 1868 - update_mmu_cache_pmd(vma, address, &entry); 1823 + ptl = pmd_lock(mm, pmd); 1824 + if (pmd_same(*pmd, entry)) { 1825 + entry = pmd_mknonnuma(entry); 1826 + set_pmd_at(mm, mmun_start, pmd, entry); 1827 + update_mmu_cache_pmd(vma, address, &entry); 1828 + } 1829 + spin_unlock(ptl); 1869 1830 1831 + out_unlock: 1870 1832 unlock_page(page); 1871 1833 put_page(page); 1872 1834 return 0;
+11 -2
mm/mprotect.c
··· 52 52 pte_t ptent; 53 53 bool updated = false; 54 54 55 - ptent = ptep_modify_prot_start(mm, addr, pte); 56 55 if (!prot_numa) { 56 + ptent = ptep_modify_prot_start(mm, addr, pte); 57 + if (pte_numa(ptent)) 58 + ptent = pte_mknonnuma(ptent); 57 59 ptent = pte_modify(ptent, newprot); 58 60 updated = true; 59 61 } else { 60 62 struct page *page; 61 63 64 + ptent = *pte; 62 65 page = vm_normal_page(vma, addr, oldpte); 63 66 if (page) { 64 67 if (!pte_numa(oldpte)) { 65 68 ptent = pte_mknuma(ptent); 69 + set_pte_at(mm, addr, pte, ptent); 66 70 updated = true; 67 71 } 68 72 } ··· 83 79 84 80 if (updated) 85 81 pages++; 86 - ptep_modify_prot_commit(mm, addr, pte, ptent); 82 + 83 + /* Only !prot_numa always clears the pte */ 84 + if (!prot_numa) 85 + ptep_modify_prot_commit(mm, addr, pte, ptent); 87 86 } else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) { 88 87 swp_entry_t entry = pte_to_swp_entry(oldpte); 89 88 ··· 188 181 BUG_ON(addr >= end); 189 182 pgd = pgd_offset(mm, addr); 190 183 flush_cache_range(vma, addr, end); 184 + set_tlb_flush_pending(mm); 191 185 do { 192 186 next = pgd_addr_end(addr, end); 193 187 if (pgd_none_or_clear_bad(pgd)) ··· 200 192 /* Only flush the TLB if we actually modified any entries: */ 201 193 if (pages) 202 194 flush_tlb_range(vma, start, end); 195 + clear_tlb_flush_pending(mm); 203 196 204 197 return pages; 205 198 }
+9 -10
mm/page_alloc.c
··· 1816 1816 1817 1817 static bool zone_local(struct zone *local_zone, struct zone *zone) 1818 1818 { 1819 - return node_distance(local_zone->node, zone->node) == LOCAL_DISTANCE; 1819 + return local_zone->node == zone->node; 1820 1820 } 1821 1821 1822 1822 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone) ··· 1913 1913 * page was allocated in should have no effect on the 1914 1914 * time the page has in memory before being reclaimed. 1915 1915 * 1916 - * When zone_reclaim_mode is enabled, try to stay in 1917 - * local zones in the fastpath. If that fails, the 1918 - * slowpath is entered, which will do another pass 1919 - * starting with the local zones, but ultimately fall 1920 - * back to remote zones that do not partake in the 1921 - * fairness round-robin cycle of this zonelist. 1916 + * Try to stay in local zones in the fastpath. If 1917 + * that fails, the slowpath is entered, which will do 1918 + * another pass starting with the local zones, but 1919 + * ultimately fall back to remote zones that do not 1920 + * partake in the fairness round-robin cycle of this 1921 + * zonelist. 1922 1922 */ 1923 1923 if (alloc_flags & ALLOC_WMARK_LOW) { 1924 1924 if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) 1925 1925 continue; 1926 - if (zone_reclaim_mode && 1927 - !zone_local(preferred_zone, zone)) 1926 + if (!zone_local(preferred_zone, zone)) 1928 1927 continue; 1929 1928 } 1930 1929 /* ··· 2389 2390 * thrash fairness information for zones that are not 2390 2391 * actually part of this zonelist's round-robin cycle. 2391 2392 */ 2392 - if (zone_reclaim_mode && !zone_local(preferred_zone, zone)) 2393 + if (!zone_local(preferred_zone, zone)) 2393 2394 continue; 2394 2395 mod_zone_page_state(zone, NR_ALLOC_BATCH, 2395 2396 high_wmark_pages(zone) -
+6 -2
mm/pgtable-generic.c
··· 110 110 pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, 111 111 pte_t *ptep) 112 112 { 113 + struct mm_struct *mm = (vma)->vm_mm; 113 114 pte_t pte; 114 - pte = ptep_get_and_clear((vma)->vm_mm, address, ptep); 115 - if (pte_accessible(pte)) 115 + pte = ptep_get_and_clear(mm, address, ptep); 116 + if (pte_accessible(mm, pte)) 116 117 flush_tlb_page(vma, address); 117 118 return pte; 118 119 } ··· 192 191 void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, 193 192 pmd_t *pmdp) 194 193 { 194 + pmd_t entry = *pmdp; 195 + if (pmd_numa(entry)) 196 + entry = pmd_mknonnuma(entry); 195 197 set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp)); 196 198 flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); 197 199 }
+4
mm/rmap.c
··· 600 600 spinlock_t *ptl; 601 601 602 602 if (unlikely(PageHuge(page))) { 603 + /* when pud is not present, pte will be NULL */ 603 604 pte = huge_pte_offset(mm, address); 605 + if (!pte) 606 + return NULL; 607 + 604 608 ptl = huge_pte_lockptr(page_hstate(page), mm, pte); 605 609 goto check; 606 610 }
+1
net/core/neighbour.c
··· 1161 1161 neigh->parms->reachable_time : 1162 1162 0))); 1163 1163 neigh->nud_state = new; 1164 + notify = 1; 1164 1165 } 1165 1166 1166 1167 if (lladdr != neigh->ha) {
+1
net/ipv4/netfilter/ipt_SYNPROXY.c
··· 423 423 static struct xt_target synproxy_tg4_reg __read_mostly = { 424 424 .name = "SYNPROXY", 425 425 .family = NFPROTO_IPV4, 426 + .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD), 426 427 .target = synproxy_tg4, 427 428 .targetsize = sizeof(struct xt_synproxy_info), 428 429 .checkentry = synproxy_tg4_check,
+1 -1
net/ipv4/netfilter/nft_reject_ipv4.c
··· 72 72 { 73 73 const struct nft_reject *priv = nft_expr_priv(expr); 74 74 75 - if (nla_put_be32(skb, NFTA_REJECT_TYPE, priv->type)) 75 + if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type))) 76 76 goto nla_put_failure; 77 77 78 78 switch (priv->type) {
+4 -9
net/ipv4/udp.c
··· 1600 1600 } 1601 1601 1602 1602 /* For TCP sockets, sk_rx_dst is protected by socket lock 1603 - * For UDP, we use sk_dst_lock to guard against concurrent changes. 1603 + * For UDP, we use xchg() to guard against concurrent changes. 1604 1604 */ 1605 1605 static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst) 1606 1606 { 1607 1607 struct dst_entry *old; 1608 1608 1609 - spin_lock(&sk->sk_dst_lock); 1610 - old = sk->sk_rx_dst; 1611 - if (likely(old != dst)) { 1612 - dst_hold(dst); 1613 - sk->sk_rx_dst = dst; 1614 - dst_release(old); 1615 - } 1616 - spin_unlock(&sk->sk_dst_lock); 1609 + dst_hold(dst); 1610 + old = xchg(&sk->sk_rx_dst, dst); 1611 + dst_release(old); 1617 1612 } 1618 1613 1619 1614 /*
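udp_sk_rx_dst_set() now publishes the cached route with a plain xchg() instead of taking sk_dst_lock: take a reference on the new entry, atomically swap the pointer, then drop the reference of whatever was installed before. The same publish pattern in a self-contained C11 sketch; the refcounted dst type and its helpers below are invented for the example:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct dst {
            atomic_int refcnt;
    };

    static void dst_hold(struct dst *d)
    {
            atomic_fetch_add(&d->refcnt, 1);
    }

    static void dst_release(struct dst *d)
    {
            /* free once the last reference is gone; NULL is a no-op */
            if (d && atomic_fetch_sub(&d->refcnt, 1) == 1)
                    free(d);
    }

    static _Atomic(struct dst *) cached_dst;

    /* Publish a new cached entry without a lock, as in the UDP hunk above. */
    static void cache_set(struct dst *d)
    {
            struct dst *old;

            dst_hold(d);                            /* reference owned by the cache */
            old = atomic_exchange(&cached_dst, d);  /* single atomic pointer swap */
            dst_release(old);                       /* drop the previous entry's ref */
    }

    int main(void)
    {
            struct dst *a = calloc(1, sizeof(*a));
            struct dst *b = calloc(1, sizeof(*b));

            if (!a || !b)
                    return 1;
            atomic_init(&a->refcnt, 1);
            atomic_init(&b->refcnt, 1);

            cache_set(a);           /* cache now holds a */
            cache_set(b);           /* swaps to b, drops the cache's ref on a */

            dst_release(a);         /* drop the caller's references */
            dst_release(b);
            dst_release(atomic_exchange(&cached_dst, (struct dst *)NULL));
            return 0;
    }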
+1
net/ipv6/netfilter/ip6t_SYNPROXY.c
··· 446 446 static struct xt_target synproxy_tg6_reg __read_mostly = { 447 447 .name = "SYNPROXY", 448 448 .family = NFPROTO_IPV6, 449 + .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD), 449 450 .target = synproxy_tg6, 450 451 .targetsize = sizeof(struct xt_synproxy_info), 451 452 .checkentry = synproxy_tg6_check,
+16 -1
net/sctp/probe.c
··· 38 38 #include <net/sctp/sctp.h> 39 39 #include <net/sctp/sm.h> 40 40 41 + MODULE_SOFTDEP("pre: sctp"); 41 42 MODULE_AUTHOR("Wei Yongjun <yjwei@cn.fujitsu.com>"); 42 43 MODULE_DESCRIPTION("SCTP snooper"); 43 44 MODULE_LICENSE("GPL"); ··· 183 182 .entry = jsctp_sf_eat_sack, 184 183 }; 185 184 185 + static __init int sctp_setup_jprobe(void) 186 + { 187 + int ret = register_jprobe(&sctp_recv_probe); 188 + 189 + if (ret) { 190 + if (request_module("sctp")) 191 + goto out; 192 + ret = register_jprobe(&sctp_recv_probe); 193 + } 194 + 195 + out: 196 + return ret; 197 + } 198 + 186 199 static __init int sctpprobe_init(void) 187 200 { 188 201 int ret = -ENOMEM; ··· 217 202 &sctpprobe_fops)) 218 203 goto free_kfifo; 219 204 220 - ret = register_jprobe(&sctp_recv_probe); 205 + ret = sctp_setup_jprobe(); 221 206 if (ret) 222 207 goto remove_proc; 223 208
+6 -2
net/unix/af_unix.c
··· 718 718 int err; 719 719 unsigned int retries = 0; 720 720 721 - mutex_lock(&u->readlock); 721 + err = mutex_lock_interruptible(&u->readlock); 722 + if (err) 723 + return err; 722 724 723 725 err = 0; 724 726 if (u->addr) ··· 879 877 goto out; 880 878 addr_len = err; 881 879 882 - mutex_lock(&u->readlock); 880 + err = mutex_lock_interruptible(&u->readlock); 881 + if (err) 882 + goto out; 883 883 884 884 err = -EINVAL; 885 885 if (u->addr)
+2
sound/core/pcm_lib.c
··· 1937 1937 case SNDRV_PCM_STATE_DISCONNECTED: 1938 1938 err = -EBADFD; 1939 1939 goto _endloop; 1940 + case SNDRV_PCM_STATE_PAUSED: 1941 + continue; 1940 1942 } 1941 1943 if (!tout) { 1942 1944 snd_printd("%s write error (DMA or IRQ trouble?)\n",
+4
sound/pci/hda/hda_intel.c
··· 3433 3433 * white/black-list for enable_msi 3434 3434 */ 3435 3435 static struct snd_pci_quirk msi_black_list[] = { 3436 + SND_PCI_QUIRK(0x103c, 0x2191, "HP", 0), /* AMD Hudson */ 3437 + SND_PCI_QUIRK(0x103c, 0x2192, "HP", 0), /* AMD Hudson */ 3438 + SND_PCI_QUIRK(0x103c, 0x21f7, "HP", 0), /* AMD Hudson */ 3439 + SND_PCI_QUIRK(0x103c, 0x21fa, "HP", 0), /* AMD Hudson */ 3436 3440 SND_PCI_QUIRK(0x1043, 0x81f2, "ASUS", 0), /* Athlon64 X2 + nvidia */ 3437 3441 SND_PCI_QUIRK(0x1043, 0x81f6, "ASUS", 0), /* nvidia */ 3438 3442 SND_PCI_QUIRK(0x1043, 0x822d, "ASUS", 0), /* Athlon64 X2 + nvidia MCP55 */
+4
sound/pci/hda/patch_realtek.c
··· 4247 4247 SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4248 4248 SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4249 4249 SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4250 + SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4250 4251 SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4251 4252 SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4252 4253 SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS), 4253 4254 SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4255 + SND_PCI_QUIRK(0x1028, 0x0629, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4254 4256 SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS), 4257 + SND_PCI_QUIRK(0x1028, 0x063e, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4255 4258 SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4259 + SND_PCI_QUIRK(0x1028, 0x0640, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4256 4260 SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4257 4261 SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4258 4262 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+29 -1
sound/soc/atmel/atmel_ssc_dai.c
··· 648 648 649 649 dma_params = ssc_p->dma_params[dir]; 650 650 651 - ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable); 651 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable); 652 652 ssc_writel(ssc_p->ssc->regs, IDR, dma_params->mask->ssc_error); 653 653 654 654 pr_debug("%s enabled SSC_SR=0x%08x\n", ··· 657 657 return 0; 658 658 } 659 659 660 + static int atmel_ssc_trigger(struct snd_pcm_substream *substream, 661 + int cmd, struct snd_soc_dai *dai) 662 + { 663 + struct atmel_ssc_info *ssc_p = &ssc_info[dai->id]; 664 + struct atmel_pcm_dma_params *dma_params; 665 + int dir; 666 + 667 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 668 + dir = 0; 669 + else 670 + dir = 1; 671 + 672 + dma_params = ssc_p->dma_params[dir]; 673 + 674 + switch (cmd) { 675 + case SNDRV_PCM_TRIGGER_START: 676 + case SNDRV_PCM_TRIGGER_RESUME: 677 + case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 678 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable); 679 + break; 680 + default: 681 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable); 682 + break; 683 + } 684 + 685 + return 0; 686 + } 660 687 661 688 #ifdef CONFIG_PM 662 689 static int atmel_ssc_suspend(struct snd_soc_dai *cpu_dai) ··· 758 731 .startup = atmel_ssc_startup, 759 732 .shutdown = atmel_ssc_shutdown, 760 733 .prepare = atmel_ssc_prepare, 734 + .trigger = atmel_ssc_trigger, 761 735 .hw_params = atmel_ssc_hw_params, 762 736 .set_fmt = atmel_ssc_set_dai_fmt, 763 737 .set_clkdiv = atmel_ssc_set_dai_clkdiv,
+1 -1
sound/soc/atmel/sam9x5_wm8731.c
··· 109 109 dai->stream_name = "WM8731 PCM"; 110 110 dai->codec_dai_name = "wm8731-hifi"; 111 111 dai->init = sam9x5_wm8731_init; 112 - dai->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 112 + dai->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF 113 113 | SND_SOC_DAIFMT_CBM_CFM; 114 114 115 115 ret = snd_soc_of_parse_card_name(card, "atmel,model");
+1 -1
sound/soc/codecs/wm5110.c
··· 1012 1012 { "AEC Loopback", "HPOUT3L", "OUT3L" }, 1013 1013 { "AEC Loopback", "HPOUT3R", "OUT3R" }, 1014 1014 { "HPOUT3L", NULL, "OUT3L" }, 1015 - { "HPOUT3R", NULL, "OUT3L" }, 1015 + { "HPOUT3R", NULL, "OUT3R" }, 1016 1016 1017 1017 { "AEC Loopback", "SPKOUTL", "OUT4L" }, 1018 1018 { "SPKOUTLN", NULL, "OUT4L" },
+1 -1
sound/soc/codecs/wm8904.c
··· 1444 1444 1445 1445 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 1446 1446 case SND_SOC_DAIFMT_DSP_B: 1447 - aif1 |= WM8904_AIF_LRCLK_INV; 1447 + aif1 |= 0x3 | WM8904_AIF_LRCLK_INV; 1448 1448 case SND_SOC_DAIFMT_DSP_A: 1449 1449 aif1 |= 0x3; 1450 1450 break;
+13
sound/soc/codecs/wm8962.c
··· 2439 2439 snd_soc_update_bits(codec, WM8962_CLOCKING_4, 2440 2440 WM8962_SYSCLK_RATE_MASK, clocking4); 2441 2441 2442 + /* DSPCLK_DIV can be only generated correctly after enabling SYSCLK. 2443 + * So we here provisionally enable it and then disable it afterward 2444 + * if current bias_level hasn't reached SND_SOC_BIAS_ON. 2445 + */ 2446 + if (codec->dapm.bias_level != SND_SOC_BIAS_ON) 2447 + snd_soc_update_bits(codec, WM8962_CLOCKING2, 2448 + WM8962_SYSCLK_ENA_MASK, WM8962_SYSCLK_ENA); 2449 + 2442 2450 dspclk = snd_soc_read(codec, WM8962_CLOCKING1); 2451 + 2452 + if (codec->dapm.bias_level != SND_SOC_BIAS_ON) 2453 + snd_soc_update_bits(codec, WM8962_CLOCKING2, 2454 + WM8962_SYSCLK_ENA_MASK, 0); 2455 + 2443 2456 if (dspclk < 0) { 2444 2457 dev_err(codec->dev, "Failed to read DSPCLK: %d\n", dspclk); 2445 2458 return;
+7 -3
sound/soc/codecs/wm_adsp.c
··· 1474 1474 return ret; 1475 1475 1476 1476 /* Wait for the RAM to start, should be near instantaneous */ 1477 - count = 0; 1478 - do { 1477 + for (count = 0; count < 10; ++count) { 1479 1478 ret = regmap_read(dsp->regmap, dsp->base + ADSP2_STATUS1, 1480 1479 &val); 1481 1480 if (ret != 0) 1482 1481 return ret; 1483 - } while (!(val & ADSP2_RAM_RDY) && ++count < 10); 1482 + 1483 + if (val & ADSP2_RAM_RDY) 1484 + break; 1485 + 1486 + msleep(1); 1487 + } 1484 1488 1485 1489 if (!(val & ADSP2_RAM_RDY)) { 1486 1490 adsp_err(dsp, "Failed to start DSP RAM\n");
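The wm_adsp hunk above turns a tight do/while spin into a bounded loop that sleeps 1 ms between status reads, so the DSP RAM gets real time to signal ADSP2_RAM_RDY before the driver gives up. A stand-alone sketch of the same poll-with-sleep pattern, built around a made-up read_status() callback rather than the ADSP2 regmap interface:

	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Generic poll-with-sleep illustration (hypothetical read_status(),
	 * not the wm_adsp register interface): retry up to 'tries' times and
	 * sleep between attempts instead of spinning. */
	static bool wait_for_ready(bool (*read_status)(void), int tries)
	{
		for (int i = 0; i < tries; i++) {
			if (read_status())
				return true;
			usleep(1000);	/* give the hardware time to come up */
		}
		return false;
	}

	static bool fake_status(void)
	{
		static int calls;
		return ++calls >= 3;	/* pretend the block is ready on the third poll */
	}

	int main(void)
	{
		printf("ready: %s\n", wait_for_ready(fake_status, 10) ? "yes" : "no");
		return 0;
	}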
-2
sound/soc/fsl/imx-wm8962.c
··· 130 130 break; 131 131 } 132 132 133 - dapm->bias_level = level; 134 - 135 133 return 0; 136 134 } 137 135
+12 -12
sound/soc/kirkwood/kirkwood-i2s.c
··· 473 473 .playback = { 474 474 .channels_min = 1, 475 475 .channels_max = 2, 476 - .rates = SNDRV_PCM_RATE_8000_192000 | 477 - SNDRV_PCM_RATE_CONTINUOUS | 478 - SNDRV_PCM_RATE_KNOT, 476 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 477 + .rate_min = 5512, 478 + .rate_max = 192000, 479 479 .formats = KIRKWOOD_I2S_FORMATS, 480 480 }, 481 481 .capture = { 482 482 .channels_min = 1, 483 483 .channels_max = 2, 484 - .rates = SNDRV_PCM_RATE_8000_192000 | 485 - SNDRV_PCM_RATE_CONTINUOUS | 486 - SNDRV_PCM_RATE_KNOT, 484 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 485 + .rate_min = 5512, 486 + .rate_max = 192000, 487 487 .formats = KIRKWOOD_I2S_FORMATS, 488 488 }, 489 489 .ops = &kirkwood_i2s_dai_ops, ··· 494 494 .playback = { 495 495 .channels_min = 1, 496 496 .channels_max = 2, 497 - .rates = SNDRV_PCM_RATE_8000_192000 | 498 - SNDRV_PCM_RATE_CONTINUOUS | 499 - SNDRV_PCM_RATE_KNOT, 497 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 498 + .rate_min = 5512, 499 + .rate_max = 192000, 500 500 .formats = KIRKWOOD_SPDIF_FORMATS, 501 501 }, 502 502 .capture = { 503 503 .channels_min = 1, 504 504 .channels_max = 2, 505 - .rates = SNDRV_PCM_RATE_8000_192000 | 506 - SNDRV_PCM_RATE_CONTINUOUS | 507 - SNDRV_PCM_RATE_KNOT, 505 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 506 + .rate_min = 5512, 507 + .rate_max = 192000, 508 508 .formats = KIRKWOOD_SPDIF_FORMATS, 509 509 }, 510 510 .ops = &kirkwood_i2s_dai_ops,
+27 -11
sound/soc/soc-generic-dmaengine-pcm.c
··· 305 305 } 306 306 } 307 307 308 + static void dmaengine_pcm_release_chan(struct dmaengine_pcm *pcm) 309 + { 310 + unsigned int i; 311 + 312 + for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; 313 + i++) { 314 + if (!pcm->chan[i]) 315 + continue; 316 + dma_release_channel(pcm->chan[i]); 317 + if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) 318 + break; 319 + } 320 + } 321 + 308 322 /** 309 323 * snd_dmaengine_pcm_register - Register a dmaengine based PCM device 310 324 * @dev: The parent device for the PCM device ··· 329 315 const struct snd_dmaengine_pcm_config *config, unsigned int flags) 330 316 { 331 317 struct dmaengine_pcm *pcm; 318 + int ret; 332 319 333 320 pcm = kzalloc(sizeof(*pcm), GFP_KERNEL); 334 321 if (!pcm) ··· 341 326 dmaengine_pcm_request_chan_of(pcm, dev); 342 327 343 328 if (flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE) 344 - return snd_soc_add_platform(dev, &pcm->platform, 329 + ret = snd_soc_add_platform(dev, &pcm->platform, 345 330 &dmaengine_no_residue_pcm_platform); 346 331 else 347 - return snd_soc_add_platform(dev, &pcm->platform, 332 + ret = snd_soc_add_platform(dev, &pcm->platform, 348 333 &dmaengine_pcm_platform); 334 + if (ret) 335 + goto err_free_dma; 336 + 337 + return 0; 338 + 339 + err_free_dma: 340 + dmaengine_pcm_release_chan(pcm); 341 + kfree(pcm); 342 + return ret; 349 343 } 350 344 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_register); 351 345 ··· 369 345 { 370 346 struct snd_soc_platform *platform; 371 347 struct dmaengine_pcm *pcm; 372 - unsigned int i; 373 348 374 349 platform = snd_soc_lookup_platform(dev); 375 350 if (!platform) ··· 376 353 377 354 pcm = soc_platform_to_pcm(platform); 378 355 379 - for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; i++) { 380 - if (pcm->chan[i]) { 381 - dma_release_channel(pcm->chan[i]); 382 - if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) 383 - break; 384 - } 385 - } 386 - 387 356 snd_soc_remove_platform(platform); 357 + dmaengine_pcm_release_chan(pcm); 388 358 kfree(pcm); 389 359 } 390 360 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_unregister);
+3 -2
sound/soc/soc-pcm.c
··· 600 600 struct snd_soc_platform *platform = rtd->platform; 601 601 struct snd_soc_dai *cpu_dai = rtd->cpu_dai; 602 602 struct snd_soc_dai *codec_dai = rtd->codec_dai; 603 - struct snd_soc_codec *codec = rtd->codec; 603 + bool playback = substream->stream == SNDRV_PCM_STREAM_PLAYBACK; 604 604 605 605 mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass); 606 606 607 607 /* apply codec digital mute */ 608 - if (!codec->active) 608 + if ((playback && codec_dai->playback_active == 1) || 609 + (!playback && codec_dai->capture_active == 1)) 609 610 snd_soc_dai_digital_mute(codec_dai, 1, substream->stream); 610 611 611 612 /* free any machine hw params */
+3 -3
sound/soc/tegra/tegra20_i2s.c
··· 74 74 unsigned int fmt) 75 75 { 76 76 struct tegra20_i2s *i2s = snd_soc_dai_get_drvdata(dai); 77 - unsigned int mask, val; 77 + unsigned int mask = 0, val = 0; 78 78 79 79 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 80 80 case SND_SOC_DAIFMT_NB_NF: ··· 83 83 return -EINVAL; 84 84 } 85 85 86 - mask = TEGRA20_I2S_CTRL_MASTER_ENABLE; 86 + mask |= TEGRA20_I2S_CTRL_MASTER_ENABLE; 87 87 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 88 88 case SND_SOC_DAIFMT_CBS_CFS: 89 - val = TEGRA20_I2S_CTRL_MASTER_ENABLE; 89 + val |= TEGRA20_I2S_CTRL_MASTER_ENABLE; 90 90 break; 91 91 case SND_SOC_DAIFMT_CBM_CFM: 92 92 break;
+5 -5
sound/soc/tegra/tegra20_spdif.c
··· 67 67 { 68 68 struct device *dev = dai->dev; 69 69 struct tegra20_spdif *spdif = snd_soc_dai_get_drvdata(dai); 70 - unsigned int mask, val; 70 + unsigned int mask = 0, val = 0; 71 71 int ret, spdifclock; 72 72 73 - mask = TEGRA20_SPDIF_CTRL_PACK | 74 - TEGRA20_SPDIF_CTRL_BIT_MODE_MASK; 73 + mask |= TEGRA20_SPDIF_CTRL_PACK | 74 + TEGRA20_SPDIF_CTRL_BIT_MODE_MASK; 75 75 switch (params_format(params)) { 76 76 case SNDRV_PCM_FORMAT_S16_LE: 77 - val = TEGRA20_SPDIF_CTRL_PACK | 78 - TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT; 77 + val |= TEGRA20_SPDIF_CTRL_PACK | 78 + TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT; 79 79 break; 80 80 default: 81 81 return -EINVAL;
+3 -3
sound/soc/tegra/tegra30_i2s.c
··· 118 118 unsigned int fmt) 119 119 { 120 120 struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai); 121 - unsigned int mask, val; 121 + unsigned int mask = 0, val = 0; 122 122 123 123 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 124 124 case SND_SOC_DAIFMT_NB_NF: ··· 127 127 return -EINVAL; 128 128 } 129 129 130 - mask = TEGRA30_I2S_CTRL_MASTER_ENABLE; 130 + mask |= TEGRA30_I2S_CTRL_MASTER_ENABLE; 131 131 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 132 132 case SND_SOC_DAIFMT_CBS_CFS: 133 - val = TEGRA30_I2S_CTRL_MASTER_ENABLE; 133 + val |= TEGRA30_I2S_CTRL_MASTER_ENABLE; 134 134 break; 135 135 case SND_SOC_DAIFMT_CBM_CFM: 136 136 break;
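The three Tegra hunks (tegra20_i2s, tegra20_spdif, tegra30_i2s) apply the same pattern: mask and val now start at zero and every branch ORs bits in rather than assigning, so no code path can hand an undefined value to the register update and a later branch cannot clobber bits accumulated earlier. A minimal stand-alone illustration of the idiom, using made-up bit names rather than the Tegra register layout:

	#include <stdio.h>

	#define CTRL_MASTER_ENABLE	0x01	/* invented bits, for illustration only */
	#define CTRL_LRCK_INVERT	0x02

	/* Build an update mask and value the way the fixed drivers do:
	 * start both at 0 and accumulate with |= in each branch. */
	static unsigned int build_ctrl(int master, int invert, unsigned int *mask_out)
	{
		unsigned int mask = 0, val = 0;

		mask |= CTRL_MASTER_ENABLE;	/* this bit is always under our control */
		if (master)
			val |= CTRL_MASTER_ENABLE;

		mask |= CTRL_LRCK_INVERT;	/* a later branch contributes more bits */
		if (invert)
			val |= CTRL_LRCK_INVERT;

		*mask_out = mask;
		return val;
	}

	int main(void)
	{
		unsigned int mask, val;

		val = build_ctrl(1, 0, &mask);
		printf("mask=0x%02x val=0x%02x\n", mask, val);
		return 0;
	}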
+3 -3
tools/power/cpupower/utils/cpupower-set.c
··· 18 18 #include "helpers/bitmask.h" 19 19 20 20 static struct option set_opts[] = { 21 - { .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'}, 22 - { .name = "sched-mc", .has_arg = optional_argument, .flag = NULL, .val = 'm'}, 23 - { .name = "sched-smt", .has_arg = optional_argument, .flag = NULL, .val = 's'}, 21 + { .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'}, 22 + { .name = "sched-mc", .has_arg = required_argument, .flag = NULL, .val = 'm'}, 23 + { .name = "sched-smt", .has_arg = required_argument, .flag = NULL, .val = 's'}, 24 24 { }, 25 25 }; 26 26
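The cpupower hunk switches --perf-bias, --sched-mc and --sched-smt from optional_argument to required_argument. With glibc getopt_long(), an optional argument is only recognised when it is glued to the option (--perf-bias=6, or -b6 when the short optstring uses "b::"); a space-separated value is left in argv as a stray non-option and optarg stays NULL. Declaring the argument required lets the usual "cpupower set -b 6" spelling parse. A small stand-alone demo of that behaviour, using a hypothetical --bias option rather than the cpupower sources:

	#include <getopt.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical demo, not cpupower code: parse "-b <val>"/"--bias <val>"
	 * with a required argument.  With optional_argument (and "b::" in the
	 * optstring), "./demo -b 6" would leave optarg NULL because the value
	 * must be attached ("-b6" or "--bias=6"). */
	static struct option long_opts[] = {
		{ .name = "bias", .has_arg = required_argument, .flag = NULL, .val = 'b' },
		{ }
	};

	int main(int argc, char **argv)
	{
		int opt;

		while ((opt = getopt_long(argc, argv, "b:", long_opts, NULL)) != -1) {
			switch (opt) {
			case 'b':
				printf("bias = %ld\n", strtol(optarg, NULL, 10));
				break;
			default:
				fprintf(stderr, "usage: %s -b <val>\n", argv[0]);
				return EXIT_FAILURE;
			}
		}
		return EXIT_SUCCESS;
	}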