Merge tag 'omap-for-v3.13/intc-ldp-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

From Tony Lindgren:
Fix a regression that caused wrong interrupt numbers for some devices
after the sparse IRQ conversion, fix DRA7 console output for earlyprintk,
and fix the LDP LCD backlight when DSS is built into the kernel rather
than as a loadable module.

* tag 'omap-for-v3.13/intc-ldp-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP2+: Fix LCD panel backlight regression for LDP legacy booting
ARM: OMAP2+: hwmod_data: fix missing OMAP_INTC_START in irq data
ARM: DRA7: hwmod: Fix boot crash with DEBUG_LL
+ v3.13-rc5

Signed-off-by: Olof Johansson <olof@lixom.net>
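
For context, the "wrong interrupt numbers" regression above comes down to
legacy hwmod IRQ entries missing the interrupt controller offset once the
sparse IRQ conversion is in place. A minimal sketch of the pattern the hwmod
hunks below apply (the IRQ value and array name are illustrative;
OMAP_INTC_START and struct omap_hwmod_irq_info come from the OMAP2+ hwmod
code):

    /*
     * Illustrative only: with SPARSE_IRQ, a legacy hwmod IRQ number must
     * carry the interrupt controller base, otherwise the device ends up
     * requesting the wrong Linux IRQ.
     */
    static struct omap_hwmod_irq_info example_mpu_irqs[] = {
            { .irq = 20 + OMAP_INTC_START, },
            { .irq = -1 }
    };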

+2040 -1097
+240
Documentation/module-signing.txt
···
···
1 + ==============================
2 + KERNEL MODULE SIGNING FACILITY
3 + ==============================
4 +
5 + CONTENTS
6 +
7 + - Overview.
8 + - Configuring module signing.
9 + - Generating signing keys.
10 + - Public keys in the kernel.
11 + - Manually signing modules.
12 + - Signed modules and stripping.
13 + - Loading signed modules.
14 + - Non-valid signatures and unsigned modules.
15 + - Administering/protecting the private key.
16 +
17 +
18 + ========
19 + OVERVIEW
20 + ========
21 +
22 + The kernel module signing facility cryptographically signs modules during
23 + installation and then checks the signature upon loading the module. This
24 + allows increased kernel security by disallowing the loading of unsigned modules
25 + or modules signed with an invalid key. Module signing increases security by
26 + making it harder to load a malicious module into the kernel. The module
27 + signature checking is done by the kernel so that it is not necessary to have
28 + trusted userspace bits.
29 +
30 + This facility uses X.509 ITU-T standard certificates to encode the public keys
31 + involved. The signatures are not themselves encoded in any industrial standard
32 + type. The facility currently only supports the RSA public key encryption
33 + standard (though it is pluggable and permits others to be used). The possible
34 + hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
35 + SHA-512 (the algorithm is selected by data in the signature).
36 +
37 +
38 + ==========================
39 + CONFIGURING MODULE SIGNING
40 + ==========================
41 +
42 + The module signing facility is enabled by going to the "Enable Loadable Module
43 + Support" section of the kernel configuration and turning on
44 +
45 + CONFIG_MODULE_SIG "Module signature verification"
46 +
47 + This has a number of options available:
48 +
49 + (1) "Require modules to be validly signed" (CONFIG_MODULE_SIG_FORCE)
50 +
51 + This specifies how the kernel should deal with a module that has a
52 + signature for which the key is not known or a module that is unsigned.
53 +
54 + If this is off (ie. "permissive"), then modules for which the key is not
55 + available and modules that are unsigned are permitted, but the kernel will
56 + be marked as being tainted.
57 +
58 + If this is on (ie. "restrictive"), only modules that have a valid
59 + signature that can be verified by a public key in the kernel's possession
60 + will be loaded. All other modules will generate an error.
61 +
62 + Irrespective of the setting here, if the module has a signature block that
63 + cannot be parsed, it will be rejected out of hand.
64 +
65 +
66 + (2) "Automatically sign all modules" (CONFIG_MODULE_SIG_ALL)
67 +
68 + If this is on then modules will be automatically signed during the
69 + modules_install phase of a build. If this is off, then the modules must
70 + be signed manually using:
71 +
72 + scripts/sign-file
73 +
74 +
75 + (3) "Which hash algorithm should modules be signed with?"
76 +
77 + This presents a choice of which hash algorithm the installation phase will
78 + sign the modules with:
79 +
80 + CONFIG_SIG_SHA1 "Sign modules with SHA-1"
81 + CONFIG_SIG_SHA224 "Sign modules with SHA-224"
82 + CONFIG_SIG_SHA256 "Sign modules with SHA-256"
83 + CONFIG_SIG_SHA384 "Sign modules with SHA-384"
84 + CONFIG_SIG_SHA512 "Sign modules with SHA-512"
85 +
86 + The algorithm selected here will also be built into the kernel (rather
87 + than being a module) so that modules signed with that algorithm can have
88 + their signatures checked without causing a dependency loop.
89 +
90 +
91 + =======================
92 + GENERATING SIGNING KEYS
93 + =======================
94 +
95 + Cryptographic keypairs are required to generate and check signatures. A
96 + private key is used to generate a signature and the corresponding public key is
97 + used to check it. The private key is only needed during the build, after which
98 + it can be deleted or stored securely. The public key gets built into the
99 + kernel so that it can be used to check the signatures as the modules are
100 + loaded.
101 +
102 + Under normal conditions, the kernel build will automatically generate a new
103 + keypair using openssl if one does not exist in the files:
104 +
105 + signing_key.priv
106 + signing_key.x509
107 +
108 + during the building of vmlinux (the public part of the key needs to be built
109 + into vmlinux) using parameters in the:
110 +
111 + x509.genkey
112 +
113 + file (which is also generated if it does not already exist).
114 +
115 + It is strongly recommended that you provide your own x509.genkey file.
116 +
117 + Most notably, in the x509.genkey file, the req_distinguished_name section
118 + should be altered from the default:
119 +
120 + [ req_distinguished_name ]
121 + O = Magrathea
122 + CN = Glacier signing key
123 + emailAddress = slartibartfast@magrathea.h2g2
124 +
125 + The generated RSA key size can also be set with:
126 +
127 + [ req ]
128 + default_bits = 4096
129 +
130 +
131 + It is also possible to manually generate the key private/public files using the
132 + x509.genkey key generation configuration file in the root node of the Linux
133 + kernel sources tree and the openssl command. The following is an example to
134 + generate the public/private key files:
135 +
136 + openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
137 + -config x509.genkey -outform DER -out signing_key.x509 \
138 + -keyout signing_key.priv
139 +
140 +
141 + =========================
142 + PUBLIC KEYS IN THE KERNEL
143 + =========================
144 +
145 + The kernel contains a ring of public keys that can be viewed by root. They're
146 + in a keyring called ".system_keyring" that can be seen by:
147 +
148 + [root@deneb ~]# cat /proc/keys
149 + ...
150 + 223c7853 I------ 1 perm 1f030000 0 0 keyring .system_keyring: 1
151 + 302d2d52 I------ 1 perm 1f010000 0 0 asymmetri Fedora kernel signing key: d69a84e6bce3d216b979e9505b3e3ef9a7118079: X509.RSA a7118079 []
152 + ...
153 +
154 + Beyond the public key generated specifically for module signing, any file
155 + placed in the kernel source root directory or the kernel build root directory
156 + whose name is suffixed with ".x509" will be assumed to be an X.509 public key
157 + and will be added to the keyring.
158 +
159 + Further, the architecture code may take public keys from a hardware store and
160 + add those in also (e.g. from the UEFI key database).
161 +
162 + Finally, it is possible to add additional public keys by doing:
163 +
164 + keyctl padd asymmetric "" [.system_keyring-ID] <[key-file]
165 +
166 + e.g.:
167 +
168 + keyctl padd asymmetric "" 0x223c7853 <my_public_key.x509
169 +
170 + Note, however, that the kernel will only permit keys to be added to
171 + .system_keyring _if_ the new key's X.509 wrapper is validly signed by a key
172 + that is already resident in the .system_keyring at the time the key was added.
173 +
174 +
175 + =========================
176 + MANUALLY SIGNING MODULES
177 + =========================
178 +
179 + To manually sign a module, use the scripts/sign-file tool available in
180 + the Linux kernel source tree. The script requires 4 arguments:
181 +
182 + 1. The hash algorithm (e.g., sha256)
183 + 2. The private key filename
184 + 3. The public key filename
185 + 4. The kernel module to be signed
186 +
187 + The following is an example to sign a kernel module:
188 +
189 + scripts/sign-file sha512 kernel-signkey.priv \
190 + kernel-signkey.x509 module.ko
191 +
192 + The hash algorithm used does not have to match the one configured, but if it
193 + doesn't, you should make sure that hash algorithm is either built into the
194 + kernel or can be loaded without requiring itself.
195 +
196 +
197 + ============================
198 + SIGNED MODULES AND STRIPPING
199 + ============================
200 +
201 + A signed module has a digital signature simply appended at the end. The string
202 + "~Module signature appended~." at the end of the module's file confirms that a
203 + signature is present but it does not confirm that the signature is valid!
204 +
205 + Signed modules are BRITTLE as the signature is outside of the defined ELF
206 + container. Thus they MAY NOT be stripped once the signature is computed and
207 + attached. Note the entire module is the signed payload, including any and all
208 + debug information present at the time of signing.
209 +
210 +
211 + ======================
212 + LOADING SIGNED MODULES
213 + ======================
214 +
215 + Modules are loaded with insmod, modprobe, init_module() or finit_module(),
216 + exactly as for unsigned modules as no processing is done in userspace. The
217 + signature checking is all done within the kernel.
218 +
219 +
220 + =========================================
221 + NON-VALID SIGNATURES AND UNSIGNED MODULES
222 + =========================================
223 +
224 + If CONFIG_MODULE_SIG_FORCE is enabled or enforcemodulesig=1 is supplied on
225 + the kernel command line, the kernel will only load validly signed modules
226 + for which it has a public key. Otherwise, it will also load modules that are
227 + unsigned. Any module for which the kernel has a key, but which proves to have
228 + a signature mismatch will not be permitted to load.
229 +
230 + Any module that has an unparseable signature will be rejected.
231 +
232 +
233 + =========================================
234 + ADMINISTERING/PROTECTING THE PRIVATE KEY
235 + =========================================
236 +
237 + Since the private key is used to sign modules, viruses and malware could use
238 + the private key to sign modules and compromise the operating system. The
239 + private key must be either destroyed or moved to a secure location and not kept
240 + in the root node of the kernel source tree.
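
The "signed modules and stripping" section above notes that the signature is
simply appended to the module file, terminated by a marker string. As a quick
presence check from userspace, something like the following sketch could be
used (a hypothetical helper, not part of this merge; it assumes the marker
text quoted above is followed by exactly one trailing byte, and it only
confirms presence, not validity):

    /*
     * Hypothetical helper: report whether a module file ends with the
     * appended-signature marker described in the documentation above.
     * Assumes the marker text is followed by exactly one trailing byte.
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
            static const char marker[] = "~Module signature appended~";
            char tail[sizeof(marker)];      /* marker text plus one trailing byte */
            FILE *f;
            int found = 0;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s <module.ko>\n", argv[0]);
                    return 2;
            }
            f = fopen(argv[1], "rb");
            if (!f) {
                    perror(argv[1]);
                    return 2;
            }
            if (fseek(f, -(long)sizeof(tail), SEEK_END) == 0 &&
                fread(tail, 1, sizeof(tail), f) == sizeof(tail) &&
                memcmp(tail, marker, sizeof(marker) - 1) == 0)
                    found = 1;
            fclose(f);
            printf("%s: signature marker %s\n", argv[1],
                   found ? "present" : "not found");
            return found ? 0 : 1;
    }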
+6 -2
Documentation/networking/ip-sysctl.txt
···
16 Default: 64 (as recommended by RFC1700)
17
18 ip_no_pmtu_disc - BOOLEAN
19 - Disable Path MTU Discovery.
20 - default FALSE
21
22 min_pmtu - INTEGER
23 default 552 - minimum discovered Path MTU
···
16 Default: 64 (as recommended by RFC1700)
17
18 ip_no_pmtu_disc - BOOLEAN
19 + Disable Path MTU Discovery. If enabled and a
20 + fragmentation-required ICMP is received, the PMTU to this
21 + destination will be set to min_pmtu (see below). You will need
22 + to raise min_pmtu to the smallest interface MTU on your system
23 + manually if you want to avoid locally generated fragments.
24 + Default: FALSE
25
26 min_pmtu - INTEGER
27 default 552 - minimum discovered Path MTU
+21 -4
MAINTAINERS
··· 3763 3764 GPIO SUBSYSTEM 3765 M: Linus Walleij <linus.walleij@linaro.org> 3766 - S: Maintained 3767 L: linux-gpio@vger.kernel.org 3768 - F: Documentation/gpio.txt 3769 F: drivers/gpio/ 3770 F: include/linux/gpio* 3771 F: include/asm-generic/gpio.h ··· 3834 T: git git://linuxtv.org/media_tree.git 3835 S: Maintained 3836 F: drivers/media/usb/gspca/ 3837 3838 STK1160 USB VIDEO CAPTURE DRIVER 3839 M: Ezequiel Garcia <elezegarcia@gmail.com> ··· 5921 M: Herbert Xu <herbert@gondor.apana.org.au> 5922 M: "David S. Miller" <davem@davemloft.net> 5923 L: netdev@vger.kernel.org 5924 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git 5925 S: Maintained 5926 F: net/xfrm/ 5927 F: net/key/ 5928 F: net/ipv4/xfrm* 5929 F: net/ipv6/xfrm* 5930 F: include/uapi/linux/xfrm.h 5931 F: include/net/xfrm.h 5932 ··· 9590 9591 XFS FILESYSTEM 9592 P: Silicon Graphics Inc 9593 - M: Dave Chinner <dchinner@fromorbit.com> 9594 M: Ben Myers <bpm@sgi.com> 9595 M: xfs@oss.sgi.com 9596 L: xfs@oss.sgi.com
··· 3763 3764 GPIO SUBSYSTEM 3765 M: Linus Walleij <linus.walleij@linaro.org> 3766 + M: Alexandre Courbot <gnurou@gmail.com> 3767 L: linux-gpio@vger.kernel.org 3768 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git 3769 + S: Maintained 3770 + F: Documentation/gpio/ 3771 F: drivers/gpio/ 3772 F: include/linux/gpio* 3773 F: include/asm-generic/gpio.h ··· 3832 T: git git://linuxtv.org/media_tree.git 3833 S: Maintained 3834 F: drivers/media/usb/gspca/ 3835 + 3836 + GUID PARTITION TABLE (GPT) 3837 + M: Davidlohr Bueso <davidlohr@hp.com> 3838 + L: linux-efi@vger.kernel.org 3839 + S: Maintained 3840 + F: block/partitions/efi.* 3841 3842 STK1160 USB VIDEO CAPTURE DRIVER 3843 M: Ezequiel Garcia <elezegarcia@gmail.com> ··· 5913 M: Herbert Xu <herbert@gondor.apana.org.au> 5914 M: "David S. Miller" <davem@davemloft.net> 5915 L: netdev@vger.kernel.org 5916 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git 5917 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git 5918 S: Maintained 5919 F: net/xfrm/ 5920 F: net/key/ 5921 F: net/ipv4/xfrm* 5922 + F: net/ipv4/esp4.c 5923 + F: net/ipv4/ah4.c 5924 + F: net/ipv4/ipcomp.c 5925 + F: net/ipv4/ip_vti.c 5926 F: net/ipv6/xfrm* 5927 + F: net/ipv6/esp6.c 5928 + F: net/ipv6/ah6.c 5929 + F: net/ipv6/ipcomp6.c 5930 + F: net/ipv6/ip6_vti.c 5931 F: include/uapi/linux/xfrm.h 5932 F: include/net/xfrm.h 5933 ··· 9573 9574 XFS FILESYSTEM 9575 P: Silicon Graphics Inc 9576 + M: Dave Chinner <david@fromorbit.com> 9577 M: Ben Myers <bpm@sgi.com> 9578 M: xfs@oss.sgi.com 9579 L: xfs@oss.sgi.com
+10 -14
Makefile
··· 1 VERSION = 3 2 PATCHLEVEL = 13 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 5 NAME = One Giant Leap for Frogkind 6 7 # *DOCUMENTATION* ··· 732 # Select initial ramdisk compression format, default is gzip(1). 733 # This shall be used by the dracut(8) tool while creating an initramfs image. 734 # 735 - INITRD_COMPRESS=gzip 736 - ifeq ($(CONFIG_RD_BZIP2), y) 737 - INITRD_COMPRESS=bzip2 738 - else ifeq ($(CONFIG_RD_LZMA), y) 739 - INITRD_COMPRESS=lzma 740 - else ifeq ($(CONFIG_RD_XZ), y) 741 - INITRD_COMPRESS=xz 742 - else ifeq ($(CONFIG_RD_LZO), y) 743 - INITRD_COMPRESS=lzo 744 - else ifeq ($(CONFIG_RD_LZ4), y) 745 - INITRD_COMPRESS=lz4 746 - endif 747 - export INITRD_COMPRESS 748 749 ifdef CONFIG_MODULE_SIG_ALL 750 MODSECKEY = ./signing_key.priv
··· 1 VERSION = 3 2 PATCHLEVEL = 13 3 SUBLEVEL = 0 4 + EXTRAVERSION = -rc5 5 NAME = One Giant Leap for Frogkind 6 7 # *DOCUMENTATION* ··· 732 # Select initial ramdisk compression format, default is gzip(1). 733 # This shall be used by the dracut(8) tool while creating an initramfs image. 734 # 735 + INITRD_COMPRESS-y := gzip 736 + INITRD_COMPRESS-$(CONFIG_RD_BZIP2) := bzip2 737 + INITRD_COMPRESS-$(CONFIG_RD_LZMA) := lzma 738 + INITRD_COMPRESS-$(CONFIG_RD_XZ) := xz 739 + INITRD_COMPRESS-$(CONFIG_RD_LZO) := lzo 740 + INITRD_COMPRESS-$(CONFIG_RD_LZ4) := lz4 741 + # do not export INITRD_COMPRESS, since we didn't actually 742 + # choose a sane default compression above. 743 + # export INITRD_COMPRESS := $(INITRD_COMPRESS-y) 744 745 ifdef CONFIG_MODULE_SIG_ALL 746 MODSECKEY = ./signing_key.priv
+7 -1
arch/arc/include/uapi/asm/unistd.h
··· 8 9 /******** no-legacy-syscalls-ABI *******/ 10 11 - #ifndef _UAPI_ASM_ARC_UNISTD_H 12 #define _UAPI_ASM_ARC_UNISTD_H 13 14 #define __ARCH_WANT_SYS_EXECVE ··· 39 /* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */ 40 #define __NR_sysfs (__NR_arch_specific_syscall + 3) 41 __SYSCALL(__NR_sysfs, sys_sysfs) 42 43 #endif
··· 8 9 /******** no-legacy-syscalls-ABI *******/ 10 11 + /* 12 + * Non-typical guard macro to enable inclusion twice in ARCH sys.c 13 + * That is how the Generic syscall wrapper generator works 14 + */ 15 + #if !defined(_UAPI_ASM_ARC_UNISTD_H) || defined(__SYSCALL) 16 #define _UAPI_ASM_ARC_UNISTD_H 17 18 #define __ARCH_WANT_SYS_EXECVE ··· 35 /* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */ 36 #define __NR_sysfs (__NR_arch_specific_syscall + 3) 37 __SYSCALL(__NR_sysfs, sys_sysfs) 38 + 39 + #undef __SYSCALL 40 41 #endif
+6 -1
arch/arm/mach-omap2/board-ldp.c
··· 242 243 static int ldp_twl_gpio_setup(struct device *dev, unsigned gpio, unsigned ngpio) 244 { 245 /* LCD enable GPIO */ 246 ldp_lcd_pdata.enable_gpio = gpio + 7; 247 248 /* Backlight enable GPIO */ 249 ldp_lcd_pdata.backlight_gpio = gpio + 15; 250 251 return 0; 252 } ··· 352 353 static struct platform_device *ldp_devices[] __initdata = { 354 &ldp_gpio_keys_device, 355 - &ldp_lcd_device, 356 }; 357 358 #ifdef CONFIG_OMAP_MUX
··· 242 243 static int ldp_twl_gpio_setup(struct device *dev, unsigned gpio, unsigned ngpio) 244 { 245 + int res; 246 + 247 /* LCD enable GPIO */ 248 ldp_lcd_pdata.enable_gpio = gpio + 7; 249 250 /* Backlight enable GPIO */ 251 ldp_lcd_pdata.backlight_gpio = gpio + 15; 252 + 253 + res = platform_device_register(&ldp_lcd_device); 254 + if (res) 255 + pr_err("Unable to register LCD: %d\n", res); 256 257 return 0; 258 } ··· 346 347 static struct platform_device *ldp_devices[] __initdata = { 348 &ldp_gpio_keys_device, 349 }; 350 351 #ifdef CONFIG_OMAP_MUX
+2 -2
arch/arm/mach-omap2/omap_hwmod_2xxx_ipblock_data.c
··· 796 797 /* gpmc */ 798 static struct omap_hwmod_irq_info omap2xxx_gpmc_irqs[] = { 799 - { .irq = 20 }, 800 { .irq = -1 } 801 }; 802 ··· 841 }; 842 843 static struct omap_hwmod_irq_info omap2_rng_mpu_irqs[] = { 844 - { .irq = 52 }, 845 { .irq = -1 } 846 }; 847
··· 796 797 /* gpmc */ 798 static struct omap_hwmod_irq_info omap2xxx_gpmc_irqs[] = { 799 + { .irq = 20 + OMAP_INTC_START, }, 800 { .irq = -1 } 801 }; 802 ··· 841 }; 842 843 static struct omap_hwmod_irq_info omap2_rng_mpu_irqs[] = { 844 + { .irq = 52 + OMAP_INTC_START, }, 845 { .irq = -1 } 846 }; 847
+3 -3
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 2165 }; 2166 2167 static struct omap_hwmod_irq_info omap3xxx_gpmc_irqs[] = { 2168 - { .irq = 20 }, 2169 { .irq = -1 } 2170 }; 2171 ··· 2999 3000 static struct omap_hwmod omap3xxx_mmu_isp_hwmod; 3001 static struct omap_hwmod_irq_info omap3xxx_mmu_isp_irqs[] = { 3002 - { .irq = 24 }, 3003 { .irq = -1 } 3004 }; 3005 ··· 3041 3042 static struct omap_hwmod omap3xxx_mmu_iva_hwmod; 3043 static struct omap_hwmod_irq_info omap3xxx_mmu_iva_irqs[] = { 3044 - { .irq = 28 }, 3045 { .irq = -1 } 3046 }; 3047
··· 2165 }; 2166 2167 static struct omap_hwmod_irq_info omap3xxx_gpmc_irqs[] = { 2168 + { .irq = 20 + OMAP_INTC_START, }, 2169 { .irq = -1 } 2170 }; 2171 ··· 2999 3000 static struct omap_hwmod omap3xxx_mmu_isp_hwmod; 3001 static struct omap_hwmod_irq_info omap3xxx_mmu_isp_irqs[] = { 3002 + { .irq = 24 + OMAP_INTC_START, }, 3003 { .irq = -1 } 3004 }; 3005 ··· 3041 3042 static struct omap_hwmod omap3xxx_mmu_iva_hwmod; 3043 static struct omap_hwmod_irq_info omap3xxx_mmu_iva_irqs[] = { 3044 + { .irq = 28 + OMAP_INTC_START, }, 3045 { .irq = -1 } 3046 }; 3047
+1 -1
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1637 .class = &dra7xx_uart_hwmod_class, 1638 .clkdm_name = "l4per_clkdm", 1639 .main_clk = "uart1_gfclk_mux", 1640 - .flags = HWMOD_SWSUP_SIDLE_ACT, 1641 .prcm = { 1642 .omap4 = { 1643 .clkctrl_offs = DRA7XX_CM_L4PER_UART1_CLKCTRL_OFFSET,
··· 1637 .class = &dra7xx_uart_hwmod_class, 1638 .clkdm_name = "l4per_clkdm", 1639 .main_clk = "uart1_gfclk_mux", 1640 + .flags = HWMOD_SWSUP_SIDLE_ACT | DEBUG_OMAP2UART1_FLAGS, 1641 .prcm = { 1642 .omap4 = { 1643 .clkctrl_offs = DRA7XX_CM_L4PER_UART1_CLKCTRL_OFFSET,
+3 -3
arch/arm/xen/enlighten.c
··· 96 struct remap_data *info = data; 97 struct page *page = info->pages[info->index++]; 98 unsigned long pfn = page_to_pfn(page); 99 - pte_t pte = pfn_pte(pfn, info->prot); 100 101 if (map_foreign_page(pfn, info->fgmfn, info->domid)) 102 return -EFAULT; ··· 224 } 225 if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res)) 226 return 0; 227 - xen_hvm_resume_frames = res.start >> PAGE_SHIFT; 228 xen_events_irq = irq_of_parse_and_map(node, 0); 229 pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n", 230 - version, xen_events_irq, xen_hvm_resume_frames); 231 xen_domain_type = XEN_HVM_DOMAIN; 232 233 xen_setup_features();
··· 96 struct remap_data *info = data; 97 struct page *page = info->pages[info->index++]; 98 unsigned long pfn = page_to_pfn(page); 99 + pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot)); 100 101 if (map_foreign_page(pfn, info->fgmfn, info->domid)) 102 return -EFAULT; ··· 224 } 225 if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res)) 226 return 0; 227 + xen_hvm_resume_frames = res.start; 228 xen_events_irq = irq_of_parse_and_map(node, 0); 229 pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n", 230 + version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT)); 231 xen_domain_type = XEN_HVM_DOMAIN; 232 233 xen_setup_features();
-4
arch/arm64/include/asm/xen/page-coherent.h
··· 23 unsigned long offset, size_t size, enum dma_data_direction dir, 24 struct dma_attrs *attrs) 25 { 26 - __generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs); 27 } 28 29 static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 30 size_t size, enum dma_data_direction dir, 31 struct dma_attrs *attrs) 32 { 33 - __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs); 34 } 35 36 static inline void xen_dma_sync_single_for_cpu(struct device *hwdev, 37 dma_addr_t handle, size_t size, enum dma_data_direction dir) 38 { 39 - __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir); 40 } 41 42 static inline void xen_dma_sync_single_for_device(struct device *hwdev, 43 dma_addr_t handle, size_t size, enum dma_data_direction dir) 44 { 45 - __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir); 46 } 47 #endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
··· 23 unsigned long offset, size_t size, enum dma_data_direction dir, 24 struct dma_attrs *attrs) 25 { 26 } 27 28 static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 29 size_t size, enum dma_data_direction dir, 30 struct dma_attrs *attrs) 31 { 32 } 33 34 static inline void xen_dma_sync_single_for_cpu(struct device *hwdev, 35 dma_addr_t handle, size_t size, enum dma_data_direction dir) 36 { 37 } 38 39 static inline void xen_dma_sync_single_for_device(struct device *hwdev, 40 dma_addr_t handle, size_t size, enum dma_data_direction dir) 41 { 42 } 43 #endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
+17 -19
arch/arm64/kernel/ptrace.c
··· 214 { 215 int err, len, type, disabled = !ctrl.enabled; 216 217 - if (disabled) { 218 - len = 0; 219 - type = HW_BREAKPOINT_EMPTY; 220 - } else { 221 - err = arch_bp_generic_fields(ctrl, &len, &type); 222 - if (err) 223 - return err; 224 225 - switch (note_type) { 226 - case NT_ARM_HW_BREAK: 227 - if ((type & HW_BREAKPOINT_X) != type) 228 - return -EINVAL; 229 - break; 230 - case NT_ARM_HW_WATCH: 231 - if ((type & HW_BREAKPOINT_RW) != type) 232 - return -EINVAL; 233 - break; 234 - default: 235 return -EINVAL; 236 - } 237 } 238 239 attr->bp_len = len; 240 attr->bp_type = type; 241 - attr->disabled = disabled; 242 243 return 0; 244 }
··· 214 { 215 int err, len, type, disabled = !ctrl.enabled; 216 217 + attr->disabled = disabled; 218 + if (disabled) 219 + return 0; 220 221 + err = arch_bp_generic_fields(ctrl, &len, &type); 222 + if (err) 223 + return err; 224 + 225 + switch (note_type) { 226 + case NT_ARM_HW_BREAK: 227 + if ((type & HW_BREAKPOINT_X) != type) 228 return -EINVAL; 229 + break; 230 + case NT_ARM_HW_WATCH: 231 + if ((type & HW_BREAKPOINT_RW) != type) 232 + return -EINVAL; 233 + break; 234 + default: 235 + return -EINVAL; 236 } 237 238 attr->bp_len = len; 239 attr->bp_type = type; 240 241 return 0; 242 }
+4
arch/powerpc/include/asm/kvm_book3s.h
··· 192 extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst); 193 extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst); 194 extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd); 195 196 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) 197 {
··· 192 extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst); 193 extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst); 194 extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd); 195 + extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu, 196 + struct kvm_vcpu *vcpu); 197 + extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, 198 + struct kvmppc_book3s_shadow_vcpu *svcpu); 199 200 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) 201 {
+2
arch/powerpc/include/asm/kvm_book3s_asm.h
··· 79 ulong vmhandler; 80 ulong scratch0; 81 ulong scratch1; 82 u8 in_guest; 83 u8 restore_hid5; 84 u8 napping; ··· 107 }; 108 109 struct kvmppc_book3s_shadow_vcpu { 110 ulong gpr[14]; 111 u32 cr; 112 u32 xer;
··· 79 ulong vmhandler; 80 ulong scratch0; 81 ulong scratch1; 82 + ulong scratch2; 83 u8 in_guest; 84 u8 restore_hid5; 85 u8 napping; ··· 106 }; 107 108 struct kvmppc_book3s_shadow_vcpu { 109 + bool in_use; 110 ulong gpr[14]; 111 u32 cr; 112 u32 xer;
+2 -2
arch/powerpc/include/asm/opal.h
··· 720 int64_t opal_pci_poll(uint64_t phb_id); 721 int64_t opal_return_cpu(void); 722 723 - int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, uint64_t *val); 724 int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val); 725 726 int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type, 727 uint32_t addr, uint32_t data, uint32_t sz); 728 int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type, 729 - uint32_t addr, uint32_t *data, uint32_t sz); 730 int64_t opal_validate_flash(uint64_t buffer, uint32_t *size, uint32_t *result); 731 int64_t opal_manage_flash(uint8_t op); 732 int64_t opal_update_flash(uint64_t blk_list);
··· 720 int64_t opal_pci_poll(uint64_t phb_id); 721 int64_t opal_return_cpu(void); 722 723 + int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, __be64 *val); 724 int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val); 725 726 int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type, 727 uint32_t addr, uint32_t data, uint32_t sz); 728 int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type, 729 + uint32_t addr, __be32 *data, uint32_t sz); 730 int64_t opal_validate_flash(uint64_t buffer, uint32_t *size, uint32_t *result); 731 int64_t opal_manage_flash(uint8_t op); 732 int64_t opal_update_flash(uint64_t blk_list);
+1 -1
arch/powerpc/include/asm/switch_to.h
··· 35 extern void enable_kernel_spe(void); 36 extern void giveup_spe(struct task_struct *); 37 extern void load_up_spe(struct task_struct *); 38 - extern void switch_booke_debug_regs(struct thread_struct *new_thread); 39 40 #ifndef CONFIG_SMP 41 extern void discard_lazy_cpu_state(void);
··· 35 extern void enable_kernel_spe(void); 36 extern void giveup_spe(struct task_struct *); 37 extern void load_up_spe(struct task_struct *); 38 + extern void switch_booke_debug_regs(struct debug_reg *new_debug); 39 40 #ifndef CONFIG_SMP 41 extern void discard_lazy_cpu_state(void);
+1
arch/powerpc/kernel/asm-offsets.c
··· 576 HSTATE_FIELD(HSTATE_VMHANDLER, vmhandler); 577 HSTATE_FIELD(HSTATE_SCRATCH0, scratch0); 578 HSTATE_FIELD(HSTATE_SCRATCH1, scratch1); 579 HSTATE_FIELD(HSTATE_IN_GUEST, in_guest); 580 HSTATE_FIELD(HSTATE_RESTORE_HID5, restore_hid5); 581 HSTATE_FIELD(HSTATE_NAPPING, napping);
··· 576 HSTATE_FIELD(HSTATE_VMHANDLER, vmhandler); 577 HSTATE_FIELD(HSTATE_SCRATCH0, scratch0); 578 HSTATE_FIELD(HSTATE_SCRATCH1, scratch1); 579 + HSTATE_FIELD(HSTATE_SCRATCH2, scratch2); 580 HSTATE_FIELD(HSTATE_IN_GUEST, in_guest); 581 HSTATE_FIELD(HSTATE_RESTORE_HID5, restore_hid5); 582 HSTATE_FIELD(HSTATE_NAPPING, napping);
+3 -3
arch/powerpc/kernel/crash_dump.c
··· 124 void crash_free_reserved_phys_range(unsigned long begin, unsigned long end) 125 { 126 unsigned long addr; 127 - const u32 *basep, *sizep; 128 unsigned int rtas_start = 0, rtas_end = 0; 129 130 basep = of_get_property(rtas.dev, "linux,rtas-base", NULL); 131 sizep = of_get_property(rtas.dev, "rtas-size", NULL); 132 133 if (basep && sizep) { 134 - rtas_start = *basep; 135 - rtas_end = *basep + *sizep; 136 } 137 138 for (addr = begin; addr < end; addr += PAGE_SIZE) {
··· 124 void crash_free_reserved_phys_range(unsigned long begin, unsigned long end) 125 { 126 unsigned long addr; 127 + const __be32 *basep, *sizep; 128 unsigned int rtas_start = 0, rtas_end = 0; 129 130 basep = of_get_property(rtas.dev, "linux,rtas-base", NULL); 131 sizep = of_get_property(rtas.dev, "rtas-size", NULL); 132 133 if (basep && sizep) { 134 + rtas_start = be32_to_cpup(basep); 135 + rtas_end = rtas_start + be32_to_cpup(sizep); 136 } 137 138 for (addr = begin; addr < end; addr += PAGE_SIZE) {
+16 -16
arch/powerpc/kernel/process.c
··· 339 #endif 340 } 341 342 - static void prime_debug_regs(struct thread_struct *thread) 343 { 344 /* 345 * We could have inherited MSR_DE from userspace, since ··· 348 */ 349 mtmsr(mfmsr() & ~MSR_DE); 350 351 - mtspr(SPRN_IAC1, thread->debug.iac1); 352 - mtspr(SPRN_IAC2, thread->debug.iac2); 353 #if CONFIG_PPC_ADV_DEBUG_IACS > 2 354 - mtspr(SPRN_IAC3, thread->debug.iac3); 355 - mtspr(SPRN_IAC4, thread->debug.iac4); 356 #endif 357 - mtspr(SPRN_DAC1, thread->debug.dac1); 358 - mtspr(SPRN_DAC2, thread->debug.dac2); 359 #if CONFIG_PPC_ADV_DEBUG_DVCS > 0 360 - mtspr(SPRN_DVC1, thread->debug.dvc1); 361 - mtspr(SPRN_DVC2, thread->debug.dvc2); 362 #endif 363 - mtspr(SPRN_DBCR0, thread->debug.dbcr0); 364 - mtspr(SPRN_DBCR1, thread->debug.dbcr1); 365 #ifdef CONFIG_BOOKE 366 - mtspr(SPRN_DBCR2, thread->debug.dbcr2); 367 #endif 368 } 369 /* ··· 371 * debug registers, set the debug registers from the values 372 * stored in the new thread. 373 */ 374 - void switch_booke_debug_regs(struct thread_struct *new_thread) 375 { 376 if ((current->thread.debug.dbcr0 & DBCR0_IDM) 377 - || (new_thread->debug.dbcr0 & DBCR0_IDM)) 378 - prime_debug_regs(new_thread); 379 } 380 EXPORT_SYMBOL_GPL(switch_booke_debug_regs); 381 #else /* !CONFIG_PPC_ADV_DEBUG_REGS */ ··· 683 #endif /* CONFIG_SMP */ 684 685 #ifdef CONFIG_PPC_ADV_DEBUG_REGS 686 - switch_booke_debug_regs(&new->thread); 687 #else 688 /* 689 * For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
··· 339 #endif 340 } 341 342 + static void prime_debug_regs(struct debug_reg *debug) 343 { 344 /* 345 * We could have inherited MSR_DE from userspace, since ··· 348 */ 349 mtmsr(mfmsr() & ~MSR_DE); 350 351 + mtspr(SPRN_IAC1, debug->iac1); 352 + mtspr(SPRN_IAC2, debug->iac2); 353 #if CONFIG_PPC_ADV_DEBUG_IACS > 2 354 + mtspr(SPRN_IAC3, debug->iac3); 355 + mtspr(SPRN_IAC4, debug->iac4); 356 #endif 357 + mtspr(SPRN_DAC1, debug->dac1); 358 + mtspr(SPRN_DAC2, debug->dac2); 359 #if CONFIG_PPC_ADV_DEBUG_DVCS > 0 360 + mtspr(SPRN_DVC1, debug->dvc1); 361 + mtspr(SPRN_DVC2, debug->dvc2); 362 #endif 363 + mtspr(SPRN_DBCR0, debug->dbcr0); 364 + mtspr(SPRN_DBCR1, debug->dbcr1); 365 #ifdef CONFIG_BOOKE 366 + mtspr(SPRN_DBCR2, debug->dbcr2); 367 #endif 368 } 369 /* ··· 371 * debug registers, set the debug registers from the values 372 * stored in the new thread. 373 */ 374 + void switch_booke_debug_regs(struct debug_reg *new_debug) 375 { 376 if ((current->thread.debug.dbcr0 & DBCR0_IDM) 377 + || (new_debug->dbcr0 & DBCR0_IDM)) 378 + prime_debug_regs(new_debug); 379 } 380 EXPORT_SYMBOL_GPL(switch_booke_debug_regs); 381 #else /* !CONFIG_PPC_ADV_DEBUG_REGS */ ··· 683 #endif /* CONFIG_SMP */ 684 685 #ifdef CONFIG_PPC_ADV_DEBUG_REGS 686 + switch_booke_debug_regs(&new->thread.debug); 687 #else 688 /* 689 * For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
+2 -2
arch/powerpc/kernel/ptrace.c
··· 1555 1556 flush_fp_to_thread(child); 1557 if (fpidx < (PT_FPSCR - PT_FPR0)) 1558 - memcpy(&tmp, &child->thread.fp_state.fpr, 1559 sizeof(long)); 1560 else 1561 tmp = child->thread.fp_state.fpscr; ··· 1588 1589 flush_fp_to_thread(child); 1590 if (fpidx < (PT_FPSCR - PT_FPR0)) 1591 - memcpy(&child->thread.fp_state.fpr, &data, 1592 sizeof(long)); 1593 else 1594 child->thread.fp_state.fpscr = data;
··· 1555 1556 flush_fp_to_thread(child); 1557 if (fpidx < (PT_FPSCR - PT_FPR0)) 1558 + memcpy(&tmp, &child->thread.TS_FPR(fpidx), 1559 sizeof(long)); 1560 else 1561 tmp = child->thread.fp_state.fpscr; ··· 1588 1589 flush_fp_to_thread(child); 1590 if (fpidx < (PT_FPSCR - PT_FPR0)) 1591 + memcpy(&child->thread.TS_FPR(fpidx), &data, 1592 sizeof(long)); 1593 else 1594 child->thread.fp_state.fpscr = data;
+2 -2
arch/powerpc/kernel/setup-common.c
··· 479 if (machine_is(pseries) && firmware_has_feature(FW_FEATURE_LPAR) && 480 (dn = of_find_node_by_path("/rtas"))) { 481 int num_addr_cell, num_size_cell, maxcpus; 482 - const unsigned int *ireg; 483 484 num_addr_cell = of_n_addr_cells(dn); 485 num_size_cell = of_n_size_cells(dn); ··· 489 if (!ireg) 490 goto out; 491 492 - maxcpus = ireg[num_addr_cell + num_size_cell]; 493 494 /* Double maxcpus for processors which have SMT capability */ 495 if (cpu_has_feature(CPU_FTR_SMT))
··· 479 if (machine_is(pseries) && firmware_has_feature(FW_FEATURE_LPAR) && 480 (dn = of_find_node_by_path("/rtas"))) { 481 int num_addr_cell, num_size_cell, maxcpus; 482 + const __be32 *ireg; 483 484 num_addr_cell = of_n_addr_cells(dn); 485 num_size_cell = of_n_size_cells(dn); ··· 489 if (!ireg) 490 goto out; 491 492 + maxcpus = be32_to_cpup(ireg + num_addr_cell + num_size_cell); 493 494 /* Double maxcpus for processors which have SMT capability */ 495 if (cpu_has_feature(CPU_FTR_SMT))
+2 -2
arch/powerpc/kernel/smp.c
··· 580 int cpu_to_core_id(int cpu) 581 { 582 struct device_node *np; 583 - const int *reg; 584 int id = -1; 585 586 np = of_get_cpu_node(cpu, NULL); ··· 591 if (!reg) 592 goto out; 593 594 - id = *reg; 595 out: 596 of_node_put(np); 597 return id;
··· 580 int cpu_to_core_id(int cpu) 581 { 582 struct device_node *np; 583 + const __be32 *reg; 584 int id = -1; 585 586 np = of_get_cpu_node(cpu, NULL); ··· 591 if (!reg) 592 goto out; 593 594 + id = be32_to_cpup(reg); 595 out: 596 of_node_put(np); 597 return id;
+14 -4
arch/powerpc/kvm/book3s_64_mmu_hv.c
··· 469 slb_v = vcpu->kvm->arch.vrma_slb_v; 470 } 471 472 /* Find the HPTE in the hash table */ 473 index = kvmppc_hv_find_lock_hpte(kvm, eaddr, slb_v, 474 HPTE_V_VALID | HPTE_V_ABSENT); 475 - if (index < 0) 476 return -ENOENT; 477 hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4)); 478 v = hptep[0] & ~HPTE_V_HVLOCK; 479 gr = kvm->arch.revmap[index].guest_rpte; ··· 484 /* Unlock the HPTE */ 485 asm volatile("lwsync" : : : "memory"); 486 hptep[0] = v; 487 488 gpte->eaddr = eaddr; 489 gpte->vpage = ((v & HPTE_V_AVPN) << 4) | ((eaddr >> 12) & 0xfff); ··· 669 return -EFAULT; 670 } else { 671 page = pages[0]; 672 if (PageHuge(page)) { 673 page = compound_head(page); 674 pte_size <<= compound_order(page); ··· 694 } 695 rcu_read_unlock_sched(); 696 } 697 - pfn = page_to_pfn(page); 698 } 699 700 ret = -EFAULT; ··· 711 r = (r & ~(HPTE_R_W|HPTE_R_I|HPTE_R_G)) | HPTE_R_M; 712 } 713 714 - /* Set the HPTE to point to pfn */ 715 - r = (r & ~(HPTE_R_PP0 - pte_size)) | (pfn << PAGE_SHIFT); 716 if (hpte_is_writable(r) && !write_ok) 717 r = hpte_make_readonly(r); 718 ret = RESUME_GUEST;
··· 469 slb_v = vcpu->kvm->arch.vrma_slb_v; 470 } 471 472 + preempt_disable(); 473 /* Find the HPTE in the hash table */ 474 index = kvmppc_hv_find_lock_hpte(kvm, eaddr, slb_v, 475 HPTE_V_VALID | HPTE_V_ABSENT); 476 + if (index < 0) { 477 + preempt_enable(); 478 return -ENOENT; 479 + } 480 hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4)); 481 v = hptep[0] & ~HPTE_V_HVLOCK; 482 gr = kvm->arch.revmap[index].guest_rpte; ··· 481 /* Unlock the HPTE */ 482 asm volatile("lwsync" : : : "memory"); 483 hptep[0] = v; 484 + preempt_enable(); 485 486 gpte->eaddr = eaddr; 487 gpte->vpage = ((v & HPTE_V_AVPN) << 4) | ((eaddr >> 12) & 0xfff); ··· 665 return -EFAULT; 666 } else { 667 page = pages[0]; 668 + pfn = page_to_pfn(page); 669 if (PageHuge(page)) { 670 page = compound_head(page); 671 pte_size <<= compound_order(page); ··· 689 } 690 rcu_read_unlock_sched(); 691 } 692 } 693 694 ret = -EFAULT; ··· 707 r = (r & ~(HPTE_R_W|HPTE_R_I|HPTE_R_G)) | HPTE_R_M; 708 } 709 710 + /* 711 + * Set the HPTE to point to pfn. 712 + * Since the pfn is at PAGE_SIZE granularity, make sure we 713 + * don't mask out lower-order bits if psize < PAGE_SIZE. 714 + */ 715 + if (psize < PAGE_SIZE) 716 + psize = PAGE_SIZE; 717 + r = (r & ~(HPTE_R_PP0 - psize)) | ((pfn << PAGE_SHIFT) & ~(psize - 1)); 718 if (hpte_is_writable(r) && !write_ok) 719 r = hpte_make_readonly(r); 720 ret = RESUME_GUEST;
+14 -10
arch/powerpc/kvm/book3s_hv.c
··· 131 static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) 132 { 133 struct kvmppc_vcore *vc = vcpu->arch.vcore; 134 135 - spin_lock(&vcpu->arch.tbacct_lock); 136 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE && 137 vc->preempt_tb != TB_NIL) { 138 vc->stolen_tb += mftb() - vc->preempt_tb; ··· 144 vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; 145 vcpu->arch.busy_preempt = TB_NIL; 146 } 147 - spin_unlock(&vcpu->arch.tbacct_lock); 148 } 149 150 static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) 151 { 152 struct kvmppc_vcore *vc = vcpu->arch.vcore; 153 154 - spin_lock(&vcpu->arch.tbacct_lock); 155 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE) 156 vc->preempt_tb = mftb(); 157 if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) 158 vcpu->arch.busy_preempt = mftb(); 159 - spin_unlock(&vcpu->arch.tbacct_lock); 160 } 161 162 static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr) ··· 488 */ 489 if (vc->vcore_state != VCORE_INACTIVE && 490 vc->runner->arch.run_task != current) { 491 - spin_lock(&vc->runner->arch.tbacct_lock); 492 p = vc->stolen_tb; 493 if (vc->preempt_tb != TB_NIL) 494 p += now - vc->preempt_tb; 495 - spin_unlock(&vc->runner->arch.tbacct_lock); 496 } else { 497 p = vc->stolen_tb; 498 } ··· 514 core_stolen = vcore_stolen_time(vc, now); 515 stolen = core_stolen - vcpu->arch.stolen_logged; 516 vcpu->arch.stolen_logged = core_stolen; 517 - spin_lock(&vcpu->arch.tbacct_lock); 518 stolen += vcpu->arch.busy_stolen; 519 vcpu->arch.busy_stolen = 0; 520 - spin_unlock(&vcpu->arch.tbacct_lock); 521 if (!dt || !vpa) 522 return; 523 memset(dt, 0, sizeof(struct dtl_entry)); ··· 591 if (list_empty(&vcpu->kvm->arch.rtas_tokens)) 592 return RESUME_HOST; 593 594 rc = kvmppc_rtas_hcall(vcpu); 595 596 if (rc == -ENOENT) 597 return RESUME_HOST; ··· 1119 1120 if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) 1121 return; 1122 - spin_lock(&vcpu->arch.tbacct_lock); 1123 now = mftb(); 1124 vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - 1125 vcpu->arch.stolen_logged; 1126 vcpu->arch.busy_preempt = now; 1127 vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; 1128 - spin_unlock(&vcpu->arch.tbacct_lock); 1129 --vc->n_runnable; 1130 list_del(&vcpu->arch.run_list); 1131 }
··· 131 static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) 132 { 133 struct kvmppc_vcore *vc = vcpu->arch.vcore; 134 + unsigned long flags; 135 136 + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); 137 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE && 138 vc->preempt_tb != TB_NIL) { 139 vc->stolen_tb += mftb() - vc->preempt_tb; ··· 143 vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; 144 vcpu->arch.busy_preempt = TB_NIL; 145 } 146 + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); 147 } 148 149 static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) 150 { 151 struct kvmppc_vcore *vc = vcpu->arch.vcore; 152 + unsigned long flags; 153 154 + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); 155 if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE) 156 vc->preempt_tb = mftb(); 157 if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) 158 vcpu->arch.busy_preempt = mftb(); 159 + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); 160 } 161 162 static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr) ··· 486 */ 487 if (vc->vcore_state != VCORE_INACTIVE && 488 vc->runner->arch.run_task != current) { 489 + spin_lock_irq(&vc->runner->arch.tbacct_lock); 490 p = vc->stolen_tb; 491 if (vc->preempt_tb != TB_NIL) 492 p += now - vc->preempt_tb; 493 + spin_unlock_irq(&vc->runner->arch.tbacct_lock); 494 } else { 495 p = vc->stolen_tb; 496 } ··· 512 core_stolen = vcore_stolen_time(vc, now); 513 stolen = core_stolen - vcpu->arch.stolen_logged; 514 vcpu->arch.stolen_logged = core_stolen; 515 + spin_lock_irq(&vcpu->arch.tbacct_lock); 516 stolen += vcpu->arch.busy_stolen; 517 vcpu->arch.busy_stolen = 0; 518 + spin_unlock_irq(&vcpu->arch.tbacct_lock); 519 if (!dt || !vpa) 520 return; 521 memset(dt, 0, sizeof(struct dtl_entry)); ··· 589 if (list_empty(&vcpu->kvm->arch.rtas_tokens)) 590 return RESUME_HOST; 591 592 + idx = srcu_read_lock(&vcpu->kvm->srcu); 593 rc = kvmppc_rtas_hcall(vcpu); 594 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 595 596 if (rc == -ENOENT) 597 return RESUME_HOST; ··· 1115 1116 if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) 1117 return; 1118 + spin_lock_irq(&vcpu->arch.tbacct_lock); 1119 now = mftb(); 1120 vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - 1121 vcpu->arch.stolen_logged; 1122 vcpu->arch.busy_preempt = now; 1123 vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; 1124 + spin_unlock_irq(&vcpu->arch.tbacct_lock); 1125 --vc->n_runnable; 1126 list_del(&vcpu->arch.run_list); 1127 }
+7 -2
arch/powerpc/kvm/book3s_hv_rm_mmu.c
··· 225 is_io = pa & (HPTE_R_I | HPTE_R_W); 226 pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK); 227 pa &= PAGE_MASK; 228 } else { 229 /* Translate to host virtual address */ 230 hva = __gfn_to_hva_memslot(memslot, gfn); ··· 239 ptel = hpte_make_readonly(ptel); 240 is_io = hpte_cache_bits(pte_val(pte)); 241 pa = pte_pfn(pte) << PAGE_SHIFT; 242 } 243 } 244 245 if (pte_size < psize) 246 return H_PARAMETER; 247 - if (pa && pte_size > psize) 248 - pa |= gpa & (pte_size - 1); 249 250 ptel &= ~(HPTE_R_PP0 - psize); 251 ptel |= pa; ··· 750 20, /* 1M, unsupported */ 751 }; 752 753 long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v, 754 unsigned long valid) 755 {
··· 225 is_io = pa & (HPTE_R_I | HPTE_R_W); 226 pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK); 227 pa &= PAGE_MASK; 228 + pa |= gpa & ~PAGE_MASK; 229 } else { 230 /* Translate to host virtual address */ 231 hva = __gfn_to_hva_memslot(memslot, gfn); ··· 238 ptel = hpte_make_readonly(ptel); 239 is_io = hpte_cache_bits(pte_val(pte)); 240 pa = pte_pfn(pte) << PAGE_SHIFT; 241 + pa |= hva & (pte_size - 1); 242 + pa |= gpa & ~PAGE_MASK; 243 } 244 } 245 246 if (pte_size < psize) 247 return H_PARAMETER; 248 249 ptel &= ~(HPTE_R_PP0 - psize); 250 ptel |= pa; ··· 749 20, /* 1M, unsupported */ 750 }; 751 752 + /* When called from virtmode, this func should be protected by 753 + * preempt_disable(), otherwise, the holding of HPTE_V_HVLOCK 754 + * can trigger deadlock issue. 755 + */ 756 long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v, 757 unsigned long valid) 758 {
+13 -10
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 153 154 13: b machine_check_fwnmi 155 156 - 157 /* 158 * We come in here when wakened from nap mode on a secondary hw thread. 159 * Relocation is off and most register values are lost. ··· 223 /* Clear our vcpu pointer so we don't come back in early */ 224 li r0, 0 225 std r0, HSTATE_KVM_VCPU(r13) 226 lwsync 227 /* Clear any pending IPI - we're an offline thread */ 228 ld r5, HSTATE_XICS_PHYS(r13) ··· 245 /* increment the nap count and then go to nap mode */ 246 ld r4, HSTATE_KVM_VCORE(r13) 247 addi r4, r4, VCORE_NAP_COUNT 248 - lwsync /* make previous updates visible */ 249 51: lwarx r3, 0, r4 250 addi r3, r3, 1 251 stwcx. r3, 0, r4 ··· 754 * guest CR, R12 saved in shadow VCPU SCRATCH1/0 755 * guest R13 saved in SPRN_SCRATCH0 756 */ 757 - /* abuse host_r2 as third scratch area; we get r2 from PACATOC(r13) */ 758 - std r9, HSTATE_HOST_R2(r13) 759 760 lbz r9, HSTATE_IN_GUEST(r13) 761 cmpwi r9, KVM_GUEST_MODE_HOST_HV 762 beq kvmppc_bad_host_intr 763 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 764 cmpwi r9, KVM_GUEST_MODE_GUEST 765 - ld r9, HSTATE_HOST_R2(r13) 766 beq kvmppc_interrupt_pr 767 #endif 768 /* We're now back in the host but in guest MMU context */ ··· 781 std r6, VCPU_GPR(R6)(r9) 782 std r7, VCPU_GPR(R7)(r9) 783 std r8, VCPU_GPR(R8)(r9) 784 - ld r0, HSTATE_HOST_R2(r13) 785 std r0, VCPU_GPR(R9)(r9) 786 std r10, VCPU_GPR(R10)(r9) 787 std r11, VCPU_GPR(R11)(r9) ··· 992 */ 993 /* Increment the threads-exiting-guest count in the 0xff00 994 bits of vcore->entry_exit_count */ 995 - lwsync 996 ld r5,HSTATE_KVM_VCORE(r13) 997 addi r6,r5,VCORE_ENTRY_EXIT 998 41: lwarx r3,0,r6 999 addi r0,r3,0x100 1000 stwcx. r0,0,r6 1001 bne 41b 1002 - lwsync 1003 1004 /* 1005 * At this point we have an interrupt that we have to pass ··· 1031 sld r0,r0,r4 1032 andc. r3,r3,r0 /* no sense IPI'ing ourselves */ 1033 beq 43f 1034 mulli r4,r4,PACA_SIZE /* get paca for thread 0 */ 1035 subf r6,r4,r13 1036 42: andi. r0,r3,1 ··· 1641 bge kvm_cede_exit 1642 stwcx. r4,0,r6 1643 bne 31b 1644 li r0,1 1645 stb r0,HSTATE_NAPPING(r13) 1646 - /* order napping_threads update vs testing entry_exit_count */ 1647 - lwsync 1648 mr r4,r3 1649 lwz r7,VCORE_ENTRY_EXIT(r5) 1650 cmpwi r7,0x100
··· 153 154 13: b machine_check_fwnmi 155 156 /* 157 * We come in here when wakened from nap mode on a secondary hw thread. 158 * Relocation is off and most register values are lost. ··· 224 /* Clear our vcpu pointer so we don't come back in early */ 225 li r0, 0 226 std r0, HSTATE_KVM_VCPU(r13) 227 + /* 228 + * Make sure we clear HSTATE_KVM_VCPU(r13) before incrementing 229 + * the nap_count, because once the increment to nap_count is 230 + * visible we could be given another vcpu. 231 + */ 232 lwsync 233 /* Clear any pending IPI - we're an offline thread */ 234 ld r5, HSTATE_XICS_PHYS(r13) ··· 241 /* increment the nap count and then go to nap mode */ 242 ld r4, HSTATE_KVM_VCORE(r13) 243 addi r4, r4, VCORE_NAP_COUNT 244 51: lwarx r3, 0, r4 245 addi r3, r3, 1 246 stwcx. r3, 0, r4 ··· 751 * guest CR, R12 saved in shadow VCPU SCRATCH1/0 752 * guest R13 saved in SPRN_SCRATCH0 753 */ 754 + std r9, HSTATE_SCRATCH2(r13) 755 756 lbz r9, HSTATE_IN_GUEST(r13) 757 cmpwi r9, KVM_GUEST_MODE_HOST_HV 758 beq kvmppc_bad_host_intr 759 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 760 cmpwi r9, KVM_GUEST_MODE_GUEST 761 + ld r9, HSTATE_SCRATCH2(r13) 762 beq kvmppc_interrupt_pr 763 #endif 764 /* We're now back in the host but in guest MMU context */ ··· 779 std r6, VCPU_GPR(R6)(r9) 780 std r7, VCPU_GPR(R7)(r9) 781 std r8, VCPU_GPR(R8)(r9) 782 + ld r0, HSTATE_SCRATCH2(r13) 783 std r0, VCPU_GPR(R9)(r9) 784 std r10, VCPU_GPR(R10)(r9) 785 std r11, VCPU_GPR(R11)(r9) ··· 990 */ 991 /* Increment the threads-exiting-guest count in the 0xff00 992 bits of vcore->entry_exit_count */ 993 ld r5,HSTATE_KVM_VCORE(r13) 994 addi r6,r5,VCORE_ENTRY_EXIT 995 41: lwarx r3,0,r6 996 addi r0,r3,0x100 997 stwcx. r0,0,r6 998 bne 41b 999 + isync /* order stwcx. vs. reading napping_threads */ 1000 1001 /* 1002 * At this point we have an interrupt that we have to pass ··· 1030 sld r0,r0,r4 1031 andc. r3,r3,r0 /* no sense IPI'ing ourselves */ 1032 beq 43f 1033 + /* Order entry/exit update vs. IPIs */ 1034 + sync 1035 mulli r4,r4,PACA_SIZE /* get paca for thread 0 */ 1036 subf r6,r4,r13 1037 42: andi. r0,r3,1 ··· 1638 bge kvm_cede_exit 1639 stwcx. r4,0,r6 1640 bne 31b 1641 + /* order napping_threads update vs testing entry_exit_count */ 1642 + isync 1643 li r0,1 1644 stb r0,HSTATE_NAPPING(r13) 1645 mr r4,r3 1646 lwz r7,VCORE_ENTRY_EXIT(r5) 1647 cmpwi r7,0x100
+11 -8
arch/powerpc/kvm/book3s_interrupts.S
··· 129 * R12 = exit handler id 130 * R13 = PACA 131 * SVCPU.* = guest * 132 * 133 */ 134 135 /* Transfer reg values from shadow vcpu back to vcpu struct */ 136 /* On 64-bit, interrupts are still off at this point */ 137 - PPC_LL r3, GPR4(r1) /* vcpu pointer */ 138 GET_SHADOW_VCPU(r4) 139 bl FUNC(kvmppc_copy_from_svcpu) 140 nop 141 142 #ifdef CONFIG_PPC_BOOK3S_64 143 - /* Re-enable interrupts */ 144 - ld r3, HSTATE_HOST_MSR(r13) 145 - ori r3, r3, MSR_EE 146 - MTMSR_EERI(r3) 147 - 148 /* 149 * Reload kernel SPRG3 value. 150 * No need to save guest value as usermode can't modify SPRG3. 151 */ 152 ld r3, PACA_SPRG3(r13) 153 mtspr SPRN_SPRG3, r3 154 - 155 #endif /* CONFIG_PPC_BOOK3S_64 */ 156 157 /* R7 = vcpu */ ··· 180 PPC_STL r31, VCPU_GPR(R31)(r7) 181 182 /* Pass the exit number as 3rd argument to kvmppc_handle_exit */ 183 - mr r5, r12 184 185 /* Restore r3 (kvm_run) and r4 (vcpu) */ 186 REST_2GPRS(3, r1)
··· 129 * R12 = exit handler id 130 * R13 = PACA 131 * SVCPU.* = guest * 132 + * MSR.EE = 1 133 * 134 */ 135 136 + PPC_LL r3, GPR4(r1) /* vcpu pointer */ 137 + 138 + /* 139 + * kvmppc_copy_from_svcpu can clobber volatile registers, save 140 + * the exit handler id to the vcpu and restore it from there later. 141 + */ 142 + stw r12, VCPU_TRAP(r3) 143 + 144 /* Transfer reg values from shadow vcpu back to vcpu struct */ 145 /* On 64-bit, interrupts are still off at this point */ 146 + 147 GET_SHADOW_VCPU(r4) 148 bl FUNC(kvmppc_copy_from_svcpu) 149 nop 150 151 #ifdef CONFIG_PPC_BOOK3S_64 152 /* 153 * Reload kernel SPRG3 value. 154 * No need to save guest value as usermode can't modify SPRG3. 155 */ 156 ld r3, PACA_SPRG3(r13) 157 mtspr SPRN_SPRG3, r3 158 #endif /* CONFIG_PPC_BOOK3S_64 */ 159 160 /* R7 = vcpu */ ··· 177 PPC_STL r31, VCPU_GPR(R31)(r7) 178 179 /* Pass the exit number as 3rd argument to kvmppc_handle_exit */ 180 + lwz r5, VCPU_TRAP(r7) 181 182 /* Restore r3 (kvm_run) and r4 (vcpu) */ 183 REST_2GPRS(3, r1)
+22
arch/powerpc/kvm/book3s_pr.c
··· 66 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 67 memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb)); 68 svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max; 69 svcpu_put(svcpu); 70 #endif 71 vcpu->cpu = smp_processor_id(); ··· 79 { 80 #ifdef CONFIG_PPC_BOOK3S_64 81 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 82 memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb)); 83 to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max; 84 svcpu_put(svcpu); ··· 114 svcpu->ctr = vcpu->arch.ctr; 115 svcpu->lr = vcpu->arch.lr; 116 svcpu->pc = vcpu->arch.pc; 117 } 118 119 /* Copy data touched by real-mode code from shadow vcpu back to vcpu */ 120 void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, 121 struct kvmppc_book3s_shadow_vcpu *svcpu) 122 { 123 vcpu->arch.gpr[0] = svcpu->gpr[0]; 124 vcpu->arch.gpr[1] = svcpu->gpr[1]; 125 vcpu->arch.gpr[2] = svcpu->gpr[2]; ··· 157 vcpu->arch.fault_dar = svcpu->fault_dar; 158 vcpu->arch.fault_dsisr = svcpu->fault_dsisr; 159 vcpu->arch.last_inst = svcpu->last_inst; 160 } 161 162 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
··· 66 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 67 memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb)); 68 svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max; 69 + svcpu->in_use = 0; 70 svcpu_put(svcpu); 71 #endif 72 vcpu->cpu = smp_processor_id(); ··· 78 { 79 #ifdef CONFIG_PPC_BOOK3S_64 80 struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu); 81 + if (svcpu->in_use) { 82 + kvmppc_copy_from_svcpu(vcpu, svcpu); 83 + } 84 memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb)); 85 to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max; 86 svcpu_put(svcpu); ··· 110 svcpu->ctr = vcpu->arch.ctr; 111 svcpu->lr = vcpu->arch.lr; 112 svcpu->pc = vcpu->arch.pc; 113 + svcpu->in_use = true; 114 } 115 116 /* Copy data touched by real-mode code from shadow vcpu back to vcpu */ 117 void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, 118 struct kvmppc_book3s_shadow_vcpu *svcpu) 119 { 120 + /* 121 + * vcpu_put would just call us again because in_use hasn't 122 + * been updated yet. 123 + */ 124 + preempt_disable(); 125 + 126 + /* 127 + * Maybe we were already preempted and synced the svcpu from 128 + * our preempt notifiers. Don't bother touching this svcpu then. 129 + */ 130 + if (!svcpu->in_use) 131 + goto out; 132 + 133 vcpu->arch.gpr[0] = svcpu->gpr[0]; 134 vcpu->arch.gpr[1] = svcpu->gpr[1]; 135 vcpu->arch.gpr[2] = svcpu->gpr[2]; ··· 139 vcpu->arch.fault_dar = svcpu->fault_dar; 140 vcpu->arch.fault_dsisr = svcpu->fault_dsisr; 141 vcpu->arch.last_inst = svcpu->last_inst; 142 + svcpu->in_use = false; 143 + 144 + out: 145 + preempt_enable(); 146 } 147 148 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
+1 -5
arch/powerpc/kvm/book3s_rmhandlers.S
··· 153 154 li r6, MSR_IR | MSR_DR 155 andc r6, r5, r6 /* Clear DR and IR in MSR value */ 156 - #ifdef CONFIG_PPC_BOOK3S_32 157 /* 158 * Set EE in HOST_MSR so that it's enabled when we get into our 159 - * C exit handler function. On 64-bit we delay enabling 160 - * interrupts until we have finished transferring stuff 161 - * to or from the PACA. 162 */ 163 ori r5, r5, MSR_EE 164 - #endif 165 mtsrr0 r7 166 mtsrr1 r6 167 RFI
··· 153 154 li r6, MSR_IR | MSR_DR 155 andc r6, r5, r6 /* Clear DR and IR in MSR value */ 156 /* 157 * Set EE in HOST_MSR so that it's enabled when we get into our 158 + * C exit handler function. 159 */ 160 ori r5, r5, MSR_EE 161 mtsrr0 r7 162 mtsrr1 r6 163 RFI
+6 -6
arch/powerpc/kvm/booke.c
··· 681 int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) 682 { 683 int ret, s; 684 - struct thread_struct thread; 685 #ifdef CONFIG_PPC_FPU 686 struct thread_fp_state fp; 687 int fpexc_mode; ··· 723 #endif 724 725 /* Switch to guest debug context */ 726 - thread.debug = vcpu->arch.shadow_dbg_reg; 727 - switch_booke_debug_regs(&thread); 728 - thread.debug = current->thread.debug; 729 current->thread.debug = vcpu->arch.shadow_dbg_reg; 730 731 kvmppc_fix_ee_before_entry(); ··· 736 We also get here with interrupts enabled. */ 737 738 /* Switch back to user space debug context */ 739 - switch_booke_debug_regs(&thread); 740 - current->thread.debug = thread.debug; 741 742 #ifdef CONFIG_PPC_FPU 743 kvmppc_save_guest_fp(vcpu);
··· 681 int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) 682 { 683 int ret, s; 684 + struct debug_reg debug; 685 #ifdef CONFIG_PPC_FPU 686 struct thread_fp_state fp; 687 int fpexc_mode; ··· 723 #endif 724 725 /* Switch to guest debug context */ 726 + debug = vcpu->arch.shadow_dbg_reg; 727 + switch_booke_debug_regs(&debug); 728 + debug = current->thread.debug; 729 current->thread.debug = vcpu->arch.shadow_dbg_reg; 730 731 kvmppc_fix_ee_before_entry(); ··· 736 We also get here with interrupts enabled. */ 737 738 /* Switch back to user space debug context */ 739 + switch_booke_debug_regs(&debug); 740 + current->thread.debug = debug; 741 742 #ifdef CONFIG_PPC_FPU 743 kvmppc_save_guest_fp(vcpu);
+6 -6
arch/powerpc/platforms/powernv/opal-lpc.c
··· 24 static u8 opal_lpc_inb(unsigned long port) 25 { 26 int64_t rc; 27 - uint32_t data; 28 29 if (opal_lpc_chip_id < 0 || port > 0xffff) 30 return 0xff; 31 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 1); 32 - return rc ? 0xff : data; 33 } 34 35 static __le16 __opal_lpc_inw(unsigned long port) 36 { 37 int64_t rc; 38 - uint32_t data; 39 40 if (opal_lpc_chip_id < 0 || port > 0xfffe) 41 return 0xffff; 42 if (port & 1) 43 return (__le16)opal_lpc_inb(port) << 8 | opal_lpc_inb(port + 1); 44 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 2); 45 - return rc ? 0xffff : data; 46 } 47 static u16 opal_lpc_inw(unsigned long port) 48 { ··· 52 static __le32 __opal_lpc_inl(unsigned long port) 53 { 54 int64_t rc; 55 - uint32_t data; 56 57 if (opal_lpc_chip_id < 0 || port > 0xfffc) 58 return 0xffffffff; ··· 62 (__le32)opal_lpc_inb(port + 2) << 8 | 63 opal_lpc_inb(port + 3); 64 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 4); 65 - return rc ? 0xffffffff : data; 66 } 67 68 static u32 opal_lpc_inl(unsigned long port)
··· 24 static u8 opal_lpc_inb(unsigned long port) 25 { 26 int64_t rc; 27 + __be32 data; 28 29 if (opal_lpc_chip_id < 0 || port > 0xffff) 30 return 0xff; 31 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 1); 32 + return rc ? 0xff : be32_to_cpu(data); 33 } 34 35 static __le16 __opal_lpc_inw(unsigned long port) 36 { 37 int64_t rc; 38 + __be32 data; 39 40 if (opal_lpc_chip_id < 0 || port > 0xfffe) 41 return 0xffff; 42 if (port & 1) 43 return (__le16)opal_lpc_inb(port) << 8 | opal_lpc_inb(port + 1); 44 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 2); 45 + return rc ? 0xffff : be32_to_cpu(data); 46 } 47 static u16 opal_lpc_inw(unsigned long port) 48 { ··· 52 static __le32 __opal_lpc_inl(unsigned long port) 53 { 54 int64_t rc; 55 + __be32 data; 56 57 if (opal_lpc_chip_id < 0 || port > 0xfffc) 58 return 0xffffffff; ··· 62 (__le32)opal_lpc_inb(port + 2) << 8 | 63 opal_lpc_inb(port + 3); 64 rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 4); 65 + return rc ? 0xffffffff : be32_to_cpu(data); 66 } 67 68 static u32 opal_lpc_inl(unsigned long port)
+3 -1
arch/powerpc/platforms/powernv/opal-xscom.c
··· 96 { 97 struct opal_scom_map *m = map; 98 int64_t rc; 99 100 reg = opal_scom_unmangle(reg); 101 - rc = opal_xscom_read(m->chip, m->addr + reg, (uint64_t *)__pa(value)); 102 return opal_xscom_err_xlate(rc); 103 } 104
··· 96 { 97 struct opal_scom_map *m = map; 98 int64_t rc; 99 + __be64 v; 100 101 reg = opal_scom_unmangle(reg); 102 + rc = opal_xscom_read(m->chip, m->addr + reg, (__be64 *)__pa(&v)); 103 + *value = be64_to_cpu(v); 104 return opal_xscom_err_xlate(rc); 105 } 106
+6 -6
arch/powerpc/platforms/pseries/lparcfg.c
··· 157 { 158 struct hvcall_ppp_data ppp_data; 159 struct device_node *root; 160 - const int *perf_level; 161 int rc; 162 163 rc = h_get_ppp(&ppp_data); ··· 201 perf_level = of_get_property(root, 202 "ibm,partition-performance-parameters-level", 203 NULL); 204 - if (perf_level && (*perf_level >= 1)) { 205 seq_printf(m, 206 "physical_procs_allocated_to_virtualization=%d\n", 207 ppp_data.phys_platform_procs); ··· 435 int partition_potential_processors; 436 int partition_active_processors; 437 struct device_node *rtas_node; 438 - const int *lrdrp = NULL; 439 440 rtas_node = of_find_node_by_path("/rtas"); 441 if (rtas_node) ··· 444 if (lrdrp == NULL) { 445 partition_potential_processors = vdso_data->processorCount; 446 } else { 447 - partition_potential_processors = *(lrdrp + 4); 448 } 449 of_node_put(rtas_node); 450 ··· 654 const char *model = ""; 655 const char *system_id = ""; 656 const char *tmp; 657 - const unsigned int *lp_index_ptr; 658 unsigned int lp_index = 0; 659 660 seq_printf(m, "%s %s\n", MODULE_NAME, MODULE_VERS); ··· 670 lp_index_ptr = of_get_property(rootdn, "ibm,partition-no", 671 NULL); 672 if (lp_index_ptr) 673 - lp_index = *lp_index_ptr; 674 of_node_put(rootdn); 675 } 676 seq_printf(m, "serial_number=%s\n", system_id);
··· 157 { 158 struct hvcall_ppp_data ppp_data; 159 struct device_node *root; 160 + const __be32 *perf_level; 161 int rc; 162 163 rc = h_get_ppp(&ppp_data); ··· 201 perf_level = of_get_property(root, 202 "ibm,partition-performance-parameters-level", 203 NULL); 204 + if (perf_level && (be32_to_cpup(perf_level) >= 1)) { 205 seq_printf(m, 206 "physical_procs_allocated_to_virtualization=%d\n", 207 ppp_data.phys_platform_procs); ··· 435 int partition_potential_processors; 436 int partition_active_processors; 437 struct device_node *rtas_node; 438 + const __be32 *lrdrp = NULL; 439 440 rtas_node = of_find_node_by_path("/rtas"); 441 if (rtas_node) ··· 444 if (lrdrp == NULL) { 445 partition_potential_processors = vdso_data->processorCount; 446 } else { 447 + partition_potential_processors = be32_to_cpup(lrdrp + 4); 448 } 449 of_node_put(rtas_node); 450 ··· 654 const char *model = ""; 655 const char *system_id = ""; 656 const char *tmp; 657 + const __be32 *lp_index_ptr; 658 unsigned int lp_index = 0; 659 660 seq_printf(m, "%s %s\n", MODULE_NAME, MODULE_VERS); ··· 670 lp_index_ptr = of_get_property(rootdn, "ibm,partition-no", 671 NULL); 672 if (lp_index_ptr) 673 + lp_index = be32_to_cpup(lp_index_ptr); 674 of_node_put(rootdn); 675 } 676 seq_printf(m, "serial_number=%s\n", system_id);
+15 -13
arch/powerpc/platforms/pseries/msi.c
··· 130 { 131 struct device_node *dn; 132 struct pci_dn *pdn; 133 - const u32 *req_msi; 134 135 pdn = pci_get_pdn(pdev); 136 if (!pdn) ··· 139 140 dn = pdn->node; 141 142 - req_msi = of_get_property(dn, prop_name, NULL); 143 - if (!req_msi) { 144 pr_debug("rtas_msi: No %s on %s\n", prop_name, dn->full_name); 145 return -ENOENT; 146 } 147 148 - if (*req_msi < nvec) { 149 pr_debug("rtas_msi: %s requests < %d MSIs\n", prop_name, nvec); 150 151 - if (*req_msi == 0) /* Be paranoid */ 152 return -ENOSPC; 153 154 - return *req_msi; 155 } 156 157 return 0; ··· 173 static struct device_node *find_pe_total_msi(struct pci_dev *dev, int *total) 174 { 175 struct device_node *dn; 176 - const u32 *p; 177 178 dn = of_node_get(pci_device_to_OF_node(dev)); 179 while (dn) { ··· 181 if (p) { 182 pr_debug("rtas_msi: found prop on dn %s\n", 183 dn->full_name); 184 - *total = *p; 185 return dn; 186 } 187 ··· 234 static void *count_non_bridge_devices(struct device_node *dn, void *data) 235 { 236 struct msi_counts *counts = data; 237 - const u32 *p; 238 u32 class; 239 240 pr_debug("rtas_msi: counting %s\n", dn->full_name); 241 242 p = of_get_property(dn, "class-code", NULL); 243 - class = p ? *p : 0; 244 245 if ((class >> 8) != PCI_CLASS_BRIDGE_PCI) 246 counts->num_devices++; ··· 251 static void *count_spare_msis(struct device_node *dn, void *data) 252 { 253 struct msi_counts *counts = data; 254 - const u32 *p; 255 int req; 256 257 if (dn == counts->requestor) ··· 262 req = 0; 263 p = of_get_property(dn, "ibm,req#msi", NULL); 264 if (p) 265 - req = *p; 266 267 p = of_get_property(dn, "ibm,req#msi-x", NULL); 268 if (p) 269 - req = max(req, (int)*p); 270 } 271 272 if (req < counts->quota)
··· 130 { 131 struct device_node *dn; 132 struct pci_dn *pdn; 133 + const __be32 *p; 134 + u32 req_msi; 135 136 pdn = pci_get_pdn(pdev); 137 if (!pdn) ··· 138 139 dn = pdn->node; 140 141 + p = of_get_property(dn, prop_name, NULL); 142 + if (!p) { 143 pr_debug("rtas_msi: No %s on %s\n", prop_name, dn->full_name); 144 return -ENOENT; 145 } 146 147 + req_msi = be32_to_cpup(p); 148 + if (req_msi < nvec) { 149 pr_debug("rtas_msi: %s requests < %d MSIs\n", prop_name, nvec); 150 151 + if (req_msi == 0) /* Be paranoid */ 152 return -ENOSPC; 153 154 + return req_msi; 155 } 156 157 return 0; ··· 171 static struct device_node *find_pe_total_msi(struct pci_dev *dev, int *total) 172 { 173 struct device_node *dn; 174 + const __be32 *p; 175 176 dn = of_node_get(pci_device_to_OF_node(dev)); 177 while (dn) { ··· 179 if (p) { 180 pr_debug("rtas_msi: found prop on dn %s\n", 181 dn->full_name); 182 + *total = be32_to_cpup(p); 183 return dn; 184 } 185 ··· 232 static void *count_non_bridge_devices(struct device_node *dn, void *data) 233 { 234 struct msi_counts *counts = data; 235 + const __be32 *p; 236 u32 class; 237 238 pr_debug("rtas_msi: counting %s\n", dn->full_name); 239 240 p = of_get_property(dn, "class-code", NULL); 241 + class = p ? be32_to_cpup(p) : 0; 242 243 if ((class >> 8) != PCI_CLASS_BRIDGE_PCI) 244 counts->num_devices++; ··· 249 static void *count_spare_msis(struct device_node *dn, void *data) 250 { 251 struct msi_counts *counts = data; 252 + const __be32 *p; 253 int req; 254 255 if (dn == counts->requestor) ··· 260 req = 0; 261 p = of_get_property(dn, "ibm,req#msi", NULL); 262 if (p) 263 + req = be32_to_cpup(p); 264 265 p = of_get_property(dn, "ibm,req#msi-x", NULL); 266 if (p) 267 + req = max(req, (int)be32_to_cpup(p)); 268 } 269 270 if (req < counts->quota)
+23 -23
arch/powerpc/platforms/pseries/nvram.c
··· 43 static DEFINE_SPINLOCK(nvram_lock); 44 45 struct err_log_info { 46 - int error_type; 47 - unsigned int seq_num; 48 }; 49 50 struct nvram_os_partition { ··· 79 }; 80 81 struct oops_log_info { 82 - u16 version; 83 - u16 report_length; 84 - u64 timestamp; 85 } __attribute__((packed)); 86 87 static void oops_to_nvram(struct kmsg_dumper *dumper, ··· 291 length = part->size; 292 } 293 294 - info.error_type = err_type; 295 - info.seq_num = error_log_cnt; 296 297 tmp_index = part->index; 298 ··· 364 } 365 366 if (part->os_partition) { 367 - *error_log_cnt = info.seq_num; 368 - *err_type = info.error_type; 369 } 370 371 return 0; ··· 529 pr_err("nvram: logging uncompressed oops/panic report\n"); 530 return -1; 531 } 532 - oops_hdr->version = OOPS_HDR_VERSION; 533 - oops_hdr->report_length = (u16) zipped_len; 534 - oops_hdr->timestamp = get_seconds(); 535 return 0; 536 } 537 ··· 574 clobbering_unread_rtas_event()) 575 return -1; 576 577 - oops_hdr->version = OOPS_HDR_VERSION; 578 - oops_hdr->report_length = (u16) size; 579 - oops_hdr->timestamp = get_seconds(); 580 581 if (compressed) 582 err_type = ERR_TYPE_KERNEL_PANIC_GZ; ··· 670 size_t length, hdr_size; 671 672 oops_hdr = (struct oops_log_info *)buff; 673 - if (oops_hdr->version < OOPS_HDR_VERSION) { 674 /* Old format oops header had 2-byte record size */ 675 hdr_size = sizeof(u16); 676 - length = oops_hdr->version; 677 time->tv_sec = 0; 678 time->tv_nsec = 0; 679 } else { 680 hdr_size = sizeof(*oops_hdr); 681 - length = oops_hdr->report_length; 682 - time->tv_sec = oops_hdr->timestamp; 683 time->tv_nsec = 0; 684 } 685 *buf = kmalloc(length, GFP_KERNEL); ··· 889 kmsg_dump_get_buffer(dumper, false, 890 oops_data, oops_data_sz, &text_len); 891 err_type = ERR_TYPE_KERNEL_PANIC; 892 - oops_hdr->version = OOPS_HDR_VERSION; 893 - oops_hdr->report_length = (u16) text_len; 894 - oops_hdr->timestamp = get_seconds(); 895 } 896 897 (void) nvram_write_os_partition(&oops_log_partition, oops_buf, 898 - (int) (sizeof(*oops_hdr) + oops_hdr->report_length), err_type, 899 ++oops_count); 900 901 spin_unlock_irqrestore(&lock, flags);
··· 43 static DEFINE_SPINLOCK(nvram_lock); 44 45 struct err_log_info { 46 + __be32 error_type; 47 + __be32 seq_num; 48 }; 49 50 struct nvram_os_partition { ··· 79 }; 80 81 struct oops_log_info { 82 + __be16 version; 83 + __be16 report_length; 84 + __be64 timestamp; 85 } __attribute__((packed)); 86 87 static void oops_to_nvram(struct kmsg_dumper *dumper, ··· 291 length = part->size; 292 } 293 294 + info.error_type = cpu_to_be32(err_type); 295 + info.seq_num = cpu_to_be32(error_log_cnt); 296 297 tmp_index = part->index; 298 ··· 364 } 365 366 if (part->os_partition) { 367 + *error_log_cnt = be32_to_cpu(info.seq_num); 368 + *err_type = be32_to_cpu(info.error_type); 369 } 370 371 return 0; ··· 529 pr_err("nvram: logging uncompressed oops/panic report\n"); 530 return -1; 531 } 532 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 533 + oops_hdr->report_length = cpu_to_be16(zipped_len); 534 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 535 return 0; 536 } 537 ··· 574 clobbering_unread_rtas_event()) 575 return -1; 576 577 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 578 + oops_hdr->report_length = cpu_to_be16(size); 579 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 580 581 if (compressed) 582 err_type = ERR_TYPE_KERNEL_PANIC_GZ; ··· 670 size_t length, hdr_size; 671 672 oops_hdr = (struct oops_log_info *)buff; 673 + if (be16_to_cpu(oops_hdr->version) < OOPS_HDR_VERSION) { 674 /* Old format oops header had 2-byte record size */ 675 hdr_size = sizeof(u16); 676 + length = be16_to_cpu(oops_hdr->version); 677 time->tv_sec = 0; 678 time->tv_nsec = 0; 679 } else { 680 hdr_size = sizeof(*oops_hdr); 681 + length = be16_to_cpu(oops_hdr->report_length); 682 + time->tv_sec = be64_to_cpu(oops_hdr->timestamp); 683 time->tv_nsec = 0; 684 } 685 *buf = kmalloc(length, GFP_KERNEL); ··· 889 kmsg_dump_get_buffer(dumper, false, 890 oops_data, oops_data_sz, &text_len); 891 err_type = ERR_TYPE_KERNEL_PANIC; 892 + oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION); 893 + oops_hdr->report_length = cpu_to_be16(text_len); 894 + oops_hdr->timestamp = cpu_to_be64(get_seconds()); 895 } 896 897 (void) nvram_write_os_partition(&oops_log_partition, oops_buf, 898 + (int) (sizeof(*oops_hdr) + text_len), err_type, 899 ++oops_count); 900 901 spin_unlock_irqrestore(&lock, flags);
+4 -4
arch/powerpc/platforms/pseries/pci.c
··· 113 { 114 struct device_node *dn, *pdn; 115 struct pci_bus *bus; 116 - const uint32_t *pcie_link_speed_stats; 117 118 bus = bridge->bus; 119 ··· 122 return 0; 123 124 for (pdn = dn; pdn != NULL; pdn = of_get_next_parent(pdn)) { 125 - pcie_link_speed_stats = (const uint32_t *) of_get_property(pdn, 126 "ibm,pcie-link-speed-stats", NULL); 127 if (pcie_link_speed_stats) 128 break; ··· 135 return 0; 136 } 137 138 - switch (pcie_link_speed_stats[0]) { 139 case 0x01: 140 bus->max_bus_speed = PCIE_SPEED_2_5GT; 141 break; ··· 147 break; 148 } 149 150 - switch (pcie_link_speed_stats[1]) { 151 case 0x01: 152 bus->cur_bus_speed = PCIE_SPEED_2_5GT; 153 break;
··· 113 { 114 struct device_node *dn, *pdn; 115 struct pci_bus *bus; 116 + const __be32 *pcie_link_speed_stats; 117 118 bus = bridge->bus; 119 ··· 122 return 0; 123 124 for (pdn = dn; pdn != NULL; pdn = of_get_next_parent(pdn)) { 125 + pcie_link_speed_stats = of_get_property(pdn, 126 "ibm,pcie-link-speed-stats", NULL); 127 if (pcie_link_speed_stats) 128 break; ··· 135 return 0; 136 } 137 138 + switch (be32_to_cpup(pcie_link_speed_stats)) { 139 case 0x01: 140 bus->max_bus_speed = PCIE_SPEED_2_5GT; 141 break; ··· 147 break; 148 } 149 150 + switch (be32_to_cpup(pcie_link_speed_stats)) { 151 case 0x01: 152 bus->cur_bus_speed = PCIE_SPEED_2_5GT; 153 break;
+1 -1
arch/sh/lib/Makefile
··· 6 checksum.o strlen.o div64.o div64-generic.o 7 8 # Extracted from libgcc 9 - lib-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \ 10 ashlsi3.o ashrsi3.o ashiftrt.o lshrsi3.o \ 11 udiv_qrnnd.o 12
··· 6 checksum.o strlen.o div64.o div64-generic.o 7 8 # Extracted from libgcc 9 + obj-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \ 10 ashlsi3.o ashrsi3.o ashiftrt.o lshrsi3.o \ 11 udiv_qrnnd.o 12
+2 -2
arch/sparc/include/asm/pgtable_64.h
··· 619 } 620 621 #define pte_accessible pte_accessible 622 - static inline unsigned long pte_accessible(pte_t a) 623 { 624 return pte_val(a) & _PAGE_VALID; 625 } ··· 847 * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U 848 * and SUN4V pte layout, so this inline test is fine. 849 */ 850 - if (likely(mm != &init_mm) && pte_accessible(orig)) 851 tlb_batch_add(mm, addr, ptep, orig, fullmm); 852 } 853
··· 619 } 620 621 #define pte_accessible pte_accessible 622 + static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a) 623 { 624 return pte_val(a) & _PAGE_VALID; 625 } ··· 847 * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U 848 * and SUN4V pte layout, so this inline test is fine. 849 */ 850 + if (likely(mm != &init_mm) && pte_accessible(mm, orig)) 851 tlb_batch_add(mm, addr, ptep, orig, fullmm); 852 } 853
+1
arch/x86/Kconfig
··· 26 select HAVE_AOUT if X86_32 27 select HAVE_UNSTABLE_SCHED_CLOCK 28 select ARCH_SUPPORTS_NUMA_BALANCING 29 select ARCH_WANTS_PROT_NUMA_PROT_NONE 30 select HAVE_IDE 31 select HAVE_OPROFILE
··· 26 select HAVE_AOUT if X86_32 27 select HAVE_UNSTABLE_SCHED_CLOCK 28 select ARCH_SUPPORTS_NUMA_BALANCING 29 + select ARCH_SUPPORTS_INT128 if X86_64 30 select ARCH_WANTS_PROT_NUMA_PROT_NONE 31 select HAVE_IDE 32 select HAVE_OPROFILE
+9 -2
arch/x86/include/asm/pgtable.h
··· 452 } 453 454 #define pte_accessible pte_accessible 455 - static inline int pte_accessible(pte_t a) 456 { 457 - return pte_flags(a) & _PAGE_PRESENT; 458 } 459 460 static inline int pte_hidden(pte_t pte)
··· 452 } 453 454 #define pte_accessible pte_accessible 455 + static inline bool pte_accessible(struct mm_struct *mm, pte_t a) 456 { 457 + if (pte_flags(a) & _PAGE_PRESENT) 458 + return true; 459 + 460 + if ((pte_flags(a) & (_PAGE_PROTNONE | _PAGE_NUMA)) && 461 + mm_tlb_flush_pending(mm)) 462 + return true; 463 + 464 + return false; 465 } 466 467 static inline int pte_hidden(pte_t pte)
+11
arch/x86/include/asm/preempt.h
··· 8 DECLARE_PER_CPU(int, __preempt_count); 9 10 /* 11 * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users 12 * that think a non-zero value indicates we cannot preempt. 13 */ ··· 80 __this_cpu_add_4(__preempt_count, -val); 81 } 82 83 static __always_inline bool __preempt_count_dec_and_test(void) 84 { 85 GEN_UNARY_RMWcc("decl", __preempt_count, __percpu_arg(0), "e");
··· 8 DECLARE_PER_CPU(int, __preempt_count); 9 10 /* 11 + * We use the PREEMPT_NEED_RESCHED bit as an inverted NEED_RESCHED such 12 + * that a decrement hitting 0 means we can and should reschedule. 13 + */ 14 + #define PREEMPT_ENABLED (0 + PREEMPT_NEED_RESCHED) 15 + 16 + /* 17 * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users 18 * that think a non-zero value indicates we cannot preempt. 19 */ ··· 74 __this_cpu_add_4(__preempt_count, -val); 75 } 76 77 + /* 78 + * Because we keep PREEMPT_NEED_RESCHED set when we do _not_ need to reschedule 79 + * a decrement which hits zero means we have no preempt_count and should 80 + * reschedule. 81 + */ 82 static __always_inline bool __preempt_count_dec_and_test(void) 83 { 84 GEN_UNARY_RMWcc("decl", __preempt_count, __percpu_arg(0), "e");
+12 -3
arch/x86/kernel/cpu/perf_event.h
··· 262 __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \ 263 HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW) 264 265 - #define EVENT_CONSTRAINT_END \ 266 - EVENT_CONSTRAINT(0, 0, 0) 267 268 #define for_each_event_constraint(e, c) \ 269 - for ((e) = (c); (e)->weight; (e)++) 270 271 /* 272 * Extra registers for specific events.
··· 262 __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \ 263 HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW) 264 265 + /* 266 + * We define the end marker as having a weight of -1 267 + * to enable blacklisting of events using a counter bitmask 268 + * of zero and thus a weight of zero. 269 + * The end marker has a weight that cannot possibly be 270 + * obtained from counting the bits in the bitmask. 271 + */ 272 + #define EVENT_CONSTRAINT_END { .weight = -1 } 273 274 + /* 275 + * Check for end marker with weight == -1 276 + */ 277 #define for_each_event_constraint(e, c) \ 278 + for ((e) = (c); (e)->weight != -1; (e)++) 279 280 /* 281 * Extra registers for specific events.
+13
arch/x86/mm/gup.c
··· 83 pte_t pte = gup_get_pte(ptep); 84 struct page *page; 85 86 if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) { 87 pte_unmap(ptep); 88 return 0; ··· 173 if (pmd_none(pmd) || pmd_trans_splitting(pmd)) 174 return 0; 175 if (unlikely(pmd_large(pmd))) { 176 if (!gup_huge_pmd(pmd, addr, next, write, pages, nr)) 177 return 0; 178 } else {
··· 83 pte_t pte = gup_get_pte(ptep); 84 struct page *page; 85 86 + /* Similar to the PMD case, NUMA hinting must take slow path */ 87 + if (pte_numa(pte)) { 88 + pte_unmap(ptep); 89 + return 0; 90 + } 91 + 92 if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) { 93 pte_unmap(ptep); 94 return 0; ··· 167 if (pmd_none(pmd) || pmd_trans_splitting(pmd)) 168 return 0; 169 if (unlikely(pmd_large(pmd))) { 170 + /* 171 + * NUMA hinting faults need to be handled in the GUP 172 + * slowpath for accounting purposes and so that they 173 + * can be serialised against THP migration. 174 + */ 175 + if (pmd_numa(pmd)) 176 + return 0; 177 if (!gup_huge_pmd(pmd, addr, next, write, pages, nr)) 178 return 0; 179 } else {
+1
drivers/acpi/apei/erst.c
··· 942 static struct pstore_info erst_info = { 943 .owner = THIS_MODULE, 944 .name = "erst", 945 .open = erst_open_pstore, 946 .close = erst_close_pstore, 947 .read = erst_reader,
··· 942 static struct pstore_info erst_info = { 943 .owner = THIS_MODULE, 944 .name = "erst", 945 + .flags = PSTORE_FLAGS_FRAGILE, 946 .open = erst_open_pstore, 947 .close = erst_close_pstore, 948 .read = erst_reader,
+3 -3
drivers/clk/clk-s2mps11.c
··· 60 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 61 int ret; 62 63 - ret = regmap_update_bits(s2mps11->iodev->regmap, 64 S2MPS11_REG_RTC_CTRL, 65 s2mps11->mask, s2mps11->mask); 66 if (!ret) ··· 74 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 75 int ret; 76 77 - ret = regmap_update_bits(s2mps11->iodev->regmap, S2MPS11_REG_RTC_CTRL, 78 s2mps11->mask, ~s2mps11->mask); 79 80 if (!ret) ··· 174 s2mps11_clk->hw.init = &s2mps11_clks_init[i]; 175 s2mps11_clk->mask = 1 << i; 176 177 - ret = regmap_read(s2mps11_clk->iodev->regmap, 178 S2MPS11_REG_RTC_CTRL, &val); 179 if (ret < 0) 180 goto err_reg;
··· 60 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 61 int ret; 62 63 + ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, 64 S2MPS11_REG_RTC_CTRL, 65 s2mps11->mask, s2mps11->mask); 66 if (!ret) ··· 74 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 75 int ret; 76 77 + ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, S2MPS11_REG_RTC_CTRL, 78 s2mps11->mask, ~s2mps11->mask); 79 80 if (!ret) ··· 174 s2mps11_clk->hw.init = &s2mps11_clks_init[i]; 175 s2mps11_clk->mask = 1 << i; 176 177 + ret = regmap_read(s2mps11_clk->iodev->regmap_pmic, 178 S2MPS11_REG_RTC_CTRL, &val); 179 if (ret < 0) 180 goto err_reg;
+1
drivers/clocksource/Kconfig
··· 75 config CLKSRC_EFM32 76 bool "Clocksource for Energy Micro's EFM32 SoCs" if !ARCH_EFM32 77 depends on OF && ARM && (ARCH_EFM32 || COMPILE_TEST) 78 default ARCH_EFM32 79 help 80 Support to use the timers of EFM32 SoCs as clock source and clock
··· 75 config CLKSRC_EFM32 76 bool "Clocksource for Energy Micro's EFM32 SoCs" if !ARCH_EFM32 77 depends on OF && ARM && (ARCH_EFM32 || COMPILE_TEST) 78 + select CLKSRC_MMIO 79 default ARCH_EFM32 80 help 81 Support to use the timers of EFM32 SoCs as clock source and clock
-1
drivers/clocksource/clksrc-of.c
··· 35 36 init_func = match->data; 37 init_func(np); 38 - of_node_put(np); 39 } 40 }
··· 35 36 init_func = match->data; 37 init_func(np); 38 } 39 }
+4 -3
drivers/clocksource/dw_apb_timer_of.c
··· 108 109 static u64 read_sched_clock(void) 110 { 111 - return __raw_readl(sched_io_base); 112 } 113 114 static const struct of_device_id sptimer_ids[] __initconst = { 115 { .compatible = "picochip,pc3x2-rtc" }, 116 - { .compatible = "snps,dw-apb-timer-sp" }, 117 { /* Sentinel */ }, 118 }; 119 ··· 150 num_called++; 151 } 152 CLOCKSOURCE_OF_DECLARE(pc3x2_timer, "picochip,pc3x2-timer", dw_apb_timer_init); 153 - CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer-osc", dw_apb_timer_init);
··· 108 109 static u64 read_sched_clock(void) 110 { 111 + return ~__raw_readl(sched_io_base); 112 } 113 114 static const struct of_device_id sptimer_ids[] __initconst = { 115 { .compatible = "picochip,pc3x2-rtc" }, 116 { /* Sentinel */ }, 117 }; 118 ··· 151 num_called++; 152 } 153 CLOCKSOURCE_OF_DECLARE(pc3x2_timer, "picochip,pc3x2-timer", dw_apb_timer_init); 154 + CLOCKSOURCE_OF_DECLARE(apb_timer_osc, "snps,dw-apb-timer-osc", dw_apb_timer_init); 155 + CLOCKSOURCE_OF_DECLARE(apb_timer_sp, "snps,dw-apb-timer-sp", dw_apb_timer_init); 156 + CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer", dw_apb_timer_init);
+3
drivers/clocksource/sun4i_timer.c
··· 179 writel(TIMER_CTL_CLK_SRC(TIMER_CTL_CLK_SRC_OSC24M), 180 timer_base + TIMER_CTL_REG(0)); 181 182 ret = setup_irq(irq, &sun4i_timer_irq); 183 if (ret) 184 pr_warn("failed to setup irq %d\n", irq);
··· 179 writel(TIMER_CTL_CLK_SRC(TIMER_CTL_CLK_SRC_OSC24M), 180 timer_base + TIMER_CTL_REG(0)); 181 182 + /* Make sure timer is stopped before playing with interrupts */ 183 + sun4i_clkevt_time_stop(0); 184 + 185 ret = setup_irq(irq, &sun4i_timer_irq); 186 if (ret) 187 pr_warn("failed to setup irq %d\n", irq);
+5 -5
drivers/clocksource/time-armada-370-xp.c
··· 256 ticks_per_jiffy = (timer_clk + HZ / 2) / HZ; 257 258 /* 259 - * Set scale and timer for sched_clock. 260 - */ 261 - sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk); 262 - 263 - /* 264 * Setup free-running clocksource timer (interrupts 265 * disabled). 266 */ ··· 264 265 timer_ctrl_clrset(0, TIMER0_EN | TIMER0_RELOAD_EN | 266 TIMER0_DIV(TIMER_DIVIDER_SHIFT)); 267 268 clocksource_mmio_init(timer_base + TIMER0_VAL_OFF, 269 "armada_370_xp_clocksource",
··· 256 ticks_per_jiffy = (timer_clk + HZ / 2) / HZ; 257 258 /* 259 * Setup free-running clocksource timer (interrupts 260 * disabled). 261 */ ··· 269 270 timer_ctrl_clrset(0, TIMER0_EN | TIMER0_RELOAD_EN | 271 TIMER0_DIV(TIMER_DIVIDER_SHIFT)); 272 + 273 + /* 274 + * Set scale and timer for sched_clock. 275 + */ 276 + sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk); 277 278 clocksource_mmio_init(timer_base + TIMER0_VAL_OFF, 279 "armada_370_xp_clocksource",
+7
drivers/dma/Kconfig
··· 62 tristate "Intel I/OAT DMA support" 63 depends on PCI && X86 64 select DMA_ENGINE 65 select DCA 66 help 67 Enable support for the Intel(R) I/OAT DMA engine present ··· 113 bool "Marvell XOR engine support" 114 depends on PLAT_ORION 115 select DMA_ENGINE 116 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 117 ---help--- 118 Enable support for the Marvell XOR engine. ··· 189 tristate "AMCC PPC440SPe ADMA support" 190 depends on 440SPe || 440SP 191 select DMA_ENGINE 192 select ARCH_HAS_ASYNC_TX_FIND_CHANNEL 193 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 194 help ··· 355 bool "Network: TCP receive copy offload" 356 depends on DMA_ENGINE && NET 357 default (INTEL_IOATDMA || FSL_DMA) 358 help 359 This enables the use of DMA engines in the network stack to 360 offload receive copy-to-user operations, freeing CPU cycles. ··· 380 help 381 Simple DMA test client. Say N unless you're debugging a 382 DMA Device driver. 383 384 endif
··· 62 tristate "Intel I/OAT DMA support" 63 depends on PCI && X86 64 select DMA_ENGINE 65 + select DMA_ENGINE_RAID 66 select DCA 67 help 68 Enable support for the Intel(R) I/OAT DMA engine present ··· 112 bool "Marvell XOR engine support" 113 depends on PLAT_ORION 114 select DMA_ENGINE 115 + select DMA_ENGINE_RAID 116 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 117 ---help--- 118 Enable support for the Marvell XOR engine. ··· 187 tristate "AMCC PPC440SPe ADMA support" 188 depends on 440SPe || 440SP 189 select DMA_ENGINE 190 + select DMA_ENGINE_RAID 191 select ARCH_HAS_ASYNC_TX_FIND_CHANNEL 192 select ASYNC_TX_ENABLE_CHANNEL_SWITCH 193 help ··· 352 bool "Network: TCP receive copy offload" 353 depends on DMA_ENGINE && NET 354 default (INTEL_IOATDMA || FSL_DMA) 355 + depends on BROKEN 356 help 357 This enables the use of DMA engines in the network stack to 358 offload receive copy-to-user operations, freeing CPU cycles. ··· 376 help 377 Simple DMA test client. Say N unless you're debugging a 378 DMA Device driver. 379 + 380 + config DMA_ENGINE_RAID 381 + bool 382 383 endif
-4
drivers/dma/at_hdmac_regs.h
··· 347 { 348 return &chan->dev->device; 349 } 350 - static struct device *chan2parent(struct dma_chan *chan) 351 - { 352 - return chan->dev->device.parent; 353 - } 354 355 #if defined(VERBOSE_DEBUG) 356 static void vdbg_dump_regs(struct at_dma_chan *atchan)
··· 347 { 348 return &chan->dev->device; 349 } 350 351 #if defined(VERBOSE_DEBUG) 352 static void vdbg_dump_regs(struct at_dma_chan *atchan)
+2 -2
drivers/dma/dmaengine.c
··· 912 #define __UNMAP_POOL(x) { .size = x, .name = "dmaengine-unmap-" __stringify(x) } 913 static struct dmaengine_unmap_pool unmap_pool[] = { 914 __UNMAP_POOL(2), 915 - #if IS_ENABLED(CONFIG_ASYNC_TX_DMA) 916 __UNMAP_POOL(16), 917 __UNMAP_POOL(128), 918 __UNMAP_POOL(256), ··· 1054 dma_cookie_t cookie; 1055 unsigned long flags; 1056 1057 - unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOIO); 1058 if (!unmap) 1059 return -ENOMEM; 1060
··· 912 #define __UNMAP_POOL(x) { .size = x, .name = "dmaengine-unmap-" __stringify(x) } 913 static struct dmaengine_unmap_pool unmap_pool[] = { 914 __UNMAP_POOL(2), 915 + #if IS_ENABLED(CONFIG_DMA_ENGINE_RAID) 916 __UNMAP_POOL(16), 917 __UNMAP_POOL(128), 918 __UNMAP_POOL(256), ··· 1054 dma_cookie_t cookie; 1055 unsigned long flags; 1056 1057 + unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOWAIT); 1058 if (!unmap) 1059 return -ENOMEM; 1060
+4 -4
drivers/dma/dmatest.c
··· 539 540 um->len = params->buf_size; 541 for (i = 0; i < src_cnt; i++) { 542 - unsigned long buf = (unsigned long) thread->srcs[i]; 543 struct page *pg = virt_to_page(buf); 544 - unsigned pg_off = buf & ~PAGE_MASK; 545 546 um->addr[i] = dma_map_page(dev->dev, pg, pg_off, 547 um->len, DMA_TO_DEVICE); ··· 559 /* map with DMA_BIDIRECTIONAL to force writeback/invalidate */ 560 dsts = &um->addr[src_cnt]; 561 for (i = 0; i < dst_cnt; i++) { 562 - unsigned long buf = (unsigned long) thread->dsts[i]; 563 struct page *pg = virt_to_page(buf); 564 - unsigned pg_off = buf & ~PAGE_MASK; 565 566 dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len, 567 DMA_BIDIRECTIONAL);
··· 539 540 um->len = params->buf_size; 541 for (i = 0; i < src_cnt; i++) { 542 + void *buf = thread->srcs[i]; 543 struct page *pg = virt_to_page(buf); 544 + unsigned pg_off = (unsigned long) buf & ~PAGE_MASK; 545 546 um->addr[i] = dma_map_page(dev->dev, pg, pg_off, 547 um->len, DMA_TO_DEVICE); ··· 559 /* map with DMA_BIDIRECTIONAL to force writeback/invalidate */ 560 dsts = &um->addr[src_cnt]; 561 for (i = 0; i < dst_cnt; i++) { 562 + void *buf = thread->dsts[i]; 563 struct page *pg = virt_to_page(buf); 564 + unsigned pg_off = (unsigned long) buf & ~PAGE_MASK; 565 566 dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len, 567 DMA_BIDIRECTIONAL);
+1 -30
drivers/dma/fsldma.c
··· 86 hw->count = CPU_TO_DMA(chan, count, 32); 87 } 88 89 - static u32 get_desc_cnt(struct fsldma_chan *chan, struct fsl_desc_sw *desc) 90 - { 91 - return DMA_TO_CPU(chan, desc->hw.count, 32); 92 - } 93 - 94 static void set_desc_src(struct fsldma_chan *chan, 95 struct fsl_dma_ld_hw *hw, dma_addr_t src) 96 { ··· 96 hw->src_addr = CPU_TO_DMA(chan, snoop_bits | src, 64); 97 } 98 99 - static dma_addr_t get_desc_src(struct fsldma_chan *chan, 100 - struct fsl_desc_sw *desc) 101 - { 102 - u64 snoop_bits; 103 - 104 - snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 105 - ? ((u64)FSL_DMA_SATR_SREADTYPE_SNOOP_READ << 32) : 0; 106 - return DMA_TO_CPU(chan, desc->hw.src_addr, 64) & ~snoop_bits; 107 - } 108 - 109 static void set_desc_dst(struct fsldma_chan *chan, 110 struct fsl_dma_ld_hw *hw, dma_addr_t dst) 111 { ··· 104 snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 105 ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0; 106 hw->dst_addr = CPU_TO_DMA(chan, snoop_bits | dst, 64); 107 - } 108 - 109 - static dma_addr_t get_desc_dst(struct fsldma_chan *chan, 110 - struct fsl_desc_sw *desc) 111 - { 112 - u64 snoop_bits; 113 - 114 - snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 115 - ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0; 116 - return DMA_TO_CPU(chan, desc->hw.dst_addr, 64) & ~snoop_bits; 117 } 118 119 static void set_desc_next(struct fsldma_chan *chan, ··· 383 struct fsl_desc_sw *desc = tx_to_fsl_desc(tx); 384 struct fsl_desc_sw *child; 385 unsigned long flags; 386 - dma_cookie_t cookie; 387 388 spin_lock_irqsave(&chan->desc_lock, flags); 389 ··· 829 struct fsl_desc_sw *desc) 830 { 831 struct dma_async_tx_descriptor *txd = &desc->async_tx; 832 - struct device *dev = chan->common.device->dev; 833 - dma_addr_t src = get_desc_src(chan, desc); 834 - dma_addr_t dst = get_desc_dst(chan, desc); 835 - u32 len = get_desc_cnt(chan, desc); 836 837 /* Run the link descriptor callback function */ 838 if (txd->callback) {
··· 86 hw->count = CPU_TO_DMA(chan, count, 32); 87 } 88 89 static void set_desc_src(struct fsldma_chan *chan, 90 struct fsl_dma_ld_hw *hw, dma_addr_t src) 91 { ··· 101 hw->src_addr = CPU_TO_DMA(chan, snoop_bits | src, 64); 102 } 103 104 static void set_desc_dst(struct fsldma_chan *chan, 105 struct fsl_dma_ld_hw *hw, dma_addr_t dst) 106 { ··· 119 snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX) 120 ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0; 121 hw->dst_addr = CPU_TO_DMA(chan, snoop_bits | dst, 64); 122 } 123 124 static void set_desc_next(struct fsldma_chan *chan, ··· 408 struct fsl_desc_sw *desc = tx_to_fsl_desc(tx); 409 struct fsl_desc_sw *child; 410 unsigned long flags; 411 + dma_cookie_t cookie = -EINVAL; 412 413 spin_lock_irqsave(&chan->desc_lock, flags); 414 ··· 854 struct fsl_desc_sw *desc) 855 { 856 struct dma_async_tx_descriptor *txd = &desc->async_tx; 857 858 /* Run the link descriptor callback function */ 859 if (txd->callback) {
+64 -39
drivers/dma/mv_xor.c
··· 54 hw_desc->desc_command = (1 << 31); 55 } 56 57 - static u32 mv_desc_get_dest_addr(struct mv_xor_desc_slot *desc) 58 - { 59 - struct mv_xor_desc *hw_desc = desc->hw_desc; 60 - return hw_desc->phy_dest_addr; 61 - } 62 - 63 static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc, 64 u32 byte_count) 65 { ··· 781 /* 782 * Perform a transaction to verify the HW works. 783 */ 784 - #define MV_XOR_TEST_SIZE 2000 785 786 static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan) 787 { ··· 790 struct dma_chan *dma_chan; 791 dma_cookie_t cookie; 792 struct dma_async_tx_descriptor *tx; 793 int err = 0; 794 795 - src = kmalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL); 796 if (!src) 797 return -ENOMEM; 798 799 - dest = kzalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL); 800 if (!dest) { 801 kfree(src); 802 return -ENOMEM; 803 } 804 805 /* Fill in src buffer */ 806 - for (i = 0; i < MV_XOR_TEST_SIZE; i++) 807 ((u8 *) src)[i] = (u8)i; 808 809 dma_chan = &mv_chan->dmachan; ··· 813 goto out; 814 } 815 816 - dest_dma = dma_map_single(dma_chan->device->dev, dest, 817 - MV_XOR_TEST_SIZE, DMA_FROM_DEVICE); 818 819 - src_dma = dma_map_single(dma_chan->device->dev, src, 820 - MV_XOR_TEST_SIZE, DMA_TO_DEVICE); 821 822 tx = mv_xor_prep_dma_memcpy(dma_chan, dest_dma, src_dma, 823 - MV_XOR_TEST_SIZE, 0); 824 cookie = mv_xor_tx_submit(tx); 825 mv_xor_issue_pending(dma_chan); 826 async_tx_ack(tx); ··· 847 } 848 849 dma_sync_single_for_cpu(dma_chan->device->dev, dest_dma, 850 - MV_XOR_TEST_SIZE, DMA_FROM_DEVICE); 851 - if (memcmp(src, dest, MV_XOR_TEST_SIZE)) { 852 dev_err(dma_chan->device->dev, 853 "Self-test copy failed compare, disabling\n"); 854 err = -ENODEV; ··· 856 } 857 858 free_resources: 859 mv_xor_free_chan_resources(dma_chan); 860 out: 861 kfree(src); ··· 874 dma_addr_t dma_srcs[MV_XOR_NUM_SRC_TEST]; 875 dma_addr_t dest_dma; 876 struct dma_async_tx_descriptor *tx; 877 struct dma_chan *dma_chan; 878 dma_cookie_t cookie; 879 u8 cmp_byte = 0; 880 u32 cmp_word; 881 int err = 0; 882 883 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { 884 xor_srcs[src_idx] = alloc_page(GFP_KERNEL); 885 if (!xor_srcs[src_idx]) { 886 while (src_idx--) ··· 899 } 900 901 /* Fill in src buffers */ 902 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { 903 u8 *ptr = page_address(xor_srcs[src_idx]); 904 for (i = 0; i < PAGE_SIZE; i++) 905 ptr[i] = (1 << src_idx); 906 } 907 908 - for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) 909 cmp_byte ^= (u8) (1 << src_idx); 910 911 cmp_word = (cmp_byte << 24) | (cmp_byte << 16) | ··· 919 goto out; 920 } 921 922 - /* test xor */ 923 - dest_dma = dma_map_page(dma_chan->device->dev, dest, 0, PAGE_SIZE, 924 - DMA_FROM_DEVICE); 925 926 - for (i = 0; i < MV_XOR_NUM_SRC_TEST; i++) 927 - dma_srcs[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i], 928 - 0, PAGE_SIZE, DMA_TO_DEVICE); 929 930 tx = mv_xor_prep_dma_xor(dma_chan, dest_dma, dma_srcs, 931 - MV_XOR_NUM_SRC_TEST, PAGE_SIZE, 0); 932 933 cookie = mv_xor_tx_submit(tx); 934 mv_xor_issue_pending(dma_chan); ··· 970 } 971 972 free_resources: 973 mv_xor_free_chan_resources(dma_chan); 974 out: 975 - src_idx = MV_XOR_NUM_SRC_TEST; 976 while (src_idx--) 977 __free_page(xor_srcs[src_idx]); 978 __free_page(dest); ··· 1199 int i = 0; 1200 1201 for_each_child_of_node(pdev->dev.of_node, np) { 1202 dma_cap_mask_t cap_mask; 1203 int irq; 1204 ··· 1217 goto err_channel_add; 1218 } 1219 1220 - xordev->channels[i] = 1221 - mv_xor_channel_add(xordev, pdev, i, 1222 - cap_mask, irq); 1223 - if (IS_ERR(xordev->channels[i])) { 1224 - ret = PTR_ERR(xordev->channels[i]); 1225 - xordev->channels[i] = NULL; 1226 irq_dispose_mapping(irq); 1227 goto err_channel_add; 1228 } 1229 1230 i++; 1231 } 1232 } else if (pdata && pdata->channels) { 1233 for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) { 1234 struct mv_xor_channel_data *cd; 1235 int irq; 1236 1237 cd = &pdata->channels[i]; ··· 1246 goto err_channel_add; 1247 } 1248 1249 - xordev->channels[i] = 1250 - mv_xor_channel_add(xordev, pdev, i, 1251 - cd->cap_mask, irq); 1252 - if (IS_ERR(xordev->channels[i])) { 1253 - ret = PTR_ERR(xordev->channels[i]); 1254 goto err_channel_add; 1255 } 1256 } 1257 } 1258
··· 54 hw_desc->desc_command = (1 << 31); 55 } 56 57 static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc, 58 u32 byte_count) 59 { ··· 787 /* 788 * Perform a transaction to verify the HW works. 789 */ 790 791 static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan) 792 { ··· 797 struct dma_chan *dma_chan; 798 dma_cookie_t cookie; 799 struct dma_async_tx_descriptor *tx; 800 + struct dmaengine_unmap_data *unmap; 801 int err = 0; 802 803 + src = kmalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL); 804 if (!src) 805 return -ENOMEM; 806 807 + dest = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL); 808 if (!dest) { 809 kfree(src); 810 return -ENOMEM; 811 } 812 813 /* Fill in src buffer */ 814 + for (i = 0; i < PAGE_SIZE; i++) 815 ((u8 *) src)[i] = (u8)i; 816 817 dma_chan = &mv_chan->dmachan; ··· 819 goto out; 820 } 821 822 + unmap = dmaengine_get_unmap_data(dma_chan->device->dev, 2, GFP_KERNEL); 823 + if (!unmap) { 824 + err = -ENOMEM; 825 + goto free_resources; 826 + } 827 828 + src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 0, 829 + PAGE_SIZE, DMA_TO_DEVICE); 830 + unmap->to_cnt = 1; 831 + unmap->addr[0] = src_dma; 832 + 833 + dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 0, 834 + PAGE_SIZE, DMA_FROM_DEVICE); 835 + unmap->from_cnt = 1; 836 + unmap->addr[1] = dest_dma; 837 + 838 + unmap->len = PAGE_SIZE; 839 840 tx = mv_xor_prep_dma_memcpy(dma_chan, dest_dma, src_dma, 841 + PAGE_SIZE, 0); 842 cookie = mv_xor_tx_submit(tx); 843 mv_xor_issue_pending(dma_chan); 844 async_tx_ack(tx); ··· 841 } 842 843 dma_sync_single_for_cpu(dma_chan->device->dev, dest_dma, 844 + PAGE_SIZE, DMA_FROM_DEVICE); 845 + if (memcmp(src, dest, PAGE_SIZE)) { 846 dev_err(dma_chan->device->dev, 847 "Self-test copy failed compare, disabling\n"); 848 err = -ENODEV; ··· 850 } 851 852 free_resources: 853 + dmaengine_unmap_put(unmap); 854 mv_xor_free_chan_resources(dma_chan); 855 out: 856 kfree(src); ··· 867 dma_addr_t dma_srcs[MV_XOR_NUM_SRC_TEST]; 868 dma_addr_t dest_dma; 869 struct dma_async_tx_descriptor *tx; 870 + struct dmaengine_unmap_data *unmap; 871 struct dma_chan *dma_chan; 872 dma_cookie_t cookie; 873 u8 cmp_byte = 0; 874 u32 cmp_word; 875 int err = 0; 876 + int src_count = MV_XOR_NUM_SRC_TEST; 877 878 + for (src_idx = 0; src_idx < src_count; src_idx++) { 879 xor_srcs[src_idx] = alloc_page(GFP_KERNEL); 880 if (!xor_srcs[src_idx]) { 881 while (src_idx--) ··· 890 } 891 892 /* Fill in src buffers */ 893 + for (src_idx = 0; src_idx < src_count; src_idx++) { 894 u8 *ptr = page_address(xor_srcs[src_idx]); 895 for (i = 0; i < PAGE_SIZE; i++) 896 ptr[i] = (1 << src_idx); 897 } 898 899 + for (src_idx = 0; src_idx < src_count; src_idx++) 900 cmp_byte ^= (u8) (1 << src_idx); 901 902 cmp_word = (cmp_byte << 24) | (cmp_byte << 16) | ··· 910 goto out; 911 } 912 913 + unmap = dmaengine_get_unmap_data(dma_chan->device->dev, src_count + 1, 914 + GFP_KERNEL); 915 + if (!unmap) { 916 + err = -ENOMEM; 917 + goto free_resources; 918 + } 919 920 + /* test xor */ 921 + for (i = 0; i < src_count; i++) { 922 + unmap->addr[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i], 923 + 0, PAGE_SIZE, DMA_TO_DEVICE); 924 + dma_srcs[i] = unmap->addr[i]; 925 + unmap->to_cnt++; 926 + } 927 + 928 + unmap->addr[src_count] = dma_map_page(dma_chan->device->dev, dest, 0, PAGE_SIZE, 929 + DMA_FROM_DEVICE); 930 + dest_dma = unmap->addr[src_count]; 931 + unmap->from_cnt = 1; 932 + unmap->len = PAGE_SIZE; 933 934 tx = mv_xor_prep_dma_xor(dma_chan, dest_dma, dma_srcs, 935 + src_count, PAGE_SIZE, 0); 936 937 cookie = mv_xor_tx_submit(tx); 938 mv_xor_issue_pending(dma_chan); ··· 948 } 949 950 free_resources: 951 + dmaengine_unmap_put(unmap); 952 mv_xor_free_chan_resources(dma_chan); 953 out: 954 + src_idx = src_count; 955 while (src_idx--) 956 __free_page(xor_srcs[src_idx]); 957 __free_page(dest); ··· 1176 int i = 0; 1177 1178 for_each_child_of_node(pdev->dev.of_node, np) { 1179 + struct mv_xor_chan *chan; 1180 dma_cap_mask_t cap_mask; 1181 int irq; 1182 ··· 1193 goto err_channel_add; 1194 } 1195 1196 + chan = mv_xor_channel_add(xordev, pdev, i, 1197 + cap_mask, irq); 1198 + if (IS_ERR(chan)) { 1199 + ret = PTR_ERR(chan); 1200 irq_dispose_mapping(irq); 1201 goto err_channel_add; 1202 } 1203 1204 + xordev->channels[i] = chan; 1205 i++; 1206 } 1207 } else if (pdata && pdata->channels) { 1208 for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) { 1209 struct mv_xor_channel_data *cd; 1210 + struct mv_xor_chan *chan; 1211 int irq; 1212 1213 cd = &pdata->channels[i]; ··· 1222 goto err_channel_add; 1223 } 1224 1225 + chan = mv_xor_channel_add(xordev, pdev, i, 1226 + cd->cap_mask, irq); 1227 + if (IS_ERR(chan)) { 1228 + ret = PTR_ERR(chan); 1229 goto err_channel_add; 1230 } 1231 + 1232 + xordev->channels[i] = chan; 1233 } 1234 } 1235
+1 -4
drivers/dma/pl330.c
··· 2492 2493 static inline void _init_desc(struct dma_pl330_desc *desc) 2494 { 2495 - desc->pchan = NULL; 2496 desc->req.x = &desc->px; 2497 desc->req.token = desc; 2498 desc->rqcfg.swap = SWAP_NO; 2499 - desc->rqcfg.privileged = 0; 2500 - desc->rqcfg.insnaccess = 0; 2501 desc->rqcfg.scctl = SCCTRL0; 2502 desc->rqcfg.dcctl = DCCTRL0; 2503 desc->req.cfg = &desc->rqcfg; ··· 2514 if (!pdmac) 2515 return 0; 2516 2517 - desc = kmalloc(count * sizeof(*desc), flg); 2518 if (!desc) 2519 return 0; 2520
··· 2492 2493 static inline void _init_desc(struct dma_pl330_desc *desc) 2494 { 2495 desc->req.x = &desc->px; 2496 desc->req.token = desc; 2497 desc->rqcfg.swap = SWAP_NO; 2498 desc->rqcfg.scctl = SCCTRL0; 2499 desc->rqcfg.dcctl = DCCTRL0; 2500 desc->req.cfg = &desc->rqcfg; ··· 2517 if (!pdmac) 2518 return 0; 2519 2520 + desc = kcalloc(count, sizeof(*desc), flg); 2521 if (!desc) 2522 return 0; 2523
+1 -26
drivers/dma/ppc4xx/adma.c
··· 533 } 534 535 /** 536 - * ppc440spe_desc_init_memset - initialize the descriptor for MEMSET operation 537 - */ 538 - static void ppc440spe_desc_init_memset(struct ppc440spe_adma_desc_slot *desc, 539 - int value, unsigned long flags) 540 - { 541 - struct dma_cdb *hw_desc = desc->hw_desc; 542 - 543 - memset(desc->hw_desc, 0, sizeof(struct dma_cdb)); 544 - desc->hw_next = NULL; 545 - desc->src_cnt = 1; 546 - desc->dst_cnt = 1; 547 - 548 - if (flags & DMA_PREP_INTERRUPT) 549 - set_bit(PPC440SPE_DESC_INT, &desc->flags); 550 - else 551 - clear_bit(PPC440SPE_DESC_INT, &desc->flags); 552 - 553 - hw_desc->sg1u = hw_desc->sg1l = cpu_to_le32((u32)value); 554 - hw_desc->sg3u = hw_desc->sg3l = cpu_to_le32((u32)value); 555 - hw_desc->opc = DMA_CDB_OPC_DFILL128; 556 - } 557 - 558 - /** 559 * ppc440spe_desc_set_src_addr - set source address into the descriptor 560 */ 561 static void ppc440spe_desc_set_src_addr(struct ppc440spe_adma_desc_slot *desc, ··· 1481 struct ppc440spe_adma_chan *chan, 1482 dma_cookie_t cookie) 1483 { 1484 - int i; 1485 - 1486 BUG_ON(desc->async_tx.cookie < 0); 1487 if (desc->async_tx.cookie > 0) { 1488 cookie = desc->async_tx.cookie; ··· 3873 ppc440spe_adma_prep_dma_interrupt; 3874 } 3875 pr_info("%s: AMCC(R) PPC440SP(E) ADMA Engine: " 3876 - "( %s%s%s%s%s%s%s)\n", 3877 dev_name(adev->dev), 3878 dma_has_cap(DMA_PQ, adev->common.cap_mask) ? "pq " : "", 3879 dma_has_cap(DMA_PQ_VAL, adev->common.cap_mask) ? "pq_val " : "",
··· 533 } 534 535 /** 536 * ppc440spe_desc_set_src_addr - set source address into the descriptor 537 */ 538 static void ppc440spe_desc_set_src_addr(struct ppc440spe_adma_desc_slot *desc, ··· 1504 struct ppc440spe_adma_chan *chan, 1505 dma_cookie_t cookie) 1506 { 1507 BUG_ON(desc->async_tx.cookie < 0); 1508 if (desc->async_tx.cookie > 0) { 1509 cookie = desc->async_tx.cookie; ··· 3898 ppc440spe_adma_prep_dma_interrupt; 3899 } 3900 pr_info("%s: AMCC(R) PPC440SP(E) ADMA Engine: " 3901 + "( %s%s%s%s%s%s)\n", 3902 dev_name(adev->dev), 3903 dma_has_cap(DMA_PQ, adev->common.cap_mask) ? "pq " : "", 3904 dma_has_cap(DMA_PQ_VAL, adev->common.cap_mask) ? "pq_val " : "",
-1
drivers/dma/txx9dmac.c
··· 406 dma_async_tx_callback callback; 407 void *param; 408 struct dma_async_tx_descriptor *txd = &desc->txd; 409 - struct txx9dmac_slave *ds = dc->chan.private; 410 411 dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n", 412 txd->cookie, desc);
··· 406 dma_async_tx_callback callback; 407 void *param; 408 struct dma_async_tx_descriptor *txd = &desc->txd; 409 410 dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n", 411 txd->cookie, desc);
-1
drivers/firewire/sbp2.c
··· 1623 .cmd_per_lun = 1, 1624 .can_queue = 1, 1625 .sdev_attrs = sbp2_scsi_sysfs_attrs, 1626 - .no_write_same = 1, 1627 }; 1628 1629 MODULE_AUTHOR("Kristian Hoegsberg <krh@bitplanet.net>");
··· 1623 .cmd_per_lun = 1, 1624 .can_queue = 1, 1625 .sdev_attrs = sbp2_scsi_sysfs_attrs, 1626 }; 1627 1628 MODULE_AUTHOR("Kristian Hoegsberg <krh@bitplanet.net>");
+1
drivers/firmware/efi/efi-pstore.c
··· 356 static struct pstore_info efi_pstore_info = { 357 .owner = THIS_MODULE, 358 .name = "efi", 359 .open = efi_pstore_open, 360 .close = efi_pstore_close, 361 .read = efi_pstore_read,
··· 356 static struct pstore_info efi_pstore_info = { 357 .owner = THIS_MODULE, 358 .name = "efi", 359 + .flags = PSTORE_FLAGS_FRAGILE, 360 .open = efi_pstore_open, 361 .close = efi_pstore_close, 362 .read = efi_pstore_read,
+2 -2
drivers/gpio/gpio-msm-v2.c
··· 252 253 spin_lock_irqsave(&tlmm_lock, irq_flags); 254 writel(TARGET_PROC_NONE, GPIO_INTR_CFG_SU(gpio)); 255 - clear_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio)); 256 __clear_bit(gpio, msm_gpio.enabled_irqs); 257 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 258 } ··· 264 265 spin_lock_irqsave(&tlmm_lock, irq_flags); 266 __set_bit(gpio, msm_gpio.enabled_irqs); 267 - set_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio)); 268 writel(TARGET_PROC_SCORPION, GPIO_INTR_CFG_SU(gpio)); 269 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 270 }
··· 252 253 spin_lock_irqsave(&tlmm_lock, irq_flags); 254 writel(TARGET_PROC_NONE, GPIO_INTR_CFG_SU(gpio)); 255 + clear_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio)); 256 __clear_bit(gpio, msm_gpio.enabled_irqs); 257 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 258 } ··· 264 265 spin_lock_irqsave(&tlmm_lock, irq_flags); 266 __set_bit(gpio, msm_gpio.enabled_irqs); 267 + set_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio)); 268 writel(TARGET_PROC_SCORPION, GPIO_INTR_CFG_SU(gpio)); 269 spin_unlock_irqrestore(&tlmm_lock, irq_flags); 270 }
+2 -1
drivers/gpio/gpio-rcar.c
··· 169 u32 pending; 170 unsigned int offset, irqs_handled = 0; 171 172 - while ((pending = gpio_rcar_read(p, INTDT))) { 173 offset = __ffs(pending); 174 gpio_rcar_write(p, INTCLR, BIT(offset)); 175 generic_handle_irq(irq_find_mapping(p->irq_domain, offset));
··· 169 u32 pending; 170 unsigned int offset, irqs_handled = 0; 171 172 + while ((pending = gpio_rcar_read(p, INTDT) & 173 + gpio_rcar_read(p, INTMSK))) { 174 offset = __ffs(pending); 175 gpio_rcar_write(p, INTCLR, BIT(offset)); 176 generic_handle_irq(irq_find_mapping(p->irq_domain, offset));
+12 -3
drivers/gpio/gpio-twl4030.c
··· 300 if (offset < TWL4030_GPIO_MAX) 301 ret = twl4030_set_gpio_direction(offset, 1); 302 else 303 - ret = -EINVAL; 304 305 if (!ret) 306 priv->direction &= ~BIT(offset); ··· 354 static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value) 355 { 356 struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip); 357 - int ret = -EINVAL; 358 359 mutex_lock(&priv->mutex); 360 - if (offset < TWL4030_GPIO_MAX) 361 ret = twl4030_set_gpio_direction(offset, 0); 362 363 priv->direction |= BIT(offset); 364 mutex_unlock(&priv->mutex);
··· 300 if (offset < TWL4030_GPIO_MAX) 301 ret = twl4030_set_gpio_direction(offset, 1); 302 else 303 + ret = -EINVAL; /* LED outputs can't be set as input */ 304 305 if (!ret) 306 priv->direction &= ~BIT(offset); ··· 354 static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value) 355 { 356 struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip); 357 + int ret = 0; 358 359 mutex_lock(&priv->mutex); 360 + if (offset < TWL4030_GPIO_MAX) { 361 ret = twl4030_set_gpio_direction(offset, 0); 362 + if (ret) { 363 + mutex_unlock(&priv->mutex); 364 + return ret; 365 + } 366 + } 367 + 368 + /* 369 + * LED gpios i.e. offset >= TWL4030_GPIO_MAX are always output 370 + */ 371 372 priv->direction |= BIT(offset); 373 mutex_unlock(&priv->mutex);
+1
drivers/gpu/drm/armada/armada_drm.h
··· 103 extern const struct drm_mode_config_funcs armada_drm_mode_config_funcs; 104 105 int armada_fbdev_init(struct drm_device *); 106 void armada_fbdev_fini(struct drm_device *); 107 108 int armada_overlay_plane_create(struct drm_device *, unsigned long);
··· 103 extern const struct drm_mode_config_funcs armada_drm_mode_config_funcs; 104 105 int armada_fbdev_init(struct drm_device *); 106 + void armada_fbdev_lastclose(struct drm_device *); 107 void armada_fbdev_fini(struct drm_device *); 108 109 int armada_overlay_plane_create(struct drm_device *, unsigned long);
+6 -1
drivers/gpu/drm/armada/armada_drv.c
··· 321 DRM_UNLOCKED), 322 }; 323 324 static const struct file_operations armada_drm_fops = { 325 .owner = THIS_MODULE, 326 .llseek = no_llseek, ··· 342 .open = NULL, 343 .preclose = NULL, 344 .postclose = NULL, 345 - .lastclose = NULL, 346 .unload = armada_drm_unload, 347 .get_vblank_counter = drm_vblank_count, 348 .enable_vblank = armada_drm_enable_vblank,
··· 321 DRM_UNLOCKED), 322 }; 323 324 + static void armada_drm_lastclose(struct drm_device *dev) 325 + { 326 + armada_fbdev_lastclose(dev); 327 + } 328 + 329 static const struct file_operations armada_drm_fops = { 330 .owner = THIS_MODULE, 331 .llseek = no_llseek, ··· 337 .open = NULL, 338 .preclose = NULL, 339 .postclose = NULL, 340 + .lastclose = armada_drm_lastclose, 341 .unload = armada_drm_unload, 342 .get_vblank_counter = drm_vblank_count, 343 .enable_vblank = armada_drm_enable_vblank,
+15 -5
drivers/gpu/drm/armada/armada_fbdev.c
··· 105 drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth); 106 drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height); 107 108 - DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08x\n", 109 - dfb->fb.width, dfb->fb.height, 110 - dfb->fb.bits_per_pixel, obj->phys_addr); 111 112 return 0; 113 ··· 177 return ret; 178 } 179 180 void armada_fbdev_fini(struct drm_device *dev) 181 { 182 struct armada_private *priv = dev->dev_private; ··· 202 framebuffer_release(info); 203 } 204 205 if (fbh->fb) 206 fbh->fb->funcs->destroy(fbh->fb); 207 - 208 - drm_fb_helper_fini(fbh); 209 210 priv->fbdev = NULL; 211 }
··· 105 drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth); 106 drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height); 107 108 + DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n", 109 + dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel, 110 + (unsigned long long)obj->phys_addr); 111 112 return 0; 113 ··· 177 return ret; 178 } 179 180 + void armada_fbdev_lastclose(struct drm_device *dev) 181 + { 182 + struct armada_private *priv = dev->dev_private; 183 + 184 + drm_modeset_lock_all(dev); 185 + if (priv->fbdev) 186 + drm_fb_helper_restore_fbdev_mode(priv->fbdev); 187 + drm_modeset_unlock_all(dev); 188 + } 189 + 190 void armada_fbdev_fini(struct drm_device *dev) 191 { 192 struct armada_private *priv = dev->dev_private; ··· 192 framebuffer_release(info); 193 } 194 195 + drm_fb_helper_fini(fbh); 196 + 197 if (fbh->fb) 198 fbh->fb->funcs->destroy(fbh->fb); 199 200 priv->fbdev = NULL; 201 }
+4 -3
drivers/gpu/drm/armada/armada_gem.c
··· 172 obj->dev_addr = obj->linear->start; 173 } 174 175 - DRM_DEBUG_DRIVER("obj %p phys %#x dev %#x\n", 176 - obj, obj->phys_addr, obj->dev_addr); 177 178 return 0; 179 } ··· 558 * refcount on the gem object itself. 559 */ 560 drm_gem_object_reference(obj); 561 - dma_buf_put(buf); 562 return obj; 563 } 564 } ··· 573 } 574 575 dobj->obj.import_attach = attach; 576 577 /* 578 * Don't call dma_buf_map_attachment() here - it maps the
··· 172 obj->dev_addr = obj->linear->start; 173 } 174 175 + DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj, 176 + (unsigned long long)obj->phys_addr, 177 + (unsigned long long)obj->dev_addr); 178 179 return 0; 180 } ··· 557 * refcount on the gem object itself. 558 */ 559 drm_gem_object_reference(obj); 560 return obj; 561 } 562 } ··· 573 } 574 575 dobj->obj.import_attach = attach; 576 + get_dma_buf(buf); 577 578 /* 579 * Don't call dma_buf_map_attachment() here - it maps the
+8
drivers/gpu/drm/drm_edid.c
··· 68 #define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6) 69 /* Force reduced-blanking timings for detailed modes */ 70 #define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7) 71 72 struct detailed_mode_closure { 73 struct drm_connector *connector; ··· 130 131 /* Medion MD 30217 PG */ 132 { "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 }, 133 }; 134 135 /* ··· 3439 edid_fixup_preferred(connector, quirks); 3440 3441 drm_add_display_info(edid, &connector->display_info); 3442 3443 return num_modes; 3444 }
··· 68 #define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6) 69 /* Force reduced-blanking timings for detailed modes */ 70 #define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7) 71 + /* Force 8bpc */ 72 + #define EDID_QUIRK_FORCE_8BPC (1 << 8) 73 74 struct detailed_mode_closure { 75 struct drm_connector *connector; ··· 128 129 /* Medion MD 30217 PG */ 130 { "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 }, 131 + 132 + /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */ 133 + { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC }, 134 }; 135 136 /* ··· 3434 edid_fixup_preferred(connector, quirks); 3435 3436 drm_add_display_info(edid, &connector->display_info); 3437 + 3438 + if (quirks & EDID_QUIRK_FORCE_8BPC) 3439 + connector->display_info.bpc = 8; 3440 3441 return num_modes; 3442 }
+3 -3
drivers/gpu/drm/drm_stub.c
··· 566 if (dev->driver->unload) 567 dev->driver->unload(dev); 568 err_primary_node: 569 - drm_put_minor(dev->primary); 570 err_render_node: 571 - drm_put_minor(dev->render); 572 err_control_node: 573 - drm_put_minor(dev->control); 574 err_agp: 575 if (dev->driver->bus->agp_destroy) 576 dev->driver->bus->agp_destroy(dev);
··· 566 if (dev->driver->unload) 567 dev->driver->unload(dev); 568 err_primary_node: 569 + drm_unplug_minor(dev->primary); 570 err_render_node: 571 + drm_unplug_minor(dev->render); 572 err_control_node: 573 + drm_unplug_minor(dev->control); 574 err_agp: 575 if (dev->driver->bus->agp_destroy) 576 dev->driver->bus->agp_destroy(dev);
+11 -9
drivers/gpu/drm/i915/i915_dma.c
··· 83 drm_i915_private_t *dev_priv = dev->dev_private; 84 struct drm_i915_master_private *master_priv; 85 86 if (dev->primary->master) { 87 master_priv = dev->primary->master->driver_priv; 88 if (master_priv->sarea_priv) ··· 1498 spin_lock_init(&dev_priv->uncore.lock); 1499 spin_lock_init(&dev_priv->mm.object_stat_lock); 1500 mutex_init(&dev_priv->dpio_lock); 1501 - mutex_init(&dev_priv->rps.hw_lock); 1502 mutex_init(&dev_priv->modeset_restore_lock); 1503 1504 - mutex_init(&dev_priv->pc8.lock); 1505 - dev_priv->pc8.requirements_met = false; 1506 - dev_priv->pc8.gpu_idle = false; 1507 - dev_priv->pc8.irqs_disabled = false; 1508 - dev_priv->pc8.enabled = false; 1509 - dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */ 1510 - INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work); 1511 1512 intel_display_crc_init(dev); 1513 ··· 1604 } 1605 1606 intel_irq_init(dev); 1607 - intel_pm_init(dev); 1608 intel_uncore_sanitize(dev); 1609 1610 /* Try to make sure MCHBAR is enabled before poking at it */ ··· 1848 1849 void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) 1850 { 1851 i915_gem_context_close(dev, file_priv); 1852 i915_gem_release(dev, file_priv); 1853 } 1854 1855 void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
··· 83 drm_i915_private_t *dev_priv = dev->dev_private; 84 struct drm_i915_master_private *master_priv; 85 86 + /* 87 + * The dri breadcrumb update races against the drm master disappearing. 88 + * Instead of trying to fix this (this is by far not the only ums issue) 89 + * just don't do the update in kms mode. 90 + */ 91 + if (drm_core_check_feature(dev, DRIVER_MODESET)) 92 + return; 93 + 94 if (dev->primary->master) { 95 master_priv = dev->primary->master->driver_priv; 96 if (master_priv->sarea_priv) ··· 1490 spin_lock_init(&dev_priv->uncore.lock); 1491 spin_lock_init(&dev_priv->mm.object_stat_lock); 1492 mutex_init(&dev_priv->dpio_lock); 1493 mutex_init(&dev_priv->modeset_restore_lock); 1494 1495 + intel_pm_setup(dev); 1496 1497 intel_display_crc_init(dev); 1498 ··· 1603 } 1604 1605 intel_irq_init(dev); 1606 intel_uncore_sanitize(dev); 1607 1608 /* Try to make sure MCHBAR is enabled before poking at it */ ··· 1848 1849 void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) 1850 { 1851 + mutex_lock(&dev->struct_mutex); 1852 i915_gem_context_close(dev, file_priv); 1853 i915_gem_release(dev, file_priv); 1854 + mutex_unlock(&dev->struct_mutex); 1855 } 1856 1857 void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
+1
drivers/gpu/drm/i915/i915_drv.c
··· 651 intel_modeset_init_hw(dev); 652 653 drm_modeset_lock_all(dev); 654 intel_modeset_setup_hw_state(dev, true); 655 drm_modeset_unlock_all(dev); 656
··· 651 intel_modeset_init_hw(dev); 652 653 drm_modeset_lock_all(dev); 654 + drm_mode_config_reset(dev); 655 intel_modeset_setup_hw_state(dev, true); 656 drm_modeset_unlock_all(dev); 657
+6 -3
drivers/gpu/drm/i915/i915_drv.h
··· 1755 #define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile) 1756 #define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \ 1757 ((dev)->pdev->device & 0xFF00) == 0x0C00) 1758 - #define IS_ULT(dev) (IS_HASWELL(dev) && \ 1759 ((dev)->pdev->device & 0xFF00) == 0x0A00) 1760 #define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \ 1761 ((dev)->pdev->device & 0x00F0) == 0x0020) 1762 #define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary) ··· 1906 void i915_handle_error(struct drm_device *dev, bool wedged); 1907 1908 extern void intel_irq_init(struct drm_device *dev); 1909 - extern void intel_pm_init(struct drm_device *dev); 1910 extern void intel_hpd_init(struct drm_device *dev); 1911 - extern void intel_pm_init(struct drm_device *dev); 1912 1913 extern void intel_uncore_sanitize(struct drm_device *dev); 1914 extern void intel_uncore_early_sanitize(struct drm_device *dev);
··· 1755 #define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile) 1756 #define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \ 1757 ((dev)->pdev->device & 0xFF00) == 0x0C00) 1758 + #define IS_BDW_ULT(dev) (IS_BROADWELL(dev) && \ 1759 + (((dev)->pdev->device & 0xf) == 0x2 || \ 1760 + ((dev)->pdev->device & 0xf) == 0x6 || \ 1761 + ((dev)->pdev->device & 0xf) == 0xe)) 1762 + #define IS_HSW_ULT(dev) (IS_HASWELL(dev) && \ 1763 ((dev)->pdev->device & 0xFF00) == 0x0A00) 1764 + #define IS_ULT(dev) (IS_HSW_ULT(dev) || IS_BDW_ULT(dev)) 1765 #define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \ 1766 ((dev)->pdev->device & 0x00F0) == 0x0020) 1767 #define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary) ··· 1901 void i915_handle_error(struct drm_device *dev, bool wedged); 1902 1903 extern void intel_irq_init(struct drm_device *dev); 1904 extern void intel_hpd_init(struct drm_device *dev); 1905 1906 extern void intel_uncore_sanitize(struct drm_device *dev); 1907 extern void intel_uncore_early_sanitize(struct drm_device *dev);
+12 -4
drivers/gpu/drm/i915/i915_gem_context.c
··· 347 { 348 struct drm_i915_file_private *file_priv = file->driver_priv; 349 350 - mutex_lock(&dev->struct_mutex); 351 idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL); 352 idr_destroy(&file_priv->context_idr); 353 - mutex_unlock(&dev->struct_mutex); 354 } 355 356 static struct i915_hw_context * ··· 421 if (ret) 422 return ret; 423 424 - /* Clear this page out of any CPU caches for coherent swap-in/out. Note 425 * that thanks to write = false in this call and us not setting any gpu 426 * write domains when putting a context object onto the active list 427 * (when switching away from it), this won't block. 428 - * XXX: We need a real interface to do this instead of trickery. */ 429 ret = i915_gem_object_set_to_gtt_domain(to->obj, false); 430 if (ret) { 431 i915_gem_object_unpin(to->obj);
··· 347 { 348 struct drm_i915_file_private *file_priv = file->driver_priv; 349 350 idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL); 351 idr_destroy(&file_priv->context_idr); 352 } 353 354 static struct i915_hw_context * ··· 423 if (ret) 424 return ret; 425 426 + /* 427 + * Pin can switch back to the default context if we end up calling into 428 + * evict_everything - as a last ditch gtt defrag effort that also 429 + * switches to the default context. Hence we need to reload from here. 430 + */ 431 + from = ring->last_context; 432 + 433 + /* 434 + * Clear this page out of any CPU caches for coherent swap-in/out. Note 435 * that thanks to write = false in this call and us not setting any gpu 436 * write domains when putting a context object onto the active list 437 * (when switching away from it), this won't block. 438 + * 439 + * XXX: We need a real interface to do this instead of trickery. 440 + */ 441 ret = i915_gem_object_set_to_gtt_domain(to->obj, false); 442 if (ret) { 443 i915_gem_object_unpin(to->obj);
+11 -3
drivers/gpu/drm/i915/i915_gem_evict.c
··· 88 } else 89 drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level); 90 91 /* First see if there is a large enough contiguous idle region... */ 92 list_for_each_entry(vma, &vm->inactive_list, mm_list) { 93 if (mark_free(vma, &unwind_list)) ··· 116 list_del_init(&vma->exec_list); 117 } 118 119 - /* We expect the caller to unpin, evict all and try again, or give up. 120 - * So calling i915_gem_evict_vm() is unnecessary. 121 */ 122 - return -ENOSPC; 123 124 found: 125 /* drm_mm doesn't allow any other other operations while
··· 88 } else 89 drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level); 90 91 + search_again: 92 /* First see if there is a large enough contiguous idle region... */ 93 list_for_each_entry(vma, &vm->inactive_list, mm_list) { 94 if (mark_free(vma, &unwind_list)) ··· 115 list_del_init(&vma->exec_list); 116 } 117 118 + /* Can we unpin some objects such as idle hw contents, 119 + * or pending flips? 120 */ 121 + ret = nonblocking ? -ENOSPC : i915_gpu_idle(dev); 122 + if (ret) 123 + return ret; 124 + 125 + /* Only idle the GPU and repeat the search once */ 126 + i915_gem_retire_requests(dev); 127 + nonblocking = true; 128 + goto search_again; 129 130 found: 131 /* drm_mm doesn't allow any other other operations while
+7 -2
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 337 kfree(ppgtt->gen8_pt_dma_addr[i]); 338 } 339 340 - __free_pages(ppgtt->gen8_pt_pages, ppgtt->num_pt_pages << PAGE_SHIFT); 341 - __free_pages(ppgtt->pd_pages, ppgtt->num_pd_pages << PAGE_SHIFT); 342 } 343 344 /** ··· 1241 bdw_gmch_ctl &= BDW_GMCH_GGMS_MASK; 1242 if (bdw_gmch_ctl) 1243 bdw_gmch_ctl = 1 << bdw_gmch_ctl; 1244 return bdw_gmch_ctl << 20; 1245 } 1246
··· 337 kfree(ppgtt->gen8_pt_dma_addr[i]); 338 } 339 340 + __free_pages(ppgtt->gen8_pt_pages, get_order(ppgtt->num_pt_pages << PAGE_SHIFT)); 341 + __free_pages(ppgtt->pd_pages, get_order(ppgtt->num_pd_pages << PAGE_SHIFT)); 342 } 343 344 /** ··· 1241 bdw_gmch_ctl &= BDW_GMCH_GGMS_MASK; 1242 if (bdw_gmch_ctl) 1243 bdw_gmch_ctl = 1 << bdw_gmch_ctl; 1244 + if (bdw_gmch_ctl > 4) { 1245 + WARN_ON(!i915_preliminary_hw_support); 1246 + return 4<<20; 1247 + } 1248 + 1249 return bdw_gmch_ctl << 20; 1250 } 1251
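The __free_pages() fix above hinges on converting a byte count into a power-of-two allocation order rather than passing the raw size. A minimal userspace sketch of the conversion that get_order() performs, assuming 4 KiB pages (PAGE_SHIFT and the sample size below are illustrative, not taken from the driver):

#include <stdio.h>

#define PAGE_SHIFT 12                   /* assumption: 4 KiB pages */

/* Round a byte count up to the smallest page order that covers it. */
static unsigned int bytes_to_order(unsigned long size)
{
        unsigned int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        unsigned long bytes = 3UL << PAGE_SHIFT;        /* three pages */

        /* Three pages need order 2 (four pages), not "order 12288". */
        printf("%lu bytes -> order %u\n", bytes, bytes_to_order(bytes));
        return 0;
}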
+4 -3
drivers/gpu/drm/i915/intel_display.c
··· 9135 if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5) 9136 PIPE_CONF_CHECK_I(pipe_bpp); 9137 9138 - if (!IS_HASWELL(dev)) { 9139 PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.crtc_clock); 9140 PIPE_CONF_CHECK_CLOCK_FUZZY(port_clock); 9141 } ··· 11036 } 11037 11038 intel_modeset_check_state(dev); 11039 - 11040 - drm_mode_config_reset(dev); 11041 } 11042 11043 void intel_modeset_gem_init(struct drm_device *dev) ··· 11044 11045 intel_setup_overlay(dev); 11046 11047 intel_modeset_setup_hw_state(dev, false); 11048 } 11049 11050 void intel_modeset_cleanup(struct drm_device *dev)
··· 9135 if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5) 9136 PIPE_CONF_CHECK_I(pipe_bpp); 9137 9138 + if (!HAS_DDI(dev)) { 9139 PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.crtc_clock); 9140 PIPE_CONF_CHECK_CLOCK_FUZZY(port_clock); 9141 } ··· 11036 } 11037 11038 intel_modeset_check_state(dev); 11039 } 11040 11041 void intel_modeset_gem_init(struct drm_device *dev) ··· 11046 11047 intel_setup_overlay(dev); 11048 11049 + drm_modeset_lock_all(dev); 11050 + drm_mode_config_reset(dev); 11051 intel_modeset_setup_hw_state(dev, false); 11052 + drm_modeset_unlock_all(dev); 11053 } 11054 11055 void intel_modeset_cleanup(struct drm_device *dev)
+1
drivers/gpu/drm/i915/intel_drv.h
··· 821 uint32_t sprite_width, int pixel_size, 822 bool enabled, bool scaled); 823 void intel_init_pm(struct drm_device *dev); 824 bool intel_fbc_enabled(struct drm_device *dev); 825 void intel_update_fbc(struct drm_device *dev); 826 void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
··· 821 uint32_t sprite_width, int pixel_size, 822 bool enabled, bool scaled); 823 void intel_init_pm(struct drm_device *dev); 824 + void intel_pm_setup(struct drm_device *dev); 825 bool intel_fbc_enabled(struct drm_device *dev); 826 void intel_update_fbc(struct drm_device *dev); 827 void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
+23 -3
drivers/gpu/drm/i915/intel_panel.c
··· 451 452 spin_lock_irqsave(&dev_priv->backlight.lock, flags); 453 454 - if (HAS_PCH_SPLIT(dev)) { 455 val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; 456 } else { 457 if (IS_VALLEYVIEW(dev)) ··· 481 return val; 482 } 483 484 static void intel_pch_panel_set_backlight(struct drm_device *dev, u32 level) 485 { 486 struct drm_i915_private *dev_priv = dev->dev_private; ··· 505 DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level); 506 level = intel_panel_compute_brightness(dev, pipe, level); 507 508 - if (HAS_PCH_SPLIT(dev)) 509 return intel_pch_panel_set_backlight(dev, level); 510 511 if (is_backlight_combination_mode(dev)) { ··· 677 POSTING_READ(reg); 678 I915_WRITE(reg, tmp | BLM_PWM_ENABLE); 679 680 - if (HAS_PCH_SPLIT(dev) && 681 !(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) { 682 tmp = I915_READ(BLC_PWM_PCH_CTL1); 683 tmp |= BLM_PCH_PWM_ENABLE;
··· 451 452 spin_lock_irqsave(&dev_priv->backlight.lock, flags); 453 454 + if (IS_BROADWELL(dev)) { 455 + val = I915_READ(BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK; 456 + } else if (HAS_PCH_SPLIT(dev)) { 457 val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; 458 } else { 459 if (IS_VALLEYVIEW(dev)) ··· 479 return val; 480 } 481 482 + static void intel_bdw_panel_set_backlight(struct drm_device *dev, u32 level) 483 + { 484 + struct drm_i915_private *dev_priv = dev->dev_private; 485 + u32 val = I915_READ(BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK; 486 + I915_WRITE(BLC_PWM_PCH_CTL2, val | level); 487 + } 488 + 489 static void intel_pch_panel_set_backlight(struct drm_device *dev, u32 level) 490 { 491 struct drm_i915_private *dev_priv = dev->dev_private; ··· 496 DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level); 497 level = intel_panel_compute_brightness(dev, pipe, level); 498 499 + if (IS_BROADWELL(dev)) 500 + return intel_bdw_panel_set_backlight(dev, level); 501 + else if (HAS_PCH_SPLIT(dev)) 502 return intel_pch_panel_set_backlight(dev, level); 503 504 if (is_backlight_combination_mode(dev)) { ··· 666 POSTING_READ(reg); 667 I915_WRITE(reg, tmp | BLM_PWM_ENABLE); 668 669 + if (IS_BROADWELL(dev)) { 670 + /* 671 + * Broadwell requires PCH override to drive the PCH 672 + * backlight pin. The above will configure the CPU 673 + * backlight pin, which we don't plan to use. 674 + */ 675 + tmp = I915_READ(BLC_PWM_PCH_CTL1); 676 + tmp |= BLM_PCH_OVERRIDE_ENABLE | BLM_PCH_PWM_ENABLE; 677 + I915_WRITE(BLC_PWM_PCH_CTL1, tmp); 678 + } else if (HAS_PCH_SPLIT(dev) && 679 !(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) { 680 tmp = I915_READ(BLC_PWM_PCH_CTL1); 681 tmp |= BLM_PCH_PWM_ENABLE;
+27 -2
drivers/gpu/drm/i915/intel_pm.c
··· 5685 { 5686 struct drm_i915_private *dev_priv = dev->dev_private; 5687 bool is_enabled, enable_requested; 5688 uint32_t tmp; 5689 5690 tmp = I915_READ(HSW_PWR_WELL_DRIVER); ··· 5703 HSW_PWR_WELL_STATE_ENABLED), 20)) 5704 DRM_ERROR("Timeout enabling power well\n"); 5705 } 5706 } else { 5707 if (enable_requested) { 5708 - unsigned long irqflags; 5709 enum pipe p; 5710 5711 I915_WRITE(HSW_PWR_WELL_DRIVER, 0); ··· 6146 return val; 6147 } 6148 6149 - void intel_pm_init(struct drm_device *dev) 6150 { 6151 struct drm_i915_private *dev_priv = dev->dev_private; 6152 6153 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work, 6154 intel_gen6_powersave_work); 6155 }
··· 5685 { 5686 struct drm_i915_private *dev_priv = dev->dev_private; 5687 bool is_enabled, enable_requested; 5688 + unsigned long irqflags; 5689 uint32_t tmp; 5690 5691 tmp = I915_READ(HSW_PWR_WELL_DRIVER); ··· 5702 HSW_PWR_WELL_STATE_ENABLED), 20)) 5703 DRM_ERROR("Timeout enabling power well\n"); 5704 } 5705 + 5706 + if (IS_BROADWELL(dev)) { 5707 + spin_lock_irqsave(&dev_priv->irq_lock, irqflags); 5708 + I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_B), 5709 + dev_priv->de_irq_mask[PIPE_B]); 5710 + I915_WRITE(GEN8_DE_PIPE_IER(PIPE_B), 5711 + ~dev_priv->de_irq_mask[PIPE_B] | 5712 + GEN8_PIPE_VBLANK); 5713 + I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_C), 5714 + dev_priv->de_irq_mask[PIPE_C]); 5715 + I915_WRITE(GEN8_DE_PIPE_IER(PIPE_C), 5716 + ~dev_priv->de_irq_mask[PIPE_C] | 5717 + GEN8_PIPE_VBLANK); 5718 + POSTING_READ(GEN8_DE_PIPE_IER(PIPE_C)); 5719 + spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); 5720 + } 5721 } else { 5722 if (enable_requested) { 5723 enum pipe p; 5724 5725 I915_WRITE(HSW_PWR_WELL_DRIVER, 0); ··· 6130 return val; 6131 } 6132 6133 + void intel_pm_setup(struct drm_device *dev) 6134 { 6135 struct drm_i915_private *dev_priv = dev->dev_private; 6136 6137 + mutex_init(&dev_priv->rps.hw_lock); 6138 + 6139 + mutex_init(&dev_priv->pc8.lock); 6140 + dev_priv->pc8.requirements_met = false; 6141 + dev_priv->pc8.gpu_idle = false; 6142 + dev_priv->pc8.irqs_disabled = false; 6143 + dev_priv->pc8.enabled = false; 6144 + dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */ 6145 + INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work); 6146 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work, 6147 intel_gen6_powersave_work); 6148 }
+1
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 965 } else if (IS_GEN6(ring->dev)) { 966 mmio = RING_HWS_PGA_GEN6(ring->mmio_base); 967 } else { 968 mmio = RING_HWS_PGA(ring->mmio_base); 969 } 970
··· 965 } else if (IS_GEN6(ring->dev)) { 966 mmio = RING_HWS_PGA_GEN6(ring->mmio_base); 967 } else { 968 + /* XXX: gen8 returns to sanity */ 969 mmio = RING_HWS_PGA(ring->mmio_base); 970 } 971
+1
drivers/gpu/drm/i915/intel_uncore.c
··· 784 int intel_gpu_reset(struct drm_device *dev) 785 { 786 switch (INTEL_INFO(dev)->gen) { 787 case 7: 788 case 6: return gen6_do_reset(dev); 789 case 5: return ironlake_do_reset(dev);
··· 784 int intel_gpu_reset(struct drm_device *dev) 785 { 786 switch (INTEL_INFO(dev)->gen) { 787 + case 8: 788 case 7: 789 case 6: return gen6_do_reset(dev); 790 case 5: return ironlake_do_reset(dev);
+6
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 858 if (nouveau_runtime_pm == 0) 859 return -EINVAL; 860 861 nv_debug_level(SILENT); 862 drm_kms_helper_poll_disable(drm_dev); 863 vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
··· 858 if (nouveau_runtime_pm == 0) 859 return -EINVAL; 860 861 + /* are we optimus enabled? */ 862 + if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) { 863 + DRM_DEBUG_DRIVER("failing to power off - not optimus\n"); 864 + return -EINVAL; 865 + } 866 + 867 nv_debug_level(SILENT); 868 drm_kms_helper_poll_disable(drm_dev); 869 vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
+3 -1
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1196 } else if ((rdev->family == CHIP_TAHITI) || 1197 (rdev->family == CHIP_PITCAIRN)) 1198 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P8_32x32_8x16); 1199 - else if (rdev->family == CHIP_VERDE) 1200 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P4_8x16); 1201 1202 switch (radeon_crtc->crtc_id) {
··· 1196 } else if ((rdev->family == CHIP_TAHITI) || 1197 (rdev->family == CHIP_PITCAIRN)) 1198 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P8_32x32_8x16); 1199 + else if ((rdev->family == CHIP_VERDE) || 1200 + (rdev->family == CHIP_OLAND) || 1201 + (rdev->family == CHIP_HAINAN)) /* for completeness. HAINAN has no display hw */ 1202 fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P4_8x16); 1203 1204 switch (radeon_crtc->crtc_id) {
+1 -1
drivers/gpu/drm/radeon/cik_sdma.c
··· 458 radeon_ring_write(ring, 0); /* src/dst endian swap */ 459 radeon_ring_write(ring, src_offset & 0xffffffff); 460 radeon_ring_write(ring, upper_32_bits(src_offset) & 0xffffffff); 461 - radeon_ring_write(ring, dst_offset & 0xfffffffc); 462 radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xffffffff); 463 src_offset += cur_size_in_bytes; 464 dst_offset += cur_size_in_bytes;
··· 458 radeon_ring_write(ring, 0); /* src/dst endian swap */ 459 radeon_ring_write(ring, src_offset & 0xffffffff); 460 radeon_ring_write(ring, upper_32_bits(src_offset) & 0xffffffff); 461 + radeon_ring_write(ring, dst_offset & 0xffffffff); 462 radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xffffffff); 463 src_offset += cur_size_in_bytes; 464 dst_offset += cur_size_in_bytes;
+2 -2
drivers/gpu/drm/radeon/radeon_asic.c
··· 2021 .hdmi_setmode = &evergreen_hdmi_setmode, 2022 }, 2023 .copy = { 2024 - .blit = NULL, 2025 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2026 .dma = &cik_copy_dma, 2027 .dma_ring_index = R600_RING_TYPE_DMA_INDEX, ··· 2122 .hdmi_setmode = &evergreen_hdmi_setmode, 2123 }, 2124 .copy = { 2125 - .blit = NULL, 2126 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2127 .dma = &cik_copy_dma, 2128 .dma_ring_index = R600_RING_TYPE_DMA_INDEX,
··· 2021 .hdmi_setmode = &evergreen_hdmi_setmode, 2022 }, 2023 .copy = { 2024 + .blit = &cik_copy_cpdma, 2025 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2026 .dma = &cik_copy_dma, 2027 .dma_ring_index = R600_RING_TYPE_DMA_INDEX, ··· 2122 .hdmi_setmode = &evergreen_hdmi_setmode, 2123 }, 2124 .copy = { 2125 + .blit = &cik_copy_cpdma, 2126 .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, 2127 .dma = &cik_copy_dma, 2128 .dma_ring_index = R600_RING_TYPE_DMA_INDEX,
-10
drivers/gpu/drm/radeon/radeon_drv.c
··· 508 #endif 509 }; 510 511 - 512 - static void 513 - radeon_pci_shutdown(struct pci_dev *pdev) 514 - { 515 - struct drm_device *dev = pci_get_drvdata(pdev); 516 - 517 - radeon_driver_unload_kms(dev); 518 - } 519 - 520 static struct drm_driver kms_driver = { 521 .driver_features = 522 DRIVER_USE_AGP | ··· 577 .probe = radeon_pci_probe, 578 .remove = radeon_pci_remove, 579 .driver.pm = &radeon_pm_ops, 580 - .shutdown = radeon_pci_shutdown, 581 }; 582 583 static int __init radeon_init(void)
··· 508 #endif 509 }; 510 511 static struct drm_driver kms_driver = { 512 .driver_features = 513 DRIVER_USE_AGP | ··· 586 .probe = radeon_pci_probe, 587 .remove = radeon_pci_remove, 588 .driver.pm = &radeon_pm_ops, 589 }; 590 591 static int __init radeon_init(void)
+10
drivers/gpu/drm/radeon/rs690.c
··· 162 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 163 base = G_000100_MC_FB_START(base) << 16; 164 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 165 166 /* Use K8 direct mapping for fast fb access. */ 167 rdev->fastfb_working = false;
··· 162 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 163 base = G_000100_MC_FB_START(base) << 16; 164 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 165 + /* Some boards seem to be configured for 128MB of sideport memory, 166 + * but really only have 64MB. Just skip the sideport and use 167 + * UMA memory. 168 + */ 169 + if (rdev->mc.igp_sideport_enabled && 170 + (rdev->mc.real_vram_size == (384 * 1024 * 1024))) { 171 + base += 128 * 1024 * 1024; 172 + rdev->mc.real_vram_size -= 128 * 1024 * 1024; 173 + rdev->mc.mc_vram_size = rdev->mc.real_vram_size; 174 + } 175 176 /* Use K8 direct mapping for fast fb access. */ 177 rdev->fastfb_working = false;
+3 -3
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 169 } 170 171 page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) + 172 - drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff; 173 - page_last = vma_pages(vma) + 174 - drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff; 175 176 if (unlikely(page_offset >= bo->num_pages)) { 177 retval = VM_FAULT_SIGBUS;
··· 169 } 170 171 page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) + 172 + vma->vm_pgoff - drm_vma_node_start(&bo->vma_node); 173 + page_last = vma_pages(vma) + vma->vm_pgoff - 174 + drm_vma_node_start(&bo->vma_node); 175 176 if (unlikely(page_offset >= bo->num_pages)) { 177 retval = VM_FAULT_SIGBUS;
+3
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
··· 68 SVGA_FIFO_3D_HWVERSION)); 69 break; 70 } 71 default: 72 DRM_ERROR("Illegal vmwgfx get param request: %d\n", 73 param->param);
··· 68 SVGA_FIFO_3D_HWVERSION)); 69 break; 70 } 71 + case DRM_VMW_PARAM_MAX_SURF_MEMORY: 72 + param->value = dev_priv->memory_size; 73 + break; 74 default: 75 DRM_ERROR("Illegal vmwgfx get param request: %d\n", 76 param->param);
+14 -2
drivers/iio/adc/ad7887.c
··· 200 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 201 .address = 1, 202 .scan_index = 1, 203 - .scan_type = IIO_ST('u', 12, 16, 0), 204 }, 205 .channel[1] = { 206 .type = IIO_VOLTAGE, ··· 216 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 217 .address = 0, 218 .scan_index = 0, 219 - .scan_type = IIO_ST('u', 12, 16, 0), 220 }, 221 .channel[2] = IIO_CHAN_SOFT_TIMESTAMP(2), 222 .int_vref_mv = 2500,
··· 200 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 201 .address = 1, 202 .scan_index = 1, 203 + .scan_type = { 204 + .sign = 'u', 205 + .realbits = 12, 206 + .storagebits = 16, 207 + .shift = 0, 208 + .endianness = IIO_BE, 209 + }, 210 }, 211 .channel[1] = { 212 .type = IIO_VOLTAGE, ··· 210 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 211 .address = 0, 212 .scan_index = 0, 213 + .scan_type = { 214 + .sign = 'u', 215 + .realbits = 12, 216 + .storagebits = 16, 217 + .shift = 0, 218 + .endianness = IIO_BE, 219 + }, 220 }, 221 .channel[2] = IIO_CHAN_SOFT_TIMESTAMP(2), 222 .int_vref_mv = 2500,
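The ad7887 hunk above (like the adis16400 and hmc5843 hunks further down) replaces the positional IIO_ST() helper with a designated initializer, which spells out which value lands in which field. A small standalone comparison of the two styles, using a stand-in struct whose fields only mimic the scan-type layout (it is not the real iio header):

#include <stdio.h>

/* Hypothetical stand-in for a scan-type descriptor. */
struct scan_type {
        char sign;                      /* 'u' or 's' */
        unsigned char realbits;
        unsigned char storagebits;
        unsigned char shift;
        int big_endian;
};

/* Positional helper: the meaning of each argument is easy to misread. */
#define SCAN_ST(si, rb, sb, sh)  { (si), (rb), (sb), (sh), 0 }

int main(void)
{
        struct scan_type positional = SCAN_ST('u', 12, 16, 0);
        struct scan_type designated = {
                .sign = 'u',
                .realbits = 12,
                .storagebits = 16,
                .shift = 0,
                .big_endian = 1,        /* explicit, instead of a default */
        };

        printf("positional: %c %u/%u shift %u be %d\n", positional.sign,
               positional.realbits, positional.storagebits,
               positional.shift, positional.big_endian);
        printf("designated: %c %u/%u shift %u be %d\n", designated.sign,
               designated.realbits, designated.storagebits,
               designated.shift, designated.big_endian);
        return 0;
}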
+6 -1
drivers/iio/imu/adis16400_core.c
··· 651 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 652 .address = ADIS16448_BARO_OUT, 653 .scan_index = ADIS16400_SCAN_BARO, 654 - .scan_type = IIO_ST('s', 16, 16, 0), 655 }, 656 ADIS16400_TEMP_CHAN(ADIS16448_TEMP_OUT, 12), 657 IIO_CHAN_SOFT_TIMESTAMP(11)
··· 651 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 652 .address = ADIS16448_BARO_OUT, 653 .scan_index = ADIS16400_SCAN_BARO, 654 + .scan_type = { 655 + .sign = 's', 656 + .realbits = 16, 657 + .storagebits = 16, 658 + .endianness = IIO_BE, 659 + }, 660 }, 661 ADIS16400_TEMP_CHAN(ADIS16448_TEMP_OUT, 12), 662 IIO_CHAN_SOFT_TIMESTAMP(11)
+1 -1
drivers/iio/light/cm36651.c
··· 387 return -EINVAL; 388 } 389 390 - return IIO_VAL_INT_PLUS_MICRO; 391 } 392 393 static int cm36651_write_int_time(struct cm36651_data *cm36651,
··· 387 return -EINVAL; 388 } 389 390 + return IIO_VAL_INT; 391 } 392 393 static int cm36651_write_int_time(struct cm36651_data *cm36651,
+18 -8
drivers/infiniband/ulp/isert/ib_isert.c
··· 207 isert_conn->conn_rx_descs = NULL; 208 } 209 210 static void isert_cq_tx_callback(struct ib_cq *, void *); 211 static void isert_cq_rx_callback(struct ib_cq *, void *); 212 213 static int ··· 261 cq_desc[i].device = device; 262 cq_desc[i].cq_index = i; 263 264 device->dev_rx_cq[i] = ib_create_cq(device->ib_device, 265 isert_cq_rx_callback, 266 isert_cq_event_callback, 267 (void *)&cq_desc[i], 268 ISER_MAX_RX_CQ_LEN, i); 269 - if (IS_ERR(device->dev_rx_cq[i])) 270 goto out_cq; 271 272 device->dev_tx_cq[i] = ib_create_cq(device->ib_device, 273 isert_cq_tx_callback, 274 isert_cq_event_callback, 275 (void *)&cq_desc[i], 276 ISER_MAX_TX_CQ_LEN, i); 277 - if (IS_ERR(device->dev_tx_cq[i])) 278 goto out_cq; 279 280 - if (ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP)) 281 - goto out_cq; 282 - 283 - if (ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP)) 284 goto out_cq; 285 } 286 ··· 1736 { 1737 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1738 1739 - INIT_WORK(&cq_desc->cq_tx_work, isert_cq_tx_work); 1740 queue_work(isert_comp_wq, &cq_desc->cq_tx_work); 1741 } 1742 ··· 1779 { 1780 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1781 1782 - INIT_WORK(&cq_desc->cq_rx_work, isert_cq_rx_work); 1783 queue_work(isert_rx_wq, &cq_desc->cq_rx_work); 1784 } 1785
··· 207 isert_conn->conn_rx_descs = NULL; 208 } 209 210 + static void isert_cq_tx_work(struct work_struct *); 211 static void isert_cq_tx_callback(struct ib_cq *, void *); 212 + static void isert_cq_rx_work(struct work_struct *); 213 static void isert_cq_rx_callback(struct ib_cq *, void *); 214 215 static int ··· 259 cq_desc[i].device = device; 260 cq_desc[i].cq_index = i; 261 262 + INIT_WORK(&cq_desc[i].cq_rx_work, isert_cq_rx_work); 263 device->dev_rx_cq[i] = ib_create_cq(device->ib_device, 264 isert_cq_rx_callback, 265 isert_cq_event_callback, 266 (void *)&cq_desc[i], 267 ISER_MAX_RX_CQ_LEN, i); 268 + if (IS_ERR(device->dev_rx_cq[i])) { 269 + ret = PTR_ERR(device->dev_rx_cq[i]); 270 + device->dev_rx_cq[i] = NULL; 271 goto out_cq; 272 + } 273 274 + INIT_WORK(&cq_desc[i].cq_tx_work, isert_cq_tx_work); 275 device->dev_tx_cq[i] = ib_create_cq(device->ib_device, 276 isert_cq_tx_callback, 277 isert_cq_event_callback, 278 (void *)&cq_desc[i], 279 ISER_MAX_TX_CQ_LEN, i); 280 + if (IS_ERR(device->dev_tx_cq[i])) { 281 + ret = PTR_ERR(device->dev_tx_cq[i]); 282 + device->dev_tx_cq[i] = NULL; 283 + goto out_cq; 284 + } 285 + 286 + ret = ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP); 287 + if (ret) 288 goto out_cq; 289 290 + ret = ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP); 291 + if (ret) 292 goto out_cq; 293 } 294 ··· 1724 { 1725 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1726 1727 queue_work(isert_comp_wq, &cq_desc->cq_tx_work); 1728 } 1729 ··· 1768 { 1769 struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context; 1770 1771 queue_work(isert_rx_wq, &cq_desc->cq_rx_work); 1772 } 1773
+2 -1
drivers/net/can/usb/ems_usb.c
··· 625 usb_unanchor_urb(urb); 626 usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf, 627 urb->transfer_dma); 628 break; 629 } 630 ··· 799 * allowed (MAX_TX_URBS). 800 */ 801 if (!context) { 802 - usb_unanchor_urb(urb); 803 usb_free_coherent(dev->udev, size, buf, urb->transfer_dma); 804 805 netdev_warn(netdev, "couldn't find free context\n"); 806
··· 625 usb_unanchor_urb(urb); 626 usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf, 627 urb->transfer_dma); 628 + usb_free_urb(urb); 629 break; 630 } 631 ··· 798 * allowed (MAX_TX_URBS). 799 */ 800 if (!context) { 801 usb_free_coherent(dev->udev, size, buf, urb->transfer_dma); 802 + usb_free_urb(urb); 803 804 netdev_warn(netdev, "couldn't find free context\n"); 805
+3
drivers/net/can/usb/peak_usb/pcan_usb_pro.c
··· 927 /* set LED in default state (end of init phase) */ 928 pcan_usb_pro_set_led(dev, 0, 1); 929 930 return 0; 931 932 err_out:
··· 927 /* set LED in default state (end of init phase) */ 928 pcan_usb_pro_set_led(dev, 0, 1); 929 930 + kfree(bi); 931 + kfree(fi); 932 + 933 return 0; 934 935 err_out:
+21 -26
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 447 448 qlcnic_83xx_poll_process_aen(adapter); 449 450 - if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) { 451 - ahw->diag_cnt++; 452 qlcnic_83xx_enable_legacy_msix_mbx_intr(adapter); 453 return IRQ_HANDLED; 454 } ··· 1346 } 1347 1348 if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) { 1349 - /* disable and free mailbox interrupt */ 1350 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) { 1351 - qlcnic_83xx_enable_mbx_poll(adapter); 1352 - qlcnic_83xx_free_mbx_intr(adapter); 1353 - } 1354 adapter->ahw->loopback_state = 0; 1355 adapter->ahw->hw_ops->setup_link_event(adapter, 1); 1356 } ··· 1359 { 1360 struct qlcnic_adapter *adapter = netdev_priv(netdev); 1361 struct qlcnic_host_sds_ring *sds_ring; 1362 - int ring, err; 1363 1364 clear_bit(__QLCNIC_DEV_UP, &adapter->state); 1365 if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { 1366 for (ring = 0; ring < adapter->drv_sds_rings; ring++) { 1367 sds_ring = &adapter->recv_ctx->sds_rings[ring]; 1368 - qlcnic_83xx_disable_intr(adapter, sds_ring); 1369 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) 1370 - qlcnic_83xx_enable_mbx_poll(adapter); 1371 } 1372 } 1373 1374 qlcnic_fw_destroy_ctx(adapter); 1375 qlcnic_detach(adapter); 1376 1377 - if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) { 1378 - if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) { 1379 - err = qlcnic_83xx_setup_mbx_intr(adapter); 1380 - qlcnic_83xx_disable_mbx_poll(adapter); 1381 - if (err) { 1382 - dev_err(&adapter->pdev->dev, 1383 - "%s: failed to setup mbx interrupt\n", 1384 - __func__); 1385 - goto out; 1386 - } 1387 - } 1388 - } 1389 adapter->ahw->diag_test = 0; 1390 adapter->drv_sds_rings = drv_sds_rings; 1391 ··· 1382 if (netif_running(netdev)) 1383 __qlcnic_up(adapter, netdev); 1384 1385 - if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST && 1386 - !(adapter->flags & QLCNIC_MSIX_ENABLED)) 1387 - qlcnic_83xx_disable_mbx_poll(adapter); 1388 out: 1389 netif_device_attach(netdev); 1390 } ··· 3734 return; 3735 } 3736 3737 static void qlcnic_83xx_mailbox_worker(struct work_struct *work) 3738 { 3739 struct qlcnic_mailbox *mbx = container_of(work, struct qlcnic_mailbox, ··· 3791 __func__, cmd->cmd_op, cmd->type, ahw->pci_func, 3792 ahw->op_mode); 3793 clear_bit(QLC_83XX_MBX_READY, &mbx->status); 3794 qlcnic_dump_mbx(adapter, cmd); 3795 qlcnic_83xx_idc_request_reset(adapter, 3796 QLCNIC_FORCE_FW_DUMP_KEY);
··· 447 448 qlcnic_83xx_poll_process_aen(adapter); 449 450 + if (ahw->diag_test) { 451 + if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) 452 + ahw->diag_cnt++; 453 qlcnic_83xx_enable_legacy_msix_mbx_intr(adapter); 454 return IRQ_HANDLED; 455 } ··· 1345 } 1346 1347 if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) { 1348 adapter->ahw->loopback_state = 0; 1349 adapter->ahw->hw_ops->setup_link_event(adapter, 1); 1350 } ··· 1363 { 1364 struct qlcnic_adapter *adapter = netdev_priv(netdev); 1365 struct qlcnic_host_sds_ring *sds_ring; 1366 + int ring; 1367 1368 clear_bit(__QLCNIC_DEV_UP, &adapter->state); 1369 if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { 1370 for (ring = 0; ring < adapter->drv_sds_rings; ring++) { 1371 sds_ring = &adapter->recv_ctx->sds_rings[ring]; 1372 + if (adapter->flags & QLCNIC_MSIX_ENABLED) 1373 + qlcnic_83xx_disable_intr(adapter, sds_ring); 1374 } 1375 } 1376 1377 qlcnic_fw_destroy_ctx(adapter); 1378 qlcnic_detach(adapter); 1379 1380 adapter->ahw->diag_test = 0; 1381 adapter->drv_sds_rings = drv_sds_rings; 1382 ··· 1399 if (netif_running(netdev)) 1400 __qlcnic_up(adapter, netdev); 1401 1402 out: 1403 netif_device_attach(netdev); 1404 } ··· 3754 return; 3755 } 3756 3757 + static inline void qlcnic_dump_mailbox_registers(struct qlcnic_adapter *adapter) 3758 + { 3759 + struct qlcnic_hardware_context *ahw = adapter->ahw; 3760 + u32 offset; 3761 + 3762 + offset = QLCRDX(ahw, QLCNIC_DEF_INT_MASK); 3763 + dev_info(&adapter->pdev->dev, "Mbx interrupt mask=0x%x, Mbx interrupt enable=0x%x, Host mbx control=0x%x, Fw mbx control=0x%x", 3764 + readl(ahw->pci_base0 + offset), 3765 + QLCRDX(ahw, QLCNIC_MBX_INTR_ENBL), 3766 + QLCRDX(ahw, QLCNIC_HOST_MBX_CTRL), 3767 + QLCRDX(ahw, QLCNIC_FW_MBX_CTRL)); 3768 + } 3769 + 3770 static void qlcnic_83xx_mailbox_worker(struct work_struct *work) 3771 { 3772 struct qlcnic_mailbox *mbx = container_of(work, struct qlcnic_mailbox, ··· 3798 __func__, cmd->cmd_op, cmd->type, ahw->pci_func, 3799 ahw->op_mode); 3800 clear_bit(QLC_83XX_MBX_READY, &mbx->status); 3801 + qlcnic_dump_mailbox_registers(adapter); 3802 + qlcnic_83xx_get_mbx_data(adapter, cmd); 3803 qlcnic_dump_mbx(adapter, cmd); 3804 qlcnic_83xx_idc_request_reset(adapter, 3805 QLCNIC_FORCE_FW_DUMP_KEY);
+1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
··· 662 pci_channel_state_t); 663 pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *); 664 void qlcnic_83xx_io_resume(struct pci_dev *); 665 #endif
··· 662 pci_channel_state_t); 663 pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *); 664 void qlcnic_83xx_io_resume(struct pci_dev *); 665 + void qlcnic_83xx_stop_hw(struct qlcnic_adapter *); 666 #endif
+41 -26
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 740 adapter->ahw->idc.err_code = -EIO; 741 dev_err(&adapter->pdev->dev, 742 "%s: Device in unknown state\n", __func__); 743 return 0; 744 } 745 ··· 819 struct qlcnic_hardware_context *ahw = adapter->ahw; 820 struct qlcnic_mailbox *mbx = ahw->mailbox; 821 int ret = 0; 822 - u32 owner; 823 u32 val; 824 825 /* Perform NIC configuration based ready state entry actions */ ··· 848 set_bit(__QLCNIC_RESETTING, &adapter->state); 849 qlcnic_83xx_idc_enter_need_reset_state(adapter, 1); 850 } else { 851 - owner = qlcnic_83xx_idc_find_reset_owner_id(adapter); 852 - if (ahw->pci_func == owner) 853 - qlcnic_dump_fw(adapter); 854 } 855 return -EIO; 856 } ··· 948 return 0; 949 } 950 951 - static int qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter) 952 { 953 - dev_err(&adapter->pdev->dev, "%s: please restart!!\n", __func__); 954 - clear_bit(__QLCNIC_RESETTING, &adapter->state); 955 - adapter->ahw->idc.err_code = -EIO; 956 957 - return 0; 958 } 959 960 static int qlcnic_83xx_idc_quiesce_state(struct qlcnic_adapter *adapter) ··· 1075 } 1076 adapter->ahw->idc.prev_state = adapter->ahw->idc.curr_state; 1077 qlcnic_83xx_periodic_tasks(adapter); 1078 - 1079 - /* Do not reschedule if firmaware is in hanged state and auto 1080 - * recovery is disabled 1081 - */ 1082 - if ((adapter->flags & QLCNIC_FW_HANG) && !qlcnic_auto_fw_reset) 1083 - return; 1084 1085 /* Re-schedule the function */ 1086 if (test_bit(QLC_83XX_MODULE_LOADED, &adapter->ahw->idc.status)) ··· 1226 } 1227 1228 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 1229 - if ((val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) || 1230 - !qlcnic_auto_fw_reset) { 1231 - dev_err(&adapter->pdev->dev, 1232 - "%s:failed, device in non reset mode\n", __func__); 1233 qlcnic_83xx_unlock_driver(adapter); 1234 return; 1235 } ··· 1261 if (size & 0xF) 1262 size = (size + 16) & ~0xF; 1263 1264 - p_cache = kzalloc(size, GFP_KERNEL); 1265 if (p_cache == NULL) 1266 return -ENOMEM; 1267 1268 ret = qlcnic_83xx_lockless_flash_read32(adapter, src, p_cache, 1269 size / sizeof(u32)); 1270 if (ret) { 1271 - kfree(p_cache); 1272 return ret; 1273 } 1274 /* 16 byte write to MS memory */ 1275 ret = qlcnic_83xx_ms_mem_write128(adapter, dest, (u32 *)p_cache, 1276 size / 16); 1277 if (ret) { 1278 - kfree(p_cache); 1279 return ret; 1280 } 1281 - kfree(p_cache); 1282 1283 return ret; 1284 } ··· 1946 p_dev->ahw->reset.seq_index = index; 1947 } 1948 1949 - static void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev) 1950 { 1951 p_dev->ahw->reset.seq_index = 0; 1952 ··· 2001 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 2002 if (!(val & QLC_83XX_IDC_GRACEFULL_RESET)) 2003 qlcnic_dump_fw(adapter); 2004 qlcnic_83xx_init_hw(adapter); 2005 2006 if (qlcnic_83xx_copy_bootloader(adapter)) ··· 2088 ahw->nic_mode = QLCNIC_DEFAULT_MODE; 2089 adapter->nic_ops->init_driver = qlcnic_83xx_init_default_driver; 2090 ahw->idc.state_entry = qlcnic_83xx_idc_ready_state_entry; 2091 - adapter->max_sds_rings = ahw->max_rx_ques; 2092 - adapter->max_tx_rings = ahw->max_tx_ques; 2093 } else { 2094 return -EIO; 2095 }
··· 740 adapter->ahw->idc.err_code = -EIO; 741 dev_err(&adapter->pdev->dev, 742 "%s: Device in unknown state\n", __func__); 743 + clear_bit(__QLCNIC_RESETTING, &adapter->state); 744 return 0; 745 } 746 ··· 818 struct qlcnic_hardware_context *ahw = adapter->ahw; 819 struct qlcnic_mailbox *mbx = ahw->mailbox; 820 int ret = 0; 821 u32 val; 822 823 /* Perform NIC configuration based ready state entry actions */ ··· 848 set_bit(__QLCNIC_RESETTING, &adapter->state); 849 qlcnic_83xx_idc_enter_need_reset_state(adapter, 1); 850 } else { 851 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 852 + __func__); 853 + qlcnic_83xx_idc_enter_failed_state(adapter, 1); 854 } 855 return -EIO; 856 } ··· 948 return 0; 949 } 950 951 + static void qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter) 952 { 953 + struct qlcnic_hardware_context *ahw = adapter->ahw; 954 + u32 val, owner; 955 956 + val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 957 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 958 + owner = qlcnic_83xx_idc_find_reset_owner_id(adapter); 959 + if (ahw->pci_func == owner) { 960 + qlcnic_83xx_stop_hw(adapter); 961 + qlcnic_dump_fw(adapter); 962 + } 963 + } 964 + 965 + netdev_warn(adapter->netdev, "%s: Reboot will be required to recover the adapter!!\n", 966 + __func__); 967 + clear_bit(__QLCNIC_RESETTING, &adapter->state); 968 + ahw->idc.err_code = -EIO; 969 + 970 + return; 971 } 972 973 static int qlcnic_83xx_idc_quiesce_state(struct qlcnic_adapter *adapter) ··· 1062 } 1063 adapter->ahw->idc.prev_state = adapter->ahw->idc.curr_state; 1064 qlcnic_83xx_periodic_tasks(adapter); 1065 1066 /* Re-schedule the function */ 1067 if (test_bit(QLC_83XX_MODULE_LOADED, &adapter->ahw->idc.status)) ··· 1219 } 1220 1221 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 1222 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 1223 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 1224 + __func__); 1225 + qlcnic_83xx_idc_enter_failed_state(adapter, 0); 1226 qlcnic_83xx_unlock_driver(adapter); 1227 return; 1228 } ··· 1254 if (size & 0xF) 1255 size = (size + 16) & ~0xF; 1256 1257 + p_cache = vzalloc(size); 1258 if (p_cache == NULL) 1259 return -ENOMEM; 1260 1261 ret = qlcnic_83xx_lockless_flash_read32(adapter, src, p_cache, 1262 size / sizeof(u32)); 1263 if (ret) { 1264 + vfree(p_cache); 1265 return ret; 1266 } 1267 /* 16 byte write to MS memory */ 1268 ret = qlcnic_83xx_ms_mem_write128(adapter, dest, (u32 *)p_cache, 1269 size / 16); 1270 if (ret) { 1271 + vfree(p_cache); 1272 return ret; 1273 } 1274 + vfree(p_cache); 1275 1276 return ret; 1277 } ··· 1939 p_dev->ahw->reset.seq_index = index; 1940 } 1941 1942 + void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev) 1943 { 1944 p_dev->ahw->reset.seq_index = 0; 1945 ··· 1994 val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL); 1995 if (!(val & QLC_83XX_IDC_GRACEFULL_RESET)) 1996 qlcnic_dump_fw(adapter); 1997 + 1998 + if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) { 1999 + netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n", 2000 + __func__); 2001 + qlcnic_83xx_idc_enter_failed_state(adapter, 1); 2002 + return err; 2003 + } 2004 + 2005 qlcnic_83xx_init_hw(adapter); 2006 2007 if (qlcnic_83xx_copy_bootloader(adapter)) ··· 2073 ahw->nic_mode = QLCNIC_DEFAULT_MODE; 2074 adapter->nic_ops->init_driver = qlcnic_83xx_init_default_driver; 2075 ahw->idc.state_entry = qlcnic_83xx_idc_ready_state_entry; 2076 + adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS; 2077 + adapter->max_tx_rings = QLCNIC_MAX_TX_RINGS; 
2078 } else { 2079 return -EIO; 2080 }
+8 -11
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 667 static int qlcnic_validate_ring_count(struct qlcnic_adapter *adapter, 668 u8 rx_ring, u8 tx_ring) 669 { 670 if (rx_ring != 0) { 671 if (rx_ring > adapter->max_sds_rings) { 672 - netdev_err(adapter->netdev, "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n", 673 rx_ring, adapter->max_sds_rings); 674 return -EINVAL; 675 } 676 } 677 678 if (tx_ring != 0) { 679 - if (qlcnic_82xx_check(adapter) && 680 - (tx_ring > adapter->max_tx_rings)) { 681 netdev_err(adapter->netdev, 682 "Invalid ring count, Tx ring count %d should not be greater than max %d driver Tx rings.\n", 683 tx_ring, adapter->max_tx_rings); 684 return -EINVAL; 685 - } 686 - 687 - if (qlcnic_83xx_check(adapter) && 688 - (tx_ring > QLCNIC_SINGLE_RING)) { 689 - netdev_err(adapter->netdev, 690 - "Invalid ring count, Tx ring count %d should not be greater than %d driver Tx rings.\n", 691 - tx_ring, QLCNIC_SINGLE_RING); 692 - return -EINVAL; 693 } 694 } 695 ··· 943 struct qlcnic_hardware_context *ahw = adapter->ahw; 944 struct qlcnic_cmd_args cmd; 945 int ret, drv_sds_rings = adapter->drv_sds_rings; 946 947 if (qlcnic_83xx_check(adapter)) 948 return qlcnic_83xx_interrupt_test(netdev); ··· 976 977 clear_diag_irq: 978 adapter->drv_sds_rings = drv_sds_rings; 979 clear_bit(__QLCNIC_RESETTING, &adapter->state); 980 981 return ret;
··· 667 static int qlcnic_validate_ring_count(struct qlcnic_adapter *adapter, 668 u8 rx_ring, u8 tx_ring) 669 { 670 + if (rx_ring == 0 || tx_ring == 0) 671 + return -EINVAL; 672 + 673 if (rx_ring != 0) { 674 if (rx_ring > adapter->max_sds_rings) { 675 + netdev_err(adapter->netdev, 676 + "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n", 677 rx_ring, adapter->max_sds_rings); 678 return -EINVAL; 679 } 680 } 681 682 if (tx_ring != 0) { 683 + if (tx_ring > adapter->max_tx_rings) { 684 netdev_err(adapter->netdev, 685 "Invalid ring count, Tx ring count %d should not be greater than max %d driver Tx rings.\n", 686 tx_ring, adapter->max_tx_rings); 687 return -EINVAL; 688 } 689 } 690 ··· 948 struct qlcnic_hardware_context *ahw = adapter->ahw; 949 struct qlcnic_cmd_args cmd; 950 int ret, drv_sds_rings = adapter->drv_sds_rings; 951 + int drv_tx_rings = adapter->drv_tx_rings; 952 953 if (qlcnic_83xx_check(adapter)) 954 return qlcnic_83xx_interrupt_test(netdev); ··· 980 981 clear_diag_irq: 982 adapter->drv_sds_rings = drv_sds_rings; 983 + adapter->drv_tx_rings = drv_tx_rings; 984 clear_bit(__QLCNIC_RESETTING, &adapter->state); 985 986 return ret;
+2 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
··· 687 if (adapter->ahw->linkup && !linkup) { 688 netdev_info(netdev, "NIC Link is down\n"); 689 adapter->ahw->linkup = 0; 690 - if (netif_running(netdev)) { 691 - netif_carrier_off(netdev); 692 - netif_tx_stop_all_queues(netdev); 693 - } 694 } else if (!adapter->ahw->linkup && linkup) { 695 netdev_info(netdev, "NIC Link is up\n"); 696 adapter->ahw->linkup = 1; 697 - if (netif_running(netdev)) { 698 - netif_carrier_on(netdev); 699 - netif_wake_queue(netdev); 700 - } 701 } 702 } 703
··· 687 if (adapter->ahw->linkup && !linkup) { 688 netdev_info(netdev, "NIC Link is down\n"); 689 adapter->ahw->linkup = 0; 690 + netif_carrier_off(netdev); 691 } else if (!adapter->ahw->linkup && linkup) { 692 netdev_info(netdev, "NIC Link is up\n"); 693 adapter->ahw->linkup = 1; 694 + netif_carrier_on(netdev); 695 } 696 } 697
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 1178 } else { 1179 adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE; 1180 adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS; 1181 adapter->flags &= ~QLCNIC_ESWITCH_ENABLED; 1182 } 1183 ··· 1941 qlcnic_detach(adapter); 1942 1943 adapter->drv_sds_rings = QLCNIC_SINGLE_RING; 1944 - adapter->drv_tx_rings = QLCNIC_SINGLE_RING; 1945 adapter->ahw->diag_test = test; 1946 adapter->ahw->linkup = 0; 1947
··· 1178 } else { 1179 adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE; 1180 adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS; 1181 + adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS; 1182 adapter->flags &= ~QLCNIC_ESWITCH_ENABLED; 1183 } 1184 ··· 1940 qlcnic_detach(adapter); 1941 1942 adapter->drv_sds_rings = QLCNIC_SINGLE_RING; 1943 adapter->ahw->diag_test = test; 1944 adapter->ahw->linkup = 0; 1945
-1
drivers/net/hyperv/netvsc_drv.c
··· 327 return -EINVAL; 328 329 nvdev->start_remove = true; 330 - cancel_delayed_work_sync(&ndevctx->dwork); 331 cancel_work_sync(&ndevctx->work); 332 netif_tx_disable(ndev); 333 rndis_filter_device_remove(hdev);
··· 327 return -EINVAL; 328 329 nvdev->start_remove = true; 330 cancel_work_sync(&ndevctx->work); 331 netif_tx_disable(ndev); 332 rndis_filter_device_remove(hdev);
+3
drivers/net/xen-netback/netback.c
··· 1197 1198 err = -EPROTO; 1199 1200 switch (ip_hdr(skb)->protocol) { 1201 case IPPROTO_TCP: 1202 err = maybe_pull_tail(skb,
··· 1197 1198 err = -EPROTO; 1199 1200 + if (fragment) 1201 + goto out; 1202 + 1203 switch (ip_hdr(skb)->protocol) { 1204 case IPPROTO_TCP: 1205 err = maybe_pull_tail(skb,
+2 -2
drivers/phy/Kconfig
··· 24 config OMAP_USB2 25 tristate "OMAP USB2 PHY Driver" 26 depends on ARCH_OMAP2PLUS 27 select GENERIC_PHY 28 - select USB_PHY 29 select OMAP_CONTROL_USB 30 help 31 Enable this to support the transceiver that is part of SOC. This ··· 36 config TWL4030_USB 37 tristate "TWL4030 USB Transceiver Driver" 38 depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS 39 select GENERIC_PHY 40 - select USB_PHY 41 help 42 Enable this to support the USB OTG transceiver on TWL4030 43 family chips (including the TWL5030 and TPS659x0 devices).
··· 24 config OMAP_USB2 25 tristate "OMAP USB2 PHY Driver" 26 depends on ARCH_OMAP2PLUS 27 + depends on USB_PHY 28 select GENERIC_PHY 29 select OMAP_CONTROL_USB 30 help 31 Enable this to support the transceiver that is part of SOC. This ··· 36 config TWL4030_USB 37 tristate "TWL4030 USB Transceiver Driver" 38 depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS 39 + depends on USB_PHY 40 select GENERIC_PHY 41 help 42 Enable this to support the USB OTG transceiver on TWL4030 43 family chips (including the TWL5030 and TPS659x0 devices).
+10 -16
drivers/phy/phy-core.c
··· 437 int id; 438 struct phy *phy; 439 440 - if (!dev) { 441 - dev_WARN(dev, "no device provided for PHY\n"); 442 - ret = -EINVAL; 443 - goto err0; 444 - } 445 446 phy = kzalloc(sizeof(*phy), GFP_KERNEL); 447 - if (!phy) { 448 - ret = -ENOMEM; 449 - goto err0; 450 - } 451 452 id = ida_simple_get(&phy_ida, 0, 0, GFP_KERNEL); 453 if (id < 0) { 454 dev_err(dev, "unable to get id\n"); 455 ret = id; 456 - goto err0; 457 } 458 459 device_initialize(&phy->dev); ··· 463 464 ret = dev_set_name(&phy->dev, "phy-%s.%d", dev_name(dev), id); 465 if (ret) 466 - goto err1; 467 468 ret = device_add(&phy->dev); 469 if (ret) 470 - goto err1; 471 472 if (pm_runtime_enabled(dev)) { 473 pm_runtime_enable(&phy->dev); ··· 476 477 return phy; 478 479 - err1: 480 - ida_remove(&phy_ida, phy->id); 481 put_device(&phy->dev); 482 kfree(phy); 483 - 484 - err0: 485 return ERR_PTR(ret); 486 } 487 EXPORT_SYMBOL_GPL(phy_create);
··· 437 int id; 438 struct phy *phy; 439 440 + if (WARN_ON(!dev)) 441 + return ERR_PTR(-EINVAL); 442 443 phy = kzalloc(sizeof(*phy), GFP_KERNEL); 444 + if (!phy) 445 + return ERR_PTR(-ENOMEM); 446 447 id = ida_simple_get(&phy_ida, 0, 0, GFP_KERNEL); 448 if (id < 0) { 449 dev_err(dev, "unable to get id\n"); 450 ret = id; 451 + goto free_phy; 452 } 453 454 device_initialize(&phy->dev); ··· 468 469 ret = dev_set_name(&phy->dev, "phy-%s.%d", dev_name(dev), id); 470 if (ret) 471 + goto put_dev; 472 473 ret = device_add(&phy->dev); 474 if (ret) 475 + goto put_dev; 476 477 if (pm_runtime_enabled(dev)) { 478 pm_runtime_enable(&phy->dev); ··· 481 482 return phy; 483 484 + put_dev: 485 put_device(&phy->dev); 486 + ida_remove(&phy_ida, phy->id); 487 + free_phy: 488 kfree(phy); 489 return ERR_PTR(ret); 490 } 491 EXPORT_SYMBOL_GPL(phy_create);
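The phy_create() rework above is the usual goto-ladder error path: each failure jumps to a label that unwinds only the steps that already succeeded, in reverse order. A minimal standalone sketch of the pattern with made-up resources (the names and the failing step are illustrative, not the phy API):

#include <stdio.h>
#include <stdlib.h>

struct widget {
        int id;
        char *buf;
};

/* Stand-in for a registration step that can fail. */
static int register_widget(struct widget *w)
{
        return w->id < 0 ? -1 : 0;
}

static struct widget *widget_create(int id, size_t len)
{
        struct widget *w;

        w = malloc(sizeof(*w));
        if (!w)
                return NULL;
        w->id = id;

        w->buf = malloc(len);
        if (!w->buf)
                goto free_widget;               /* undo step 1 only */

        if (register_widget(w))
                goto free_buf;                  /* undo steps 2 then 1 */

        return w;

free_buf:
        free(w->buf);
free_widget:
        free(w);
        return NULL;
}

int main(void)
{
        struct widget *w = widget_create(1, 64);

        if (w) {
                printf("created widget %d\n", w->id);
                free(w->buf);
                free(w);
        }
        return 0;
}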
+1 -1
drivers/pinctrl/sh-pfc/sh_pfc.h
··· 254 #define PINMUX_GPIO(_pin) \ 255 [GPIO_##_pin] = { \ 256 .pin = (u16)-1, \ 257 - .name = __stringify(name), \ 258 .enum_id = _pin##_DATA, \ 259 } 260
··· 254 #define PINMUX_GPIO(_pin) \ 255 [GPIO_##_pin] = { \ 256 .pin = (u16)-1, \ 257 + .name = __stringify(GPIO_##_pin), \ 258 .enum_id = _pin##_DATA, \ 259 } 260
+1 -1
drivers/regulator/s2mps11.c
··· 438 platform_set_drvdata(pdev, s2mps11); 439 440 config.dev = &pdev->dev; 441 - config.regmap = iodev->regmap; 442 config.driver_data = s2mps11; 443 for (i = 0; i < S2MPS11_REGULATOR_MAX; i++) { 444 if (!reg_np) {
··· 438 platform_set_drvdata(pdev, s2mps11); 439 440 config.dev = &pdev->dev; 441 + config.regmap = iodev->regmap_pmic; 442 config.driver_data = s2mps11; 443 for (i = 0; i < S2MPS11_REGULATOR_MAX; i++) { 444 if (!reg_np) {
+6 -4
drivers/scsi/qla2xxx/qla_target.c
··· 471 schedule_delayed_work(&tgt->sess_del_work, 0); 472 else 473 schedule_delayed_work(&tgt->sess_del_work, 474 - jiffies - sess->expires); 475 } 476 477 /* ha->hardware_lock supposed to be held on entry */ ··· 550 struct scsi_qla_host *vha = tgt->vha; 551 struct qla_hw_data *ha = vha->hw; 552 struct qla_tgt_sess *sess; 553 - unsigned long flags; 554 555 spin_lock_irqsave(&ha->hardware_lock, flags); 556 while (!list_empty(&tgt->del_sess_list)) { 557 sess = list_entry(tgt->del_sess_list.next, typeof(*sess), 558 del_list_entry); 559 - if (time_after_eq(jiffies, sess->expires)) { 560 qlt_undelete_sess(sess); 561 562 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf004, ··· 567 ha->tgt.tgt_ops->put_sess(sess); 568 } else { 569 schedule_delayed_work(&tgt->sess_del_work, 570 - jiffies - sess->expires); 571 break; 572 } 573 } ··· 4291 if (rc != 0) { 4292 ha->tgt.tgt_ops = NULL; 4293 ha->tgt.target_lport_ptr = NULL; 4294 } 4295 mutex_unlock(&qla_tgt_mutex); 4296 return rc;
··· 471 schedule_delayed_work(&tgt->sess_del_work, 0); 472 else 473 schedule_delayed_work(&tgt->sess_del_work, 474 + sess->expires - jiffies); 475 } 476 477 /* ha->hardware_lock supposed to be held on entry */ ··· 550 struct scsi_qla_host *vha = tgt->vha; 551 struct qla_hw_data *ha = vha->hw; 552 struct qla_tgt_sess *sess; 553 + unsigned long flags, elapsed; 554 555 spin_lock_irqsave(&ha->hardware_lock, flags); 556 while (!list_empty(&tgt->del_sess_list)) { 557 sess = list_entry(tgt->del_sess_list.next, typeof(*sess), 558 del_list_entry); 559 + elapsed = jiffies; 560 + if (time_after_eq(elapsed, sess->expires)) { 561 qlt_undelete_sess(sess); 562 563 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf004, ··· 566 ha->tgt.tgt_ops->put_sess(sess); 567 } else { 568 schedule_delayed_work(&tgt->sess_del_work, 569 + sess->expires - elapsed); 570 break; 571 } 572 } ··· 4290 if (rc != 0) { 4291 ha->tgt.tgt_ops = NULL; 4292 ha->tgt.target_lport_ptr = NULL; 4293 + scsi_host_put(host); 4294 } 4295 mutex_unlock(&qla_tgt_mutex); 4296 return rc;
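The qla_target hunk above flips the delay handed to schedule_delayed_work() from jiffies - sess->expires to sess->expires - jiffies; with a deadline in the future, the first form wraps around to an enormous unsigned delay instead of the intended short one. A tiny standalone illustration of that unsigned arithmetic (the tick values are made up):

#include <stdio.h>

int main(void)
{
        unsigned long now = 1000;       /* pretend current jiffies */
        unsigned long expires = 1250;   /* deadline 250 ticks ahead */

        unsigned long wrong = now - expires;    /* wraps to a huge value */
        unsigned long right = expires - now;    /* 250 ticks, as intended */

        printf("now - expires = %lu\n", wrong);
        printf("expires - now = %lu\n", right);
        return 0;
}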
+1 -1
drivers/staging/comedi/drivers.c
··· 446 release_firmware(fw); 447 } 448 449 - return ret; 450 } 451 EXPORT_SYMBOL_GPL(comedi_load_firmware); 452
··· 446 release_firmware(fw); 447 } 448 449 + return ret < 0 ? ret : 0; 450 } 451 EXPORT_SYMBOL_GPL(comedi_load_firmware); 452
+12 -3
drivers/staging/comedi/drivers/8255_pci.c
··· 63 BOARD_ADLINK_PCI7296, 64 BOARD_CB_PCIDIO24, 65 BOARD_CB_PCIDIO24H, 66 - BOARD_CB_PCIDIO48H, 67 BOARD_CB_PCIDIO96H, 68 BOARD_NI_PCIDIO96, 69 BOARD_NI_PCIDIO96B, ··· 107 .dio_badr = 2, 108 .n_8255 = 1, 109 }, 110 - [BOARD_CB_PCIDIO48H] = { 111 .name = "cb_pci-dio48h", 112 .dio_badr = 1, 113 .n_8255 = 2, 114 }, 115 [BOARD_CB_PCIDIO96H] = { ··· 269 { PCI_VDEVICE(ADLINK, 0x7296), BOARD_ADLINK_PCI7296 }, 270 { PCI_VDEVICE(CB, 0x0028), BOARD_CB_PCIDIO24 }, 271 { PCI_VDEVICE(CB, 0x0014), BOARD_CB_PCIDIO24H }, 272 - { PCI_VDEVICE(CB, 0x000b), BOARD_CB_PCIDIO48H }, 273 { PCI_VDEVICE(CB, 0x0017), BOARD_CB_PCIDIO96H }, 274 { PCI_VDEVICE(NI, 0x0160), BOARD_NI_PCIDIO96 }, 275 { PCI_VDEVICE(NI, 0x1630), BOARD_NI_PCIDIO96B },
··· 63 BOARD_ADLINK_PCI7296, 64 BOARD_CB_PCIDIO24, 65 BOARD_CB_PCIDIO24H, 66 + BOARD_CB_PCIDIO48H_OLD, 67 + BOARD_CB_PCIDIO48H_NEW, 68 BOARD_CB_PCIDIO96H, 69 BOARD_NI_PCIDIO96, 70 BOARD_NI_PCIDIO96B, ··· 106 .dio_badr = 2, 107 .n_8255 = 1, 108 }, 109 + [BOARD_CB_PCIDIO48H_OLD] = { 110 .name = "cb_pci-dio48h", 111 .dio_badr = 1, 112 + .n_8255 = 2, 113 + }, 114 + [BOARD_CB_PCIDIO48H_NEW] = { 115 + .name = "cb_pci-dio48h", 116 + .dio_badr = 2, 117 .n_8255 = 2, 118 }, 119 [BOARD_CB_PCIDIO96H] = { ··· 263 { PCI_VDEVICE(ADLINK, 0x7296), BOARD_ADLINK_PCI7296 }, 264 { PCI_VDEVICE(CB, 0x0028), BOARD_CB_PCIDIO24 }, 265 { PCI_VDEVICE(CB, 0x0014), BOARD_CB_PCIDIO24H }, 266 + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, 0x0000, 0x0000), 267 + .driver_data = BOARD_CB_PCIDIO48H_OLD }, 268 + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, PCI_VENDOR_ID_CB, 0x000b), 269 + .driver_data = BOARD_CB_PCIDIO48H_NEW }, 270 { PCI_VDEVICE(CB, 0x0017), BOARD_CB_PCIDIO96H }, 271 { PCI_VDEVICE(NI, 0x0160), BOARD_NI_PCIDIO96 }, 272 { PCI_VDEVICE(NI, 0x1630), BOARD_NI_PCIDIO96B },
+6 -1
drivers/staging/iio/magnetometer/hmc5843.c
··· 451 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \ 452 BIT(IIO_CHAN_INFO_SAMP_FREQ), \ 453 .scan_index = idx, \ 454 - .scan_type = IIO_ST('s', 16, 16, IIO_BE), \ 455 } 456 457 static const struct iio_chan_spec hmc5843_channels[] = {
··· 451 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \ 452 BIT(IIO_CHAN_INFO_SAMP_FREQ), \ 453 .scan_index = idx, \ 454 + .scan_type = { \ 455 + .sign = 's', \ 456 + .realbits = 16, \ 457 + .storagebits = 16, \ 458 + .endianness = IIO_BE, \ 459 + }, \ 460 } 461 462 static const struct iio_chan_spec hmc5843_channels[] = {
+29 -10
drivers/staging/imx-drm/imx-drm-core.c
··· 88 89 imx_drm_device_put(); 90 91 - drm_mode_config_cleanup(imxdrm->drm); 92 drm_kms_helper_poll_fini(imxdrm->drm); 93 94 return 0; 95 } ··· 200 if (!file->is_master) 201 return; 202 203 - for (i = 0; i < 4; i++) 204 - imx_drm_disable_vblank(drm , i); 205 } 206 207 static const struct file_operations imx_drm_driver_fops = { ··· 377 struct imx_drm_device *imxdrm = __imx_drm_device(); 378 int ret; 379 380 - drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc, 381 - imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs); 382 ret = drm_mode_crtc_set_gamma_size(imx_drm_crtc->crtc, 256); 383 if (ret) 384 return ret; 385 386 drm_crtc_helper_add(imx_drm_crtc->crtc, 387 imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs); 388 389 drm_mode_group_reinit(imxdrm->drm); 390 ··· 430 ret = drm_mode_group_init_legacy_group(imxdrm->drm, 431 &imxdrm->drm->primary->mode_group); 432 if (ret) 433 - goto err_init; 434 435 ret = drm_vblank_init(imxdrm->drm, MAX_CRTC); 436 if (ret) 437 - goto err_init; 438 439 /* 440 * with vblank_disable_allowed = true, vblank interrupt will be disabled ··· 443 */ 444 imxdrm->drm->vblank_disable_allowed = true; 445 446 - if (!imx_drm_device_get()) 447 ret = -EINVAL; 448 449 - ret = 0; 450 451 - err_init: 452 mutex_unlock(&imxdrm->mutex); 453 454 return ret; ··· 501 502 mutex_lock(&imxdrm->mutex); 503 504 if (imxdrm->drm->open_count) { 505 ret = -EBUSY; 506 goto err_busy; ··· 546 return 0; 547 548 err_register: 549 kfree(imx_drm_crtc); 550 err_alloc: 551 err_busy:
··· 88 89 imx_drm_device_put(); 90 91 + drm_vblank_cleanup(imxdrm->drm); 92 drm_kms_helper_poll_fini(imxdrm->drm); 93 + drm_mode_config_cleanup(imxdrm->drm); 94 95 return 0; 96 } ··· 199 if (!file->is_master) 200 return; 201 202 + for (i = 0; i < MAX_CRTC; i++) 203 + imx_drm_disable_vblank(drm, i); 204 } 205 206 static const struct file_operations imx_drm_driver_fops = { ··· 376 struct imx_drm_device *imxdrm = __imx_drm_device(); 377 int ret; 378 379 ret = drm_mode_crtc_set_gamma_size(imx_drm_crtc->crtc, 256); 380 if (ret) 381 return ret; 382 383 drm_crtc_helper_add(imx_drm_crtc->crtc, 384 imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs); 385 + 386 + drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc, 387 + imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs); 388 389 drm_mode_group_reinit(imxdrm->drm); 390 ··· 428 ret = drm_mode_group_init_legacy_group(imxdrm->drm, 429 &imxdrm->drm->primary->mode_group); 430 if (ret) 431 + goto err_kms; 432 433 ret = drm_vblank_init(imxdrm->drm, MAX_CRTC); 434 if (ret) 435 + goto err_kms; 436 437 /* 438 * with vblank_disable_allowed = true, vblank interrupt will be disabled ··· 441 */ 442 imxdrm->drm->vblank_disable_allowed = true; 443 444 + if (!imx_drm_device_get()) { 445 ret = -EINVAL; 446 + goto err_vblank; 447 + } 448 449 + mutex_unlock(&imxdrm->mutex); 450 + return 0; 451 452 + err_vblank: 453 + drm_vblank_cleanup(drm); 454 + err_kms: 455 + drm_kms_helper_poll_fini(drm); 456 + drm_mode_config_cleanup(drm); 457 mutex_unlock(&imxdrm->mutex); 458 459 return ret; ··· 492 493 mutex_lock(&imxdrm->mutex); 494 495 + /* 496 + * The vblank arrays are dimensioned by MAX_CRTC - we can't 497 + * pass IDs greater than this to those functions. 498 + */ 499 + if (imxdrm->pipes >= MAX_CRTC) { 500 + ret = -EINVAL; 501 + goto err_busy; 502 + } 503 + 504 if (imxdrm->drm->open_count) { 505 ret = -EBUSY; 506 goto err_busy; ··· 528 return 0; 529 530 err_register: 531 + list_del(&imx_drm_crtc->list); 532 kfree(imx_drm_crtc); 533 err_alloc: 534 err_busy:
-9
drivers/staging/imx-drm/imx-tve.c
··· 114 struct drm_encoder encoder; 115 struct imx_drm_encoder *imx_drm_encoder; 116 struct device *dev; 117 - spinlock_t enable_lock; /* serializes tve_enable/disable */ 118 spinlock_t lock; /* register lock */ 119 bool enabled; 120 int mode; ··· 145 146 static void tve_enable(struct imx_tve *tve) 147 { 148 - unsigned long flags; 149 int ret; 150 151 - spin_lock_irqsave(&tve->enable_lock, flags); 152 if (!tve->enabled) { 153 tve->enabled = true; 154 clk_prepare_enable(tve->clk); ··· 166 TVE_CD_SM_IEN | 167 TVE_CD_LM_IEN | 168 TVE_CD_MON_END_IEN); 169 - 170 - spin_unlock_irqrestore(&tve->enable_lock, flags); 171 } 172 173 static void tve_disable(struct imx_tve *tve) 174 { 175 - unsigned long flags; 176 int ret; 177 178 - spin_lock_irqsave(&tve->enable_lock, flags); 179 if (tve->enabled) { 180 tve->enabled = false; 181 ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, 182 TVE_IPU_CLK_EN | TVE_EN, 0); 183 clk_disable_unprepare(tve->clk); 184 } 185 - spin_unlock_irqrestore(&tve->enable_lock, flags); 186 } 187 188 static int tve_setup_tvout(struct imx_tve *tve) ··· 593 594 tve->dev = &pdev->dev; 595 spin_lock_init(&tve->lock); 596 - spin_lock_init(&tve->enable_lock); 597 598 ddc_node = of_parse_phandle(np, "ddc", 0); 599 if (ddc_node) {
··· 114 struct drm_encoder encoder; 115 struct imx_drm_encoder *imx_drm_encoder; 116 struct device *dev; 117 spinlock_t lock; /* register lock */ 118 bool enabled; 119 int mode; ··· 146 147 static void tve_enable(struct imx_tve *tve) 148 { 149 int ret; 150 151 if (!tve->enabled) { 152 tve->enabled = true; 153 clk_prepare_enable(tve->clk); ··· 169 TVE_CD_SM_IEN | 170 TVE_CD_LM_IEN | 171 TVE_CD_MON_END_IEN); 172 } 173 174 static void tve_disable(struct imx_tve *tve) 175 { 176 int ret; 177 178 if (tve->enabled) { 179 tve->enabled = false; 180 ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, 181 TVE_IPU_CLK_EN | TVE_EN, 0); 182 clk_disable_unprepare(tve->clk); 183 } 184 } 185 186 static int tve_setup_tvout(struct imx_tve *tve) ··· 601 602 tve->dev = &pdev->dev; 603 spin_lock_init(&tve->lock); 604 605 ddc_node = of_parse_phandle(np, "ddc", 0); 606 if (ddc_node) {
+16 -16
drivers/staging/imx-drm/ipu-v3/ipu-common.c
··· 996 }, 997 }; 998 999 static int ipu_client_id; 1000 - 1001 - static int ipu_add_subdevice_pdata(struct device *dev, 1002 - const struct ipu_platform_reg *reg) 1003 - { 1004 - struct platform_device *pdev; 1005 - 1006 - pdev = platform_device_register_data(dev, reg->name, ipu_client_id++, 1007 - &reg->pdata, sizeof(struct ipu_platform_reg)); 1008 - 1009 - return PTR_ERR_OR_ZERO(pdev); 1010 - } 1011 1012 static int ipu_add_client_devices(struct ipu_soc *ipu) 1013 { 1014 - int ret; 1015 - int i; 1016 1017 for (i = 0; i < ARRAY_SIZE(client_reg); i++) { 1018 const struct ipu_platform_reg *reg = &client_reg[i]; 1019 - ret = ipu_add_subdevice_pdata(ipu->dev, reg); 1020 - if (ret) 1021 goto err_register; 1022 } 1023 1024 return 0; 1025 1026 err_register: 1027 - platform_device_unregister_children(to_platform_device(ipu->dev)); 1028 1029 return ret; 1030 }
··· 996 }, 997 }; 998 999 + static DEFINE_MUTEX(ipu_client_id_mutex); 1000 static int ipu_client_id; 1001 1002 static int ipu_add_client_devices(struct ipu_soc *ipu) 1003 { 1004 + struct device *dev = ipu->dev; 1005 + unsigned i; 1006 + int id, ret; 1007 + 1008 + mutex_lock(&ipu_client_id_mutex); 1009 + id = ipu_client_id; 1010 + ipu_client_id += ARRAY_SIZE(client_reg); 1011 + mutex_unlock(&ipu_client_id_mutex); 1012 1013 for (i = 0; i < ARRAY_SIZE(client_reg); i++) { 1014 const struct ipu_platform_reg *reg = &client_reg[i]; 1015 + struct platform_device *pdev; 1016 + 1017 + pdev = platform_device_register_data(dev, reg->name, 1018 + id++, &reg->pdata, sizeof(reg->pdata)); 1019 + 1020 + if (IS_ERR(pdev)) 1021 goto err_register; 1022 } 1023 1024 return 0; 1025 1026 err_register: 1027 + platform_device_unregister_children(to_platform_device(dev)); 1028 1029 return ret; 1030 }
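Note on the ipu-common.c change above: the rework reserves a contiguous block of client IDs under a mutex, then registers the child platform devices with those IDs outside the lock. A minimal userspace sketch of that reservation pattern, with a pthread mutex standing in for the kernel mutex; reserve_id_block() and NUM_CLIENTS are illustrative names, not part of the driver:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_CLIENTS 4   /* illustrative; the driver uses ARRAY_SIZE(client_reg) */

    static pthread_mutex_t id_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int next_id;

    /* Reserve a contiguous block of 'count' IDs and return the first one. */
    static int reserve_id_block(int count)
    {
            int first;

            pthread_mutex_lock(&id_mutex);
            first = next_id;
            next_id += count;
            pthread_mutex_unlock(&id_mutex);

            return first;
    }

    int main(void)
    {
            int id = reserve_id_block(NUM_CLIENTS);
            int i;

            for (i = 0; i < NUM_CLIENTS; i++)
                    printf("registering client with id %d\n", id + i);
            return 0;
    }

Grabbing the whole block up front means concurrent callers can never interleave their ID assignments.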
+13 -14
drivers/target/iscsi/iscsi_target.c
··· 465 */ 466 send_sig(SIGINT, np->np_thread, 1); 467 kthread_stop(np->np_thread); 468 } 469 470 np->np_transport->iscsit_free_np(np); ··· 824 if (((hdr->flags & ISCSI_FLAG_CMD_READ) || 825 (hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) { 826 /* 827 - * Vmware ESX v3.0 uses a modified Cisco Initiator (v3.4.2) 828 - * that adds support for RESERVE/RELEASE. There is a bug 829 - * add with this new functionality that sets R/W bits when 830 - * neither CDB carries any READ or WRITE datapayloads. 831 */ 832 - if ((hdr->cdb[0] == 0x16) || (hdr->cdb[0] == 0x17)) { 833 - hdr->flags &= ~ISCSI_FLAG_CMD_READ; 834 - hdr->flags &= ~ISCSI_FLAG_CMD_WRITE; 835 - goto done; 836 - } 837 838 - pr_err("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE" 839 " set when Expected Data Transfer Length is 0 for" 840 - " CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]); 841 - return iscsit_add_reject_cmd(cmd, 842 - ISCSI_REASON_BOOKMARK_INVALID, buf); 843 } 844 - done: 845 846 if (!(hdr->flags & ISCSI_FLAG_CMD_READ) && 847 !(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
··· 465 */ 466 send_sig(SIGINT, np->np_thread, 1); 467 kthread_stop(np->np_thread); 468 + np->np_thread = NULL; 469 } 470 471 np->np_transport->iscsit_free_np(np); ··· 823 if (((hdr->flags & ISCSI_FLAG_CMD_READ) || 824 (hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) { 825 /* 826 + * From RFC-3720 Section 10.3.1: 827 + * 828 + * "Either or both of R and W MAY be 1 when either the 829 + * Expected Data Transfer Length and/or Bidirectional Read 830 + * Expected Data Transfer Length are 0" 831 + * 832 + * For this case, go ahead and clear the unnecssary bits 833 + * to avoid any confusion with ->data_direction. 834 */ 835 + hdr->flags &= ~ISCSI_FLAG_CMD_READ; 836 + hdr->flags &= ~ISCSI_FLAG_CMD_WRITE; 837 838 + pr_warn("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE" 839 " set when Expected Data Transfer Length is 0 for" 840 + " CDB: 0x%02x, Fixing up flags\n", hdr->cdb[0]); 841 } 842 843 if (!(hdr->flags & ISCSI_FLAG_CMD_READ) && 844 !(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
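Note on the iscsi_target.c change above: per the quoted RFC-3720 Section 10.3.1 text, the R/W flags are now cleared and a warning is logged instead of rejecting the command. A tiny standalone sketch of the same bit-masking; the flag values below are illustrative placeholders, not the iSCSI wire encoding:

    #include <stdio.h>

    #define FLAG_CMD_READ  0x40     /* illustrative stand-ins for the iSCSI flag bits */
    #define FLAG_CMD_WRITE 0x20

    int main(void)
    {
            unsigned char flags = FLAG_CMD_READ | FLAG_CMD_WRITE;
            unsigned int data_length = 0;

            /* R and/or W set with a zero Expected Data Transfer Length:
             * clear both bits so they cannot influence the data direction. */
            if ((flags & (FLAG_CMD_READ | FLAG_CMD_WRITE)) && !data_length)
                    flags &= ~(FLAG_CMD_READ | FLAG_CMD_WRITE);

            printf("flags after fixup: 0x%02x\n", flags);
            return 0;
    }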
+2 -1
drivers/target/iscsi/iscsi_target_configfs.c
··· 474 \ 475 if (!capable(CAP_SYS_ADMIN)) \ 476 return -EPERM; \ 477 - \ 478 snprintf(auth->name, sizeof(auth->name), "%s", page); \ 479 if (!strncmp("NULL", auth->name, 4)) \ 480 auth->naf_flags &= ~flags; \
··· 474 \ 475 if (!capable(CAP_SYS_ADMIN)) \ 476 return -EPERM; \ 477 + if (count >= sizeof(auth->name)) \ 478 + return -EINVAL; \ 479 snprintf(auth->name, sizeof(auth->name), "%s", page); \ 480 if (!strncmp("NULL", auth->name, 4)) \ 481 auth->naf_flags &= ~flags; \
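Note on the iscsi_target_configfs.c change above: the added length check rejects oversized writes with -EINVAL instead of letting snprintf() silently truncate them into the fixed-size buffer. A small userspace demonstration of that truncation, assuming a 16-byte buffer; store_name() and NAME_LEN are illustrative only:

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16     /* illustrative; the target code uses sizeof(auth->name) */

    static int store_name(char *dst, size_t dst_len, const char *src, size_t count)
    {
            /* Reject input that would not fit, mirroring the new -EINVAL check. */
            if (count >= dst_len)
                    return -1;

            snprintf(dst, dst_len, "%s", src);
            return 0;
    }

    int main(void)
    {
            char name[NAME_LEN];
            const char *too_long = "this-user-name-is-way-too-long";

            /* Without the length check, snprintf() quietly truncates: */
            snprintf(name, sizeof(name), "%s", too_long);
            printf("truncated copy: \"%s\"\n", name);

            if (store_name(name, sizeof(name), too_long, strlen(too_long)) < 0)
                    printf("rejected oversized input instead of truncating\n");
            return 0;
    }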
-6
drivers/target/iscsi/iscsi_target_login.c
··· 1403 1404 out: 1405 stop = kthread_should_stop(); 1406 - if (!stop && signal_pending(current)) { 1407 - spin_lock_bh(&np->np_thread_lock); 1408 - stop = (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN); 1409 - spin_unlock_bh(&np->np_thread_lock); 1410 - } 1411 /* Wait for another socket.. */ 1412 if (!stop) 1413 return 1; ··· 1410 iscsi_stop_login_thread_timer(np); 1411 spin_lock_bh(&np->np_thread_lock); 1412 np->np_thread_state = ISCSI_NP_THREAD_EXIT; 1413 - np->np_thread = NULL; 1414 spin_unlock_bh(&np->np_thread_lock); 1415 1416 return 0;
··· 1403 1404 out: 1405 stop = kthread_should_stop(); 1406 /* Wait for another socket.. */ 1407 if (!stop) 1408 return 1; ··· 1415 iscsi_stop_login_thread_timer(np); 1416 spin_lock_bh(&np->np_thread_lock); 1417 np->np_thread_state = ISCSI_NP_THREAD_EXIT; 1418 spin_unlock_bh(&np->np_thread_lock); 1419 1420 return 0;
+5
drivers/target/target_core_device.c
··· 1106 dev->dev_attrib.block_size = block_size; 1107 pr_debug("dev[%p]: SE Device block_size changed to %u\n", 1108 dev, block_size); 1109 return 0; 1110 } 1111
··· 1106 dev->dev_attrib.block_size = block_size; 1107 pr_debug("dev[%p]: SE Device block_size changed to %u\n", 1108 dev, block_size); 1109 + 1110 + if (dev->dev_attrib.max_bytes_per_io) 1111 + dev->dev_attrib.hw_max_sectors = 1112 + dev->dev_attrib.max_bytes_per_io / block_size; 1113 + 1114 return 0; 1115 } 1116
+4 -4
drivers/target/target_core_file.c
··· 66 pr_debug("CORE_HBA[%d] - TCM FILEIO HBA Driver %s on Generic" 67 " Target Core Stack %s\n", hba->hba_id, FD_VERSION, 68 TARGET_CORE_MOD_VERSION); 69 - pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic" 70 - " MaxSectors: %u\n", 71 - hba->hba_id, fd_host->fd_host_id, FD_MAX_SECTORS); 72 73 return 0; 74 } ··· 219 } 220 221 dev->dev_attrib.hw_block_size = fd_dev->fd_block_size; 222 - dev->dev_attrib.hw_max_sectors = FD_MAX_SECTORS; 223 dev->dev_attrib.hw_queue_depth = FD_MAX_DEVICE_QUEUE_DEPTH; 224 225 if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
··· 66 pr_debug("CORE_HBA[%d] - TCM FILEIO HBA Driver %s on Generic" 67 " Target Core Stack %s\n", hba->hba_id, FD_VERSION, 68 TARGET_CORE_MOD_VERSION); 69 + pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic\n", 70 + hba->hba_id, fd_host->fd_host_id); 71 72 return 0; 73 } ··· 220 } 221 222 dev->dev_attrib.hw_block_size = fd_dev->fd_block_size; 223 + dev->dev_attrib.max_bytes_per_io = FD_MAX_BYTES; 224 + dev->dev_attrib.hw_max_sectors = FD_MAX_BYTES / fd_dev->fd_block_size; 225 dev->dev_attrib.hw_queue_depth = FD_MAX_DEVICE_QUEUE_DEPTH; 226 227 if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+4 -1
drivers/target/target_core_file.h
··· 7 #define FD_DEVICE_QUEUE_DEPTH 32 8 #define FD_MAX_DEVICE_QUEUE_DEPTH 128 9 #define FD_BLOCKSIZE 512 10 - #define FD_MAX_SECTORS 2048 11 12 #define RRF_EMULATE_CDB 0x01 13 #define RRF_GOT_LBA 0x02
··· 7 #define FD_DEVICE_QUEUE_DEPTH 32 8 #define FD_MAX_DEVICE_QUEUE_DEPTH 128 9 #define FD_BLOCKSIZE 512 10 + /* 11 + * Limited by the number of iovecs (2048) per vfs_[writev,readv] call 12 + */ 13 + #define FD_MAX_BYTES 8388608 14 15 #define RRF_EMULATE_CDB 0x01 16 #define RRF_GOT_LBA 0x02
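Note on the FILEIO changes above: FD_MAX_BYTES (8388608) corresponds to 2048 iovecs of one page each, and hw_max_sectors is derived from it by dividing by the block size, both at setup in target_core_file.c and again when the block size changes in target_core_device.c. A quick arithmetic check, assuming a 4096-byte page and the 512-byte FD_BLOCKSIZE:

    #include <stdio.h>

    int main(void)
    {
            const unsigned int iov_max   = 2048;    /* iovecs per vfs_writev/readv call */
            const unsigned int page_size = 4096;    /* assumed page size */
            const unsigned int blk_size  = 512;     /* FD_BLOCKSIZE */

            unsigned int max_bytes_per_io = iov_max * page_size;
            unsigned int hw_max_sectors   = max_bytes_per_io / blk_size;

            printf("max_bytes_per_io = %u\n", max_bytes_per_io);  /* 8388608 */
            printf("hw_max_sectors   = %u\n", hw_max_sectors);    /* 16384   */
            return 0;
    }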
+1 -9
drivers/target/target_core_tpg.c
··· 278 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 279 acl->se_tpg = tpg; 280 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 281 - spin_lock_init(&acl->stats_lock); 282 acl->dynamic_node_acl = 1; 283 284 tpg->se_tpg_tfo->set_default_node_attributes(acl); ··· 405 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 406 acl->se_tpg = tpg; 407 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 408 - spin_lock_init(&acl->stats_lock); 409 410 tpg->se_tpg_tfo->set_default_node_attributes(acl); 411 ··· 656 spin_lock_init(&lun->lun_sep_lock); 657 init_completion(&lun->lun_ref_comp); 658 659 - ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release); 660 if (ret < 0) 661 return ret; 662 - 663 - ret = core_tpg_post_addlun(se_tpg, lun, lun_access, dev); 664 - if (ret < 0) { 665 - percpu_ref_cancel_init(&lun->lun_ref); 666 - return ret; 667 - } 668 669 return 0; 670 }
··· 278 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 279 acl->se_tpg = tpg; 280 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 281 acl->dynamic_node_acl = 1; 282 283 tpg->se_tpg_tfo->set_default_node_attributes(acl); ··· 406 snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname); 407 acl->se_tpg = tpg; 408 acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX); 409 410 tpg->se_tpg_tfo->set_default_node_attributes(acl); 411 ··· 658 spin_lock_init(&lun->lun_sep_lock); 659 init_completion(&lun->lun_ref_comp); 660 661 + ret = core_tpg_post_addlun(se_tpg, lun, lun_access, dev); 662 if (ret < 0) 663 return ret; 664 665 return 0; 666 }
+6 -1
drivers/tty/n_tty.c
··· 93 size_t canon_head; 94 size_t echo_head; 95 size_t echo_commit; 96 DECLARE_BITMAP(char_map, 256); 97 98 /* private to n_tty_receive_overrun (single-threaded) */ ··· 337 { 338 ldata->read_head = ldata->canon_head = ldata->read_tail = 0; 339 ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0; 340 ldata->line_start = 0; 341 342 ldata->erasing = 0; ··· 789 size_t head; 790 791 head = ldata->echo_head; 792 old = ldata->echo_commit - ldata->echo_tail; 793 794 /* Process committed echoes if the accumulated # of bytes ··· 814 size_t echoed; 815 816 if ((!L_ECHO(tty) && !L_ECHONL(tty)) || 817 - ldata->echo_commit == ldata->echo_tail) 818 return; 819 820 mutex_lock(&ldata->output_lock); 821 echoed = __process_echoes(tty); 822 mutex_unlock(&ldata->output_lock); 823 ··· 826 tty->ops->flush_chars(tty); 827 } 828 829 static void flush_echoes(struct tty_struct *tty) 830 { 831 struct n_tty_data *ldata = tty->disc_data;
··· 93 size_t canon_head; 94 size_t echo_head; 95 size_t echo_commit; 96 + size_t echo_mark; 97 DECLARE_BITMAP(char_map, 256); 98 99 /* private to n_tty_receive_overrun (single-threaded) */ ··· 336 { 337 ldata->read_head = ldata->canon_head = ldata->read_tail = 0; 338 ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0; 339 + ldata->echo_mark = 0; 340 ldata->line_start = 0; 341 342 ldata->erasing = 0; ··· 787 size_t head; 788 789 head = ldata->echo_head; 790 + ldata->echo_mark = head; 791 old = ldata->echo_commit - ldata->echo_tail; 792 793 /* Process committed echoes if the accumulated # of bytes ··· 811 size_t echoed; 812 813 if ((!L_ECHO(tty) && !L_ECHONL(tty)) || 814 + ldata->echo_mark == ldata->echo_tail) 815 return; 816 817 mutex_lock(&ldata->output_lock); 818 + ldata->echo_commit = ldata->echo_mark; 819 echoed = __process_echoes(tty); 820 mutex_unlock(&ldata->output_lock); 821 ··· 822 tty->ops->flush_chars(tty); 823 } 824 825 + /* NB: echo_mark and echo_head should be equivalent here */ 826 static void flush_echoes(struct tty_struct *tty) 827 { 828 struct n_tty_data *ldata = tty->disc_data;
+6 -2
drivers/tty/serial/8250/8250_dw.c
··· 96 if (offset == UART_LCR) { 97 int tries = 1000; 98 while (tries--) { 99 - if (value == p->serial_in(p, UART_LCR)) 100 return; 101 dw8250_force_idle(p); 102 writeb(value, p->membase + (UART_LCR << p->regshift)); ··· 133 if (offset == UART_LCR) { 134 int tries = 1000; 135 while (tries--) { 136 - if (value == p->serial_in(p, UART_LCR)) 137 return; 138 dw8250_force_idle(p); 139 writel(value, p->membase + (UART_LCR << p->regshift)); ··· 457 static const struct acpi_device_id dw8250_acpi_match[] = { 458 { "INT33C4", 0 }, 459 { "INT33C5", 0 }, 460 { "80860F0A", 0 }, 461 { }, 462 };
··· 96 if (offset == UART_LCR) { 97 int tries = 1000; 98 while (tries--) { 99 + unsigned int lcr = p->serial_in(p, UART_LCR); 100 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 101 return; 102 dw8250_force_idle(p); 103 writeb(value, p->membase + (UART_LCR << p->regshift)); ··· 132 if (offset == UART_LCR) { 133 int tries = 1000; 134 while (tries--) { 135 + unsigned int lcr = p->serial_in(p, UART_LCR); 136 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 137 return; 138 dw8250_force_idle(p); 139 writel(value, p->membase + (UART_LCR << p->regshift)); ··· 455 static const struct acpi_device_id dw8250_acpi_match[] = { 456 { "INT33C4", 0 }, 457 { "INT33C5", 0 }, 458 + { "INT3434", 0 }, 459 + { "INT3435", 0 }, 460 { "80860F0A", 0 }, 461 { }, 462 };
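Note on the 8250_dw.c change above: the LCR write-verify loop now masks the sticky-parity bit out of both the written value and the read-back value before comparing, so a mismatch in that single bit no longer forces pointless retries. A standalone sketch of the masked comparison; LCR_SPAR below is assumed to match the kernel's UART_LCR_SPAR bit:

    #include <stdio.h>

    #define LCR_SPAR 0x20   /* stick-parity bit; assumed stand-in for UART_LCR_SPAR */

    /* Return 1 if the two LCR values match once the SPAR bit is ignored. */
    static int lcr_matches(unsigned int written, unsigned int readback)
    {
            return (written & ~LCR_SPAR) == (readback & ~LCR_SPAR);
    }

    int main(void)
    {
            printf("%d\n", lcr_matches(0x03, 0x03));             /* 1: identical         */
            printf("%d\n", lcr_matches(0x03, 0x03 | LCR_SPAR));  /* 1: only SPAR differs */
            printf("%d\n", lcr_matches(0x03, 0x07));             /* 0: real mismatch     */
            return 0;
    }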
+2
drivers/tty/serial/xilinx_uartps.c
··· 240 continue; 241 } 242 243 /* 244 * uart_handle_sysrq_char() doesn't work if 245 * spinlocked, for some reason ··· 254 } 255 spin_lock(&port->lock); 256 } 257 258 port->icount.rx++; 259
··· 240 continue; 241 } 242 243 + #ifdef SUPPORT_SYSRQ 244 /* 245 * uart_handle_sysrq_char() doesn't work if 246 * spinlocked, for some reason ··· 253 } 254 spin_lock(&port->lock); 255 } 256 + #endif 257 258 port->icount.rx++; 259
+13 -3
drivers/tty/tty_ldsem.c
··· 86 return atomic_long_add_return(delta, (atomic_long_t *)&sem->count); 87 } 88 89 static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem) 90 { 91 - long tmp = *old; 92 - *old = atomic_long_cmpxchg(&sem->count, *old, new); 93 - return *old == tmp; 94 } 95 96 /*
··· 86 return atomic_long_add_return(delta, (atomic_long_t *)&sem->count); 87 } 88 89 + /* 90 + * ldsem_cmpxchg() updates @*old with the last-known sem->count value. 91 + * Returns 1 if count was successfully changed; @*old will have @new value. 92 + * Returns 0 if count was not changed; @*old will have most recent sem->count 93 + */ 94 static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem) 95 { 96 + long tmp = atomic_long_cmpxchg(&sem->count, *old, new); 97 + if (tmp == *old) { 98 + *old = new; 99 + return 1; 100 + } else { 101 + *old = tmp; 102 + return 0; 103 + } 104 } 105 106 /*
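Note on the tty_ldsem.c change above: the reworked ldsem_cmpxchg() reports whether the swap happened and leaves *old holding the most recent count either way, which is the same contract as the GCC/C11 compare-exchange builtins. A userspace sketch of the resulting retry loop; add_to_count() is an illustrative helper, not kernel code:

    #include <stdio.h>

    static long count;

    /* Try to add 'delta' to 'count', retrying until the CAS succeeds.
     * __atomic_compare_exchange_n() refreshes 'old' with the current value
     * on failure, just as the reworked ldsem_cmpxchg() updates *old. */
    static long add_to_count(long delta)
    {
            long old = __atomic_load_n(&count, __ATOMIC_RELAXED);

            while (!__atomic_compare_exchange_n(&count, &old, old + delta,
                                                0, __ATOMIC_SEQ_CST,
                                                __ATOMIC_SEQ_CST))
                    ;       /* 'old' now holds the latest value; try again */

            return old + delta;
    }

    int main(void)
    {
            printf("count = %ld\n", add_to_count(5));
            printf("count = %ld\n", add_to_count(-2));
            return 0;
    }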
+4
drivers/usb/chipidea/core.c
··· 642 : CI_ROLE_GADGET; 643 } 644 645 ret = ci_role_start(ci, ci->role); 646 if (ret) { 647 dev_err(dev, "can't start %s role\n", ci_role(ci)->name);
··· 642 : CI_ROLE_GADGET; 643 } 644 645 + /* only update vbus status for peripheral */ 646 + if (ci->role == CI_ROLE_GADGET) 647 + ci_handle_vbus_change(ci); 648 + 649 ret = ci_role_start(ci, ci->role); 650 if (ret) { 651 dev_err(dev, "can't start %s role\n", ci_role(ci)->name);
+2 -1
drivers/usb/chipidea/host.c
··· 88 return ret; 89 90 disable_reg: 91 - regulator_disable(ci->platdata->reg_vbus); 92 93 put_hcd: 94 usb_put_hcd(hcd);
··· 88 return ret; 89 90 disable_reg: 91 + if (ci->platdata->reg_vbus) 92 + regulator_disable(ci->platdata->reg_vbus); 93 94 put_hcd: 95 usb_put_hcd(hcd);
-3
drivers/usb/chipidea/udc.c
··· 1795 pm_runtime_no_callbacks(&ci->gadget.dev); 1796 pm_runtime_enable(&ci->gadget.dev); 1797 1798 - /* Update ci->vbus_active */ 1799 - ci_handle_vbus_change(ci); 1800 - 1801 return retval; 1802 1803 destroy_eps:
··· 1795 pm_runtime_no_callbacks(&ci->gadget.dev); 1796 pm_runtime_enable(&ci->gadget.dev); 1797 1798 return retval; 1799 1800 destroy_eps:
+3 -5
drivers/usb/class/cdc-wdm.c
··· 854 { 855 /* need autopm_get/put here to ensure the usbcore sees the new value */ 856 int rv = usb_autopm_get_interface(intf); 857 - if (rv < 0) 858 - goto err; 859 860 intf->needs_remote_wakeup = on; 861 - usb_autopm_put_interface(intf); 862 - err: 863 - return rv; 864 } 865 866 static int wdm_probe(struct usb_interface *intf, const struct usb_device_id *id)
··· 854 { 855 /* need autopm_get/put here to ensure the usbcore sees the new value */ 856 int rv = usb_autopm_get_interface(intf); 857 858 intf->needs_remote_wakeup = on; 859 + if (!rv) 860 + usb_autopm_put_interface(intf); 861 + return 0; 862 } 863 864 static int wdm_probe(struct usb_interface *intf, const struct usb_device_id *id)
+5 -3
drivers/usb/dwc3/core.c
··· 455 if (IS_ERR(regs)) 456 return PTR_ERR(regs); 457 458 - usb_phy_set_suspend(dwc->usb2_phy, 0); 459 - usb_phy_set_suspend(dwc->usb3_phy, 0); 460 - 461 spin_lock_init(&dwc->lock); 462 platform_set_drvdata(pdev, dwc); 463 ··· 484 dev_err(dev, "failed to initialize core\n"); 485 goto err0; 486 } 487 488 ret = dwc3_event_buffers_setup(dwc); 489 if (ret) { ··· 569 dwc3_event_buffers_cleanup(dwc); 570 571 err1: 572 dwc3_core_exit(dwc); 573 574 err0:
··· 455 if (IS_ERR(regs)) 456 return PTR_ERR(regs); 457 458 spin_lock_init(&dwc->lock); 459 platform_set_drvdata(pdev, dwc); 460 ··· 487 dev_err(dev, "failed to initialize core\n"); 488 goto err0; 489 } 490 + 491 + usb_phy_set_suspend(dwc->usb2_phy, 0); 492 + usb_phy_set_suspend(dwc->usb3_phy, 0); 493 494 ret = dwc3_event_buffers_setup(dwc); 495 if (ret) { ··· 569 dwc3_event_buffers_cleanup(dwc); 570 571 err1: 572 + usb_phy_set_suspend(dwc->usb2_phy, 1); 573 + usb_phy_set_suspend(dwc->usb3_phy, 1); 574 dwc3_core_exit(dwc); 575 576 err0:
+14 -10
drivers/usb/host/ohci-at91.c
··· 136 struct ohci_hcd *ohci; 137 int retval; 138 struct usb_hcd *hcd = NULL; 139 140 - if (pdev->num_resources != 2) { 141 - pr_debug("hcd probe: invalid num_resources"); 142 - return -ENODEV; 143 } 144 145 - if ((pdev->resource[0].flags != IORESOURCE_MEM) 146 - || (pdev->resource[1].flags != IORESOURCE_IRQ)) { 147 - pr_debug("hcd probe: invalid resource type\n"); 148 - return -ENODEV; 149 } 150 151 hcd = usb_create_hcd(driver, &pdev->dev, "at91"); 152 if (!hcd) 153 return -ENOMEM; 154 - hcd->rsrc_start = pdev->resource[0].start; 155 - hcd->rsrc_len = resource_size(&pdev->resource[0]); 156 157 if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) { 158 pr_debug("request_mem_region failed\n"); ··· 203 ohci->num_ports = board->ports; 204 at91_start_hc(pdev); 205 206 - retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED); 207 if (retval == 0) 208 return retval; 209
··· 136 struct ohci_hcd *ohci; 137 int retval; 138 struct usb_hcd *hcd = NULL; 139 + struct device *dev = &pdev->dev; 140 + struct resource *res; 141 + int irq; 142 143 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 144 + if (!res) { 145 + dev_dbg(dev, "hcd probe: missing memory resource\n"); 146 + return -ENXIO; 147 } 148 149 + irq = platform_get_irq(pdev, 0); 150 + if (irq < 0) { 151 + dev_dbg(dev, "hcd probe: missing irq resource\n"); 152 + return irq; 153 } 154 155 hcd = usb_create_hcd(driver, &pdev->dev, "at91"); 156 if (!hcd) 157 return -ENOMEM; 158 + hcd->rsrc_start = res->start; 159 + hcd->rsrc_len = resource_size(res); 160 161 if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) { 162 pr_debug("request_mem_region failed\n"); ··· 199 ohci->num_ports = board->ports; 200 at91_start_hc(pdev); 201 202 + retval = usb_add_hcd(hcd, irq, IRQF_SHARED); 203 if (retval == 0) 204 return retval; 205
+6 -1
drivers/usb/host/xhci-pci.c
··· 128 * any other sleep) on Haswell machines with LPT and LPT-LP 129 * with the new Intel BIOS 130 */ 131 - xhci->quirks |= XHCI_SPURIOUS_WAKEUP; 132 } 133 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 134 pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
··· 128 * any other sleep) on Haswell machines with LPT and LPT-LP 129 * with the new Intel BIOS 130 */ 131 + /* Limit the quirk to only known vendors, as this triggers 132 + * yet another BIOS bug on some other machines 133 + * https://bugzilla.kernel.org/show_bug.cgi?id=66171 134 + */ 135 + if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP) 136 + xhci->quirks |= XHCI_SPURIOUS_WAKEUP; 137 } 138 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 139 pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
+3 -1
drivers/usb/phy/Kconfig
··· 19 in host mode, low speed. 20 21 config FSL_USB2_OTG 22 - bool "Freescale USB OTG Transceiver Driver" 23 depends on USB_EHCI_FSL && USB_FSL_USB2 && PM_RUNTIME 24 select USB_OTG 25 select USB_PHY 26 help ··· 30 config ISP1301_OMAP 31 tristate "Philips ISP1301 with OMAP OTG" 32 depends on I2C && ARCH_OMAP_OTG 33 select USB_PHY 34 help 35 If you say yes here you get support for the Philips ISP1301
··· 19 in host mode, low speed. 20 21 config FSL_USB2_OTG 22 + tristate "Freescale USB OTG Transceiver Driver" 23 depends on USB_EHCI_FSL && USB_FSL_USB2 && PM_RUNTIME 24 + depends on USB 25 select USB_OTG 26 select USB_PHY 27 help ··· 29 config ISP1301_OMAP 30 tristate "Philips ISP1301 with OMAP OTG" 31 depends on I2C && ARCH_OMAP_OTG 32 + depends on USB 33 select USB_PHY 34 help 35 If you say yes here you get support for the Philips ISP1301
+1 -1
drivers/usb/phy/phy-tegra-usb.c
··· 876 877 tegra_phy->pad_regs = devm_ioremap(&pdev->dev, res->start, 878 resource_size(res)); 879 - if (!tegra_phy->regs) { 880 dev_err(&pdev->dev, "Failed to remap UTMI Pad regs\n"); 881 return -ENOMEM; 882 }
··· 876 877 tegra_phy->pad_regs = devm_ioremap(&pdev->dev, res->start, 878 resource_size(res)); 879 + if (!tegra_phy->pad_regs) { 880 dev_err(&pdev->dev, "Failed to remap UTMI Pad regs\n"); 881 return -ENOMEM; 882 }
+2 -1
drivers/usb/phy/phy-twl6030-usb.c
··· 127 128 static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address) 129 { 130 - u8 data, ret = 0; 131 132 ret = twl_i2c_read_u8(module, &data, address); 133 if (ret >= 0)
··· 127 128 static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address) 129 { 130 + u8 data; 131 + int ret; 132 133 ret = twl_i2c_read_u8(module, &data, address); 134 if (ret >= 0)
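Note on the phy-twl6030-usb.c change above: the single `u8 data, ret` declaration is split because a negative error code stored in an unsigned 8-bit variable wraps to a large positive value, so `ret >= 0` could never catch the failure. A minimal demonstration, using uint8_t in place of the kernel's u8 and -22 standing in for -EINVAL:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t u8_ret = -22;   /* -EINVAL squeezed into an unsigned byte */
            int     int_ret = -22;

            /* Wraps to 234, so the error is silently treated as success. */
            printf("u8:  value=%u,  ret >= 0 is %s\n",
                   (unsigned int)u8_ret, (u8_ret >= 0) ? "true" : "false");

            /* With a plain int the sign survives and the check works. */
            printf("int: value=%d, ret >= 0 is %s\n",
                   int_ret, (int_ret >= 0) ? "true" : "false");
            return 0;
    }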
+2
drivers/usb/serial/option.c
··· 251 #define ZTE_PRODUCT_MF628 0x0015 252 #define ZTE_PRODUCT_MF626 0x0031 253 #define ZTE_PRODUCT_MC2718 0xffe8 254 255 #define BENQ_VENDOR_ID 0x04a5 256 #define BENQ_PRODUCT_H10 0x4068 ··· 1454 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, 1455 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, 1456 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) }, 1457 1458 { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) }, 1459 { USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
··· 251 #define ZTE_PRODUCT_MF628 0x0015 252 #define ZTE_PRODUCT_MF626 0x0031 253 #define ZTE_PRODUCT_MC2718 0xffe8 254 + #define ZTE_PRODUCT_AC2726 0xfff1 255 256 #define BENQ_VENDOR_ID 0x04a5 257 #define BENQ_PRODUCT_H10 0x4068 ··· 1453 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, 1454 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, 1455 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) }, 1456 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) }, 1457 1458 { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) }, 1459 { USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
+1 -2
drivers/usb/serial/zte_ev.c
··· 281 { USB_DEVICE(0x19d2, 0xfffd) }, 282 { USB_DEVICE(0x19d2, 0xfffc) }, 283 { USB_DEVICE(0x19d2, 0xfffb) }, 284 - /* AC2726, AC8710_V3 */ 285 - { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfff1, 0xff, 0xff, 0xff) }, 286 { USB_DEVICE(0x19d2, 0xfff6) }, 287 { USB_DEVICE(0x19d2, 0xfff7) }, 288 { USB_DEVICE(0x19d2, 0xfff8) },
··· 281 { USB_DEVICE(0x19d2, 0xfffd) }, 282 { USB_DEVICE(0x19d2, 0xfffc) }, 283 { USB_DEVICE(0x19d2, 0xfffb) }, 284 + /* AC8710_V3 */ 285 { USB_DEVICE(0x19d2, 0xfff6) }, 286 { USB_DEVICE(0x19d2, 0xfff7) }, 287 { USB_DEVICE(0x19d2, 0xfff8) },
+34 -29
drivers/xen/balloon.c
··· 350 351 pfn = page_to_pfn(page); 352 353 - set_phys_to_machine(pfn, frame_list[i]); 354 - 355 #ifdef CONFIG_XEN_HAVE_PVMMU 356 - /* Link back into the page tables if not highmem. */ 357 - if (xen_pv_domain() && !PageHighMem(page)) { 358 - int ret; 359 - ret = HYPERVISOR_update_va_mapping( 360 - (unsigned long)__va(pfn << PAGE_SHIFT), 361 - mfn_pte(frame_list[i], PAGE_KERNEL), 362 - 0); 363 - BUG_ON(ret); 364 } 365 #endif 366 ··· 380 enum bp_state state = BP_DONE; 381 unsigned long pfn, i; 382 struct page *page; 383 - struct page *scratch_page; 384 int ret; 385 struct xen_memory_reservation reservation = { 386 .address_bits = 0, ··· 412 413 scrub_page(page); 414 415 /* 416 * Ballooned out frames are effectively replaced with 417 * a scratch frame. Ensure direct mappings and the 418 * p2m are consistent. 419 */ 420 - scratch_page = get_balloon_scratch_page(); 421 - #ifdef CONFIG_XEN_HAVE_PVMMU 422 - if (xen_pv_domain() && !PageHighMem(page)) { 423 - ret = HYPERVISOR_update_va_mapping( 424 - (unsigned long)__va(pfn << PAGE_SHIFT), 425 - pfn_pte(page_to_pfn(scratch_page), 426 - PAGE_KERNEL_RO), 0); 427 - BUG_ON(ret); 428 - } 429 - #endif 430 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 431 unsigned long p; 432 p = page_to_pfn(scratch_page); 433 __set_phys_to_machine(pfn, pfn_to_mfn(p)); 434 } 435 - put_balloon_scratch_page(); 436 437 balloon_append(pfn_to_page(pfn)); 438 } ··· 630 if (!xen_domain()) 631 return -ENODEV; 632 633 - for_each_online_cpu(cpu) 634 - { 635 - per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 636 - if (per_cpu(balloon_scratch_page, cpu) == NULL) { 637 - pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 638 - return -ENOMEM; 639 } 640 } 641 - register_cpu_notifier(&balloon_cpu_notifier); 642 643 pr_info("Initialising balloon driver\n"); 644
··· 350 351 pfn = page_to_pfn(page); 352 353 #ifdef CONFIG_XEN_HAVE_PVMMU 354 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 355 + set_phys_to_machine(pfn, frame_list[i]); 356 + 357 + /* Link back into the page tables if not highmem. */ 358 + if (!PageHighMem(page)) { 359 + int ret; 360 + ret = HYPERVISOR_update_va_mapping( 361 + (unsigned long)__va(pfn << PAGE_SHIFT), 362 + mfn_pte(frame_list[i], PAGE_KERNEL), 363 + 0); 364 + BUG_ON(ret); 365 + } 366 } 367 #endif 368 ··· 378 enum bp_state state = BP_DONE; 379 unsigned long pfn, i; 380 struct page *page; 381 int ret; 382 struct xen_memory_reservation reservation = { 383 .address_bits = 0, ··· 411 412 scrub_page(page); 413 414 + #ifdef CONFIG_XEN_HAVE_PVMMU 415 /* 416 * Ballooned out frames are effectively replaced with 417 * a scratch frame. Ensure direct mappings and the 418 * p2m are consistent. 419 */ 420 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 421 unsigned long p; 422 + struct page *scratch_page = get_balloon_scratch_page(); 423 + 424 + if (!PageHighMem(page)) { 425 + ret = HYPERVISOR_update_va_mapping( 426 + (unsigned long)__va(pfn << PAGE_SHIFT), 427 + pfn_pte(page_to_pfn(scratch_page), 428 + PAGE_KERNEL_RO), 0); 429 + BUG_ON(ret); 430 + } 431 p = page_to_pfn(scratch_page); 432 __set_phys_to_machine(pfn, pfn_to_mfn(p)); 433 + 434 + put_balloon_scratch_page(); 435 } 436 + #endif 437 438 balloon_append(pfn_to_page(pfn)); 439 } ··· 627 if (!xen_domain()) 628 return -ENODEV; 629 630 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 631 + for_each_online_cpu(cpu) 632 + { 633 + per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 634 + if (per_cpu(balloon_scratch_page, cpu) == NULL) { 635 + pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 636 + return -ENOMEM; 637 + } 638 } 639 + register_cpu_notifier(&balloon_cpu_notifier); 640 } 641 642 pr_info("Initialising balloon driver\n"); 643
+2 -1
drivers/xen/grant-table.c
··· 1176 gnttab_shared.addr = xen_remap(xen_hvm_resume_frames, 1177 PAGE_SIZE * max_nr_gframes); 1178 if (gnttab_shared.addr == NULL) { 1179 - pr_warn("Failed to ioremap gnttab share frames!\n"); 1180 return -ENOMEM; 1181 } 1182 }
··· 1176 gnttab_shared.addr = xen_remap(xen_hvm_resume_frames, 1177 PAGE_SIZE * max_nr_gframes); 1178 if (gnttab_shared.addr == NULL) { 1179 + pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n", 1180 + xen_hvm_resume_frames); 1181 return -ENOMEM; 1182 } 1183 }
+7 -2
drivers/xen/privcmd.c
··· 533 { 534 struct page **pages = vma->vm_private_data; 535 int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 536 537 if (!xen_feature(XENFEAT_auto_translated_physmap) || !numpgs || !pages) 538 return; 539 540 - xen_unmap_domain_mfn_range(vma, numpgs, pages); 541 - free_xenballooned_pages(numpgs, pages); 542 kfree(pages); 543 } 544
··· 533 { 534 struct page **pages = vma->vm_private_data; 535 int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 536 + int rc; 537 538 if (!xen_feature(XENFEAT_auto_translated_physmap) || !numpgs || !pages) 539 return; 540 541 + rc = xen_unmap_domain_mfn_range(vma, numpgs, pages); 542 + if (rc == 0) 543 + free_xenballooned_pages(numpgs, pages); 544 + else 545 + pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n", 546 + numpgs, rc); 547 kfree(pages); 548 } 549
+70 -45
fs/aio.c
··· 244 int i; 245 246 for (i = 0; i < ctx->nr_pages; i++) { 247 pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i, 248 page_count(ctx->ring_pages[i])); 249 - put_page(ctx->ring_pages[i]); 250 } 251 252 put_aio_ring_file(ctx); ··· 285 unsigned long flags; 286 int rc; 287 288 /* Writeback must be complete */ 289 BUG_ON(PageWriteback(old)); 290 - put_page(old); 291 292 - rc = migrate_page_move_mapping(mapping, new, old, NULL, mode); 293 if (rc != MIGRATEPAGE_SUCCESS) { 294 - get_page(old); 295 return rc; 296 } 297 - 298 - get_page(new); 299 300 /* We can potentially race against kioctx teardown here. Use the 301 * address_space's private data lock to protect the mapping's ··· 328 spin_lock_irqsave(&ctx->completion_lock, flags); 329 migrate_page_copy(new, old); 330 idx = old->index; 331 - if (idx < (pgoff_t)ctx->nr_pages) 332 - ctx->ring_pages[idx] = new; 333 spin_unlock_irqrestore(&ctx->completion_lock, flags); 334 } else 335 rc = -EBUSY; 336 spin_unlock(&mapping->private_lock); 337 338 return rc; 339 } ··· 362 struct aio_ring *ring; 363 unsigned nr_events = ctx->max_reqs; 364 struct mm_struct *mm = current->mm; 365 - unsigned long size, populate; 366 int nr_pages; 367 int i; 368 struct file *file; ··· 383 return -EAGAIN; 384 } 385 386 - for (i = 0; i < nr_pages; i++) { 387 - struct page *page; 388 - page = find_or_create_page(file->f_inode->i_mapping, 389 - i, GFP_HIGHUSER | __GFP_ZERO); 390 - if (!page) 391 - break; 392 - pr_debug("pid(%d) page[%d]->count=%d\n", 393 - current->pid, i, page_count(page)); 394 - SetPageUptodate(page); 395 - SetPageDirty(page); 396 - unlock_page(page); 397 - } 398 ctx->aio_ring_file = file; 399 nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) 400 / sizeof(struct io_event); ··· 397 } 398 } 399 400 ctx->mmap_size = nr_pages * PAGE_SIZE; 401 pr_debug("attempting mmap of %lu bytes\n", ctx->mmap_size); 402 403 down_write(&mm->mmap_sem); 404 ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size, 405 PROT_READ | PROT_WRITE, 406 - MAP_SHARED | MAP_POPULATE, 0, &populate); 407 if (IS_ERR((void *)ctx->mmap_base)) { 408 - up_write(&mm->mmap_sem); 409 ctx->mmap_size = 0; 410 aio_free_ring(ctx); 411 return -EAGAIN; 412 } 413 414 pr_debug("mmap address: 0x%08lx\n", ctx->mmap_base); 415 - 416 - /* We must do this while still holding mmap_sem for write, as we 417 - * need to be protected against userspace attempting to mremap() 418 - * or munmap() the ring buffer. 419 - */ 420 - ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages, 421 - 1, 0, ctx->ring_pages, NULL); 422 - 423 - /* Dropping the reference here is safe as the page cache will hold 424 - * onto the pages for us. It is also required so that page migration 425 - * can unmap the pages and get the right reference count. 426 - */ 427 - for (i = 0; i < ctx->nr_pages; i++) 428 - put_page(ctx->ring_pages[i]); 429 - 430 - up_write(&mm->mmap_sem); 431 - 432 - if (unlikely(ctx->nr_pages != nr_pages)) { 433 - aio_free_ring(ctx); 434 - return -EAGAIN; 435 - } 436 437 ctx->user_id = ctx->mmap_base; 438 ctx->nr_events = nr_events; /* trusted copy */ ··· 676 aio_nr += ctx->max_reqs; 677 spin_unlock(&aio_nr_lock); 678 679 - percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */ 680 681 err = ioctx_add_table(ctx, mm); 682 if (err)
··· 244 int i; 245 246 for (i = 0; i < ctx->nr_pages; i++) { 247 + struct page *page; 248 pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i, 249 page_count(ctx->ring_pages[i])); 250 + page = ctx->ring_pages[i]; 251 + if (!page) 252 + continue; 253 + ctx->ring_pages[i] = NULL; 254 + put_page(page); 255 } 256 257 put_aio_ring_file(ctx); ··· 280 unsigned long flags; 281 int rc; 282 283 + rc = 0; 284 + 285 + /* Make sure the old page hasn't already been changed */ 286 + spin_lock(&mapping->private_lock); 287 + ctx = mapping->private_data; 288 + if (ctx) { 289 + pgoff_t idx; 290 + spin_lock_irqsave(&ctx->completion_lock, flags); 291 + idx = old->index; 292 + if (idx < (pgoff_t)ctx->nr_pages) { 293 + if (ctx->ring_pages[idx] != old) 294 + rc = -EAGAIN; 295 + } else 296 + rc = -EINVAL; 297 + spin_unlock_irqrestore(&ctx->completion_lock, flags); 298 + } else 299 + rc = -EINVAL; 300 + spin_unlock(&mapping->private_lock); 301 + 302 + if (rc != 0) 303 + return rc; 304 + 305 /* Writeback must be complete */ 306 BUG_ON(PageWriteback(old)); 307 + get_page(new); 308 309 + rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1); 310 if (rc != MIGRATEPAGE_SUCCESS) { 311 + put_page(new); 312 return rc; 313 } 314 315 /* We can potentially race against kioctx teardown here. Use the 316 * address_space's private data lock to protect the mapping's ··· 303 spin_lock_irqsave(&ctx->completion_lock, flags); 304 migrate_page_copy(new, old); 305 idx = old->index; 306 + if (idx < (pgoff_t)ctx->nr_pages) { 307 + /* And only do the move if things haven't changed */ 308 + if (ctx->ring_pages[idx] == old) 309 + ctx->ring_pages[idx] = new; 310 + else 311 + rc = -EAGAIN; 312 + } else 313 + rc = -EINVAL; 314 spin_unlock_irqrestore(&ctx->completion_lock, flags); 315 } else 316 rc = -EBUSY; 317 spin_unlock(&mapping->private_lock); 318 + 319 + if (rc == MIGRATEPAGE_SUCCESS) 320 + put_page(old); 321 + else 322 + put_page(new); 323 324 return rc; 325 } ··· 326 struct aio_ring *ring; 327 unsigned nr_events = ctx->max_reqs; 328 struct mm_struct *mm = current->mm; 329 + unsigned long size, unused; 330 int nr_pages; 331 int i; 332 struct file *file; ··· 347 return -EAGAIN; 348 } 349 350 ctx->aio_ring_file = file; 351 nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) 352 / sizeof(struct io_event); ··· 373 } 374 } 375 376 + for (i = 0; i < nr_pages; i++) { 377 + struct page *page; 378 + page = find_or_create_page(file->f_inode->i_mapping, 379 + i, GFP_HIGHUSER | __GFP_ZERO); 380 + if (!page) 381 + break; 382 + pr_debug("pid(%d) page[%d]->count=%d\n", 383 + current->pid, i, page_count(page)); 384 + SetPageUptodate(page); 385 + SetPageDirty(page); 386 + unlock_page(page); 387 + 388 + ctx->ring_pages[i] = page; 389 + } 390 + ctx->nr_pages = i; 391 + 392 + if (unlikely(i != nr_pages)) { 393 + aio_free_ring(ctx); 394 + return -EAGAIN; 395 + } 396 + 397 ctx->mmap_size = nr_pages * PAGE_SIZE; 398 pr_debug("attempting mmap of %lu bytes\n", ctx->mmap_size); 399 400 down_write(&mm->mmap_sem); 401 ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size, 402 PROT_READ | PROT_WRITE, 403 + MAP_SHARED, 0, &unused); 404 + up_write(&mm->mmap_sem); 405 if (IS_ERR((void *)ctx->mmap_base)) { 406 ctx->mmap_size = 0; 407 aio_free_ring(ctx); 408 return -EAGAIN; 409 } 410 411 pr_debug("mmap address: 0x%08lx\n", ctx->mmap_base); 412 413 ctx->user_id = ctx->mmap_base; 414 ctx->nr_events = nr_events; /* trusted copy */ ··· 652 aio_nr += ctx->max_reqs; 653 spin_unlock(&aio_nr_lock); 654 655 + percpu_ref_get(&ctx->users); 
/* io_setup() will drop this ref */ 656 + percpu_ref_get(&ctx->reqs); /* free_ioctx_users() will drop this */ 657 658 err = ioctx_add_table(ctx, mm); 659 if (err)
+6 -2
fs/ceph/addr.c
··· 210 if (err < 0) { 211 SetPageError(page); 212 goto out; 213 - } else if (err < PAGE_CACHE_SIZE) { 214 /* zero fill remainder of page */ 215 - zero_user_segment(page, err, PAGE_CACHE_SIZE); 216 } 217 SetPageUptodate(page); 218
··· 210 if (err < 0) { 211 SetPageError(page); 212 goto out; 213 + } else { 214 + if (err < PAGE_CACHE_SIZE) { 215 /* zero fill remainder of page */ 216 + zero_user_segment(page, err, PAGE_CACHE_SIZE); 217 + } else { 218 + flush_dcache_page(page); 219 + } 220 } 221 SetPageUptodate(page); 222
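Note on the ceph/addr.c change above: a short read now zero-fills only the tail of the page past the bytes actually returned, while a full read just flushes the page. A userspace sketch of the zero-fill half, with a plain buffer and memset() standing in for the page and zero_user_segment(); PAGE_SZ is an assumed 4096:

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SZ 4096    /* assumed page size */

    /* Zero the tail of 'page' beyond the 'err' bytes actually read. */
    static void zero_fill_tail(unsigned char *page, int err)
    {
            if (err >= 0 && err < PAGE_SZ)
                    memset(page + err, 0, PAGE_SZ - err);
    }

    int main(void)
    {
            static unsigned char page[PAGE_SZ];
            int err = 100;                  /* pretend the read returned 100 bytes */

            memset(page, 0xaa, sizeof(page));
            zero_fill_tail(page, err);

            printf("byte %d = 0x%02x, byte %d = 0x%02x\n",
                   err - 1, page[err - 1], err, page[err]);   /* 0xaa then 0x00 */
            return 0;
    }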
+58 -78
fs/ceph/inode.c
··· 978 struct ceph_mds_reply_inode *ininfo; 979 struct ceph_vino vino; 980 struct ceph_fs_client *fsc = ceph_sb_to_client(sb); 981 - int i = 0; 982 int err = 0; 983 984 dout("fill_trace %p is_dentry %d is_target %d\n", req, ··· 1035 return err; 1036 } else { 1037 WARN_ON_ONCE(1); 1038 } 1039 } 1040 ··· 1130 ceph_dentry(req->r_old_dentry)->offset); 1131 1132 dn = req->r_old_dentry; /* use old_dentry */ 1133 - in = dn->d_inode; 1134 } 1135 1136 /* null dentry? */ ··· 1151 } 1152 1153 /* attach proper inode */ 1154 - ininfo = rinfo->targeti.in; 1155 - vino.ino = le64_to_cpu(ininfo->ino); 1156 - vino.snap = le64_to_cpu(ininfo->snapid); 1157 - in = dn->d_inode; 1158 - if (!in) { 1159 - in = ceph_get_inode(sb, vino); 1160 - if (IS_ERR(in)) { 1161 - pr_err("fill_trace bad get_inode " 1162 - "%llx.%llx\n", vino.ino, vino.snap); 1163 - err = PTR_ERR(in); 1164 - d_drop(dn); 1165 - goto done; 1166 - } 1167 dn = splice_dentry(dn, in, &have_lease, true); 1168 if (IS_ERR(dn)) { 1169 err = PTR_ERR(dn); 1170 goto done; 1171 } 1172 req->r_dentry = dn; /* may have spliced */ 1173 - ihold(in); 1174 - } else if (ceph_ino(in) == vino.ino && 1175 - ceph_snap(in) == vino.snap) { 1176 - ihold(in); 1177 - } else { 1178 dout(" %p links to %p %llx.%llx, not %llx.%llx\n", 1179 - dn, in, ceph_ino(in), ceph_snap(in), 1180 - vino.ino, vino.snap); 1181 have_lease = false; 1182 - in = NULL; 1183 } 1184 1185 if (have_lease) 1186 update_dentry_lease(dn, rinfo->dlease, session, 1187 req->r_request_started); 1188 dout(" final dn %p\n", dn); 1189 - i++; 1190 - } else if ((req->r_op == CEPH_MDS_OP_LOOKUPSNAP || 1191 - req->r_op == CEPH_MDS_OP_MKSNAP) && !req->r_aborted) { 1192 struct dentry *dn = req->r_dentry; 1193 1194 /* fill out a snapdir LOOKUPSNAP dentry */ ··· 1182 ininfo = rinfo->targeti.in; 1183 vino.ino = le64_to_cpu(ininfo->ino); 1184 vino.snap = le64_to_cpu(ininfo->snapid); 1185 - in = ceph_get_inode(sb, vino); 1186 - if (IS_ERR(in)) { 1187 - pr_err("fill_inode get_inode badness %llx.%llx\n", 1188 - vino.ino, vino.snap); 1189 - err = PTR_ERR(in); 1190 - d_delete(dn); 1191 - goto done; 1192 - } 1193 dout(" linking snapped dir %p to dn %p\n", in, dn); 1194 dn = splice_dentry(dn, in, NULL, true); 1195 if (IS_ERR(dn)) { 1196 err = PTR_ERR(dn); 1197 goto done; 1198 } 1199 req->r_dentry = dn; /* may have spliced */ 1200 - ihold(in); 1201 - rinfo->head->is_dentry = 1; /* fool notrace handlers */ 1202 } 1203 - 1204 - if (rinfo->head->is_target) { 1205 - vino.ino = le64_to_cpu(rinfo->targeti.in->ino); 1206 - vino.snap = le64_to_cpu(rinfo->targeti.in->snapid); 1207 - 1208 - if (in == NULL || ceph_ino(in) != vino.ino || 1209 - ceph_snap(in) != vino.snap) { 1210 - in = ceph_get_inode(sb, vino); 1211 - if (IS_ERR(in)) { 1212 - err = PTR_ERR(in); 1213 - goto done; 1214 - } 1215 - } 1216 - req->r_target_inode = in; 1217 - 1218 - err = fill_inode(in, 1219 - &rinfo->targeti, NULL, 1220 - session, req->r_request_started, 1221 - (le32_to_cpu(rinfo->head->result) == 0) ? 
1222 - req->r_fmode : -1, 1223 - &req->r_caps_reservation); 1224 - if (err < 0) { 1225 - pr_err("fill_inode badness %p %llx.%llx\n", 1226 - in, ceph_vinop(in)); 1227 - goto done; 1228 - } 1229 - } 1230 - 1231 done: 1232 dout("fill_trace done err=%d\n", err); 1233 return err; ··· 1240 struct qstr dname; 1241 struct dentry *dn; 1242 struct inode *in; 1243 - int err = 0, i; 1244 struct inode *snapdir = NULL; 1245 struct ceph_mds_request_head *rhead = req->r_request->front.iov_base; 1246 struct ceph_dentry_info *di; ··· 1273 ceph_fill_dirfrag(parent->d_inode, rinfo->dir_dir); 1274 } 1275 1276 for (i = 0; i < rinfo->dir_nr; i++) { 1277 struct ceph_vino vino; 1278 ··· 1298 err = -ENOMEM; 1299 goto out; 1300 } 1301 - err = ceph_init_dentry(dn); 1302 - if (err < 0) { 1303 dput(dn); 1304 goto out; 1305 } 1306 } else if (dn->d_inode && ··· 1321 spin_unlock(&parent->d_lock); 1322 } 1323 1324 - di = dn->d_fsdata; 1325 - di->offset = ceph_make_fpos(frag, i + r_readdir_offset); 1326 - 1327 /* inode */ 1328 if (dn->d_inode) { 1329 in = dn->d_inode; ··· 1333 err = PTR_ERR(in); 1334 goto out; 1335 } 1336 - dn = splice_dentry(dn, in, NULL, false); 1337 - if (IS_ERR(dn)) 1338 - dn = NULL; 1339 } 1340 1341 if (fill_inode(in, &rinfo->dir_in[i], NULL, session, 1342 req->r_request_started, -1, 1343 &req->r_caps_reservation) < 0) { 1344 pr_err("fill_inode badness on %p\n", in); 1345 goto next_item; 1346 } 1347 - if (dn) 1348 - update_dentry_lease(dn, rinfo->dir_dlease[i], 1349 - req->r_session, 1350 - req->r_request_started); 1351 next_item: 1352 if (dn) 1353 dput(dn); 1354 } 1355 - req->r_did_prepopulate = true; 1356 1357 out: 1358 if (snapdir) {
··· 978 struct ceph_mds_reply_inode *ininfo; 979 struct ceph_vino vino; 980 struct ceph_fs_client *fsc = ceph_sb_to_client(sb); 981 int err = 0; 982 983 dout("fill_trace %p is_dentry %d is_target %d\n", req, ··· 1036 return err; 1037 } else { 1038 WARN_ON_ONCE(1); 1039 + } 1040 + } 1041 + 1042 + if (rinfo->head->is_target) { 1043 + vino.ino = le64_to_cpu(rinfo->targeti.in->ino); 1044 + vino.snap = le64_to_cpu(rinfo->targeti.in->snapid); 1045 + 1046 + in = ceph_get_inode(sb, vino); 1047 + if (IS_ERR(in)) { 1048 + err = PTR_ERR(in); 1049 + goto done; 1050 + } 1051 + req->r_target_inode = in; 1052 + 1053 + err = fill_inode(in, &rinfo->targeti, NULL, 1054 + session, req->r_request_started, 1055 + (le32_to_cpu(rinfo->head->result) == 0) ? 1056 + req->r_fmode : -1, 1057 + &req->r_caps_reservation); 1058 + if (err < 0) { 1059 + pr_err("fill_inode badness %p %llx.%llx\n", 1060 + in, ceph_vinop(in)); 1061 + goto done; 1062 } 1063 } 1064 ··· 1108 ceph_dentry(req->r_old_dentry)->offset); 1109 1110 dn = req->r_old_dentry; /* use old_dentry */ 1111 } 1112 1113 /* null dentry? */ ··· 1130 } 1131 1132 /* attach proper inode */ 1133 + if (!dn->d_inode) { 1134 + ihold(in); 1135 dn = splice_dentry(dn, in, &have_lease, true); 1136 if (IS_ERR(dn)) { 1137 err = PTR_ERR(dn); 1138 goto done; 1139 } 1140 req->r_dentry = dn; /* may have spliced */ 1141 + } else if (dn->d_inode && dn->d_inode != in) { 1142 dout(" %p links to %p %llx.%llx, not %llx.%llx\n", 1143 + dn, dn->d_inode, ceph_vinop(dn->d_inode), 1144 + ceph_vinop(in)); 1145 have_lease = false; 1146 } 1147 1148 if (have_lease) 1149 update_dentry_lease(dn, rinfo->dlease, session, 1150 req->r_request_started); 1151 dout(" final dn %p\n", dn); 1152 + } else if (!req->r_aborted && 1153 + (req->r_op == CEPH_MDS_OP_LOOKUPSNAP || 1154 + req->r_op == CEPH_MDS_OP_MKSNAP)) { 1155 struct dentry *dn = req->r_dentry; 1156 1157 /* fill out a snapdir LOOKUPSNAP dentry */ ··· 1177 ininfo = rinfo->targeti.in; 1178 vino.ino = le64_to_cpu(ininfo->ino); 1179 vino.snap = le64_to_cpu(ininfo->snapid); 1180 dout(" linking snapped dir %p to dn %p\n", in, dn); 1181 + ihold(in); 1182 dn = splice_dentry(dn, in, NULL, true); 1183 if (IS_ERR(dn)) { 1184 err = PTR_ERR(dn); 1185 goto done; 1186 } 1187 req->r_dentry = dn; /* may have spliced */ 1188 } 1189 done: 1190 dout("fill_trace done err=%d\n", err); 1191 return err; ··· 1272 struct qstr dname; 1273 struct dentry *dn; 1274 struct inode *in; 1275 + int err = 0, ret, i; 1276 struct inode *snapdir = NULL; 1277 struct ceph_mds_request_head *rhead = req->r_request->front.iov_base; 1278 struct ceph_dentry_info *di; ··· 1305 ceph_fill_dirfrag(parent->d_inode, rinfo->dir_dir); 1306 } 1307 1308 + /* FIXME: release caps/leases if error occurs */ 1309 for (i = 0; i < rinfo->dir_nr; i++) { 1310 struct ceph_vino vino; 1311 ··· 1329 err = -ENOMEM; 1330 goto out; 1331 } 1332 + ret = ceph_init_dentry(dn); 1333 + if (ret < 0) { 1334 dput(dn); 1335 + err = ret; 1336 goto out; 1337 } 1338 } else if (dn->d_inode && ··· 1351 spin_unlock(&parent->d_lock); 1352 } 1353 1354 /* inode */ 1355 if (dn->d_inode) { 1356 in = dn->d_inode; ··· 1366 err = PTR_ERR(in); 1367 goto out; 1368 } 1369 } 1370 1371 if (fill_inode(in, &rinfo->dir_in[i], NULL, session, 1372 req->r_request_started, -1, 1373 &req->r_caps_reservation) < 0) { 1374 pr_err("fill_inode badness on %p\n", in); 1375 + if (!dn->d_inode) 1376 + iput(in); 1377 + d_drop(dn); 1378 goto next_item; 1379 } 1380 + 1381 + if (!dn->d_inode) { 1382 + dn = splice_dentry(dn, in, NULL, false); 1383 + if (IS_ERR(dn)) { 
1384 + err = PTR_ERR(dn); 1385 + dn = NULL; 1386 + goto next_item; 1387 + } 1388 + } 1389 + 1390 + di = dn->d_fsdata; 1391 + di->offset = ceph_make_fpos(frag, i + r_readdir_offset); 1392 + 1393 + update_dentry_lease(dn, rinfo->dir_dlease[i], 1394 + req->r_session, 1395 + req->r_request_started); 1396 next_item: 1397 if (dn) 1398 dput(dn); 1399 } 1400 + if (err == 0) 1401 + req->r_did_prepopulate = true; 1402 1403 out: 1404 if (snapdir) {
+5 -2
fs/pstore/platform.c
··· 443 pstore_get_records(0); 444 445 kmsg_dump_register(&pstore_dumper); 446 - pstore_register_console(); 447 - pstore_register_ftrace(); 448 449 if (pstore_update_ms >= 0) { 450 pstore_timer.expires = jiffies +
··· 443 pstore_get_records(0); 444 445 kmsg_dump_register(&pstore_dumper); 446 + 447 + if ((psi->flags & PSTORE_FLAGS_FRAGILE) == 0) { 448 + pstore_register_console(); 449 + pstore_register_ftrace(); 450 + } 451 452 if (pstore_update_ms >= 0) { 453 pstore_timer.expires = jiffies +
+3 -5
fs/sysfs/file.c
··· 609 struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata; 610 struct kobject *kobj = attr_sd->s_parent->s_dir.kobj; 611 struct sysfs_open_file *of; 612 - bool has_read, has_write, has_mmap; 613 int error = -EACCES; 614 615 /* need attr_sd for attr and ops, its parent for kobj */ ··· 621 622 has_read = battr->read || battr->mmap; 623 has_write = battr->write || battr->mmap; 624 - has_mmap = battr->mmap; 625 } else { 626 const struct sysfs_ops *ops = sysfs_file_ops(attr_sd); 627 ··· 632 633 has_read = ops->show; 634 has_write = ops->store; 635 - has_mmap = false; 636 } 637 638 /* check perms and supported operations */ ··· 659 * open file has a separate mutex, it's okay as long as those don't 660 * happen on the same file. At this point, we can't easily give 661 * each file a separate locking class. Let's differentiate on 662 - * whether the file has mmap or not for now. 663 */ 664 - if (has_mmap) 665 mutex_init(&of->mutex); 666 else 667 mutex_init(&of->mutex);
··· 609 struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata; 610 struct kobject *kobj = attr_sd->s_parent->s_dir.kobj; 611 struct sysfs_open_file *of; 612 + bool has_read, has_write; 613 int error = -EACCES; 614 615 /* need attr_sd for attr and ops, its parent for kobj */ ··· 621 622 has_read = battr->read || battr->mmap; 623 has_write = battr->write || battr->mmap; 624 } else { 625 const struct sysfs_ops *ops = sysfs_file_ops(attr_sd); 626 ··· 633 634 has_read = ops->show; 635 has_write = ops->store; 636 } 637 638 /* check perms and supported operations */ ··· 661 * open file has a separate mutex, it's okay as long as those don't 662 * happen on the same file. At this point, we can't easily give 663 * each file a separate locking class. Let's differentiate on 664 + * whether the file is bin or not for now. 665 */ 666 + if (sysfs_is_bin(attr_sd)) 667 mutex_init(&of->mutex); 668 else 669 mutex_init(&of->mutex);
+24 -8
fs/xfs/xfs_bmap.c
··· 1635 * blocks at the end of the file which do not start at the previous data block, 1636 * we will try to align the new blocks at stripe unit boundaries. 1637 * 1638 - * Returns 0 in bma->aeof if the file (fork) is empty as any new write will be 1639 * at, or past the EOF. 1640 */ 1641 STATIC int ··· 1650 bma->aeof = 0; 1651 error = xfs_bmap_last_extent(NULL, bma->ip, whichfork, &rec, 1652 &is_empty); 1653 - if (error || is_empty) 1654 return error; 1655 1656 /* 1657 * Check if we are allocation or past the last extent, or at least into ··· 3648 int isaligned; 3649 int tryagain; 3650 int error; 3651 3652 ASSERT(ap->length); 3653 3654 mp = ap->ip->i_mount; 3655 align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0; 3656 if (unlikely(align)) { 3657 error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev, ··· 3669 ASSERT(!error); 3670 ASSERT(ap->length); 3671 } 3672 nullfb = *ap->firstblock == NULLFSBLOCK; 3673 fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock); 3674 if (nullfb) { ··· 3746 */ 3747 if (!ap->flist->xbf_low && ap->aeof) { 3748 if (!ap->offset) { 3749 - args.alignment = mp->m_dalign; 3750 atype = args.type; 3751 isaligned = 1; 3752 /* ··· 3771 * of minlen+alignment+slop doesn't go up 3772 * between the calls. 3773 */ 3774 - if (blen > mp->m_dalign && blen <= args.maxlen) 3775 - nextminlen = blen - mp->m_dalign; 3776 else 3777 nextminlen = args.minlen; 3778 - if (nextminlen + mp->m_dalign > args.minlen + 1) 3779 args.minalignslop = 3780 - nextminlen + mp->m_dalign - 3781 args.minlen - 1; 3782 else 3783 args.minalignslop = 0; ··· 3799 */ 3800 args.type = atype; 3801 args.fsbno = ap->blkno; 3802 - args.alignment = mp->m_dalign; 3803 args.minlen = nextminlen; 3804 args.minalignslop = 0; 3805 isaligned = 1;
··· 1635 * blocks at the end of the file which do not start at the previous data block, 1636 * we will try to align the new blocks at stripe unit boundaries. 1637 * 1638 + * Returns 1 in bma->aeof if the file (fork) is empty as any new write will be 1639 * at, or past the EOF. 1640 */ 1641 STATIC int ··· 1650 bma->aeof = 0; 1651 error = xfs_bmap_last_extent(NULL, bma->ip, whichfork, &rec, 1652 &is_empty); 1653 + if (error) 1654 return error; 1655 + 1656 + if (is_empty) { 1657 + bma->aeof = 1; 1658 + return 0; 1659 + } 1660 1661 /* 1662 * Check if we are allocation or past the last extent, or at least into ··· 3643 int isaligned; 3644 int tryagain; 3645 int error; 3646 + int stripe_align; 3647 3648 ASSERT(ap->length); 3649 3650 mp = ap->ip->i_mount; 3651 + 3652 + /* stripe alignment for allocation is determined by mount parameters */ 3653 + stripe_align = 0; 3654 + if (mp->m_swidth && (mp->m_flags & XFS_MOUNT_SWALLOC)) 3655 + stripe_align = mp->m_swidth; 3656 + else if (mp->m_dalign) 3657 + stripe_align = mp->m_dalign; 3658 + 3659 align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0; 3660 if (unlikely(align)) { 3661 error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev, ··· 3655 ASSERT(!error); 3656 ASSERT(ap->length); 3657 } 3658 + 3659 + 3660 nullfb = *ap->firstblock == NULLFSBLOCK; 3661 fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock); 3662 if (nullfb) { ··· 3730 */ 3731 if (!ap->flist->xbf_low && ap->aeof) { 3732 if (!ap->offset) { 3733 + args.alignment = stripe_align; 3734 atype = args.type; 3735 isaligned = 1; 3736 /* ··· 3755 * of minlen+alignment+slop doesn't go up 3756 * between the calls. 3757 */ 3758 + if (blen > stripe_align && blen <= args.maxlen) 3759 + nextminlen = blen - stripe_align; 3760 else 3761 nextminlen = args.minlen; 3762 + if (nextminlen + stripe_align > args.minlen + 1) 3763 args.minalignslop = 3764 + nextminlen + stripe_align - 3765 args.minlen - 1; 3766 else 3767 args.minalignslop = 0; ··· 3783 */ 3784 args.type = atype; 3785 args.fsbno = ap->blkno; 3786 + args.alignment = stripe_align; 3787 args.minlen = nextminlen; 3788 args.minalignslop = 0; 3789 isaligned = 1;
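Note on the xfs_bmap.c change above: the new stripe_align local picks the stripe width when the swalloc mount option is in effect, otherwise the stripe unit, and all later alignment decisions use that single value (the hunk also sets bma->aeof for an empty fork instead of bailing out). A small sketch of just the selection logic; the struct fields and MOUNT_SWALLOC bit below are invented stand-ins for the mount structure:

    #include <stdio.h>

    #define MOUNT_SWALLOC 0x1       /* illustrative stand-in for XFS_MOUNT_SWALLOC */

    struct mount_geom {
            unsigned int swidth;    /* stripe width, in fs blocks */
            unsigned int dalign;    /* stripe unit, in fs blocks  */
            unsigned int flags;
    };

    /* Mirrors the stripe_align selection added to xfs_bmap_btalloc(). */
    static unsigned int pick_stripe_align(const struct mount_geom *m)
    {
            if (m->swidth && (m->flags & MOUNT_SWALLOC))
                    return m->swidth;
            if (m->dalign)
                    return m->dalign;
            return 0;
    }

    int main(void)
    {
            struct mount_geom g = { .swidth = 64, .dalign = 16, .flags = 0 };

            printf("align = %u\n", pick_stripe_align(&g));            /* 16 */
            g.flags |= MOUNT_SWALLOC;
            printf("align = %u (swalloc)\n", pick_stripe_align(&g));  /* 64 */
            return 0;
    }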
+12 -2
fs/xfs/xfs_bmap_util.c
··· 1187 XFS_BUF_UNWRITE(bp); 1188 XFS_BUF_READ(bp); 1189 XFS_BUF_SET_ADDR(bp, xfs_fsb_to_db(ip, imap.br_startblock)); 1190 - xfsbdstrat(mp, bp); 1191 error = xfs_buf_iowait(bp); 1192 if (error) { 1193 xfs_buf_ioerror_alert(bp, ··· 1205 XFS_BUF_UNDONE(bp); 1206 XFS_BUF_UNREAD(bp); 1207 XFS_BUF_WRITE(bp); 1208 - xfsbdstrat(mp, bp); 1209 error = xfs_buf_iowait(bp); 1210 if (error) { 1211 xfs_buf_ioerror_alert(bp,
··· 1187 XFS_BUF_UNWRITE(bp); 1188 XFS_BUF_READ(bp); 1189 XFS_BUF_SET_ADDR(bp, xfs_fsb_to_db(ip, imap.br_startblock)); 1190 + 1191 + if (XFS_FORCED_SHUTDOWN(mp)) { 1192 + error = XFS_ERROR(EIO); 1193 + break; 1194 + } 1195 + xfs_buf_iorequest(bp); 1196 error = xfs_buf_iowait(bp); 1197 if (error) { 1198 xfs_buf_ioerror_alert(bp, ··· 1200 XFS_BUF_UNDONE(bp); 1201 XFS_BUF_UNREAD(bp); 1202 XFS_BUF_WRITE(bp); 1203 + 1204 + if (XFS_FORCED_SHUTDOWN(mp)) { 1205 + error = XFS_ERROR(EIO); 1206 + break; 1207 + } 1208 + xfs_buf_iorequest(bp); 1209 error = xfs_buf_iowait(bp); 1210 if (error) { 1211 xfs_buf_ioerror_alert(bp,
+14 -23
fs/xfs/xfs_buf.c
··· 698 bp->b_flags |= XBF_READ; 699 bp->b_ops = ops; 700 701 - xfsbdstrat(target->bt_mount, bp); 702 xfs_buf_iowait(bp); 703 return bp; 704 } ··· 1093 * This is meant for userdata errors; metadata bufs come with 1094 * iodone functions attached, so that we can track down errors. 1095 */ 1096 - STATIC int 1097 xfs_bioerror_relse( 1098 struct xfs_buf *bp) 1099 { ··· 1156 ASSERT(xfs_buf_islocked(bp)); 1157 1158 bp->b_flags |= XBF_WRITE; 1159 - bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q); 1160 1161 xfs_bdstrat_cb(bp); 1162 ··· 1166 SHUTDOWN_META_IO_ERROR); 1167 } 1168 return error; 1169 - } 1170 - 1171 - /* 1172 - * Wrapper around bdstrat so that we can stop data from going to disk in case 1173 - * we are shutting down the filesystem. Typically user data goes thru this 1174 - * path; one of the exceptions is the superblock. 1175 - */ 1176 - void 1177 - xfsbdstrat( 1178 - struct xfs_mount *mp, 1179 - struct xfs_buf *bp) 1180 - { 1181 - if (XFS_FORCED_SHUTDOWN(mp)) { 1182 - trace_xfs_bdstrat_shut(bp, _RET_IP_); 1183 - xfs_bioerror_relse(bp); 1184 - return; 1185 - } 1186 - 1187 - xfs_buf_iorequest(bp); 1188 } 1189 1190 STATIC void ··· 1501 struct xfs_buf *bp; 1502 bp = list_first_entry(&dispose, struct xfs_buf, b_lru); 1503 list_del_init(&bp->b_lru); 1504 xfs_buf_rele(bp); 1505 } 1506 if (loop++ != 0) ··· 1790 1791 blk_start_plug(&plug); 1792 list_for_each_entry_safe(bp, n, io_list, b_list) { 1793 - bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC); 1794 bp->b_flags |= XBF_WRITE; 1795 1796 if (!wait) {
··· 698 bp->b_flags |= XBF_READ; 699 bp->b_ops = ops; 700 701 + if (XFS_FORCED_SHUTDOWN(target->bt_mount)) { 702 + xfs_buf_relse(bp); 703 + return NULL; 704 + } 705 + xfs_buf_iorequest(bp); 706 xfs_buf_iowait(bp); 707 return bp; 708 } ··· 1089 * This is meant for userdata errors; metadata bufs come with 1090 * iodone functions attached, so that we can track down errors. 1091 */ 1092 + int 1093 xfs_bioerror_relse( 1094 struct xfs_buf *bp) 1095 { ··· 1152 ASSERT(xfs_buf_islocked(bp)); 1153 1154 bp->b_flags |= XBF_WRITE; 1155 + bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q | XBF_WRITE_FAIL); 1156 1157 xfs_bdstrat_cb(bp); 1158 ··· 1162 SHUTDOWN_META_IO_ERROR); 1163 } 1164 return error; 1165 } 1166 1167 STATIC void ··· 1516 struct xfs_buf *bp; 1517 bp = list_first_entry(&dispose, struct xfs_buf, b_lru); 1518 list_del_init(&bp->b_lru); 1519 + if (bp->b_flags & XBF_WRITE_FAIL) { 1520 + xfs_alert(btp->bt_mount, 1521 + "Corruption Alert: Buffer at block 0x%llx had permanent write failures!\n" 1522 + "Please run xfs_repair to determine the extent of the problem.", 1523 + (long long)bp->b_bn); 1524 + } 1525 xfs_buf_rele(bp); 1526 } 1527 if (loop++ != 0) ··· 1799 1800 blk_start_plug(&plug); 1801 list_for_each_entry_safe(bp, n, io_list, b_list) { 1802 + bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL); 1803 bp->b_flags |= XBF_WRITE; 1804 1805 if (!wait) {
+7 -4
fs/xfs/xfs_buf.h
··· 45 #define XBF_ASYNC (1 << 4) /* initiator will not wait for completion */ 46 #define XBF_DONE (1 << 5) /* all pages in the buffer uptodate */ 47 #define XBF_STALE (1 << 6) /* buffer has been staled, do not find it */ 48 49 /* I/O hints for the BIO layer */ 50 #define XBF_SYNCIO (1 << 10)/* treat this buffer as synchronous I/O */ ··· 71 { XBF_ASYNC, "ASYNC" }, \ 72 { XBF_DONE, "DONE" }, \ 73 { XBF_STALE, "STALE" }, \ 74 { XBF_SYNCIO, "SYNCIO" }, \ 75 { XBF_FUA, "FUA" }, \ 76 { XBF_FLUSH, "FLUSH" }, \ ··· 81 { _XBF_KMEM, "KMEM" }, \ 82 { _XBF_DELWRI_Q, "DELWRI_Q" }, \ 83 { _XBF_COMPOUND, "COMPOUND" } 84 85 /* 86 * Internal state flags. ··· 272 273 /* Buffer Read and Write Routines */ 274 extern int xfs_bwrite(struct xfs_buf *bp); 275 - 276 - extern void xfsbdstrat(struct xfs_mount *, struct xfs_buf *); 277 - 278 extern void xfs_buf_ioend(xfs_buf_t *, int); 279 extern void xfs_buf_ioerror(xfs_buf_t *, int); 280 extern void xfs_buf_ioerror_alert(struct xfs_buf *, const char *func); ··· 281 xfs_buf_rw_t); 282 #define xfs_buf_zero(bp, off, len) \ 283 xfs_buf_iomove((bp), (off), (len), NULL, XBRW_ZERO) 284 285 static inline int xfs_buf_geterror(xfs_buf_t *bp) 286 { ··· 303 304 #define XFS_BUF_ZEROFLAGS(bp) \ 305 ((bp)->b_flags &= ~(XBF_READ|XBF_WRITE|XBF_ASYNC| \ 306 - XBF_SYNCIO|XBF_FUA|XBF_FLUSH)) 307 308 void xfs_buf_stale(struct xfs_buf *bp); 309 #define XFS_BUF_UNSTALE(bp) ((bp)->b_flags &= ~XBF_STALE)
··· 45 #define XBF_ASYNC (1 << 4) /* initiator will not wait for completion */ 46 #define XBF_DONE (1 << 5) /* all pages in the buffer uptodate */ 47 #define XBF_STALE (1 << 6) /* buffer has been staled, do not find it */ 48 + #define XBF_WRITE_FAIL (1 << 24)/* async writes have failed on this buffer */ 49 50 /* I/O hints for the BIO layer */ 51 #define XBF_SYNCIO (1 << 10)/* treat this buffer as synchronous I/O */ ··· 70 { XBF_ASYNC, "ASYNC" }, \ 71 { XBF_DONE, "DONE" }, \ 72 { XBF_STALE, "STALE" }, \ 73 + { XBF_WRITE_FAIL, "WRITE_FAIL" }, \ 74 { XBF_SYNCIO, "SYNCIO" }, \ 75 { XBF_FUA, "FUA" }, \ 76 { XBF_FLUSH, "FLUSH" }, \ ··· 79 { _XBF_KMEM, "KMEM" }, \ 80 { _XBF_DELWRI_Q, "DELWRI_Q" }, \ 81 { _XBF_COMPOUND, "COMPOUND" } 82 + 83 84 /* 85 * Internal state flags. ··· 269 270 /* Buffer Read and Write Routines */ 271 extern int xfs_bwrite(struct xfs_buf *bp); 272 extern void xfs_buf_ioend(xfs_buf_t *, int); 273 extern void xfs_buf_ioerror(xfs_buf_t *, int); 274 extern void xfs_buf_ioerror_alert(struct xfs_buf *, const char *func); ··· 281 xfs_buf_rw_t); 282 #define xfs_buf_zero(bp, off, len) \ 283 xfs_buf_iomove((bp), (off), (len), NULL, XBRW_ZERO) 284 + 285 + extern int xfs_bioerror_relse(struct xfs_buf *); 286 287 static inline int xfs_buf_geterror(xfs_buf_t *bp) 288 { ··· 301 302 #define XFS_BUF_ZEROFLAGS(bp) \ 303 ((bp)->b_flags &= ~(XBF_READ|XBF_WRITE|XBF_ASYNC| \ 304 + XBF_SYNCIO|XBF_FUA|XBF_FLUSH| \ 305 + XBF_WRITE_FAIL)) 306 307 void xfs_buf_stale(struct xfs_buf *bp); 308 #define XFS_BUF_UNSTALE(bp) ((bp)->b_flags &= ~XBF_STALE)
+19 -2
fs/xfs/xfs_buf_item.c
··· 496 } 497 } 498 499 STATIC uint 500 xfs_buf_item_push( 501 struct xfs_log_item *lip, ··· 531 ASSERT(!(bip->bli_flags & XFS_BLI_STALE)); 532 533 trace_xfs_buf_item_push(bip); 534 535 if (!xfs_buf_delwri_queue(bp, buffer_list)) 536 rval = XFS_ITEM_FLUSHING; ··· 1112 1113 xfs_buf_ioerror(bp, 0); /* errno of 0 unsets the flag */ 1114 1115 - if (!XFS_BUF_ISSTALE(bp)) { 1116 - bp->b_flags |= XBF_WRITE | XBF_ASYNC | XBF_DONE; 1117 xfs_buf_iorequest(bp); 1118 } else { 1119 xfs_buf_relse(bp);
··· 496 } 497 } 498 499 + /* 500 + * Buffer IO error rate limiting. Limit it to no more than 10 messages per 30 501 + * seconds so as to not spam logs too much on repeated detection of the same 502 + * buffer being bad.. 503 + */ 504 + 505 + DEFINE_RATELIMIT_STATE(xfs_buf_write_fail_rl_state, 30 * HZ, 10); 506 + 507 STATIC uint 508 xfs_buf_item_push( 509 struct xfs_log_item *lip, ··· 523 ASSERT(!(bip->bli_flags & XFS_BLI_STALE)); 524 525 trace_xfs_buf_item_push(bip); 526 + 527 + /* has a previous flush failed due to IO errors? */ 528 + if ((bp->b_flags & XBF_WRITE_FAIL) && 529 + ___ratelimit(&xfs_buf_write_fail_rl_state, "XFS:")) { 530 + xfs_warn(bp->b_target->bt_mount, 531 + "Detected failing async write on buffer block 0x%llx. Retrying async write.\n", 532 + (long long)bp->b_bn); 533 + } 534 535 if (!xfs_buf_delwri_queue(bp, buffer_list)) 536 rval = XFS_ITEM_FLUSHING; ··· 1096 1097 xfs_buf_ioerror(bp, 0); /* errno of 0 unsets the flag */ 1098 1099 + if (!(bp->b_flags & (XBF_STALE|XBF_WRITE_FAIL))) { 1100 + bp->b_flags |= XBF_WRITE | XBF_ASYNC | 1101 + XBF_DONE | XBF_WRITE_FAIL; 1102 xfs_buf_iorequest(bp); 1103 } else { 1104 xfs_buf_relse(bp);
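The warning added to xfs_buf_item_push() is gated by DEFINE_RATELIMIT_STATE(xfs_buf_write_fail_rl_state, 30 * HZ, 10), i.e. at most ten messages per 30-second window. The sketch below is a minimal userspace model of that window/burst idea only; struct ratelimit_state and ratelimit_ok() here are simplified stand-ins and do not reflect the kernel's ___ratelimit() internals (locking, suppressed-message accounting, and so on).

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct ratelimit_state {
        time_t interval;        /* window length, seconds      */
        int    burst;           /* messages allowed per window */
        time_t begin;           /* start of the current window */
        int    printed;         /* messages emitted so far     */
};

#define RATELIMIT_STATE_INIT(i, b) { .interval = (i), .burst = (b) }

static bool ratelimit_ok(struct ratelimit_state *rs)
{
        time_t now = time(NULL);

        if (rs->begin == 0 || now - rs->begin >= rs->interval) {
                rs->begin = now;        /* open a new window */
                rs->printed = 0;
        }
        if (rs->printed >= rs->burst)
                return false;           /* suppress until the window rolls over */
        rs->printed++;
        return true;
}

int main(void)
{
        struct ratelimit_state rs = RATELIMIT_STATE_INIT(30, 10);

        for (int i = 0; i < 25; i++)
                if (ratelimit_ok(&rs))
                        printf("warning %d: retrying failed async write\n", i);
        /* only the first 10 messages appear; the rest are dropped */
        return 0;
}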
+13 -13
fs/xfs/xfs_dir2_node.c
··· 2067 */ 2068 int /* error */ 2069 xfs_dir2_node_removename( 2070 - xfs_da_args_t *args) /* operation arguments */ 2071 { 2072 - xfs_da_state_blk_t *blk; /* leaf block */ 2073 int error; /* error return value */ 2074 int rval; /* operation return value */ 2075 - xfs_da_state_t *state; /* btree cursor */ 2076 2077 trace_xfs_dir2_node_removename(args); 2078 ··· 2084 state->mp = args->dp->i_mount; 2085 state->blocksize = state->mp->m_dirblksize; 2086 state->node_ents = state->mp->m_dir_node_ents; 2087 - /* 2088 - * Look up the entry we're deleting, set up the cursor. 2089 - */ 2090 error = xfs_da3_node_lookup_int(state, &rval); 2091 if (error) 2092 - rval = error; 2093 - /* 2094 - * Didn't find it, upper layer screwed up. 2095 - */ 2096 if (rval != EEXIST) { 2097 - xfs_da_state_free(state); 2098 - return rval; 2099 } 2100 blk = &state->path.blk[state->path.active - 1]; 2101 ASSERT(blk->magic == XFS_DIR2_LEAFN_MAGIC); 2102 ASSERT(state->extravalid); ··· 2106 error = xfs_dir2_leafn_remove(args, blk->bp, blk->index, 2107 &state->extrablk, &rval); 2108 if (error) 2109 - return error; 2110 /* 2111 * Fix the hash values up the btree. 2112 */ ··· 2121 */ 2122 if (!error) 2123 error = xfs_dir2_node_to_leaf(state); 2124 xfs_da_state_free(state); 2125 return error; 2126 }
··· 2067 */ 2068 int /* error */ 2069 xfs_dir2_node_removename( 2070 + struct xfs_da_args *args) /* operation arguments */ 2071 { 2072 + struct xfs_da_state_blk *blk; /* leaf block */ 2073 int error; /* error return value */ 2074 int rval; /* operation return value */ 2075 + struct xfs_da_state *state; /* btree cursor */ 2076 2077 trace_xfs_dir2_node_removename(args); 2078 ··· 2084 state->mp = args->dp->i_mount; 2085 state->blocksize = state->mp->m_dirblksize; 2086 state->node_ents = state->mp->m_dir_node_ents; 2087 + 2088 + /* Look up the entry we're deleting, set up the cursor. */ 2089 error = xfs_da3_node_lookup_int(state, &rval); 2090 if (error) 2091 + goto out_free; 2092 + 2093 + /* Didn't find it, upper layer screwed up. */ 2094 if (rval != EEXIST) { 2095 + error = rval; 2096 + goto out_free; 2097 } 2098 + 2099 blk = &state->path.blk[state->path.active - 1]; 2100 ASSERT(blk->magic == XFS_DIR2_LEAFN_MAGIC); 2101 ASSERT(state->extravalid); ··· 2107 error = xfs_dir2_leafn_remove(args, blk->bp, blk->index, 2108 &state->extrablk, &rval); 2109 if (error) 2110 + goto out_free; 2111 /* 2112 * Fix the hash values up the btree. 2113 */ ··· 2122 */ 2123 if (!error) 2124 error = xfs_dir2_node_to_leaf(state); 2125 + out_free: 2126 xfs_da_state_free(state); 2127 return error; 2128 }
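The xfs_dir2_node_removename() rework above replaces early returns that leaked the da_state with a single out_free label, so the state is freed on every path. A generic sketch of that single-exit cleanup pattern follows; parse_record(), lookup_step() and remove_step() are hypothetical helpers, not XFS functions.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct state { char *buf; };

static int lookup_step(const char *key)
{
        return strlen(key) ? 0 : -ENOENT;       /* pretend empty keys don't exist */
}

static int remove_step(struct state *s, const char *key)
{
        snprintf(s->buf, 64, "removed %s", key);
        return 0;
}

static int parse_record(const char *key)
{
        struct state *s;
        int error;

        s = calloc(1, sizeof(*s));
        if (!s)
                return -ENOMEM;
        s->buf = malloc(64);
        if (!s->buf) {
                error = -ENOMEM;
                goto out_free;          /* single exit: no duplicated free calls */
        }

        error = lookup_step(key);
        if (error)
                goto out_free;

        error = remove_step(s, key);

out_free:
        free(s->buf);                   /* free(NULL) is harmless */
        free(s);
        return error;
}

int main(void)
{
        printf("\"abc\" -> %d\n", parse_record("abc"));
        printf("\"\"    -> %d\n", parse_record(""));
        return 0;
}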
+2 -1
fs/xfs/xfs_iops.c
··· 618 } 619 if (!gid_eq(igid, gid)) { 620 if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) { 621 - ASSERT(!XFS_IS_PQUOTA_ON(mp)); 622 ASSERT(mask & ATTR_GID); 623 ASSERT(gdqp); 624 olddquot2 = xfs_qm_vop_chown(tp, ip,
··· 618 } 619 if (!gid_eq(igid, gid)) { 620 if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) { 621 + ASSERT(xfs_sb_version_has_pquotino(&mp->m_sb) || 622 + !XFS_IS_PQUOTA_ON(mp)); 623 ASSERT(mask & ATTR_GID); 624 ASSERT(gdqp); 625 olddquot2 = xfs_qm_vop_chown(tp, ip,
+11 -2
fs/xfs/xfs_log_recover.c
··· 193 bp->b_io_length = nbblks; 194 bp->b_error = 0; 195 196 - xfsbdstrat(log->l_mp, bp); 197 error = xfs_buf_iowait(bp); 198 if (error) 199 xfs_buf_ioerror_alert(bp, __func__); ··· 4400 XFS_BUF_READ(bp); 4401 XFS_BUF_UNASYNC(bp); 4402 bp->b_ops = &xfs_sb_buf_ops; 4403 - xfsbdstrat(log->l_mp, bp); 4404 error = xfs_buf_iowait(bp); 4405 if (error) { 4406 xfs_buf_ioerror_alert(bp, __func__);
··· 193 bp->b_io_length = nbblks; 194 bp->b_error = 0; 195 196 + if (XFS_FORCED_SHUTDOWN(log->l_mp)) 197 + return XFS_ERROR(EIO); 198 + 199 + xfs_buf_iorequest(bp); 200 error = xfs_buf_iowait(bp); 201 if (error) 202 xfs_buf_ioerror_alert(bp, __func__); ··· 4397 XFS_BUF_READ(bp); 4398 XFS_BUF_UNASYNC(bp); 4399 bp->b_ops = &xfs_sb_buf_ops; 4400 + 4401 + if (XFS_FORCED_SHUTDOWN(log->l_mp)) { 4402 + xfs_buf_relse(bp); 4403 + return XFS_ERROR(EIO); 4404 + } 4405 + 4406 + xfs_buf_iorequest(bp); 4407 error = xfs_buf_iowait(bp); 4408 if (error) { 4409 xfs_buf_ioerror_alert(bp, __func__);
+53 -27
fs/xfs/xfs_qm.c
··· 134 { 135 struct xfs_mount *mp = dqp->q_mount; 136 struct xfs_quotainfo *qi = mp->m_quotainfo; 137 - struct xfs_dquot *gdqp = NULL; 138 - struct xfs_dquot *pdqp = NULL; 139 140 xfs_dqlock(dqp); 141 if ((dqp->dq_flags & XFS_DQ_FREEING) || dqp->q_nrefs != 0) { 142 xfs_dqunlock(dqp); 143 return EAGAIN; 144 - } 145 - 146 - /* 147 - * If this quota has a hint attached, prepare for releasing it now. 148 - */ 149 - gdqp = dqp->q_gdquot; 150 - if (gdqp) { 151 - xfs_dqlock(gdqp); 152 - dqp->q_gdquot = NULL; 153 - } 154 - 155 - pdqp = dqp->q_pdquot; 156 - if (pdqp) { 157 - xfs_dqlock(pdqp); 158 - dqp->q_pdquot = NULL; 159 } 160 161 dqp->dq_flags |= XFS_DQ_FREEING; ··· 189 XFS_STATS_DEC(xs_qm_dquot_unused); 190 191 xfs_qm_dqdestroy(dqp); 192 193 if (gdqp) 194 - xfs_qm_dqput(gdqp); 195 if (pdqp) 196 - xfs_qm_dqput(pdqp); 197 return 0; 198 } 199 ··· 241 struct xfs_mount *mp, 242 uint flags) 243 { 244 - if (flags & XFS_QMOPT_UQUOTA) 245 - xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge, NULL); 246 if (flags & XFS_QMOPT_GQUOTA) 247 xfs_qm_dquot_walk(mp, XFS_DQ_GROUP, xfs_qm_dqpurge, NULL); 248 if (flags & XFS_QMOPT_PQUOTA) ··· 2111 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 2112 ASSERT(XFS_IS_QUOTA_RUNNING(mp)); 2113 2114 - if (udqp) { 2115 ASSERT(ip->i_udquot == NULL); 2116 - ASSERT(XFS_IS_UQUOTA_ON(mp)); 2117 ASSERT(ip->i_d.di_uid == be32_to_cpu(udqp->q_core.d_id)); 2118 2119 ip->i_udquot = xfs_qm_dqhold(udqp); 2120 xfs_trans_mod_dquot(tp, udqp, XFS_TRANS_DQ_ICOUNT, 1); 2121 } 2122 - if (gdqp) { 2123 ASSERT(ip->i_gdquot == NULL); 2124 - ASSERT(XFS_IS_GQUOTA_ON(mp)); 2125 ASSERT(ip->i_d.di_gid == be32_to_cpu(gdqp->q_core.d_id)); 2126 ip->i_gdquot = xfs_qm_dqhold(gdqp); 2127 xfs_trans_mod_dquot(tp, gdqp, XFS_TRANS_DQ_ICOUNT, 1); 2128 } 2129 - if (pdqp) { 2130 ASSERT(ip->i_pdquot == NULL); 2131 - ASSERT(XFS_IS_PQUOTA_ON(mp)); 2132 ASSERT(xfs_get_projid(ip) == be32_to_cpu(pdqp->q_core.d_id)); 2133 2134 ip->i_pdquot = xfs_qm_dqhold(pdqp);
··· 134 { 135 struct xfs_mount *mp = dqp->q_mount; 136 struct xfs_quotainfo *qi = mp->m_quotainfo; 137 138 xfs_dqlock(dqp); 139 if ((dqp->dq_flags & XFS_DQ_FREEING) || dqp->q_nrefs != 0) { 140 xfs_dqunlock(dqp); 141 return EAGAIN; 142 } 143 144 dqp->dq_flags |= XFS_DQ_FREEING; ··· 206 XFS_STATS_DEC(xs_qm_dquot_unused); 207 208 xfs_qm_dqdestroy(dqp); 209 + return 0; 210 + } 211 + 212 + /* 213 + * Release the group or project dquot pointers the user dquots maybe carrying 214 + * around as a hint, and proceed to purge the user dquot cache if requested. 215 + */ 216 + STATIC int 217 + xfs_qm_dqpurge_hints( 218 + struct xfs_dquot *dqp, 219 + void *data) 220 + { 221 + struct xfs_dquot *gdqp = NULL; 222 + struct xfs_dquot *pdqp = NULL; 223 + uint flags = *((uint *)data); 224 + 225 + xfs_dqlock(dqp); 226 + if (dqp->dq_flags & XFS_DQ_FREEING) { 227 + xfs_dqunlock(dqp); 228 + return EAGAIN; 229 + } 230 + 231 + /* If this quota has a hint attached, prepare for releasing it now */ 232 + gdqp = dqp->q_gdquot; 233 + if (gdqp) 234 + dqp->q_gdquot = NULL; 235 + 236 + pdqp = dqp->q_pdquot; 237 + if (pdqp) 238 + dqp->q_pdquot = NULL; 239 + 240 + xfs_dqunlock(dqp); 241 242 if (gdqp) 243 + xfs_qm_dqrele(gdqp); 244 if (pdqp) 245 + xfs_qm_dqrele(pdqp); 246 + 247 + if (flags & XFS_QMOPT_UQUOTA) 248 + return xfs_qm_dqpurge(dqp, NULL); 249 + 250 return 0; 251 } 252 ··· 222 struct xfs_mount *mp, 223 uint flags) 224 { 225 + /* 226 + * We have to release group/project dquot hint(s) from the user dquot 227 + * at first if they are there, otherwise we would run into an infinite 228 + * loop while walking through radix tree to purge other type of dquots 229 + * since their refcount is not zero if the user dquot refers to them 230 + * as hint. 231 + * 232 + * Call the special xfs_qm_dqpurge_hints() will end up go through the 233 + * general xfs_qm_dqpurge() against user dquot cache if requested. 234 + */ 235 + xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge_hints, &flags); 236 + 237 if (flags & XFS_QMOPT_GQUOTA) 238 xfs_qm_dquot_walk(mp, XFS_DQ_GROUP, xfs_qm_dqpurge, NULL); 239 if (flags & XFS_QMOPT_PQUOTA) ··· 2082 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 2083 ASSERT(XFS_IS_QUOTA_RUNNING(mp)); 2084 2085 + if (udqp && XFS_IS_UQUOTA_ON(mp)) { 2086 ASSERT(ip->i_udquot == NULL); 2087 ASSERT(ip->i_d.di_uid == be32_to_cpu(udqp->q_core.d_id)); 2088 2089 ip->i_udquot = xfs_qm_dqhold(udqp); 2090 xfs_trans_mod_dquot(tp, udqp, XFS_TRANS_DQ_ICOUNT, 1); 2091 } 2092 + if (gdqp && XFS_IS_GQUOTA_ON(mp)) { 2093 ASSERT(ip->i_gdquot == NULL); 2094 ASSERT(ip->i_d.di_gid == be32_to_cpu(gdqp->q_core.d_id)); 2095 ip->i_gdquot = xfs_qm_dqhold(gdqp); 2096 xfs_trans_mod_dquot(tp, gdqp, XFS_TRANS_DQ_ICOUNT, 1); 2097 } 2098 + if (pdqp && XFS_IS_PQUOTA_ON(mp)) { 2099 ASSERT(ip->i_pdquot == NULL); 2100 ASSERT(xfs_get_projid(ip) == be32_to_cpu(pdqp->q_core.d_id)); 2101 2102 ip->i_pdquot = xfs_qm_dqhold(pdqp);
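The new xfs_qm_dqpurge_hints() exists because a group or project dquot cannot be purged while user dquots still hold a reference to it through their hint pointers, so the hints have to be dropped first. A toy refcount model of that ordering is below; struct dquot, purge() and drop_hint() here are invented for illustration and are not the real XFS structures.

#include <stdio.h>

struct dquot {
        const char   *name;
        int           refs;
        struct dquot *hint;     /* e.g. user dquot -> group dquot */
};

static int purge(struct dquot *dq)
{
        return dq->refs ? -1 : 0;       /* a referenced dquot cannot be freed */
}

static void drop_hint(struct dquot *dq)
{
        if (dq->hint) {
                dq->hint->refs--;       /* release the hinted dquot */
                dq->hint = NULL;
        }
}

int main(void)
{
        struct dquot group = { .name = "group", .refs = 1 };
        struct dquot user  = { .name = "user",  .refs = 0, .hint = &group };

        /* Purging the group cache first fails: the user dquot still pins it. */
        printf("purge group first: %s\n", purge(&group) ? "EAGAIN" : "ok");

        /* Walk the user dquots, drop their hints, then purge both caches. */
        drop_hint(&user);
        printf("purge user:        %s\n", purge(&user)  ? "EAGAIN" : "ok");
        printf("purge group:       %s\n", purge(&group) ? "EAGAIN" : "ok");
        return 0;
}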
+12 -1
fs/xfs/xfs_trans_buf.c
··· 314 ASSERT(bp->b_iodone == NULL); 315 XFS_BUF_READ(bp); 316 bp->b_ops = ops; 317 - xfsbdstrat(tp->t_mountp, bp); 318 error = xfs_buf_iowait(bp); 319 if (error) { 320 xfs_buf_ioerror_alert(bp, __func__);
··· 314 ASSERT(bp->b_iodone == NULL); 315 XFS_BUF_READ(bp); 316 bp->b_ops = ops; 317 + 318 + /* 319 + * XXX(hch): clean up the error handling here to be less 320 + * of a mess.. 321 + */ 322 + if (XFS_FORCED_SHUTDOWN(mp)) { 323 + trace_xfs_bdstrat_shut(bp, _RET_IP_); 324 + xfs_bioerror_relse(bp); 325 + } else { 326 + xfs_buf_iorequest(bp); 327 + } 328 + 329 error = xfs_buf_iowait(bp); 330 if (error) { 331 xfs_buf_ioerror_alert(bp, __func__);
+3 -4
include/asm-generic/pgtable.h
··· 217 #endif 218 219 #ifndef pte_accessible 220 - # define pte_accessible(pte) ((void)(pte),1) 221 #endif 222 223 #ifndef flush_tlb_fix_spurious_fault ··· 599 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 600 barrier(); 601 #endif 602 - if (pmd_none(pmdval)) 603 return 1; 604 if (unlikely(pmd_bad(pmdval))) { 605 - if (!pmd_trans_huge(pmdval)) 606 - pmd_clear_bad(pmd); 607 return 1; 608 } 609 return 0;
··· 217 #endif 218 219 #ifndef pte_accessible 220 + # define pte_accessible(mm, pte) ((void)(pte), 1) 221 #endif 222 223 #ifndef flush_tlb_fix_spurious_fault ··· 599 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 600 barrier(); 601 #endif 602 + if (pmd_none(pmdval) || pmd_trans_huge(pmdval)) 603 return 1; 604 if (unlikely(pmd_bad(pmdval))) { 605 + pmd_clear_bad(pmd); 606 return 1; 607 } 608 return 0;
+11 -24
include/asm-generic/preempt.h
··· 3 4 #include <linux/thread_info.h> 5 6 - /* 7 - * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users 8 - * that think a non-zero value indicates we cannot preempt. 9 - */ 10 static __always_inline int preempt_count(void) 11 { 12 - return current_thread_info()->preempt_count & ~PREEMPT_NEED_RESCHED; 13 } 14 15 static __always_inline int *preempt_count_ptr(void) ··· 15 return &current_thread_info()->preempt_count; 16 } 17 18 - /* 19 - * We now loose PREEMPT_NEED_RESCHED and cause an extra reschedule; however the 20 - * alternative is loosing a reschedule. Better schedule too often -- also this 21 - * should be a very rare operation. 22 - */ 23 static __always_inline void preempt_count_set(int pc) 24 { 25 *preempt_count_ptr() = pc; ··· 34 task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \ 35 } while (0) 36 37 - /* 38 - * We fold the NEED_RESCHED bit into the preempt count such that 39 - * preempt_enable() can decrement and test for needing to reschedule with a 40 - * single instruction. 41 - * 42 - * We invert the actual bit, so that when the decrement hits 0 we know we both 43 - * need to resched (the bit is cleared) and can resched (no preempt count). 44 - */ 45 - 46 static __always_inline void set_preempt_need_resched(void) 47 { 48 - *preempt_count_ptr() &= ~PREEMPT_NEED_RESCHED; 49 } 50 51 static __always_inline void clear_preempt_need_resched(void) 52 { 53 - *preempt_count_ptr() |= PREEMPT_NEED_RESCHED; 54 } 55 56 static __always_inline bool test_preempt_need_resched(void) 57 { 58 - return !(*preempt_count_ptr() & PREEMPT_NEED_RESCHED); 59 } 60 61 /* ··· 63 64 static __always_inline bool __preempt_count_dec_and_test(void) 65 { 66 - return !--*preempt_count_ptr(); 67 } 68 69 /* ··· 76 */ 77 static __always_inline bool should_resched(void) 78 { 79 - return unlikely(!*preempt_count_ptr()); 80 } 81 82 #ifdef CONFIG_PREEMPT
··· 3 4 #include <linux/thread_info.h> 5 6 + #define PREEMPT_ENABLED (0) 7 + 8 static __always_inline int preempt_count(void) 9 { 10 + return current_thread_info()->preempt_count; 11 } 12 13 static __always_inline int *preempt_count_ptr(void) ··· 17 return &current_thread_info()->preempt_count; 18 } 19 20 static __always_inline void preempt_count_set(int pc) 21 { 22 *preempt_count_ptr() = pc; ··· 41 task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \ 42 } while (0) 43 44 static __always_inline void set_preempt_need_resched(void) 45 { 46 } 47 48 static __always_inline void clear_preempt_need_resched(void) 49 { 50 } 51 52 static __always_inline bool test_preempt_need_resched(void) 53 { 54 + return false; 55 } 56 57 /* ··· 81 82 static __always_inline bool __preempt_count_dec_and_test(void) 83 { 84 + /* 85 + * Because load-store architectures cannot do per-cpu atomic 86 + * operations, we cannot use PREEMPT_NEED_RESCHED here; it might 87 + * get lost. 88 + */ 89 + return !--*preempt_count_ptr() && tif_need_resched(); 90 } 91 92 /* ··· 89 */ 90 static __always_inline bool should_resched(void) 91 { 92 + return unlikely(!preempt_count() && tif_need_resched()); 93 } 94 95 #ifdef CONFIG_PREEMPT
+1 -1
include/linux/lockref.h
··· 19 20 #define USE_CMPXCHG_LOCKREF \ 21 (IS_ENABLED(CONFIG_ARCH_USE_CMPXCHG_LOCKREF) && \ 22 - IS_ENABLED(CONFIG_SMP) && !BLOATED_SPINLOCKS) 23 24 struct lockref { 25 union {
··· 19 20 #define USE_CMPXCHG_LOCKREF \ 21 (IS_ENABLED(CONFIG_ARCH_USE_CMPXCHG_LOCKREF) && \ 22 + IS_ENABLED(CONFIG_SMP) && SPINLOCK_SIZE <= 4) 23 24 struct lockref { 25 union {
+30
include/linux/math64.h
··· 133 return ret; 134 } 135 136 #endif /* _LINUX_MATH64_H */
··· 133 return ret; 134 } 135 136 + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) 137 + 138 + #ifndef mul_u64_u32_shr 139 + static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) 140 + { 141 + return (u64)(((unsigned __int128)a * mul) >> shift); 142 + } 143 + #endif /* mul_u64_u32_shr */ 144 + 145 + #else 146 + 147 + #ifndef mul_u64_u32_shr 148 + static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) 149 + { 150 + u32 ah, al; 151 + u64 ret; 152 + 153 + al = a; 154 + ah = a >> 32; 155 + 156 + ret = ((u64)al * mul) >> shift; 157 + if (ah) 158 + ret += ((u64)ah * mul) << (32 - shift); 159 + 160 + return ret; 161 + } 162 + #endif /* mul_u64_u32_shr */ 163 + 164 + #endif 165 + 166 #endif /* _LINUX_MATH64_H */
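The fallback mul_u64_u32_shr() above splits the 64-bit operand into 32-bit halves so no intermediate product needs more than 64 bits. The standalone check below compares that split against the __int128 expression for a handful of inputs; it assumes a gcc/clang-style compiler that provides __int128 and sweeps shifts in 1..31, within the 0 < shift <= 32 range the scheduler ends up using.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t mul_u64_u32_shr_int128(uint64_t a, uint32_t mul, unsigned int shift)
{
        return (uint64_t)(((unsigned __int128)a * mul) >> shift);
}

static uint64_t mul_u64_u32_shr_split(uint64_t a, uint32_t mul, unsigned int shift)
{
        uint32_t ah = a >> 32, al = (uint32_t)a;
        uint64_t ret;

        ret = ((uint64_t)al * mul) >> shift;
        if (ah)
                ret += ((uint64_t)ah * mul) << (32 - shift);
        return ret;
}

int main(void)
{
        const uint64_t vals[] = { 1, 0xffffffffULL, 0x123456789abcdefULL };
        const uint32_t muls[] = { 3, 0x80000000u, 0xffffffffu };

        for (unsigned int shift = 1; shift < 32; shift++)
                for (int i = 0; i < 9; i++)
                        assert(mul_u64_u32_shr_split(vals[i / 3], muls[i % 3], shift) ==
                               mul_u64_u32_shr_int128(vals[i / 3], muls[i % 3], shift));

        puts("split fallback and __int128 path agree");
        return 0;
}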
+11 -1
include/linux/migrate.h
··· 55 struct page *newpage, struct page *page); 56 extern int migrate_page_move_mapping(struct address_space *mapping, 57 struct page *newpage, struct page *page, 58 - struct buffer_head *head, enum migrate_mode mode); 59 #else 60 61 static inline void putback_lru_pages(struct list_head *l) {} ··· 91 #endif /* CONFIG_MIGRATION */ 92 93 #ifdef CONFIG_NUMA_BALANCING 94 extern int migrate_misplaced_page(struct page *page, 95 struct vm_area_struct *vma, int node); 96 extern bool migrate_ratelimited(int node); 97 #else 98 static inline int migrate_misplaced_page(struct page *page, 99 struct vm_area_struct *vma, int node) 100 {
··· 55 struct page *newpage, struct page *page); 56 extern int migrate_page_move_mapping(struct address_space *mapping, 57 struct page *newpage, struct page *page, 58 + struct buffer_head *head, enum migrate_mode mode, 59 + int extra_count); 60 #else 61 62 static inline void putback_lru_pages(struct list_head *l) {} ··· 90 #endif /* CONFIG_MIGRATION */ 91 92 #ifdef CONFIG_NUMA_BALANCING 93 + extern bool pmd_trans_migrating(pmd_t pmd); 94 + extern void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd); 95 extern int migrate_misplaced_page(struct page *page, 96 struct vm_area_struct *vma, int node); 97 extern bool migrate_ratelimited(int node); 98 #else 99 + static inline bool pmd_trans_migrating(pmd_t pmd) 100 + { 101 + return false; 102 + } 103 + static inline void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd) 104 + { 105 + } 106 static inline int migrate_misplaced_page(struct page *page, 107 struct vm_area_struct *vma, int node) 108 {
+3 -3
include/linux/mm.h
··· 1317 #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */ 1318 1319 #if USE_SPLIT_PTE_PTLOCKS 1320 - #if BLOATED_SPINLOCKS 1321 extern bool ptlock_alloc(struct page *page); 1322 extern void ptlock_free(struct page *page); 1323 ··· 1325 { 1326 return page->ptl; 1327 } 1328 - #else /* BLOATED_SPINLOCKS */ 1329 static inline bool ptlock_alloc(struct page *page) 1330 { 1331 return true; ··· 1339 { 1340 return &page->ptl; 1341 } 1342 - #endif /* BLOATED_SPINLOCKS */ 1343 1344 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) 1345 {
··· 1317 #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */ 1318 1319 #if USE_SPLIT_PTE_PTLOCKS 1320 + #if ALLOC_SPLIT_PTLOCKS 1321 extern bool ptlock_alloc(struct page *page); 1322 extern void ptlock_free(struct page *page); 1323 ··· 1325 { 1326 return page->ptl; 1327 } 1328 + #else /* ALLOC_SPLIT_PTLOCKS */ 1329 static inline bool ptlock_alloc(struct page *page) 1330 { 1331 return true; ··· 1339 { 1340 return &page->ptl; 1341 } 1342 + #endif /* ALLOC_SPLIT_PTLOCKS */ 1343 1344 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) 1345 {
+51 -1
include/linux/mm_types.h
··· 26 #define USE_SPLIT_PTE_PTLOCKS (NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS) 27 #define USE_SPLIT_PMD_PTLOCKS (USE_SPLIT_PTE_PTLOCKS && \ 28 IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK)) 29 30 /* 31 * Each physical page in the system has a struct page associated with ··· 156 * system if PG_buddy is set. 157 */ 158 #if USE_SPLIT_PTE_PTLOCKS 159 - #if BLOATED_SPINLOCKS 160 spinlock_t *ptl; 161 #else 162 spinlock_t ptl; ··· 444 /* numa_scan_seq prevents two threads setting pte_numa */ 445 int numa_scan_seq; 446 #endif 447 struct uprobes_state uprobes_state; 448 }; 449 ··· 467 { 468 return mm->cpu_vm_mask_var; 469 } 470 471 #endif /* _LINUX_MM_TYPES_H */
··· 26 #define USE_SPLIT_PTE_PTLOCKS (NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS) 27 #define USE_SPLIT_PMD_PTLOCKS (USE_SPLIT_PTE_PTLOCKS && \ 28 IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK)) 29 + #define ALLOC_SPLIT_PTLOCKS (SPINLOCK_SIZE > BITS_PER_LONG/8) 30 31 /* 32 * Each physical page in the system has a struct page associated with ··· 155 * system if PG_buddy is set. 156 */ 157 #if USE_SPLIT_PTE_PTLOCKS 158 + #if ALLOC_SPLIT_PTLOCKS 159 spinlock_t *ptl; 160 #else 161 spinlock_t ptl; ··· 443 /* numa_scan_seq prevents two threads setting pte_numa */ 444 int numa_scan_seq; 445 #endif 446 + #if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION) 447 + /* 448 + * An operation with batched TLB flushing is going on. Anything that 449 + * can move process memory needs to flush the TLB when moving a 450 + * PROT_NONE or PROT_NUMA mapped page. 451 + */ 452 + bool tlb_flush_pending; 453 + #endif 454 struct uprobes_state uprobes_state; 455 }; 456 ··· 458 { 459 return mm->cpu_vm_mask_var; 460 } 461 + 462 + #if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION) 463 + /* 464 + * Memory barriers to keep this state in sync are graciously provided by 465 + * the page table locks, outside of which no page table modifications happen. 466 + * The barriers below prevent the compiler from re-ordering the instructions 467 + * around the memory barriers that are already present in the code. 468 + */ 469 + static inline bool mm_tlb_flush_pending(struct mm_struct *mm) 470 + { 471 + barrier(); 472 + return mm->tlb_flush_pending; 473 + } 474 + static inline void set_tlb_flush_pending(struct mm_struct *mm) 475 + { 476 + mm->tlb_flush_pending = true; 477 + 478 + /* 479 + * Guarantee that the tlb_flush_pending store does not leak into the 480 + * critical section updating the page tables 481 + */ 482 + smp_mb__before_spinlock(); 483 + } 484 + /* Clearing is done after a TLB flush, which also provides a barrier. */ 485 + static inline void clear_tlb_flush_pending(struct mm_struct *mm) 486 + { 487 + barrier(); 488 + mm->tlb_flush_pending = false; 489 + } 490 + #else 491 + static inline bool mm_tlb_flush_pending(struct mm_struct *mm) 492 + { 493 + return false; 494 + } 495 + static inline void set_tlb_flush_pending(struct mm_struct *mm) 496 + { 497 + } 498 + static inline void clear_tlb_flush_pending(struct mm_struct *mm) 499 + { 500 + } 501 + #endif 502 503 #endif /* _LINUX_MM_TYPES_H */
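The tlb_flush_pending field added above lets anything that moves process memory detect an in-flight protection change and flush the TLB first. The sketch below only shows the intended call sequence, single-threaded; the real ordering guarantees come from the page table lock and the barriers discussed in the hunk and are not modeled here. struct fake_mm and the migrate/change_protection helpers are invented stand-ins.

#include <stdbool.h>
#include <stdio.h>

struct fake_mm { bool tlb_flush_pending; };

static void set_tlb_flush_pending(struct fake_mm *mm)   { mm->tlb_flush_pending = true; }
static void clear_tlb_flush_pending(struct fake_mm *mm) { mm->tlb_flush_pending = false; }
static bool mm_tlb_flush_pending(struct fake_mm *mm)    { return mm->tlb_flush_pending; }

static void change_protection(struct fake_mm *mm)
{
        set_tlb_flush_pending(mm);      /* before touching the page tables */
        /* ... update PTEs under the page table lock ... */
        /* ... flush the TLB range ... */
        clear_tlb_flush_pending(mm);    /* only after the flush completed */
}

static void migrate_page(struct fake_mm *mm)
{
        /* A racing mover must flush first if a protection change is in flight. */
        if (mm_tlb_flush_pending(mm))
                printf("flush TLB range before migrating\n");
        else
                printf("no pending flush, migrate directly\n");
}

int main(void)
{
        struct fake_mm mm = { .tlb_flush_pending = false };

        migrate_page(&mm);              /* nothing pending yet */
        set_tlb_flush_pending(&mm);     /* pretend mprotect is mid-flight */
        migrate_page(&mm);              /* must flush first */
        clear_tlb_flush_pending(&mm);
        change_protection(&mm);         /* the full set -> update -> flush -> clear cycle */
        migrate_page(&mm);
        return 0;
}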
+3
include/linux/pstore.h
··· 51 char *buf; 52 size_t bufsize; 53 struct mutex read_mutex; /* serialize open/read/close */ 54 int (*open)(struct pstore_info *psi); 55 int (*close)(struct pstore_info *psi); 56 ssize_t (*read)(u64 *id, enum pstore_type_id *type, ··· 70 struct pstore_info *psi); 71 void *data; 72 }; 73 74 #ifdef CONFIG_PSTORE 75 extern int pstore_register(struct pstore_info *);
··· 51 char *buf; 52 size_t bufsize; 53 struct mutex read_mutex; /* serialize open/read/close */ 54 + int flags; 55 int (*open)(struct pstore_info *psi); 56 int (*close)(struct pstore_info *psi); 57 ssize_t (*read)(u64 *id, enum pstore_type_id *type, ··· 69 struct pstore_info *psi); 70 void *data; 71 }; 72 + 73 + #define PSTORE_FLAGS_FRAGILE 1 74 75 #ifdef CONFIG_PSTORE 76 extern int pstore_register(struct pstore_info *);
+1
include/linux/reboot.h
··· 43 * Architecture-specific implementations of sys_reboot commands. 44 */ 45 46 extern void machine_restart(char *cmd); 47 extern void machine_halt(void); 48 extern void machine_power_off(void);
··· 43 * Architecture-specific implementations of sys_reboot commands. 44 */ 45 46 + extern void migrate_to_reboot_cpu(void); 47 extern void machine_restart(char *cmd); 48 extern void machine_halt(void); 49 extern void machine_power_off(void);
+2 -3
include/linux/sched.h
··· 440 .sum_exec_runtime = 0, \ 441 } 442 443 - #define PREEMPT_ENABLED (PREEMPT_NEED_RESCHED) 444 - 445 #ifdef CONFIG_PREEMPT_COUNT 446 #define PREEMPT_DISABLED (1 + PREEMPT_ENABLED) 447 #else ··· 930 struct uts_namespace; 931 932 struct load_weight { 933 - unsigned long weight, inv_weight; 934 }; 935 936 struct sched_avg {
··· 440 .sum_exec_runtime = 0, \ 441 } 442 443 #ifdef CONFIG_PREEMPT_COUNT 444 #define PREEMPT_DISABLED (1 + PREEMPT_ENABLED) 445 #else ··· 932 struct uts_namespace; 933 934 struct load_weight { 935 + unsigned long weight; 936 + u32 inv_weight; 937 }; 938 939 struct sched_avg {
+1 -4
include/target/target_core_base.h
··· 517 u32 acl_index; 518 #define MAX_ACL_TAG_SIZE 64 519 char acl_tag[MAX_ACL_TAG_SIZE]; 520 - u64 num_cmds; 521 - u64 read_bytes; 522 - u64 write_bytes; 523 - spinlock_t stats_lock; 524 /* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */ 525 atomic_t acl_pr_ref_count; 526 struct se_dev_entry **device_list; ··· 620 u32 unmap_granularity; 621 u32 unmap_granularity_alignment; 622 u32 max_write_same_len; 623 struct se_device *da_dev; 624 struct config_group da_group; 625 };
··· 517 u32 acl_index; 518 #define MAX_ACL_TAG_SIZE 64 519 char acl_tag[MAX_ACL_TAG_SIZE]; 520 /* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */ 521 atomic_t acl_pr_ref_count; 522 struct se_dev_entry **device_list; ··· 624 u32 unmap_granularity; 625 u32 unmap_granularity_alignment; 626 u32 max_write_same_len; 627 + u32 max_bytes_per_io; 628 struct se_device *da_dev; 629 struct config_group da_group; 630 };
+1
include/uapi/drm/vmwgfx_drm.h
··· 75 #define DRM_VMW_PARAM_FIFO_CAPS 4 76 #define DRM_VMW_PARAM_MAX_FB_SIZE 5 77 #define DRM_VMW_PARAM_FIFO_HW_VERSION 6 78 79 /** 80 * struct drm_vmw_getparam_arg
··· 75 #define DRM_VMW_PARAM_FIFO_CAPS 4 76 #define DRM_VMW_PARAM_MAX_FB_SIZE 5 77 #define DRM_VMW_PARAM_FIFO_HW_VERSION 6 78 + #define DRM_VMW_PARAM_MAX_SURF_MEMORY 7 79 80 /** 81 * struct drm_vmw_getparam_arg
+1
include/uapi/linux/perf_event.h
··· 679 * 680 * { u64 weight; } && PERF_SAMPLE_WEIGHT 681 * { u64 data_src; } && PERF_SAMPLE_DATA_SRC 682 * }; 683 */ 684 PERF_RECORD_SAMPLE = 9,
··· 679 * 680 * { u64 weight; } && PERF_SAMPLE_WEIGHT 681 * { u64 data_src; } && PERF_SAMPLE_DATA_SRC 682 + * { u64 transaction; } && PERF_SAMPLE_TRANSACTION 683 * }; 684 */ 685 PERF_RECORD_SAMPLE = 9,
+5 -5
include/xen/interface/io/blkif.h
··· 146 struct blkif_request_rw { 147 uint8_t nr_segments; /* number of segments */ 148 blkif_vdev_t handle; /* only for read/write requests */ 149 - #ifdef CONFIG_X86_64 150 uint32_t _pad1; /* offsetof(blkif_request,u.rw.id) == 8 */ 151 #endif 152 uint64_t id; /* private guest value, echoed in resp */ ··· 163 uint8_t flag; /* BLKIF_DISCARD_SECURE or zero. */ 164 #define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */ 165 blkif_vdev_t _pad1; /* only for read/write requests */ 166 - #ifdef CONFIG_X86_64 167 uint32_t _pad2; /* offsetof(blkif_req..,u.discard.id)==8*/ 168 #endif 169 uint64_t id; /* private guest value, echoed in resp */ ··· 175 struct blkif_request_other { 176 uint8_t _pad1; 177 blkif_vdev_t _pad2; /* only for read/write requests */ 178 - #ifdef CONFIG_X86_64 179 uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/ 180 #endif 181 uint64_t id; /* private guest value, echoed in resp */ ··· 184 struct blkif_request_indirect { 185 uint8_t indirect_op; 186 uint16_t nr_segments; 187 - #ifdef CONFIG_X86_64 188 uint32_t _pad1; /* offsetof(blkif_...,u.indirect.id) == 8 */ 189 #endif 190 uint64_t id; ··· 192 blkif_vdev_t handle; 193 uint16_t _pad2; 194 grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST]; 195 - #ifdef CONFIG_X86_64 196 uint32_t _pad3; /* make it 64 byte aligned */ 197 #else 198 uint64_t _pad3; /* make it 64 byte aligned */
··· 146 struct blkif_request_rw { 147 uint8_t nr_segments; /* number of segments */ 148 blkif_vdev_t handle; /* only for read/write requests */ 149 + #ifndef CONFIG_X86_32 150 uint32_t _pad1; /* offsetof(blkif_request,u.rw.id) == 8 */ 151 #endif 152 uint64_t id; /* private guest value, echoed in resp */ ··· 163 uint8_t flag; /* BLKIF_DISCARD_SECURE or zero. */ 164 #define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */ 165 blkif_vdev_t _pad1; /* only for read/write requests */ 166 + #ifndef CONFIG_X86_32 167 uint32_t _pad2; /* offsetof(blkif_req..,u.discard.id)==8*/ 168 #endif 169 uint64_t id; /* private guest value, echoed in resp */ ··· 175 struct blkif_request_other { 176 uint8_t _pad1; 177 blkif_vdev_t _pad2; /* only for read/write requests */ 178 + #ifndef CONFIG_X86_32 179 uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/ 180 #endif 181 uint64_t id; /* private guest value, echoed in resp */ ··· 184 struct blkif_request_indirect { 185 uint8_t indirect_op; 186 uint16_t nr_segments; 187 + #ifndef CONFIG_X86_32 188 uint32_t _pad1; /* offsetof(blkif_...,u.indirect.id) == 8 */ 189 #endif 190 uint64_t id; ··· 192 blkif_vdev_t handle; 193 uint16_t _pad2; 194 grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST]; 195 + #ifndef CONFIG_X86_32 196 uint32_t _pad3; /* make it 64 byte aligned */ 197 #else 198 uint64_t _pad3; /* make it 64 byte aligned */
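The blkif padding changes above pin the on-wire offset of the id field at 8 bytes for every guest ABI except 32-bit x86, so 32-bit ARM guests now match the 64-bit layout. One compile-time way to state that kind of invariant is shown below; struct mock_request_rw is a simplified mock of the request header, not the real blkif definition.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mock_request_rw {
        uint8_t  nr_segments;
        uint16_t handle;
        uint32_t _pad1;         /* explicit pad: id must sit at offset 8 */
        uint64_t id;
};

/* Fails to compile if a future edit (or an unexpected ABI) moves "id". */
_Static_assert(offsetof(struct mock_request_rw, id) == 8,
               "id must stay at offset 8 to match the 64-bit guest layout");

int main(void)
{
        printf("offsetof(id) = %zu\n", offsetof(struct mock_request_rw, id));
        return 0;
}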
+6
init/Kconfig
··· 809 config ARCH_SUPPORTS_NUMA_BALANCING 810 bool 811 812 # For architectures that (ab)use NUMA to represent different memory regions 813 # all cpu-local but of different latencies, such as SuperH. 814 #
··· 809 config ARCH_SUPPORTS_NUMA_BALANCING 810 bool 811 812 + # 813 + # For architectures that know their GCC __int128 support is sound 814 + # 815 + config ARCH_SUPPORTS_INT128 816 + bool 817 + 818 # For architectures that (ab)use NUMA to represent different memory regions 819 # all cpu-local but of different latencies, such as SuperH. 820 #
+4 -3
kernel/Makefile
··· 137 ############################################################################### 138 ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y) 139 X509_CERTIFICATES-y := $(wildcard *.x509) $(wildcard $(srctree)/*.x509) 140 - X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += signing_key.x509 141 - X509_CERTIFICATES := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \ 142 $(or $(realpath $(CERT)),$(CERT)))) 143 144 ifeq ($(X509_CERTIFICATES),) 145 $(warning *** No X.509 certificates found ***) ··· 165 targets += $(obj)/.x509.list 166 $(obj)/.x509.list: 167 @echo $(X509_CERTIFICATES) >$@ 168 169 clean-files := x509_certificate_list .x509.list 170 - endif 171 172 ifeq ($(CONFIG_MODULE_SIG),y) 173 ###############################################################################
··· 137 ############################################################################### 138 ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y) 139 X509_CERTIFICATES-y := $(wildcard *.x509) $(wildcard $(srctree)/*.x509) 140 + X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += $(objtree)/signing_key.x509 141 + X509_CERTIFICATES-raw := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \ 142 $(or $(realpath $(CERT)),$(CERT)))) 143 + X509_CERTIFICATES := $(subst $(realpath $(objtree))/,,$(X509_CERTIFICATES-raw)) 144 145 ifeq ($(X509_CERTIFICATES),) 146 $(warning *** No X.509 certificates found ***) ··· 164 targets += $(obj)/.x509.list 165 $(obj)/.x509.list: 166 @echo $(X509_CERTIFICATES) >$@ 167 + endif 168 169 clean-files := x509_certificate_list .x509.list 170 171 ifeq ($(CONFIG_MODULE_SIG),y) 172 ###############################################################################
+1 -1
kernel/bounds.c
··· 22 #ifdef CONFIG_SMP 23 DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS)); 24 #endif 25 - DEFINE(BLOATED_SPINLOCKS, sizeof(spinlock_t) > sizeof(int)); 26 /* End of constants */ 27 }
··· 22 #ifdef CONFIG_SMP 23 DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS)); 24 #endif 25 + DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t)); 26 /* End of constants */ 27 }
+18 -3
kernel/events/core.c
··· 1396 if (event->state != PERF_EVENT_STATE_ACTIVE) 1397 return; 1398 1399 event->state = PERF_EVENT_STATE_INACTIVE; 1400 if (event->pending_disable) { 1401 event->pending_disable = 0; ··· 1414 ctx->nr_freq--; 1415 if (event->attr.exclusive || !cpuctx->active_oncpu) 1416 cpuctx->exclusive = 0; 1417 } 1418 1419 static void ··· 1656 struct perf_event_context *ctx) 1657 { 1658 u64 tstamp = perf_event_time(event); 1659 1660 if (event->state <= PERF_EVENT_STATE_OFF) 1661 return 0; ··· 1679 */ 1680 smp_wmb(); 1681 1682 if (event->pmu->add(event, PERF_EF_START)) { 1683 event->state = PERF_EVENT_STATE_INACTIVE; 1684 event->oncpu = -1; 1685 - return -EAGAIN; 1686 } 1687 1688 event->tstamp_running += tstamp - event->tstamp_stopped; ··· 1701 if (event->attr.exclusive) 1702 cpuctx->exclusive = 1; 1703 1704 - return 0; 1705 } 1706 1707 static int ··· 2754 if (!event_filter_match(event)) 2755 continue; 2756 2757 hwc = &event->hw; 2758 2759 if (hwc->interrupts == MAX_INTERRUPTS) { ··· 2765 } 2766 2767 if (!event->attr.freq || !event->attr.sample_freq) 2768 - continue; 2769 2770 /* 2771 * stop the event and update event->count ··· 2787 perf_adjust_period(event, period, delta, false); 2788 2789 event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0); 2790 } 2791 2792 perf_pmu_enable(ctx->pmu);
··· 1396 if (event->state != PERF_EVENT_STATE_ACTIVE) 1397 return; 1398 1399 + perf_pmu_disable(event->pmu); 1400 + 1401 event->state = PERF_EVENT_STATE_INACTIVE; 1402 if (event->pending_disable) { 1403 event->pending_disable = 0; ··· 1412 ctx->nr_freq--; 1413 if (event->attr.exclusive || !cpuctx->active_oncpu) 1414 cpuctx->exclusive = 0; 1415 + 1416 + perf_pmu_enable(event->pmu); 1417 } 1418 1419 static void ··· 1652 struct perf_event_context *ctx) 1653 { 1654 u64 tstamp = perf_event_time(event); 1655 + int ret = 0; 1656 1657 if (event->state <= PERF_EVENT_STATE_OFF) 1658 return 0; ··· 1674 */ 1675 smp_wmb(); 1676 1677 + perf_pmu_disable(event->pmu); 1678 + 1679 if (event->pmu->add(event, PERF_EF_START)) { 1680 event->state = PERF_EVENT_STATE_INACTIVE; 1681 event->oncpu = -1; 1682 + ret = -EAGAIN; 1683 + goto out; 1684 } 1685 1686 event->tstamp_running += tstamp - event->tstamp_stopped; ··· 1693 if (event->attr.exclusive) 1694 cpuctx->exclusive = 1; 1695 1696 + out: 1697 + perf_pmu_enable(event->pmu); 1698 + 1699 + return ret; 1700 } 1701 1702 static int ··· 2743 if (!event_filter_match(event)) 2744 continue; 2745 2746 + perf_pmu_disable(event->pmu); 2747 + 2748 hwc = &event->hw; 2749 2750 if (hwc->interrupts == MAX_INTERRUPTS) { ··· 2752 } 2753 2754 if (!event->attr.freq || !event->attr.sample_freq) 2755 + goto next; 2756 2757 /* 2758 * stop the event and update event->count ··· 2774 perf_adjust_period(event, period, delta, false); 2775 2776 event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0); 2777 + next: 2778 + perf_pmu_enable(event->pmu); 2779 } 2780 2781 perf_pmu_enable(ctx->pmu);
+1
kernel/fork.c
··· 537 spin_lock_init(&mm->page_table_lock); 538 mm_init_aio(mm); 539 mm_init_owner(mm, p); 540 541 if (likely(!mm_alloc_pgd(mm))) { 542 mm->def_flags = 0;
··· 537 spin_lock_init(&mm->page_table_lock); 538 mm_init_aio(mm); 539 mm_init_owner(mm, p); 540 + clear_tlb_flush_pending(mm); 541 542 if (likely(!mm_alloc_pgd(mm))) { 543 mm->def_flags = 0;
+1
kernel/kexec.c
··· 1680 { 1681 kexec_in_progress = true; 1682 kernel_restart_prepare(NULL); 1683 printk(KERN_EMERG "Starting new kernel\n"); 1684 machine_shutdown(); 1685 }
··· 1680 { 1681 kexec_in_progress = true; 1682 kernel_restart_prepare(NULL); 1683 + migrate_to_reboot_cpu(); 1684 printk(KERN_EMERG "Starting new kernel\n"); 1685 machine_shutdown(); 1686 }
+1 -1
kernel/reboot.c
··· 104 } 105 EXPORT_SYMBOL(unregister_reboot_notifier); 106 107 - static void migrate_to_reboot_cpu(void) 108 { 109 /* The boot cpu is always logical cpu 0 */ 110 int cpu = reboot_cpu;
··· 104 } 105 EXPORT_SYMBOL(unregister_reboot_notifier); 106 107 + void migrate_to_reboot_cpu(void) 108 { 109 /* The boot cpu is always logical cpu 0 */ 110 int cpu = reboot_cpu;
+4 -2
kernel/sched/core.c
··· 4902 static void update_top_cache_domain(int cpu) 4903 { 4904 struct sched_domain *sd; 4905 int id = cpu; 4906 int size = 1; 4907 ··· 4910 if (sd) { 4911 id = cpumask_first(sched_domain_span(sd)); 4912 size = cpumask_weight(sched_domain_span(sd)); 4913 - sd = sd->parent; /* sd_busy */ 4914 } 4915 - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd); 4916 4917 rcu_assign_pointer(per_cpu(sd_llc, cpu), sd); 4918 per_cpu(sd_llc_size, cpu) = size; ··· 5113 * die on a /0 trap. 5114 */ 5115 sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span); 5116 5117 /* 5118 * Make sure the first group of this domain contains the
··· 4902 static void update_top_cache_domain(int cpu) 4903 { 4904 struct sched_domain *sd; 4905 + struct sched_domain *busy_sd = NULL; 4906 int id = cpu; 4907 int size = 1; 4908 ··· 4909 if (sd) { 4910 id = cpumask_first(sched_domain_span(sd)); 4911 size = cpumask_weight(sched_domain_span(sd)); 4912 + busy_sd = sd->parent; /* sd_busy */ 4913 } 4914 + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd); 4915 4916 rcu_assign_pointer(per_cpu(sd_llc, cpu), sd); 4917 per_cpu(sd_llc_size, cpu) = size; ··· 5112 * die on a /0 trap. 5113 */ 5114 sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span); 5115 + sg->sgp->power_orig = sg->sgp->power; 5116 5117 /* 5118 * Make sure the first group of this domain contains the
+72 -81
kernel/sched/fair.c
··· 178 update_sysctl(); 179 } 180 181 - #if BITS_PER_LONG == 32 182 - # define WMULT_CONST (~0UL) 183 - #else 184 - # define WMULT_CONST (1UL << 32) 185 - #endif 186 - 187 #define WMULT_SHIFT 32 188 189 - /* 190 - * Shift right and round: 191 - */ 192 - #define SRR(x, y) (((x) + (1UL << ((y) - 1))) >> (y)) 193 - 194 - /* 195 - * delta *= weight / lw 196 - */ 197 - static unsigned long 198 - calc_delta_mine(unsigned long delta_exec, unsigned long weight, 199 - struct load_weight *lw) 200 { 201 - u64 tmp; 202 203 - /* 204 - * weight can be less than 2^SCHED_LOAD_RESOLUTION for task group sched 205 - * entities since MIN_SHARES = 2. Treat weight as 1 if less than 206 - * 2^SCHED_LOAD_RESOLUTION. 207 - */ 208 - if (likely(weight > (1UL << SCHED_LOAD_RESOLUTION))) 209 - tmp = (u64)delta_exec * scale_load_down(weight); 210 else 211 - tmp = (u64)delta_exec; 212 213 - if (!lw->inv_weight) { 214 - unsigned long w = scale_load_down(lw->weight); 215 216 - if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST)) 217 - lw->inv_weight = 1; 218 - else if (unlikely(!w)) 219 - lw->inv_weight = WMULT_CONST; 220 - else 221 - lw->inv_weight = WMULT_CONST / w; 222 } 223 224 - /* 225 - * Check whether we'd overflow the 64-bit multiplication: 226 - */ 227 - if (unlikely(tmp > WMULT_CONST)) 228 - tmp = SRR(SRR(tmp, WMULT_SHIFT/2) * lw->inv_weight, 229 - WMULT_SHIFT/2); 230 - else 231 - tmp = SRR(tmp * lw->inv_weight, WMULT_SHIFT); 232 233 - return (unsigned long)min(tmp, (u64)(unsigned long)LONG_MAX); 234 } 235 236 ··· 445 #endif /* CONFIG_FAIR_GROUP_SCHED */ 446 447 static __always_inline 448 - void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec); 449 450 /************************************************************** 451 * Scheduling class tree data structure manipulation methods: ··· 614 /* 615 * delta /= w 616 */ 617 - static inline unsigned long 618 - calc_delta_fair(unsigned long delta, struct sched_entity *se) 619 { 620 if (unlikely(se->load.weight != NICE_0_LOAD)) 621 - delta = calc_delta_mine(delta, NICE_0_LOAD, &se->load); 622 623 return delta; 624 } ··· 666 update_load_add(&lw, se->load.weight); 667 load = &lw; 668 } 669 - slice = calc_delta_mine(slice, se->load.weight, load); 670 } 671 return slice; 672 } ··· 704 #endif 705 706 /* 707 - * Update the current task's runtime statistics. Skip current tasks that 708 - * are not in our scheduling class. 
709 */ 710 - static inline void 711 - __update_curr(struct cfs_rq *cfs_rq, struct sched_entity *curr, 712 - unsigned long delta_exec) 713 - { 714 - unsigned long delta_exec_weighted; 715 - 716 - schedstat_set(curr->statistics.exec_max, 717 - max((u64)delta_exec, curr->statistics.exec_max)); 718 - 719 - curr->sum_exec_runtime += delta_exec; 720 - schedstat_add(cfs_rq, exec_clock, delta_exec); 721 - delta_exec_weighted = calc_delta_fair(delta_exec, curr); 722 - 723 - curr->vruntime += delta_exec_weighted; 724 - update_min_vruntime(cfs_rq); 725 - } 726 - 727 static void update_curr(struct cfs_rq *cfs_rq) 728 { 729 struct sched_entity *curr = cfs_rq->curr; 730 u64 now = rq_clock_task(rq_of(cfs_rq)); 731 - unsigned long delta_exec; 732 733 if (unlikely(!curr)) 734 return; 735 736 - /* 737 - * Get the amount of time the current task was running 738 - * since the last time we changed load (this cannot 739 - * overflow on 32 bits): 740 - */ 741 - delta_exec = (unsigned long)(now - curr->exec_start); 742 - if (!delta_exec) 743 return; 744 745 - __update_curr(cfs_rq, curr, delta_exec); 746 curr->exec_start = now; 747 748 if (entity_is_task(curr)) { 749 struct task_struct *curtask = task_of(curr); ··· 1736 */ 1737 if (!vma->vm_mm || 1738 (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ))) 1739 continue; 1740 1741 do { ··· 3008 } 3009 } 3010 3011 - static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, 3012 - unsigned long delta_exec) 3013 { 3014 /* dock delta_exec before expiring quota (as it could span periods) */ 3015 cfs_rq->runtime_remaining -= delta_exec; ··· 3026 } 3027 3028 static __always_inline 3029 - void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec) 3030 { 3031 if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled) 3032 return; ··· 3566 return rq_clock_task(rq_of(cfs_rq)); 3567 } 3568 3569 - static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, 3570 - unsigned long delta_exec) {} 3571 static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) {} 3572 static void check_enqueue_throttle(struct cfs_rq *cfs_rq) {} 3573 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
··· 178 update_sysctl(); 179 } 180 181 + #define WMULT_CONST (~0U) 182 #define WMULT_SHIFT 32 183 184 + static void __update_inv_weight(struct load_weight *lw) 185 { 186 + unsigned long w; 187 188 + if (likely(lw->inv_weight)) 189 + return; 190 + 191 + w = scale_load_down(lw->weight); 192 + 193 + if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST)) 194 + lw->inv_weight = 1; 195 + else if (unlikely(!w)) 196 + lw->inv_weight = WMULT_CONST; 197 else 198 + lw->inv_weight = WMULT_CONST / w; 199 + } 200 201 + /* 202 + * delta_exec * weight / lw.weight 203 + * OR 204 + * (delta_exec * (weight * lw->inv_weight)) >> WMULT_SHIFT 205 + * 206 + * Either weight := NICE_0_LOAD and lw \e prio_to_wmult[], in which case 207 + * we're guaranteed shift stays positive because inv_weight is guaranteed to 208 + * fit 32 bits, and NICE_0_LOAD gives another 10 bits; therefore shift >= 22. 209 + * 210 + * Or, weight =< lw.weight (because lw.weight is the runqueue weight), thus 211 + * weight/lw.weight <= 1, and therefore our shift will also be positive. 212 + */ 213 + static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw) 214 + { 215 + u64 fact = scale_load_down(weight); 216 + int shift = WMULT_SHIFT; 217 218 + __update_inv_weight(lw); 219 + 220 + if (unlikely(fact >> 32)) { 221 + while (fact >> 32) { 222 + fact >>= 1; 223 + shift--; 224 + } 225 } 226 227 + /* hint to use a 32x32->64 mul */ 228 + fact = (u64)(u32)fact * lw->inv_weight; 229 230 + while (fact >> 32) { 231 + fact >>= 1; 232 + shift--; 233 + } 234 + 235 + return mul_u64_u32_shr(delta_exec, fact, shift); 236 } 237 238 ··· 443 #endif /* CONFIG_FAIR_GROUP_SCHED */ 444 445 static __always_inline 446 + void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec); 447 448 /************************************************************** 449 * Scheduling class tree data structure manipulation methods: ··· 612 /* 613 * delta /= w 614 */ 615 + static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se) 616 { 617 if (unlikely(se->load.weight != NICE_0_LOAD)) 618 + delta = __calc_delta(delta, NICE_0_LOAD, &se->load); 619 620 return delta; 621 } ··· 665 update_load_add(&lw, se->load.weight); 666 load = &lw; 667 } 668 + slice = __calc_delta(slice, se->load.weight, load); 669 } 670 return slice; 671 } ··· 703 #endif 704 705 /* 706 + * Update the current task's runtime statistics. 
707 */ 708 static void update_curr(struct cfs_rq *cfs_rq) 709 { 710 struct sched_entity *curr = cfs_rq->curr; 711 u64 now = rq_clock_task(rq_of(cfs_rq)); 712 + u64 delta_exec; 713 714 if (unlikely(!curr)) 715 return; 716 717 + delta_exec = now - curr->exec_start; 718 + if (unlikely((s64)delta_exec <= 0)) 719 return; 720 721 curr->exec_start = now; 722 + 723 + schedstat_set(curr->statistics.exec_max, 724 + max(delta_exec, curr->statistics.exec_max)); 725 + 726 + curr->sum_exec_runtime += delta_exec; 727 + schedstat_add(cfs_rq, exec_clock, delta_exec); 728 + 729 + curr->vruntime += calc_delta_fair(delta_exec, curr); 730 + update_min_vruntime(cfs_rq); 731 732 if (entity_is_task(curr)) { 733 struct task_struct *curtask = task_of(curr); ··· 1750 */ 1751 if (!vma->vm_mm || 1752 (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ))) 1753 + continue; 1754 + 1755 + /* 1756 + * Skip inaccessible VMAs to avoid any confusion between 1757 + * PROT_NONE and NUMA hinting ptes 1758 + */ 1759 + if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))) 1760 continue; 1761 1762 do { ··· 3015 } 3016 } 3017 3018 + static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) 3019 { 3020 /* dock delta_exec before expiring quota (as it could span periods) */ 3021 cfs_rq->runtime_remaining -= delta_exec; ··· 3034 } 3035 3036 static __always_inline 3037 + void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) 3038 { 3039 if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled) 3040 return; ··· 3574 return rq_clock_task(rq_of(cfs_rq)); 3575 } 3576 3577 + static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) {} 3578 static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) {} 3579 static void check_enqueue_throttle(struct cfs_rq *cfs_rq) {} 3580 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
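__calc_delta() above evaluates delta_exec * weight / lw->weight in fixed point: the divisor is replaced by a precomputed inverse (WMULT_CONST / w, roughly 2^32 / weight), the multiplier is shrunk until it fits 32 bits, and the final step is a single mul_u64_u32_shr(). The sketch below is a simplified, single-stage version of that idea (no scale_load_down(), no separate reduction of the incoming weight) meant only to show the shape of the arithmetic; it borrows the __int128 mul_u64_u32_shr() from the math64.h hunk earlier in this series.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define WMULT_CONST (~0U)
#define WMULT_SHIFT 32

static uint32_t inv_weight(uint32_t w)
{
        return w ? WMULT_CONST / w : WMULT_CONST;
}

static uint64_t mul_u64_u32_shr(uint64_t a, uint32_t mul, unsigned int shift)
{
        return (uint64_t)(((unsigned __int128)a * mul) >> shift);
}

static uint64_t calc_delta(uint64_t delta, uint32_t weight, uint32_t lw_weight)
{
        uint64_t fact = (uint64_t)weight * inv_weight(lw_weight);
        unsigned int shift = WMULT_SHIFT;

        while (fact >> 32) {            /* keep the multiplier within 32 bits */
                fact >>= 1;
                shift--;
        }
        return mul_u64_u32_shr(delta, (uint32_t)fact, shift);
}

int main(void)
{
        /* NICE_0 weight is 1024; a runqueue weight of 3072 means the entity
         * should receive roughly a third of the elapsed time. */
        uint64_t delta = 6000000;       /* ns */
        uint32_t weight = 1024, lw = 3072;

        printf("approx: %" PRIu64 " ns\n", calc_delta(delta, weight, lw));
        printf("exact:  %" PRIu64 " ns\n", delta * weight / lw);
        return 0;
}

The approximate result lands within a few nanoseconds of the exact 64-bit division, which is the usual trade made by inverse-weight tricks of this kind: a cheap multiply-and-shift on every update instead of a divide.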
+14
kernel/sched/rt.c
··· 901 { 902 struct rq *rq = rq_of_rt_rq(rt_rq); 903 904 if (rq->online && prio < prev_prio) 905 cpupri_set(&rq->rd->cpupri, rq->cpu, prio); 906 } ··· 917 { 918 struct rq *rq = rq_of_rt_rq(rt_rq); 919 920 if (rq->online && rt_rq->highest_prio.curr != prev_prio) 921 cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr); 922 }
··· 901 { 902 struct rq *rq = rq_of_rt_rq(rt_rq); 903 904 + #ifdef CONFIG_RT_GROUP_SCHED 905 + /* 906 + * Change rq's cpupri only if rt_rq is the top queue. 907 + */ 908 + if (&rq->rt != rt_rq) 909 + return; 910 + #endif 911 if (rq->online && prio < prev_prio) 912 cpupri_set(&rq->rd->cpupri, rq->cpu, prio); 913 } ··· 910 { 911 struct rq *rq = rq_of_rt_rq(rt_rq); 912 913 + #ifdef CONFIG_RT_GROUP_SCHED 914 + /* 915 + * Change rq's cpupri only if rt_rq is the top queue. 916 + */ 917 + if (&rq->rt != rt_rq) 918 + return; 919 + #endif 920 if (rq->online && rt_rq->highest_prio.curr != prev_prio) 921 cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr); 922 }
+1 -1
kernel/trace/ftrace.c
··· 775 int cpu; 776 int ret = 0; 777 778 - for_each_online_cpu(cpu) { 779 ret = ftrace_profile_init_cpu(cpu); 780 if (ret) 781 break;
··· 775 int cpu; 776 int ret = 0; 777 778 + for_each_possible_cpu(cpu) { 779 ret = ftrace_profile_init_cpu(cpu); 780 if (ret) 781 break;
+3 -3
kernel/user.c
··· 51 .owner = GLOBAL_ROOT_UID, 52 .group = GLOBAL_ROOT_GID, 53 .proc_inum = PROC_USER_INIT_INO, 54 - #ifdef CONFIG_KEYS_KERBEROS_CACHE 55 - .krb_cache_register_sem = 56 - __RWSEM_INITIALIZER(init_user_ns.krb_cache_register_sem), 57 #endif 58 }; 59 EXPORT_SYMBOL_GPL(init_user_ns);
··· 51 .owner = GLOBAL_ROOT_UID, 52 .group = GLOBAL_ROOT_GID, 53 .proc_inum = PROC_USER_INIT_INO, 54 + #ifdef CONFIG_PERSISTENT_KEYRINGS 55 + .persistent_keyring_register_sem = 56 + __RWSEM_INITIALIZER(init_user_ns.persistent_keyring_register_sem), 57 #endif 58 }; 59 EXPORT_SYMBOL_GPL(init_user_ns);
+1 -1
mm/Kconfig
··· 543 544 config MEM_SOFT_DIRTY 545 bool "Track memory changes" 546 - depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY 547 select PROC_PAGE_MONITOR 548 help 549 This option enables memory changes tracking by introducing a
··· 543 544 config MEM_SOFT_DIRTY 545 bool "Track memory changes" 546 + depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS 547 select PROC_PAGE_MONITOR 548 help 549 This option enables memory changes tracking by introducing a
+4
mm/compaction.c
··· 134 bool migrate_scanner) 135 { 136 struct zone *zone = cc->zone; 137 if (!page) 138 return; 139
··· 134 bool migrate_scanner) 135 { 136 struct zone *zone = cc->zone; 137 + 138 + if (cc->ignore_skip_hint) 139 + return; 140 + 141 if (!page) 142 return; 143
+36 -9
mm/huge_memory.c
··· 882 ret = 0; 883 goto out_unlock; 884 } 885 if (unlikely(pmd_trans_splitting(pmd))) { 886 /* split huge page running from under us */ 887 spin_unlock(src_ptl); ··· 1247 if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd)) 1248 return ERR_PTR(-EFAULT); 1249 1250 page = pmd_page(*pmd); 1251 VM_BUG_ON(!PageHead(page)); 1252 if (flags & FOLL_TOUCH) { ··· 1303 if (unlikely(!pmd_same(pmd, *pmdp))) 1304 goto out_unlock; 1305 1306 page = pmd_page(pmd); 1307 BUG_ON(is_huge_zero_page(page)); 1308 page_nid = page_to_nid(page); ··· 1342 /* If the page was locked, there are no parallel migrations */ 1343 if (page_locked) 1344 goto clear_pmdnuma; 1345 1346 - /* 1347 - * Otherwise wait for potential migrations and retry. We do 1348 - * relock and check_same as the page may no longer be mapped. 1349 - * As the fault is being retried, do not account for it. 1350 - */ 1351 spin_unlock(ptl); 1352 wait_on_page_locked(page); 1353 page_nid = -1; 1354 goto out; 1355 } 1356 1357 - /* Page is misplaced, serialise migrations and parallel THP splits */ 1358 get_page(page); 1359 spin_unlock(ptl); 1360 - if (!page_locked) 1361 - lock_page(page); 1362 anon_vma = page_lock_anon_vma_read(page); 1363 1364 /* Confirm the PMD did not change while page_table_lock was released */ ··· 1367 put_page(page); 1368 page_nid = -1; 1369 goto out_unlock; 1370 } 1371 1372 /* ··· 1542 ret = 1; 1543 if (!prot_numa) { 1544 entry = pmdp_get_and_clear(mm, addr, pmd); 1545 entry = pmd_modify(entry, newprot); 1546 ret = HPAGE_PMD_NR; 1547 BUG_ON(pmd_write(entry)); ··· 1558 */ 1559 if (!is_huge_zero_page(page) && 1560 !pmd_numa(*pmd)) { 1561 - entry = pmdp_get_and_clear(mm, addr, pmd); 1562 entry = pmd_mknuma(entry); 1563 ret = HPAGE_PMD_NR; 1564 }
··· 882 ret = 0; 883 goto out_unlock; 884 } 885 + 886 + /* mmap_sem prevents this happening but warn if that changes */ 887 + WARN_ON(pmd_trans_migrating(pmd)); 888 + 889 if (unlikely(pmd_trans_splitting(pmd))) { 890 /* split huge page running from under us */ 891 spin_unlock(src_ptl); ··· 1243 if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd)) 1244 return ERR_PTR(-EFAULT); 1245 1246 + /* Full NUMA hinting faults to serialise migration in fault paths */ 1247 + if ((flags & FOLL_NUMA) && pmd_numa(*pmd)) 1248 + goto out; 1249 + 1250 page = pmd_page(*pmd); 1251 VM_BUG_ON(!PageHead(page)); 1252 if (flags & FOLL_TOUCH) { ··· 1295 if (unlikely(!pmd_same(pmd, *pmdp))) 1296 goto out_unlock; 1297 1298 + /* 1299 + * If there are potential migrations, wait for completion and retry 1300 + * without disrupting NUMA hinting information. Do not relock and 1301 + * check_same as the page may no longer be mapped. 1302 + */ 1303 + if (unlikely(pmd_trans_migrating(*pmdp))) { 1304 + spin_unlock(ptl); 1305 + wait_migrate_huge_page(vma->anon_vma, pmdp); 1306 + goto out; 1307 + } 1308 + 1309 page = pmd_page(pmd); 1310 BUG_ON(is_huge_zero_page(page)); 1311 page_nid = page_to_nid(page); ··· 1323 /* If the page was locked, there are no parallel migrations */ 1324 if (page_locked) 1325 goto clear_pmdnuma; 1326 + } 1327 1328 + /* Migration could have started since the pmd_trans_migrating check */ 1329 + if (!page_locked) { 1330 spin_unlock(ptl); 1331 wait_on_page_locked(page); 1332 page_nid = -1; 1333 goto out; 1334 } 1335 1336 + /* 1337 + * Page is misplaced. Page lock serialises migrations. Acquire anon_vma 1338 + * to serialises splits 1339 + */ 1340 get_page(page); 1341 spin_unlock(ptl); 1342 anon_vma = page_lock_anon_vma_read(page); 1343 1344 /* Confirm the PMD did not change while page_table_lock was released */ ··· 1349 put_page(page); 1350 page_nid = -1; 1351 goto out_unlock; 1352 + } 1353 + 1354 + /* Bail if we fail to protect against THP splits for any reason */ 1355 + if (unlikely(!anon_vma)) { 1356 + put_page(page); 1357 + page_nid = -1; 1358 + goto clear_pmdnuma; 1359 } 1360 1361 /* ··· 1517 ret = 1; 1518 if (!prot_numa) { 1519 entry = pmdp_get_and_clear(mm, addr, pmd); 1520 + if (pmd_numa(entry)) 1521 + entry = pmd_mknonnuma(entry); 1522 entry = pmd_modify(entry, newprot); 1523 ret = HPAGE_PMD_NR; 1524 BUG_ON(pmd_write(entry)); ··· 1531 */ 1532 if (!is_huge_zero_page(page) && 1533 !pmd_numa(*pmd)) { 1534 + entry = *pmd; 1535 entry = pmd_mknuma(entry); 1536 ret = HPAGE_PMD_NR; 1537 }
+10 -4
mm/memory-failure.c
··· 1505 if (ret > 0) 1506 ret = -EIO; 1507 } else { 1508 - set_page_hwpoison_huge_page(hpage); 1509 - dequeue_hwpoisoned_huge_page(hpage); 1510 - atomic_long_add(1 << compound_order(hpage), 1511 - &num_poisoned_pages); 1512 } 1513 return ret; 1514 }
··· 1505 if (ret > 0) 1506 ret = -EIO; 1507 } else { 1508 + /* overcommit hugetlb page will be freed to buddy */ 1509 + if (PageHuge(page)) { 1510 + set_page_hwpoison_huge_page(hpage); 1511 + dequeue_hwpoisoned_huge_page(hpage); 1512 + atomic_long_add(1 << compound_order(hpage), 1513 + &num_poisoned_pages); 1514 + } else { 1515 + SetPageHWPoison(page); 1516 + atomic_long_inc(&num_poisoned_pages); 1517 + } 1518 } 1519 return ret; 1520 }
+1 -1
mm/memory.c
··· 4271 } 4272 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */ 4273 4274 - #if USE_SPLIT_PTE_PTLOCKS && BLOATED_SPINLOCKS 4275 bool ptlock_alloc(struct page *page) 4276 { 4277 spinlock_t *ptl;
··· 4271 } 4272 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */ 4273 4274 + #if USE_SPLIT_PTE_PTLOCKS && ALLOC_SPLIT_PTLOCKS 4275 bool ptlock_alloc(struct page *page) 4276 { 4277 spinlock_t *ptl;
+10 -8
mm/mempolicy.c
··· 1197 break; 1198 vma = vma->vm_next; 1199 } 1200 - /* 1201 - * queue_pages_range() confirms that @page belongs to some vma, 1202 - * so vma shouldn't be NULL. 1203 - */ 1204 - BUG_ON(!vma); 1205 1206 - if (PageHuge(page)) 1207 - return alloc_huge_page_noerr(vma, address, 1); 1208 return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); 1209 } 1210 #else ··· 1320 if (nr_failed && (flags & MPOL_MF_STRICT)) 1321 err = -EIO; 1322 } else 1323 - putback_lru_pages(&pagelist); 1324 1325 up_write(&mm->mmap_sem); 1326 mpol_out:
··· 1197 break; 1198 vma = vma->vm_next; 1199 } 1200 1201 + if (PageHuge(page)) { 1202 + if (vma) 1203 + return alloc_huge_page_noerr(vma, address, 1); 1204 + else 1205 + return NULL; 1206 + } 1207 + /* 1208 + * if !vma, alloc_page_vma() will use task or system default policy 1209 + */ 1210 return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); 1211 } 1212 #else ··· 1318 if (nr_failed && (flags & MPOL_MF_STRICT)) 1319 err = -EIO; 1320 } else 1321 + putback_movable_pages(&pagelist); 1322 1323 up_write(&mm->mmap_sem); 1324 mpol_out:
+65 -17
mm/migrate.c
··· 36 #include <linux/hugetlb_cgroup.h> 37 #include <linux/gfp.h> 38 #include <linux/balloon_compaction.h> 39 40 #include <asm/tlbflush.h> 41 ··· 317 */ 318 int migrate_page_move_mapping(struct address_space *mapping, 319 struct page *newpage, struct page *page, 320 - struct buffer_head *head, enum migrate_mode mode) 321 { 322 - int expected_count = 0; 323 void **pslot; 324 325 if (!mapping) { 326 /* Anonymous page without mapping */ 327 - if (page_count(page) != 1) 328 return -EAGAIN; 329 return MIGRATEPAGE_SUCCESS; 330 } ··· 335 pslot = radix_tree_lookup_slot(&mapping->page_tree, 336 page_index(page)); 337 338 - expected_count = 2 + page_has_private(page); 339 if (page_count(page) != expected_count || 340 radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) { 341 spin_unlock_irq(&mapping->tree_lock); ··· 585 586 BUG_ON(PageWriteback(page)); /* Writeback must be complete */ 587 588 - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode); 589 590 if (rc != MIGRATEPAGE_SUCCESS) 591 return rc; ··· 612 613 head = page_buffers(page); 614 615 - rc = migrate_page_move_mapping(mapping, newpage, page, head, mode); 616 617 if (rc != MIGRATEPAGE_SUCCESS) 618 return rc; ··· 1656 return 1; 1657 } 1658 1659 /* 1660 * Attempt to migrate a misplaced page to the specified destination 1661 * node. Caller is expected to have an elevated reference count on ··· 1730 struct page *page, int node) 1731 { 1732 spinlock_t *ptl; 1733 - unsigned long haddr = address & HPAGE_PMD_MASK; 1734 pg_data_t *pgdat = NODE_DATA(node); 1735 int isolated = 0; 1736 struct page *new_page = NULL; 1737 struct mem_cgroup *memcg = NULL; 1738 int page_lru = page_is_file_cache(page); 1739 1740 /* 1741 * Rate-limit the amount of data that is being migrated to a node. ··· 1760 goto out_fail; 1761 } 1762 1763 /* Prepare a page as a migration target */ 1764 __set_page_locked(new_page); 1765 SetPageSwapBacked(new_page); ··· 1774 WARN_ON(PageLRU(new_page)); 1775 1776 /* Recheck the target PMD */ 1777 ptl = pmd_lock(mm, pmd); 1778 - if (unlikely(!pmd_same(*pmd, entry))) { 1779 spin_unlock(ptl); 1780 1781 /* Reverse changes made by migrate_page_copy() */ 1782 if (TestClearPageActive(new_page)) ··· 1796 putback_lru_page(page); 1797 mod_zone_page_state(page_zone(page), 1798 NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR); 1799 - goto out_fail; 1800 } 1801 1802 /* ··· 1809 */ 1810 mem_cgroup_prepare_migration(page, new_page, &memcg); 1811 1812 entry = mk_pmd(new_page, vma->vm_page_prot); 1813 - entry = pmd_mknonnuma(entry); 1814 - entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1815 entry = pmd_mkhuge(entry); 1816 1817 - pmdp_clear_flush(vma, haddr, pmd); 1818 - set_pmd_at(mm, haddr, pmd, entry); 1819 - page_add_new_anon_rmap(new_page, vma, haddr); 1820 update_mmu_cache_pmd(vma, address, &entry); 1821 page_remove_rmap(page); 1822 /* 1823 * Finish the charge transaction under the page table lock to 1824 * prevent split_huge_page() from dividing up the charge ··· 1845 */ 1846 mem_cgroup_end_migration(memcg, page, new_page, true); 1847 spin_unlock(ptl); 1848 1849 unlock_page(new_page); 1850 unlock_page(page); ··· 1863 out_fail: 1864 count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR); 1865 out_dropref: 1866 - entry = pmd_mknonnuma(entry); 1867 - set_pmd_at(mm, haddr, pmd, entry); 1868 - update_mmu_cache_pmd(vma, address, &entry); 1869 1870 unlock_page(page); 1871 put_page(page); 1872 return 0;
··· 36 #include <linux/hugetlb_cgroup.h> 37 #include <linux/gfp.h> 38 #include <linux/balloon_compaction.h> 39 + #include <linux/mmu_notifier.h> 40 41 #include <asm/tlbflush.h> 42 ··· 316 */ 317 int migrate_page_move_mapping(struct address_space *mapping, 318 struct page *newpage, struct page *page, 319 + struct buffer_head *head, enum migrate_mode mode, 320 + int extra_count) 321 { 322 + int expected_count = 1 + extra_count; 323 void **pslot; 324 325 if (!mapping) { 326 /* Anonymous page without mapping */ 327 + if (page_count(page) != expected_count) 328 return -EAGAIN; 329 return MIGRATEPAGE_SUCCESS; 330 } ··· 333 pslot = radix_tree_lookup_slot(&mapping->page_tree, 334 page_index(page)); 335 336 + expected_count += 1 + page_has_private(page); 337 if (page_count(page) != expected_count || 338 radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) { 339 spin_unlock_irq(&mapping->tree_lock); ··· 583 584 BUG_ON(PageWriteback(page)); /* Writeback must be complete */ 585 586 + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); 587 588 if (rc != MIGRATEPAGE_SUCCESS) 589 return rc; ··· 610 611 head = page_buffers(page); 612 613 + rc = migrate_page_move_mapping(mapping, newpage, page, head, mode, 0); 614 615 if (rc != MIGRATEPAGE_SUCCESS) 616 return rc; ··· 1654 return 1; 1655 } 1656 1657 + bool pmd_trans_migrating(pmd_t pmd) 1658 + { 1659 + struct page *page = pmd_page(pmd); 1660 + return PageLocked(page); 1661 + } 1662 + 1663 + void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd) 1664 + { 1665 + struct page *page = pmd_page(*pmd); 1666 + wait_on_page_locked(page); 1667 + } 1668 + 1669 /* 1670 * Attempt to migrate a misplaced page to the specified destination 1671 * node. Caller is expected to have an elevated reference count on ··· 1716 struct page *page, int node) 1717 { 1718 spinlock_t *ptl; 1719 pg_data_t *pgdat = NODE_DATA(node); 1720 int isolated = 0; 1721 struct page *new_page = NULL; 1722 struct mem_cgroup *memcg = NULL; 1723 int page_lru = page_is_file_cache(page); 1724 + unsigned long mmun_start = address & HPAGE_PMD_MASK; 1725 + unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE; 1726 + pmd_t orig_entry; 1727 1728 /* 1729 * Rate-limit the amount of data that is being migrated to a node. ··· 1744 goto out_fail; 1745 } 1746 1747 + if (mm_tlb_flush_pending(mm)) 1748 + flush_tlb_range(vma, mmun_start, mmun_end); 1749 + 1750 /* Prepare a page as a migration target */ 1751 __set_page_locked(new_page); 1752 SetPageSwapBacked(new_page); ··· 1755 WARN_ON(PageLRU(new_page)); 1756 1757 /* Recheck the target PMD */ 1758 + mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 1759 ptl = pmd_lock(mm, pmd); 1760 + if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) { 1761 + fail_putback: 1762 spin_unlock(ptl); 1763 + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1764 1765 /* Reverse changes made by migrate_page_copy() */ 1766 if (TestClearPageActive(new_page)) ··· 1774 putback_lru_page(page); 1775 mod_zone_page_state(page_zone(page), 1776 NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR); 1777 + 1778 + goto out_unlock; 1779 } 1780 1781 /* ··· 1786 */ 1787 mem_cgroup_prepare_migration(page, new_page, &memcg); 1788 1789 + orig_entry = *pmd; 1790 entry = mk_pmd(new_page, vma->vm_page_prot); 1791 entry = pmd_mkhuge(entry); 1792 + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1793 1794 + /* 1795 + * Clear the old entry under pagetable lock and establish the new PTE. 
1796 + * Any parallel GUP will either observe the old page blocking on the 1797 + * page lock, block on the page table lock or observe the new page. 1798 + * The SetPageUptodate on the new page and page_add_new_anon_rmap 1799 + * guarantee the copy is visible before the pagetable update. 1800 + */ 1801 + flush_cache_range(vma, mmun_start, mmun_end); 1802 + page_add_new_anon_rmap(new_page, vma, mmun_start); 1803 + pmdp_clear_flush(vma, mmun_start, pmd); 1804 + set_pmd_at(mm, mmun_start, pmd, entry); 1805 + flush_tlb_range(vma, mmun_start, mmun_end); 1806 update_mmu_cache_pmd(vma, address, &entry); 1807 + 1808 + if (page_count(page) != 2) { 1809 + set_pmd_at(mm, mmun_start, pmd, orig_entry); 1810 + flush_tlb_range(vma, mmun_start, mmun_end); 1811 + update_mmu_cache_pmd(vma, address, &entry); 1812 + page_remove_rmap(new_page); 1813 + goto fail_putback; 1814 + } 1815 + 1816 page_remove_rmap(page); 1817 + 1818 /* 1819 * Finish the charge transaction under the page table lock to 1820 * prevent split_huge_page() from dividing up the charge ··· 1803 */ 1804 mem_cgroup_end_migration(memcg, page, new_page, true); 1805 spin_unlock(ptl); 1806 + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1807 1808 unlock_page(new_page); 1809 unlock_page(page); ··· 1820 out_fail: 1821 count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR); 1822 out_dropref: 1823 + ptl = pmd_lock(mm, pmd); 1824 + if (pmd_same(*pmd, entry)) { 1825 + entry = pmd_mknonnuma(entry); 1826 + set_pmd_at(mm, mmun_start, pmd, entry); 1827 + update_mmu_cache_pmd(vma, address, &entry); 1828 + } 1829 + spin_unlock(ptl); 1830 1831 + out_unlock: 1832 unlock_page(page); 1833 put_page(page); 1834 return 0;
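The THP migration hunk above installs the new PMD first and only then re-checks page_count(); if a parallel get_user_pages() grabbed an extra reference, the original entry is restored and the new page is torn down via fail_putback. Below is a loose userspace sketch of that publish-then-verify-then-roll-back idea; the struct, refcount and helper names are invented for the illustration and are not kernel APIs.

    /*
     * Loose userspace illustration (invented names, not kernel code):
     * install the replacement, re-check that nobody else still holds a
     * reference to the old object, and roll back if they do.
     */
    #include <stdatomic.h>
    #include <stdio.h>

    struct buf {
        atomic_int refs;        /* references held by readers */
        int data;
    };

    static _Atomic(struct buf *) current_buf;

    static int try_replace(struct buf *old, struct buf *new)
    {
        atomic_store(&current_buf, new);        /* publish the new object */

        if (atomic_load(&old->refs) != 1) {     /* a racing reader holds a ref */
            atomic_store(&current_buf, old);    /* roll back to the old one */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct buf a = { .refs = 1, .data = 1 };
        struct buf b = { .refs = 1, .data = 2 };

        atomic_store(&current_buf, &a);
        if (try_replace(&a, &b))
            printf("rolled back, old buffer still in use\n");
        else
            printf("replaced, now serving data %d\n",
                   atomic_load(&current_buf)->data);
        return 0;
    }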
+11 -2
mm/mprotect.c
··· 52 pte_t ptent; 53 bool updated = false; 54 55 - ptent = ptep_modify_prot_start(mm, addr, pte); 56 if (!prot_numa) { 57 ptent = pte_modify(ptent, newprot); 58 updated = true; 59 } else { 60 struct page *page; 61 62 page = vm_normal_page(vma, addr, oldpte); 63 if (page) { 64 if (!pte_numa(oldpte)) { 65 ptent = pte_mknuma(ptent); 66 updated = true; 67 } 68 } ··· 83 84 if (updated) 85 pages++; 86 - ptep_modify_prot_commit(mm, addr, pte, ptent); 87 } else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) { 88 swp_entry_t entry = pte_to_swp_entry(oldpte); 89 ··· 188 BUG_ON(addr >= end); 189 pgd = pgd_offset(mm, addr); 190 flush_cache_range(vma, addr, end); 191 do { 192 next = pgd_addr_end(addr, end); 193 if (pgd_none_or_clear_bad(pgd)) ··· 200 /* Only flush the TLB if we actually modified any entries: */ 201 if (pages) 202 flush_tlb_range(vma, start, end); 203 204 return pages; 205 }
··· 52 pte_t ptent; 53 bool updated = false; 54 55 if (!prot_numa) { 56 + ptent = ptep_modify_prot_start(mm, addr, pte); 57 + if (pte_numa(ptent)) 58 + ptent = pte_mknonnuma(ptent); 59 ptent = pte_modify(ptent, newprot); 60 updated = true; 61 } else { 62 struct page *page; 63 64 + ptent = *pte; 65 page = vm_normal_page(vma, addr, oldpte); 66 if (page) { 67 if (!pte_numa(oldpte)) { 68 ptent = pte_mknuma(ptent); 69 + set_pte_at(mm, addr, pte, ptent); 70 updated = true; 71 } 72 } ··· 79 80 if (updated) 81 pages++; 82 + 83 + /* Only !prot_numa always clears the pte */ 84 + if (!prot_numa) 85 + ptep_modify_prot_commit(mm, addr, pte, ptent); 86 } else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) { 87 swp_entry_t entry = pte_to_swp_entry(oldpte); 88 ··· 181 BUG_ON(addr >= end); 182 pgd = pgd_offset(mm, addr); 183 flush_cache_range(vma, addr, end); 184 + set_tlb_flush_pending(mm); 185 do { 186 next = pgd_addr_end(addr, end); 187 if (pgd_none_or_clear_bad(pgd)) ··· 192 /* Only flush the TLB if we actually modified any entries: */ 193 if (pages) 194 flush_tlb_range(vma, start, end); 195 + clear_tlb_flush_pending(mm); 196 197 return pages; 198 }
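The change_protection() path above now brackets the page-table walk with set_tlb_flush_pending()/clear_tlb_flush_pending(), and the THP migration path earlier in this series checks mm_tlb_flush_pending() so it can flush before trusting the entries it sees. The userspace sketch below only illustrates that handshake with invented helper names; it is not the kernel implementation.

    /*
     * Userspace sketch of a deferred-flush flag: the updater announces a
     * pending flush before a batched update and clears it afterwards; a
     * racing path that must not trust stale state checks the flag and
     * flushes on its own.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool flush_pending;

    static void flush(const char *who)
    {
        printf("%s: flush\n", who);
    }

    static void batched_update(void)
    {
        atomic_store(&flush_pending, true);     /* announce the deferred flush */
        /* ... change many entries without flushing each one ... */
        flush("updater");                       /* one flush for the whole batch */
        atomic_store(&flush_pending, false);
    }

    static void racing_path(void)
    {
        if (atomic_load(&flush_pending))        /* updater still in flight */
            flush("racer");                     /* don't rely on cached state */
    }

    int main(void)
    {
        batched_update();
        racing_path();
        return 0;
    }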
+9 -10
mm/page_alloc.c
··· 1816 1817 static bool zone_local(struct zone *local_zone, struct zone *zone) 1818 { 1819 - return node_distance(local_zone->node, zone->node) == LOCAL_DISTANCE; 1820 } 1821 1822 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone) ··· 1913 * page was allocated in should have no effect on the 1914 * time the page has in memory before being reclaimed. 1915 * 1916 - * When zone_reclaim_mode is enabled, try to stay in 1917 - * local zones in the fastpath. If that fails, the 1918 - * slowpath is entered, which will do another pass 1919 - * starting with the local zones, but ultimately fall 1920 - * back to remote zones that do not partake in the 1921 - * fairness round-robin cycle of this zonelist. 1922 */ 1923 if (alloc_flags & ALLOC_WMARK_LOW) { 1924 if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) 1925 continue; 1926 - if (zone_reclaim_mode && 1927 - !zone_local(preferred_zone, zone)) 1928 continue; 1929 } 1930 /* ··· 2389 * thrash fairness information for zones that are not 2390 * actually part of this zonelist's round-robin cycle. 2391 */ 2392 - if (zone_reclaim_mode && !zone_local(preferred_zone, zone)) 2393 continue; 2394 mod_zone_page_state(zone, NR_ALLOC_BATCH, 2395 high_wmark_pages(zone) -
··· 1816 1817 static bool zone_local(struct zone *local_zone, struct zone *zone) 1818 { 1819 + return local_zone->node == zone->node; 1820 } 1821 1822 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone) ··· 1913 * page was allocated in should have no effect on the 1914 * time the page has in memory before being reclaimed. 1915 * 1916 + * Try to stay in local zones in the fastpath. If 1917 + * that fails, the slowpath is entered, which will do 1918 + * another pass starting with the local zones, but 1919 + * ultimately fall back to remote zones that do not 1920 + * partake in the fairness round-robin cycle of this 1921 + * zonelist. 1922 */ 1923 if (alloc_flags & ALLOC_WMARK_LOW) { 1924 if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) 1925 continue; 1926 + if (!zone_local(preferred_zone, zone)) 1927 continue; 1928 } 1929 /* ··· 2390 * thrash fairness information for zones that are not 2391 * actually part of this zonelist's round-robin cycle. 2392 */ 2393 + if (!zone_local(preferred_zone, zone)) 2394 continue; 2395 mod_zone_page_state(zone, NR_ALLOC_BATCH, 2396 high_wmark_pages(zone) -
+6 -2
mm/pgtable-generic.c
··· 110 pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, 111 pte_t *ptep) 112 { 113 pte_t pte; 114 - pte = ptep_get_and_clear((vma)->vm_mm, address, ptep); 115 - if (pte_accessible(pte)) 116 flush_tlb_page(vma, address); 117 return pte; 118 } ··· 192 void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, 193 pmd_t *pmdp) 194 { 195 set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp)); 196 flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); 197 }
··· 110 pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, 111 pte_t *ptep) 112 { 113 + struct mm_struct *mm = (vma)->vm_mm; 114 pte_t pte; 115 + pte = ptep_get_and_clear(mm, address, ptep); 116 + if (pte_accessible(mm, pte)) 117 flush_tlb_page(vma, address); 118 return pte; 119 } ··· 191 void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, 192 pmd_t *pmdp) 193 { 194 + pmd_t entry = *pmdp; 195 + if (pmd_numa(entry)) 196 + entry = pmd_mknonnuma(entry); 197 set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp)); 198 flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); 199 }
+4
mm/rmap.c
··· 600 spinlock_t *ptl; 601 602 if (unlikely(PageHuge(page))) { 603 pte = huge_pte_offset(mm, address); 604 ptl = huge_pte_lockptr(page_hstate(page), mm, pte); 605 goto check; 606 }
··· 600 spinlock_t *ptl; 601 602 if (unlikely(PageHuge(page))) { 603 + /* when pud is not present, pte will be NULL */ 604 pte = huge_pte_offset(mm, address); 605 + if (!pte) 606 + return NULL; 607 + 608 ptl = huge_pte_lockptr(page_hstate(page), mm, pte); 609 goto check; 610 }
+1
net/core/neighbour.c
··· 1161 neigh->parms->reachable_time : 1162 0))); 1163 neigh->nud_state = new; 1164 } 1165 1166 if (lladdr != neigh->ha) {
··· 1161 neigh->parms->reachable_time : 1162 0))); 1163 neigh->nud_state = new; 1164 + notify = 1; 1165 } 1166 1167 if (lladdr != neigh->ha) {
+1
net/ipv4/netfilter/ipt_SYNPROXY.c
··· 423 static struct xt_target synproxy_tg4_reg __read_mostly = { 424 .name = "SYNPROXY", 425 .family = NFPROTO_IPV4, 426 .target = synproxy_tg4, 427 .targetsize = sizeof(struct xt_synproxy_info), 428 .checkentry = synproxy_tg4_check,
··· 423 static struct xt_target synproxy_tg4_reg __read_mostly = { 424 .name = "SYNPROXY", 425 .family = NFPROTO_IPV4, 426 + .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD), 427 .target = synproxy_tg4, 428 .targetsize = sizeof(struct xt_synproxy_info), 429 .checkentry = synproxy_tg4_check,
+1 -1
net/ipv4/netfilter/nft_reject_ipv4.c
··· 72 { 73 const struct nft_reject *priv = nft_expr_priv(expr); 74 75 - if (nla_put_be32(skb, NFTA_REJECT_TYPE, priv->type)) 76 goto nla_put_failure; 77 78 switch (priv->type) {
··· 72 { 73 const struct nft_reject *priv = nft_expr_priv(expr); 74 75 + if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type))) 76 goto nla_put_failure; 77 78 switch (priv->type) {
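The one-liner above fixes a byte-order bug: nla_put_be32() expects a big-endian value, so the host-order priv->type has to pass through htonl() first. A trivial standalone example of that conversion, using only standard headers:

    #include <arpa/inet.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t host_val = 1;                  /* e.g. a reject type code */
        uint32_t wire_val = htonl(host_val);    /* big-endian for the wire */

        printf("host 0x%08" PRIx32 " -> wire 0x%08" PRIx32 "\n",
               host_val, wire_val);
        return 0;
    }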
+4 -9
net/ipv4/udp.c
··· 1600 } 1601 1602 /* For TCP sockets, sk_rx_dst is protected by socket lock 1603 - * For UDP, we use sk_dst_lock to guard against concurrent changes. 1604 */ 1605 static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst) 1606 { 1607 struct dst_entry *old; 1608 1609 - spin_lock(&sk->sk_dst_lock); 1610 - old = sk->sk_rx_dst; 1611 - if (likely(old != dst)) { 1612 - dst_hold(dst); 1613 - sk->sk_rx_dst = dst; 1614 - dst_release(old); 1615 - } 1616 - spin_unlock(&sk->sk_dst_lock); 1617 } 1618 1619 /*
··· 1600 } 1601 1602 /* For TCP sockets, sk_rx_dst is protected by socket lock 1603 + * For UDP, we use xchg() to guard against concurrent changes. 1604 */ 1605 static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst) 1606 { 1607 struct dst_entry *old; 1608 1609 + dst_hold(dst); 1610 + old = xchg(&sk->sk_rx_dst, dst); 1611 + dst_release(old); 1612 } 1613 1614 /*
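udp_sk_rx_dst_set() above drops the spinlock in favour of a bare xchg(): take a reference on the new dst, atomically swap it into sk->sk_rx_dst, and release whatever reference the slot previously held. The userspace sketch below shows the same pattern with C11 atomics; struct dst, dst_hold() and dst_release() here are invented stand-ins, not the kernel objects.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct dst {
        atomic_int refs;
        int id;
    };

    static _Atomic(struct dst *) rx_dst;

    static void dst_hold(struct dst *d)
    {
        atomic_fetch_add(&d->refs, 1);
    }

    static void dst_release(struct dst *d)
    {
        if (d && atomic_fetch_sub(&d->refs, 1) == 1)
            free(d);                            /* last reference gone */
    }

    static void set_rx_dst(struct dst *new)
    {
        struct dst *old;

        dst_hold(new);                          /* reference now owned by the slot */
        old = atomic_exchange(&rx_dst, new);    /* lockless replacement */
        dst_release(old);                       /* drop the slot's old reference */
    }

    int main(void)
    {
        struct dst *a = calloc(1, sizeof(*a));
        struct dst *b = calloc(1, sizeof(*b));

        if (!a || !b)
            return 1;
        a->refs = 1;  a->id = 1;                /* caller's initial references */
        b->refs = 1;  b->id = 2;

        set_rx_dst(a);
        set_rx_dst(b);                          /* releases the slot's ref on a */

        dst_release(a);                         /* drop the caller references */
        dst_release(b);
        dst_release(atomic_exchange(&rx_dst, NULL));    /* empty the slot */
        return 0;
    }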
+1
net/ipv6/netfilter/ip6t_SYNPROXY.c
··· 446 static struct xt_target synproxy_tg6_reg __read_mostly = { 447 .name = "SYNPROXY", 448 .family = NFPROTO_IPV6, 449 .target = synproxy_tg6, 450 .targetsize = sizeof(struct xt_synproxy_info), 451 .checkentry = synproxy_tg6_check,
··· 446 static struct xt_target synproxy_tg6_reg __read_mostly = { 447 .name = "SYNPROXY", 448 .family = NFPROTO_IPV6, 449 + .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD), 450 .target = synproxy_tg6, 451 .targetsize = sizeof(struct xt_synproxy_info), 452 .checkentry = synproxy_tg6_check,
+16 -1
net/sctp/probe.c
··· 38 #include <net/sctp/sctp.h> 39 #include <net/sctp/sm.h> 40 41 MODULE_AUTHOR("Wei Yongjun <yjwei@cn.fujitsu.com>"); 42 MODULE_DESCRIPTION("SCTP snooper"); 43 MODULE_LICENSE("GPL"); ··· 183 .entry = jsctp_sf_eat_sack, 184 }; 185 186 static __init int sctpprobe_init(void) 187 { 188 int ret = -ENOMEM; ··· 217 &sctpprobe_fops)) 218 goto free_kfifo; 219 220 - ret = register_jprobe(&sctp_recv_probe); 221 if (ret) 222 goto remove_proc; 223
··· 38 #include <net/sctp/sctp.h> 39 #include <net/sctp/sm.h> 40 41 + MODULE_SOFTDEP("pre: sctp"); 42 MODULE_AUTHOR("Wei Yongjun <yjwei@cn.fujitsu.com>"); 43 MODULE_DESCRIPTION("SCTP snooper"); 44 MODULE_LICENSE("GPL"); ··· 182 .entry = jsctp_sf_eat_sack, 183 }; 184 185 + static __init int sctp_setup_jprobe(void) 186 + { 187 + int ret = register_jprobe(&sctp_recv_probe); 188 + 189 + if (ret) { 190 + if (request_module("sctp")) 191 + goto out; 192 + ret = register_jprobe(&sctp_recv_probe); 193 + } 194 + 195 + out: 196 + return ret; 197 + } 198 + 199 static __init int sctpprobe_init(void) 200 { 201 int ret = -ENOMEM; ··· 202 &sctpprobe_fops)) 203 goto free_kfifo; 204 205 + ret = sctp_setup_jprobe(); 206 if (ret) 207 goto remove_proc; 208
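The probe module above now declares MODULE_SOFTDEP("pre: sctp") and, if register_jprobe() fails because the SCTP code is not loaded yet, requests the module and retries exactly once. A small userspace sketch of that attach/load-dependency/retry fallback, with invented stand-ins in place of register_jprobe() and request_module():

    #include <stdio.h>

    static int attempts;

    static int attach_probe(void)
    {
        return ++attempts >= 2 ? 0 : -1;        /* works once the dependency is in */
    }

    static int load_dependency(void)
    {
        printf("loading dependency\n");
        return 0;
    }

    static int setup_probe(void)
    {
        int ret = attach_probe();

        if (ret) {
            if (load_dependency())
                return ret;                     /* give up, dependency unavailable */
            ret = attach_probe();               /* retry exactly once */
        }
        return ret;
    }

    int main(void)
    {
        printf("setup_probe: %d\n", setup_probe());
        return 0;
    }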
+6 -2
net/unix/af_unix.c
··· 718 int err; 719 unsigned int retries = 0; 720 721 - mutex_lock(&u->readlock); 722 723 err = 0; 724 if (u->addr) ··· 879 goto out; 880 addr_len = err; 881 882 - mutex_lock(&u->readlock); 883 884 err = -EINVAL; 885 if (u->addr)
··· 718 int err; 719 unsigned int retries = 0; 720 721 + err = mutex_lock_interruptible(&u->readlock); 722 + if (err) 723 + return err; 724 725 err = 0; 726 if (u->addr) ··· 877 goto out; 878 addr_len = err; 879 880 + err = mutex_lock_interruptible(&u->readlock); 881 + if (err) 882 + goto out; 883 884 err = -EINVAL; 885 if (u->addr)
+2
sound/core/pcm_lib.c
··· 1937 case SNDRV_PCM_STATE_DISCONNECTED: 1938 err = -EBADFD; 1939 goto _endloop; 1940 } 1941 if (!tout) { 1942 snd_printd("%s write error (DMA or IRQ trouble?)\n",
··· 1937 case SNDRV_PCM_STATE_DISCONNECTED: 1938 err = -EBADFD; 1939 goto _endloop; 1940 + case SNDRV_PCM_STATE_PAUSED: 1941 + continue; 1942 } 1943 if (!tout) { 1944 snd_printd("%s write error (DMA or IRQ trouble?)\n",
+4
sound/pci/hda/hda_intel.c
··· 3433 * white/black-list for enable_msi 3434 */ 3435 static struct snd_pci_quirk msi_black_list[] = { 3436 SND_PCI_QUIRK(0x1043, 0x81f2, "ASUS", 0), /* Athlon64 X2 + nvidia */ 3437 SND_PCI_QUIRK(0x1043, 0x81f6, "ASUS", 0), /* nvidia */ 3438 SND_PCI_QUIRK(0x1043, 0x822d, "ASUS", 0), /* Athlon64 X2 + nvidia MCP55 */
··· 3433 * white/black-list for enable_msi 3434 */ 3435 static struct snd_pci_quirk msi_black_list[] = { 3436 + SND_PCI_QUIRK(0x103c, 0x2191, "HP", 0), /* AMD Hudson */ 3437 + SND_PCI_QUIRK(0x103c, 0x2192, "HP", 0), /* AMD Hudson */ 3438 + SND_PCI_QUIRK(0x103c, 0x21f7, "HP", 0), /* AMD Hudson */ 3439 + SND_PCI_QUIRK(0x103c, 0x21fa, "HP", 0), /* AMD Hudson */ 3440 SND_PCI_QUIRK(0x1043, 0x81f2, "ASUS", 0), /* Athlon64 X2 + nvidia */ 3441 SND_PCI_QUIRK(0x1043, 0x81f6, "ASUS", 0), /* nvidia */ 3442 SND_PCI_QUIRK(0x1043, 0x822d, "ASUS", 0), /* Athlon64 X2 + nvidia MCP55 */
+4
sound/pci/hda/patch_realtek.c
··· 4247 SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4248 SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4249 SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4250 SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4251 SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4252 SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS), 4253 SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4254 SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS), 4255 SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4256 SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4257 SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4258 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
··· 4247 SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4248 SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4249 SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4250 + SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4251 SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4252 SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4253 SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS), 4254 SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4255 + SND_PCI_QUIRK(0x1028, 0x0629, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4256 SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS), 4257 + SND_PCI_QUIRK(0x1028, 0x063e, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4258 SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4259 + SND_PCI_QUIRK(0x1028, 0x0640, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE), 4260 SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4261 SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 4262 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+29 -1
sound/soc/atmel/atmel_ssc_dai.c
··· 648 649 dma_params = ssc_p->dma_params[dir]; 650 651 - ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable); 652 ssc_writel(ssc_p->ssc->regs, IDR, dma_params->mask->ssc_error); 653 654 pr_debug("%s enabled SSC_SR=0x%08x\n", ··· 657 return 0; 658 } 659 660 661 #ifdef CONFIG_PM 662 static int atmel_ssc_suspend(struct snd_soc_dai *cpu_dai) ··· 758 .startup = atmel_ssc_startup, 759 .shutdown = atmel_ssc_shutdown, 760 .prepare = atmel_ssc_prepare, 761 .hw_params = atmel_ssc_hw_params, 762 .set_fmt = atmel_ssc_set_dai_fmt, 763 .set_clkdiv = atmel_ssc_set_dai_clkdiv,
··· 648 649 dma_params = ssc_p->dma_params[dir]; 650 651 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable); 652 ssc_writel(ssc_p->ssc->regs, IDR, dma_params->mask->ssc_error); 653 654 pr_debug("%s enabled SSC_SR=0x%08x\n", ··· 657 return 0; 658 } 659 660 + static int atmel_ssc_trigger(struct snd_pcm_substream *substream, 661 + int cmd, struct snd_soc_dai *dai) 662 + { 663 + struct atmel_ssc_info *ssc_p = &ssc_info[dai->id]; 664 + struct atmel_pcm_dma_params *dma_params; 665 + int dir; 666 + 667 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 668 + dir = 0; 669 + else 670 + dir = 1; 671 + 672 + dma_params = ssc_p->dma_params[dir]; 673 + 674 + switch (cmd) { 675 + case SNDRV_PCM_TRIGGER_START: 676 + case SNDRV_PCM_TRIGGER_RESUME: 677 + case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 678 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable); 679 + break; 680 + default: 681 + ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable); 682 + break; 683 + } 684 + 685 + return 0; 686 + } 687 688 #ifdef CONFIG_PM 689 static int atmel_ssc_suspend(struct snd_soc_dai *cpu_dai) ··· 731 .startup = atmel_ssc_startup, 732 .shutdown = atmel_ssc_shutdown, 733 .prepare = atmel_ssc_prepare, 734 + .trigger = atmel_ssc_trigger, 735 .hw_params = atmel_ssc_hw_params, 736 .set_fmt = atmel_ssc_set_dai_fmt, 737 .set_clkdiv = atmel_ssc_set_dai_clkdiv,
+1 -1
sound/soc/atmel/sam9x5_wm8731.c
··· 109 dai->stream_name = "WM8731 PCM"; 110 dai->codec_dai_name = "wm8731-hifi"; 111 dai->init = sam9x5_wm8731_init; 112 - dai->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 113 | SND_SOC_DAIFMT_CBM_CFM; 114 115 ret = snd_soc_of_parse_card_name(card, "atmel,model");
··· 109 dai->stream_name = "WM8731 PCM"; 110 dai->codec_dai_name = "wm8731-hifi"; 111 dai->init = sam9x5_wm8731_init; 112 + dai->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF 113 | SND_SOC_DAIFMT_CBM_CFM; 114 115 ret = snd_soc_of_parse_card_name(card, "atmel,model");
+1 -1
sound/soc/codecs/wm5110.c
··· 1012 { "AEC Loopback", "HPOUT3L", "OUT3L" }, 1013 { "AEC Loopback", "HPOUT3R", "OUT3R" }, 1014 { "HPOUT3L", NULL, "OUT3L" }, 1015 - { "HPOUT3R", NULL, "OUT3L" }, 1016 1017 { "AEC Loopback", "SPKOUTL", "OUT4L" }, 1018 { "SPKOUTLN", NULL, "OUT4L" },
··· 1012 { "AEC Loopback", "HPOUT3L", "OUT3L" }, 1013 { "AEC Loopback", "HPOUT3R", "OUT3R" }, 1014 { "HPOUT3L", NULL, "OUT3L" }, 1015 + { "HPOUT3R", NULL, "OUT3R" }, 1016 1017 { "AEC Loopback", "SPKOUTL", "OUT4L" }, 1018 { "SPKOUTLN", NULL, "OUT4L" },
+1 -1
sound/soc/codecs/wm8904.c
··· 1444 1445 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 1446 case SND_SOC_DAIFMT_DSP_B: 1447 - aif1 |= WM8904_AIF_LRCLK_INV; 1448 case SND_SOC_DAIFMT_DSP_A: 1449 aif1 |= 0x3; 1450 break;
··· 1444 1445 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 1446 case SND_SOC_DAIFMT_DSP_B: 1447 + aif1 |= 0x3 | WM8904_AIF_LRCLK_INV; 1448 case SND_SOC_DAIFMT_DSP_A: 1449 aif1 |= 0x3; 1450 break;
+13
sound/soc/codecs/wm8962.c
··· 2439 snd_soc_update_bits(codec, WM8962_CLOCKING_4, 2440 WM8962_SYSCLK_RATE_MASK, clocking4); 2441 2442 dspclk = snd_soc_read(codec, WM8962_CLOCKING1); 2443 if (dspclk < 0) { 2444 dev_err(codec->dev, "Failed to read DSPCLK: %d\n", dspclk); 2445 return;
··· 2439 snd_soc_update_bits(codec, WM8962_CLOCKING_4, 2440 WM8962_SYSCLK_RATE_MASK, clocking4); 2441 2442 + /* DSPCLK_DIV can be only generated correctly after enabling SYSCLK. 2443 + * So we here provisionally enable it and then disable it afterward 2444 + * if current bias_level hasn't reached SND_SOC_BIAS_ON. 2445 + */ 2446 + if (codec->dapm.bias_level != SND_SOC_BIAS_ON) 2447 + snd_soc_update_bits(codec, WM8962_CLOCKING2, 2448 + WM8962_SYSCLK_ENA_MASK, WM8962_SYSCLK_ENA); 2449 + 2450 dspclk = snd_soc_read(codec, WM8962_CLOCKING1); 2451 + 2452 + if (codec->dapm.bias_level != SND_SOC_BIAS_ON) 2453 + snd_soc_update_bits(codec, WM8962_CLOCKING2, 2454 + WM8962_SYSCLK_ENA_MASK, 0); 2455 + 2456 if (dspclk < 0) { 2457 dev_err(codec->dev, "Failed to read DSPCLK: %d\n", dspclk); 2458 return;
+7 -3
sound/soc/codecs/wm_adsp.c
··· 1474 return ret; 1475 1476 /* Wait for the RAM to start, should be near instantaneous */ 1477 - count = 0; 1478 - do { 1479 ret = regmap_read(dsp->regmap, dsp->base + ADSP2_STATUS1, 1480 &val); 1481 if (ret != 0) 1482 return ret; 1483 - } while (!(val & ADSP2_RAM_RDY) && ++count < 10); 1484 1485 if (!(val & ADSP2_RAM_RDY)) { 1486 adsp_err(dsp, "Failed to start DSP RAM\n");
··· 1474 return ret; 1475 1476 /* Wait for the RAM to start, should be near instantaneous */ 1477 + for (count = 0; count < 10; ++count) { 1478 ret = regmap_read(dsp->regmap, dsp->base + ADSP2_STATUS1, 1479 &val); 1480 if (ret != 0) 1481 return ret; 1482 + 1483 + if (val & ADSP2_RAM_RDY) 1484 + break; 1485 + 1486 + msleep(1); 1487 + } 1488 1489 if (!(val & ADSP2_RAM_RDY)) { 1490 adsp_err(dsp, "Failed to start DSP RAM\n");
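The open-coded do/while above becomes a bounded for loop that sleeps between reads, so a DSP that is slow to come up is polled a fixed number of times instead of being hammered back to back. Below is a userspace sketch of the same bounded poll; read_status() is an invented stand-in for the regmap read and simply reports ready on the third call.

    #include <stdio.h>
    #include <unistd.h>

    #define READY_BIT   0x1
    #define MAX_TRIES   10

    static int read_status(unsigned int *val)
    {
        static int calls;

        *val = (++calls >= 3) ? READY_BIT : 0;
        return 0;                               /* 0 means the read itself worked */
    }

    static int wait_for_ready(void)
    {
        unsigned int val = 0;
        int i, ret;

        for (i = 0; i < MAX_TRIES; i++) {
            ret = read_status(&val);
            if (ret)
                return ret;                     /* propagate read errors */
            if (val & READY_BIT)
                return 0;                       /* device is up */
            usleep(1000);                       /* back off before retrying */
        }
        return -1;                              /* timed out */
    }

    int main(void)
    {
        printf("wait_for_ready: %d\n", wait_for_ready());
        return 0;
    }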
-2
sound/soc/fsl/imx-wm8962.c
··· 130 break; 131 } 132 133 - dapm->bias_level = level; 134 - 135 return 0; 136 } 137
··· 130 break; 131 } 132 133 return 0; 134 } 135
+12 -12
sound/soc/kirkwood/kirkwood-i2s.c
··· 473 .playback = { 474 .channels_min = 1, 475 .channels_max = 2, 476 - .rates = SNDRV_PCM_RATE_8000_192000 | 477 - SNDRV_PCM_RATE_CONTINUOUS | 478 - SNDRV_PCM_RATE_KNOT, 479 .formats = KIRKWOOD_I2S_FORMATS, 480 }, 481 .capture = { 482 .channels_min = 1, 483 .channels_max = 2, 484 - .rates = SNDRV_PCM_RATE_8000_192000 | 485 - SNDRV_PCM_RATE_CONTINUOUS | 486 - SNDRV_PCM_RATE_KNOT, 487 .formats = KIRKWOOD_I2S_FORMATS, 488 }, 489 .ops = &kirkwood_i2s_dai_ops, ··· 494 .playback = { 495 .channels_min = 1, 496 .channels_max = 2, 497 - .rates = SNDRV_PCM_RATE_8000_192000 | 498 - SNDRV_PCM_RATE_CONTINUOUS | 499 - SNDRV_PCM_RATE_KNOT, 500 .formats = KIRKWOOD_SPDIF_FORMATS, 501 }, 502 .capture = { 503 .channels_min = 1, 504 .channels_max = 2, 505 - .rates = SNDRV_PCM_RATE_8000_192000 | 506 - SNDRV_PCM_RATE_CONTINUOUS | 507 - SNDRV_PCM_RATE_KNOT, 508 .formats = KIRKWOOD_SPDIF_FORMATS, 509 }, 510 .ops = &kirkwood_i2s_dai_ops,
··· 473 .playback = { 474 .channels_min = 1, 475 .channels_max = 2, 476 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 477 + .rate_min = 5512, 478 + .rate_max = 192000, 479 .formats = KIRKWOOD_I2S_FORMATS, 480 }, 481 .capture = { 482 .channels_min = 1, 483 .channels_max = 2, 484 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 485 + .rate_min = 5512, 486 + .rate_max = 192000, 487 .formats = KIRKWOOD_I2S_FORMATS, 488 }, 489 .ops = &kirkwood_i2s_dai_ops, ··· 494 .playback = { 495 .channels_min = 1, 496 .channels_max = 2, 497 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 498 + .rate_min = 5512, 499 + .rate_max = 192000, 500 .formats = KIRKWOOD_SPDIF_FORMATS, 501 }, 502 .capture = { 503 .channels_min = 1, 504 .channels_max = 2, 505 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 506 + .rate_min = 5512, 507 + .rate_max = 192000, 508 .formats = KIRKWOOD_SPDIF_FORMATS, 509 }, 510 .ops = &kirkwood_i2s_dai_ops,
+27 -11
sound/soc/soc-generic-dmaengine-pcm.c
··· 305 } 306 } 307 308 /** 309 * snd_dmaengine_pcm_register - Register a dmaengine based PCM device 310 * @dev: The parent device for the PCM device ··· 329 const struct snd_dmaengine_pcm_config *config, unsigned int flags) 330 { 331 struct dmaengine_pcm *pcm; 332 333 pcm = kzalloc(sizeof(*pcm), GFP_KERNEL); 334 if (!pcm) ··· 341 dmaengine_pcm_request_chan_of(pcm, dev); 342 343 if (flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE) 344 - return snd_soc_add_platform(dev, &pcm->platform, 345 &dmaengine_no_residue_pcm_platform); 346 else 347 - return snd_soc_add_platform(dev, &pcm->platform, 348 &dmaengine_pcm_platform); 349 } 350 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_register); 351 ··· 369 { 370 struct snd_soc_platform *platform; 371 struct dmaengine_pcm *pcm; 372 - unsigned int i; 373 374 platform = snd_soc_lookup_platform(dev); 375 if (!platform) ··· 376 377 pcm = soc_platform_to_pcm(platform); 378 379 - for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; i++) { 380 - if (pcm->chan[i]) { 381 - dma_release_channel(pcm->chan[i]); 382 - if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) 383 - break; 384 - } 385 - } 386 - 387 snd_soc_remove_platform(platform); 388 kfree(pcm); 389 } 390 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_unregister);
··· 305 } 306 } 307 308 + static void dmaengine_pcm_release_chan(struct dmaengine_pcm *pcm) 309 + { 310 + unsigned int i; 311 + 312 + for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; 313 + i++) { 314 + if (!pcm->chan[i]) 315 + continue; 316 + dma_release_channel(pcm->chan[i]); 317 + if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) 318 + break; 319 + } 320 + } 321 + 322 /** 323 * snd_dmaengine_pcm_register - Register a dmaengine based PCM device 324 * @dev: The parent device for the PCM device ··· 315 const struct snd_dmaengine_pcm_config *config, unsigned int flags) 316 { 317 struct dmaengine_pcm *pcm; 318 + int ret; 319 320 pcm = kzalloc(sizeof(*pcm), GFP_KERNEL); 321 if (!pcm) ··· 326 dmaengine_pcm_request_chan_of(pcm, dev); 327 328 if (flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE) 329 + ret = snd_soc_add_platform(dev, &pcm->platform, 330 &dmaengine_no_residue_pcm_platform); 331 else 332 + ret = snd_soc_add_platform(dev, &pcm->platform, 333 &dmaengine_pcm_platform); 334 + if (ret) 335 + goto err_free_dma; 336 + 337 + return 0; 338 + 339 + err_free_dma: 340 + dmaengine_pcm_release_chan(pcm); 341 + kfree(pcm); 342 + return ret; 343 } 344 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_register); 345 ··· 345 { 346 struct snd_soc_platform *platform; 347 struct dmaengine_pcm *pcm; 348 349 platform = snd_soc_lookup_platform(dev); 350 if (!platform) ··· 353 354 pcm = soc_platform_to_pcm(platform); 355 356 snd_soc_remove_platform(platform); 357 + dmaengine_pcm_release_chan(pcm); 358 kfree(pcm); 359 } 360 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_unregister);
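snd_dmaengine_pcm_register() above gains a proper error path: the channel-release logic is factored into dmaengine_pcm_release_chan() so that a failed snd_soc_add_platform() frees the DMA channels and the pcm allocation instead of leaking them, and unregister reuses the same helper. The sketch below shows that goto-based unwind with a shared release helper; the types and functions are invented for the illustration.

    #include <stdio.h>
    #include <stdlib.h>

    struct pcm {
        void *chan;                             /* stands in for the DMA channels */
    };

    static void release_chan(struct pcm *p)
    {
        free(p->chan);                          /* shared by error and teardown paths */
        p->chan = NULL;
    }

    static int register_platform(int fail)
    {
        return fail ? -1 : 0;                   /* stand-in for the registration step */
    }

    static struct pcm *pcm_register(int fail)
    {
        struct pcm *p = calloc(1, sizeof(*p));

        if (!p)
            return NULL;

        p->chan = malloc(64);                   /* resource acquired before registering */
        if (!p->chan)
            goto err_free_pcm;

        if (register_platform(fail))            /* later step failed ... */
            goto err_release_chan;              /* ... so undo what we already did */

        return p;

    err_release_chan:
        release_chan(p);
    err_free_pcm:
        free(p);
        return NULL;
    }

    int main(void)
    {
        struct pcm *ok = pcm_register(0);
        struct pcm *bad = pcm_register(1);

        printf("ok=%p bad=%p\n", (void *)ok, (void *)bad);
        if (ok) {
            release_chan(ok);                   /* normal teardown reuses the helper */
            free(ok);
        }
        return 0;
    }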
+3 -2
sound/soc/soc-pcm.c
··· 600 struct snd_soc_platform *platform = rtd->platform; 601 struct snd_soc_dai *cpu_dai = rtd->cpu_dai; 602 struct snd_soc_dai *codec_dai = rtd->codec_dai; 603 - struct snd_soc_codec *codec = rtd->codec; 604 605 mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass); 606 607 /* apply codec digital mute */ 608 - if (!codec->active) 609 snd_soc_dai_digital_mute(codec_dai, 1, substream->stream); 610 611 /* free any machine hw params */
··· 600 struct snd_soc_platform *platform = rtd->platform; 601 struct snd_soc_dai *cpu_dai = rtd->cpu_dai; 602 struct snd_soc_dai *codec_dai = rtd->codec_dai; 603 + bool playback = substream->stream == SNDRV_PCM_STREAM_PLAYBACK; 604 605 mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass); 606 607 /* apply codec digital mute */ 608 + if ((playback && codec_dai->playback_active == 1) || 609 + (!playback && codec_dai->capture_active == 1)) 610 snd_soc_dai_digital_mute(codec_dai, 1, substream->stream); 611 612 /* free any machine hw params */
+3 -3
sound/soc/tegra/tegra20_i2s.c
··· 74 unsigned int fmt) 75 { 76 struct tegra20_i2s *i2s = snd_soc_dai_get_drvdata(dai); 77 - unsigned int mask, val; 78 79 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 80 case SND_SOC_DAIFMT_NB_NF: ··· 83 return -EINVAL; 84 } 85 86 - mask = TEGRA20_I2S_CTRL_MASTER_ENABLE; 87 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 88 case SND_SOC_DAIFMT_CBS_CFS: 89 - val = TEGRA20_I2S_CTRL_MASTER_ENABLE; 90 break; 91 case SND_SOC_DAIFMT_CBM_CFM: 92 break;
··· 74 unsigned int fmt) 75 { 76 struct tegra20_i2s *i2s = snd_soc_dai_get_drvdata(dai); 77 + unsigned int mask = 0, val = 0; 78 79 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 80 case SND_SOC_DAIFMT_NB_NF: ··· 83 return -EINVAL; 84 } 85 86 + mask |= TEGRA20_I2S_CTRL_MASTER_ENABLE; 87 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 88 case SND_SOC_DAIFMT_CBS_CFS: 89 + val |= TEGRA20_I2S_CTRL_MASTER_ENABLE; 90 break; 91 case SND_SOC_DAIFMT_CBM_CFM: 92 break;
+5 -5
sound/soc/tegra/tegra20_spdif.c
··· 67 { 68 struct device *dev = dai->dev; 69 struct tegra20_spdif *spdif = snd_soc_dai_get_drvdata(dai); 70 - unsigned int mask, val; 71 int ret, spdifclock; 72 73 - mask = TEGRA20_SPDIF_CTRL_PACK | 74 - TEGRA20_SPDIF_CTRL_BIT_MODE_MASK; 75 switch (params_format(params)) { 76 case SNDRV_PCM_FORMAT_S16_LE: 77 - val = TEGRA20_SPDIF_CTRL_PACK | 78 - TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT; 79 break; 80 default: 81 return -EINVAL;
··· 67 { 68 struct device *dev = dai->dev; 69 struct tegra20_spdif *spdif = snd_soc_dai_get_drvdata(dai); 70 + unsigned int mask = 0, val = 0; 71 int ret, spdifclock; 72 73 + mask |= TEGRA20_SPDIF_CTRL_PACK | 74 + TEGRA20_SPDIF_CTRL_BIT_MODE_MASK; 75 switch (params_format(params)) { 76 case SNDRV_PCM_FORMAT_S16_LE: 77 + val |= TEGRA20_SPDIF_CTRL_PACK | 78 + TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT; 79 break; 80 default: 81 return -EINVAL;
+3 -3
sound/soc/tegra/tegra30_i2s.c
··· 118 unsigned int fmt) 119 { 120 struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai); 121 - unsigned int mask, val; 122 123 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 124 case SND_SOC_DAIFMT_NB_NF: ··· 127 return -EINVAL; 128 } 129 130 - mask = TEGRA30_I2S_CTRL_MASTER_ENABLE; 131 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 132 case SND_SOC_DAIFMT_CBS_CFS: 133 - val = TEGRA30_I2S_CTRL_MASTER_ENABLE; 134 break; 135 case SND_SOC_DAIFMT_CBM_CFM: 136 break;
··· 118 unsigned int fmt) 119 { 120 struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai); 121 + unsigned int mask = 0, val = 0; 122 123 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 124 case SND_SOC_DAIFMT_NB_NF: ··· 127 return -EINVAL; 128 } 129 130 + mask |= TEGRA30_I2S_CTRL_MASTER_ENABLE; 131 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 132 case SND_SOC_DAIFMT_CBS_CFS: 133 + val |= TEGRA30_I2S_CTRL_MASTER_ENABLE; 134 break; 135 case SND_SOC_DAIFMT_CBM_CFM: 136 break;
+3 -3
tools/power/cpupower/utils/cpupower-set.c
··· 18 #include "helpers/bitmask.h" 19 20 static struct option set_opts[] = { 21 - { .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'}, 22 - { .name = "sched-mc", .has_arg = optional_argument, .flag = NULL, .val = 'm'}, 23 - { .name = "sched-smt", .has_arg = optional_argument, .flag = NULL, .val = 's'}, 24 { }, 25 }; 26
··· 18 #include "helpers/bitmask.h" 19 20 static struct option set_opts[] = { 21 + { .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'}, 22 + { .name = "sched-mc", .has_arg = required_argument, .flag = NULL, .val = 'm'}, 23 + { .name = "sched-smt", .has_arg = required_argument, .flag = NULL, .val = 's'}, 24 { }, 25 }; 26
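The cpupower change above switches the long options from optional_argument to required_argument: these switches always take a value, and with optional_argument getopt_long() only fills optarg for the attached form (--perf-bias=5), leaving a detached value unparsed. A minimal standalone example of the required_argument behaviour:

    #include <getopt.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        static const struct option opts[] = {
            { .name = "perf-bias", .has_arg = required_argument,
              .flag = NULL, .val = 'b' },
            { 0 },
        };
        int c;

        while ((c = getopt_long(argc, argv, "b:", opts, NULL)) != -1) {
            if (c == 'b')
                printf("perf-bias = %s\n", optarg);     /* "-b 5" and "--perf-bias 5" both land here */
        }
        return 0;
    }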