Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'nand/for-4.11' of github.com:linux-nand/linux

From Boris:

"""
This pull request contains minor fixes/improvements on existing drivers:
- sunxi: avoid busy-waiting for NAND events
- ifc: fix ECC handling on IFC v1.0
- OX820: add explicit dependency on ARCH_OXNAS in Kconfig
- core: add a new manufacturer ID and fix a kernel-doc warning
- fsmc: kill pdata support
- lpc32xx_slc: remove unneeded NULL check
"""

Conflicts:
include/linux/mtd/nand.h
[Brian: trivial conflict in the comment section]

+3498 -2222
+3
Documentation/devicetree/bindings/interrupt-controller/snps,archs-idu-intc.txt
··· 15 15 Second cell specifies the irq distribution mode to cores 16 16 0=Round Robin; 1=cpu0, 2=cpu1, 4=cpu2, 8=cpu3 17 17 18 + The second cell in interrupts property is deprecated and may be ignored by 19 + the kernel. 20 + 18 21 intc accessed via the special ARC AUX register interface, hence "reg" property 19 22 is not specified. 20 23
+1 -1
Documentation/devicetree/bindings/net/mediatek-net.txt
··· 7 7 * Ethernet controller node 8 8 9 9 Required properties: 10 - - compatible: Should be "mediatek,mt7623-eth" 10 + - compatible: Should be "mediatek,mt2701-eth" 11 11 - reg: Address and length of the register set for the device 12 12 - interrupts: Should contain the three frame engines interrupts in numeric 13 13 order. These are fe_int0, fe_int1 and fe_int2.
+3 -2
Documentation/devicetree/bindings/net/phy.txt
··· 19 19 specifications. If neither of these are specified, the default is to 20 20 assume clause 22. 21 21 22 - If the phy's identifier is known then the list may contain an entry 23 - of the form: "ethernet-phy-idAAAA.BBBB" where 22 + If the PHY reports an incorrect ID (or none at all) then the 23 + "compatible" list may contain an entry with the correct PHY ID in the 24 + form: "ethernet-phy-idAAAA.BBBB" where 24 25 AAAA - The value of the 16 bit Phy Identifier 1 register as 25 26 4 hex digits. This is the chip vendor OUI bits 3:18 26 27 BBBB - The value of the 16 bit Phy Identifier 2 register as
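The binding text above fully determines how the two identifier registers become a compatible string; a minimal user-space sketch (the helper name is invented for illustration, it is not part of the binding):

```c
#include <stdio.h>

/* Hypothetical helper: build the "ethernet-phy-idAAAA.BBBB" compatible
 * string from the two 16-bit PHY Identifier registers (MII regs 2 and 3),
 * each printed as 4 hex digits per the binding. */
static const char *phy_id_compatible(unsigned int id1, unsigned int id2)
{
    static char buf[sizeof("ethernet-phy-idFFFF.FFFF")];

    snprintf(buf, sizeof(buf), "ethernet-phy-id%04x.%04x",
             id1 & 0xffffu, id2 & 0xffffu);
    return buf;
}
```

This is useful when writing a device tree for a PHY that misreports its ID, which is exactly the case the reworded binding text describes.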
+3 -2
Documentation/filesystems/proc.txt
··· 212 212 snapshot of a moment, you can see /proc/<pid>/smaps file and scan page table. 213 213 It's slow but very precise. 214 214 215 - Table 1-2: Contents of the status files (as of 4.1) 215 + Table 1-2: Contents of the status files (as of 4.8) 216 216 .............................................................................. 217 217 Field Content 218 218 Name filename of the executable 219 + Umask file mode creation mask 219 220 State state (R is running, S is sleeping, D is sleeping 220 221 in an uninterruptible wait, Z is zombie, 221 222 T is traced or stopped) ··· 227 226 TracerPid PID of process tracing this process (0 if not) 228 227 Uid Real, effective, saved set, and file system UIDs 229 228 Gid Real, effective, saved set, and file system GIDs 230 - Umask file mode creation mask 231 229 FDSize number of file descriptor slots currently allocated 232 230 Groups supplementary group list 233 231 NStgid descendant namespace thread group ID hierarchy ··· 236 236 VmPeak peak virtual memory size 237 237 VmSize total program size 238 238 VmLck locked memory size 239 + VmPin pinned memory size 239 240 VmHWM peak resident set size ("high water mark") 240 241 VmRSS size of memory portions. It contains the three 241 242 following parts (VmRSS = RssAnon + RssFile + RssShmem)
+1 -3
Documentation/power/states.txt
··· 35 35 The default suspend mode (ie. the one to be used without writing anything into 36 36 /sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or 37 37 "s2idle", but it can be overridden by the value of the "mem_sleep_default" 38 - parameter in the kernel command line. On some ACPI-based systems, depending on 39 - the information in the FADT, the default may be "s2idle" even if Suspend-To-RAM 40 - is supported. 38 + parameter in the kernel command line. 41 39 42 40 The properties of all of the sleep states are described below. 43 41
+18 -5
MAINTAINERS
··· 3567 3567 F: include/uapi/rdma/cxgb3-abi.h 3568 3568 3569 3569 CXGB4 ETHERNET DRIVER (CXGB4) 3570 - M: Hariprasad S <hariprasad@chelsio.com> 3570 + M: Ganesh Goudar <ganeshgr@chelsio.com> 3571 3571 L: netdev@vger.kernel.org 3572 3572 W: http://www.chelsio.com 3573 3573 S: Supported ··· 4100 4100 4101 4101 DRM DRIVER FOR BOCHS VIRTUAL GPU 4102 4102 M: Gerd Hoffmann <kraxel@redhat.com> 4103 - S: Odd Fixes 4103 + L: virtualization@lists.linux-foundation.org 4104 + T: git git://git.kraxel.org/linux drm-qemu 4105 + S: Maintained 4104 4106 F: drivers/gpu/drm/bochs/ 4105 4107 4106 4108 DRM DRIVER FOR QEMU'S CIRRUS DEVICE 4107 4109 M: Dave Airlie <airlied@redhat.com> 4108 - S: Odd Fixes 4110 + M: Gerd Hoffmann <kraxel@redhat.com> 4111 + L: virtualization@lists.linux-foundation.org 4112 + T: git git://git.kraxel.org/linux drm-qemu 4113 + S: Obsolete 4114 + W: https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 4109 4115 F: drivers/gpu/drm/cirrus/ 4110 4116 4111 4117 RADEON and AMDGPU DRM DRIVERS ··· 4153 4147 INTEL GVT-g DRIVERS (Intel GPU Virtualization) 4154 4148 M: Zhenyu Wang <zhenyuw@linux.intel.com> 4155 4149 M: Zhi Wang <zhi.a.wang@intel.com> 4156 - L: igvt-g-dev@lists.01.org 4150 + L: intel-gvt-dev@lists.freedesktop.org 4157 4151 L: intel-gfx@lists.freedesktop.org 4158 4152 W: https://01.org/igvt-g 4159 4153 T: git https://github.com/01org/gvt-linux.git ··· 4304 4298 4305 4299 DRM DRIVER FOR QXL VIRTUAL GPU 4306 4300 M: Dave Airlie <airlied@redhat.com> 4307 - S: Odd Fixes 4301 + M: Gerd Hoffmann <kraxel@redhat.com> 4302 + L: virtualization@lists.linux-foundation.org 4303 + T: git git://git.kraxel.org/linux drm-qemu 4304 + S: Maintained 4308 4305 F: drivers/gpu/drm/qxl/ 4309 4306 F: include/uapi/drm/qxl_drm.h 4310 4307 ··· 13101 13092 M: Gerd Hoffmann <kraxel@redhat.com> 13102 13093 L: dri-devel@lists.freedesktop.org 13103 13094 L: virtualization@lists.linux-foundation.org 13095 + T: git git://git.kraxel.org/linux drm-qemu 13104 13096 S: Maintained 13105 13097 F: drivers/gpu/drm/virtio/ 13106 13098 F: include/uapi/linux/virtio_gpu.h ··· 13453 13443 13454 13444 X86 PLATFORM DRIVERS 13455 13445 M: Darren Hart <dvhart@infradead.org> 13446 + M: Andy Shevchenko <andy@infradead.org> 13456 13447 L: platform-driver-x86@vger.kernel.org 13457 13448 T: git git://git.infradead.org/users/dvhart/linux-platform-drivers-x86.git 13458 13449 S: Maintained ··· 13625 13614 13626 13615 ZBUD COMPRESSED PAGE ALLOCATOR 13627 13616 M: Seth Jennings <sjenning@redhat.com> 13617 + M: Dan Streetman <ddstreet@ieee.org> 13628 13618 L: linux-mm@kvack.org 13629 13619 S: Maintained 13630 13620 F: mm/zbud.c ··· 13681 13669 13682 13670 ZSWAP COMPRESSED SWAP CACHING 13683 13671 M: Seth Jennings <sjenning@redhat.com> 13672 + M: Dan Streetman <ddstreet@ieee.org> 13684 13673 L: linux-mm@kvack.org 13685 13674 S: Maintained 13686 13675 F: mm/zswap.c
+2 -2
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 5 - NAME = Anniversary Edition 4 + EXTRAVERSION = -rc6 5 + NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION* 8 8 # To see a list of typical targets execute "make help"
+3 -1
arch/arc/include/asm/delay.h
··· 26 26 " lp 1f \n" 27 27 " nop \n" 28 28 "1: \n" 29 - : : "r"(loops)); 29 + : 30 + : "r"(loops) 31 + : "lp_count"); 30 32 } 31 33 32 34 extern void __bad_udelay(void);
+7 -7
arch/arc/kernel/head.S
··· 71 71 GET_CPU_ID r5 72 72 cmp r5, 0 73 73 mov.nz r0, r5 74 - #ifdef CONFIG_ARC_SMP_HALT_ON_RESET 75 - ; Non-Master can proceed as system would be booted sufficiently 76 - jnz first_lines_of_secondary 77 - #else 74 + bz .Lmaster_proceed 75 + 78 76 ; Non-Masters wait for Master to boot enough and bring them up 79 - jnz arc_platform_smp_wait_to_boot 80 - #endif 81 - ; Master falls thru 77 + ; when they resume, tail-call to entry point 78 + mov blink, @first_lines_of_secondary 79 + j arc_platform_smp_wait_to_boot 80 + 81 + .Lmaster_proceed: 82 82 #endif 83 83 84 84 ; Clear BSS before updating any globals
+23 -32
arch/arc/kernel/mcip.c
··· 93 93 READ_BCR(ARC_REG_MCIP_BCR, mp); 94 94 95 95 sprintf(smp_cpuinfo_buf, 96 - "Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s%s\n", 96 + "Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s\n", 97 97 mp.ver, mp.num_cores, 98 98 IS_AVAIL1(mp.ipi, "IPI "), 99 99 IS_AVAIL1(mp.idu, "IDU "), 100 - IS_AVAIL1(mp.llm, "LLM "), 101 100 IS_AVAIL1(mp.dbg, "DEBUG "), 102 101 IS_AVAIL1(mp.gfrc, "GFRC")); 103 102 ··· 174 175 raw_spin_unlock_irqrestore(&mcip_lock, flags); 175 176 } 176 177 177 - #ifdef CONFIG_SMP 178 178 static int 179 179 idu_irq_set_affinity(struct irq_data *data, const struct cpumask *cpumask, 180 180 bool force) ··· 203 205 204 206 return IRQ_SET_MASK_OK; 205 207 } 206 - #endif 208 + 209 + static void idu_irq_enable(struct irq_data *data) 210 + { 211 + /* 212 + * By default send all common interrupts to all available online CPUs. 213 + * The affinity of common interrupts in IDU must be set manually since 214 + * in some cases the kernel will not call irq_set_affinity() by itself: 215 + * 1. When the kernel is not configured with support of SMP. 216 + * 2. When the kernel is configured with support of SMP but upper 217 + * interrupt controllers do not support setting of the affinity 218 + * and cannot propagate it to IDU. 219 + */ 220 + idu_irq_set_affinity(data, cpu_online_mask, false); 221 + idu_irq_unmask(data); 222 + } 207 223 208 224 static struct irq_chip idu_irq_chip = { 209 225 .name = "MCIP IDU Intc", 210 226 .irq_mask = idu_irq_mask, 211 227 .irq_unmask = idu_irq_unmask, 228 + .irq_enable = idu_irq_enable, 212 229 #ifdef CONFIG_SMP 213 230 .irq_set_affinity = idu_irq_set_affinity, 214 231 #endif ··· 256 243 const u32 *intspec, unsigned int intsize, 257 244 irq_hw_number_t *out_hwirq, unsigned int *out_type) 258 245 { 259 - irq_hw_number_t hwirq = *out_hwirq = intspec[0]; 260 - int distri = intspec[1]; 261 - unsigned long flags; 262 - 246 + /* 247 + * Ignore value of interrupt distribution mode for common interrupts in 248 + * IDU which resides in intspec[1] since setting an affinity using value 249 + * from Device Tree is deprecated in ARC. 250 + */ 251 + *out_hwirq = intspec[0]; 263 252 *out_type = IRQ_TYPE_NONE; 264 - 265 - /* XXX: validate distribution scheme again online cpu mask */ 266 - if (distri == 0) { 267 - /* 0 - Round Robin to all cpus, otherwise 1 bit per core */ 268 - raw_spin_lock_irqsave(&mcip_lock, flags); 269 - idu_set_dest(hwirq, BIT(num_online_cpus()) - 1); 270 - idu_set_mode(hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_RR); 271 - raw_spin_unlock_irqrestore(&mcip_lock, flags); 272 - } else { 273 - /* 274 - * DEST based distribution for Level Triggered intr can only 275 - * have 1 CPU, so generalize it to always contain 1 cpu 276 - */ 277 - int cpu = ffs(distri); 278 - 279 - if (cpu != fls(distri)) 280 - pr_warn("IDU irq %lx distri mode set to cpu %x\n", 281 - hwirq, cpu); 282 - 283 - raw_spin_lock_irqsave(&mcip_lock, flags); 284 - idu_set_dest(hwirq, cpu); 285 - idu_set_mode(hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_DEST); 286 - raw_spin_unlock_irqrestore(&mcip_lock, flags); 287 - } 288 253 289 254 return 0; 290 255 }
+20 -5
arch/arc/kernel/smp.c
··· 90 90 */ 91 91 static volatile int wake_flag; 92 92 93 + #ifdef CONFIG_ISA_ARCOMPACT 94 + 95 + #define __boot_read(f) f 96 + #define __boot_write(f, v) f = v 97 + 98 + #else 99 + 100 + #define __boot_read(f) arc_read_uncached_32(&f) 101 + #define __boot_write(f, v) arc_write_uncached_32(&f, v) 102 + 103 + #endif 104 + 93 105 static void arc_default_smp_cpu_kick(int cpu, unsigned long pc) 94 106 { 95 107 BUG_ON(cpu == 0); 96 - wake_flag = cpu; 108 + 109 + __boot_write(wake_flag, cpu); 97 110 } 98 111 99 112 void arc_platform_smp_wait_to_boot(int cpu) 100 113 { 101 - while (wake_flag != cpu) 114 + /* for halt-on-reset, we've waited already */ 115 + if (IS_ENABLED(CONFIG_ARC_SMP_HALT_ON_RESET)) 116 + return; 117 + 118 + while (__boot_read(wake_flag) != cpu) 102 119 ; 103 120 104 - wake_flag = 0; 105 - __asm__ __volatile__("j @first_lines_of_secondary \n"); 121 + __boot_write(wake_flag, 0); 106 122 } 107 - 108 123 109 124 const char *arc_platform_smp_cpuinfo(void) 110 125 {
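The protocol this hunk hardens is a simple publish/acknowledge handshake on `wake_flag`. A user-space sketch under stated assumptions (C11 atomics stand in for the per-ISA uncached accessors; the function names are illustrative):

```c
#include <stdatomic.h>

/* Master/secondary boot handshake modelled on wake_flag: the master
 * publishes the target cpu id, the secondary spins until it sees its
 * own id, then clears the flag to acknowledge. */
static atomic_int wake_flag;

static void cpu_kick(int cpu)
{
    atomic_store(&wake_flag, cpu);      /* like arc_default_smp_cpu_kick() */
}

static int wait_to_boot(int cpu)
{
    while (atomic_load(&wake_flag) != cpu)
        ;                               /* busy-wait, as the secondary does */
    atomic_store(&wake_flag, 0);        /* ack; master may kick the next cpu */
    return cpu;
}
```

The `__boot_read`/`__boot_write` wrappers in the patch exist because on ARCv2 the secondary polls before its caches are coherent, so the flag must be accessed uncached; the atomics above only model the ordering, not that property.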
+2 -1
arch/arc/kernel/unaligned.c
··· 241 241 if (state.fault) 242 242 goto fault; 243 243 244 + /* clear any remnants of delay slot */ 244 245 if (delay_mode(regs)) { 245 - regs->ret = regs->bta; 246 + regs->ret = regs->bta & ~1U; 246 247 regs->status32 &= ~STATUS_DE_MASK; 247 248 } else { 248 249 regs->ret += state.instr_len;
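The behavioural change in this hunk is masking bit 0 off the branch target (`bta & ~1U` in mainline). As plain arithmetic:

```c
#include <stdint.h>

/* ~1U is ...11111110, so ANDing with it forces the lowest bit of the
 * branch-target address to zero and leaves all other bits untouched. */
static uint32_t clear_low_bit(uint32_t bta)
{
    return bta & ~1u;
}
```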
+7 -1
arch/arm64/kernel/topology.c
··· 11 11 * for more details. 12 12 */ 13 13 14 + #include <linux/acpi.h> 14 15 #include <linux/cpu.h> 15 16 #include <linux/cpumask.h> 16 17 #include <linux/init.h> ··· 210 209 211 210 static int __init register_cpufreq_notifier(void) 212 211 { 213 - if (cap_parsing_failed) 212 + /* 213 + * on ACPI-based systems we need to use the default cpu capacity 214 + * until we have the necessary code to parse the cpu capacity, so 215 + * skip registering cpufreq notifier. 216 + */ 217 + if (!acpi_disabled || cap_parsing_failed) 214 218 return -EINVAL; 215 219 216 220 if (!alloc_cpumask_var(&cpus_to_visit, GFP_KERNEL)) {
+34 -1
arch/frv/include/asm/atomic.h
··· 139 139 #define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0) 140 140 #define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) 141 141 #define atomic64_inc_and_test(v) (atomic64_inc_return((v)) == 0) 142 - 142 + #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) 143 143 144 144 #define atomic_cmpxchg(v, old, new) (cmpxchg(&(v)->counter, old, new)) 145 145 #define atomic_xchg(v, new) (xchg(&(v)->counter, new)) ··· 159 159 c = old; 160 160 } 161 161 return c; 162 + } 163 + 164 + static inline int atomic64_add_unless(atomic64_t *v, long long i, long long u) 165 + { 166 + long long c, old; 167 + 168 + c = atomic64_read(v); 169 + for (;;) { 170 + if (unlikely(c == u)) 171 + break; 172 + old = atomic64_cmpxchg(v, c, c + i); 173 + if (likely(old == c)) 174 + break; 175 + c = old; 176 + } 177 + return c != u; 178 + } 179 + 180 + static inline long long atomic64_dec_if_positive(atomic64_t *v) 181 + { 182 + long long c, old, dec; 183 + 184 + c = atomic64_read(v); 185 + for (;;) { 186 + dec = c - 1; 187 + if (unlikely(dec < 0)) 188 + break; 189 + old = atomic64_cmpxchg((v), c, dec); 190 + if (likely(old == c)) 191 + break; 192 + c = old; 193 + } 194 + return dec; 162 195 } 163 196 164 197 #define ATOMIC_OP(op) \
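Both new frv helpers are instances of the classic compare-and-swap retry loop. A portable C11 sketch of the `atomic64_add_unless` pattern (illustrative, not the kernel API):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Add @i to @v unless it currently holds @u; returns true if the add
 * was performed. On CAS failure, c is refreshed with the value that
 * beat us and the loop re-checks against @u, as in the hunk above. */
static bool add_unless(_Atomic long long *v, long long i, long long u)
{
    long long c = atomic_load(v);

    for (;;) {
        if (c == u)
            return false;
        if (atomic_compare_exchange_weak(v, &c, c + i))
            return true;
        /* c was updated by the failed CAS; retry */
    }
}
```

`atomic64_dec_if_positive` in the hunk is the same loop with the predicate `dec < 0` in place of `c == u`.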
+1 -1
arch/mn10300/include/asm/switch_to.h
··· 16 16 struct task_struct; 17 17 struct thread_struct; 18 18 19 - #if !defined(CONFIG_LAZY_SAVE_FPU) 19 + #if defined(CONFIG_FPU) && !defined(CONFIG_LAZY_SAVE_FPU) 20 20 struct fpu_state_struct; 21 21 extern asmlinkage void fpu_save(struct fpu_state_struct *); 22 22 #define switch_fpu(prev, next) \
+7 -1
arch/parisc/include/asm/bitops.h
··· 6 6 #endif 7 7 8 8 #include <linux/compiler.h> 9 - #include <asm/types.h> /* for BITS_PER_LONG/SHIFT_PER_LONG */ 9 + #include <asm/types.h> 10 10 #include <asm/byteorder.h> 11 11 #include <asm/barrier.h> 12 12 #include <linux/atomic.h> ··· 16 16 * for a detailed description of the functions please refer 17 17 * to include/asm-i386/bitops.h or kerneldoc 18 18 */ 19 + 20 + #if __BITS_PER_LONG == 64 21 + #define SHIFT_PER_LONG 6 22 + #else 23 + #define SHIFT_PER_LONG 5 24 + #endif 19 25 20 26 #define CHOP_SHIFTCOUNT(x) (((unsigned long) (x)) & (BITS_PER_LONG - 1)) 21 27
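`SHIFT_PER_LONG` is simply log2(`BITS_PER_LONG`); together with `CHOP_SHIFTCOUNT` it splits a bit number into a word index and an in-word offset. A standalone sketch (the macro names here are local stand-ins for the kernel ones):

```c
#include <limits.h>

#define XBITS_PER_LONG  ((unsigned long)(CHAR_BIT * sizeof(unsigned long)))
#define XSHIFT_PER_LONG (XBITS_PER_LONG == 64 ? 6 : 5)  /* log2 of the above */

/* Index of the long that holds a given bit number... */
static unsigned long word_index(unsigned long bit)
{
    return bit >> XSHIFT_PER_LONG;
}

/* ...and the bit position inside that long (what CHOP_SHIFTCOUNT does). */
static unsigned long bit_offset(unsigned long bit)
{
    return bit & (XBITS_PER_LONG - 1);
}
```

Moving the definition out of the uapi header (next hunk) makes sense because the shift count is a kernel implementation detail, not part of the user-space ABI.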
-2
arch/parisc/include/uapi/asm/bitsperlong.h
··· 3 3 4 4 #if defined(__LP64__) 5 5 #define __BITS_PER_LONG 64 6 - #define SHIFT_PER_LONG 6 7 6 #else 8 7 #define __BITS_PER_LONG 32 9 - #define SHIFT_PER_LONG 5 10 8 #endif 11 9 12 10 #include <asm-generic/bitsperlong.h>
+3 -2
arch/parisc/include/uapi/asm/swab.h
··· 1 1 #ifndef _PARISC_SWAB_H 2 2 #define _PARISC_SWAB_H 3 3 4 + #include <asm/bitsperlong.h> 4 5 #include <linux/types.h> 5 6 #include <linux/compiler.h> 6 7 ··· 39 38 } 40 39 #define __arch_swab32 __arch_swab32 41 40 42 - #if BITS_PER_LONG > 32 41 + #if __BITS_PER_LONG > 32 43 42 /* 44 43 ** From "PA-RISC 2.0 Architecture", HP Professional Books. 45 44 ** See Appendix I page 8 , "Endian Byte Swapping". ··· 62 61 return x; 63 62 } 64 63 #define __arch_swab64 __arch_swab64 65 - #endif /* BITS_PER_LONG > 32 */ 64 + #endif /* __BITS_PER_LONG > 32 */ 66 65 67 66 #endif /* _PARISC_SWAB_H */
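For reference, the operation `__arch_swab32` implements with PA-RISC shift/rotate sequences is the plain 32-bit byte reversal:

```c
#include <stdint.h>

/* Portable equivalent of a 32-bit byte swap: each byte lane is masked
 * out and moved to the mirrored position. */
static uint32_t swab32(uint32_t x)
{
    return (x >> 24) |
           ((x >> 8) & 0x0000ff00u) |
           ((x << 8) & 0x00ff0000u) |
           (x << 24);
}
```

The hunk's real change is using `__BITS_PER_LONG` (always defined by the uapi header it now includes) rather than `BITS_PER_LONG` to gate the 64-bit variant.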
+8
arch/s390/kernel/ptrace.c
··· 963 963 if (target == current) 964 964 save_fpu_regs(); 965 965 966 + if (MACHINE_HAS_VX) 967 + convert_vx_to_fp(fprs, target->thread.fpu.vxrs); 968 + else 969 + memcpy(&fprs, target->thread.fpu.fprs, sizeof(fprs)); 970 + 966 971 /* If setting FPC, must validate it first. */ 967 972 if (count > 0 && pos < offsetof(s390_fp_regs, fprs)) { 968 973 u32 ufpc[2] = { target->thread.fpu.fpc, 0 }; ··· 1071 1066 return -ENODEV; 1072 1067 if (target == current) 1073 1068 save_fpu_regs(); 1069 + 1070 + for (i = 0; i < __NUM_VXRS_LOW; i++) 1071 + vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1); 1074 1072 1075 1073 rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vxrs, 0, -1); 1076 1074 if (rc == 0)
+4 -3
arch/s390/mm/pgtable.c
··· 202 202 return pgste; 203 203 } 204 204 205 - static inline void ptep_xchg_commit(struct mm_struct *mm, 205 + static inline pte_t ptep_xchg_commit(struct mm_struct *mm, 206 206 unsigned long addr, pte_t *ptep, 207 207 pgste_t pgste, pte_t old, pte_t new) 208 208 { ··· 220 220 } else { 221 221 *ptep = new; 222 222 } 223 + return old; 223 224 } 224 225 225 226 pte_t ptep_xchg_direct(struct mm_struct *mm, unsigned long addr, ··· 232 231 preempt_disable(); 233 232 pgste = ptep_xchg_start(mm, addr, ptep); 234 233 old = ptep_flush_direct(mm, addr, ptep); 235 - ptep_xchg_commit(mm, addr, ptep, pgste, old, new); 234 + old = ptep_xchg_commit(mm, addr, ptep, pgste, old, new); 236 235 preempt_enable(); 237 236 return old; 238 237 } ··· 247 246 preempt_disable(); 248 247 pgste = ptep_xchg_start(mm, addr, ptep); 249 248 old = ptep_flush_lazy(mm, addr, ptep); 250 - ptep_xchg_commit(mm, addr, ptep, pgste, old, new); 249 + old = ptep_xchg_commit(mm, addr, ptep, pgste, old, new); 251 250 preempt_enable(); 252 251 return old; 253 252 }
+1 -1
arch/tile/kernel/ptrace.c
··· 111 111 const void *kbuf, const void __user *ubuf) 112 112 { 113 113 int ret; 114 - struct pt_regs regs; 114 + struct pt_regs regs = *task_pt_regs(target); 115 115 116 116 ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0, 117 117 sizeof(regs));
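The bug fixed here is a general pattern: a regset write may be partial, so the scratch `pt_regs` must be seeded from the task's live registers or the untouched fields are stack garbage. A toy illustration of the corrected pattern (all names invented for the sketch):

```c
/* Toy stand-in for pt_regs with just two fields. */
struct toy_regs {
    long pc;
    long sp;
};

/* A partial write updates only pc, like a short user_regset_copyin(). */
static void partial_write(struct toy_regs *regs, long new_pc)
{
    regs->pc = new_pc;
}

/* The fix: start from the current registers, so fields the caller does
 * not write survive unchanged instead of being uninitialized. */
static struct toy_regs setregs(const struct toy_regs *current_regs, long new_pc)
{
    struct toy_regs regs = *current_regs;   /* was: uninitialized stack copy */

    partial_write(&regs, new_pc);
    return regs;
}
```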
+2 -7
drivers/acpi/acpica/tbdata.c
··· 852 852 853 853 ACPI_FUNCTION_TRACE(tb_install_and_load_table); 854 854 855 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 856 - 857 855 /* Install the table and load it into the namespace */ 858 856 859 857 status = acpi_tb_install_standard_table(address, flags, TRUE, 860 858 override, &i); 861 859 if (ACPI_FAILURE(status)) { 862 - goto unlock_and_exit; 860 + goto exit; 863 861 } 864 862 865 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 866 863 status = acpi_tb_load_table(i, acpi_gbl_root_node); 867 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 868 864 869 - unlock_and_exit: 865 + exit: 870 866 *table_index = i; 871 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 872 867 return_ACPI_STATUS(status); 873 868 } 874 869
+15 -2
drivers/acpi/acpica/tbinstal.c
··· 217 217 goto release_and_exit; 218 218 } 219 219 220 + /* Acquire the table lock */ 221 + 222 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 223 + 220 224 if (reload) { 221 225 /* 222 226 * Validate the incoming table signature. ··· 248 244 new_table_desc.signature.integer)); 249 245 250 246 status = AE_BAD_SIGNATURE; 251 - goto release_and_exit; 247 + goto unlock_and_exit; 252 248 } 253 249 254 250 /* Check if table is already registered */ ··· 283 279 /* Table is still loaded, this is an error */ 284 280 285 281 status = AE_ALREADY_EXISTS; 286 - goto release_and_exit; 282 + goto unlock_and_exit; 287 283 } else { 288 284 /* 289 285 * Table was unloaded, allow it to be reloaded. ··· 294 290 * indicate the re-installation. 295 291 */ 296 292 acpi_tb_uninstall_table(&new_table_desc); 293 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 297 294 *table_index = i; 298 295 return_ACPI_STATUS(AE_OK); 299 296 } ··· 308 303 309 304 /* Invoke table handler if present */ 310 305 306 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 311 307 if (acpi_gbl_table_handler) { 312 308 (void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_INSTALL, 313 309 new_table_desc.pointer, 314 310 acpi_gbl_table_handler_context); 315 311 } 312 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 313 + 314 + unlock_and_exit: 315 + 316 + /* Release the table lock */ 317 + 318 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 316 319 317 320 release_and_exit: 318 321
-8
drivers/acpi/sleep.c
··· 674 674 if (acpi_sleep_state_supported(i)) 675 675 sleep_states[i] = 1; 676 676 677 - /* 678 - * Use suspend-to-idle by default if ACPI_FADT_LOW_POWER_S0 is set and 679 - * the default suspend mode was not selected from the command line. 680 - */ 681 - if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0 && 682 - mem_sleep_default > PM_SUSPEND_MEM) 683 - mem_sleep_default = PM_SUSPEND_FREEZE; 684 - 685 677 suspend_set_ops(old_suspend_ordering ? 686 678 &acpi_suspend_ops_old : &acpi_suspend_ops); 687 679 freeze_set_ops(&acpi_freeze_ops);
-11
drivers/acpi/video_detect.c
··· 305 305 DMI_MATCH(DMI_PRODUCT_NAME, "Dell System XPS L702X"), 306 306 }, 307 307 }, 308 - { 309 - /* https://bugzilla.redhat.com/show_bug.cgi?id=1204476 */ 310 - /* https://bugs.launchpad.net/ubuntu/+source/linux-lts-trusty/+bug/1416940 */ 311 - .callback = video_detect_force_native, 312 - .ident = "HP Pavilion dv6", 313 - .matches = { 314 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 315 - DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv6 Notebook PC"), 316 - }, 317 - }, 318 - 319 308 { }, 320 309 }; 321 310
+2 -2
drivers/base/memory.c
··· 408 408 sprintf(buf, "%s", zone->name); 409 409 410 410 /* MMOP_ONLINE_KERNEL */ 411 - zone_shift = zone_can_shift(start_pfn, nr_pages, ZONE_NORMAL); 411 + zone_can_shift(start_pfn, nr_pages, ZONE_NORMAL, &zone_shift); 412 412 if (zone_shift) { 413 413 strcat(buf, " "); 414 414 strcat(buf, (zone + zone_shift)->name); 415 415 } 416 416 417 417 /* MMOP_ONLINE_MOVABLE */ 418 - zone_shift = zone_can_shift(start_pfn, nr_pages, ZONE_MOVABLE); 418 + zone_can_shift(start_pfn, nr_pages, ZONE_MOVABLE, &zone_shift); 419 419 if (zone_shift) { 420 420 strcat(buf, " "); 421 421 strcat(buf, (zone + zone_shift)->name);
+14 -8
drivers/block/xen-blkfront.c
··· 197 197 /* Number of pages per ring buffer. */ 198 198 unsigned int nr_ring_pages; 199 199 struct request_queue *rq; 200 - unsigned int feature_flush; 201 - unsigned int feature_fua; 200 + unsigned int feature_flush:1; 201 + unsigned int feature_fua:1; 202 202 unsigned int feature_discard:1; 203 203 unsigned int feature_secdiscard:1; 204 + unsigned int feature_persistent:1; 204 205 unsigned int discard_granularity; 205 206 unsigned int discard_alignment; 206 - unsigned int feature_persistent:1; 207 207 /* Number of 4KB segments handled */ 208 208 unsigned int max_indirect_segments; 209 209 int is_ready; ··· 2223 2223 } 2224 2224 else 2225 2225 grants = info->max_indirect_segments; 2226 - psegs = grants / GRANTS_PER_PSEG; 2226 + psegs = DIV_ROUND_UP(grants, GRANTS_PER_PSEG); 2227 2227 2228 2228 err = fill_grant_buffer(rinfo, 2229 2229 (grants + INDIRECT_GREFS(grants)) * BLK_RING_SIZE(info)); ··· 2323 2323 blkfront_setup_discard(info); 2324 2324 2325 2325 info->feature_persistent = 2326 - xenbus_read_unsigned(info->xbdev->otherend, 2327 - "feature-persistent", 0); 2326 + !!xenbus_read_unsigned(info->xbdev->otherend, 2327 + "feature-persistent", 0); 2328 2328 2329 2329 indirect_segments = xenbus_read_unsigned(info->xbdev->otherend, 2330 2330 "feature-max-indirect-segments", 0); 2331 - info->max_indirect_segments = min(indirect_segments, 2332 - xen_blkif_max_segments); 2331 + if (indirect_segments > xen_blkif_max_segments) 2332 + indirect_segments = xen_blkif_max_segments; 2333 + if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST) 2334 + indirect_segments = 0; 2335 + info->max_indirect_segments = indirect_segments; 2333 2336 } 2334 2337 2335 2338 /* ··· 2654 2651 2655 2652 if (!xen_domain()) 2656 2653 return -ENODEV; 2654 + 2655 + if (xen_blkif_max_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST) 2656 + xen_blkif_max_segments = BLKIF_MAX_SEGMENTS_PER_REQUEST; 2657 2657 2658 2658 if (xen_blkif_max_ring_order > XENBUS_MAX_RING_GRANT_ORDER) { 2659 2659 pr_info("Invalid max_ring_order (%d), will use default max: %d.\n",
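The `DIV_ROUND_UP` change above uses the standard kernel macro from include/linux/kernel.h; the point is that truncating division under-counts segments whenever the grant count is not a multiple of `GRANTS_PER_PSEG`:

```c
/* Same definition as the kernel's DIV_ROUND_UP in include/linux/kernel.h. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* With e.g. 2 grants per persistent segment, 5 grants need 3 segments;
 * the old truncating division reserved only 2. */
static unsigned int psegs_for(unsigned int grants, unsigned int grants_per_pseg)
{
    return DIV_ROUND_UP(grants, grants_per_pseg);
}
```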
+13 -1
drivers/cpufreq/intel_pstate.c
··· 2005 2005 limits = &performance_limits; 2006 2006 perf_limits = limits; 2007 2007 } 2008 - if (policy->max >= policy->cpuinfo.max_freq) { 2008 + if (policy->max >= policy->cpuinfo.max_freq && 2009 + !limits->no_turbo) { 2009 2010 pr_debug("set performance\n"); 2010 2011 intel_pstate_set_performance_limits(perf_limits); 2011 2012 goto out; ··· 2047 2046 if (policy->policy != CPUFREQ_POLICY_POWERSAVE && 2048 2047 policy->policy != CPUFREQ_POLICY_PERFORMANCE) 2049 2048 return -EINVAL; 2049 + 2050 + /* When per-CPU limits are used, sysfs limits are not used */ 2051 + if (!per_cpu_limits) { 2052 + unsigned int max_freq, min_freq; 2053 + 2054 + max_freq = policy->cpuinfo.max_freq * 2055 + limits->max_sysfs_pct / 100; 2056 + min_freq = policy->cpuinfo.max_freq * 2057 + limits->min_sysfs_pct / 100; 2058 + cpufreq_verify_within_limits(policy, min_freq, max_freq); 2059 + } 2050 2060 2051 2061 return 0; 2052 2062 }
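The added verify step derives a frequency window from the sysfs percentage limits and clamps the policy to it; the arithmetic is simply a percent-of-max computation (values below are made up for illustration):

```c
/* Percent-of-max in integer kHz, as cpufreq uses: with a 3.0 GHz
 * cpuinfo.max_freq and min/max sysfs limits of 40%/80%, the policy is
 * clamped to [1.2 GHz, 2.4 GHz]. */
static unsigned int pct_of_max(unsigned int max_freq_khz, unsigned int pct)
{
    return max_freq_khz * pct / 100;
}
```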
+9 -9
drivers/gpio/gpiolib.c
··· 1723 1723 } 1724 1724 1725 1725 /** 1726 - * _gpiochip_irqchip_add() - adds an irqchip to a gpiochip 1726 + * gpiochip_irqchip_add_key() - adds an irqchip to a gpiochip 1727 1727 * @gpiochip: the gpiochip to add the irqchip to 1728 1728 * @irqchip: the irqchip to add to the gpiochip 1729 1729 * @first_irq: if not dynamically assigned, the base (first) IRQ to ··· 1749 1749 * the pins on the gpiochip can generate a unique IRQ. Everything else 1750 1750 * need to be open coded. 1751 1751 */ 1752 - int _gpiochip_irqchip_add(struct gpio_chip *gpiochip, 1753 - struct irq_chip *irqchip, 1754 - unsigned int first_irq, 1755 - irq_flow_handler_t handler, 1756 - unsigned int type, 1757 - bool nested, 1758 - struct lock_class_key *lock_key) 1752 + int gpiochip_irqchip_add_key(struct gpio_chip *gpiochip, 1753 + struct irq_chip *irqchip, 1754 + unsigned int first_irq, 1755 + irq_flow_handler_t handler, 1756 + unsigned int type, 1757 + bool nested, 1758 + struct lock_class_key *lock_key) 1759 1759 { 1760 1760 struct device_node *of_node; 1761 1761 bool irq_base_set = false; ··· 1840 1840 1841 1841 return 0; 1842 1842 } 1843 - EXPORT_SYMBOL_GPL(_gpiochip_irqchip_add); 1843 + EXPORT_SYMBOL_GPL(gpiochip_irqchip_add_key); 1844 1844 1845 1845 #else /* CONFIG_GPIOLIB_IRQCHIP */ 1846 1846
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 83 83 } 84 84 break; 85 85 } 86 + 87 + if (!(*out_ring && (*out_ring)->adev)) { 88 + DRM_ERROR("Ring %d is not initialized on IP %d\n", 89 + ring, ip_type); 90 + return -EINVAL; 91 + } 92 + 86 93 return 0; 87 94 } 88 95
+7 -15
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 2512 2512 2513 2513 WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y); 2514 2514 WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin); 2515 + WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2516 + ((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1)); 2515 2517 2516 2518 return 0; 2517 2519 } ··· 2539 2537 int32_t hot_y) 2540 2538 { 2541 2539 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2542 - struct amdgpu_device *adev = crtc->dev->dev_private; 2543 2540 struct drm_gem_object *obj; 2544 2541 struct amdgpu_bo *aobj; 2545 2542 int ret; ··· 2579 2578 2580 2579 dce_v10_0_lock_cursor(crtc, true); 2581 2580 2582 - if (hot_x != amdgpu_crtc->cursor_hot_x || 2581 + if (width != amdgpu_crtc->cursor_width || 2582 + height != amdgpu_crtc->cursor_height || 2583 + hot_x != amdgpu_crtc->cursor_hot_x || 2583 2584 hot_y != amdgpu_crtc->cursor_hot_y) { 2584 2585 int x, y; 2585 2586 ··· 2590 2587 2591 2588 dce_v10_0_cursor_move_locked(crtc, x, y); 2592 2589 2593 - amdgpu_crtc->cursor_hot_x = hot_x; 2594 - amdgpu_crtc->cursor_hot_y = hot_y; 2595 - } 2596 - 2597 - if (width != amdgpu_crtc->cursor_width || 2598 - height != amdgpu_crtc->cursor_height) { 2599 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2600 - (width - 1) << 16 | (height - 1)); 2601 2590 amdgpu_crtc->cursor_width = width; 2602 2591 amdgpu_crtc->cursor_height = height; 2592 + amdgpu_crtc->cursor_hot_x = hot_x; 2593 + amdgpu_crtc->cursor_hot_y = hot_y; 2603 2594 } 2604 2595 2605 2596 dce_v10_0_show_cursor(crtc); ··· 2617 2620 static void dce_v10_0_cursor_reset(struct drm_crtc *crtc) 2618 2621 { 2619 2622 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2620 - struct amdgpu_device *adev = crtc->dev->dev_private; 2621 2623 2622 2624 if (amdgpu_crtc->cursor_bo) { 2623 2625 dce_v10_0_lock_cursor(crtc, true); 2624 2626 2625 2627 dce_v10_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x, 2626 2628 amdgpu_crtc->cursor_y); 2627 - 2628 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2629 - (amdgpu_crtc->cursor_width - 1) << 16 | 2630 - (amdgpu_crtc->cursor_height - 1)); 2631 2629 2632 2630 dce_v10_0_show_cursor(crtc); 2633 2631
+7 -15
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 2532 2532 2533 2533 WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y); 2534 2534 WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin); 2535 + WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2536 + ((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1)); 2535 2537 2536 2538 return 0; 2537 2539 } ··· 2559 2557 int32_t hot_y) 2560 2558 { 2561 2559 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2562 - struct amdgpu_device *adev = crtc->dev->dev_private; 2563 2560 struct drm_gem_object *obj; 2564 2561 struct amdgpu_bo *aobj; 2565 2562 int ret; ··· 2599 2598 2600 2599 dce_v11_0_lock_cursor(crtc, true); 2601 2600 2602 - if (hot_x != amdgpu_crtc->cursor_hot_x || 2601 + if (width != amdgpu_crtc->cursor_width || 2602 + height != amdgpu_crtc->cursor_height || 2603 + hot_x != amdgpu_crtc->cursor_hot_x || 2603 2604 hot_y != amdgpu_crtc->cursor_hot_y) { 2604 2605 int x, y; 2605 2606 ··· 2610 2607 2611 2608 dce_v11_0_cursor_move_locked(crtc, x, y); 2612 2609 2613 - amdgpu_crtc->cursor_hot_x = hot_x; 2614 - amdgpu_crtc->cursor_hot_y = hot_y; 2615 - } 2616 - 2617 - if (width != amdgpu_crtc->cursor_width || 2618 - height != amdgpu_crtc->cursor_height) { 2619 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2620 - (width - 1) << 16 | (height - 1)); 2621 2610 amdgpu_crtc->cursor_width = width; 2622 2611 amdgpu_crtc->cursor_height = height; 2612 + amdgpu_crtc->cursor_hot_x = hot_x; 2613 + amdgpu_crtc->cursor_hot_y = hot_y; 2623 2614 } 2624 2615 2625 2616 dce_v11_0_show_cursor(crtc); ··· 2637 2640 static void dce_v11_0_cursor_reset(struct drm_crtc *crtc) 2638 2641 { 2639 2642 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2640 - struct amdgpu_device *adev = crtc->dev->dev_private; 2641 2643 2642 2644 if (amdgpu_crtc->cursor_bo) { 2643 2645 dce_v11_0_lock_cursor(crtc, true); 2644 2646 2645 2647 dce_v11_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x, 2646 2648 amdgpu_crtc->cursor_y); 2647 - 2648 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2649 - (amdgpu_crtc->cursor_width - 1) << 16 | 2650 - (amdgpu_crtc->cursor_height - 1)); 2651 2649 2652 2650 dce_v11_0_show_cursor(crtc); 2653 2651
+9 -15
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 1859 1859 struct amdgpu_device *adev = crtc->dev->dev_private; 1860 1860 int xorigin = 0, yorigin = 0; 1861 1861 1862 + int w = amdgpu_crtc->cursor_width; 1863 + 1862 1864 amdgpu_crtc->cursor_x = x; 1863 1865 amdgpu_crtc->cursor_y = y; 1864 1866 ··· 1880 1878 1881 1879 WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y); 1882 1880 WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin); 1881 + WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 1882 + ((w - 1) << 16) | (amdgpu_crtc->cursor_height - 1)); 1883 1883 1884 1884 return 0; 1885 1885 } ··· 1907 1903 int32_t hot_y) 1908 1904 { 1909 1905 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 1910 - struct amdgpu_device *adev = crtc->dev->dev_private; 1911 1906 struct drm_gem_object *obj; 1912 1907 struct amdgpu_bo *aobj; 1913 1908 int ret; ··· 1947 1944 1948 1945 dce_v6_0_lock_cursor(crtc, true); 1949 1946 1950 - if (hot_x != amdgpu_crtc->cursor_hot_x || 1947 + if (width != amdgpu_crtc->cursor_width || 1948 + height != amdgpu_crtc->cursor_height || 1949 + hot_x != amdgpu_crtc->cursor_hot_x || 1951 1950 hot_y != amdgpu_crtc->cursor_hot_y) { 1952 1951 int x, y; 1953 1952 ··· 1958 1953 1959 1954 dce_v6_0_cursor_move_locked(crtc, x, y); 1960 1955 1961 - amdgpu_crtc->cursor_hot_x = hot_x; 1962 - amdgpu_crtc->cursor_hot_y = hot_y; 1963 - } 1964 - 1965 - if (width != amdgpu_crtc->cursor_width || 1966 - height != amdgpu_crtc->cursor_height) { 1967 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 1968 - (width - 1) << 16 | (height - 1)); 1969 1956 amdgpu_crtc->cursor_width = width; 1970 1957 amdgpu_crtc->cursor_height = height; 1958 + amdgpu_crtc->cursor_hot_x = hot_x; 1959 + amdgpu_crtc->cursor_hot_y = hot_y; 1971 1960 } 1972 1961 1973 1962 dce_v6_0_show_cursor(crtc); ··· 1985 1986 static void dce_v6_0_cursor_reset(struct drm_crtc *crtc) 1986 1987 { 1987 1988 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 1988 - struct amdgpu_device *adev = crtc->dev->dev_private; 1989 1989 
1990 1990 if (amdgpu_crtc->cursor_bo) { 1991 1991 dce_v6_0_lock_cursor(crtc, true); 1992 1992 1993 1993 dce_v6_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x, 1994 1994 amdgpu_crtc->cursor_y); 1995 - 1996 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 1997 - (amdgpu_crtc->cursor_width - 1) << 16 | 1998 - (amdgpu_crtc->cursor_height - 1)); 1999 1995 2000 1996 dce_v6_0_show_cursor(crtc); 2001 1997 dce_v6_0_lock_cursor(crtc, false);
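The CUR_SIZE programming that this patch moves into the cursor-move path packs both dimensions, minus one, into a single register word. A minimal sketch of that packing (the helper name is ours, not the driver's):

```c
#include <stdint.h>

/* Pack cursor width/height the way CUR_SIZE expects: the hardware
 * stores (dimension - 1), width in the high half, height in the low. */
static uint32_t cur_size_value(uint32_t width, uint32_t height)
{
    return ((width - 1) << 16) | (height - 1);
}
```

For the common 64x64 cursor this yields 0x003F003F, matching the `((w - 1) << 16) | (amdgpu_crtc->cursor_height - 1)` write above.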
+7 -15
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 2363 2363 2364 2364 WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y); 2365 2365 WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin); 2366 + WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2367 + ((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1)); 2366 2368 2367 2369 return 0; 2368 2370 } ··· 2390 2388 int32_t hot_y) 2391 2389 { 2392 2390 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2393 - struct amdgpu_device *adev = crtc->dev->dev_private; 2394 2391 struct drm_gem_object *obj; 2395 2392 struct amdgpu_bo *aobj; 2396 2393 int ret; ··· 2430 2429 2431 2430 dce_v8_0_lock_cursor(crtc, true); 2432 2431 2433 - if (hot_x != amdgpu_crtc->cursor_hot_x || 2432 + if (width != amdgpu_crtc->cursor_width || 2433 + height != amdgpu_crtc->cursor_height || 2434 + hot_x != amdgpu_crtc->cursor_hot_x || 2434 2435 hot_y != amdgpu_crtc->cursor_hot_y) { 2435 2436 int x, y; 2436 2437 ··· 2441 2438 2442 2439 dce_v8_0_cursor_move_locked(crtc, x, y); 2443 2440 2444 - amdgpu_crtc->cursor_hot_x = hot_x; 2445 - amdgpu_crtc->cursor_hot_y = hot_y; 2446 - } 2447 - 2448 - if (width != amdgpu_crtc->cursor_width || 2449 - height != amdgpu_crtc->cursor_height) { 2450 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2451 - (width - 1) << 16 | (height - 1)); 2452 2441 amdgpu_crtc->cursor_width = width; 2453 2442 amdgpu_crtc->cursor_height = height; 2443 + amdgpu_crtc->cursor_hot_x = hot_x; 2444 + amdgpu_crtc->cursor_hot_y = hot_y; 2454 2445 } 2455 2446 2456 2447 dce_v8_0_show_cursor(crtc); ··· 2468 2471 static void dce_v8_0_cursor_reset(struct drm_crtc *crtc) 2469 2472 { 2470 2473 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2471 - struct amdgpu_device *adev = crtc->dev->dev_private; 2472 2474 2473 2475 if (amdgpu_crtc->cursor_bo) { 2474 2476 dce_v8_0_lock_cursor(crtc, true); 2475 2477 2476 2478 dce_v8_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x, 2477 2479 amdgpu_crtc->cursor_y); 2478 - 2479 - WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset, 2480 - (amdgpu_crtc->cursor_width - 1) << 16 | 2481 - (amdgpu_crtc->cursor_height - 1)); 2482 2480 2483 2481 dce_v8_0_show_cursor(crtc); 2484 2482
+1 -4
drivers/gpu/drm/amd/amdgpu/dce_virtual.c
··· 627 627 628 628 static void dce_virtual_encoder_destroy(struct drm_encoder *encoder) 629 629 { 630 - struct amdgpu_encoder *amdgpu_encoder = to_amdgpu_encoder(encoder); 631 - 632 - kfree(amdgpu_encoder->enc_priv); 633 630 drm_encoder_cleanup(encoder); 634 - kfree(amdgpu_encoder); 631 + kfree(encoder); 635 632 } 636 633 637 634 static const struct drm_encoder_funcs dce_virtual_encoder_funcs = {
+19 -15
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
··· 44 44 MODULE_FIRMWARE("radeon/pitcairn_mc.bin"); 45 45 MODULE_FIRMWARE("radeon/verde_mc.bin"); 46 46 MODULE_FIRMWARE("radeon/oland_mc.bin"); 47 + MODULE_FIRMWARE("radeon/si58_mc.bin"); 47 48 48 49 #define MC_SEQ_MISC0__MT__MASK 0xf0000000 49 50 #define MC_SEQ_MISC0__MT__GDDR1 0x10000000 ··· 114 113 const char *chip_name; 115 114 char fw_name[30]; 116 115 int err; 116 + bool is_58_fw = false; 117 117 118 118 DRM_DEBUG("\n"); 119 119 ··· 137 135 default: BUG(); 138 136 } 139 137 140 - snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", chip_name); 138 + /* this memory configuration requires special firmware */ 139 + if (((RREG32(mmMC_SEQ_MISC0) & 0xff000000) >> 24) == 0x58) 140 + is_58_fw = true; 141 + 142 + if (is_58_fw) 143 + snprintf(fw_name, sizeof(fw_name), "radeon/si58_mc.bin"); 144 + else 145 + snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", chip_name); 141 146 err = request_firmware(&adev->mc.fw, fw_name, adev->dev); 142 147 if (err) 143 148 goto out; ··· 472 463 WREG32(mmVM_CONTEXT1_CNTL, 473 464 VM_CONTEXT1_CNTL__ENABLE_CONTEXT_MASK | 474 465 (1UL << VM_CONTEXT1_CNTL__PAGE_TABLE_DEPTH__SHIFT) | 475 - ((amdgpu_vm_block_size - 9) << VM_CONTEXT1_CNTL__PAGE_TABLE_BLOCK_SIZE__SHIFT) | 476 - VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 477 - VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK | 478 - VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 479 - VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK | 480 - VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 481 - VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_DEFAULT_MASK | 482 - VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 483 - VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_DEFAULT_MASK | 484 - VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 485 - VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_DEFAULT_MASK | 486 - VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK | 
487 - VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK); 466 + ((amdgpu_vm_block_size - 9) << VM_CONTEXT1_CNTL__PAGE_TABLE_BLOCK_SIZE__SHIFT)); 467 + if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) 468 + gmc_v6_0_set_fault_enable_default(adev, false); 469 + else 470 + gmc_v6_0_set_fault_enable_default(adev, true); 488 471 489 472 gmc_v6_0_gart_flush_gpu_tlb(adev, 0); 490 473 dev_info(adev->dev, "PCIE GART of %uM enabled (table at 0x%016llX).\n", ··· 755 754 { 756 755 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 757 756 758 - return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 757 + if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS) 758 + return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 759 + else 760 + return 0; 759 761 } 760 762 761 763 static int gmc_v6_0_sw_init(void *handle)
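The firmware-selection hunk above keys off the top byte of MC_SEQ_MISC0: the special si58 image is used when that byte reads 0x58. A standalone sketch of the decision (function name and buffer handling are ours):

```c
#include <stdio.h>

/* Pick the MC firmware name: memory configurations whose MC_SEQ_MISC0
 * top byte reads 0x58 need the special si58 image, regardless of chip. */
static void pick_mc_fw(char *buf, size_t len,
                       unsigned int mc_seq_misc0, const char *chip_name)
{
    if (((mc_seq_misc0 & 0xff000000) >> 24) == 0x58)
        snprintf(buf, len, "radeon/si58_mc.bin");
    else
        snprintf(buf, len, "radeon/%s_mc.bin", chip_name);
}
```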
+5 -15
drivers/gpu/drm/amd/amdgpu/si_dpm.c
··· 64 64 MODULE_FIRMWARE("radeon/oland_k_smc.bin"); 65 65 MODULE_FIRMWARE("radeon/hainan_smc.bin"); 66 66 MODULE_FIRMWARE("radeon/hainan_k_smc.bin"); 67 + MODULE_FIRMWARE("radeon/banks_k_2_smc.bin"); 67 68 68 69 union power_info { 69 70 struct _ATOM_POWERPLAY_INFO info; ··· 3488 3487 (adev->pdev->device == 0x6817) || 3489 3488 (adev->pdev->device == 0x6806)) 3490 3489 max_mclk = 120000; 3491 - } else if (adev->asic_type == CHIP_OLAND) { 3492 - if ((adev->pdev->revision == 0xC7) || 3493 - (adev->pdev->revision == 0x80) || 3494 - (adev->pdev->revision == 0x81) || 3495 - (adev->pdev->revision == 0x83) || 3496 - (adev->pdev->revision == 0x87) || 3497 - (adev->pdev->device == 0x6604) || 3498 - (adev->pdev->device == 0x6605)) { 3499 - max_sclk = 75000; 3500 - max_mclk = 80000; 3501 - } 3502 3490 } else if (adev->asic_type == CHIP_HAINAN) { 3503 3491 if ((adev->pdev->revision == 0x81) || 3504 3492 (adev->pdev->revision == 0x83) || ··· 3496 3506 (adev->pdev->device == 0x6665) || 3497 3507 (adev->pdev->device == 0x6667)) { 3498 3508 max_sclk = 75000; 3499 - max_mclk = 80000; 3500 3509 } 3501 3510 } 3502 3511 /* Apply dpm quirks */ ··· 7702 7713 ((adev->pdev->device == 0x6660) || 7703 7714 (adev->pdev->device == 0x6663) || 7704 7715 (adev->pdev->device == 0x6665) || 7705 - (adev->pdev->device == 0x6667))) || 7706 - ((adev->pdev->revision == 0xc3) && 7707 - (adev->pdev->device == 0x6665))) 7716 + (adev->pdev->device == 0x6667)))) 7708 7717 chip_name = "hainan_k"; 7718 + else if ((adev->pdev->revision == 0xc3) && 7719 + (adev->pdev->device == 0x6665)) 7720 + chip_name = "banks_k_2"; 7709 7721 else 7710 7722 chip_name = "hainan"; 7711 7723 break;
+10 -32
drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
··· 40 40 #include "smu/smu_7_0_1_sh_mask.h" 41 41 42 42 static void uvd_v4_2_mc_resume(struct amdgpu_device *adev); 43 - static void uvd_v4_2_init_cg(struct amdgpu_device *adev); 44 43 static void uvd_v4_2_set_ring_funcs(struct amdgpu_device *adev); 45 44 static void uvd_v4_2_set_irq_funcs(struct amdgpu_device *adev); 46 45 static int uvd_v4_2_start(struct amdgpu_device *adev); 47 46 static void uvd_v4_2_stop(struct amdgpu_device *adev); 48 47 static int uvd_v4_2_set_clockgating_state(void *handle, 49 48 enum amd_clockgating_state state); 49 + static void uvd_v4_2_set_dcm(struct amdgpu_device *adev, 50 + bool sw_mode); 50 51 /** 51 52 * uvd_v4_2_ring_get_rptr - get read pointer 52 53 * ··· 141 140 142 141 return r; 143 142 } 144 - 143 + static void uvd_v4_2_enable_mgcg(struct amdgpu_device *adev, 144 + bool enable); 145 145 /** 146 146 * uvd_v4_2_hw_init - start and test UVD block 147 147 * ··· 157 155 uint32_t tmp; 158 156 int r; 159 157 160 - uvd_v4_2_init_cg(adev); 161 - uvd_v4_2_set_clockgating_state(adev, AMD_CG_STATE_GATE); 158 + uvd_v4_2_enable_mgcg(adev, true); 162 159 amdgpu_asic_set_uvd_clocks(adev, 10000, 10000); 163 160 r = uvd_v4_2_start(adev); 164 161 if (r) ··· 267 266 struct amdgpu_ring *ring = &adev->uvd.ring; 268 267 uint32_t rb_bufsz; 269 268 int i, j, r; 270 - 271 269 /* disable byte swapping */ 272 270 u32 lmi_swap_cntl = 0; 273 271 u32 mp_swap_cntl = 0; 272 + 273 + WREG32(mmUVD_CGC_GATE, 0); 274 + uvd_v4_2_set_dcm(adev, true); 274 275 275 276 uvd_v4_2_mc_resume(adev); 276 277 ··· 409 406 410 407 /* Unstall UMC and register bus */ 411 408 WREG32_P(mmUVD_LMI_CTRL2, 0, ~(1 << 8)); 409 + 410 + uvd_v4_2_set_dcm(adev, false); 412 411 } 413 412 414 413 /** ··· 624 619 WREG32_UVD_CTX(ixUVD_CGC_CTRL2, tmp2); 625 620 } 626 621 627 - static void uvd_v4_2_init_cg(struct amdgpu_device *adev) 628 - { 629 - bool hw_mode = true; 630 - 631 - if (hw_mode) { 632 - uvd_v4_2_set_dcm(adev, false); 633 - } else { 634 - u32 tmp = RREG32(mmUVD_CGC_CTRL); 635 - tmp &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK; 636 - WREG32(mmUVD_CGC_CTRL, tmp); 637 - } 638 - } 639 - 640 622 static bool uvd_v4_2_is_idle(void *handle) 641 623 { 642 624 struct amdgpu_device *adev = (struct amdgpu_device *)handle; ··· 677 685 static int uvd_v4_2_set_clockgating_state(void *handle, 678 686 enum amd_clockgating_state state) 679 687 { 680 - bool gate = false; 681 - struct amdgpu_device *adev = (struct amdgpu_device *)handle; 682 - 683 - if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG)) 684 - return 0; 685 - 686 - if (state == AMD_CG_STATE_GATE) 687 - gate = true; 688 - 689 - uvd_v4_2_enable_mgcg(adev, gate); 690 - 691 688 return 0; 692 689 } ··· 691 710 * the smc and the hw blocks 692 711 */ 693 712 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 694 - 695 - if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD)) 696 - return 0; 697 713 698 714 if (state == AMD_PG_STATE_GATE) { 699 715 uvd_v4_2_stop(adev);
+17 -10
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
··· 43 43 44 44 #define GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT 0x04 45 45 #define GRBM_GFX_INDEX__VCE_INSTANCE_MASK 0x10 46 + #define GRBM_GFX_INDEX__VCE_ALL_PIPE 0x07 47 + 46 48 #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR0 0x8616 47 49 #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR1 0x8617 48 50 #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR2 0x8618 51 + #define mmGRBM_GFX_INDEX_DEFAULT 0xE0000000 52 + 49 53 #define VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK 0x02 50 54 51 55 #define VCE_V3_0_FW_SIZE (384 * 1024) ··· 57 53 #define VCE_V3_0_DATA_SIZE ((16 * 1024 * AMDGPU_MAX_VCE_HANDLES) + (52 * 1024)) 58 54 59 55 #define FW_52_8_3 ((52 << 24) | (8 << 16) | (3 << 8)) 56 + 57 + #define GET_VCE_INSTANCE(i) ((i) << GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT \ 58 + | GRBM_GFX_INDEX__VCE_ALL_PIPE) 60 59 61 60 static void vce_v3_0_mc_resume(struct amdgpu_device *adev, int idx); 62 61 static void vce_v3_0_set_ring_funcs(struct amdgpu_device *adev); ··· 182 175 WREG32(mmVCE_UENC_CLOCK_GATING_2, data); 183 176 184 177 data = RREG32(mmVCE_UENC_REG_CLOCK_GATING); 185 - data &= ~0xffc00000; 178 + data &= ~0x3ff; 186 179 WREG32(mmVCE_UENC_REG_CLOCK_GATING, data); 187 180 188 181 data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL); ··· 256 249 if (adev->vce.harvest_config & (1 << idx)) 257 250 continue; 258 251 259 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, idx); 252 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx)); 260 253 vce_v3_0_mc_resume(adev, idx); 261 254 WREG32_FIELD(VCE_STATUS, JOB_BUSY, 1); 262 255 ··· 280 273 } 281 274 } 282 275 283 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0); 276 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 284 277 mutex_unlock(&adev->grbm_idx_mutex); 285 278 286 279 return 0; ··· 295 288 if (adev->vce.harvest_config & (1 << idx)) 296 289 continue; 297 290 298 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, idx); 291 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx)); 299 292 300 293 if (adev->asic_type >= CHIP_STONEY) 301 294 WREG32_P(mmVCE_VCPU_CNTL, 0, ~0x200001); ··· 313 306 vce_v3_0_set_vce_sw_clock_gating(adev, false); 314 307 } 315 308 316 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0); 309 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 317 310 mutex_unlock(&adev->grbm_idx_mutex); 318 311 319 312 return 0; ··· 593 586 * VCE team suggest use bit 3--bit 6 for busy status check 594 587 */ 595 588 mutex_lock(&adev->grbm_idx_mutex); 596 - WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0); 589 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0)); 597 590 if (RREG32(mmVCE_STATUS) & AMDGPU_VCE_STATUS_BUSY_MASK) { 598 591 srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1); 599 592 srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1); 600 593 } 601 - WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0x10); 594 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1)); 602 595 if (RREG32(mmVCE_STATUS) & AMDGPU_VCE_STATUS_BUSY_MASK) { 603 596 srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1); 604 597 srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1); 605 598 } 606 - WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0); 599 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0)); 607 600 mutex_unlock(&adev->grbm_idx_mutex); 608 601 609 602 if (srbm_soft_reset) { ··· 741 734 if (adev->vce.harvest_config & (1 << i)) 742 735 continue; 743 736 744 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, i); 737 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(i)); 745 738 746 739 if (enable) { 747 740 /* initialize VCE_CLOCK_GATING_A: Clock ON/OFF delay */ ··· 760 753 vce_v3_0_set_vce_sw_clock_gating(adev, enable); 761 754 } 762 755 763 - WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0); 756 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 764 757 mutex_unlock(&adev->grbm_idx_mutex); 765 758 766 759 return 0;
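The new GET_VCE_INSTANCE() macro combines the instance index with the all-pipes bits before the value is written to mmGRBM_GFX_INDEX; its arithmetic can be checked in isolation:

```c
/* Definitions copied from the patch: instance index goes in bits 4+,
 * and the low bits select all VCE pipes. */
#define GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT 0x04
#define GRBM_GFX_INDEX__VCE_ALL_PIPE        0x07

#define GET_VCE_INSTANCE(i) ((i) << GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT \
                             | GRBM_GFX_INDEX__VCE_ALL_PIPE)
```

Instance 0 encodes as 0x07 and instance 1 as 0x17, which is what the soft-reset path above now writes instead of the WREG32_FIELD() calls.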
+2 -2
drivers/gpu/drm/amd/powerplay/hwmgr/cz_clockpowergating.c
··· 200 200 cgs_set_clockgating_state( 201 201 hwmgr->device, 202 202 AMD_IP_BLOCK_TYPE_VCE, 203 - AMD_CG_STATE_UNGATE); 203 + AMD_CG_STATE_GATE); 204 204 cgs_set_powergating_state( 205 205 hwmgr->device, 206 206 AMD_IP_BLOCK_TYPE_VCE, ··· 218 218 cgs_set_clockgating_state( 219 219 hwmgr->device, 220 220 AMD_IP_BLOCK_TYPE_VCE, 221 - AMD_PG_STATE_GATE); 221 + AMD_PG_STATE_UNGATE); 222 222 cz_dpm_update_vce_dpm(hwmgr); 223 223 cz_enable_disable_vce_dpm(hwmgr, true); 224 224 return 0;
+16 -8
drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
··· 1402 1402 cz_hwmgr->vce_dpm.hard_min_clk, 1403 1403 PPSMC_MSG_SetEclkHardMin)); 1404 1404 } else { 1405 - /*EPR# 419220 -HW limitation to to */ 1406 - cz_hwmgr->vce_dpm.hard_min_clk = hwmgr->vce_arbiter.ecclk; 1407 - smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, 1408 - PPSMC_MSG_SetEclkHardMin, 1409 - cz_get_eclk_level(hwmgr, 1410 - cz_hwmgr->vce_dpm.hard_min_clk, 1411 - PPSMC_MSG_SetEclkHardMin)); 1412 - 1405 + /*Program HardMin based on the vce_arbiter.ecclk */ 1406 + if (hwmgr->vce_arbiter.ecclk == 0) { 1407 + smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, 1408 + PPSMC_MSG_SetEclkHardMin, 0); 1409 + /* disable ECLK DPM 0. Otherwise VCE could hang if 1410 + * switching SCLK from DPM 0 to 6/7 */ 1411 + smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, 1412 + PPSMC_MSG_SetEclkSoftMin, 1); 1413 + } else { 1414 + cz_hwmgr->vce_dpm.hard_min_clk = hwmgr->vce_arbiter.ecclk; 1415 + smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, 1416 + PPSMC_MSG_SetEclkHardMin, 1417 + cz_get_eclk_level(hwmgr, 1418 + cz_hwmgr->vce_dpm.hard_min_clk, 1419 + PPSMC_MSG_SetEclkHardMin)); 1420 + } 1413 1421 } 1414 1422 return 0; 1415 1423 }
+1
drivers/gpu/drm/ast/ast_drv.h
··· 113 113 struct ttm_bo_kmap_obj cache_kmap; 114 114 int next_cursor; 115 115 bool support_wide_screen; 116 + bool DisableP2A; 116 117 117 118 enum ast_tx_chip tx_chip_type; 118 119 u8 dp501_maxclk;
+83 -74
drivers/gpu/drm/ast/ast_main.c
··· 124 124 } else 125 125 *need_post = false; 126 126 127 + /* Check P2A Access */ 128 + ast->DisableP2A = true; 129 + data = ast_read32(ast, 0xf004); 130 + if (data != 0xFFFFFFFF) 131 + ast->DisableP2A = false; 132 + 127 133 /* Check if we support wide screen */ 128 134 switch (ast->chip) { 129 135 case AST1180: ··· 146 140 ast->support_wide_screen = true; 147 141 else { 148 142 ast->support_wide_screen = false; 149 - /* Read SCU7c (silicon revision register) */ 150 - ast_write32(ast, 0xf004, 0x1e6e0000); 151 - ast_write32(ast, 0xf000, 0x1); 152 - data = ast_read32(ast, 0x1207c); 153 - data &= 0x300; 154 - if (ast->chip == AST2300 && data == 0x0) /* ast1300 */ 155 - ast->support_wide_screen = true; 156 - if (ast->chip == AST2400 && data == 0x100) /* ast1400 */ 157 - ast->support_wide_screen = true; 143 + if (ast->DisableP2A == false) { 144 + /* Read SCU7c (silicon revision register) */ 145 + ast_write32(ast, 0xf004, 0x1e6e0000); 146 + ast_write32(ast, 0xf000, 0x1); 147 + data = ast_read32(ast, 0x1207c); 148 + data &= 0x300; 149 + if (ast->chip == AST2300 && data == 0x0) /* ast1300 */ 150 + ast->support_wide_screen = true; 151 + if (ast->chip == AST2400 && data == 0x100) /* ast1400 */ 152 + ast->support_wide_screen = true; 153 + } 158 154 } 159 155 break; 160 156 } ··· 224 216 uint32_t data, data2; 225 217 uint32_t denum, num, div, ref_pll; 226 218 227 - ast_write32(ast, 0xf004, 0x1e6e0000); 228 - ast_write32(ast, 0xf000, 0x1); 229 - 230 - 231 - ast_write32(ast, 0x10000, 0xfc600309); 232 - 233 - do { 234 - if (pci_channel_offline(dev->pdev)) 235 - return -EIO; 236 - } while (ast_read32(ast, 0x10000) != 0x01); 237 - data = ast_read32(ast, 0x10004); 238 - 239 - if (data & 0x40) 219 + if (ast->DisableP2A) 220 + { 240 221 ast->dram_bus_width = 16; 222 + ast->dram_type = AST_DRAM_1Gx16; 223 + ast->mclk = 396; 224 + } 241 225 else 242 - ast->dram_bus_width = 32; 226 + { 227 + ast_write32(ast, 0xf004, 0x1e6e0000); 228 + ast_write32(ast, 0xf000, 0x1); 229 + data = ast_read32(ast, 0x10004); 243 230 244 - if (ast->chip == AST2300 || ast->chip == AST2400) { 245 - switch (data & 0x03) { 246 - case 0: 247 - ast->dram_type = AST_DRAM_512Mx16; 248 - break; 249 - default: 250 - case 1: 251 - ast->dram_type = AST_DRAM_1Gx16; 231 + if (data & 0x40) 232 + ast->dram_bus_width = 16; 233 + else 234 + ast->dram_bus_width = 32; 235 + 236 + if (ast->chip == AST2300 || ast->chip == AST2400) { 237 + switch (data & 0x03) { 238 + case 0: 239 + ast->dram_type = AST_DRAM_512Mx16; 240 + break; 241 + default: 242 + case 1: 243 + ast->dram_type = AST_DRAM_1Gx16; 244 + break; 245 + case 2: 246 + ast->dram_type = AST_DRAM_2Gx16; 247 + break; 248 + case 3: 249 + ast->dram_type = AST_DRAM_4Gx16; 250 + break; 251 + } 252 + } else { 253 + switch (data & 0x0c) { 254 + case 0: 255 + case 4: 256 + ast->dram_type = AST_DRAM_512Mx16; 257 + break; 258 + case 8: 259 + if (data & 0x40) 260 + ast->dram_type = AST_DRAM_1Gx16; 261 + else 262 + ast->dram_type = AST_DRAM_512Mx32; 263 + break; 264 + case 0xc: 265 + ast->dram_type = AST_DRAM_1Gx32; 266 + break; 267 + } 268 + } 269 + 270 + data = ast_read32(ast, 0x10120); 271 + data2 = ast_read32(ast, 0x10170); 272 + if (data2 & 0x2000) 273 + ref_pll = 14318; 274 + else 275 + ref_pll = 12000; 276 + 277 + denum = data & 0x1f; 278 + num = (data & 0x3fe0) >> 5; 279 + data = (data & 0xc000) >> 14; 280 + switch (data) { 281 + case 3: 282 + div = 0x4; 252 283 break; 253 284 case 2: 254 - ast->dram_type = AST_DRAM_2Gx16; 285 + case 1: 286 + div = 0x2; 255 287 break; 256 - case 3: 257 - ast->dram_type = AST_DRAM_4Gx16; 258 - break; 259 - } 260 - } else { 261 - switch (data & 0x0c) { 262 - case 0: 263 - case 4: 264 - ast->dram_type = AST_DRAM_512Mx16; 265 - break; 266 - case 8: 267 - if (data & 0x40) 268 - ast->dram_type = AST_DRAM_1Gx16; 269 - else 270 - ast->dram_type = AST_DRAM_512Mx32; 271 - break; 272 - case 0xc: 273 - ast->dram_type = AST_DRAM_1Gx32; 288 + default: 289 + div = 0x1; 274 290 break; 275 291 } 292 + ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000); 276 293 } 277 - 278 - data = ast_read32(ast, 0x10120); 279 - data2 = ast_read32(ast, 0x10170); 280 - if (data2 & 0x2000) 281 - ref_pll = 14318; 282 - else 283 - ref_pll = 12000; 284 - 285 - denum = data & 0x1f; 286 - num = (data & 0x3fe0) >> 5; 287 - data = (data & 0xc000) >> 14; 288 - switch (data) { 289 - case 3: 290 - div = 0x4; 291 - break; 292 - case 2: 293 - case 1: 294 - div = 0x2; 295 - break; 296 - default: 297 - div = 0x1; 298 - break; 299 - } 300 - ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000); 301 294 return 0; 302 295 303 296
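Setting aside the P2A plumbing, the MCLK derivation that this patch moves into the else-branch is plain PLL arithmetic on the 0x10120/0x10170 scratch values. Here it is lifted into a standalone helper (the function name is ours; the final expression is copied verbatim, and note that C's left-to-right evaluation divides by (denum + 2) before multiplying by (div * 1000)):

```c
#include <stdint.h>

/* MCLK computation as written in ast_get_dram_info(): decode the
 * numerator, denominator and post-divider fields from the 0x10120
 * register value, then apply them to the reference PLL (in kHz). */
static uint32_t ast_mclk(uint32_t data, uint32_t ref_pll)
{
    uint32_t denum = data & 0x1f;
    uint32_t num = (data & 0x3fe0) >> 5;
    uint32_t div;

    switch ((data & 0xc000) >> 14) {
    case 3:
        div = 0x4;
        break;
    case 2:
    case 1:
        div = 0x2;
        break;
    default:
        div = 0x1;
        break;
    }
    return ref_pll * (num + 2) / (denum + 2) * (div * 1000);
}
```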
+13 -5
drivers/gpu/drm/ast/ast_post.c
··· 379 379 ast_open_key(ast); 380 380 ast_set_def_ext_reg(dev); 381 381 382 - if (ast->chip == AST2300 || ast->chip == AST2400) 383 - ast_init_dram_2300(dev); 384 - else 385 - ast_init_dram_reg(dev); 382 + if (ast->DisableP2A == false) 383 + { 384 + if (ast->chip == AST2300 || ast->chip == AST2400) 385 + ast_init_dram_2300(dev); 386 + else 387 + ast_init_dram_reg(dev); 386 388 387 - ast_init_3rdtx(dev); 389 + ast_init_3rdtx(dev); 390 + } 391 + else 392 + { 393 + if (ast->tx_chip_type != AST_TX_NONE) 394 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xcf, 0x80); /* Enable DVO */ 395 + } 388 396 } 389 397 390 398 /* AST 2300 DRAM settings */
+7
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 1382 1382 1383 1383 pm_runtime_enable(dev); 1384 1384 1385 + pm_runtime_get_sync(dev); 1385 1386 phy_power_on(dp->phy); 1386 1387 1387 1388 analogix_dp_init_dp(dp); ··· 1415 1414 goto err_disable_pm_runtime; 1416 1415 } 1417 1416 1417 + phy_power_off(dp->phy); 1418 + pm_runtime_put(dev); 1419 + 1418 1420 return 0; 1419 1421 1420 1422 err_disable_pm_runtime: 1423 + 1424 + phy_power_off(dp->phy); 1425 + pm_runtime_put(dev); 1421 1426 pm_runtime_disable(dev); 1422 1427 1423 1428 return ret;
+9
drivers/gpu/drm/cirrus/Kconfig
··· 7 7 This is a KMS driver for emulated cirrus device in qemu. 8 8 It is *NOT* intended for real cirrus devices. This requires 9 9 the modesetting userspace X.org driver. 10 + 11 + Cirrus is obsolete, the hardware was designed in the '90s 12 + and can't keep up with today's needs. More background: 13 + https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 14 + 15 + Better alternatives are: 16 + - stdvga (DRM_BOCHS, qemu -vga std, default in qemu 2.2+) 17 + - qxl (DRM_QXL, qemu -vga qxl, works best with spice) 18 + - virtio (DRM_VIRTIO_GPU, qemu -vga virtio)
+6 -6
drivers/gpu/drm/drm_atomic.c
··· 291 291 EXPORT_SYMBOL(drm_atomic_get_crtc_state); 292 292 293 293 static void set_out_fence_for_crtc(struct drm_atomic_state *state, 294 - struct drm_crtc *crtc, s64 __user *fence_ptr) 294 + struct drm_crtc *crtc, s32 __user *fence_ptr) 295 295 { 296 296 state->crtcs[drm_crtc_index(crtc)].out_fence_ptr = fence_ptr; 297 297 } 298 298 299 - static s64 __user *get_out_fence_for_crtc(struct drm_atomic_state *state, 299 + static s32 __user *get_out_fence_for_crtc(struct drm_atomic_state *state, 300 300 struct drm_crtc *crtc) 301 301 { 302 - s64 __user *fence_ptr; 302 + s32 __user *fence_ptr; 303 303 304 304 fence_ptr = state->crtcs[drm_crtc_index(crtc)].out_fence_ptr; 305 305 state->crtcs[drm_crtc_index(crtc)].out_fence_ptr = NULL; ··· 512 512 state->color_mgmt_changed |= replaced; 513 513 return ret; 514 514 } else if (property == config->prop_out_fence_ptr) { 515 - s64 __user *fence_ptr = u64_to_user_ptr(val); 515 + s32 __user *fence_ptr = u64_to_user_ptr(val); 516 516 517 517 if (!fence_ptr) 518 518 return 0; ··· 1915 1915 */ 1916 1916 1917 1917 struct drm_out_fence_state { 1918 - s64 __user *out_fence_ptr; 1918 + s32 __user *out_fence_ptr; 1919 1919 struct sync_file *sync_file; 1920 1920 int fd; 1921 1921 }; ··· 1952 1952 return 0; 1953 1953 1954 1954 for_each_crtc_in_state(state, crtc, crtc_state, i) { 1955 - u64 __user *fence_ptr; 1955 + s32 __user *fence_ptr; 1956 1956 1957 1957 fence_ptr = get_out_fence_for_crtc(crtc_state->state, crtc); 1958 1958
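The s64 to s32 change above matters because the value written back through that pointer is a sync-file fd, which userspace declares as a 32-bit int; an 8-byte store through the pointer would spill into whatever the caller placed next to it. A userspace-style sketch of the hazard (the struct and names are ours, not the DRM uAPI):

```c
#include <stdint.h>
#include <string.h>

/* What userspace hands over: a 32-bit slot for the fence fd,
 * with unrelated data directly behind it. */
struct user_args {
    int32_t fence_fd;
    int32_t other;
};

/* Simulate an 8-byte (s64-sized) store through the fence_fd slot
 * and report what happened to the neighbouring field. */
static int32_t other_after_wide_store(void)
{
    struct user_args a = { .fence_fd = -1, .other = 0x1234 };
    int64_t fd = 42;

    memcpy(&a, &fd, sizeof(fd)); /* 8 bytes into the 8-byte struct */
    return a.other;              /* clobbered on any byte order */
}
```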
+7
drivers/gpu/drm/drm_modes.c
··· 1460 1460 return NULL; 1461 1461 1462 1462 mode->type |= DRM_MODE_TYPE_USERDEF; 1463 + /* fix up 1368x768: GFT/CVT can't express 1366 width due to alignment */ 1464 + if (cmd->xres == 1366 && mode->hdisplay == 1368) { 1465 + mode->hdisplay = 1366; 1466 + mode->hsync_start--; 1467 + mode->hsync_end--; 1468 + drm_mode_set_name(mode); 1469 + } 1463 1470 drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V); 1464 1471 return mode; 1465 1472 }
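The fixup in this hunk works because GTF/CVT timings quantize the active width to 8-pixel character cells, so a requested 1366 comes back as 1368 and the patch shrinks it back afterwards. A sketch of that rounding (the helper name is ours; GTF rounds to the nearest cell):

```c
/* Round an active width to the nearest 8-pixel character cell, the
 * way the GTF timing generator does. 1366 is not expressible. */
static int round_to_cell(int hdisplay)
{
    return (hdisplay + 4) / 8 * 8;
}
```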
+11 -1
drivers/gpu/drm/drm_probe_helper.c
··· 143 143 } 144 144 145 145 if (dev->mode_config.delayed_event) { 146 + /* 147 + * FIXME: 148 + * 149 + * Use short (1s) delay to handle the initial delayed event. 150 + * This delay should not be needed, but Optimus/nouveau will 151 + * fail in a mysterious way if the delayed event is handled as 152 + * soon as possible like it is done in 153 + * drm_helper_probe_single_connector_modes() in case the poll 154 + * was enabled before. 155 + */ 146 156 poll = true; 147 - delay = 0; 157 + delay = HZ; 148 158 } 149 159 150 160 if (poll)
+6 -1
drivers/gpu/drm/etnaviv/etnaviv_mmu.c
··· 116 116 struct list_head list; 117 117 bool found; 118 118 119 + /* 120 + * XXX: The DRM_MM_SEARCH_BELOW is really a hack to trick 121 + * drm_mm into giving out a low IOVA after address space 122 + * rollover. This needs a proper fix. 123 + */ 119 124 ret = drm_mm_insert_node_in_range(&mmu->mm, node, 120 125 size, 0, mmu->last_iova, ~0UL, 121 - DRM_MM_SEARCH_DEFAULT); 126 + mmu->last_iova ? DRM_MM_SEARCH_DEFAULT : DRM_MM_SEARCH_BELOW); 122 127 123 128 if (ret != -ENOSPC) 124 129 break;
+6 -9
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 46 46 BIT_CLKS_ENABLED, 47 47 BIT_IRQS_ENABLED, 48 48 BIT_WIN_UPDATED, 49 - BIT_SUSPENDED 49 + BIT_SUSPENDED, 50 + BIT_REQUEST_UPDATE 50 51 }; 51 52 52 53 struct decon_context { ··· 141 140 m->crtc_vsync_start = m->crtc_vdisplay + 1; 142 141 m->crtc_vsync_end = m->crtc_vsync_start + 1; 143 142 } 144 - 145 - decon_set_bits(ctx, DECON_VIDCON0, VIDCON0_ENVID, 0); 146 - 147 - /* enable clock gate */ 148 - val = CMU_CLKGAGE_MODE_SFR_F | CMU_CLKGAGE_MODE_MEM_F; 149 - writel(val, ctx->addr + DECON_CMU); 150 143 151 144 if (ctx->out_type & (IFTYPE_I80 | I80_HW_TRG)) 152 145 decon_setup_trigger(ctx); ··· 310 315 311 316 /* window enable */ 312 317 decon_set_bits(ctx, DECON_WINCONx(win), WINCONx_ENWIN_F, ~0); 318 + set_bit(BIT_REQUEST_UPDATE, &ctx->flags); 313 319 } 314 320 315 321 static void decon_disable_plane(struct exynos_drm_crtc *crtc, ··· 323 327 return; 324 328 325 329 decon_set_bits(ctx, DECON_WINCONx(win), WINCONx_ENWIN_F, 0); 330 + set_bit(BIT_REQUEST_UPDATE, &ctx->flags); 326 331 } 327 332 328 333 static void decon_atomic_flush(struct exynos_drm_crtc *crtc) ··· 337 340 for (i = ctx->first_win; i < WINDOWS_NR; i++) 338 341 decon_shadow_protect_win(ctx, i, false); 339 342 340 - /* standalone update */ 341 - decon_set_bits(ctx, DECON_UPDATE, STANDALONE_UPDATE_F, ~0); 343 + if (test_and_clear_bit(BIT_REQUEST_UPDATE, &ctx->flags)) 344 + decon_set_bits(ctx, DECON_UPDATE, STANDALONE_UPDATE_F, ~0); 342 345 343 346 if (ctx->out_type & IFTYPE_I80) 344 347 set_bit(BIT_WIN_UPDATED, &ctx->flags);
+27 -9
drivers/gpu/drm/i915/gvt/aperture_gm.c
··· 37 37 #include "i915_drv.h" 38 38 #include "gvt.h" 39 39 40 - #define MB_TO_BYTES(mb) ((mb) << 20ULL) 41 - #define BYTES_TO_MB(b) ((b) >> 20ULL) 42 - 43 - #define HOST_LOW_GM_SIZE MB_TO_BYTES(128) 44 - #define HOST_HIGH_GM_SIZE MB_TO_BYTES(384) 45 - #define HOST_FENCE 4 46 - 47 40 static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm) 48 41 { 49 42 struct intel_gvt *gvt = vgpu->gvt; ··· 158 165 POSTING_READ(fence_reg_lo); 159 166 } 160 167 168 + static void _clear_vgpu_fence(struct intel_vgpu *vgpu) 169 + { 170 + int i; 171 + 172 + for (i = 0; i < vgpu_fence_sz(vgpu); i++) 173 + intel_vgpu_write_fence(vgpu, i, 0); 174 + } 175 + 161 176 static void free_vgpu_fence(struct intel_vgpu *vgpu) 162 177 { 163 178 struct intel_gvt *gvt = vgpu->gvt; ··· 179 178 intel_runtime_pm_get(dev_priv); 180 179 181 180 mutex_lock(&dev_priv->drm.struct_mutex); 181 + _clear_vgpu_fence(vgpu); 182 182 for (i = 0; i < vgpu_fence_sz(vgpu); i++) { 183 183 reg = vgpu->fence.regs[i]; 184 - intel_vgpu_write_fence(vgpu, i, 0); 185 184 list_add_tail(&reg->link, 186 185 &dev_priv->mm.fence_list); 187 186 } ··· 209 208 continue; 210 209 list_del(pos); 211 210 vgpu->fence.regs[i] = reg; 212 - intel_vgpu_write_fence(vgpu, i, 0); 213 211 if (++i == vgpu_fence_sz(vgpu)) 214 212 break; 215 213 } 216 214 if (i != vgpu_fence_sz(vgpu)) 217 215 goto out_free_fence; 216 + 217 + _clear_vgpu_fence(vgpu); 218 218 219 219 mutex_unlock(&dev_priv->drm.struct_mutex); 220 220 intel_runtime_pm_put(dev_priv); ··· 313 311 free_vgpu_gm(vgpu); 314 312 free_vgpu_fence(vgpu); 315 313 free_resource(vgpu); 314 + } 315 + 316 + /** 317 + * intel_vgpu_reset_resource - reset resource state owned by a vGPU 318 + * @vgpu: a vGPU 319 + * 320 + * This function is used to reset resource state owned by a vGPU. 
321 + * 322 + */ 323 + void intel_vgpu_reset_resource(struct intel_vgpu *vgpu) 324 + { 325 + struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 326 + 327 + intel_runtime_pm_get(dev_priv); 328 + _clear_vgpu_fence(vgpu); 329 + intel_runtime_pm_put(dev_priv); 316 330 } 317 331 318 332 /**
+74
drivers/gpu/drm/i915/gvt/cfg_space.c
··· 282 282 } 283 283 return 0; 284 284 } 285 + 286 + /** 287 + * intel_vgpu_init_cfg_space - init vGPU configuration space when create vGPU 288 + * 289 + * @vgpu: a vGPU 290 + * @primary: is the vGPU presented as primary 291 + * 292 + */ 293 + void intel_vgpu_init_cfg_space(struct intel_vgpu *vgpu, 294 + bool primary) 295 + { 296 + struct intel_gvt *gvt = vgpu->gvt; 297 + const struct intel_gvt_device_info *info = &gvt->device_info; 298 + u16 *gmch_ctl; 299 + int i; 300 + 301 + memcpy(vgpu_cfg_space(vgpu), gvt->firmware.cfg_space, 302 + info->cfg_space_size); 303 + 304 + if (!primary) { 305 + vgpu_cfg_space(vgpu)[PCI_CLASS_DEVICE] = 306 + INTEL_GVT_PCI_CLASS_VGA_OTHER; 307 + vgpu_cfg_space(vgpu)[PCI_CLASS_PROG] = 308 + INTEL_GVT_PCI_CLASS_VGA_OTHER; 309 + } 310 + 311 + /* Show guest that there isn't any stolen memory.*/ 312 + gmch_ctl = (u16 *)(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_GMCH_CONTROL); 313 + *gmch_ctl &= ~(BDW_GMCH_GMS_MASK << BDW_GMCH_GMS_SHIFT); 314 + 315 + intel_vgpu_write_pci_bar(vgpu, PCI_BASE_ADDRESS_2, 316 + gvt_aperture_pa_base(gvt), true); 317 + 318 + vgpu_cfg_space(vgpu)[PCI_COMMAND] &= ~(PCI_COMMAND_IO 319 + | PCI_COMMAND_MEMORY 320 + | PCI_COMMAND_MASTER); 321 + /* 322 + * Clear the bar upper 32bit and let guest to assign the new value 323 + */ 324 + memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_1, 0, 4); 325 + memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_3, 0, 4); 326 + memset(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_OPREGION, 0, 4); 327 + 328 + for (i = 0; i < INTEL_GVT_MAX_BAR_NUM; i++) { 329 + vgpu->cfg_space.bar[i].size = pci_resource_len( 330 + gvt->dev_priv->drm.pdev, i * 2); 331 + vgpu->cfg_space.bar[i].tracked = false; 332 + } 333 + } 334 + 335 + /** 336 + * intel_vgpu_reset_cfg_space - reset vGPU configuration space 337 + * 338 + * @vgpu: a vGPU 339 + * 340 + */ 341 + void intel_vgpu_reset_cfg_space(struct intel_vgpu *vgpu) 342 + { 343 + u8 cmd = vgpu_cfg_space(vgpu)[PCI_COMMAND]; 344 + bool primary = vgpu_cfg_space(vgpu)[PCI_CLASS_DEVICE] != 345 + INTEL_GVT_PCI_CLASS_VGA_OTHER; 346 + 347 + if (cmd & PCI_COMMAND_MEMORY) { 348 + trap_gttmmio(vgpu, false); 349 + map_aperture(vgpu, false); 350 + } 351 + 352 + /** 353 + * Currently we only do such reset when vGPU is not 354 + * owned by any VM, so we simply restore entire cfg 355 + * space to default value. 356 + */ 357 + intel_vgpu_init_cfg_space(vgpu, primary); 358 + }
-4
drivers/gpu/drm/i915/gvt/cmd_parser.c
··· 481 481 (s->vgpu->gvt->device_info.gmadr_bytes_in_cmd >> 2) 482 482 483 483 static unsigned long bypass_scan_mask = 0; 484 - static bool bypass_batch_buffer_scan = true; 485 484 486 485 /* ring ALL, type = 0 */ 487 486 static struct sub_op_bits sub_op_mi[] = { ··· 1523 1524 static int batch_buffer_needs_scan(struct parser_exec_state *s) 1524 1525 { 1525 1526 struct intel_gvt *gvt = s->vgpu->gvt; 1526 - 1527 - if (bypass_batch_buffer_scan) 1528 - return 0; 1529 1527 1530 1528 if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)) { 1531 1529 /* BDW decides privilege based on address space */
+19 -47
drivers/gpu/drm/i915/gvt/execlist.c
··· 364 364 #define get_desc_from_elsp_dwords(ed, i) \ 365 365 ((struct execlist_ctx_descriptor_format *)&((ed)->data[i * 2])) 366 366 367 - 368 - #define BATCH_BUFFER_ADDR_MASK ((1UL << 32) - (1U << 2)) 369 - #define BATCH_BUFFER_ADDR_HIGH_MASK ((1UL << 16) - (1U)) 370 - static int set_gma_to_bb_cmd(struct intel_shadow_bb_entry *entry_obj, 371 - unsigned long add, int gmadr_bytes) 372 - { 373 - if (WARN_ON(gmadr_bytes != 4 && gmadr_bytes != 8)) 374 - return -1; 375 - 376 - *((u32 *)(entry_obj->bb_start_cmd_va + (1 << 2))) = add & 377 - BATCH_BUFFER_ADDR_MASK; 378 - if (gmadr_bytes == 8) { 379 - *((u32 *)(entry_obj->bb_start_cmd_va + (2 << 2))) = 380 - add & BATCH_BUFFER_ADDR_HIGH_MASK; 381 - } 382 - 383 - return 0; 384 - } 385 - 386 367 static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload) 387 368 { 388 - int gmadr_bytes = workload->vgpu->gvt->device_info.gmadr_bytes_in_cmd; 369 + const int gmadr_bytes = workload->vgpu->gvt->device_info.gmadr_bytes_in_cmd; 370 + struct intel_shadow_bb_entry *entry_obj; 389 371 390 372 /* pin the gem object to ggtt */ 391 - if (!list_empty(&workload->shadow_bb)) { 392 - struct intel_shadow_bb_entry *entry_obj = 393 - list_first_entry(&workload->shadow_bb, 394 - struct intel_shadow_bb_entry, 395 - list); 396 - struct intel_shadow_bb_entry *temp; 373 + list_for_each_entry(entry_obj, &workload->shadow_bb, list) { 374 + struct i915_vma *vma; 397 375 398 - list_for_each_entry_safe(entry_obj, temp, &workload->shadow_bb, 399 - list) { 400 - struct i915_vma *vma; 401 - 402 - vma = i915_gem_object_ggtt_pin(entry_obj->obj, NULL, 0, 403 - 4, 0); 404 - if (IS_ERR(vma)) { 405 - gvt_err("Cannot pin\n"); 406 - return; 407 - } 408 - 409 - /* FIXME: we are not tracking our pinned VMA leaving it 410 - * up to the core to fix up the stray pin_count upon 411 - * free. 
412 - */ 413 - 414 - /* update the relocate gma with shadow batch buffer*/ 415 - set_gma_to_bb_cmd(entry_obj, 416 - i915_ggtt_offset(vma), 417 - gmadr_bytes); 376 + vma = i915_gem_object_ggtt_pin(entry_obj->obj, NULL, 0, 4, 0); 377 + if (IS_ERR(vma)) { 378 + gvt_err("Cannot pin\n"); 379 + return; 418 380 } 381 + 382 + /* FIXME: we are not tracking our pinned VMA leaving it 383 + * up to the core to fix up the stray pin_count upon 384 + * free. 385 + */ 386 + 387 + /* update the relocate gma with shadow batch buffer*/ 388 + entry_obj->bb_start_cmd_va[1] = i915_ggtt_offset(vma); 389 + if (gmadr_bytes == 8) 390 + entry_obj->bb_start_cmd_va[2] = 0; 419 391 } 420 392 } 421 393 ··· 798 826 INIT_LIST_HEAD(&vgpu->workload_q_head[i]); 799 827 } 800 828 801 - vgpu->workloads = kmem_cache_create("gvt-g vgpu workload", 829 + vgpu->workloads = kmem_cache_create("gvt-g_vgpu_workload", 802 830 sizeof(struct intel_vgpu_workload), 0, 803 831 SLAB_HWCACHE_ALIGN, 804 832 NULL);
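The rewrite above drops set_gma_to_bb_cmd's open-coded pointer arithmetic in favor of indexed stores through a typed u32 pointer. A sketch of the resulting relocation under the same dword layout (word 0 is the opcode, word 1 the 32-bit graphics address, word 2 the high dword when the command carries 8 address bytes); as in the patch, the high dword is written as zero because a GGTT offset fits in 32 bits:

```c
#include <assert.h>
#include <stdint.h>

/* Relocate a batch-buffer-start style command to point at the shadow
 * batch buffer. cmd_va[0] is the opcode and is left untouched. */
static void relocate_bb_start(uint32_t *cmd_va, uint32_t ggtt_offset,
			      int gmadr_bytes)
{
	cmd_va[1] = ggtt_offset;	/* 32-bit graphics address */
	if (gmadr_bytes == 8)
		cmd_va[2] = 0;		/* GGTT offsets fit in 32 bits */
}
```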
+44 -37
drivers/gpu/drm/i915/gvt/gtt.c
··· 240 240 static u64 read_pte64(struct drm_i915_private *dev_priv, unsigned long index) 241 241 { 242 242 void __iomem *addr = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm + index; 243 - u64 pte; 244 243 245 - #ifdef readq 246 - pte = readq(addr); 247 - #else 248 - pte = ioread32(addr); 249 - pte |= (u64)ioread32(addr + 4) << 32; 250 - #endif 251 - return pte; 244 + return readq(addr); 252 245 } 253 246 254 247 static void write_pte64(struct drm_i915_private *dev_priv, ··· 249 256 { 250 257 void __iomem *addr = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm + index; 251 258 252 - #ifdef writeq 253 259 writeq(pte, addr); 254 - #else 255 - iowrite32((u32)pte, addr); 256 - iowrite32(pte >> 32, addr + 4); 257 - #endif 260 + 258 261 I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN); 259 262 POSTING_READ(GFX_FLSH_CNTL_GEN6); 260 263 } ··· 1369 1380 info->gtt_entry_size; 1370 1381 mem = kzalloc(mm->has_shadow_page_table ? 1371 1382 mm->page_table_entry_size * 2 1372 - : mm->page_table_entry_size, 1373 - GFP_ATOMIC); 1383 + : mm->page_table_entry_size, GFP_KERNEL); 1374 1384 if (!mem) 1375 1385 return -ENOMEM; 1376 1386 mm->virtual_page_table = mem; ··· 1520 1532 struct intel_vgpu_mm *mm; 1521 1533 int ret; 1522 1534 1523 - mm = kzalloc(sizeof(*mm), GFP_ATOMIC); 1535 + mm = kzalloc(sizeof(*mm), GFP_KERNEL); 1524 1536 if (!mm) { 1525 1537 ret = -ENOMEM; 1526 1538 goto fail; ··· 1874 1886 struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops; 1875 1887 int page_entry_num = GTT_PAGE_SIZE >> 1876 1888 vgpu->gvt->device_info.gtt_entry_size_shift; 1877 - struct page *scratch_pt; 1889 + void *scratch_pt; 1878 1890 unsigned long mfn; 1879 1891 int i; 1880 - void *p; 1881 1892 1882 1893 if (WARN_ON(type < GTT_TYPE_PPGTT_PTE_PT || type >= GTT_TYPE_MAX)) 1883 1894 return -EINVAL; 1884 1895 1885 - scratch_pt = alloc_page(GFP_KERNEL | GFP_ATOMIC | __GFP_ZERO); 1896 + scratch_pt = (void *)get_zeroed_page(GFP_KERNEL); 1886 1897 if (!scratch_pt) { 1887 1898 gvt_err("fail to allocate scratch 
page\n"); 1888 1899 return -ENOMEM; 1889 1900 } 1890 1901 1891 - p = kmap_atomic(scratch_pt); 1892 - mfn = intel_gvt_hypervisor_virt_to_mfn(p); 1902 + mfn = intel_gvt_hypervisor_virt_to_mfn(scratch_pt); 1893 1903 if (mfn == INTEL_GVT_INVALID_ADDR) { 1894 - gvt_err("fail to translate vaddr:0x%llx\n", (u64)p); 1895 - kunmap_atomic(p); 1896 - __free_page(scratch_pt); 1904 + gvt_err("fail to translate vaddr:0x%lx\n", (unsigned long)scratch_pt); 1905 + free_page((unsigned long)scratch_pt); 1897 1906 return -EFAULT; 1898 1907 } 1899 1908 gtt->scratch_pt[type].page_mfn = mfn; 1900 - gtt->scratch_pt[type].page = scratch_pt; 1909 + gtt->scratch_pt[type].page = virt_to_page(scratch_pt); 1901 1910 gvt_dbg_mm("vgpu%d create scratch_pt: type %d mfn=0x%lx\n", 1902 1911 vgpu->id, type, mfn); 1903 1912 ··· 1903 1918 * scratch_pt[type] indicate the scratch pt/scratch page used by the 1904 1919 * 'type' pt. 1905 1920 * e.g. scratch_pt[GTT_TYPE_PPGTT_PDE_PT] is used by 1906 - * GTT_TYPE_PPGTT_PDE_PT level pt, that means this scatch_pt it self 1921 + * GTT_TYPE_PPGTT_PDE_PT level pt, that means this scratch_pt it self 1907 1922 * is GTT_TYPE_PPGTT_PTE_PT, and full filled by scratch page mfn. 
1908 1923 */ 1909 1924 if (type > GTT_TYPE_PPGTT_PTE_PT && type < GTT_TYPE_MAX) { ··· 1921 1936 se.val64 |= PPAT_CACHED_INDEX; 1922 1937 1923 1938 for (i = 0; i < page_entry_num; i++) 1924 - ops->set_entry(p, &se, i, false, 0, vgpu); 1939 + ops->set_entry(scratch_pt, &se, i, false, 0, vgpu); 1925 1940 } 1926 - 1927 - kunmap_atomic(p); 1928 1941 1929 1942 return 0; 1930 1943 } ··· 2191 2208 int intel_gvt_init_gtt(struct intel_gvt *gvt) 2192 2209 { 2193 2210 int ret; 2194 - void *page_addr; 2211 + void *page; 2195 2212 2196 2213 gvt_dbg_core("init gtt\n"); 2197 2214 ··· 2204 2221 return -ENODEV; 2205 2222 } 2206 2223 2207 - gvt->gtt.scratch_ggtt_page = 2208 - alloc_page(GFP_KERNEL | GFP_ATOMIC | __GFP_ZERO); 2209 - if (!gvt->gtt.scratch_ggtt_page) { 2224 + page = (void *)get_zeroed_page(GFP_KERNEL); 2225 + if (!page) { 2210 2226 gvt_err("fail to allocate scratch ggtt page\n"); 2211 2227 return -ENOMEM; 2212 2228 } 2229 + gvt->gtt.scratch_ggtt_page = virt_to_page(page); 2213 2230 2214 - page_addr = page_address(gvt->gtt.scratch_ggtt_page); 2215 - 2216 - gvt->gtt.scratch_ggtt_mfn = 2217 - intel_gvt_hypervisor_virt_to_mfn(page_addr); 2231 + gvt->gtt.scratch_ggtt_mfn = intel_gvt_hypervisor_virt_to_mfn(page); 2218 2232 if (gvt->gtt.scratch_ggtt_mfn == INTEL_GVT_INVALID_ADDR) { 2219 2233 gvt_err("fail to translate scratch ggtt page\n"); 2220 2234 __free_page(gvt->gtt.scratch_ggtt_page); ··· 2276 2296 num_entries = vgpu_hidden_sz(vgpu) >> PAGE_SHIFT; 2277 2297 for (offset = 0; offset < num_entries; offset++) 2278 2298 ops->set_entry(NULL, &e, index + offset, false, 0, vgpu); 2299 + } 2300 + 2301 + /** 2302 + * intel_vgpu_reset_gtt - reset the all GTT related status 2303 + * @vgpu: a vGPU 2304 + * @dmlr: true for vGPU Device Model Level Reset, false for GT Reset 2305 + * 2306 + * This function is called from vfio core to reset reset all 2307 + * GTT related status, including GGTT, PPGTT, scratch page. 
2308 + * 2309 + */ 2310 + void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu, bool dmlr) 2311 + { 2312 + int i; 2313 + 2314 + ppgtt_free_all_shadow_page(vgpu); 2315 + if (!dmlr) 2316 + return; 2317 + 2318 + intel_vgpu_reset_ggtt(vgpu); 2319 + 2320 + /* clear scratch page for security */ 2321 + for (i = GTT_TYPE_PPGTT_PTE_PT; i < GTT_TYPE_MAX; i++) { 2322 + if (vgpu->gtt.scratch_pt[i].page != NULL) 2323 + memset(page_address(vgpu->gtt.scratch_pt[i].page), 2324 + 0, PAGE_SIZE); 2325 + } 2279 2326 }
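The `#ifdef readq` fallbacks deleted above composed each 64-bit PTE access out of two 32-bit accesses, low dword first; the patch can remove them because GVT-g only runs on 64-bit platforms where readq/writeq always exist. The removed composition, as a portable userspace sketch:

```c
#include <assert.h>
#include <stdint.h>

/* The removed read fallback: assemble a 64-bit PTE from two 32-bit
 * loads, low dword first. */
static uint64_t read_pte64_split(const uint32_t *addr)
{
	uint64_t pte = addr[0];

	pte |= (uint64_t)addr[1] << 32;
	return pte;
}

/* The inverse, matching the removed writeq fallback. */
static void write_pte64_split(uint32_t *addr, uint64_t pte)
{
	addr[0] = (uint32_t)pte;
	addr[1] = (uint32_t)(pte >> 32);
}
```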
+1
drivers/gpu/drm/i915/gvt/gtt.h
··· 208 208 void intel_vgpu_reset_ggtt(struct intel_vgpu *vgpu); 209 209 210 210 extern int intel_gvt_init_gtt(struct intel_gvt *gvt); 211 + extern void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu, bool dmlr); 211 212 extern void intel_gvt_clean_gtt(struct intel_gvt *gvt); 212 213 213 214 extern struct intel_vgpu_mm *intel_gvt_find_ppgtt_mm(struct intel_vgpu *vgpu,
+7 -1
drivers/gpu/drm/i915/gvt/gvt.c
··· 201 201 intel_gvt_hypervisor_host_exit(&dev_priv->drm.pdev->dev, gvt); 202 202 intel_gvt_clean_vgpu_types(gvt); 203 203 204 + idr_destroy(&gvt->vgpu_idr); 205 + 204 206 kfree(dev_priv->gvt); 205 207 dev_priv->gvt = NULL; 206 208 } ··· 239 237 240 238 gvt_dbg_core("init gvt device\n"); 241 239 240 + idr_init(&gvt->vgpu_idr); 241 + 242 242 mutex_init(&gvt->lock); 243 243 gvt->dev_priv = dev_priv; 244 244 ··· 248 244 249 245 ret = intel_gvt_setup_mmio_info(gvt); 250 246 if (ret) 251 - return ret; 247 + goto out_clean_idr; 252 248 253 249 ret = intel_gvt_load_firmware(gvt); 254 250 if (ret) ··· 317 313 intel_gvt_free_firmware(gvt); 318 314 out_clean_mmio_info: 319 315 intel_gvt_clean_mmio_info(gvt); 316 + out_clean_idr: 317 + idr_destroy(&gvt->vgpu_idr); 320 318 kfree(gvt); 321 319 return ret; 322 320 }
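The gvt.c change above completes the goto-unwind chain: idr_init() now has a matching idr_destroy() on every failure path (via the new out_clean_idr label), not only in the clean function. A toy sketch of that error-unwinding pattern, with counters standing in for the real resources:

```c
#include <assert.h>
#include <stdbool.h>

static int live;	/* stands in for currently-allocated resources */

static void fake_idr_init(void)    { live++; }
static void fake_idr_destroy(void) { live--; }
static int  fake_setup_mmio(bool fail) { return fail ? -1 : 0; }

/* Each failure jumps to a label that undoes only what already succeeded,
 * mirroring the out_clean_idr path added to intel_gvt_init_device. */
static int init_device(bool fail_mmio)
{
	int ret;

	fake_idr_init();
	ret = fake_setup_mmio(fail_mmio);
	if (ret)
		goto out_clean_idr;
	return 0;

out_clean_idr:
	fake_idr_destroy();
	return ret;
}
```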
+7 -1
drivers/gpu/drm/i915/gvt/gvt.h
··· 323 323 324 324 int intel_vgpu_alloc_resource(struct intel_vgpu *vgpu, 325 325 struct intel_vgpu_creation_params *param); 326 + void intel_vgpu_reset_resource(struct intel_vgpu *vgpu); 326 327 void intel_vgpu_free_resource(struct intel_vgpu *vgpu); 327 328 void intel_vgpu_write_fence(struct intel_vgpu *vgpu, 328 329 u32 fence, u64 value); ··· 376 375 struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt, 377 376 struct intel_vgpu_type *type); 378 377 void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu); 378 + void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr, 379 + unsigned int engine_mask); 379 380 void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu); 380 381 381 382 ··· 414 411 int intel_gvt_ggtt_h2g_index(struct intel_vgpu *vgpu, unsigned long h_index, 415 412 unsigned long *g_index); 416 413 414 + void intel_vgpu_init_cfg_space(struct intel_vgpu *vgpu, 415 + bool primary); 416 + void intel_vgpu_reset_cfg_space(struct intel_vgpu *vgpu); 417 + 417 418 int intel_vgpu_emulate_cfg_read(struct intel_vgpu *vgpu, unsigned int offset, 418 419 void *p_data, unsigned int bytes); 419 420 ··· 431 424 int intel_vgpu_init_opregion(struct intel_vgpu *vgpu, u32 gpa); 432 425 433 426 int intel_vgpu_emulate_opregion_request(struct intel_vgpu *vgpu, u32 swsci); 434 - int setup_vgpu_mmio(struct intel_vgpu *vgpu); 435 427 void populate_pvinfo_page(struct intel_vgpu *vgpu); 436 428 437 429 struct intel_gvt_ops {
+35 -68
drivers/gpu/drm/i915/gvt/handlers.c
··· 93 93 static int new_mmio_info(struct intel_gvt *gvt, 94 94 u32 offset, u32 flags, u32 size, 95 95 u32 addr_mask, u32 ro_mask, u32 device, 96 - void *read, void *write) 96 + int (*read)(struct intel_vgpu *, unsigned int, void *, unsigned int), 97 + int (*write)(struct intel_vgpu *, unsigned int, void *, unsigned int)) 97 98 { 98 99 struct intel_gvt_mmio_info *info, *p; 99 100 u32 start, end, i; ··· 220 219 default: 221 220 /*should not hit here*/ 222 221 gvt_err("invalid forcewake offset 0x%x\n", offset); 223 - return 1; 222 + return -EINVAL; 224 223 } 225 224 } else { 226 225 ack_reg_offset = FORCEWAKE_ACK_HSW_REG; ··· 231 230 return 0; 232 231 } 233 232 234 - static int handle_device_reset(struct intel_vgpu *vgpu, unsigned int offset, 235 - void *p_data, unsigned int bytes, unsigned long bitmap) 236 - { 237 - struct intel_gvt_workload_scheduler *scheduler = 238 - &vgpu->gvt->scheduler; 239 - 240 - vgpu->resetting = true; 241 - 242 - intel_vgpu_stop_schedule(vgpu); 243 - /* 244 - * The current_vgpu will set to NULL after stopping the 245 - * scheduler when the reset is triggered by current vgpu. 
246 - */ 247 - if (scheduler->current_vgpu == NULL) { 248 - mutex_unlock(&vgpu->gvt->lock); 249 - intel_gvt_wait_vgpu_idle(vgpu); 250 - mutex_lock(&vgpu->gvt->lock); 251 - } 252 - 253 - intel_vgpu_reset_execlist(vgpu, bitmap); 254 - 255 - /* full GPU reset */ 256 - if (bitmap == 0xff) { 257 - mutex_unlock(&vgpu->gvt->lock); 258 - intel_vgpu_clean_gtt(vgpu); 259 - mutex_lock(&vgpu->gvt->lock); 260 - setup_vgpu_mmio(vgpu); 261 - populate_pvinfo_page(vgpu); 262 - intel_vgpu_init_gtt(vgpu); 263 - } 264 - 265 - vgpu->resetting = false; 266 - 267 - return 0; 268 - } 269 - 270 233 static int gdrst_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, 271 - void *p_data, unsigned int bytes) 234 + void *p_data, unsigned int bytes) 272 235 { 236 + unsigned int engine_mask = 0; 273 237 u32 data; 274 - u64 bitmap = 0; 275 238 276 239 write_vreg(vgpu, offset, p_data, bytes); 277 240 data = vgpu_vreg(vgpu, offset); 278 241 279 242 if (data & GEN6_GRDOM_FULL) { 280 243 gvt_dbg_mmio("vgpu%d: request full GPU reset\n", vgpu->id); 281 - bitmap = 0xff; 244 + engine_mask = ALL_ENGINES; 245 + } else { 246 + if (data & GEN6_GRDOM_RENDER) { 247 + gvt_dbg_mmio("vgpu%d: request RCS reset\n", vgpu->id); 248 + engine_mask |= (1 << RCS); 249 + } 250 + if (data & GEN6_GRDOM_MEDIA) { 251 + gvt_dbg_mmio("vgpu%d: request VCS reset\n", vgpu->id); 252 + engine_mask |= (1 << VCS); 253 + } 254 + if (data & GEN6_GRDOM_BLT) { 255 + gvt_dbg_mmio("vgpu%d: request BCS Reset\n", vgpu->id); 256 + engine_mask |= (1 << BCS); 257 + } 258 + if (data & GEN6_GRDOM_VECS) { 259 + gvt_dbg_mmio("vgpu%d: request VECS Reset\n", vgpu->id); 260 + engine_mask |= (1 << VECS); 261 + } 262 + if (data & GEN8_GRDOM_MEDIA2) { 263 + gvt_dbg_mmio("vgpu%d: request VCS2 Reset\n", vgpu->id); 264 + if (HAS_BSD2(vgpu->gvt->dev_priv)) 265 + engine_mask |= (1 << VCS2); 266 + } 282 267 } 283 - if (data & GEN6_GRDOM_RENDER) { 284 - gvt_dbg_mmio("vgpu%d: request RCS reset\n", vgpu->id); 285 - bitmap |= (1 << RCS); 286 - } 287 - if (data 
& GEN6_GRDOM_MEDIA) { 288 - gvt_dbg_mmio("vgpu%d: request VCS reset\n", vgpu->id); 289 - bitmap |= (1 << VCS); 290 - } 291 - if (data & GEN6_GRDOM_BLT) { 292 - gvt_dbg_mmio("vgpu%d: request BCS Reset\n", vgpu->id); 293 - bitmap |= (1 << BCS); 294 - } 295 - if (data & GEN6_GRDOM_VECS) { 296 - gvt_dbg_mmio("vgpu%d: request VECS Reset\n", vgpu->id); 297 - bitmap |= (1 << VECS); 298 - } 299 - if (data & GEN8_GRDOM_MEDIA2) { 300 - gvt_dbg_mmio("vgpu%d: request VCS2 Reset\n", vgpu->id); 301 - if (HAS_BSD2(vgpu->gvt->dev_priv)) 302 - bitmap |= (1 << VCS2); 303 - } 304 - return handle_device_reset(vgpu, offset, p_data, bytes, bitmap); 268 + 269 + intel_gvt_reset_vgpu_locked(vgpu, false, engine_mask); 270 + 271 + return 0; 305 272 } 306 273 307 274 static int gmbus_mmio_read(struct intel_vgpu *vgpu, unsigned int offset, ··· 943 974 return 0; 944 975 } 945 976 946 - static bool sbi_ctl_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, 977 + static int sbi_ctl_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, 947 978 void *p_data, unsigned int bytes) 948 979 { 949 980 u32 data; ··· 1335 1366 static int gvt_reg_tlb_control_handler(struct intel_vgpu *vgpu, 1336 1367 unsigned int offset, void *p_data, unsigned int bytes) 1337 1368 { 1338 - int rc = 0; 1339 1369 unsigned int id = 0; 1340 1370 1341 1371 write_vreg(vgpu, offset, p_data, bytes); ··· 1357 1389 id = VECS; 1358 1390 break; 1359 1391 default: 1360 - rc = -EINVAL; 1361 - break; 1392 + return -EINVAL; 1362 1393 } 1363 1394 set_bit(id, (void *)vgpu->tlb_handle_pending); 1364 1395 1365 - return rc; 1396 + return 0; 1366 1397 } 1367 1398 1368 1399 static int ring_reset_ctl_write(struct intel_vgpu *vgpu,
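The reworked gdrst_mmio_write above folds the per-domain checks into a single engine mask that is handed to the common reset path. A self-contained sketch of that translation; the GRDOM_* bits and engine numbering here are illustrative stand-ins for the i915 definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for GEN6_GRDOM_* and the engine ids. */
#define GRDOM_FULL   (1u << 0)
#define GRDOM_RENDER (1u << 1)
#define GRDOM_MEDIA  (1u << 2)
#define GRDOM_BLT    (1u << 3)

enum { RCS, VCS, BCS, NUM_ENGINES };
#define ALL_ENGINES ((1u << NUM_ENGINES) - 1)

/* Translate a GDRST-style request into an engine reset mask, as the
 * rewritten handler does before calling the shared reset routine. */
static unsigned int gdrst_to_engine_mask(uint32_t data)
{
	unsigned int mask = 0;

	if (data & GRDOM_FULL)
		return ALL_ENGINES;
	if (data & GRDOM_RENDER)
		mask |= 1u << RCS;
	if (data & GRDOM_MEDIA)
		mask |= 1u << VCS;
	if (data & GRDOM_BLT)
		mask |= 1u << BCS;
	return mask;
}
```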
+14 -8
drivers/gpu/drm/i915/gvt/kvmgt.c
··· 230 230 return NULL; 231 231 } 232 232 233 - static ssize_t available_instance_show(struct kobject *kobj, struct device *dev, 234 - char *buf) 233 + static ssize_t available_instances_show(struct kobject *kobj, 234 + struct device *dev, char *buf) 235 235 { 236 236 struct intel_vgpu_type *type; 237 237 unsigned int num = 0; ··· 269 269 type->fence); 270 270 } 271 271 272 - static MDEV_TYPE_ATTR_RO(available_instance); 272 + static MDEV_TYPE_ATTR_RO(available_instances); 273 273 static MDEV_TYPE_ATTR_RO(device_api); 274 274 static MDEV_TYPE_ATTR_RO(description); 275 275 276 276 static struct attribute *type_attrs[] = { 277 - &mdev_type_attr_available_instance.attr, 277 + &mdev_type_attr_available_instances.attr, 278 278 &mdev_type_attr_device_api.attr, 279 279 &mdev_type_attr_description.attr, 280 280 NULL, ··· 398 398 struct intel_vgpu_type *type; 399 399 struct device *pdev; 400 400 void *gvt; 401 + int ret; 401 402 402 403 pdev = mdev_parent_dev(mdev); 403 404 gvt = kdev_to_i915(pdev)->gvt; ··· 407 406 if (!type) { 408 407 gvt_err("failed to find type %s to create\n", 409 408 kobject_name(kobj)); 410 - return -EINVAL; 409 + ret = -EINVAL; 410 + goto out; 411 411 } 412 412 413 413 vgpu = intel_gvt_ops->vgpu_create(gvt, type); 414 414 if (IS_ERR_OR_NULL(vgpu)) { 415 - gvt_err("create intel vgpu failed\n"); 416 - return -EINVAL; 415 + ret = vgpu == NULL ? -EFAULT : PTR_ERR(vgpu); 416 + gvt_err("failed to create intel vgpu: %d\n", ret); 417 + goto out; 417 418 } 418 419 419 420 INIT_WORK(&vgpu->vdev.release_work, intel_vgpu_release_work); ··· 425 422 426 423 gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n", 427 424 dev_name(mdev_dev(mdev))); 428 - return 0; 425 + ret = 0; 426 + 427 + out: 428 + return ret; 429 429 } 430 430 431 431 static int intel_vgpu_remove(struct mdev_device *mdev)
+69 -15
drivers/gpu/drm/i915/gvt/mmio.c
··· 125 125 if (WARN_ON(!reg_is_mmio(gvt, offset + bytes - 1))) 126 126 goto err; 127 127 128 - mmio = intel_gvt_find_mmio_info(gvt, rounddown(offset, 4)); 129 - if (!mmio && !vgpu->mmio.disable_warn_untrack) { 130 - gvt_err("vgpu%d: read untracked MMIO %x len %d val %x\n", 131 - vgpu->id, offset, bytes, *(u32 *)p_data); 132 - 133 - if (offset == 0x206c) { 134 - gvt_err("------------------------------------------\n"); 135 - gvt_err("vgpu%d: likely triggers a gfx reset\n", 136 - vgpu->id); 137 - gvt_err("------------------------------------------\n"); 138 - vgpu->mmio.disable_warn_untrack = true; 139 - } 140 - } 141 - 142 128 if (!intel_gvt_mmio_is_unalign(gvt, offset)) { 143 129 if (WARN_ON(!IS_ALIGNED(offset, bytes))) 144 130 goto err; 145 131 } 146 132 133 + mmio = intel_gvt_find_mmio_info(gvt, rounddown(offset, 4)); 147 134 if (mmio) { 148 135 if (!intel_gvt_mmio_is_unalign(gvt, mmio->offset)) { 149 136 if (WARN_ON(offset + bytes > mmio->offset + mmio->size)) ··· 139 152 goto err; 140 153 } 141 154 ret = mmio->read(vgpu, offset, p_data, bytes); 142 - } else 155 + } else { 143 156 ret = intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes); 157 + 158 + if (!vgpu->mmio.disable_warn_untrack) { 159 + gvt_err("vgpu%d: read untracked MMIO %x(%dB) val %x\n", 160 + vgpu->id, offset, bytes, *(u32 *)p_data); 161 + 162 + if (offset == 0x206c) { 163 + gvt_err("------------------------------------------\n"); 164 + gvt_err("vgpu%d: likely triggers a gfx reset\n", 165 + vgpu->id); 166 + gvt_err("------------------------------------------\n"); 167 + vgpu->mmio.disable_warn_untrack = true; 168 + } 169 + } 170 + } 144 171 145 172 if (ret) 146 173 goto err; ··· 302 301 vgpu->id, offset, bytes); 303 302 mutex_unlock(&gvt->lock); 304 303 return ret; 304 + } 305 + 306 + 307 + /** 308 + * intel_vgpu_reset_mmio - reset virtual MMIO space 309 + * @vgpu: a vGPU 310 + * 311 + */ 312 + void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu) 313 + { 314 + struct intel_gvt *gvt = vgpu->gvt; 
315 + const struct intel_gvt_device_info *info = &gvt->device_info; 316 + 317 + memcpy(vgpu->mmio.vreg, gvt->firmware.mmio, info->mmio_size); 318 + memcpy(vgpu->mmio.sreg, gvt->firmware.mmio, info->mmio_size); 319 + 320 + vgpu_vreg(vgpu, GEN6_GT_THREAD_STATUS_REG) = 0; 321 + 322 + /* set the bit 0:2(Core C-State ) to C0 */ 323 + vgpu_vreg(vgpu, GEN6_GT_CORE_STATUS) = 0; 324 + } 325 + 326 + /** 327 + * intel_vgpu_init_mmio - init MMIO space 328 + * @vgpu: a vGPU 329 + * 330 + * Returns: 331 + * Zero on success, negative error code if failed 332 + */ 333 + int intel_vgpu_init_mmio(struct intel_vgpu *vgpu) 334 + { 335 + const struct intel_gvt_device_info *info = &vgpu->gvt->device_info; 336 + 337 + vgpu->mmio.vreg = vzalloc(info->mmio_size * 2); 338 + if (!vgpu->mmio.vreg) 339 + return -ENOMEM; 340 + 341 + vgpu->mmio.sreg = vgpu->mmio.vreg + info->mmio_size; 342 + 343 + intel_vgpu_reset_mmio(vgpu); 344 + 345 + return 0; 346 + } 347 + 348 + /** 349 + * intel_vgpu_clean_mmio - clean MMIO space 350 + * @vgpu: a vGPU 351 + * 352 + */ 353 + void intel_vgpu_clean_mmio(struct intel_vgpu *vgpu) 354 + { 355 + vfree(vgpu->mmio.vreg); 356 + vgpu->mmio.vreg = vgpu->mmio.sreg = NULL; 305 357 }
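The new intel_vgpu_init_mmio/intel_vgpu_reset_mmio split above separates one-time allocation from reusable state restore: a single allocation of twice mmio_size backs both register blocks, with sreg living in the second half, and reset just recopies the firmware snapshot. A userspace sketch of that layout and lifecycle:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One allocation of 2 * size; the shadow block is the second half. */
struct mmio_space {
	unsigned char *vreg;
	unsigned char *sreg;
	size_t size;
};

static int mmio_init(struct mmio_space *m, size_t size)
{
	m->vreg = calloc(2, size);
	if (!m->vreg)
		return -1;
	m->sreg = m->vreg + size;
	m->size = size;
	return 0;
}

/* Reset restores both blocks from a firmware snapshot; it allocates
 * nothing, so it can be called again from the reset path. */
static void mmio_reset(struct mmio_space *m, const unsigned char *fw)
{
	memcpy(m->vreg, fw, m->size);
	memcpy(m->sreg, fw, m->size);
}

static void mmio_clean(struct mmio_space *m)
{
	free(m->vreg);
	m->vreg = m->sreg = NULL;
}
```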
+4
drivers/gpu/drm/i915/gvt/mmio.h
··· 86 86 *offset; \ 87 87 }) 88 88 89 + int intel_vgpu_init_mmio(struct intel_vgpu *vgpu); 90 + void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu); 91 + void intel_vgpu_clean_mmio(struct intel_vgpu *vgpu); 92 + 89 93 int intel_vgpu_gpa_to_mmio_offset(struct intel_vgpu *vgpu, u64 gpa); 90 94 91 95 int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, u64 pa,
+4 -4
drivers/gpu/drm/i915/gvt/opregion.c
··· 36 36 vgpu->id)) 37 37 return -EINVAL; 38 38 39 - vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_ATOMIC | 40 - GFP_DMA32 | __GFP_ZERO, 41 - INTEL_GVT_OPREGION_PORDER); 39 + vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL | 40 + __GFP_ZERO, 41 + get_order(INTEL_GVT_OPREGION_SIZE)); 42 42 43 43 if (!vgpu_opregion(vgpu)->va) 44 44 return -ENOMEM; ··· 97 97 if (intel_gvt_host.hypervisor_type == INTEL_GVT_HYPERVISOR_XEN) { 98 98 map_vgpu_opregion(vgpu, false); 99 99 free_pages((unsigned long)vgpu_opregion(vgpu)->va, 100 - INTEL_GVT_OPREGION_PORDER); 100 + get_order(INTEL_GVT_OPREGION_SIZE)); 101 101 102 102 vgpu_opregion(vgpu)->va = NULL; 103 103 }
+1 -2
drivers/gpu/drm/i915/gvt/reg.h
··· 50 50 #define INTEL_GVT_OPREGION_PARM 0x204 51 51 52 52 #define INTEL_GVT_OPREGION_PAGES 2 53 - #define INTEL_GVT_OPREGION_PORDER 1 54 - #define INTEL_GVT_OPREGION_SIZE (2 * 4096) 53 + #define INTEL_GVT_OPREGION_SIZE (INTEL_GVT_OPREGION_PAGES * PAGE_SIZE) 55 54 56 55 #define VGT_SPRSTRIDE(pipe) _PIPE(pipe, _SPRA_STRIDE, _PLANE_STRIDE_2_B) 57 56
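The reg.h cleanup above retires the hard-coded INTEL_GVT_OPREGION_PORDER in favor of get_order(INTEL_GVT_OPREGION_SIZE): the smallest order n such that PAGE_SIZE << n covers the size, so two 4 KiB pages give order 1. A userspace sketch of that computation:

```c
#include <assert.h>

#define PAGE_SIZE_SK 4096ul

/* Sketch of the kernel's get_order(): smallest n with
 * (PAGE_SIZE << n) >= size. */
static int get_order_sk(unsigned long size)
{
	unsigned long pages = (size + PAGE_SIZE_SK - 1) / PAGE_SIZE_SK;
	int order = 0;

	while ((1ul << order) < pages)
		order++;
	return order;
}
```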
+7 -7
drivers/gpu/drm/i915/gvt/scheduler.c
··· 350 350 { 351 351 struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; 352 352 struct intel_vgpu_workload *workload; 353 + struct intel_vgpu *vgpu; 353 354 int event; 354 355 355 356 mutex_lock(&gvt->lock); 356 357 357 358 workload = scheduler->current_workload[ring_id]; 359 + vgpu = workload->vgpu; 358 360 359 - if (!workload->status && !workload->vgpu->resetting) { 361 + if (!workload->status && !vgpu->resetting) { 360 362 wait_event(workload->shadow_ctx_status_wq, 361 363 !atomic_read(&workload->shadow_ctx_active)); 362 364 ··· 366 364 367 365 for_each_set_bit(event, workload->pending_events, 368 366 INTEL_GVT_EVENT_MAX) 369 - intel_vgpu_trigger_virtual_event(workload->vgpu, 370 - event); 367 + intel_vgpu_trigger_virtual_event(vgpu, event); 371 368 } 372 369 373 370 gvt_dbg_sched("ring id %d complete workload %p status %d\n", ··· 374 373 375 374 scheduler->current_workload[ring_id] = NULL; 376 375 377 - atomic_dec(&workload->vgpu->running_workload_num); 378 - 379 376 list_del_init(&workload->list); 380 377 workload->complete(workload); 381 378 379 + atomic_dec(&vgpu->running_workload_num); 382 380 wake_up(&scheduler->workload_complete_wq); 383 381 mutex_unlock(&gvt->lock); 384 382 } ··· 459 459 gvt_dbg_sched("will complete workload %p\n, status: %d\n", 460 460 workload, workload->status); 461 461 462 - complete_current_workload(gvt, ring_id); 463 - 464 462 if (workload->req) 465 463 i915_gem_request_put(fetch_and_zero(&workload->req)); 464 + 465 + complete_current_workload(gvt, ring_id); 466 466 467 467 if (need_force_wake) 468 468 intel_uncore_forcewake_put(gvt->dev_priv,
+1 -1
drivers/gpu/drm/i915/gvt/scheduler.h
··· 113 113 struct drm_i915_gem_object *obj; 114 114 void *va; 115 115 unsigned long len; 116 - void *bb_start_cmd_va; 116 + u32 *bb_start_cmd_va; 117 117 }; 118 118 119 119 #define workload_q_head(vgpu, ring_id) \
+81 -81
drivers/gpu/drm/i915/gvt/vgpu.c
··· 35 35 #include "gvt.h" 36 36 #include "i915_pvinfo.h" 37 37 38 - static void clean_vgpu_mmio(struct intel_vgpu *vgpu) 39 - { 40 - vfree(vgpu->mmio.vreg); 41 - vgpu->mmio.vreg = vgpu->mmio.sreg = NULL; 42 - } 43 - 44 - int setup_vgpu_mmio(struct intel_vgpu *vgpu) 45 - { 46 - struct intel_gvt *gvt = vgpu->gvt; 47 - const struct intel_gvt_device_info *info = &gvt->device_info; 48 - 49 - if (vgpu->mmio.vreg) 50 - memset(vgpu->mmio.vreg, 0, info->mmio_size * 2); 51 - else { 52 - vgpu->mmio.vreg = vzalloc(info->mmio_size * 2); 53 - if (!vgpu->mmio.vreg) 54 - return -ENOMEM; 55 - } 56 - 57 - vgpu->mmio.sreg = vgpu->mmio.vreg + info->mmio_size; 58 - 59 - memcpy(vgpu->mmio.vreg, gvt->firmware.mmio, info->mmio_size); 60 - memcpy(vgpu->mmio.sreg, gvt->firmware.mmio, info->mmio_size); 61 - 62 - vgpu_vreg(vgpu, GEN6_GT_THREAD_STATUS_REG) = 0; 63 - 64 - /* set the bit 0:2(Core C-State ) to C0 */ 65 - vgpu_vreg(vgpu, GEN6_GT_CORE_STATUS) = 0; 66 - return 0; 67 - } 68 - 69 - static void setup_vgpu_cfg_space(struct intel_vgpu *vgpu, 70 - struct intel_vgpu_creation_params *param) 71 - { 72 - struct intel_gvt *gvt = vgpu->gvt; 73 - const struct intel_gvt_device_info *info = &gvt->device_info; 74 - u16 *gmch_ctl; 75 - int i; 76 - 77 - memcpy(vgpu_cfg_space(vgpu), gvt->firmware.cfg_space, 78 - info->cfg_space_size); 79 - 80 - if (!param->primary) { 81 - vgpu_cfg_space(vgpu)[PCI_CLASS_DEVICE] = 82 - INTEL_GVT_PCI_CLASS_VGA_OTHER; 83 - vgpu_cfg_space(vgpu)[PCI_CLASS_PROG] = 84 - INTEL_GVT_PCI_CLASS_VGA_OTHER; 85 - } 86 - 87 - /* Show guest that there isn't any stolen memory.*/ 88 - gmch_ctl = (u16 *)(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_GMCH_CONTROL); 89 - *gmch_ctl &= ~(BDW_GMCH_GMS_MASK << BDW_GMCH_GMS_SHIFT); 90 - 91 - intel_vgpu_write_pci_bar(vgpu, PCI_BASE_ADDRESS_2, 92 - gvt_aperture_pa_base(gvt), true); 93 - 94 - vgpu_cfg_space(vgpu)[PCI_COMMAND] &= ~(PCI_COMMAND_IO 95 - | PCI_COMMAND_MEMORY 96 - | PCI_COMMAND_MASTER); 97 - /* 98 - * Clear the bar upper 32bit and let guest to 
assign the new value 99 - */ 100 - memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_1, 0, 4); 101 - memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_3, 0, 4); 102 - memset(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_OPREGION, 0, 4); 103 - 104 - for (i = 0; i < INTEL_GVT_MAX_BAR_NUM; i++) { 105 - vgpu->cfg_space.bar[i].size = pci_resource_len( 106 - gvt->dev_priv->drm.pdev, i * 2); 107 - vgpu->cfg_space.bar[i].tracked = false; 108 - } 109 - } 110 - 111 38 void populate_pvinfo_page(struct intel_vgpu *vgpu) 112 39 { 113 40 /* setup the ballooning information */ ··· 104 177 if (low_avail / min_low == 0) 105 178 break; 106 179 gvt->types[i].low_gm_size = min_low; 107 - gvt->types[i].high_gm_size = 3 * gvt->types[i].low_gm_size; 180 + gvt->types[i].high_gm_size = max((min_low<<3), MB_TO_BYTES(384U)); 108 181 gvt->types[i].fence = 4; 109 182 gvt->types[i].max_instance = low_avail / min_low; 110 183 gvt->types[i].avail_instance = gvt->types[i].max_instance; ··· 144 217 */ 145 218 low_gm_avail = MB_TO_BYTES(256) - HOST_LOW_GM_SIZE - 146 219 gvt->gm.vgpu_allocated_low_gm_size; 147 - high_gm_avail = MB_TO_BYTES(256) * 3 - HOST_HIGH_GM_SIZE - 220 + high_gm_avail = MB_TO_BYTES(256) * 8UL - HOST_HIGH_GM_SIZE - 148 221 gvt->gm.vgpu_allocated_high_gm_size; 149 222 fence_avail = gvt_fence_sz(gvt) - HOST_FENCE - 150 223 gvt->fence.vgpu_allocated_fence_num; ··· 195 268 intel_vgpu_clean_gtt(vgpu); 196 269 intel_gvt_hypervisor_detach_vgpu(vgpu); 197 270 intel_vgpu_free_resource(vgpu); 198 - clean_vgpu_mmio(vgpu); 271 + intel_vgpu_clean_mmio(vgpu); 199 272 vfree(vgpu); 200 273 201 274 intel_gvt_update_vgpu_types(gvt); ··· 227 300 vgpu->gvt = gvt; 228 301 bitmap_zero(vgpu->tlb_handle_pending, I915_NUM_ENGINES); 229 302 230 - setup_vgpu_cfg_space(vgpu, param); 303 + intel_vgpu_init_cfg_space(vgpu, param->primary); 231 304 232 - ret = setup_vgpu_mmio(vgpu); 305 + ret = intel_vgpu_init_mmio(vgpu); 233 306 if (ret) 234 - goto out_free_vgpu; 307 + goto out_clean_idr; 235 308 236 309 ret = 
intel_vgpu_alloc_resource(vgpu, param); 237 310 if (ret) ··· 281 354 out_clean_vgpu_resource: 282 355 intel_vgpu_free_resource(vgpu); 283 356 out_clean_vgpu_mmio: 284 - clean_vgpu_mmio(vgpu); 357 + intel_vgpu_clean_mmio(vgpu); 358 + out_clean_idr: 359 + idr_remove(&gvt->vgpu_idr, vgpu->id); 285 360 out_free_vgpu: 286 361 vfree(vgpu); 287 362 mutex_unlock(&gvt->lock); ··· 327 398 } 328 399 329 400 /** 330 - * intel_gvt_reset_vgpu - reset a virtual GPU 401 + * intel_gvt_reset_vgpu_locked - reset a virtual GPU by DMLR or GT reset 402 + * @vgpu: virtual GPU 403 + * @dmlr: vGPU Device Model Level Reset or GT Reset 404 + * @engine_mask: engines to reset for GT reset 405 + * 406 + * This function is called when a user wants to reset a virtual GPU through 407 + * device model reset or GT reset. The caller should hold the gvt lock. 408 + * 409 + * vGPU Device Model Level Reset (DMLR) simulates the PCI level reset to reset 410 + * the whole vGPU to default state as when it is created. This vGPU function 411 + * is required both for functionality and security concerns. The ultimate goal 412 + * of vGPU FLR is to reuse a vGPU instance across virtual machines. When we 413 + * assign a vGPU to a virtual machine we must issue such a reset first. 414 + * 415 + * Full GT Reset and Per-Engine GT Reset are soft reset flows for GPU engines 416 + * (Render, Blitter, Video, Video Enhancement), as defined by the GPU Spec. 417 + * Unlike the FLR, a GT reset only resets particular resources of a vGPU per 418 + * the reset request. The guest driver can issue a GT reset by programming the 419 + * virtual GDRST register to reset a specific virtual GPU engine or all 420 + * engines. 421 + * 422 + * The parameter dmlr identifies whether we will do a DMLR or a GT reset. 423 + * The parameter engine_mask specifies the engines that need to be 424 + * reset. If the value ALL_ENGINES is given for engine_mask, the 425 + * caller requests a full GT reset in which we reset all virtual 426 + * GPU engines. 

For FLR, engine_mask is ignored. 427 + */ 428 + void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr, 429 + unsigned int engine_mask) 430 + { 431 + struct intel_gvt *gvt = vgpu->gvt; 432 + struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; 433 + 434 + gvt_dbg_core("------------------------------------------\n"); 435 + gvt_dbg_core("resseting vgpu%d, dmlr %d, engine_mask %08x\n", 436 + vgpu->id, dmlr, engine_mask); 437 + vgpu->resetting = true; 438 + 439 + intel_vgpu_stop_schedule(vgpu); 440 + /* 441 + * The current_vgpu will set to NULL after stopping the 442 + * scheduler when the reset is triggered by current vgpu. 443 + */ 444 + if (scheduler->current_vgpu == NULL) { 445 + mutex_unlock(&gvt->lock); 446 + intel_gvt_wait_vgpu_idle(vgpu); 447 + mutex_lock(&gvt->lock); 448 + } 449 + 450 + intel_vgpu_reset_execlist(vgpu, dmlr ? ALL_ENGINES : engine_mask); 451 + 452 + /* full GPU reset or device model level reset */ 453 + if (engine_mask == ALL_ENGINES || dmlr) { 454 + intel_vgpu_reset_gtt(vgpu, dmlr); 455 + intel_vgpu_reset_resource(vgpu); 456 + intel_vgpu_reset_mmio(vgpu); 457 + populate_pvinfo_page(vgpu); 458 + 459 + if (dmlr) 460 + intel_vgpu_reset_cfg_space(vgpu); 461 + } 462 + 463 + vgpu->resetting = false; 464 + gvt_dbg_core("reset vgpu%d done\n", vgpu->id); 465 + gvt_dbg_core("------------------------------------------\n"); 466 + } 467 + 468 + /** 469 + * intel_gvt_reset_vgpu - reset a virtual GPU (Function Level) 331 470 * @vgpu: virtual GPU 332 471 * 333 472 * This function is called when user wants to reset a virtual GPU. ··· 403 406 */ 404 407 void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu) 405 408 { 409 + mutex_lock(&vgpu->gvt->lock); 410 + intel_gvt_reset_vgpu_locked(vgpu, true, 0); 411 + mutex_unlock(&vgpu->gvt->lock); 406 412 }
+1 -1
drivers/gpu/drm/i915/i915_drv.c
··· 2378 2378 2379 2379 assert_forcewakes_inactive(dev_priv); 2380 2380 2381 - if (!IS_VALLEYVIEW(dev_priv) || !IS_CHERRYVIEW(dev_priv)) 2381 + if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv)) 2382 2382 intel_hpd_poll_init(dev_priv); 2383 2383 2384 2384 DRM_DEBUG_KMS("Device suspended\n");
+5
drivers/gpu/drm/i915/i915_drv.h
··· 1977 1977 1978 1978 struct i915_frontbuffer_tracking fb_tracking; 1979 1979 1980 + struct intel_atomic_helper { 1981 + struct llist_head free_list; 1982 + struct work_struct free_work; 1983 + } atomic_helper; 1984 + 1980 1985 u16 orig_clock; 1981 1986 1982 1987 bool mchbar_need_disable;
+4 -30
drivers/gpu/drm/i915/i915_gem.c
··· 595 595 struct drm_i915_gem_pwrite *args, 596 596 struct drm_file *file) 597 597 { 598 - struct drm_device *dev = obj->base.dev; 599 598 void *vaddr = obj->phys_handle->vaddr + args->offset; 600 599 char __user *user_data = u64_to_user_ptr(args->data_ptr); 601 - int ret; 602 600 603 601 /* We manually control the domain here and pretend that it 604 602 * remains coherent i.e. in the GTT domain, like shmem_pwrite. 605 603 */ 606 - lockdep_assert_held(&obj->base.dev->struct_mutex); 607 - ret = i915_gem_object_wait(obj, 608 - I915_WAIT_INTERRUPTIBLE | 609 - I915_WAIT_LOCKED | 610 - I915_WAIT_ALL, 611 - MAX_SCHEDULE_TIMEOUT, 612 - to_rps_client(file)); 613 - if (ret) 614 - return ret; 615 - 616 604 intel_fb_obj_invalidate(obj, ORIGIN_CPU); 617 - if (__copy_from_user_inatomic_nocache(vaddr, user_data, args->size)) { 618 - unsigned long unwritten; 619 - 620 - /* The physical object once assigned is fixed for the lifetime 621 - * of the obj, so we can safely drop the lock and continue 622 - * to access vaddr. 623 - */ 624 - mutex_unlock(&dev->struct_mutex); 625 - unwritten = copy_from_user(vaddr, user_data, args->size); 626 - mutex_lock(&dev->struct_mutex); 627 - if (unwritten) { 628 - ret = -EFAULT; 629 - goto out; 630 - } 631 - } 605 + if (copy_from_user(vaddr, user_data, args->size)) 606 + return -EFAULT; 632 607 633 608 drm_clflush_virt_range(vaddr, args->size); 634 - i915_gem_chipset_flush(to_i915(dev)); 609 + i915_gem_chipset_flush(to_i915(obj->base.dev)); 635 610 636 - out: 637 611 intel_fb_obj_flush(obj, false, ORIGIN_CPU); 638 - return ret; 612 + return 0; 639 613 } 640 614 641 615 void *i915_gem_object_alloc(struct drm_device *dev)
+1
drivers/gpu/drm/i915/i915_gem_evict.c
··· 199 199 } 200 200 201 201 /* Unbinding will emit any required flushes */ 202 + ret = 0; 202 203 while (!list_empty(&eviction_list)) { 203 204 vma = list_first_entry(&eviction_list, 204 205 struct i915_vma,
+1
drivers/gpu/drm/i915/i915_vma.c
··· 185 185 return ret; 186 186 } 187 187 188 + trace_i915_vma_bind(vma, bind_flags); 188 189 ret = vma->vm->bind_vma(vma, cache_level, bind_flags); 189 190 if (ret) 190 191 return ret;
+5 -4
drivers/gpu/drm/i915/intel_crt.c
··· 499 499 struct drm_i915_private *dev_priv = to_i915(crt->base.base.dev); 500 500 struct edid *edid; 501 501 struct i2c_adapter *i2c; 502 + bool ret = false; 502 503 503 504 BUG_ON(crt->base.type != INTEL_OUTPUT_ANALOG); 504 505 ··· 516 515 */ 517 516 if (!is_digital) { 518 517 DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n"); 519 - return true; 518 + ret = true; 519 + } else { 520 + DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [EDID reports a digital panel]\n"); 520 521 } 521 - 522 - DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [EDID reports a digital panel]\n"); 523 522 } else { 524 523 DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [no valid EDID found]\n"); 525 524 } 526 525 527 526 kfree(edid); 528 527 529 - return false; 528 + return ret; 530 529 } 531 530 532 531 static enum drm_connector_status
+44 -5
drivers/gpu/drm/i915/intel_display.c
··· 2251 2251 intel_fill_fb_ggtt_view(&view, fb, rotation); 2252 2252 vma = i915_gem_object_to_ggtt(obj, &view); 2253 2253 2254 + if (WARN_ON_ONCE(!vma)) 2255 + return; 2256 + 2254 2257 i915_vma_unpin_fence(vma); 2255 2258 i915_gem_object_unpin_from_display_plane(vma); 2256 2259 } ··· 2588 2585 * We only keep the x/y offsets, so push all of the 2589 2586 * gtt offset into the x/y offsets. 2590 2587 */ 2591 - _intel_adjust_tile_offset(&x, &y, tile_size, 2592 - tile_width, tile_height, pitch_tiles, 2588 + _intel_adjust_tile_offset(&x, &y, 2589 + tile_width, tile_height, 2590 + tile_size, pitch_tiles, 2593 2591 gtt_offset_rotated * tile_size, 0); 2594 2592 2595 2593 gtt_offset_rotated += rot_info->plane[i].width * rot_info->plane[i].height; ··· 2970 2966 const struct drm_framebuffer *fb = plane_state->base.fb; 2971 2967 unsigned int rotation = plane_state->base.rotation; 2972 2968 int ret; 2969 + 2970 + if (!plane_state->base.visible) 2971 + return 0; 2973 2972 2974 2973 /* Rotate src coordinates to match rotated GTT view */ 2975 2974 if (drm_rotation_90_or_270(rotation)) ··· 6853 6846 } 6854 6847 6855 6848 state = drm_atomic_state_alloc(crtc->dev); 6849 + if (!state) { 6850 + DRM_DEBUG_KMS("failed to disable [CRTC:%d:%s], out of memory", 6851 + crtc->base.id, crtc->name); 6852 + return; 6853 + } 6854 + 6856 6855 state->acquire_ctx = crtc->dev->mode_config.acquire_ctx; 6857 6856 6858 6857 /* Everything's already locked, -EDEADLK can't happen. 
*/ ··· 11256 11243 } 11257 11244 11258 11245 old->restore_state = restore_state; 11246 + drm_atomic_state_put(state); 11259 11247 11260 11248 /* let the connector get through one full cycle before testing */ 11261 11249 intel_wait_for_vblank(dev_priv, intel_crtc->pipe); ··· 14526 14512 break; 14527 14513 14528 14514 case FENCE_FREE: 14529 - drm_atomic_state_put(&state->base); 14530 - break; 14515 + { 14516 + struct intel_atomic_helper *helper = 14517 + &to_i915(state->base.dev)->atomic_helper; 14518 + 14519 + if (llist_add(&state->freed, &helper->free_list)) 14520 + schedule_work(&helper->free_work); 14521 + break; 14522 + } 14531 14523 } 14532 14524 14533 14525 return NOTIFY_DONE; ··· 16412 16392 drm_modeset_acquire_fini(&ctx); 16413 16393 } 16414 16394 16395 + static void intel_atomic_helper_free_state(struct work_struct *work) 16396 + { 16397 + struct drm_i915_private *dev_priv = 16398 + container_of(work, typeof(*dev_priv), atomic_helper.free_work); 16399 + struct intel_atomic_state *state, *next; 16400 + struct llist_node *freed; 16401 + 16402 + freed = llist_del_all(&dev_priv->atomic_helper.free_list); 16403 + llist_for_each_entry_safe(state, next, freed, freed) 16404 + drm_atomic_state_put(&state->base); 16405 + } 16406 + 16415 16407 int intel_modeset_init(struct drm_device *dev) 16416 16408 { 16417 16409 struct drm_i915_private *dev_priv = to_i915(dev); ··· 16442 16410 dev->mode_config.allow_fb_modifiers = true; 16443 16411 16444 16412 dev->mode_config.funcs = &intel_mode_funcs; 16413 + 16414 + INIT_WORK(&dev_priv->atomic_helper.free_work, 16415 + intel_atomic_helper_free_state); 16445 16416 16446 16417 intel_init_quirks(dev); 16447 16418 ··· 17059 17024 17060 17025 if (ret) 17061 17026 DRM_ERROR("Restoring old state failed with %i\n", ret); 17062 - drm_atomic_state_put(state); 17027 + if (state) 17028 + drm_atomic_state_put(state); 17063 17029 } 17064 17030 17065 17031 void intel_modeset_gem_init(struct drm_device *dev) ··· 17129 17093 void 
intel_modeset_cleanup(struct drm_device *dev) 17130 17094 { 17131 17095 struct drm_i915_private *dev_priv = to_i915(dev); 17096 + 17097 + flush_work(&dev_priv->atomic_helper.free_work); 17098 + WARN_ON(!llist_empty(&dev_priv->atomic_helper.free_list)); 17132 17099 17133 17100 intel_disable_gt_powersave(dev_priv); 17134 17101
+2
drivers/gpu/drm/i915/intel_drv.h
··· 370 370 struct skl_wm_values wm_results; 371 371 372 372 struct i915_sw_fence commit_ready; 373 + 374 + struct llist_node freed; 373 375 }; 374 376 375 377 struct intel_plane_state {
+3
drivers/gpu/drm/i915/intel_fbdev.c
··· 742 742 { 743 743 struct intel_fbdev *ifbdev = to_i915(dev)->fbdev; 744 744 745 + if (!ifbdev) 746 + return; 747 + 745 748 ifbdev->cookie = async_schedule(intel_fbdev_initial_config, ifbdev); 746 749 } 747 750
-10
drivers/gpu/drm/i915/intel_lrc.c
··· 979 979 uint32_t *batch, 980 980 uint32_t index) 981 981 { 982 - struct drm_i915_private *dev_priv = engine->i915; 983 982 uint32_t l3sqc4_flush = (0x40400000 | GEN8_LQSC_FLUSH_COHERENT_LINES); 984 - 985 - /* 986 - * WaDisableLSQCROPERFforOCL:kbl 987 - * This WA is implemented in skl_init_clock_gating() but since 988 - * this batch updates GEN8_L3SQCREG4 with default value we need to 989 - * set this bit here to retain the WA during flush. 990 - */ 991 - if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_E0)) 992 - l3sqc4_flush |= GEN8_LQSC_RO_PERF_DIS; 993 983 994 984 wa_ctx_emit(batch, index, (MI_STORE_REGISTER_MEM_GEN8 | 995 985 MI_SRM_LRM_GLOBAL_GTT));
-8
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 1095 1095 WA_SET_BIT_MASKED(HDC_CHICKEN0, 1096 1096 HDC_FENCE_DEST_SLM_DISABLE); 1097 1097 1098 - /* GEN8_L3SQCREG4 has a dependency with WA batch so any new changes 1099 - * involving this register should also be added to WA batch as required. 1100 - */ 1101 - if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_E0)) 1102 - /* WaDisableLSQCROPERFforOCL:kbl */ 1103 - I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) | 1104 - GEN8_LQSC_RO_PERF_DIS); 1105 - 1106 1098 /* WaToEnableHwFixForPushConstHWBug:kbl */ 1107 1099 if (IS_KBL_REVID(dev_priv, KBL_REVID_C0, REVID_FOREVER)) 1108 1100 WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
+2 -3
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 345 345 { 346 346 struct adreno_platform_config *config = pdev->dev.platform_data; 347 347 struct msm_gpu *gpu = &adreno_gpu->base; 348 - struct msm_mmu *mmu; 349 348 int ret; 350 349 351 350 adreno_gpu->funcs = funcs; ··· 384 385 return ret; 385 386 } 386 387 387 - mmu = gpu->aspace->mmu; 388 - if (mmu) { 388 + if (gpu->aspace && gpu->aspace->mmu) { 389 + struct msm_mmu *mmu = gpu->aspace->mmu; 389 390 ret = mmu->funcs->attach(mmu, iommu_ports, 390 391 ARRAY_SIZE(iommu_ports)); 391 392 if (ret)
-6
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
··· 119 119 120 120 static void mdp5_complete_commit(struct msm_kms *kms, struct drm_atomic_state *state) 121 121 { 122 - int i; 123 122 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 124 - struct drm_plane *plane; 125 - struct drm_plane_state *plane_state; 126 - 127 - for_each_plane_in_state(state, plane, plane_state, i) 128 - mdp5_plane_complete_commit(plane, plane_state); 129 123 130 124 if (mdp5_kms->smp) 131 125 mdp5_smp_complete_commit(mdp5_kms->smp, &mdp5_kms->state->smp);
-4
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
··· 104 104 105 105 /* assigned by crtc blender */ 106 106 enum mdp_mixer_stage_id stage; 107 - 108 - bool pending : 1; 109 107 }; 110 108 #define to_mdp5_plane_state(x) \ 111 109 container_of(x, struct mdp5_plane_state, base) ··· 230 232 void mdp5_irq_domain_fini(struct mdp5_kms *mdp5_kms); 231 233 232 234 uint32_t mdp5_plane_get_flush(struct drm_plane *plane); 233 - void mdp5_plane_complete_commit(struct drm_plane *plane, 234 - struct drm_plane_state *state); 235 235 enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane); 236 236 struct drm_plane *mdp5_plane_init(struct drm_device *dev, bool primary); 237 237
-22
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
··· 179 179 drm_printf(p, "\tzpos=%u\n", pstate->zpos); 180 180 drm_printf(p, "\talpha=%u\n", pstate->alpha); 181 181 drm_printf(p, "\tstage=%s\n", stage2name(pstate->stage)); 182 - drm_printf(p, "\tpending=%u\n", pstate->pending); 183 182 } 184 183 185 184 static void mdp5_plane_reset(struct drm_plane *plane) ··· 218 219 219 220 if (mdp5_state && mdp5_state->base.fb) 220 221 drm_framebuffer_reference(mdp5_state->base.fb); 221 - 222 - mdp5_state->pending = false; 223 222 224 223 return &mdp5_state->base; 225 224 } ··· 284 287 285 288 DBG("%s: check (%d -> %d)", plane->name, 286 289 plane_enabled(old_state), plane_enabled(state)); 287 - 288 - /* We don't allow faster-than-vblank updates.. if we did add this 289 - * some day, we would need to disallow in cases where hwpipe 290 - * changes 291 - */ 292 - if (WARN_ON(to_mdp5_plane_state(old_state)->pending)) 293 - return -EBUSY; 294 290 295 291 max_width = config->hw->lm.max_width << 16; 296 292 max_height = config->hw->lm.max_height << 16; ··· 360 370 struct drm_plane_state *old_state) 361 371 { 362 372 struct drm_plane_state *state = plane->state; 363 - struct mdp5_plane_state *mdp5_state = to_mdp5_plane_state(state); 364 373 365 374 DBG("%s: update", plane->name); 366 - 367 - mdp5_state->pending = true; 368 375 369 376 if (plane_enabled(state)) { 370 377 int ret; ··· 836 849 return 0; 837 850 838 851 return pstate->hwpipe->flush_mask; 839 - } 840 - 841 - /* called after vsync in thread context */ 842 - void mdp5_plane_complete_commit(struct drm_plane *plane, 843 - struct drm_plane_state *state) 844 - { 845 - struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state); 846 - 847 - pstate->pending = false; 848 852 } 849 853 850 854 /* initialize plane */
+2
drivers/gpu/drm/msm/msm_gem.c
··· 294 294 WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 295 295 296 296 for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) { 297 + if (!priv->aspace[id]) 298 + continue; 297 299 msm_gem_unmap_vma(priv->aspace[id], 298 300 &msm_obj->domain[id], msm_obj->sgt); 299 301 }
+2 -1
drivers/gpu/drm/nouveau/nouveau_display.c
··· 411 411 return ret; 412 412 413 413 /* enable polling for external displays */ 414 - drm_kms_helper_poll_enable(dev); 414 + if (!dev->mode_config.poll_enabled) 415 + drm_kms_helper_poll_enable(dev); 415 416 416 417 /* enable hotplug interrupts */ 417 418 list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+4 -1
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 773 773 pci_set_master(pdev); 774 774 775 775 ret = nouveau_do_resume(drm_dev, true); 776 - drm_kms_helper_poll_enable(drm_dev); 776 + 777 + if (!drm_dev->mode_config.poll_enabled) 778 + drm_kms_helper_poll_enable(drm_dev); 779 + 777 780 /* do magic */ 778 781 nvif_mask(&device->object, 0x088488, (1 << 25), (1 << 25)); 779 782 vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_ON);
+2
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 165 165 struct backlight_device *backlight; 166 166 struct list_head bl_connectors; 167 167 struct work_struct hpd_work; 168 + struct work_struct fbcon_work; 169 + int fbcon_new_state; 168 170 #ifdef CONFIG_ACPI 169 171 struct notifier_block acpi_nb; 170 172 #endif
+34 -9
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 470 470 .fb_probe = nouveau_fbcon_create, 471 471 }; 472 472 473 + static void 474 + nouveau_fbcon_set_suspend_work(struct work_struct *work) 475 + { 476 + struct nouveau_drm *drm = container_of(work, typeof(*drm), fbcon_work); 477 + int state = READ_ONCE(drm->fbcon_new_state); 478 + 479 + if (state == FBINFO_STATE_RUNNING) 480 + pm_runtime_get_sync(drm->dev->dev); 481 + 482 + console_lock(); 483 + if (state == FBINFO_STATE_RUNNING) 484 + nouveau_fbcon_accel_restore(drm->dev); 485 + drm_fb_helper_set_suspend(&drm->fbcon->helper, state); 486 + if (state != FBINFO_STATE_RUNNING) 487 + nouveau_fbcon_accel_save_disable(drm->dev); 488 + console_unlock(); 489 + 490 + if (state == FBINFO_STATE_RUNNING) { 491 + pm_runtime_mark_last_busy(drm->dev->dev); 492 + pm_runtime_put_sync(drm->dev->dev); 493 + } 494 + } 495 + 473 496 void 474 497 nouveau_fbcon_set_suspend(struct drm_device *dev, int state) 475 498 { 476 499 struct nouveau_drm *drm = nouveau_drm(dev); 477 - if (drm->fbcon) { 478 - console_lock(); 479 - if (state == FBINFO_STATE_RUNNING) 480 - nouveau_fbcon_accel_restore(dev); 481 - drm_fb_helper_set_suspend(&drm->fbcon->helper, state); 482 - if (state != FBINFO_STATE_RUNNING) 483 - nouveau_fbcon_accel_save_disable(dev); 484 - console_unlock(); 485 - } 500 + 501 + if (!drm->fbcon) 502 + return; 503 + 504 + drm->fbcon_new_state = state; 505 + /* Since runtime resume can happen as a result of a sysfs operation, 506 + * it's possible we already have the console locked. So handle fbcon 507 + * init/deinit from a separate work thread 508 + */ 509 + schedule_work(&drm->fbcon_work); 486 510 } 487 511 488 512 int ··· 526 502 return -ENOMEM; 527 503 528 504 drm->fbcon = fbcon; 505 + INIT_WORK(&drm->fbcon_work, nouveau_fbcon_set_suspend_work); 529 506 530 507 drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs); 531 508
+3 -4
drivers/gpu/drm/radeon/radeon_drv.c
··· 366 366 radeon_pci_shutdown(struct pci_dev *pdev) 367 367 { 368 368 /* if we are running in a VM, make sure the device 369 - * torn down properly on reboot/shutdown. 370 - * unfortunately we can't detect certain 371 - * hypervisors so just do this all the time. 369 + * torn down properly on reboot/shutdown 372 370 */ 373 - radeon_pci_remove(pdev); 371 + if (radeon_device_is_virtual()) 372 + radeon_pci_remove(pdev); 374 373 } 375 374 376 375 static int radeon_pmops_suspend(struct device *dev)
+20 -5
drivers/gpu/drm/radeon/si.c
··· 114 114 MODULE_FIRMWARE("radeon/hainan_rlc.bin"); 115 115 MODULE_FIRMWARE("radeon/hainan_smc.bin"); 116 116 MODULE_FIRMWARE("radeon/hainan_k_smc.bin"); 117 + MODULE_FIRMWARE("radeon/banks_k_2_smc.bin"); 118 + 119 + MODULE_FIRMWARE("radeon/si58_mc.bin"); 117 120 118 121 static u32 si_get_cu_active_bitmap(struct radeon_device *rdev, u32 se, u32 sh); 119 122 static void si_pcie_gen3_enable(struct radeon_device *rdev); ··· 1653 1650 int err; 1654 1651 int new_fw = 0; 1655 1652 bool new_smc = false; 1653 + bool si58_fw = false; 1654 + bool banks2_fw = false; 1656 1655 1657 1656 DRM_DEBUG("\n"); 1658 1657 ··· 1732 1727 ((rdev->pdev->device == 0x6660) || 1733 1728 (rdev->pdev->device == 0x6663) || 1734 1729 (rdev->pdev->device == 0x6665) || 1735 - (rdev->pdev->device == 0x6667))) || 1736 - ((rdev->pdev->revision == 0xc3) && 1737 - (rdev->pdev->device == 0x6665))) 1730 + (rdev->pdev->device == 0x6667)))) 1738 1731 new_smc = true; 1732 + else if ((rdev->pdev->revision == 0xc3) && 1733 + (rdev->pdev->device == 0x6665)) 1734 + banks2_fw = true; 1739 1735 new_chip_name = "hainan"; 1740 1736 pfp_req_size = SI_PFP_UCODE_SIZE * 4; 1741 1737 me_req_size = SI_PM4_UCODE_SIZE * 4; ··· 1747 1741 break; 1748 1742 default: BUG(); 1749 1743 } 1744 + 1745 + /* this memory configuration requires special firmware */ 1746 + if (((RREG32(MC_SEQ_MISC0) & 0xff000000) >> 24) == 0x58) 1747 + si58_fw = true; 1750 1748 1751 1749 DRM_INFO("Loading %s Microcode\n", new_chip_name); 1752 1750 ··· 1855 1845 } 1856 1846 } 1857 1847 1858 - snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", new_chip_name); 1848 + if (si58_fw) 1849 + snprintf(fw_name, sizeof(fw_name), "radeon/si58_mc.bin"); 1850 + else 1851 + snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", new_chip_name); 1859 1852 err = request_firmware(&rdev->mc_fw, fw_name, rdev->dev); 1860 1853 if (err) { 1861 1854 snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc2.bin", chip_name); ··· 1889 1876 } 1890 1877 } 1891 1878 1892 - if (new_smc) 
1879 + if (banks2_fw) 1880 + snprintf(fw_name, sizeof(fw_name), "radeon/banks_k_2_smc.bin"); 1881 + else if (new_smc) 1893 1882 snprintf(fw_name, sizeof(fw_name), "radeon/%s_k_smc.bin", new_chip_name); 1894 1883 else 1895 1884 snprintf(fw_name, sizeof(fw_name), "radeon/%s_smc.bin", new_chip_name);
-12
drivers/gpu/drm/radeon/si_dpm.c
··· 3008 3008 (rdev->pdev->device == 0x6817) || 3009 3009 (rdev->pdev->device == 0x6806)) 3010 3010 max_mclk = 120000; 3011 - } else if (rdev->family == CHIP_OLAND) { 3012 - if ((rdev->pdev->revision == 0xC7) || 3013 - (rdev->pdev->revision == 0x80) || 3014 - (rdev->pdev->revision == 0x81) || 3015 - (rdev->pdev->revision == 0x83) || 3016 - (rdev->pdev->revision == 0x87) || 3017 - (rdev->pdev->device == 0x6604) || 3018 - (rdev->pdev->device == 0x6605)) { 3019 - max_sclk = 75000; 3020 - max_mclk = 80000; 3021 - } 3022 3011 } else if (rdev->family == CHIP_HAINAN) { 3023 3012 if ((rdev->pdev->revision == 0x81) || 3024 3013 (rdev->pdev->revision == 0x83) || ··· 3016 3027 (rdev->pdev->device == 0x6665) || 3017 3028 (rdev->pdev->device == 0x6667)) { 3018 3029 max_sclk = 75000; 3019 - max_mclk = 80000; 3020 3030 } 3021 3031 } 3022 3032 /* Apply dpm quirks */
+1 -1
drivers/gpu/drm/vc4/vc4_crtc.c
··· 839 839 840 840 } 841 841 842 - __drm_atomic_helper_crtc_destroy_state(state); 842 + drm_atomic_helper_crtc_destroy_state(crtc, state); 843 843 } 844 844 845 845 static const struct drm_crtc_funcs vc4_crtc_funcs = {
+3 -1
drivers/gpu/drm/vc4/vc4_gem.c
··· 594 594 args->shader_rec_count); 595 595 struct vc4_bo *bo; 596 596 597 - if (uniforms_offset < shader_rec_offset || 597 + if (shader_rec_offset < args->bin_cl_size || 598 + uniforms_offset < shader_rec_offset || 598 599 exec_size < uniforms_offset || 599 600 args->shader_rec_count >= (UINT_MAX / 600 601 sizeof(struct vc4_shader_state)) || 601 602 temp_size < exec_size) { 602 603 DRM_ERROR("overflow in exec arguments\n"); 604 + ret = -EINVAL; 603 605 goto fail; 604 606 } 605 607
+1 -1
drivers/gpu/drm/vc4/vc4_render_cl.c
··· 461 461 } 462 462 463 463 ret = vc4_full_res_bounds_check(exec, *obj, surf); 464 - if (!ret) 464 + if (ret) 465 465 return ret; 466 466 467 467 return 0;
+1 -1
drivers/gpu/drm/virtio/virtgpu_fb.c
··· 331 331 info->fbops = &virtio_gpufb_ops; 332 332 info->pixmap.flags = FB_PIXMAP_SYSTEM; 333 333 334 - info->screen_base = obj->vmap; 334 + info->screen_buffer = obj->vmap; 335 335 info->screen_size = obj->gem_base.size; 336 336 drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); 337 337 drm_fb_helper_fill_var(info, &vfbdev->helper,
+4 -4
drivers/i2c/busses/i2c-cadence.c
··· 962 962 goto err_clk_dis; 963 963 } 964 964 965 - ret = i2c_add_adapter(&id->adap); 966 - if (ret < 0) 967 - goto err_clk_dis; 968 - 969 965 /* 970 966 * Cadence I2C controller has a bug wherein it generates 971 967 * invalid read transaction after HW timeout in master receiver mode. ··· 970 974 * is written to this register to reduce the chances of error. 971 975 */ 972 976 cdns_i2c_writereg(CDNS_I2C_TIMEOUT_MAX, CDNS_I2C_TIME_OUT_OFFSET); 977 + 978 + ret = i2c_add_adapter(&id->adap); 979 + if (ret < 0) 980 + goto err_clk_dis; 973 981 974 982 dev_info(&pdev->dev, "%u kHz mmio %08lx irq %d\n", 975 983 id->i2c_clk / 1000, (unsigned long)r_mem->start, id->irq);
+20
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 28 28 #include <linux/module.h> 29 29 #include <linux/of.h> 30 30 #include <linux/of_device.h> 31 + #include <linux/pinctrl/consumer.h> 31 32 #include <linux/platform_device.h> 32 33 #include <linux/sched.h> 33 34 #include <linux/slab.h> ··· 637 636 return 0; 638 637 } 639 638 639 + #ifdef CONFIG_PM_SLEEP 640 + static int lpi2c_imx_suspend(struct device *dev) 641 + { 642 + pinctrl_pm_select_sleep_state(dev); 643 + 644 + return 0; 645 + } 646 + 647 + static int lpi2c_imx_resume(struct device *dev) 648 + { 649 + pinctrl_pm_select_default_state(dev); 650 + 651 + return 0; 652 + } 653 + #endif 654 + 655 + static SIMPLE_DEV_PM_OPS(imx_lpi2c_pm, lpi2c_imx_suspend, lpi2c_imx_resume); 656 + 640 657 static struct platform_driver lpi2c_imx_driver = { 641 658 .probe = lpi2c_imx_probe, 642 659 .remove = lpi2c_imx_remove, 643 660 .driver = { 644 661 .name = DRIVER_NAME, 645 662 .of_match_table = lpi2c_imx_of_match, 663 + .pm = &imx_lpi2c_pm, 646 664 }, 647 665 }; 648 666
+2 -1
drivers/infiniband/core/cma.c
··· 2811 2811 if (!src_addr || !src_addr->sa_family) { 2812 2812 src_addr = (struct sockaddr *) &id->route.addr.src_addr; 2813 2813 src_addr->sa_family = dst_addr->sa_family; 2814 - if (dst_addr->sa_family == AF_INET6) { 2814 + if (IS_ENABLED(CONFIG_IPV6) && 2815 + dst_addr->sa_family == AF_INET6) { 2815 2816 struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *) src_addr; 2816 2817 struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *) dst_addr; 2817 2818 src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id;
+2
drivers/infiniband/core/umem.c
··· 134 134 IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND)); 135 135 136 136 if (access & IB_ACCESS_ON_DEMAND) { 137 + put_pid(umem->pid); 137 138 ret = ib_umem_odp_get(context, umem); 138 139 if (ret) { 139 140 kfree(umem); ··· 150 149 151 150 page_list = (struct page **) __get_free_page(GFP_KERNEL); 152 151 if (!page_list) { 152 + put_pid(umem->pid); 153 153 kfree(umem); 154 154 return ERR_PTR(-ENOMEM); 155 155 }
+1 -10
drivers/infiniband/hw/cxgb3/iwch_provider.c
··· 1135 1135 1136 1136 memset(props, 0, sizeof(struct ib_port_attr)); 1137 1137 props->max_mtu = IB_MTU_4096; 1138 - if (netdev->mtu >= 4096) 1139 - props->active_mtu = IB_MTU_4096; 1140 - else if (netdev->mtu >= 2048) 1141 - props->active_mtu = IB_MTU_2048; 1142 - else if (netdev->mtu >= 1024) 1143 - props->active_mtu = IB_MTU_1024; 1144 - else if (netdev->mtu >= 512) 1145 - props->active_mtu = IB_MTU_512; 1146 - else 1147 - props->active_mtu = IB_MTU_256; 1138 + props->active_mtu = ib_mtu_int_to_enum(netdev->mtu); 1148 1139 1149 1140 if (!netif_carrier_ok(netdev)) 1150 1141 props->state = IB_PORT_DOWN;
+4 -3
drivers/infiniband/hw/cxgb4/cm.c
··· 1804 1804 skb_trim(skb, dlen); 1805 1805 mutex_lock(&ep->com.mutex); 1806 1806 1807 - /* update RX credits */ 1808 - update_rx_credits(ep, dlen); 1809 - 1810 1807 switch (ep->com.state) { 1811 1808 case MPA_REQ_SENT: 1809 + update_rx_credits(ep, dlen); 1812 1810 ep->rcv_seq += dlen; 1813 1811 disconnect = process_mpa_reply(ep, skb); 1814 1812 break; 1815 1813 case MPA_REQ_WAIT: 1814 + update_rx_credits(ep, dlen); 1816 1815 ep->rcv_seq += dlen; 1817 1816 disconnect = process_mpa_request(ep, skb); 1818 1817 break; 1819 1818 case FPDU_MODE: { 1820 1819 struct c4iw_qp_attributes attrs; 1820 + 1821 + update_rx_credits(ep, dlen); 1821 1822 BUG_ON(!ep->com.qp); 1822 1823 if (status) 1823 1824 pr_err("%s Unexpected streaming data." \
+13 -8
drivers/infiniband/hw/cxgb4/cq.c
··· 505 505 } 506 506 507 507 /* 508 + * Special cqe for drain WR completions... 509 + */ 510 + if (CQE_OPCODE(hw_cqe) == C4IW_DRAIN_OPCODE) { 511 + *cookie = CQE_DRAIN_COOKIE(hw_cqe); 512 + *cqe = *hw_cqe; 513 + goto skip_cqe; 514 + } 515 + 516 + /* 508 517 * Gotta tweak READ completions: 509 518 * 1) the cqe doesn't contain the sq_wptr from the wr. 510 519 * 2) opcode not reflected from the wr. ··· 762 753 c4iw_invalidate_mr(qhp->rhp, 763 754 CQE_WRID_FR_STAG(&cqe)); 764 755 break; 756 + case C4IW_DRAIN_OPCODE: 757 + wc->opcode = IB_WC_SEND; 758 + break; 765 759 default: 766 760 printk(KERN_ERR MOD "Unexpected opcode %d " 767 761 "in the CQE received for QPID=0x%0x\n", ··· 829 817 } 830 818 } 831 819 out: 832 - if (wq) { 833 - if (unlikely(qhp->attr.state != C4IW_QP_STATE_RTS)) { 834 - if (t4_sq_empty(wq)) 835 - complete(&qhp->sq_drained); 836 - if (t4_rq_empty(wq)) 837 - complete(&qhp->rq_drained); 838 - } 820 + if (wq) 839 821 spin_unlock(&qhp->lock); 840 - } 841 822 return ret; 842 823 } 843 824
+9
drivers/infiniband/hw/cxgb4/device.c
··· 846 846 } 847 847 } 848 848 849 + rdev->free_workq = create_singlethread_workqueue("iw_cxgb4_free"); 850 + if (!rdev->free_workq) { 851 + err = -ENOMEM; 852 + goto err_free_status_page; 853 + } 854 + 849 855 rdev->status_page->db_off = 0; 850 856 851 857 return 0; 858 + err_free_status_page: 859 + free_page((unsigned long)rdev->status_page); 852 860 destroy_ocqp_pool: 853 861 c4iw_ocqp_pool_destroy(rdev); 854 862 destroy_rqtpool: ··· 870 862 871 863 static void c4iw_rdev_close(struct c4iw_rdev *rdev) 872 864 { 865 + destroy_workqueue(rdev->free_workq); 873 866 kfree(rdev->wr_log); 874 867 free_page((unsigned long)rdev->status_page); 875 868 c4iw_pblpool_destroy(rdev);
+20 -4
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 45 45 #include <linux/kref.h> 46 46 #include <linux/timer.h> 47 47 #include <linux/io.h> 48 + #include <linux/workqueue.h> 48 49 49 50 #include <asm/byteorder.h> 50 51 ··· 108 107 struct list_head qpids; 109 108 struct list_head cqids; 110 109 struct mutex lock; 110 + struct kref kref; 111 111 }; 112 112 113 113 enum c4iw_rdev_flags { ··· 185 183 atomic_t wr_log_idx; 186 184 struct wr_log_entry *wr_log; 187 185 int wr_log_size; 186 + struct workqueue_struct *free_workq; 188 187 }; 189 188 190 189 static inline int c4iw_fatal_error(struct c4iw_rdev *rdev) ··· 483 480 wait_queue_head_t wait; 484 481 struct timer_list timer; 485 482 int sq_sig_all; 486 - struct completion rq_drained; 487 - struct completion sq_drained; 483 + struct work_struct free_work; 484 + struct c4iw_ucontext *ucontext; 488 485 }; 489 486 490 487 static inline struct c4iw_qp *to_c4iw_qp(struct ib_qp *ibqp) ··· 498 495 u32 key; 499 496 spinlock_t mmap_lock; 500 497 struct list_head mmaps; 498 + struct kref kref; 501 499 }; 502 500 503 501 static inline struct c4iw_ucontext *to_c4iw_ucontext(struct ib_ucontext *c) 504 502 { 505 503 return container_of(c, struct c4iw_ucontext, ibucontext); 504 + } 505 + 506 + void _c4iw_free_ucontext(struct kref *kref); 507 + 508 + static inline void c4iw_put_ucontext(struct c4iw_ucontext *ucontext) 509 + { 510 + kref_put(&ucontext->kref, _c4iw_free_ucontext); 511 + } 512 + 513 + static inline void c4iw_get_ucontext(struct c4iw_ucontext *ucontext) 514 + { 515 + kref_get(&ucontext->kref); 506 516 } 507 517 508 518 struct c4iw_mm_entry { ··· 630 614 } 631 615 return IB_QPS_ERR; 632 616 } 617 + 618 + #define C4IW_DRAIN_OPCODE FW_RI_SGE_EC_CR_RETURN 633 619 634 620 static inline u32 c4iw_ib_to_tpt_access(int a) 635 621 { ··· 1015 997 extern int db_fc_threshold; 1016 998 extern int db_coalescing_threshold; 1017 999 extern int use_dsgl; 1018 - void c4iw_drain_rq(struct ib_qp *qp); 1019 - void c4iw_drain_sq(struct ib_qp *qp); 1020 1000 void c4iw_invalidate_mr(struct 
c4iw_dev *rhp, u32 rkey); 1021 1001 1022 1002 #endif
+17 -16
drivers/infiniband/hw/cxgb4/provider.c
··· 93 93 return -ENOSYS; 94 94 } 95 95 96 - static int c4iw_dealloc_ucontext(struct ib_ucontext *context) 96 + void _c4iw_free_ucontext(struct kref *kref) 97 97 { 98 - struct c4iw_dev *rhp = to_c4iw_dev(context->device); 99 - struct c4iw_ucontext *ucontext = to_c4iw_ucontext(context); 98 + struct c4iw_ucontext *ucontext; 99 + struct c4iw_dev *rhp; 100 100 struct c4iw_mm_entry *mm, *tmp; 101 101 102 - PDBG("%s context %p\n", __func__, context); 102 + ucontext = container_of(kref, struct c4iw_ucontext, kref); 103 + rhp = to_c4iw_dev(ucontext->ibucontext.device); 104 + 105 + PDBG("%s ucontext %p\n", __func__, ucontext); 103 106 list_for_each_entry_safe(mm, tmp, &ucontext->mmaps, entry) 104 107 kfree(mm); 105 108 c4iw_release_dev_ucontext(&rhp->rdev, &ucontext->uctx); 106 109 kfree(ucontext); 110 + } 111 + 112 + static int c4iw_dealloc_ucontext(struct ib_ucontext *context) 113 + { 114 + struct c4iw_ucontext *ucontext = to_c4iw_ucontext(context); 115 + 116 + PDBG("%s context %p\n", __func__, context); 117 + c4iw_put_ucontext(ucontext); 107 118 return 0; 108 119 } 109 120 ··· 138 127 c4iw_init_dev_ucontext(&rhp->rdev, &context->uctx); 139 128 INIT_LIST_HEAD(&context->mmaps); 140 129 spin_lock_init(&context->mmap_lock); 130 + kref_init(&context->kref); 141 131 142 132 if (udata->outlen < sizeof(uresp) - sizeof(uresp.reserved)) { 143 133 if (!warned++) ··· 373 361 374 362 memset(props, 0, sizeof(struct ib_port_attr)); 375 363 props->max_mtu = IB_MTU_4096; 376 - if (netdev->mtu >= 4096) 377 - props->active_mtu = IB_MTU_4096; 378 - else if (netdev->mtu >= 2048) 379 - props->active_mtu = IB_MTU_2048; 380 - else if (netdev->mtu >= 1024) 381 - props->active_mtu = IB_MTU_1024; 382 - else if (netdev->mtu >= 512) 383 - props->active_mtu = IB_MTU_512; 384 - else 385 - props->active_mtu = IB_MTU_256; 364 + props->active_mtu = ib_mtu_int_to_enum(netdev->mtu); 386 365 387 366 if (!netif_carrier_ok(netdev)) 388 367 props->state = IB_PORT_DOWN; ··· 610 607 dev->ibdev.uverbs_abi_ver = 
C4IW_UVERBS_ABI_VERSION; 611 608 dev->ibdev.get_port_immutable = c4iw_port_immutable; 612 609 dev->ibdev.get_dev_fw_str = get_dev_fw_str; 613 - dev->ibdev.drain_sq = c4iw_drain_sq; 614 - dev->ibdev.drain_rq = c4iw_drain_rq; 615 610 616 611 dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL); 617 612 if (!dev->ibdev.iwcm)
+94 -53
drivers/infiniband/hw/cxgb4/qp.c
··· 715 715 return 0; 716 716 } 717 717 718 - static void _free_qp(struct kref *kref) 718 + static void free_qp_work(struct work_struct *work) 719 + { 720 + struct c4iw_ucontext *ucontext; 721 + struct c4iw_qp *qhp; 722 + struct c4iw_dev *rhp; 723 + 724 + qhp = container_of(work, struct c4iw_qp, free_work); 725 + ucontext = qhp->ucontext; 726 + rhp = qhp->rhp; 727 + 728 + PDBG("%s qhp %p ucontext %p\n", __func__, qhp, ucontext); 729 + destroy_qp(&rhp->rdev, &qhp->wq, 730 + ucontext ? &ucontext->uctx : &rhp->rdev.uctx); 731 + 732 + if (ucontext) 733 + c4iw_put_ucontext(ucontext); 734 + kfree(qhp); 735 + } 736 + 737 + static void queue_qp_free(struct kref *kref) 719 738 { 720 739 struct c4iw_qp *qhp; 721 740 722 741 qhp = container_of(kref, struct c4iw_qp, kref); 723 742 PDBG("%s qhp %p\n", __func__, qhp); 724 - kfree(qhp); 743 + queue_work(qhp->rhp->rdev.free_workq, &qhp->free_work); 725 744 } 726 745 727 746 void c4iw_qp_add_ref(struct ib_qp *qp) ··· 752 733 void c4iw_qp_rem_ref(struct ib_qp *qp) 753 734 { 754 735 PDBG("%s ib_qp %p\n", __func__, qp); 755 - kref_put(&to_c4iw_qp(qp)->kref, _free_qp); 736 + kref_put(&to_c4iw_qp(qp)->kref, queue_qp_free); 756 737 } 757 738 758 739 static void add_to_fc_list(struct list_head *head, struct list_head *entry) ··· 795 776 return 0; 796 777 } 797 778 779 + static void complete_sq_drain_wr(struct c4iw_qp *qhp, struct ib_send_wr *wr) 780 + { 781 + struct t4_cqe cqe = {}; 782 + struct c4iw_cq *schp; 783 + unsigned long flag; 784 + struct t4_cq *cq; 785 + 786 + schp = to_c4iw_cq(qhp->ibqp.send_cq); 787 + cq = &schp->cq; 788 + 789 + cqe.u.drain_cookie = wr->wr_id; 790 + cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) | 791 + CQE_OPCODE_V(C4IW_DRAIN_OPCODE) | 792 + CQE_TYPE_V(1) | 793 + CQE_SWCQE_V(1) | 794 + CQE_QPID_V(qhp->wq.sq.qid)); 795 + 796 + spin_lock_irqsave(&schp->lock, flag); 797 + cqe.bits_type_ts = cpu_to_be64(CQE_GENBIT_V((u64)cq->gen)); 798 + cq->sw_queue[cq->sw_pidx] = cqe; 799 + t4_swcq_produce(cq); 800 + 
spin_unlock_irqrestore(&schp->lock, flag); 801 + 802 + spin_lock_irqsave(&schp->comp_handler_lock, flag); 803 + (*schp->ibcq.comp_handler)(&schp->ibcq, 804 + schp->ibcq.cq_context); 805 + spin_unlock_irqrestore(&schp->comp_handler_lock, flag); 806 + } 807 + 808 + static void complete_rq_drain_wr(struct c4iw_qp *qhp, struct ib_recv_wr *wr) 809 + { 810 + struct t4_cqe cqe = {}; 811 + struct c4iw_cq *rchp; 812 + unsigned long flag; 813 + struct t4_cq *cq; 814 + 815 + rchp = to_c4iw_cq(qhp->ibqp.recv_cq); 816 + cq = &rchp->cq; 817 + 818 + cqe.u.drain_cookie = wr->wr_id; 819 + cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) | 820 + CQE_OPCODE_V(C4IW_DRAIN_OPCODE) | 821 + CQE_TYPE_V(0) | 822 + CQE_SWCQE_V(1) | 823 + CQE_QPID_V(qhp->wq.sq.qid)); 824 + 825 + spin_lock_irqsave(&rchp->lock, flag); 826 + cqe.bits_type_ts = cpu_to_be64(CQE_GENBIT_V((u64)cq->gen)); 827 + cq->sw_queue[cq->sw_pidx] = cqe; 828 + t4_swcq_produce(cq); 829 + spin_unlock_irqrestore(&rchp->lock, flag); 830 + 831 + spin_lock_irqsave(&rchp->comp_handler_lock, flag); 832 + (*rchp->ibcq.comp_handler)(&rchp->ibcq, 833 + rchp->ibcq.cq_context); 834 + spin_unlock_irqrestore(&rchp->comp_handler_lock, flag); 835 + } 836 + 798 837 int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, 799 838 struct ib_send_wr **bad_wr) 800 839 { ··· 871 794 spin_lock_irqsave(&qhp->lock, flag); 872 795 if (t4_wq_in_error(&qhp->wq)) { 873 796 spin_unlock_irqrestore(&qhp->lock, flag); 874 - *bad_wr = wr; 875 - return -EINVAL; 797 + complete_sq_drain_wr(qhp, wr); 798 + return err; 876 799 } 877 800 num_wrs = t4_sq_avail(&qhp->wq); 878 801 if (num_wrs == 0) { ··· 1014 937 spin_lock_irqsave(&qhp->lock, flag); 1015 938 if (t4_wq_in_error(&qhp->wq)) { 1016 939 spin_unlock_irqrestore(&qhp->lock, flag); 1017 - *bad_wr = wr; 1018 - return -EINVAL; 940 + complete_rq_drain_wr(qhp, wr); 941 + return err; 1019 942 } 1020 943 num_wrs = t4_rq_avail(&qhp->wq); 1021 944 if (num_wrs == 0) { ··· 1627 1550 } 1628 1551 break; 1629 1552 
case C4IW_QP_STATE_CLOSING: 1630 - if (!internal) { 1553 + 1554 + /* 1555 + * Allow kernel users to move to ERROR for qp draining. 1556 + */ 1557 + if (!internal && (qhp->ibqp.uobject || attrs->next_state != 1558 + C4IW_QP_STATE_ERROR)) { 1631 1559 ret = -EINVAL; 1632 1560 goto out; 1633 1561 } ··· 1725 1643 struct c4iw_dev *rhp; 1726 1644 struct c4iw_qp *qhp; 1727 1645 struct c4iw_qp_attributes attrs; 1728 - struct c4iw_ucontext *ucontext; 1729 1646 1730 1647 qhp = to_c4iw_qp(ib_qp); 1731 1648 rhp = qhp->rhp; ··· 1743 1662 list_del_init(&qhp->db_fc_entry); 1744 1663 spin_unlock_irq(&rhp->lock); 1745 1664 free_ird(rhp, qhp->attr.max_ird); 1746 - 1747 - ucontext = ib_qp->uobject ? 1748 - to_c4iw_ucontext(ib_qp->uobject->context) : NULL; 1749 - destroy_qp(&rhp->rdev, &qhp->wq, 1750 - ucontext ? &ucontext->uctx : &rhp->rdev.uctx); 1751 1665 1752 1666 c4iw_qp_rem_ref(ib_qp); 1753 1667 ··· 1839 1763 qhp->attr.max_ird = 0; 1840 1764 qhp->sq_sig_all = attrs->sq_sig_type == IB_SIGNAL_ALL_WR; 1841 1765 spin_lock_init(&qhp->lock); 1842 - init_completion(&qhp->sq_drained); 1843 - init_completion(&qhp->rq_drained); 1844 1766 mutex_init(&qhp->mutex); 1845 1767 init_waitqueue_head(&qhp->wait); 1846 1768 kref_init(&qhp->kref); 1769 + INIT_WORK(&qhp->free_work, free_qp_work); 1847 1770 1848 1771 ret = insert_handle(rhp, &rhp->qpidr, qhp, qhp->wq.sq.qid); 1849 1772 if (ret) ··· 1929 1854 ma_sync_key_mm->len = PAGE_SIZE; 1930 1855 insert_mmap(ucontext, ma_sync_key_mm); 1931 1856 } 1857 + 1858 + c4iw_get_ucontext(ucontext); 1859 + qhp->ucontext = ucontext; 1932 1860 } 1933 1861 qhp->ibqp.qp_num = qhp->wq.sq.qid; 1934 1862 init_timer(&(qhp->timer)); ··· 2035 1957 init_attr->cap.max_inline_data = T4_MAX_SEND_INLINE; 2036 1958 init_attr->sq_sig_type = qhp->sq_sig_all ? 
IB_SIGNAL_ALL_WR : 0; 2037 1959 return 0; 2038 - } 2039 - 2040 - static void move_qp_to_err(struct c4iw_qp *qp) 2041 - { 2042 - struct c4iw_qp_attributes attrs = { .next_state = C4IW_QP_STATE_ERROR }; 2043 - 2044 - (void)c4iw_modify_qp(qp->rhp, qp, C4IW_QP_ATTR_NEXT_STATE, &attrs, 1); 2045 - } 2046 - 2047 - void c4iw_drain_sq(struct ib_qp *ibqp) 2048 - { 2049 - struct c4iw_qp *qp = to_c4iw_qp(ibqp); 2050 - unsigned long flag; 2051 - bool need_to_wait; 2052 - 2053 - move_qp_to_err(qp); 2054 - spin_lock_irqsave(&qp->lock, flag); 2055 - need_to_wait = !t4_sq_empty(&qp->wq); 2056 - spin_unlock_irqrestore(&qp->lock, flag); 2057 - 2058 - if (need_to_wait) 2059 - wait_for_completion(&qp->sq_drained); 2060 - } 2061 - 2062 - void c4iw_drain_rq(struct ib_qp *ibqp) 2063 - { 2064 - struct c4iw_qp *qp = to_c4iw_qp(ibqp); 2065 - unsigned long flag; 2066 - bool need_to_wait; 2067 - 2068 - move_qp_to_err(qp); 2069 - spin_lock_irqsave(&qp->lock, flag); 2070 - need_to_wait = !t4_rq_empty(&qp->wq); 2071 - spin_unlock_irqrestore(&qp->lock, flag); 2072 - 2073 - if (need_to_wait) 2074 - wait_for_completion(&qp->rq_drained); 2075 1960 }
+2
drivers/infiniband/hw/cxgb4/t4.h
··· 179 179 __be32 wrid_hi; 180 180 __be32 wrid_low; 181 181 } gen; 182 + u64 drain_cookie; 182 183 } u; 183 184 __be64 reserved; 184 185 __be64 bits_type_ts; ··· 239 238 /* generic accessor macros */ 240 239 #define CQE_WRID_HI(x) (be32_to_cpu((x)->u.gen.wrid_hi)) 241 240 #define CQE_WRID_LOW(x) (be32_to_cpu((x)->u.gen.wrid_low)) 241 + #define CQE_DRAIN_COOKIE(x) ((x)->u.drain_cookie) 242 242 243 243 /* macros for flit 3 of the cqe */ 244 244 #define CQE_GENBIT_S 63
+1 -10
drivers/infiniband/hw/i40iw/i40iw_verbs.c
··· 100 100 memset(props, 0, sizeof(*props)); 101 101 102 102 props->max_mtu = IB_MTU_4096; 103 - if (netdev->mtu >= 4096) 104 - props->active_mtu = IB_MTU_4096; 105 - else if (netdev->mtu >= 2048) 106 - props->active_mtu = IB_MTU_2048; 107 - else if (netdev->mtu >= 1024) 108 - props->active_mtu = IB_MTU_1024; 109 - else if (netdev->mtu >= 512) 110 - props->active_mtu = IB_MTU_512; 111 - else 112 - props->active_mtu = IB_MTU_256; 103 + props->active_mtu = ib_mtu_int_to_enum(netdev->mtu); 113 104 114 105 props->lid = 1; 115 106 if (netif_carrier_ok(iwdev->netdev))
+1 -11
drivers/infiniband/hw/nes/nes_verbs.c
··· 478 478 memset(props, 0, sizeof(*props)); 479 479 480 480 props->max_mtu = IB_MTU_4096; 481 - 482 - if (netdev->mtu >= 4096) 483 - props->active_mtu = IB_MTU_4096; 484 - else if (netdev->mtu >= 2048) 485 - props->active_mtu = IB_MTU_2048; 486 - else if (netdev->mtu >= 1024) 487 - props->active_mtu = IB_MTU_1024; 488 - else if (netdev->mtu >= 512) 489 - props->active_mtu = IB_MTU_512; 490 - else 491 - props->active_mtu = IB_MTU_256; 481 + props->active_mtu = ib_mtu_int_to_enum(netdev->mtu); 492 482 493 483 props->lid = 1; 494 484 props->lmc = 0;
+15 -8
drivers/infiniband/hw/qedr/main.c
··· 576 576 return 0; 577 577 } 578 578 579 - void qedr_unaffiliated_event(void *context, 580 - u8 event_code) 579 + void qedr_unaffiliated_event(void *context, u8 event_code) 581 580 { 582 581 pr_err("unaffiliated event not implemented yet\n"); 583 582 } ··· 791 792 if (device_create_file(&dev->ibdev.dev, qedr_attributes[i])) 792 793 goto sysfs_err; 793 794 795 + if (!test_and_set_bit(QEDR_ENET_STATE_BIT, &dev->enet_state)) 796 + qedr_ib_dispatch_event(dev, QEDR_PORT, IB_EVENT_PORT_ACTIVE); 797 + 794 798 DP_DEBUG(dev, QEDR_MSG_INIT, "qedr driver loaded successfully\n"); 795 799 return dev; 796 800 ··· 826 824 ib_dealloc_device(&dev->ibdev); 827 825 } 828 826 829 - static int qedr_close(struct qedr_dev *dev) 827 + static void qedr_close(struct qedr_dev *dev) 830 828 { 831 - qedr_ib_dispatch_event(dev, 1, IB_EVENT_PORT_ERR); 832 - 833 - return 0; 829 + if (test_and_clear_bit(QEDR_ENET_STATE_BIT, &dev->enet_state)) 830 + qedr_ib_dispatch_event(dev, QEDR_PORT, IB_EVENT_PORT_ERR); 834 831 } 835 832 836 833 static void qedr_shutdown(struct qedr_dev *dev) 837 834 { 838 835 qedr_close(dev); 839 836 qedr_remove(dev); 837 + } 838 + 839 + static void qedr_open(struct qedr_dev *dev) 840 + { 841 + if (!test_and_set_bit(QEDR_ENET_STATE_BIT, &dev->enet_state)) 842 + qedr_ib_dispatch_event(dev, QEDR_PORT, IB_EVENT_PORT_ACTIVE); 840 843 } 841 844 842 845 static void qedr_mac_address_change(struct qedr_dev *dev) ··· 870 863 871 864 ether_addr_copy(dev->gsi_ll2_mac_address, dev->ndev->dev_addr); 872 865 873 - qedr_ib_dispatch_event(dev, 1, IB_EVENT_GID_CHANGE); 866 + qedr_ib_dispatch_event(dev, QEDR_PORT, IB_EVENT_GID_CHANGE); 874 867 875 868 if (rc) 876 869 DP_ERR(dev, "Error updating mac filter\n"); ··· 884 877 { 885 878 switch (event) { 886 879 case QEDE_UP: 887 - qedr_ib_dispatch_event(dev, 1, IB_EVENT_PORT_ACTIVE); 880 + qedr_open(dev); 888 881 break; 889 882 case QEDE_DOWN: 890 883 qedr_close(dev);
+5 -3
drivers/infiniband/hw/qedr/qedr.h
··· 113 113 struct qed_rdma_events events; 114 114 }; 115 115 116 + #define QEDR_ENET_STATE_BIT (0) 117 + 116 118 struct qedr_dev { 117 119 struct ib_device ibdev; 118 120 struct qed_dev *cdev; ··· 155 153 struct qedr_cq *gsi_sqcq; 156 154 struct qedr_cq *gsi_rqcq; 157 155 struct qedr_qp *gsi_qp; 156 + 157 + unsigned long enet_state; 158 158 }; 159 159 160 160 #define QEDR_MAX_SQ_PBL (0x8000) ··· 192 188 #define QEDR_ROCE_MAX_CNQ_SIZE (0x4000) 193 189 194 190 #define QEDR_MAX_PORT (1) 191 + #define QEDR_PORT (1) 195 192 196 193 #define QEDR_UVERBS(CMD_NAME) (1ull << IB_USER_VERBS_CMD_##CMD_NAME) 197 194 ··· 255 250 u32 sig; 256 251 257 252 u16 icid; 258 - 259 - /* Lock to protect completion handler */ 260 - spinlock_t comp_handler_lock; 261 253 262 254 /* Lock to protect multiple CQs */ 263 255 spinlock_t cq_lock;
+4 -10
drivers/infiniband/hw/qedr/qedr_cm.c
··· 87 87 qedr_inc_sw_gsi_cons(&qp->sq); 88 88 spin_unlock_irqrestore(&qp->q_lock, flags); 89 89 90 - if (cq->ibcq.comp_handler) { 91 - spin_lock_irqsave(&cq->comp_handler_lock, flags); 90 + if (cq->ibcq.comp_handler) 92 91 (*cq->ibcq.comp_handler) (&cq->ibcq, cq->ibcq.cq_context); 93 - spin_unlock_irqrestore(&cq->comp_handler_lock, flags); 94 - } 95 92 } 96 93 97 94 void qedr_ll2_rx_cb(void *_dev, struct qed_roce_ll2_packet *pkt, ··· 110 113 111 114 spin_unlock_irqrestore(&qp->q_lock, flags); 112 115 113 - if (cq->ibcq.comp_handler) { 114 - spin_lock_irqsave(&cq->comp_handler_lock, flags); 116 + if (cq->ibcq.comp_handler) 115 117 (*cq->ibcq.comp_handler) (&cq->ibcq, cq->ibcq.cq_context); 116 - spin_unlock_irqrestore(&cq->comp_handler_lock, flags); 117 - } 118 118 } 119 119 120 120 static void qedr_destroy_gsi_cq(struct qedr_dev *dev, ··· 398 404 } 399 405 400 406 if (ether_addr_equal(udh.eth.smac_h, udh.eth.dmac_h)) 401 - packet->tx_dest = QED_ROCE_LL2_TX_DEST_NW; 402 - else 403 407 packet->tx_dest = QED_ROCE_LL2_TX_DEST_LB; 408 + else 409 + packet->tx_dest = QED_ROCE_LL2_TX_DEST_NW; 404 410 405 411 packet->roce_mode = roce_mode; 406 412 memcpy(packet->header.vaddr, ud_header_buffer, header_size);
+41 -21
drivers/infiniband/hw/qedr/verbs.c
··· 471 471 struct ib_ucontext *context, struct ib_udata *udata) 472 472 { 473 473 struct qedr_dev *dev = get_qedr_dev(ibdev); 474 - struct qedr_ucontext *uctx = NULL; 475 - struct qedr_alloc_pd_uresp uresp; 476 474 struct qedr_pd *pd; 477 475 u16 pd_id; 478 476 int rc; ··· 487 489 if (!pd) 488 490 return ERR_PTR(-ENOMEM); 489 491 490 - dev->ops->rdma_alloc_pd(dev->rdma_ctx, &pd_id); 492 + rc = dev->ops->rdma_alloc_pd(dev->rdma_ctx, &pd_id); 493 + if (rc) 494 + goto err; 491 495 492 - uresp.pd_id = pd_id; 493 496 pd->pd_id = pd_id; 494 497 495 498 if (udata && context) { 499 + struct qedr_alloc_pd_uresp uresp; 500 + 501 + uresp.pd_id = pd_id; 502 + 496 503 rc = ib_copy_to_udata(udata, &uresp, sizeof(uresp)); 497 - if (rc) 504 + if (rc) { 498 505 DP_ERR(dev, "copy error pd_id=0x%x.\n", pd_id); 499 - uctx = get_qedr_ucontext(context); 500 - uctx->pd = pd; 501 - pd->uctx = uctx; 506 + dev->ops->rdma_dealloc_pd(dev->rdma_ctx, pd_id); 507 + goto err; 508 + } 509 + 510 + pd->uctx = get_qedr_ucontext(context); 511 + pd->uctx->pd = pd; 502 512 } 503 513 504 514 return &pd->ibpd; 515 + 516 + err: 517 + kfree(pd); 518 + return ERR_PTR(rc); 505 519 } 506 520 507 521 int qedr_dealloc_pd(struct ib_pd *ibpd) ··· 1610 1600 return ERR_PTR(-EFAULT); 1611 1601 } 1612 1602 1613 - enum ib_qp_state qedr_get_ibqp_state(enum qed_roce_qp_state qp_state) 1603 + static enum ib_qp_state qedr_get_ibqp_state(enum qed_roce_qp_state qp_state) 1614 1604 { 1615 1605 switch (qp_state) { 1616 1606 case QED_ROCE_QP_STATE_RESET: ··· 1631 1621 return IB_QPS_ERR; 1632 1622 } 1633 1623 1634 - enum qed_roce_qp_state qedr_get_state_from_ibqp(enum ib_qp_state qp_state) 1624 + static enum qed_roce_qp_state qedr_get_state_from_ibqp( 1625 + enum ib_qp_state qp_state) 1635 1626 { 1636 1627 switch (qp_state) { 1637 1628 case IB_QPS_RESET: ··· 1668 1657 int status = 0; 1669 1658 1670 1659 if (new_state == qp->state) 1671 - return 1; 1660 + return 0; 1672 1661 1673 1662 switch (qp->state) { 1674 1663 case 
QED_ROCE_QP_STATE_RESET: ··· 1744 1733 /* ERR->XXX */ 1745 1734 switch (new_state) { 1746 1735 case QED_ROCE_QP_STATE_RESET: 1736 + if ((qp->rq.prod != qp->rq.cons) || 1737 + (qp->sq.prod != qp->sq.cons)) { 1738 + DP_NOTICE(dev, 1739 + "Error->Reset with rq/sq not empty rq.prod=%x rq.cons=%x sq.prod=%x sq.cons=%x\n", 1740 + qp->rq.prod, qp->rq.cons, qp->sq.prod, 1741 + qp->sq.cons); 1742 + status = -EINVAL; 1743 + } 1747 1744 break; 1748 1745 default: 1749 1746 status = -EINVAL; ··· 1884 1865 qp_params.sgid.dwords[2], qp_params.sgid.dwords[3]); 1885 1866 DP_DEBUG(dev, QEDR_MSG_QP, "remote_mac=[%pM]\n", 1886 1867 qp_params.remote_mac_addr); 1887 - ; 1888 1868 1889 1869 qp_params.mtu = qp->mtu; 1890 1870 qp_params.lb_indication = false; ··· 2034 2016 2035 2017 qp_attr->qp_state = qedr_get_ibqp_state(params.state); 2036 2018 qp_attr->cur_qp_state = qedr_get_ibqp_state(params.state); 2037 - qp_attr->path_mtu = iboe_get_mtu(params.mtu); 2019 + qp_attr->path_mtu = ib_mtu_int_to_enum(params.mtu); 2038 2020 qp_attr->path_mig_state = IB_MIG_MIGRATED; 2039 2021 qp_attr->rq_psn = params.rq_psn; 2040 2022 qp_attr->sq_psn = params.sq_psn; ··· 2046 2028 qp_attr->cap.max_recv_wr = qp->rq.max_wr; 2047 2029 qp_attr->cap.max_send_sge = qp->sq.max_sges; 2048 2030 qp_attr->cap.max_recv_sge = qp->rq.max_sges; 2049 - qp_attr->cap.max_inline_data = qp->max_inline_data; 2031 + qp_attr->cap.max_inline_data = ROCE_REQ_MAX_INLINE_DATA_SIZE; 2050 2032 qp_init_attr->cap = qp_attr->cap; 2051 2033 2052 2034 memcpy(&qp_attr->ah_attr.grh.dgid.raw[0], &params.dgid.bytes[0], ··· 2320 2302 return rc; 2321 2303 } 2322 2304 2323 - struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd, int max_page_list_len) 2305 + static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd, 2306 + int max_page_list_len) 2324 2307 { 2325 2308 struct qedr_pd *pd = get_qedr_pd(ibpd); 2326 2309 struct qedr_dev *dev = get_qedr_dev(ibpd->device); ··· 2723 2704 return 0; 2724 2705 } 2725 2706 2726 - enum ib_wc_opcode 
qedr_ib_to_wc_opcode(enum ib_wr_opcode opcode) 2707 + static enum ib_wc_opcode qedr_ib_to_wc_opcode(enum ib_wr_opcode opcode) 2727 2708 { 2728 2709 switch (opcode) { 2729 2710 case IB_WR_RDMA_WRITE: ··· 2748 2729 } 2749 2730 } 2750 2731 2751 - inline bool qedr_can_post_send(struct qedr_qp *qp, struct ib_send_wr *wr) 2732 + static inline bool qedr_can_post_send(struct qedr_qp *qp, struct ib_send_wr *wr) 2752 2733 { 2753 2734 int wq_is_full, err_wr, pbl_is_full; 2754 2735 struct qedr_dev *dev = qp->dev; ··· 2785 2766 return true; 2786 2767 } 2787 2768 2788 - int __qedr_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, 2769 + static int __qedr_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, 2789 2770 struct ib_send_wr **bad_wr) 2790 2771 { 2791 2772 struct qedr_dev *dev = get_qedr_dev(ibqp->device); ··· 3253 3234 IB_WC_SUCCESS, 0); 3254 3235 break; 3255 3236 case RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR: 3256 - DP_ERR(dev, 3257 - "Error: POLL CQ with RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR. CQ icid=0x%x, QP icid=0x%x\n", 3258 - cq->icid, qp->icid); 3237 + if (qp->state != QED_ROCE_QP_STATE_ERR) 3238 + DP_ERR(dev, 3239 + "Error: POLL CQ with RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR. CQ icid=0x%x, QP icid=0x%x\n", 3240 + cq->icid, qp->icid); 3259 3241 cnt = process_req(dev, qp, cq, num_entries, wc, req->sq_cons, 3260 3242 IB_WC_WR_FLUSH_ERR, 1); 3261 3243 break;
+1 -3
drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
··· 1029 1029 if (ret) { 1030 1030 dev_err(&pdev->dev, "failed to allocate interrupts\n"); 1031 1031 ret = -ENOMEM; 1032 - goto err_netdevice; 1032 + goto err_free_cq_ring; 1033 1033 } 1034 1034 1035 1035 /* Allocate UAR table. */ ··· 1092 1092 err_free_intrs: 1093 1093 pvrdma_free_irq(dev); 1094 1094 pvrdma_disable_msi_all(dev); 1095 - err_netdevice: 1096 - unregister_netdevice_notifier(&dev->nb_netdev); 1097 1095 err_free_cq_ring: 1098 1096 pvrdma_page_dir_cleanup(dev, &dev->cq_pdir); 1099 1097 err_free_async_ring:
+1 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
··· 306 306 union pvrdma_cmd_resp rsp; 307 307 struct pvrdma_cmd_create_uc *cmd = &req.create_uc; 308 308 struct pvrdma_cmd_create_uc_resp *resp = &rsp.create_uc_resp; 309 - struct pvrdma_alloc_ucontext_resp uresp; 309 + struct pvrdma_alloc_ucontext_resp uresp = {0}; 310 310 int ret; 311 311 void *ptr; 312 312
+1 -1
drivers/infiniband/sw/rxe/rxe_net.c
··· 555 555 } 556 556 557 557 spin_lock_bh(&dev_list_lock); 558 - list_add_tail(&rxe_dev_list, &rxe->list); 558 + list_add_tail(&rxe->list, &rxe_dev_list); 559 559 spin_unlock_bh(&dev_list_lock); 560 560 return rxe; 561 561 }
+1 -2
drivers/infiniband/sw/rxe/rxe_qp.c
··· 813 813 del_timer_sync(&qp->rnr_nak_timer); 814 814 815 815 rxe_cleanup_task(&qp->req.task); 816 - if (qp_type(qp) == IB_QPT_RC) 817 - rxe_cleanup_task(&qp->comp.task); 816 + rxe_cleanup_task(&qp->comp.task); 818 817 819 818 /* flush out any receive wr's or pending requests */ 820 819 __rxe_do_task(&qp->req.task);
+4 -7
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 651 651 SHOST_DIX_GUARD_CRC); 652 652 } 653 653 654 - /* 655 - * Limit the sg_tablesize and max_sectors based on the device 656 - * max fastreg page list length. 657 - */ 658 - shost->sg_tablesize = min_t(unsigned short, shost->sg_tablesize, 659 - ib_conn->device->ib_device->attrs.max_fast_reg_page_list_len); 660 - 661 654 if (iscsi_host_add(shost, 662 655 ib_conn->device->ib_device->dma_device)) { 663 656 mutex_unlock(&iser_conn->state_mutex); ··· 671 678 */ 672 679 max_fr_sectors = ((shost->sg_tablesize - 1) * PAGE_SIZE) >> 9; 673 680 shost->max_sectors = min(iser_max_sectors, max_fr_sectors); 681 + 682 + iser_dbg("iser_conn %p, sg_tablesize %u, max_sectors %u\n", 683 + iser_conn, shost->sg_tablesize, 684 + shost->max_sectors); 674 685 675 686 if (cmds_max > max_cmds) { 676 687 iser_info("cmds_max changed from %u to %u\n",
-2
drivers/infiniband/ulp/iser/iscsi_iser.h
··· 496 496 * @rx_descs: rx buffers array (cyclic buffer) 497 497 * @num_rx_descs: number of rx descriptors 498 498 * @scsi_sg_tablesize: scsi host sg_tablesize 499 - * @scsi_max_sectors: scsi host max sectors 500 499 */ 501 500 struct iser_conn { 502 501 struct ib_conn ib_conn; ··· 518 519 struct iser_rx_desc *rx_descs; 519 520 u32 num_rx_descs; 520 521 unsigned short scsi_sg_tablesize; 521 - unsigned int scsi_max_sectors; 522 522 bool snd_w_inv; 523 523 }; 524 524
+1 -12
drivers/infiniband/ulp/iser/iser_verbs.c
··· 707 707 sup_sg_tablesize = min_t(unsigned, ISCSI_ISER_MAX_SG_TABLESIZE, 708 708 device->ib_device->attrs.max_fast_reg_page_list_len); 709 709 710 - if (sg_tablesize > sup_sg_tablesize) { 711 - sg_tablesize = sup_sg_tablesize; 712 - iser_conn->scsi_max_sectors = sg_tablesize * SIZE_4K / 512; 713 - } else { 714 - iser_conn->scsi_max_sectors = max_sectors; 715 - } 716 - 717 - iser_conn->scsi_sg_tablesize = sg_tablesize; 718 - 719 - iser_dbg("iser_conn %p, sg_tablesize %u, max_sectors %u\n", 720 - iser_conn, iser_conn->scsi_sg_tablesize, 721 - iser_conn->scsi_max_sectors); 710 + iser_conn->scsi_sg_tablesize = min(sg_tablesize, sup_sg_tablesize); 722 711 } 723 712 724 713 /**
+13 -2
drivers/infiniband/ulp/srp/ib_srp.c
··· 371 371 struct srp_fr_desc *d; 372 372 struct ib_mr *mr; 373 373 int i, ret = -EINVAL; 374 + enum ib_mr_type mr_type; 374 375 375 376 if (pool_size <= 0) 376 377 goto err; ··· 385 384 spin_lock_init(&pool->lock); 386 385 INIT_LIST_HEAD(&pool->free_list); 387 386 387 + if (device->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG) 388 + mr_type = IB_MR_TYPE_SG_GAPS; 389 + else 390 + mr_type = IB_MR_TYPE_MEM_REG; 391 + 388 392 for (i = 0, d = &pool->desc[0]; i < pool->size; i++, d++) { 389 - mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 390 - max_page_list_len); 393 + mr = ib_alloc_mr(pd, mr_type, max_page_list_len); 391 394 if (IS_ERR(mr)) { 392 395 ret = PTR_ERR(mr); 393 396 if (ret == -ENOMEM) ··· 3697 3692 pr_warn("Bumping up indirect_sg_entries to match cmd_sg_entries (%u)\n", 3698 3693 cmd_sg_entries); 3699 3694 indirect_sg_entries = cmd_sg_entries; 3695 + } 3696 + 3697 + if (indirect_sg_entries > SG_MAX_SEGMENTS) { 3698 + pr_warn("Clamping indirect_sg_entries to %u\n", 3699 + SG_MAX_SEGMENTS); 3700 + indirect_sg_entries = SG_MAX_SEGMENTS; 3700 3701 } 3701 3702 3702 3703 srp_remove_wq = create_workqueue("srp_remove");
+2 -1
drivers/isdn/hardware/eicon/message.c
··· 11297 11297 ((CAPI_MSG *) msg)->header.ncci = 0; 11298 11298 ((CAPI_MSG *) msg)->info.facility_req.Selector = SELECTOR_LINE_INTERCONNECT; 11299 11299 ((CAPI_MSG *) msg)->info.facility_req.structs[0] = 3; 11300 - PUT_WORD(&(((CAPI_MSG *) msg)->info.facility_req.structs[1]), LI_REQ_SILENT_UPDATE); 11300 + ((CAPI_MSG *) msg)->info.facility_req.structs[1] = LI_REQ_SILENT_UPDATE & 0xff; 11301 + ((CAPI_MSG *) msg)->info.facility_req.structs[2] = LI_REQ_SILENT_UPDATE >> 8; 11301 11302 ((CAPI_MSG *) msg)->info.facility_req.structs[3] = 0; 11302 11303 w = api_put(notify_plci->appl, (CAPI_MSG *) msg); 11303 11304 if (w != _QUEUE_FULL)
+5
drivers/md/md.c
··· 5291 5291 if (start_readonly && mddev->ro == 0) 5292 5292 mddev->ro = 2; /* read-only, but switch on first write */ 5293 5293 5294 + /* 5295 + * NOTE: some pers->run(), for example r5l_recovery_log(), wakes 5296 + * up mddev->thread. It is important to initialize critical 5297 + * resources for mddev->thread BEFORE calling pers->run(). 5298 + */ 5294 5299 err = pers->run(mddev); 5295 5300 if (err) 5296 5301 pr_warn("md: pers->run() failed ...\n");
+88 -18
drivers/md/raid5-cache.c
··· 162 162 163 163 /* to submit async io_units, to fulfill ordering of flush */ 164 164 struct work_struct deferred_io_work; 165 + /* to disable write back during in degraded mode */ 166 + struct work_struct disable_writeback_work; 165 167 }; 166 168 167 169 /* ··· 611 609 spin_unlock_irqrestore(&log->io_list_lock, flags); 612 610 if (io) 613 611 r5l_do_submit_io(log, io); 612 + } 613 + 614 + static void r5c_disable_writeback_async(struct work_struct *work) 615 + { 616 + struct r5l_log *log = container_of(work, struct r5l_log, 617 + disable_writeback_work); 618 + struct mddev *mddev = log->rdev->mddev; 619 + 620 + if (log->r5c_journal_mode == R5C_JOURNAL_MODE_WRITE_THROUGH) 621 + return; 622 + pr_info("md/raid:%s: Disabling writeback cache for degraded array.\n", 623 + mdname(mddev)); 624 + mddev_suspend(mddev); 625 + log->r5c_journal_mode = R5C_JOURNAL_MODE_WRITE_THROUGH; 626 + mddev_resume(mddev); 614 627 } 615 628 616 629 static void r5l_submit_current_io(struct r5l_log *log) ··· 1410 1393 next_checkpoint = r5c_calculate_new_cp(conf); 1411 1394 spin_unlock_irq(&log->io_list_lock); 1412 1395 1413 - BUG_ON(reclaimable < 0); 1414 - 1415 1396 if (reclaimable == 0 || !write_super) 1416 1397 return; 1417 1398 ··· 2077 2062 r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log, 2078 2063 struct r5l_recovery_ctx *ctx) 2079 2064 { 2080 - struct stripe_head *sh, *next; 2065 + struct stripe_head *sh; 2081 2066 struct mddev *mddev = log->rdev->mddev; 2082 2067 struct page *page; 2083 2068 sector_t next_checkpoint = MaxSector; ··· 2091 2076 2092 2077 WARN_ON(list_empty(&ctx->cached_list)); 2093 2078 2094 - list_for_each_entry_safe(sh, next, &ctx->cached_list, lru) { 2079 + list_for_each_entry(sh, &ctx->cached_list, lru) { 2095 2080 struct r5l_meta_block *mb; 2096 2081 int i; 2097 2082 int offset; ··· 2141 2126 ctx->pos = write_pos; 2142 2127 ctx->seq += 1; 2143 2128 next_checkpoint = sh->log_start; 2144 - list_del_init(&sh->lru); 2145 - raid5_release_stripe(sh); 2146 
2129 } 2147 2130 log->next_checkpoint = next_checkpoint; 2148 2131 __free_page(page); 2149 2132 return 0; 2133 + } 2134 + 2135 + static void r5c_recovery_flush_data_only_stripes(struct r5l_log *log, 2136 + struct r5l_recovery_ctx *ctx) 2137 + { 2138 + struct mddev *mddev = log->rdev->mddev; 2139 + struct r5conf *conf = mddev->private; 2140 + struct stripe_head *sh, *next; 2141 + 2142 + if (ctx->data_only_stripes == 0) 2143 + return; 2144 + 2145 + log->r5c_journal_mode = R5C_JOURNAL_MODE_WRITE_BACK; 2146 + 2147 + list_for_each_entry_safe(sh, next, &ctx->cached_list, lru) { 2148 + r5c_make_stripe_write_out(sh); 2149 + set_bit(STRIPE_HANDLE, &sh->state); 2150 + list_del_init(&sh->lru); 2151 + raid5_release_stripe(sh); 2152 + } 2153 + 2154 + md_wakeup_thread(conf->mddev->thread); 2155 + /* reuse conf->wait_for_quiescent in recovery */ 2156 + wait_event(conf->wait_for_quiescent, 2157 + atomic_read(&conf->active_stripes) == 0); 2158 + 2159 + log->r5c_journal_mode = R5C_JOURNAL_MODE_WRITE_THROUGH; 2150 2160 } 2151 2161 2152 2162 static int r5l_recovery_log(struct r5l_log *log) ··· 2200 2160 pos = ctx.pos; 2201 2161 ctx.seq += 10000; 2202 2162 2203 - if (ctx.data_only_stripes == 0) { 2204 - log->next_checkpoint = ctx.pos; 2205 - r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq++); 2206 - ctx.pos = r5l_ring_add(log, ctx.pos, BLOCK_SECTORS); 2207 - } 2208 2163 2209 2164 if ((ctx.data_only_stripes == 0) && (ctx.data_parity_stripes == 0)) 2210 2165 pr_debug("md/raid:%s: starting from clean shutdown\n", 2211 2166 mdname(mddev)); 2212 - else { 2167 + else 2213 2168 pr_debug("md/raid:%s: recovering %d data-only stripes and %d data-parity stripes\n", 2214 2169 mdname(mddev), ctx.data_only_stripes, 2215 2170 ctx.data_parity_stripes); 2216 2171 2217 - if (ctx.data_only_stripes > 0) 2218 - if (r5c_recovery_rewrite_data_only_stripes(log, &ctx)) { 2219 - pr_err("md/raid:%s: failed to rewrite stripes to journal\n", 2220 - mdname(mddev)); 2221 - return -EIO; 2222 - } 2172 + if 
(ctx.data_only_stripes == 0) { 2173 + log->next_checkpoint = ctx.pos; 2174 + r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq++); 2175 + ctx.pos = r5l_ring_add(log, ctx.pos, BLOCK_SECTORS); 2176 + } else if (r5c_recovery_rewrite_data_only_stripes(log, &ctx)) { 2177 + pr_err("md/raid:%s: failed to rewrite stripes to journal\n", 2178 + mdname(mddev)); 2179 + return -EIO; 2223 2180 } 2224 2181 2225 2182 log->log_start = ctx.pos; 2226 2183 log->seq = ctx.seq; 2227 2184 log->last_checkpoint = pos; 2228 2185 r5l_write_super(log, pos); 2186 + 2187 + r5c_recovery_flush_data_only_stripes(log, &ctx); 2229 2188 return 0; 2230 2189 } 2231 2190 ··· 2286 2247 val > R5C_JOURNAL_MODE_WRITE_BACK) 2287 2248 return -EINVAL; 2288 2249 2250 + if (raid5_calc_degraded(conf) > 0 && 2251 + val == R5C_JOURNAL_MODE_WRITE_BACK) 2252 + return -EINVAL; 2253 + 2289 2254 mddev_suspend(mddev); 2290 2255 conf->log->r5c_journal_mode = val; 2291 2256 mddev_resume(mddev); ··· 2344 2301 set_bit(STRIPE_R5C_CACHING, &sh->state); 2345 2302 } 2346 2303 2304 + /* 2305 + * When run in degraded mode, array is set to write-through mode. 2306 + * This check helps drain pending write safely in the transition to 2307 + * write-through mode. 
2308 + */ 2309 + if (s->failed) { 2310 + r5c_make_stripe_write_out(sh); 2311 + return -EAGAIN; 2312 + } 2313 + 2347 2314 for (i = disks; i--; ) { 2348 2315 dev = &sh->dev[i]; 2349 2316 /* if non-overwrite, use writing-out phase */ ··· 2404 2351 struct page *p = sh->dev[i].orig_page; 2405 2352 2406 2353 sh->dev[i].orig_page = sh->dev[i].page; 2354 + clear_bit(R5_OrigPageUPTDODATE, &sh->dev[i].flags); 2355 + 2407 2356 if (!using_disk_info_extra_page) 2408 2357 put_page(p); 2409 2358 } ··· 2610 2555 return ret; 2611 2556 } 2612 2557 2558 + void r5c_update_on_rdev_error(struct mddev *mddev) 2559 + { 2560 + struct r5conf *conf = mddev->private; 2561 + struct r5l_log *log = conf->log; 2562 + 2563 + if (!log) 2564 + return; 2565 + 2566 + if (raid5_calc_degraded(conf) > 0 && 2567 + conf->log->r5c_journal_mode == R5C_JOURNAL_MODE_WRITE_BACK) 2568 + schedule_work(&log->disable_writeback_work); 2569 + } 2570 + 2613 2571 int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev) 2614 2572 { 2615 2573 struct request_queue *q = bdev_get_queue(rdev->bdev); ··· 2695 2627 spin_lock_init(&log->no_space_stripes_lock); 2696 2628 2697 2629 INIT_WORK(&log->deferred_io_work, r5l_submit_io_async); 2630 + INIT_WORK(&log->disable_writeback_work, r5c_disable_writeback_async); 2698 2631 2699 2632 log->r5c_journal_mode = R5C_JOURNAL_MODE_WRITE_THROUGH; 2700 2633 INIT_LIST_HEAD(&log->stripe_in_journal_list); ··· 2728 2659 2729 2660 void r5l_exit_log(struct r5l_log *log) 2730 2661 { 2662 + flush_work(&log->disable_writeback_work); 2731 2663 md_unregister_thread(&log->reclaim_thread); 2732 2664 mempool_destroy(log->meta_pool); 2733 2665 bioset_free(log->bs);
+94 -27
drivers/md/raid5.c
··· 556 556 * of the two sections, and some non-in_sync devices may 557 557 * be insync in the section most affected by failed devices. 558 558 */ 559 - static int calc_degraded(struct r5conf *conf) 559 + int raid5_calc_degraded(struct r5conf *conf) 560 560 { 561 561 int degraded, degraded2; 562 562 int i; ··· 619 619 if (conf->mddev->reshape_position == MaxSector) 620 620 return conf->mddev->degraded > conf->max_degraded; 621 621 622 - degraded = calc_degraded(conf); 622 + degraded = raid5_calc_degraded(conf); 623 623 if (degraded > conf->max_degraded) 624 624 return 1; 625 625 return 0; ··· 1015 1015 1016 1016 if (test_bit(R5_SkipCopy, &sh->dev[i].flags)) 1017 1017 WARN_ON(test_bit(R5_UPTODATE, &sh->dev[i].flags)); 1018 - sh->dev[i].vec.bv_page = sh->dev[i].page; 1018 + 1019 + if (!op_is_write(op) && 1020 + test_bit(R5_InJournal, &sh->dev[i].flags)) 1021 + /* 1022 + * issuing read for a page in journal, this 1023 + * must be preparing for prexor in rmw; read 1024 + * the data into orig_page 1025 + */ 1026 + sh->dev[i].vec.bv_page = sh->dev[i].orig_page; 1027 + else 1028 + sh->dev[i].vec.bv_page = sh->dev[i].page; 1019 1029 bi->bi_vcnt = 1; 1020 1030 bi->bi_io_vec[0].bv_len = STRIPE_SIZE; 1021 1031 bi->bi_io_vec[0].bv_offset = 0; ··· 2390 2380 } else if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) 2391 2381 clear_bit(R5_ReadNoMerge, &sh->dev[i].flags); 2392 2382 2383 + if (test_bit(R5_InJournal, &sh->dev[i].flags)) 2384 + /* 2385 + * end read for a page in journal, this 2386 + * must be preparing for prexor in rmw 2387 + */ 2388 + set_bit(R5_OrigPageUPTDODATE, &sh->dev[i].flags); 2389 + 2393 2390 if (atomic_read(&rdev->read_errors)) 2394 2391 atomic_set(&rdev->read_errors, 0); 2395 2392 } else { ··· 2555 2538 2556 2539 spin_lock_irqsave(&conf->device_lock, flags); 2557 2540 clear_bit(In_sync, &rdev->flags); 2558 - mddev->degraded = calc_degraded(conf); 2541 + mddev->degraded = raid5_calc_degraded(conf); 2559 2542 spin_unlock_irqrestore(&conf->device_lock, flags); 
2560 2543 set_bit(MD_RECOVERY_INTR, &mddev->recovery); 2561 2544 ··· 2569 2552 bdevname(rdev->bdev, b), 2570 2553 mdname(mddev), 2571 2554 conf->raid_disks - mddev->degraded); 2555 + r5c_update_on_rdev_error(mddev); 2572 2556 } 2573 2557 2574 2558 /* ··· 2898 2880 return r_sector; 2899 2881 } 2900 2882 2883 + /* 2884 + * There are cases where we want handle_stripe_dirtying() and 2885 + * schedule_reconstruction() to delay towrite to some dev of a stripe. 2886 + * 2887 + * This function checks whether we want to delay the towrite. Specifically, 2888 + * we delay the towrite when: 2889 + * 2890 + * 1. degraded stripe has a non-overwrite to the missing dev, AND this 2891 + * stripe has data in journal (for other devices). 2892 + * 2893 + * In this case, when reading data for the non-overwrite dev, it is 2894 + * necessary to handle complex rmw of write back cache (prexor with 2895 + * orig_page, and xor with page). To keep read path simple, we would 2896 + * like to flush data in journal to RAID disks first, so complex rmw 2897 + * is handled in the write patch (handle_stripe_dirtying). 2898 + * 2899 + */ 2900 + static inline bool delay_towrite(struct r5dev *dev, 2901 + struct stripe_head_state *s) 2902 + { 2903 + return !test_bit(R5_OVERWRITE, &dev->flags) && 2904 + !test_bit(R5_Insync, &dev->flags) && s->injournal; 2905 + } 2906 + 2901 2907 static void 2902 2908 schedule_reconstruction(struct stripe_head *sh, struct stripe_head_state *s, 2903 2909 int rcw, int expand) ··· 2942 2900 for (i = disks; i--; ) { 2943 2901 struct r5dev *dev = &sh->dev[i]; 2944 2902 2945 - if (dev->towrite) { 2903 + if (dev->towrite && !delay_towrite(dev, s)) { 2946 2904 set_bit(R5_LOCKED, &dev->flags); 2947 2905 set_bit(R5_Wantdrain, &dev->flags); 2948 2906 if (!expand) ··· 3337 3295 return rv; 3338 3296 } 3339 3297 3340 - /* fetch_block - checks the given member device to see if its data needs 3341 - * to be read or computed to satisfy a request. 
3342 - * 3343 - * Returns 1 when no more member devices need to be checked, otherwise returns 3344 - * 0 to tell the loop in handle_stripe_fill to continue 3345 - */ 3346 - 3347 3298 static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s, 3348 3299 int disk_idx, int disks) 3349 3300 { ··· 3427 3392 return 0; 3428 3393 } 3429 3394 3395 + /* fetch_block - checks the given member device to see if its data needs 3396 + * to be read or computed to satisfy a request. 3397 + * 3398 + * Returns 1 when no more member devices need to be checked, otherwise returns 3399 + * 0 to tell the loop in handle_stripe_fill to continue 3400 + */ 3430 3401 static int fetch_block(struct stripe_head *sh, struct stripe_head_state *s, 3431 3402 int disk_idx, int disks) 3432 3403 { ··· 3519 3478 * midst of changing due to a write 3520 3479 */ 3521 3480 if (!test_bit(STRIPE_COMPUTE_RUN, &sh->state) && !sh->check_state && 3522 - !sh->reconstruct_state) 3481 + !sh->reconstruct_state) { 3482 + 3483 + /* 3484 + * For degraded stripe with data in journal, do not handle 3485 + * read requests yet, instead, flush the stripe to raid 3486 + * disks first, this avoids handling complex rmw of write 3487 + * back cache (prexor with orig_page, and then xor with 3488 + * page) in the read path 3489 + */ 3490 + if (s->injournal && s->failed) { 3491 + if (test_bit(STRIPE_R5C_CACHING, &sh->state)) 3492 + r5c_make_stripe_write_out(sh); 3493 + goto out; 3494 + } 3495 + 3523 3496 for (i = disks; i--; ) 3524 3497 if (fetch_block(sh, s, i, disks)) 3525 3498 break; 3499 + } 3500 + out: 3526 3501 set_bit(STRIPE_HANDLE, &sh->state); 3527 3502 } 3528 3503 ··· 3651 3594 break_stripe_batch_list(head_sh, STRIPE_EXPAND_SYNC_FLAGS); 3652 3595 } 3653 3596 3597 + /* 3598 + * For RMW in write back cache, we need extra page in prexor to store the 3599 + * old data. This page is stored in dev->orig_page. 3600 + * 3601 + * This function checks whether we have data for prexor. 
The exact logic 3602 + * is: 3603 + * R5_UPTODATE && (!R5_InJournal || R5_OrigPageUPTDODATE) 3604 + */ 3605 + static inline bool uptodate_for_rmw(struct r5dev *dev) 3606 + { 3607 + return (test_bit(R5_UPTODATE, &dev->flags)) && 3608 + (!test_bit(R5_InJournal, &dev->flags) || 3609 + test_bit(R5_OrigPageUPTDODATE, &dev->flags)); 3610 + } 3611 + 3654 3612 static int handle_stripe_dirtying(struct r5conf *conf, 3655 3613 struct stripe_head *sh, 3656 3614 struct stripe_head_state *s, ··· 3694 3622 } else for (i = disks; i--; ) { 3695 3623 /* would I have to read this buffer for read_modify_write */ 3696 3624 struct r5dev *dev = &sh->dev[i]; 3697 - if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx || 3625 + if (((dev->towrite && !delay_towrite(dev, s)) || 3626 + i == sh->pd_idx || i == sh->qd_idx || 3698 3627 test_bit(R5_InJournal, &dev->flags)) && 3699 3628 !test_bit(R5_LOCKED, &dev->flags) && 3700 - !((test_bit(R5_UPTODATE, &dev->flags) && 3701 - (!test_bit(R5_InJournal, &dev->flags) || 3702 - dev->page != dev->orig_page)) || 3629 + !(uptodate_for_rmw(dev) || 3703 3630 test_bit(R5_Wantcompute, &dev->flags))) { 3704 3631 if (test_bit(R5_Insync, &dev->flags)) 3705 3632 rmw++; ··· 3710 3639 i != sh->pd_idx && i != sh->qd_idx && 3711 3640 !test_bit(R5_LOCKED, &dev->flags) && 3712 3641 !(test_bit(R5_UPTODATE, &dev->flags) || 3713 - test_bit(R5_InJournal, &dev->flags) || 3714 3642 test_bit(R5_Wantcompute, &dev->flags))) { 3715 3643 if (test_bit(R5_Insync, &dev->flags)) 3716 3644 rcw++; ··· 3759 3689 3760 3690 for (i = disks; i--; ) { 3761 3691 struct r5dev *dev = &sh->dev[i]; 3762 - if ((dev->towrite || 3692 + if (((dev->towrite && !delay_towrite(dev, s)) || 3763 3693 i == sh->pd_idx || i == sh->qd_idx || 3764 3694 test_bit(R5_InJournal, &dev->flags)) && 3765 3695 !test_bit(R5_LOCKED, &dev->flags) && 3766 - !((test_bit(R5_UPTODATE, &dev->flags) && 3767 - (!test_bit(R5_InJournal, &dev->flags) || 3768 - dev->page != dev->orig_page)) || 3696 + !(uptodate_for_rmw(dev) || 
3769 3697 test_bit(R5_Wantcompute, &dev->flags)) && 3770 3698 test_bit(R5_Insync, &dev->flags)) { 3771 3699 if (test_bit(STRIPE_PREREAD_ACTIVE, ··· 3790 3722 i != sh->pd_idx && i != sh->qd_idx && 3791 3723 !test_bit(R5_LOCKED, &dev->flags) && 3792 3724 !(test_bit(R5_UPTODATE, &dev->flags) || 3793 - test_bit(R5_InJournal, &dev->flags) || 3794 3725 test_bit(R5_Wantcompute, &dev->flags))) { 3795 3726 rcw++; 3796 3727 if (test_bit(R5_Insync, &dev->flags) && ··· 7092 7025 /* 7093 7026 * 0 for a fully functional array, 1 or 2 for a degraded array. 7094 7027 */ 7095 - mddev->degraded = calc_degraded(conf); 7028 + mddev->degraded = raid5_calc_degraded(conf); 7096 7029 7097 7030 if (has_failed(conf)) { 7098 7031 pr_crit("md/raid:%s: not enough operational devices (%d/%d failed)\n", ··· 7339 7272 } 7340 7273 } 7341 7274 spin_lock_irqsave(&conf->device_lock, flags); 7342 - mddev->degraded = calc_degraded(conf); 7275 + mddev->degraded = raid5_calc_degraded(conf); 7343 7276 spin_unlock_irqrestore(&conf->device_lock, flags); 7344 7277 print_raid5_conf(conf); 7345 7278 return count; ··· 7699 7632 * pre and post number of devices. 7700 7633 */ 7701 7634 spin_lock_irqsave(&conf->device_lock, flags); 7702 - mddev->degraded = calc_degraded(conf); 7635 + mddev->degraded = raid5_calc_degraded(conf); 7703 7636 spin_unlock_irqrestore(&conf->device_lock, flags); 7704 7637 } 7705 7638 mddev->raid_disks = conf->raid_disks; ··· 7787 7720 } else { 7788 7721 int d; 7789 7722 spin_lock_irq(&conf->device_lock); 7790 - mddev->degraded = calc_degraded(conf); 7723 + mddev->degraded = raid5_calc_degraded(conf); 7791 7724 spin_unlock_irq(&conf->device_lock); 7792 7725 for (d = conf->raid_disks ; 7793 7726 d < conf->raid_disks - mddev->delta_disks;
+7
drivers/md/raid5.h
··· 322 322 * data and parity being written are in the journal 323 323 * device 324 324 */ 325 + R5_OrigPageUPTDODATE, /* with write back cache, we read old data into 326 + * dev->orig_page for prexor. When this flag is 327 + * set, orig_page contains latest data in the 328 + * raid disk. 329 + */ 325 330 }; 326 331 327 332 /* ··· 758 753 extern struct stripe_head * 759 754 raid5_get_active_stripe(struct r5conf *conf, sector_t sector, 760 755 int previous, int noblock, int noquiesce); 756 + extern int raid5_calc_degraded(struct r5conf *conf); 761 757 extern int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev); 762 758 extern void r5l_exit_log(struct r5l_log *log); 763 759 extern int r5l_write_stripe(struct r5l_log *log, struct stripe_head *head_sh); ··· 787 781 extern void r5c_check_stripe_cache_usage(struct r5conf *conf); 788 782 extern void r5c_check_cached_full_stripe(struct r5conf *conf); 789 783 extern struct md_sysfs_entry r5c_journal_mode; 784 + extern void r5c_update_on_rdev_error(struct mddev *mddev); 790 785 #endif
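The two predicates introduced by the raid5.c hunks above, delay_towrite() and uptodate_for_rmw(), can be exercised outside the kernel. This standalone sketch mirrors their logic with plain bit masks instead of the kernel's test_bit()/struct r5dev; the flag values are illustrative, only their distinctness matters, and R5_OrigPageUPTDODATE is spelled as in the kernel header:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel stripe flag bits (values are illustrative). */
enum {
    R5_OVERWRITE         = 1u << 0,
    R5_Insync            = 1u << 1,
    R5_InJournal         = 1u << 2,
    R5_UPTODATE          = 1u << 3,
    R5_OrigPageUPTDODATE = 1u << 4, /* spelled as in drivers/md/raid5.h */
};

struct r5dev_sketch  { unsigned flags; };
struct state_sketch  { int injournal; };

/* delay_towrite(): postpone a non-overwrite write to a failed (not in-sync)
 * device while the stripe still has data in the journal, so the read path
 * never needs the prexor-with-orig_page rmw. */
static bool delay_towrite(const struct r5dev_sketch *dev,
                          const struct state_sketch *s)
{
    return !(dev->flags & R5_OVERWRITE) &&
           !(dev->flags & R5_Insync) && s->injournal;
}

/* uptodate_for_rmw(): a page can feed the prexor only if it is up to date
 * and, when journaled, its old contents were read back into orig_page. */
static bool uptodate_for_rmw(const struct r5dev_sketch *dev)
{
    return (dev->flags & R5_UPTODATE) &&
           (!(dev->flags & R5_InJournal) ||
            (dev->flags & R5_OrigPageUPTDODATE));
}
```

This is why the read-completion path sets R5_OrigPageUPTDODATE: without it, a journaled page would never satisfy uptodate_for_rmw() and rmw would re-read it forever.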
+52 -51
drivers/media/cec/cec-adap.c
··· 30 30 31 31 #include "cec-priv.h" 32 32 33 - static int cec_report_features(struct cec_adapter *adap, unsigned int la_idx); 34 - static int cec_report_phys_addr(struct cec_adapter *adap, unsigned int la_idx); 33 + static void cec_fill_msg_report_features(struct cec_adapter *adap, 34 + struct cec_msg *msg, 35 + unsigned int la_idx); 35 36 36 37 /* 37 38 * 400 ms is the time it takes for one 16 byte message to be ··· 289 288 290 289 /* Mark it as an error */ 291 290 data->msg.tx_ts = ktime_get_ns(); 292 - data->msg.tx_status = CEC_TX_STATUS_ERROR | 293 - CEC_TX_STATUS_MAX_RETRIES; 291 + data->msg.tx_status |= CEC_TX_STATUS_ERROR | 292 + CEC_TX_STATUS_MAX_RETRIES; 293 + data->msg.tx_error_cnt++; 294 294 data->attempts = 0; 295 - data->msg.tx_error_cnt = 1; 296 295 /* Queue transmitted message for monitoring purposes */ 297 296 cec_queue_msg_monitor(data->adap, &data->msg, 1); 298 297 ··· 852 851 [CEC_MSG_REQUEST_ARC_TERMINATION] = 2 | DIRECTED, 853 852 [CEC_MSG_TERMINATE_ARC] = 2 | DIRECTED, 854 853 [CEC_MSG_REQUEST_CURRENT_LATENCY] = 4 | BCAST, 855 - [CEC_MSG_REPORT_CURRENT_LATENCY] = 7 | BCAST, 854 + [CEC_MSG_REPORT_CURRENT_LATENCY] = 6 | BCAST, 856 855 [CEC_MSG_CDC_MESSAGE] = 2 | BCAST, 857 856 }; 858 857 ··· 1251 1250 for (i = 1; i < las->num_log_addrs; i++) 1252 1251 las->log_addr[i] = CEC_LOG_ADDR_INVALID; 1253 1252 } 1253 + for (i = las->num_log_addrs; i < CEC_MAX_LOG_ADDRS; i++) 1254 + las->log_addr[i] = CEC_LOG_ADDR_INVALID; 1254 1255 adap->is_configured = true; 1255 1256 adap->is_configuring = false; 1256 1257 cec_post_state_event(adap); 1257 - mutex_unlock(&adap->lock); 1258 1258 1259 + /* 1260 + * Now post the Report Features and Report Physical Address broadcast 1261 + * messages. Note that these are non-blocking transmits, meaning that 1262 + * they are just queued up and once adap->lock is unlocked the main 1263 + * thread will kick in and start transmitting these. 
1264 + * 1265 + * If after this function is done (but before one or more of these 1266 + * messages are actually transmitted) the CEC adapter is unconfigured, 1267 + * then any remaining messages will be dropped by the main thread. 1268 + */ 1259 1269 for (i = 0; i < las->num_log_addrs; i++) { 1270 + struct cec_msg msg = {}; 1271 + 1260 1272 if (las->log_addr[i] == CEC_LOG_ADDR_INVALID || 1261 1273 (las->flags & CEC_LOG_ADDRS_FL_CDC_ONLY)) 1262 1274 continue; 1263 1275 1264 - /* 1265 - * Report Features must come first according 1266 - * to CEC 2.0 1267 - */ 1268 - if (las->log_addr[i] != CEC_LOG_ADDR_UNREGISTERED) 1269 - cec_report_features(adap, i); 1270 - cec_report_phys_addr(adap, i); 1276 + msg.msg[0] = (las->log_addr[i] << 4) | 0x0f; 1277 + 1278 + /* Report Features must come first according to CEC 2.0 */ 1279 + if (las->log_addr[i] != CEC_LOG_ADDR_UNREGISTERED && 1280 + adap->log_addrs.cec_version >= CEC_OP_CEC_VERSION_2_0) { 1281 + cec_fill_msg_report_features(adap, &msg, i); 1282 + cec_transmit_msg_fh(adap, &msg, NULL, false); 1283 + } 1284 + 1285 + /* Report Physical Address */ 1286 + cec_msg_report_physical_addr(&msg, adap->phys_addr, 1287 + las->primary_device_type[i]); 1288 + dprintk(2, "config: la %d pa %x.%x.%x.%x\n", 1289 + las->log_addr[i], 1290 + cec_phys_addr_exp(adap->phys_addr)); 1291 + cec_transmit_msg_fh(adap, &msg, NULL, false); 1271 1292 } 1272 - for (i = las->num_log_addrs; i < CEC_MAX_LOG_ADDRS; i++) 1273 - las->log_addr[i] = CEC_LOG_ADDR_INVALID; 1274 - mutex_lock(&adap->lock); 1275 1293 adap->kthread_config = NULL; 1276 - mutex_unlock(&adap->lock); 1277 1294 complete(&adap->config_completion); 1295 + mutex_unlock(&adap->lock); 1278 1296 return 0; 1279 1297 1280 1298 unconfigure: ··· 1546 1526 1547 1527 /* High-level core CEC message handling */ 1548 1528 1549 - /* Transmit the Report Features message */ 1550 - static int cec_report_features(struct cec_adapter *adap, unsigned int la_idx) 1529 + /* Fill in the Report Features message */ 
1530 + static void cec_fill_msg_report_features(struct cec_adapter *adap, 1531 + struct cec_msg *msg, 1532 + unsigned int la_idx) 1551 1533 { 1552 - struct cec_msg msg = { }; 1553 1534 const struct cec_log_addrs *las = &adap->log_addrs; 1554 1535 const u8 *features = las->features[la_idx]; 1555 1536 bool op_is_dev_features = false; 1556 1537 unsigned int idx; 1557 1538 1558 - /* This is 2.0 and up only */ 1559 - if (adap->log_addrs.cec_version < CEC_OP_CEC_VERSION_2_0) 1560 - return 0; 1561 - 1562 1539 /* Report Features */ 1563 - msg.msg[0] = (las->log_addr[la_idx] << 4) | 0x0f; 1564 - msg.len = 4; 1565 - msg.msg[1] = CEC_MSG_REPORT_FEATURES; 1566 - msg.msg[2] = adap->log_addrs.cec_version; 1567 - msg.msg[3] = las->all_device_types[la_idx]; 1540 + msg->msg[0] = (las->log_addr[la_idx] << 4) | 0x0f; 1541 + msg->len = 4; 1542 + msg->msg[1] = CEC_MSG_REPORT_FEATURES; 1543 + msg->msg[2] = adap->log_addrs.cec_version; 1544 + msg->msg[3] = las->all_device_types[la_idx]; 1568 1545 1569 1546 /* Write RC Profiles first, then Device Features */ 1570 1547 for (idx = 0; idx < ARRAY_SIZE(las->features[0]); idx++) { 1571 - msg.msg[msg.len++] = features[idx]; 1548 + msg->msg[msg->len++] = features[idx]; 1572 1549 if ((features[idx] & CEC_OP_FEAT_EXT) == 0) { 1573 1550 if (op_is_dev_features) 1574 1551 break; 1575 1552 op_is_dev_features = true; 1576 1553 } 1577 1554 } 1578 - return cec_transmit_msg(adap, &msg, false); 1579 - } 1580 - 1581 - /* Transmit the Report Physical Address message */ 1582 - static int cec_report_phys_addr(struct cec_adapter *adap, unsigned int la_idx) 1583 - { 1584 - const struct cec_log_addrs *las = &adap->log_addrs; 1585 - struct cec_msg msg = { }; 1586 - 1587 - /* Report Physical Address */ 1588 - msg.msg[0] = (las->log_addr[la_idx] << 4) | 0x0f; 1589 - cec_msg_report_physical_addr(&msg, adap->phys_addr, 1590 - las->primary_device_type[la_idx]); 1591 - dprintk(2, "config: la %d pa %x.%x.%x.%x\n", 1592 - las->log_addr[la_idx], 1593 - 
cec_phys_addr_exp(adap->phys_addr)); 1594 - return cec_transmit_msg(adap, &msg, false); 1595 1555 } 1596 1556 1597 1557 /* Transmit the Feature Abort message */ ··· 1777 1777 } 1778 1778 1779 1779 case CEC_MSG_GIVE_FEATURES: 1780 - if (adap->log_addrs.cec_version >= CEC_OP_CEC_VERSION_2_0) 1781 - return cec_report_features(adap, la_idx); 1782 - return 0; 1780 + if (adap->log_addrs.cec_version < CEC_OP_CEC_VERSION_2_0) 1781 + return cec_feature_abort(adap, msg); 1782 + cec_fill_msg_report_features(adap, &tx_cec_msg, la_idx); 1783 + return cec_transmit_msg(adap, &tx_cec_msg, false); 1783 1784 1784 1785 default: 1785 1786 /*
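The refactor above splits message construction (cec_fill_msg_report_features) from transmission so the same fill routine serves both the configuration thread and the GIVE_FEATURES reply. A minimal sketch of the fixed header layout it builds, using plain byte arithmetic (the 0xa6 opcode is taken from the CEC 2.0 opcode table; the helper name here is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define CEC_MSG_REPORT_FEATURES 0xa6 /* CEC 2.0 opcode */

/* Fill the fixed 4-byte prefix of a Report Features message: byte 0 is
 * the initiator logical address in the high nibble with the broadcast
 * destination 0xf in the low nibble, then opcode, CEC version and the
 * all-device-types mask. RC profile / device feature bytes follow. */
static unsigned fill_report_features(uint8_t *msg, uint8_t log_addr,
                                     uint8_t cec_version,
                                     uint8_t all_dev_types)
{
    msg[0] = (uint8_t)((log_addr << 4) | 0x0f);
    msg[1] = CEC_MSG_REPORT_FEATURES;
    msg[2] = cec_version;
    msg[3] = all_dev_types;
    return 4; /* length before the variable-length feature bytes */
}
```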
+5 -10
drivers/media/dvb-core/dvb_net.c
··· 719 719 skb_copy_from_linear_data(h->priv->ule_skb, dest_addr, 720 720 ETH_ALEN); 721 721 skb_pull(h->priv->ule_skb, ETH_ALEN); 722 + } else { 723 + /* dest_addr buffer is only valid if h->priv->ule_dbit == 0 */ 724 + eth_zero_addr(dest_addr); 722 725 } 723 726 724 727 /* Handle ULE Extension Headers. */ ··· 753 750 if (!h->priv->ule_bridged) { 754 751 skb_push(h->priv->ule_skb, ETH_HLEN); 755 752 h->ethh = (struct ethhdr *)h->priv->ule_skb->data; 756 - if (!h->priv->ule_dbit) { 757 - /* 758 - * dest_addr buffer is only valid if 759 - * h->priv->ule_dbit == 0 760 - */ 761 - memcpy(h->ethh->h_dest, dest_addr, ETH_ALEN); 762 - eth_zero_addr(h->ethh->h_source); 763 - } else /* zeroize source and dest */ 764 - memset(h->ethh, 0, ETH_ALEN * 2); 765 - 753 + memcpy(h->ethh->h_dest, dest_addr, ETH_ALEN); 754 + eth_zero_addr(h->ethh->h_source); 766 755 h->ethh->h_proto = htons(h->priv->ule_sndu_type); 767 756 } 768 757 /* else: skb is in correct state; nothing to do. */
+1
drivers/media/i2c/Kconfig
··· 655 655 config VIDEO_S5K4ECGX 656 656 tristate "Samsung S5K4ECGX sensor support" 657 657 depends on I2C && VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API 658 + select CRC32 658 659 ---help--- 659 660 This is a V4L2 sensor-level driver for Samsung S5K4ECGX 5M 660 661 camera sensor with an embedded SoC image signal processor.
+12 -21
drivers/media/i2c/smiapp/smiapp-core.c
··· 2741 2741 * I2C Driver 2742 2742 */ 2743 2743 2744 - #ifdef CONFIG_PM 2745 - 2746 - static int smiapp_suspend(struct device *dev) 2744 + static int __maybe_unused smiapp_suspend(struct device *dev) 2747 2745 { 2748 2746 struct i2c_client *client = to_i2c_client(dev); 2749 2747 struct v4l2_subdev *subdev = i2c_get_clientdata(client); ··· 2766 2768 return 0; 2767 2769 } 2768 2770 2769 - static int smiapp_resume(struct device *dev) 2771 + static int __maybe_unused smiapp_resume(struct device *dev) 2770 2772 { 2771 2773 struct i2c_client *client = to_i2c_client(dev); 2772 2774 struct v4l2_subdev *subdev = i2c_get_clientdata(client); ··· 2780 2782 2781 2783 return rval; 2782 2784 } 2783 - 2784 - #else 2785 - 2786 - #define smiapp_suspend NULL 2787 - #define smiapp_resume NULL 2788 - 2789 - #endif /* CONFIG_PM */ 2790 2785 2791 2786 static struct smiapp_hwconfig *smiapp_get_hwconfig(struct device *dev) 2792 2787 { ··· 2904 2913 if (IS_ERR(sensor->xshutdown)) 2905 2914 return PTR_ERR(sensor->xshutdown); 2906 2915 2907 - pm_runtime_enable(&client->dev); 2908 - 2909 - rval = pm_runtime_get_sync(&client->dev); 2910 - if (rval < 0) { 2911 - rval = -ENODEV; 2912 - goto out_power_off; 2913 - } 2916 + rval = smiapp_power_on(&client->dev); 2917 + if (rval < 0) 2918 + return rval; 2914 2919 2915 2920 rval = smiapp_identify_module(sensor); 2916 2921 if (rval) { ··· 3087 3100 if (rval < 0) 3088 3101 goto out_media_entity_cleanup; 3089 3102 3103 + pm_runtime_set_active(&client->dev); 3104 + pm_runtime_get_noresume(&client->dev); 3105 + pm_runtime_enable(&client->dev); 3090 3106 pm_runtime_set_autosuspend_delay(&client->dev, 1000); 3091 3107 pm_runtime_use_autosuspend(&client->dev); 3092 3108 pm_runtime_put_autosuspend(&client->dev); ··· 3103 3113 smiapp_cleanup(sensor); 3104 3114 3105 3115 out_power_off: 3106 - pm_runtime_put(&client->dev); 3107 - pm_runtime_disable(&client->dev); 3116 + smiapp_power_off(&client->dev); 3108 3117 3109 3118 return rval; 3110 3119 } ··· 3116 3127 
3117 3128 v4l2_async_unregister_subdev(subdev); 3118 3129 3119 - pm_runtime_suspend(&client->dev); 3120 3130 pm_runtime_disable(&client->dev); 3131 + if (!pm_runtime_status_suspended(&client->dev)) 3132 + smiapp_power_off(&client->dev); 3133 + pm_runtime_set_suspended(&client->dev); 3121 3134 3122 3135 for (i = 0; i < sensor->ssds_used; i++) { 3123 3136 v4l2_device_unregister_subdev(&sensor->ssds[i].sd);
+34 -20
drivers/media/i2c/tvp5150.c
··· 291 291 tvp5150_write(sd, TVP5150_OP_MODE_CTL, opmode); 292 292 tvp5150_write(sd, TVP5150_VD_IN_SRC_SEL_1, input); 293 293 294 - /* Svideo should enable YCrCb output and disable GPCL output 295 - * For Composite and TV, it should be the reverse 294 + /* 295 + * Setup the FID/GLCO/VLK/HVLK and INTREQ/GPCL/VBLK output signals. For 296 + * S-Video we output the vertical lock (VLK) signal on FID/GLCO/VLK/HVLK 297 + * and set INTREQ/GPCL/VBLK to logic 0. For composite we output the 298 + * field indicator (FID) signal on FID/GLCO/VLK/HVLK and set 299 + * INTREQ/GPCL/VBLK to logic 1. 296 300 */ 297 301 val = tvp5150_read(sd, TVP5150_MISC_CTL); 298 302 if (val < 0) { ··· 305 301 } 306 302 307 303 if (decoder->input == TVP5150_SVIDEO) 308 - val = (val & ~0x40) | 0x10; 304 + val = (val & ~TVP5150_MISC_CTL_GPCL) | TVP5150_MISC_CTL_HVLK; 309 305 else 310 - val = (val & ~0x10) | 0x40; 306 + val = (val & ~TVP5150_MISC_CTL_HVLK) | TVP5150_MISC_CTL_GPCL; 311 307 tvp5150_write(sd, TVP5150_MISC_CTL, val); 312 308 }; 313 309 ··· 459 455 },{ /* Automatic offset and AGC enabled */ 460 456 TVP5150_ANAL_CHL_CTL, 0x15 461 457 },{ /* Activate YCrCb output 0x9 or 0xd ? */ 462 - TVP5150_MISC_CTL, 0x6f 458 + TVP5150_MISC_CTL, TVP5150_MISC_CTL_GPCL | 459 + TVP5150_MISC_CTL_INTREQ_OE | 460 + TVP5150_MISC_CTL_YCBCR_OE | 461 + TVP5150_MISC_CTL_SYNC_OE | 462 + TVP5150_MISC_CTL_VBLANK | 463 + TVP5150_MISC_CTL_CLOCK_OE, 463 464 },{ /* Activates video std autodetection for all standards */ 464 465 TVP5150_AUTOSW_MSK, 0x0 465 466 },{ /* Default format: 0x47. 
For 4:2:2: 0x40 */ ··· 870 861 871 862 f = &format->format; 872 863 873 - tvp5150_reset(sd, 0); 874 - 875 864 f->width = decoder->rect.width; 876 865 f->height = decoder->rect.height / 2; 877 866 ··· 1058 1051 static int tvp5150_s_stream(struct v4l2_subdev *sd, int enable) 1059 1052 { 1060 1053 struct tvp5150 *decoder = to_tvp5150(sd); 1061 - /* Output format: 8-bit ITU-R BT.656 with embedded syncs */ 1062 - int val = 0x09; 1054 + int val; 1063 1055 1064 - /* Output format: 8-bit 4:2:2 YUV with discrete sync */ 1065 - if (decoder->mbus_type == V4L2_MBUS_PARALLEL) 1066 - val = 0x0d; 1056 + /* Enable or disable the video output signals. */ 1057 + val = tvp5150_read(sd, TVP5150_MISC_CTL); 1058 + if (val < 0) 1059 + return val; 1067 1060 1068 - /* Initializes TVP5150 to its default values */ 1069 - /* # set PCLK (27MHz) */ 1070 - tvp5150_write(sd, TVP5150_CONF_SHARED_PIN, 0x00); 1061 + val &= ~(TVP5150_MISC_CTL_YCBCR_OE | TVP5150_MISC_CTL_SYNC_OE | 1062 + TVP5150_MISC_CTL_CLOCK_OE); 1071 1063 1072 - if (enable) 1073 - tvp5150_write(sd, TVP5150_MISC_CTL, val); 1074 - else 1075 - tvp5150_write(sd, TVP5150_MISC_CTL, 0x00); 1064 + if (enable) { 1065 + /* 1066 + * Enable the YCbCr and clock outputs. In discrete sync mode 1067 + * (non-BT.656) additionally enable the sync outputs. 
1068 + */ 1069 + val |= TVP5150_MISC_CTL_YCBCR_OE | TVP5150_MISC_CTL_CLOCK_OE; 1070 + if (decoder->mbus_type == V4L2_MBUS_PARALLEL) 1071 + val |= TVP5150_MISC_CTL_SYNC_OE; 1072 + } 1073 + 1074 + tvp5150_write(sd, TVP5150_MISC_CTL, val); 1076 1075 1077 1076 return 0; 1078 1077 } ··· 1537 1524 res = core->hdl.error; 1538 1525 goto err; 1539 1526 } 1540 - v4l2_ctrl_handler_setup(&core->hdl); 1541 1527 1542 1528 /* Default is no cropping */ 1543 1529 core->rect.top = 0; ··· 1546 1534 core->rect.height = TVP5150_V_MAX_OTHERS; 1547 1535 core->rect.left = 0; 1548 1536 core->rect.width = TVP5150_H_MAX; 1537 + 1538 + tvp5150_reset(sd, 0); /* Calls v4l2_ctrl_handler_setup() */ 1549 1539 1550 1540 res = v4l2_async_register_subdev(sd); 1551 1541 if (res < 0)
+9
drivers/media/i2c/tvp5150_reg.h
··· 9 9 #define TVP5150_ANAL_CHL_CTL 0x01 /* Analog channel controls */ 10 10 #define TVP5150_OP_MODE_CTL 0x02 /* Operation mode controls */ 11 11 #define TVP5150_MISC_CTL 0x03 /* Miscellaneous controls */ 12 + #define TVP5150_MISC_CTL_VBLK_GPCL BIT(7) 13 + #define TVP5150_MISC_CTL_GPCL BIT(6) 14 + #define TVP5150_MISC_CTL_INTREQ_OE BIT(5) 15 + #define TVP5150_MISC_CTL_HVLK BIT(4) 16 + #define TVP5150_MISC_CTL_YCBCR_OE BIT(3) 17 + #define TVP5150_MISC_CTL_SYNC_OE BIT(2) 18 + #define TVP5150_MISC_CTL_VBLANK BIT(1) 19 + #define TVP5150_MISC_CTL_CLOCK_OE BIT(0) 20 + 12 21 #define TVP5150_AUTOSW_MSK 0x04 /* Autoswitch mask: TVP5150A / TVP5150AM */ 13 22 14 23 /* Reserved 05h */
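The new TVP5150_MISC_CTL_* bits above replace the magic numbers in tvp5150.c (the old init value 0x6f, and the 0x09/0x0d stream values). A standalone sketch checks that the named bits reproduce the old constants; misc_ctl_stream() is a hypothetical helper mirroring the new tvp5150_s_stream() masking logic:

```c
#include <assert.h>

#define BIT(n) (1u << (n))

/* The named MISC_CTL bits from tvp5150_reg.h. */
#define TVP5150_MISC_CTL_GPCL      BIT(6)
#define TVP5150_MISC_CTL_INTREQ_OE BIT(5)
#define TVP5150_MISC_CTL_YCBCR_OE  BIT(3)
#define TVP5150_MISC_CTL_SYNC_OE   BIT(2)
#define TVP5150_MISC_CTL_VBLANK    BIT(1)
#define TVP5150_MISC_CTL_CLOCK_OE  BIT(0)

/* Sketch of the new tvp5150_s_stream() logic: clear the output enables,
 * then set them again only while streaming (sync outputs only in
 * discrete-sync, i.e. non-BT.656 parallel, mode). */
static unsigned misc_ctl_stream(unsigned val, int enable, int parallel)
{
    val &= ~(TVP5150_MISC_CTL_YCBCR_OE | TVP5150_MISC_CTL_SYNC_OE |
             TVP5150_MISC_CTL_CLOCK_OE);
    if (enable) {
        val |= TVP5150_MISC_CTL_YCBCR_OE | TVP5150_MISC_CTL_CLOCK_OE;
        if (parallel)
            val |= TVP5150_MISC_CTL_SYNC_OE;
    }
    return val;
}
```

Because only the output-enable bits are touched, routing bits such as GPCL/HVLK set by the input-selection code survive a stream start/stop cycle.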
+2 -6
drivers/media/pci/cobalt/cobalt-driver.c
··· 308 308 static void cobalt_free_msi(struct cobalt *cobalt, struct pci_dev *pci_dev) 309 309 { 310 310 free_irq(pci_dev->irq, (void *)cobalt); 311 - 312 - if (cobalt->msi_enabled) 313 - pci_disable_msi(pci_dev); 311 + pci_free_irq_vectors(pci_dev); 314 312 } 315 313 316 314 static int cobalt_setup_pci(struct cobalt *cobalt, struct pci_dev *pci_dev, ··· 385 387 from being generated. */ 386 388 cobalt_set_interrupt(cobalt, false); 387 389 388 - if (pci_enable_msi_range(pci_dev, 1, 1) < 1) { 390 + if (pci_alloc_irq_vectors(pci_dev, 1, 1, PCI_IRQ_MSI) < 1) { 389 391 cobalt_err("Could not enable MSI\n"); 390 - cobalt->msi_enabled = false; 391 392 ret = -EIO; 392 393 goto err_release; 393 394 } 394 395 msi_config_show(cobalt, pci_dev); 395 - cobalt->msi_enabled = true; 396 396 397 397 /* Register IRQ */ 398 398 if (request_irq(pci_dev->irq, cobalt_irq_handler, IRQF_SHARED,
-2
drivers/media/pci/cobalt/cobalt-driver.h
··· 287 287 u32 irq_none; 288 288 u32 irq_full_fifo; 289 289 290 - bool msi_enabled; 291 - 292 290 /* omnitek dma */ 293 291 int dma_channels; 294 292 int first_fifo_channel;
+72 -61
drivers/media/usb/dvb-usb/pctv452e.c
··· 97 97 u8 c; /* transaction counter, wraps around... */ 98 98 u8 initialized; /* set to 1 if 0x15 has been sent */ 99 99 u16 last_rc_key; 100 - 101 - unsigned char data[80]; 102 100 }; 103 101 104 102 static int tt3650_ci_msg(struct dvb_usb_device *d, u8 cmd, u8 *data, 105 103 unsigned int write_len, unsigned int read_len) 106 104 { 107 105 struct pctv452e_state *state = (struct pctv452e_state *)d->priv; 106 + u8 *buf; 108 107 u8 id; 109 108 unsigned int rlen; 110 109 int ret; ··· 113 114 return -EIO; 114 115 } 115 116 116 - mutex_lock(&state->ca_mutex); 117 + buf = kmalloc(64, GFP_KERNEL); 118 + if (!buf) 119 + return -ENOMEM; 120 + 117 121 id = state->c++; 118 122 119 - state->data[0] = SYNC_BYTE_OUT; 120 - state->data[1] = id; 121 - state->data[2] = cmd; 122 - state->data[3] = write_len; 123 + buf[0] = SYNC_BYTE_OUT; 124 + buf[1] = id; 125 + buf[2] = cmd; 126 + buf[3] = write_len; 123 127 124 - memcpy(state->data + 4, data, write_len); 128 + memcpy(buf + 4, data, write_len); 125 129 126 130 rlen = (read_len > 0) ? 
64 : 0; 127 - ret = dvb_usb_generic_rw(d, state->data, 4 + write_len, 128 - state->data, rlen, /* delay_ms */ 0); 131 + ret = dvb_usb_generic_rw(d, buf, 4 + write_len, 132 + buf, rlen, /* delay_ms */ 0); 129 133 if (0 != ret) 130 134 goto failed; 131 135 132 136 ret = -EIO; 133 - if (SYNC_BYTE_IN != state->data[0] || id != state->data[1]) 137 + if (SYNC_BYTE_IN != buf[0] || id != buf[1]) 134 138 goto failed; 135 139 136 - memcpy(data, state->data + 4, read_len); 140 + memcpy(data, buf + 4, read_len); 137 141 138 - mutex_unlock(&state->ca_mutex); 142 + kfree(buf); 139 143 return 0; 140 144 141 145 failed: 142 146 err("CI error %d; %02X %02X %02X -> %*ph.", 143 - ret, SYNC_BYTE_OUT, id, cmd, 3, state->data); 147 + ret, SYNC_BYTE_OUT, id, cmd, 3, buf); 144 148 145 - mutex_unlock(&state->ca_mutex); 149 + kfree(buf); 146 150 return ret; 147 151 } 148 152 ··· 412 410 u8 *rcv_buf, u8 rcv_len) 413 411 { 414 412 struct pctv452e_state *state = (struct pctv452e_state *)d->priv; 413 + u8 *buf; 415 414 u8 id; 416 415 int ret; 417 416 418 - mutex_lock(&state->ca_mutex); 417 + buf = kmalloc(64, GFP_KERNEL); 418 + if (!buf) 419 + return -ENOMEM; 420 + 419 421 id = state->c++; 420 422 421 423 ret = -EINVAL; 422 424 if (snd_len > 64 - 7 || rcv_len > 64 - 7) 423 425 goto failed; 424 426 425 - state->data[0] = SYNC_BYTE_OUT; 426 - state->data[1] = id; 427 - state->data[2] = PCTV_CMD_I2C; 428 - state->data[3] = snd_len + 3; 429 - state->data[4] = addr << 1; 430 - state->data[5] = snd_len; 431 - state->data[6] = rcv_len; 427 + buf[0] = SYNC_BYTE_OUT; 428 + buf[1] = id; 429 + buf[2] = PCTV_CMD_I2C; 430 + buf[3] = snd_len + 3; 431 + buf[4] = addr << 1; 432 + buf[5] = snd_len; 433 + buf[6] = rcv_len; 432 434 433 - memcpy(state->data + 7, snd_buf, snd_len); 435 + memcpy(buf + 7, snd_buf, snd_len); 434 436 435 - ret = dvb_usb_generic_rw(d, state->data, 7 + snd_len, 436 - state->data, /* rcv_len */ 64, 437 + ret = dvb_usb_generic_rw(d, buf, 7 + snd_len, 438 + buf, /* rcv_len */ 64, 437 439 /* 
delay_ms */ 0); 438 440 if (ret < 0) 439 441 goto failed; 440 442 441 443 /* TT USB protocol error. */ 442 444 ret = -EIO; 443 - if (SYNC_BYTE_IN != state->data[0] || id != state->data[1]) 445 + if (SYNC_BYTE_IN != buf[0] || id != buf[1]) 444 446 goto failed; 445 447 446 448 /* I2C device didn't respond as expected. */ 447 449 ret = -EREMOTEIO; 448 - if (state->data[5] < snd_len || state->data[6] < rcv_len) 450 + if (buf[5] < snd_len || buf[6] < rcv_len) 449 451 goto failed; 450 452 451 - memcpy(rcv_buf, state->data + 7, rcv_len); 452 - mutex_unlock(&state->ca_mutex); 453 + memcpy(rcv_buf, buf + 7, rcv_len); 453 454 455 + kfree(buf); 454 456 return rcv_len; 455 457 456 458 failed: 457 459 err("I2C error %d; %02X %02X %02X %02X %02X -> %*ph", 458 460 ret, SYNC_BYTE_OUT, id, addr << 1, snd_len, rcv_len, 459 - 7, state->data); 461 + 7, buf); 460 462 461 - mutex_unlock(&state->ca_mutex); 463 + kfree(buf); 462 464 return ret; 463 465 } 464 466 ··· 511 505 static int pctv452e_power_ctrl(struct dvb_usb_device *d, int i) 512 506 { 513 507 struct pctv452e_state *state = (struct pctv452e_state *)d->priv; 514 - u8 *rx; 508 + u8 *b0, *rx; 515 509 int ret; 516 510 517 511 info("%s: %d\n", __func__, i); ··· 522 516 if (state->initialized) 523 517 return 0; 524 518 525 - rx = kmalloc(PCTV_ANSWER_LEN, GFP_KERNEL); 526 - if (!rx) 519 + b0 = kmalloc(5 + PCTV_ANSWER_LEN, GFP_KERNEL); 520 + if (!b0) 527 521 return -ENOMEM; 528 522 529 - mutex_lock(&state->ca_mutex); 523 + rx = b0 + 5; 524 + 530 525 /* hmm where shoud this should go? 
*/ 531 526 ret = usb_set_interface(d->udev, 0, ISOC_INTERFACE_ALTERNATIVE); 532 527 if (ret != 0) ··· 535 528 __func__, ret); 536 529 537 530 /* this is a one-time initialization, dont know where to put */ 538 - state->data[0] = 0xaa; 539 - state->data[1] = state->c++; 540 - state->data[2] = PCTV_CMD_RESET; 541 - state->data[3] = 1; 542 - state->data[4] = 0; 531 + b0[0] = 0xaa; 532 + b0[1] = state->c++; 533 + b0[2] = PCTV_CMD_RESET; 534 + b0[3] = 1; 535 + b0[4] = 0; 543 536 /* reset board */ 544 - ret = dvb_usb_generic_rw(d, state->data, 5, rx, PCTV_ANSWER_LEN, 0); 537 + ret = dvb_usb_generic_rw(d, b0, 5, rx, PCTV_ANSWER_LEN, 0); 545 538 if (ret) 546 539 goto ret; 547 540 548 - state->data[1] = state->c++; 549 - state->data[4] = 1; 541 + b0[1] = state->c++; 542 + b0[4] = 1; 550 543 /* reset board (again?) */ 551 - ret = dvb_usb_generic_rw(d, state->data, 5, rx, PCTV_ANSWER_LEN, 0); 544 + ret = dvb_usb_generic_rw(d, b0, 5, rx, PCTV_ANSWER_LEN, 0); 552 545 if (ret) 553 546 goto ret; 554 547 555 548 state->initialized = 1; 556 549 557 550 ret: 558 - mutex_unlock(&state->ca_mutex); 559 - kfree(rx); 551 + kfree(b0); 560 552 return ret; 561 553 } 562 554 563 555 static int pctv452e_rc_query(struct dvb_usb_device *d) 564 556 { 565 557 struct pctv452e_state *state = (struct pctv452e_state *)d->priv; 558 + u8 *b, *rx; 566 559 int ret, i; 567 560 u8 id; 568 561 569 - mutex_lock(&state->ca_mutex); 562 + b = kmalloc(CMD_BUFFER_SIZE + PCTV_ANSWER_LEN, GFP_KERNEL); 563 + if (!b) 564 + return -ENOMEM; 565 + 566 + rx = b + CMD_BUFFER_SIZE; 567 + 570 568 id = state->c++; 571 569 572 570 /* prepare command header */ 573 - state->data[0] = SYNC_BYTE_OUT; 574 - state->data[1] = id; 575 - state->data[2] = PCTV_CMD_IR; 576 - state->data[3] = 0; 571 + b[0] = SYNC_BYTE_OUT; 572 + b[1] = id; 573 + b[2] = PCTV_CMD_IR; 574 + b[3] = 0; 577 575 578 576 /* send ir request */ 579 - ret = dvb_usb_generic_rw(d, state->data, 4, 580 - state->data, PCTV_ANSWER_LEN, 0); 577 + ret = 
dvb_usb_generic_rw(d, b, 4, rx, PCTV_ANSWER_LEN, 0); 581 578 if (ret != 0) 582 579 goto ret; 583 580 584 581 if (debug > 3) { 585 - info("%s: read: %2d: %*ph: ", __func__, ret, 3, state->data); 586 - for (i = 0; (i < state->data[3]) && ((i + 3) < PCTV_ANSWER_LEN); i++) 587 - info(" %02x", state->data[i + 3]); 582 + info("%s: read: %2d: %*ph: ", __func__, ret, 3, rx); 583 + for (i = 0; (i < rx[3]) && ((i+3) < PCTV_ANSWER_LEN); i++) 584 + info(" %02x", rx[i+3]); 588 585 589 586 info("\n"); 590 587 } 591 588 592 - if ((state->data[3] == 9) && (state->data[12] & 0x01)) { 589 + if ((rx[3] == 9) && (rx[12] & 0x01)) { 593 590 /* got a "press" event */ 594 - state->last_rc_key = RC_SCANCODE_RC5(state->data[7], state->data[6]); 591 + state->last_rc_key = RC_SCANCODE_RC5(rx[7], rx[6]); 595 592 if (debug > 2) 596 593 info("%s: cmd=0x%02x sys=0x%02x\n", 597 - __func__, state->data[6], state->data[7]); 594 + __func__, rx[6], rx[7]); 598 595 599 596 rc_keydown(d->rc_dev, RC_TYPE_RC5, state->last_rc_key, 0); 600 597 } else if (state->last_rc_key) { ··· 606 595 state->last_rc_key = 0; 607 596 } 608 597 ret: 609 - mutex_unlock(&state->ca_mutex); 598 + kfree(b); 610 599 return ret; 611 600 } 612 601
+1 -1
drivers/memstick/core/memstick.c
··· 330 330 struct ms_id_register id_reg; 331 331 332 332 if (!(*mrq)) { 333 - memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, NULL, 333 + memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, &id_reg, 334 334 sizeof(struct ms_id_register)); 335 335 *mrq = &card->current_mrq; 336 336 return 0;
+4 -3
drivers/mmc/host/dw_mmc.c
··· 3354 3354 3355 3355 if (!slot) 3356 3356 continue; 3357 - if (slot->mmc->pm_flags & MMC_PM_KEEP_POWER) { 3357 + if (slot->mmc->pm_flags & MMC_PM_KEEP_POWER) 3358 3358 dw_mci_set_ios(slot->mmc, &slot->mmc->ios); 3359 - dw_mci_setup_bus(slot, true); 3360 - } 3359 + 3360 + /* Force setup bus to guarantee available clock output */ 3361 + dw_mci_setup_bus(slot, true); 3361 3362 } 3362 3363 3363 3364 /* Now that slots are all setup, we can enable card detect */
+2
drivers/mtd/nand/Kconfig
··· 426 426 427 427 config MTD_NAND_OXNAS 428 428 tristate "NAND Flash support for Oxford Semiconductor SoC" 429 + depends on ARCH_OXNAS || COMPILE_TEST 429 430 depends on HAS_IOMEM 430 431 help 431 432 This enables the NAND flash controller on Oxford Semiconductor SoCs. ··· 536 535 537 536 config MTD_NAND_FSMC 538 537 tristate "Support for NAND on ST Micros FSMC" 538 + depends on OF 539 539 depends on PLAT_SPEAR || ARCH_NOMADIK || ARCH_U8500 || MACH_U300 540 540 help 541 541 Enables support for NAND Flash chips on the ST Microelectronics
+7 -1
drivers/mtd/nand/fsl_ifc_nand.c
··· 258 258 int bufnum = nctrl->page & priv->bufnum_mask; 259 259 int sector = bufnum * chip->ecc.steps; 260 260 int sector_end = sector + chip->ecc.steps - 1; 261 + __be32 *eccstat_regs; 262 + 263 + if (ctrl->version >= FSL_IFC_VERSION_2_0_0) 264 + eccstat_regs = ifc->ifc_nand.v2_nand_eccstat; 265 + else 266 + eccstat_regs = ifc->ifc_nand.v1_nand_eccstat; 261 267 262 268 for (i = sector / 4; i <= sector_end / 4; i++) 263 - eccstat[i] = ifc_in32(&ifc->ifc_nand.nand_eccstat[i]); 269 + eccstat[i] = ifc_in32(&eccstat_regs[i]); 264 270 265 271 for (i = sector; i <= sector_end; i++) { 266 272 errors = check_read_ecc(mtd, ctrl, eccstat, i);
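The IFC fix above reads the ECC status from a version-dependent register bank instead of a single nand_eccstat array. The selection logic reduces to a pointer choice, sketched here standalone (the struct layout and the version constant's value are illustrative, not the real ifc_regs layout):

```c
#include <assert.h>
#include <stdint.h>

#define FSL_IFC_VERSION_2_0_0 0x02000000u /* illustrative version encoding */

/* Simplified stand-in for the IFC NAND register block: v1.0 and v2.0
 * controllers expose the ECC status words at different offsets. */
struct ifc_nand_regs {
    uint32_t v1_nand_eccstat[4];
    uint32_t v2_nand_eccstat[8];
};

/* Pick the ECC status array matching the controller version, as the fix
 * does, instead of always reading the (v1-only) legacy location. */
static uint32_t *eccstat_base(struct ifc_nand_regs *r, uint32_t version)
{
    return (version >= FSL_IFC_VERSION_2_0_0) ? r->v2_nand_eccstat
                                              : r->v1_nand_eccstat;
}
```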
+132 -21
drivers/mtd/nand/fsmc_nand.c
··· 35 35 #include <linux/mtd/partitions.h>
36 36 #include <linux/io.h>
37 37 #include <linux/slab.h>
38 - #include <linux/mtd/fsmc.h>
39 38 #include <linux/amba/bus.h>
40 39 #include <mtd/mtd-abi.h>
40 +
41 + #define FSMC_NAND_BW8 1
42 + #define FSMC_NAND_BW16 2
43 +
44 + #define FSMC_MAX_NOR_BANKS 4
45 + #define FSMC_MAX_NAND_BANKS 4
46 +
47 + #define FSMC_FLASH_WIDTH8 1
48 + #define FSMC_FLASH_WIDTH16 2
49 +
50 + /* fsmc controller registers for NOR flash */
51 + #define CTRL 0x0
52 + /* ctrl register definitions */
53 + #define BANK_ENABLE (1 << 0)
54 + #define MUXED (1 << 1)
55 + #define NOR_DEV (2 << 2)
56 + #define WIDTH_8 (0 << 4)
57 + #define WIDTH_16 (1 << 4)
58 + #define RSTPWRDWN (1 << 6)
59 + #define WPROT (1 << 7)
60 + #define WRT_ENABLE (1 << 12)
61 + #define WAIT_ENB (1 << 13)
62 +
63 + #define CTRL_TIM 0x4
64 + /* ctrl_tim register definitions */
65 +
66 + #define FSMC_NOR_BANK_SZ 0x8
67 + #define FSMC_NOR_REG_SIZE 0x40
68 +
69 + #define FSMC_NOR_REG(base, bank, reg) (base + \
70 + FSMC_NOR_BANK_SZ * (bank) + \
71 + reg)
72 +
73 + /* fsmc controller registers for NAND flash */
74 + #define PC 0x00
75 + /* pc register definitions */
76 + #define FSMC_RESET (1 << 0)
77 + #define FSMC_WAITON (1 << 1)
78 + #define FSMC_ENABLE (1 << 2)
79 + #define FSMC_DEVTYPE_NAND (1 << 3)
80 + #define FSMC_DEVWID_8 (0 << 4)
81 + #define FSMC_DEVWID_16 (1 << 4)
82 + #define FSMC_ECCEN (1 << 6)
83 + #define FSMC_ECCPLEN_512 (0 << 7)
84 + #define FSMC_ECCPLEN_256 (1 << 7)
85 + #define FSMC_TCLR_1 (1)
86 + #define FSMC_TCLR_SHIFT (9)
87 + #define FSMC_TCLR_MASK (0xF)
88 + #define FSMC_TAR_1 (1)
89 + #define FSMC_TAR_SHIFT (13)
90 + #define FSMC_TAR_MASK (0xF)
91 + #define STS 0x04
92 + /* sts register definitions */
93 + #define FSMC_CODE_RDY (1 << 15)
94 + #define COMM 0x08
95 + /* comm register definitions */
96 + #define FSMC_TSET_0 0
97 + #define FSMC_TSET_SHIFT 0
98 + #define FSMC_TSET_MASK 0xFF
99 + #define FSMC_TWAIT_6 6
100 + #define FSMC_TWAIT_SHIFT 8
101 + #define FSMC_TWAIT_MASK 0xFF
102 + #define FSMC_THOLD_4 4
103 + #define FSMC_THOLD_SHIFT 16
104 + #define FSMC_THOLD_MASK 0xFF
105 + #define FSMC_THIZ_1 1
106 + #define FSMC_THIZ_SHIFT 24
107 + #define FSMC_THIZ_MASK 0xFF
108 + #define ATTRIB 0x0C
109 + #define IOATA 0x10
110 + #define ECC1 0x14
111 + #define ECC2 0x18
112 + #define ECC3 0x1C
113 + #define FSMC_NAND_BANK_SZ 0x20
114 +
115 + #define FSMC_NAND_REG(base, bank, reg) (base + FSMC_NOR_REG_SIZE + \
116 + (FSMC_NAND_BANK_SZ * (bank)) + \
117 + reg)
118 +
119 + #define FSMC_BUSY_WAIT_TIMEOUT (1 * HZ)
120 +
121 + struct fsmc_nand_timings {
122 + uint8_t tclr;
123 + uint8_t tar;
124 + uint8_t thiz;
125 + uint8_t thold;
126 + uint8_t twait;
127 + uint8_t tset;
128 + };
129 +
130 + enum access_mode {
131 + USE_DMA_ACCESS = 1,
132 + USE_WORD_ACCESS,
133 + };
134 +
135 + /**
136 + * fsmc_nand_platform_data - platform specific NAND controller config
137 + * @nand_timings: timing setup for the physical NAND interface
138 + * @partitions: partition table for the platform, use a default fallback
139 + * if this is NULL
140 + * @nr_partitions: the number of partitions in the previous entry
141 + * @options: different options for the driver
142 + * @width: bus width
143 + * @bank: default bank
144 + * @select_bank: callback to select a certain bank, this is
145 + * platform-specific. If the controller only supports one bank
146 + * this may be set to NULL
147 + */
148 + struct fsmc_nand_platform_data {
149 + struct fsmc_nand_timings *nand_timings;
150 + struct mtd_partition *partitions;
151 + unsigned int nr_partitions;
152 + unsigned int options;
153 + unsigned int width;
154 + unsigned int bank;
155 +
156 + enum access_mode mode;
157 +
158 + void (*select_bank)(uint32_t bank, uint32_t busw);
159 +
160 + /* priv structures for dma accesses */
161 + void *read_dma_priv;
162 + void *write_dma_priv;
163 + };
41 164
42 165 static int fsmc_ecc1_ooblayout_ecc(struct mtd_info *mtd, int section,
43 166 struct mtd_oob_region *oobregion)
··· 837 714 return true;
838 715 }
839 716
840 - #ifdef CONFIG_OF
841 717 static int fsmc_nand_probe_config_dt(struct platform_device *pdev,
842 718 struct device_node *np)
843 719 {
··· 879 757 }
880 758 return 0;
881 759 }
882 - #else
883 - static int fsmc_nand_probe_config_dt(struct platform_device *pdev,
884 - struct device_node *np)
885 - {
886 - return -ENOSYS;
887 - }
888 - #endif
889 760
890 761 /*
891 762 * fsmc_nand_probe - Probe function
··· 897 782 u32 pid;
898 783 int i;
899 784
900 - if (np) {
901 - pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
902 - pdev->dev.platform_data = pdata;
903 - ret = fsmc_nand_probe_config_dt(pdev, np);
904 - if (ret) {
905 - dev_err(&pdev->dev, "no platform data\n");
906 - return -ENODEV;
907 - }
908 - }
785 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
786 + if (!pdata)
787 + return -ENOMEM;
909 788
910 - if (!pdata) {
911 - dev_err(&pdev->dev, "platform data is NULL\n");
912 - return -EINVAL;
789 + pdev->dev.platform_data = pdata;
790 + ret = fsmc_nand_probe_config_dt(pdev, np);
791 + if (ret) {
792 + dev_err(&pdev->dev, "no platform data\n");
793 + return -ENODEV;
913 794 }
914 795
915 796 /* Allocate memory for the device structure (and zero it) */
+2 -7
drivers/mtd/nand/lpc32xx_slc.c
··· 797 797 struct resource *rc;
798 798 int res;
799 799
800 - rc = platform_get_resource(pdev, IORESOURCE_MEM, 0);
801 - if (rc == NULL) {
802 - dev_err(&pdev->dev, "No memory resource found for device\n");
803 - return -EBUSY;
804 - }
805 -
806 800 /* Allocate memory for the device structure (and zero it) */
807 801 host = devm_kzalloc(&pdev->dev, sizeof(*host), GFP_KERNEL);
808 802 if (!host)
809 803 return -ENOMEM;
810 - host->io_base_dma = rc->start;
811 804
805 + rc = platform_get_resource(pdev, IORESOURCE_MEM, 0);
812 806 host->io_base = devm_ioremap_resource(&pdev->dev, rc);
813 807 if (IS_ERR(host->io_base))
814 808 return PTR_ERR(host->io_base);
815 809
810 + host->io_base_dma = rc->start;
816 811 if (pdev->dev.of_node)
817 812 host->ncfg = lpc32xx_parse_dt(&pdev->dev);
818 813 if (!host->ncfg) {
-1
drivers/mtd/nand/mtk_nand.c
··· 1383 1383 nfc->regs = devm_ioremap_resource(dev, res);
1384 1384 if (IS_ERR(nfc->regs)) {
1385 1385 ret = PTR_ERR(nfc->regs);
1386 - dev_err(dev, "no nfi base\n");
1387 1386 goto release_ecc;
1388 1387 }
1389 1388
+1
drivers/mtd/nand/nand_ids.c
··· 185 185 {NAND_MFR_SANDISK, "SanDisk"},
186 186 {NAND_MFR_INTEL, "Intel"},
187 187 {NAND_MFR_ATO, "ATO"},
188 + {NAND_MFR_WINBOND, "Winbond"},
188 189 {0x0, "Unknown"}
189 190 };
190 191
+26 -10
drivers/mtd/nand/sunxi_nand.c
··· 321 321
322 322 ret = wait_for_completion_timeout(&nfc->complete,
323 323 msecs_to_jiffies(timeout_ms));
324 + if (!ret)
325 + ret = -ETIMEDOUT;
326 + else
327 + ret = 0;
324 328
325 329 writel(0, nfc->regs + NFC_REG_INT);
326 330 } else {
··· 522 518 u32 tmp;
523 519
524 520 while (len > offs) {
521 + bool poll = false;
522 +
525 523 cnt = min(len - offs, NFC_SRAM_SIZE);
526 524
527 525 ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
··· 534 528 tmp = NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD;
535 529 writel(tmp, nfc->regs + NFC_REG_CMD);
536 530
537 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
531 + /* Arbitrary limit for polling mode */
532 + if (cnt < 64)
533 + poll = true;
534 +
535 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, poll, 0);
538 536 if (ret)
539 537 break;
540 538
··· 561 551 u32 tmp;
562 552
563 553 while (len > offs) {
554 + bool poll = false;
555 +
564 556 cnt = min(len - offs, NFC_SRAM_SIZE);
565 557
566 558 ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
··· 575 563 NFC_ACCESS_DIR;
576 564 writel(tmp, nfc->regs + NFC_REG_CMD);
577 565
578 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
566 + /* Arbitrary limit for polling mode */
567 + if (cnt < 64)
568 + poll = true;
569 +
570 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, poll, 0);
579 571 if (ret)
580 572 break;
581 573
··· 603 587 struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
604 588 struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
605 589 int ret;
606 -
607 - ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
608 - if (ret)
609 - return;
610 590
611 591 if (dat == NAND_CMD_NONE && (ctrl & NAND_NCE) &&
612 592 !(ctrl & (NAND_CLE | NAND_ALE))) {
··· 632 620 if (sunxi_nand->addr_cycles > 4)
633 621 writel(sunxi_nand->addr[1],
634 622 nfc->regs + NFC_REG_ADDR_HIGH);
623 +
624 + ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
625 + if (ret)
626 + return;
635 627
636 628 writel(cmd, nfc->regs + NFC_REG_CMD);
637 629 sunxi_nand->addr[0] = 0;
··· 973 957 writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP,
974 958 nfc->regs + NFC_REG_CMD);
975 959
976 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
960 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
977 961 sunxi_nfc_randomizer_disable(mtd);
978 962 if (ret)
979 963 return ret;
··· 1085 1069 writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
1086 1070 nfc->regs + NFC_REG_CMD);
1087 1071
1088 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
1072 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
1089 1073 if (ret)
1090 1074 dmaengine_terminate_all(nfc->dmac);
1091 1075
··· 1205 1189 NFC_ACCESS_DIR | NFC_ECC_OP,
1206 1190 nfc->regs + NFC_REG_CMD);
1207 1191
1208 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
1192 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
1209 1193 sunxi_nfc_randomizer_disable(mtd);
1210 1194 if (ret)
1211 1195 return ret;
··· 1444 1428 NFC_DATA_TRANS | NFC_ACCESS_DIR,
1445 1429 nfc->regs + NFC_REG_CMD);
1446 1430
1447 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
1431 + ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
1448 1432 if (ret)
1449 1433 dmaengine_terminate_all(nfc->dmac);
1450 1434
+1
drivers/net/can/c_can/c_can_pci.c
··· 161 161
162 162 dev->irq = pdev->irq;
163 163 priv->base = addr;
164 + priv->device = &pdev->dev;
164 165
165 166 if (!c_can_pci_data->freq) {
166 167 dev_err(&pdev->dev, "no clock frequency defined\n");
+12 -4
drivers/net/can/ti_hecc.c
··· 948 948 netif_napi_add(ndev, &priv->napi, ti_hecc_rx_poll,
949 949 HECC_DEF_NAPI_WEIGHT);
950 950
951 - clk_enable(priv->clk);
951 + err = clk_prepare_enable(priv->clk);
952 + if (err) {
953 + dev_err(&pdev->dev, "clk_prepare_enable() failed\n");
954 + goto probe_exit_clk;
955 + }
956 +
952 957 err = register_candev(ndev);
953 958 if (err) {
954 959 dev_err(&pdev->dev, "register_candev() failed\n");
··· 986 981 struct ti_hecc_priv *priv = netdev_priv(ndev);
987 982
988 983 unregister_candev(ndev);
989 - clk_disable(priv->clk);
984 + clk_disable_unprepare(priv->clk);
990 985 clk_put(priv->clk);
991 986 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
992 987 iounmap(priv->base);
··· 1011 1006 hecc_set_bit(priv, HECC_CANMC, HECC_CANMC_PDR);
1012 1007 priv->can.state = CAN_STATE_SLEEPING;
1013 1008
1014 - clk_disable(priv->clk);
1009 + clk_disable_unprepare(priv->clk);
1015 1010
1016 1011 return 0;
1017 1012 }
··· 1020 1015 {
1021 1016 struct net_device *dev = platform_get_drvdata(pdev);
1022 1017 struct ti_hecc_priv *priv = netdev_priv(dev);
1018 + int err;
1023 1019
1024 - clk_enable(priv->clk);
1020 + err = clk_prepare_enable(priv->clk);
1021 + if (err)
1022 + return err;
1025 1023
1026 1024 hecc_clear_bit(priv, HECC_CANMC, HECC_CANMC_PDR);
1027 1025 priv->can.state = CAN_STATE_ERROR_ACTIVE;
+2
drivers/net/ethernet/amd/xgbe/xgbe-common.h
··· 891 891 #define PCS_V1_WINDOW_SELECT 0x03fc
892 892 #define PCS_V2_WINDOW_DEF 0x9060
893 893 #define PCS_V2_WINDOW_SELECT 0x9064
894 + #define PCS_V2_RV_WINDOW_DEF 0x1060
895 + #define PCS_V2_RV_WINDOW_SELECT 0x1064
894 896
895 897 /* PCS register entry bit positions and sizes */
896 898 #define PCS_V2_WINDOW_DEF_OFFSET_INDEX 6
+5 -3
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
··· 1151 1151 offset = pdata->xpcs_window + (mmd_address & pdata->xpcs_window_mask);
1152 1152
1153 1153 spin_lock_irqsave(&pdata->xpcs_lock, flags);
1154 - XPCS32_IOWRITE(pdata, PCS_V2_WINDOW_SELECT, index);
1154 + XPCS32_IOWRITE(pdata, pdata->xpcs_window_sel_reg, index);
1155 1155 mmd_data = XPCS16_IOREAD(pdata, offset);
1156 1156 spin_unlock_irqrestore(&pdata->xpcs_lock, flags);
1157 1157
··· 1183 1183 offset = pdata->xpcs_window + (mmd_address & pdata->xpcs_window_mask);
1184 1184
1185 1185 spin_lock_irqsave(&pdata->xpcs_lock, flags);
1186 - XPCS32_IOWRITE(pdata, PCS_V2_WINDOW_SELECT, index);
1186 + XPCS32_IOWRITE(pdata, pdata->xpcs_window_sel_reg, index);
1187 1187 XPCS16_IOWRITE(pdata, offset, mmd_data);
1188 1188 spin_unlock_irqrestore(&pdata->xpcs_lock, flags);
1189 1189 }
··· 3407 3407
3408 3408 /* Flush Tx queues */
3409 3409 ret = xgbe_flush_tx_queues(pdata);
3410 - if (ret)
3410 + if (ret) {
3411 + netdev_err(pdata->netdev, "error flushing TX queues\n");
3411 3412 return ret;
3413 + }
3412 3414
3413 3415 /*
3414 3416 * Initialize DMA related features
+3 -1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1070 1070
1071 1071 DBGPR("-->xgbe_start\n");
1072 1072
1073 - hw_if->init(pdata);
1073 + ret = hw_if->init(pdata);
1074 + if (ret)
1075 + return ret;
1074 1076
1075 1077 xgbe_napi_enable(pdata, 1);
1076 1078
+14 -1
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
··· 265 265 struct xgbe_prv_data *pdata;
266 266 struct device *dev = &pdev->dev;
267 267 void __iomem * const *iomap_table;
268 + struct pci_dev *rdev;
268 269 unsigned int ma_lo, ma_hi;
269 270 unsigned int reg;
270 271 int bar_mask;
··· 327 326 if (netif_msg_probe(pdata))
328 327 dev_dbg(dev, "xpcs_regs = %p\n", pdata->xpcs_regs);
329 328
329 + /* Set the PCS indirect addressing definition registers */
330 + rdev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(0, 0));
331 + if (rdev &&
332 + (rdev->vendor == PCI_VENDOR_ID_AMD) && (rdev->device == 0x15d0)) {
333 + pdata->xpcs_window_def_reg = PCS_V2_RV_WINDOW_DEF;
334 + pdata->xpcs_window_sel_reg = PCS_V2_RV_WINDOW_SELECT;
335 + } else {
336 + pdata->xpcs_window_def_reg = PCS_V2_WINDOW_DEF;
337 + pdata->xpcs_window_sel_reg = PCS_V2_WINDOW_SELECT;
338 + }
339 + pci_dev_put(rdev);
340 +
330 341 /* Configure the PCS indirect addressing support */
331 - reg = XPCS32_IOREAD(pdata, PCS_V2_WINDOW_DEF);
342 + reg = XPCS32_IOREAD(pdata, pdata->xpcs_window_def_reg);
332 343 pdata->xpcs_window = XPCS_GET_BITS(reg, PCS_V2_WINDOW_DEF, OFFSET);
333 344 pdata->xpcs_window <<= 6;
334 345 pdata->xpcs_window_size = XPCS_GET_BITS(reg, PCS_V2_WINDOW_DEF, SIZE);
+2
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 955 955
956 956 /* XPCS indirect addressing lock */
957 957 spinlock_t xpcs_lock;
958 + unsigned int xpcs_window_def_reg;
959 + unsigned int xpcs_window_sel_reg;
958 960 unsigned int xpcs_window;
959 961 unsigned int xpcs_window_size;
960 962 unsigned int xpcs_window_mask;
+8 -3
drivers/net/ethernet/atheros/alx/main.c
··· 685 685 return -ENOMEM;
686 686 }
687 687
688 - alx_reinit_rings(alx);
689 -
690 688 return 0;
691 689 }
692 690
··· 701 703 if (alx->qnapi[0] && alx->qnapi[0]->rxq)
702 704 kfree(alx->qnapi[0]->rxq->bufs);
703 705
704 - if (!alx->descmem.virt)
706 + if (alx->descmem.virt)
705 707 dma_free_coherent(&alx->hw.pdev->dev,
706 708 alx->descmem.size,
707 709 alx->descmem.virt,
··· 982 984 alx_free_rings(alx);
983 985 alx_free_napis(alx);
984 986 alx_disable_advanced_intr(alx);
987 + alx_init_intr(alx, false);
985 988
986 989 err = alx_alloc_napis(alx);
987 990 if (err)
··· 1239 1240 err = alx_request_irq(alx);
1240 1241 if (err)
1241 1242 goto out_free_rings;
1243 +
1244 + /* must be called after alx_request_irq because the chip stops working
1245 + * if we copy the dma addresses in alx_init_ring_ptrs twice when
1246 + * requesting msi-x interrupts failed
1247 + */
1248 + alx_reinit_rings(alx);
1242 1249
1243 1250 netif_set_real_num_tx_queues(alx->dev, alx->num_txq);
1244 1251 netif_set_real_num_rx_queues(alx->dev, alx->num_rxq);
+4 -2
drivers/net/ethernet/broadcom/bcm63xx_enet.c
··· 913 913 priv->old_link = 0;
914 914 priv->old_duplex = -1;
915 915 priv->old_pause = -1;
916 + } else {
917 + phydev = NULL;
916 918 }
917 919
918 920 /* mask all interrupts and request them */
··· 1085 1083 enet_dmac_writel(priv, priv->dma_chan_int_mask,
1086 1084 ENETDMAC_IRMASK, priv->tx_chan);
1087 1085
1088 - if (priv->has_phy)
1086 + if (phydev)
1089 1087 phy_start(phydev);
1090 1088 else
1091 1089 bcm_enet_adjust_link(dev);
··· 1128 1126 free_irq(dev->irq, dev);
1129 1127
1130 1128 out_phy_disconnect:
1131 - if (priv->has_phy)
1129 + if (phydev)
1132 1130 phy_disconnect(phydev);
1133 1131
1134 1132 return ret;
+47 -33
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1099 1099 {
1100 1100 #ifdef CONFIG_INET
1101 1101 struct tcphdr *th;
1102 - int len, nw_off, tcp_opt_len;
1102 + int len, nw_off, tcp_opt_len = 0;
1103 1103
1104 1104 if (tcp_ts)
1105 1105 tcp_opt_len = 12;
··· 5314 5314 if ((link_info->support_auto_speeds | diff) !=
5315 5315 link_info->support_auto_speeds) {
5316 5316 /* An advertised speed is no longer supported, so we need to
5317 - * update the advertisement settings. See bnxt_reset() for
5318 - * comments about the rtnl_lock() sequence below.
5317 + * update the advertisement settings. Caller holds RTNL
5318 + * so we can modify link settings.
5319 5319 */
5320 - clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
5321 - rtnl_lock();
5322 5320 link_info->advertising = link_info->support_auto_speeds;
5323 - if (test_bit(BNXT_STATE_OPEN, &bp->state) &&
5324 - (link_info->autoneg & BNXT_AUTONEG_SPEED))
5321 + if (link_info->autoneg & BNXT_AUTONEG_SPEED)
5325 5322 bnxt_hwrm_set_link_setting(bp, true, false);
5326 - set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
5327 - rtnl_unlock();
5328 5323 }
5329 5324 return 0;
5330 5325 }
··· 6195 6200 mod_timer(&bp->timer, jiffies + bp->current_interval);
6196 6201 }
6197 6202
6198 - /* Only called from bnxt_sp_task() */
6199 - static void bnxt_reset(struct bnxt *bp, bool silent)
6203 + static void bnxt_rtnl_lock_sp(struct bnxt *bp)
6200 6204 {
6201 - /* bnxt_reset_task() calls bnxt_close_nic() which waits
6202 - * for BNXT_STATE_IN_SP_TASK to clear.
6203 - * If there is a parallel dev_close(), bnxt_close() may be holding
6205 + /* We are called from bnxt_sp_task which has BNXT_STATE_IN_SP_TASK
6206 + * set. If the device is being closed, bnxt_close() may be holding
6204 6207 * rtnl() and waiting for BNXT_STATE_IN_SP_TASK to clear. So we
6205 6208 * must clear BNXT_STATE_IN_SP_TASK before holding rtnl().
6206 6209 */
6207 6210 clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
6208 6211 rtnl_lock();
6209 - if (test_bit(BNXT_STATE_OPEN, &bp->state))
6210 - bnxt_reset_task(bp, silent);
6212 + }
6213 +
6214 + static void bnxt_rtnl_unlock_sp(struct bnxt *bp)
6215 + {
6211 6216 set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
6212 6217 rtnl_unlock();
6218 + }
6219 +
6220 + /* Only called from bnxt_sp_task() */
6221 + static void bnxt_reset(struct bnxt *bp, bool silent)
6222 + {
6223 + bnxt_rtnl_lock_sp(bp);
6224 + if (test_bit(BNXT_STATE_OPEN, &bp->state))
6225 + bnxt_reset_task(bp, silent);
6226 + bnxt_rtnl_unlock_sp(bp);
6213 6227 }
6214 6228
6215 6229 static void bnxt_cfg_ntp_filters(struct bnxt *);
··· 6226 6222 static void bnxt_sp_task(struct work_struct *work)
6227 6223 {
6228 6224 struct bnxt *bp = container_of(work, struct bnxt, sp_task);
6229 - int rc;
6230 6225
6231 6226 set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
6232 6227 smp_mb__after_atomic();
··· 6239 6236
6240 6237 if (test_and_clear_bit(BNXT_RX_NTP_FLTR_SP_EVENT, &bp->sp_event))
6241 6238 bnxt_cfg_ntp_filters(bp);
6242 - if (test_and_clear_bit(BNXT_LINK_CHNG_SP_EVENT, &bp->sp_event)) {
6243 - if (test_and_clear_bit(BNXT_LINK_SPEED_CHNG_SP_EVENT,
6244 - &bp->sp_event))
6245 - bnxt_hwrm_phy_qcaps(bp);
6246 -
6247 - rc = bnxt_update_link(bp, true);
6248 - if (rc)
6249 - netdev_err(bp->dev, "SP task can't update link (rc: %x)\n",
6250 - rc);
6251 - }
6252 6239 if (test_and_clear_bit(BNXT_HWRM_EXEC_FWD_REQ_SP_EVENT, &bp->sp_event))
6253 6240 bnxt_hwrm_exec_fwd_req(bp);
6254 6241 if (test_and_clear_bit(BNXT_VXLAN_ADD_PORT_SP_EVENT, &bp->sp_event)) {
··· 6259 6266 bnxt_hwrm_tunnel_dst_port_free(
6260 6267 bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE);
6261 6268 }
6269 + if (test_and_clear_bit(BNXT_PERIODIC_STATS_SP_EVENT, &bp->sp_event))
6270 + bnxt_hwrm_port_qstats(bp);
6271 +
6272 + /* These functions below will clear BNXT_STATE_IN_SP_TASK. They
6273 + * must be the last functions to be called before exiting.
6274 + */
6275 + if (test_and_clear_bit(BNXT_LINK_CHNG_SP_EVENT, &bp->sp_event)) {
6276 + int rc = 0;
6277 +
6278 + if (test_and_clear_bit(BNXT_LINK_SPEED_CHNG_SP_EVENT,
6279 + &bp->sp_event))
6280 + bnxt_hwrm_phy_qcaps(bp);
6281 +
6282 + bnxt_rtnl_lock_sp(bp);
6283 + if (test_bit(BNXT_STATE_OPEN, &bp->state))
6284 + rc = bnxt_update_link(bp, true);
6285 + bnxt_rtnl_unlock_sp(bp);
6286 + if (rc)
6287 + netdev_err(bp->dev, "SP task can't update link (rc: %x)\n",
6288 + rc);
6289 + }
6290 + if (test_and_clear_bit(BNXT_HWRM_PORT_MODULE_SP_EVENT, &bp->sp_event)) {
6291 + bnxt_rtnl_lock_sp(bp);
6292 + if (test_bit(BNXT_STATE_OPEN, &bp->state))
6293 + bnxt_get_port_module_status(bp);
6294 + bnxt_rtnl_unlock_sp(bp);
6295 + }
6262 6296 if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event))
6263 6297 bnxt_reset(bp, false);
6264 6298
6265 6299 if (test_and_clear_bit(BNXT_RESET_TASK_SILENT_SP_EVENT, &bp->sp_event))
6266 6300 bnxt_reset(bp, true);
6267 -
6268 - if (test_and_clear_bit(BNXT_HWRM_PORT_MODULE_SP_EVENT, &bp->sp_event))
6269 - bnxt_get_port_module_status(bp);
6270 -
6271 - if (test_and_clear_bit(BNXT_PERIODIC_STATS_SP_EVENT, &bp->sp_event))
6272 - bnxt_hwrm_port_qstats(bp);
6273 6301
6274 6302 smp_mb__before_atomic();
6275 6303 clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+1 -1
drivers/net/ethernet/freescale/gianfar.c
··· 2948 2948 }
2949 2949
2950 2950 /* try reuse page */
2951 - if (unlikely(page_count(page) != 1))
2951 + if (unlikely(page_count(page) != 1 || page_is_pfmemalloc(page)))
2952 2952 return false;
2953 2953
2954 2954 /* change offset to the other half */
+5 -2
drivers/net/ethernet/ibm/ibmveth.c
··· 1601 1601 netdev->netdev_ops = &ibmveth_netdev_ops;
1602 1602 netdev->ethtool_ops = &netdev_ethtool_ops;
1603 1603 SET_NETDEV_DEV(netdev, &dev->dev);
1604 - netdev->hw_features = NETIF_F_SG | NETIF_F_RXCSUM |
1605 - NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
1604 + netdev->hw_features = NETIF_F_SG;
1605 + if (vio_get_attribute(dev, "ibm,illan-options", NULL) != NULL) {
1606 + netdev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
1607 + NETIF_F_RXCSUM;
1608 + }
1606 1609
1607 1610 netdev->features |= netdev->hw_features;
1608 1611
+1 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 2517 2517 }
2518 2518
2519 2519 const struct of_device_id of_mtk_match[] = {
2520 - { .compatible = "mediatek,mt7623-eth" },
2520 + { .compatible = "mediatek,mt2701-eth" },
2521 2521 {},
2522 2522 };
2523 2523 MODULE_DEVICE_TABLE(of, of_mtk_match);
+1 -6
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1732 1732 {
1733 1733 struct mlx4_en_priv *priv = netdev_priv(dev);
1734 1734
1735 - memset(channel, 0, sizeof(*channel));
1736 -
1737 1735 channel->max_rx = MAX_RX_RINGS;
1738 1736 channel->max_tx = MLX4_EN_MAX_TX_RING_P_UP;
1739 1737
··· 1750 1752 int xdp_count;
1751 1753 int err = 0;
1752 1754
1753 - if (channel->other_count || channel->combined_count ||
1754 - channel->tx_count > MLX4_EN_MAX_TX_RING_P_UP ||
1755 - channel->rx_count > MAX_RX_RINGS ||
1756 - !channel->tx_count || !channel->rx_count)
1755 + if (!channel->tx_count || !channel->rx_count)
1757 1756 return -EINVAL;
1758 1757
1759 1758 tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
-11
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 543 543 struct ethtool_channels *ch)
544 544 {
545 545 struct mlx5e_priv *priv = netdev_priv(dev);
546 - int ncv = mlx5e_get_max_num_channels(priv->mdev);
547 546 unsigned int count = ch->combined_count;
548 547 bool arfs_enabled;
549 548 bool was_opened;
··· 551 552 if (!count) {
552 553 netdev_info(dev, "%s: combined_count=0 not supported\n",
553 554 __func__);
554 - return -EINVAL;
555 - }
556 - if (ch->rx_count || ch->tx_count) {
557 - netdev_info(dev, "%s: separate rx/tx count not supported\n",
558 - __func__);
559 - return -EINVAL;
560 - }
561 - if (count > ncv) {
562 - netdev_info(dev, "%s: count (%d) > max (%d)\n",
563 - __func__, count, ncv);
564 555 return -EINVAL;
565 556 }
566 557
+3
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 193 193 return false;
194 194 }
195 195
196 + if (unlikely(page_is_pfmemalloc(dma_info->page)))
197 + return false;
198 +
196 199 cache->page_cache[cache->tail] = *dma_info;
197 200 cache->tail = tail_next;
198 201 return true;
+6 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 1172 1172
1173 1173 static int
1174 1174 mlxsw_sp_nexthop_group_mac_update(struct mlxsw_sp *mlxsw_sp,
1175 - struct mlxsw_sp_nexthop_group *nh_grp)
1175 + struct mlxsw_sp_nexthop_group *nh_grp,
1176 + bool reallocate)
1176 1177 {
1177 1178 u32 adj_index = nh_grp->adj_index; /* base */
1178 1179 struct mlxsw_sp_nexthop *nh;
··· 1188 1187 continue;
1189 1188 }
1190 1189
1191 - if (nh->update) {
1190 + if (nh->update || reallocate) {
1192 1191 err = mlxsw_sp_nexthop_mac_update(mlxsw_sp,
1193 1192 adj_index, nh);
1194 1193 if (err)
··· 1249 1248 /* Nothing was added or removed, so no need to reallocate. Just
1250 1249 * update MAC on existing adjacency indexes.
1251 1250 */
1252 - err = mlxsw_sp_nexthop_group_mac_update(mlxsw_sp, nh_grp);
1251 + err = mlxsw_sp_nexthop_group_mac_update(mlxsw_sp, nh_grp,
1252 + false);
1253 1253 if (err) {
1254 1254 dev_warn(mlxsw_sp->bus_info->dev, "Failed to update neigh MAC in adjacency table.\n");
1255 1255 goto set_trap;
··· 1278 1276 nh_grp->adj_index_valid = 1;
1279 1277 nh_grp->adj_index = adj_index;
1280 1278 nh_grp->ecmp_size = ecmp_size;
1281 - err = mlxsw_sp_nexthop_group_mac_update(mlxsw_sp, nh_grp);
1279 + err = mlxsw_sp_nexthop_group_mac_update(mlxsw_sp, nh_grp, true);
1282 1280 if (err) {
1283 1281 dev_warn(mlxsw_sp->bus_info->dev, "Failed to update neigh MAC in adjacency table.\n");
1284 1282 goto set_trap;
+37 -49
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 297 297 list_del(&p_pkt->list_entry);
298 298 b_last_packet = list_empty(&p_tx->active_descq);
299 299 list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
300 - if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
300 + if (p_ll2_conn->conn.conn_type == QED_LL2_TYPE_ISCSI_OOO) {
301 301 struct qed_ooo_buffer *p_buffer;
302 302
303 303 p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
··· 309 309 b_last_frag =
310 310 p_tx->cur_completing_bd_idx == p_pkt->bd_used;
311 311 tx_frag = p_pkt->bds_set[0].tx_frag;
312 - if (p_ll2_conn->gsi_enable)
312 + if (p_ll2_conn->conn.gsi_enable)
313 313 qed_ll2b_release_tx_gsi_packet(p_hwfn,
314 314 p_ll2_conn->
315 315 my_id,
··· 378 378
379 379 spin_unlock_irqrestore(&p_tx->lock, flags);
380 380 tx_frag = p_pkt->bds_set[0].tx_frag;
381 - if (p_ll2_conn->gsi_enable)
381 + if (p_ll2_conn->conn.gsi_enable)
382 382 qed_ll2b_complete_tx_gsi_packet(p_hwfn,
383 383 p_ll2_conn->my_id,
384 384 p_pkt->cookie,
··· 550 550
551 551 list_move_tail(&p_pkt->list_entry, &p_rx->free_descq);
552 552
553 - if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
553 + if (p_ll2_conn->conn.conn_type == QED_LL2_TYPE_ISCSI_OOO) {
554 554 struct qed_ooo_buffer *p_buffer;
555 555
556 556 p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
··· 738 738 rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
739 739 p_buffer->vlan, bd_flags,
740 740 l4_hdr_offset_w,
741 - p_ll2_conn->tx_dest, 0,
741 + p_ll2_conn->conn.tx_dest, 0,
742 742 first_frag,
743 743 p_buffer->packet_length,
744 744 p_buffer, true);
··· 858 858 u16 buf_idx;
859 859 int rc = 0;
860 860
861 - if (p_ll2_info->conn_type != QED_LL2_TYPE_ISCSI_OOO)
861 + if (p_ll2_info->conn.conn_type != QED_LL2_TYPE_ISCSI_OOO)
862 862 return rc;
863 863
864 864 if (!rx_num_ooo_buffers)
··· 901 901 qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
902 902 struct qed_ll2_info *p_ll2_conn)
903 903 {
904 - if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
904 + if (p_ll2_conn->conn.conn_type != QED_LL2_TYPE_ISCSI_OOO)
905 905 return;
906 906
907 907 qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
··· 913 913 {
914 914 struct qed_ooo_buffer *p_buffer;
915 915
916 - if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
916 + if (p_ll2_conn->conn.conn_type != QED_LL2_TYPE_ISCSI_OOO)
917 917 return;
918 918
919 919 qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
··· 945 945 {
946 946 struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
947 947 u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
948 - struct qed_ll2_info *ll2_info;
948 + struct qed_ll2_conn ll2_info;
949 949 int rc;
950 950
951 - ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
952 - if (!ll2_info)
953 - return -ENOMEM;
954 - ll2_info->conn_type = QED_LL2_TYPE_ISCSI_OOO;
955 - ll2_info->mtu = params->mtu;
956 - ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
957 - ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
958 - ll2_info->tx_tc = OOO_LB_TC;
959 - ll2_info->tx_dest = CORE_TX_DEST_LB;
951 + ll2_info.conn_type = QED_LL2_TYPE_ISCSI_OOO;
952 + ll2_info.mtu = params->mtu;
953 + ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets;
954 + ll2_info.rx_vlan_removal_en = params->rx_vlan_stripping;
955 + ll2_info.tx_tc = OOO_LB_TC;
956 + ll2_info.tx_dest = CORE_TX_DEST_LB;
960 957
961 - rc = qed_ll2_acquire_connection(hwfn, ll2_info,
958 + rc = qed_ll2_acquire_connection(hwfn, &ll2_info,
962 959 QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
963 960 handle);
964 - kfree(ll2_info);
965 961 if (rc) {
966 962 DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n");
967 963 goto out;
··· 1002 1006 struct qed_ll2_info *p_ll2_conn,
1003 1007 u8 action_on_error)
1004 1008 {
1005 - enum qed_ll2_conn_type conn_type = p_ll2_conn->conn_type;
1009 + enum qed_ll2_conn_type conn_type = p_ll2_conn->conn.conn_type;
1006 1010 struct qed_ll2_rx_queue *p_rx = &p_ll2_conn->rx_queue;
1007 1011 struct core_rx_start_ramrod_data *p_ramrod = NULL;
1008 1012 struct qed_spq_entry *p_ent = NULL;
··· 1028 1032 p_ramrod->sb_index = p_rx->rx_sb_index;
1029 1033 p_ramrod->complete_event_flg = 1;
1030 1034
1031 - p_ramrod->mtu = cpu_to_le16(p_ll2_conn->mtu);
1035 + p_ramrod->mtu = cpu_to_le16(p_ll2_conn->conn.mtu);
1032 1036 DMA_REGPAIR_LE(p_ramrod->bd_base,
1033 1037 p_rx->rxq_chain.p_phys_addr);
1034 1038 cqe_pbl_size = (u16)qed_chain_get_page_cnt(&p_rx->rcq_chain);
··· 1036 1040 DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr,
1037 1041 qed_chain_get_pbl_phys(&p_rx->rcq_chain));
1038 1042
1039 - p_ramrod->drop_ttl0_flg = p_ll2_conn->rx_drop_ttl0_flg;
1040 - p_ramrod->inner_vlan_removal_en = p_ll2_conn->rx_vlan_removal_en;
1043 + p_ramrod->drop_ttl0_flg = p_ll2_conn->conn.rx_drop_ttl0_flg;
1044 + p_ramrod->inner_vlan_removal_en = p_ll2_conn->conn.rx_vlan_removal_en;
1041 1045 p_ramrod->queue_id = p_ll2_conn->queue_id;
1042 1046 p_ramrod->main_func_queue = (conn_type == QED_LL2_TYPE_ISCSI_OOO) ? 0
1043 1047 : 1;
··· 1052 1056 }
1053 1057
1054 1058 p_ramrod->action_on_error.error_type = action_on_error;
1055 - p_ramrod->gsi_offload_flag = p_ll2_conn->gsi_enable;
1059 + p_ramrod->gsi_offload_flag = p_ll2_conn->conn.gsi_enable;
1056 1060 return qed_spq_post(p_hwfn, p_ent, NULL);
1057 1061 }
1058 1062
1059 1063 static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
1060 1064 struct qed_ll2_info *p_ll2_conn)
1061 1065 {
1062 - enum qed_ll2_conn_type conn_type = p_ll2_conn->conn_type;
1066 + enum qed_ll2_conn_type conn_type = p_ll2_conn->conn.conn_type;
1063 1067 struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue;
1064 1068 struct core_tx_start_ramrod_data *p_ramrod = NULL;
1065 1069 struct qed_spq_entry *p_ent = NULL;
··· 1071 1075 if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
1072 1076 return 0;
1073 1077
1074 - if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
1078 + if (p_ll2_conn->conn.conn_type == QED_LL2_TYPE_ISCSI_OOO)
1075 1079 p_ll2_conn->tx_stats_en = 0;
1076 1080 else
1077 1081 p_ll2_conn->tx_stats_en = 1;
··· 1092 1096
1093 1097 p_ramrod->sb_id = cpu_to_le16(qed_int_get_sp_sb_id(p_hwfn));
1094 1098 p_ramrod->sb_index = p_tx->tx_sb_index;
1095 - p_ramrod->mtu = cpu_to_le16(p_ll2_conn->mtu);
1099 + p_ramrod->mtu = cpu_to_le16(p_ll2_conn->conn.mtu);
1096 1100 p_ramrod->stats_en = p_ll2_conn->tx_stats_en;
1097 1101 p_ramrod->stats_id = p_ll2_conn->tx_stats_id;
··· 1102 1106 p_ramrod->pbl_size = cpu_to_le16(pbl_size);
1103 1107
1104 1108 memset(&pq_params, 0, sizeof(pq_params));
1105 - pq_params.core.tc = p_ll2_conn->tx_tc;
1109 + pq_params.core.tc = p_ll2_conn->conn.tx_tc;
1106 1110 pq_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
1107 1111 p_ramrod->qm_pq_id = cpu_to_le16(pq_id);
··· 1119 1123 DP_NOTICE(p_hwfn, "Unknown connection type: %d\n", conn_type);
1120 1124 }
1121 1125
1122 - p_ramrod->gsi_offload_flag = p_ll2_conn->gsi_enable;
1126 + p_ramrod->gsi_offload_flag = p_ll2_conn->conn.gsi_enable;
1123 1127 return qed_spq_post(p_hwfn, p_ent, NULL);
1124 1128 }
1125 1129
··· 1220 1224
1221 1225 DP_VERBOSE(p_hwfn, QED_MSG_LL2,
1222 1226 "Allocated LL2 Rxq [Type %08x] with 0x%08x buffers\n",
1223 - p_ll2_info->conn_type, rx_num_desc);
1227 + p_ll2_info->conn.conn_type, rx_num_desc);
1224 1228
1225 1229 out:
1226 1230 return rc;
··· 1258 1262
1259 1263 DP_VERBOSE(p_hwfn, QED_MSG_LL2,
1260 1264 "Allocated LL2 Txq [Type %08x] with 0x%08x buffers\n",
1261 - p_ll2_info->conn_type, tx_num_desc);
1265 + p_ll2_info->conn.conn_type, tx_num_desc);
1262 1266
1263 1267 out:
1264 1268 if (rc)
··· 1269 1273 }
1270 1274
1271 1275 int qed_ll2_acquire_connection(struct qed_hwfn *p_hwfn,
1272 - struct qed_ll2_info *p_params,
1276 + struct qed_ll2_conn *p_params,
1273 1277 u16 rx_num_desc,
1274 1278 u16 tx_num_desc,
1275 1279 u8 *p_connection_handle)
··· 1298 1302 if (!p_ll2_info)
1299 1303 return -EBUSY;
1300 1304
1301 - p_ll2_info->conn_type = p_params->conn_type;
1302 - p_ll2_info->mtu = p_params->mtu;
1303 - p_ll2_info->rx_drop_ttl0_flg = p_params->rx_drop_ttl0_flg;
1304 -
p_ll2_info->rx_vlan_removal_en = p_params->rx_vlan_removal_en; 1305 - p_ll2_info->tx_tc = p_params->tx_tc; 1306 - p_ll2_info->tx_dest = p_params->tx_dest; 1307 - p_ll2_info->ai_err_packet_too_big = p_params->ai_err_packet_too_big; 1308 - p_ll2_info->ai_err_no_buf = p_params->ai_err_no_buf; 1309 - p_ll2_info->gsi_enable = p_params->gsi_enable; 1305 + p_ll2_info->conn = *p_params; 1310 1306 1311 1307 rc = qed_ll2_acquire_connection_rx(p_hwfn, p_ll2_info, rx_num_desc); 1312 1308 if (rc) ··· 1359 1371 1360 1372 SET_FIELD(action_on_error, 1361 1373 CORE_RX_ACTION_ON_ERROR_PACKET_TOO_BIG, 1362 - p_ll2_conn->ai_err_packet_too_big); 1374 + p_ll2_conn->conn.ai_err_packet_too_big); 1363 1375 SET_FIELD(action_on_error, 1364 - CORE_RX_ACTION_ON_ERROR_NO_BUFF, p_ll2_conn->ai_err_no_buf); 1376 + CORE_RX_ACTION_ON_ERROR_NO_BUFF, p_ll2_conn->conn.ai_err_no_buf); 1365 1377 1366 1378 return qed_sp_ll2_rx_queue_start(p_hwfn, p_ll2_conn, action_on_error); 1367 1379 } ··· 1588 1600 "LL2 [q 0x%02x cid 0x%08x type 0x%08x] Tx Producer at [0x%04x] - set with a %04x bytes %02x BDs buffer at %08x:%08x\n", 1589 1601 p_ll2->queue_id, 1590 1602 p_ll2->cid, 1591 - p_ll2->conn_type, 1603 + p_ll2->conn.conn_type, 1592 1604 prod_idx, 1593 1605 first_frag_len, 1594 1606 num_of_bds, ··· 1664 1676 (NETIF_MSG_TX_QUEUED | QED_MSG_LL2), 1665 1677 "LL2 [q 0x%02x cid 0x%08x type 0x%08x] Doorbelled [producer 0x%04x]\n", 1666 1678 p_ll2_conn->queue_id, 1667 - p_ll2_conn->cid, p_ll2_conn->conn_type, db_msg.spq_prod); 1679 + p_ll2_conn->cid, p_ll2_conn->conn.conn_type, db_msg.spq_prod); 1668 1680 } 1669 1681 1670 1682 int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn, ··· 1805 1817 qed_ll2_rxq_flush(p_hwfn, connection_handle); 1806 1818 } 1807 1819 1808 - if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) 1820 + if (p_ll2_conn->conn.conn_type == QED_LL2_TYPE_ISCSI_OOO) 1809 1821 qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info); 1810 1822 1811 1823 return rc; ··· 1981 1993 1982 1994 static int 
qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) 1983 1995 { 1984 - struct qed_ll2_info ll2_info; 1996 + struct qed_ll2_conn ll2_info; 1985 1997 struct qed_ll2_buffer *buffer, *tmp_buffer; 1986 1998 enum qed_ll2_conn_type conn_type; 1987 1999 struct qed_ptt *p_ptt; ··· 2029 2041 2030 2042 /* Prepare the temporary ll2 information */ 2031 2043 memset(&ll2_info, 0, sizeof(ll2_info)); 2044 + 2032 2045 ll2_info.conn_type = conn_type; 2033 2046 ll2_info.mtu = params->mtu; 2034 2047 ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets; ··· 2109 2120 } 2110 2121 2111 2122 ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address); 2112 - 2113 2123 return 0; 2114 2124 2115 2125 release_terminate_all:
+14 -10
drivers/net/ethernet/qlogic/qed/qed_ll2.h
··· 112 112 bool b_completing_packet; 113 113 }; 114 114 115 - struct qed_ll2_info { 116 - /* Lock protecting the state of LL2 */ 117 - struct mutex mutex; 115 + struct qed_ll2_conn { 118 116 enum qed_ll2_conn_type conn_type; 119 - u32 cid; 120 - u8 my_id; 121 - u8 queue_id; 122 - u8 tx_stats_id; 123 - bool b_active; 124 117 u16 mtu; 125 118 u8 rx_drop_ttl0_flg; 126 119 u8 rx_vlan_removal_en; ··· 121 128 enum core_tx_dest tx_dest; 122 129 enum core_error_handle ai_err_packet_too_big; 123 130 enum core_error_handle ai_err_no_buf; 131 + u8 gsi_enable; 132 + }; 133 + 134 + struct qed_ll2_info { 135 + /* Lock protecting the state of LL2 */ 136 + struct mutex mutex; 137 + struct qed_ll2_conn conn; 138 + u32 cid; 139 + u8 my_id; 140 + u8 queue_id; 141 + u8 tx_stats_id; 142 + bool b_active; 124 143 u8 tx_stats_en; 125 144 struct qed_ll2_rx_queue rx_queue; 126 145 struct qed_ll2_tx_queue tx_queue; 127 - u8 gsi_enable; 128 146 }; 129 147 130 148 /** ··· 153 149 * @return 0 on success, failure otherwise 154 150 */ 155 151 int qed_ll2_acquire_connection(struct qed_hwfn *p_hwfn, 156 - struct qed_ll2_info *p_params, 152 + struct qed_ll2_conn *p_params, 157 153 u16 rx_num_desc, 158 154 u16 tx_num_desc, 159 155 u8 *p_connection_handle);
+1 -1
drivers/net/ethernet/qlogic/qed/qed_roce.c
··· 2632 2632 { 2633 2633 struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); 2634 2634 struct qed_roce_ll2_info *roce_ll2; 2635 - struct qed_ll2_info ll2_params; 2635 + struct qed_ll2_conn ll2_params; 2636 2636 int rc; 2637 2637 2638 2638 if (!params) {
+64 -48
drivers/net/ethernet/renesas/ravb_main.c
··· 179 179 .get_mdio_data = ravb_get_mdio_data, 180 180 }; 181 181 182 + /* Free TX skb function for AVB-IP */ 183 + static int ravb_tx_free(struct net_device *ndev, int q, bool free_txed_only) 184 + { 185 + struct ravb_private *priv = netdev_priv(ndev); 186 + struct net_device_stats *stats = &priv->stats[q]; 187 + struct ravb_tx_desc *desc; 188 + int free_num = 0; 189 + int entry; 190 + u32 size; 191 + 192 + for (; priv->cur_tx[q] - priv->dirty_tx[q] > 0; priv->dirty_tx[q]++) { 193 + bool txed; 194 + 195 + entry = priv->dirty_tx[q] % (priv->num_tx_ring[q] * 196 + NUM_TX_DESC); 197 + desc = &priv->tx_ring[q][entry]; 198 + txed = desc->die_dt == DT_FEMPTY; 199 + if (free_txed_only && !txed) 200 + break; 201 + /* Descriptor type must be checked before all other reads */ 202 + dma_rmb(); 203 + size = le16_to_cpu(desc->ds_tagl) & TX_DS; 204 + /* Free the original skb. */ 205 + if (priv->tx_skb[q][entry / NUM_TX_DESC]) { 206 + dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), 207 + size, DMA_TO_DEVICE); 208 + /* Last packet descriptor? */ 209 + if (entry % NUM_TX_DESC == NUM_TX_DESC - 1) { 210 + entry /= NUM_TX_DESC; 211 + dev_kfree_skb_any(priv->tx_skb[q][entry]); 212 + priv->tx_skb[q][entry] = NULL; 213 + if (txed) 214 + stats->tx_packets++; 215 + } 216 + free_num++; 217 + } 218 + if (txed) 219 + stats->tx_bytes += size; 220 + desc->die_dt = DT_EEMPTY; 221 + } 222 + return free_num; 223 + } 224 + 182 225 /* Free skb's and DMA buffers for Ethernet AVB */ 183 226 static void ravb_ring_free(struct net_device *ndev, int q) 184 227 {
··· 237 194 kfree(priv->rx_skb[q]); 238 195 priv->rx_skb[q] = NULL; 239 196 240 - /* Free TX skb ringbuffer */ 241 - if (priv->tx_skb[q]) { 242 - for (i = 0; i < priv->num_tx_ring[q]; i++) 243 - dev_kfree_skb(priv->tx_skb[q][i]); 244 - } 245 - kfree(priv->tx_skb[q]); 246 - priv->tx_skb[q] = NULL; 247 - 248 197 /* Free aligned TX buffers */ 249 198 kfree(priv->tx_align[q]); 250 199 priv->tx_align[q] = NULL; 251 200 252 201 if (priv->rx_ring[q]) { 202 + for (i = 0; i < priv->num_rx_ring[q]; i++) { 203 + struct ravb_ex_rx_desc *desc = &priv->rx_ring[q][i]; 204 + 205 + if (!dma_mapping_error(ndev->dev.parent, 206 + le32_to_cpu(desc->dptr))) 207 + dma_unmap_single(ndev->dev.parent, 208 + le32_to_cpu(desc->dptr), 209 + PKT_BUF_SZ, 210 + DMA_FROM_DEVICE); 211 + } 253 212 ring_size = sizeof(struct ravb_ex_rx_desc) * 254 213 (priv->num_rx_ring[q] + 1); 255 214 dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q],
··· 260 215 } 261 216 262 217 if (priv->tx_ring[q]) { 218 + ravb_tx_free(ndev, q, false); 219 + 263 220 ring_size = sizeof(struct ravb_tx_desc) * 264 221 (priv->num_tx_ring[q] * NUM_TX_DESC + 1); 265 222 dma_free_coherent(ndev->dev.parent, ring_size, priv->tx_ring[q], 266 223 priv->tx_desc_dma[q]); 267 224 priv->tx_ring[q] = NULL; 268 225 } 226 + 227 + /* Free TX skb ringbuffer. 228 + * SKBs are freed by ravb_tx_free() call above. 229 + */ 230 + kfree(priv->tx_skb[q]); 231 + priv->tx_skb[q] = NULL; 269 232 } 270 233 271 234 /* Format skb and descriptor buffer for Ethernet AVB */
··· 482 429 ravb_modify(ndev, CCC, CCC_OPC, CCC_OPC_OPERATION); 483 430 484 431 return 0; 485 - } 486 - 487 - /* Free TX skb function for AVB-IP */ 488 - static int ravb_tx_free(struct net_device *ndev, int q) 489 - { 490 - struct ravb_private *priv = netdev_priv(ndev); 491 - struct net_device_stats *stats = &priv->stats[q]; 492 - struct ravb_tx_desc *desc; 493 - int free_num = 0; 494 - int entry; 495 - u32 size; 496 - 497 - for (; priv->cur_tx[q] - priv->dirty_tx[q] > 0; priv->dirty_tx[q]++) { 498 - entry = priv->dirty_tx[q] % (priv->num_tx_ring[q] * 499 - NUM_TX_DESC); 500 - desc = &priv->tx_ring[q][entry]; 501 - if (desc->die_dt != DT_FEMPTY) 502 - break; 503 - /* Descriptor type must be checked before all other reads */ 504 - dma_rmb(); 505 - size = le16_to_cpu(desc->ds_tagl) & TX_DS; 506 - /* Free the original skb. */ 507 - if (priv->tx_skb[q][entry / NUM_TX_DESC]) { 508 - dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), 509 - size, DMA_TO_DEVICE); 510 - /* Last packet descriptor? */ 511 - if (entry % NUM_TX_DESC == NUM_TX_DESC - 1) { 512 - entry /= NUM_TX_DESC; 513 - dev_kfree_skb_any(priv->tx_skb[q][entry]); 514 - priv->tx_skb[q][entry] = NULL; 515 - stats->tx_packets++; 516 - } 517 - free_num++; 518 - } 519 - stats->tx_bytes += size; 520 - desc->die_dt = DT_EEMPTY; 521 - } 522 - return free_num; 523 432 } 524 433 525 434 static void ravb_get_tx_tstamp(struct net_device *ndev)
··· 917 902 spin_lock_irqsave(&priv->lock, flags); 918 903 /* Clear TX interrupt */ 919 904 ravb_write(ndev, ~mask, TIS); 920 - ravb_tx_free(ndev, q); 905 + ravb_tx_free(ndev, q, true); 921 906 netif_wake_subqueue(ndev, q); 922 907 mmiowb(); 923 908 spin_unlock_irqrestore(&priv->lock, flags);
··· 1582 1567 1583 1568 priv->cur_tx[q] += NUM_TX_DESC; 1584 1569 if (priv->cur_tx[q] - priv->dirty_tx[q] > 1585 - (priv->num_tx_ring[q] - 1) * NUM_TX_DESC && !ravb_tx_free(ndev, q)) 1570 + (priv->num_tx_ring[q] - 1) * NUM_TX_DESC && 1571 + !ravb_tx_free(ndev, q, true)) 1586 1572 netif_stop_subqueue(ndev, q); 1587 1573 1588 1574 exit:
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 351 351 if (of_phy_is_fixed_link(np)) 352 352 of_phy_deregister_fixed_link(np); 353 353 of_node_put(plat->phy_node); 354 + of_node_put(plat->mdio_node); 354 355 } 355 356 #else 356 357 struct plat_stmmacenet_data *
+6 -7
drivers/net/gtp.c
··· 69 69 struct socket *sock0; 70 70 struct socket *sock1u; 71 71 72 - struct net *net; 73 72 struct net_device *dev; 74 73 75 74 unsigned int hash_size;
··· 315 316 316 317 netdev_dbg(gtp->dev, "encap_recv sk=%p\n", sk); 317 318 318 - xnet = !net_eq(gtp->net, dev_net(gtp->dev)); 319 + xnet = !net_eq(sock_net(sk), dev_net(gtp->dev)); 319 320 320 321 switch (udp_sk(sk)->encap_type) { 321 322 case UDP_ENCAP_GTP0:
··· 611 612 pktinfo.fl4.saddr, pktinfo.fl4.daddr, 612 613 pktinfo.iph->tos, 613 614 ip4_dst_hoplimit(&pktinfo.rt->dst), 614 - htons(IP_DF), 615 + 0, 615 616 pktinfo.gtph_port, pktinfo.gtph_port, 616 617 true, false); 617 618 break;
··· 657 658 static int gtp_hashtable_new(struct gtp_dev *gtp, int hsize); 658 659 static void gtp_hashtable_free(struct gtp_dev *gtp); 659 660 static int gtp_encap_enable(struct net_device *dev, struct gtp_dev *gtp, 660 - int fd_gtp0, int fd_gtp1, struct net *src_net); 661 + int fd_gtp0, int fd_gtp1); 661 662 662 663 static int gtp_newlink(struct net *src_net, struct net_device *dev, 663 664 struct nlattr *tb[], struct nlattr *data[])
··· 674 675 fd0 = nla_get_u32(data[IFLA_GTP_FD0]); 675 676 fd1 = nla_get_u32(data[IFLA_GTP_FD1]); 676 677 677 - err = gtp_encap_enable(dev, gtp, fd0, fd1, src_net); 678 + err = gtp_encap_enable(dev, gtp, fd0, fd1); 678 679 if (err < 0) 679 680 goto out_err; 680 681
··· 820 821 } 821 822 822 823 static int gtp_encap_enable(struct net_device *dev, struct gtp_dev *gtp, 823 - int fd_gtp0, int fd_gtp1, struct net *src_net) 824 + int fd_gtp0, int fd_gtp1) 824 825 { 825 826 struct udp_tunnel_sock_cfg tuncfg = {NULL}; 826 827 struct socket *sock0, *sock1u;
··· 857 858 858 859 gtp->sock0 = sock0; 859 860 gtp->sock1u = sock1u; 860 - gtp->net = src_net; 861 861 862 862 tuncfg.sk_user_data = gtp; 863 863 tuncfg.encap_rcv = gtp_encap_recv;
··· 1374 1376 MODULE_AUTHOR("Harald Welte <hwelte@sysmocom.de>"); 1375 1377 MODULE_DESCRIPTION("Interface driver for GTP encapsulated traffic"); 1376 1378 MODULE_ALIAS_RTNL_LINK("gtp"); 1379 + MODULE_ALIAS_GENL_FAMILY("gtp");
+1 -1
drivers/net/macvtap.c
··· 825 825 return -EINVAL; 826 826 827 827 if (virtio_net_hdr_from_skb(skb, &vnet_hdr, 828 - macvtap_is_little_endian(q))) 828 + macvtap_is_little_endian(q), true)) 829 829 BUG(); 830 830 831 831 if (copy_to_iter(&vnet_hdr, sizeof(vnet_hdr), iter) !=
+19 -2
drivers/net/phy/bcm63xx.c
··· 21 21 MODULE_AUTHOR("Maxime Bizon <mbizon@freebox.fr>"); 22 22 MODULE_LICENSE("GPL"); 23 23 24 + static int bcm63xx_config_intr(struct phy_device *phydev) 25 + { 26 + int reg, err; 27 + 28 + reg = phy_read(phydev, MII_BCM63XX_IR); 29 + if (reg < 0) 30 + return reg; 31 + 32 + if (phydev->interrupts == PHY_INTERRUPT_ENABLED) 33 + reg &= ~MII_BCM63XX_IR_GMASK; 34 + else 35 + reg |= MII_BCM63XX_IR_GMASK; 36 + 37 + err = phy_write(phydev, MII_BCM63XX_IR, reg); 38 + return err; 39 + } 40 + 24 41 static int bcm63xx_config_init(struct phy_device *phydev) 25 42 { 26 43 int reg, err; ··· 72 55 .config_aneg = genphy_config_aneg, 73 56 .read_status = genphy_read_status, 74 57 .ack_interrupt = bcm_phy_ack_intr, 75 - .config_intr = bcm_phy_config_intr, 58 + .config_intr = bcm63xx_config_intr, 76 59 }, { 77 60 /* same phy as above, with just a different OUI */ 78 61 .phy_id = 0x002bdc00, ··· 84 67 .config_aneg = genphy_config_aneg, 85 68 .read_status = genphy_read_status, 86 69 .ack_interrupt = bcm_phy_ack_intr, 87 - .config_intr = bcm_phy_config_intr, 70 + .config_intr = bcm63xx_config_intr, 88 71 } }; 89 72 90 73 module_phy_driver(bcm63xx_driver);
+3
drivers/net/phy/dp83848.c
··· 17 17 #include <linux/phy.h> 18 18 19 19 #define TI_DP83848C_PHY_ID 0x20005ca0 20 + #define TI_DP83620_PHY_ID 0x20005ce0 20 21 #define NS_DP83848C_PHY_ID 0x20005c90 21 22 #define TLK10X_PHY_ID 0x2000a210 22 23 #define TI_DP83822_PHY_ID 0x2000a240 ··· 78 77 static struct mdio_device_id __maybe_unused dp83848_tbl[] = { 79 78 { TI_DP83848C_PHY_ID, 0xfffffff0 }, 80 79 { NS_DP83848C_PHY_ID, 0xfffffff0 }, 80 + { TI_DP83620_PHY_ID, 0xfffffff0 }, 81 81 { TLK10X_PHY_ID, 0xfffffff0 }, 82 82 { TI_DP83822_PHY_ID, 0xfffffff0 }, 83 83 { } ··· 108 106 static struct phy_driver dp83848_driver[] = { 109 107 DP83848_PHY_DRIVER(TI_DP83848C_PHY_ID, "TI DP83848C 10/100 Mbps PHY"), 110 108 DP83848_PHY_DRIVER(NS_DP83848C_PHY_ID, "NS DP83848C 10/100 Mbps PHY"), 109 + DP83848_PHY_DRIVER(TI_DP83620_PHY_ID, "TI DP83620 10/100 Mbps PHY"), 111 110 DP83848_PHY_DRIVER(TLK10X_PHY_ID, "TI TLK10X 10/100 Mbps PHY"), 112 111 DP83848_PHY_DRIVER(TI_DP83822_PHY_ID, "TI DP83822 10/100 Mbps PHY"), 113 112 };
+2
drivers/net/phy/marvell.c
··· 1679 1679 .ack_interrupt = &marvell_ack_interrupt, 1680 1680 .config_intr = &marvell_config_intr, 1681 1681 .did_interrupt = &m88e1121_did_interrupt, 1682 + .get_wol = &m88e1318_get_wol, 1683 + .set_wol = &m88e1318_set_wol, 1682 1684 .resume = &marvell_resume, 1683 1685 .suspend = &marvell_suspend, 1684 1686 .get_sset_count = marvell_get_sset_count,
+14
drivers/net/phy/micrel.c
··· 1008 1008 .get_stats = kszphy_get_stats, 1009 1009 .suspend = genphy_suspend, 1010 1010 .resume = genphy_resume, 1011 + }, { 1012 + .phy_id = PHY_ID_KSZ8795, 1013 + .phy_id_mask = MICREL_PHY_ID_MASK, 1014 + .name = "Micrel KSZ8795", 1015 + .features = (SUPPORTED_Pause | SUPPORTED_Asym_Pause), 1016 + .flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT, 1017 + .config_init = kszphy_config_init, 1018 + .config_aneg = ksz8873mll_config_aneg, 1019 + .read_status = ksz8873mll_read_status, 1020 + .get_sset_count = kszphy_get_sset_count, 1021 + .get_strings = kszphy_get_strings, 1022 + .get_stats = kszphy_get_stats, 1023 + .suspend = genphy_suspend, 1024 + .resume = genphy_resume, 1011 1025 } }; 1012 1026 1013 1027 module_phy_driver(ksphy_driver);
+10 -5
drivers/net/phy/phy.c
··· 29 29 #include <linux/mii.h> 30 30 #include <linux/ethtool.h> 31 31 #include <linux/phy.h> 32 + #include <linux/phy_led_triggers.h> 32 33 #include <linux/timer.h> 33 34 #include <linux/workqueue.h> 34 35 #include <linux/mdio.h> ··· 650 649 * phy_trigger_machine - trigger the state machine to run 651 650 * 652 651 * @phydev: the phy_device struct 652 + * @sync: indicate whether we should wait for the workqueue cancelation 653 653 * 654 654 * Description: There has been a change in state which requires that the 655 655 * state machine runs. 656 656 */ 657 657 658 - static void phy_trigger_machine(struct phy_device *phydev) 658 + static void phy_trigger_machine(struct phy_device *phydev, bool sync) 659 659 { 660 - cancel_delayed_work_sync(&phydev->state_queue); 660 + if (sync) 661 + cancel_delayed_work_sync(&phydev->state_queue); 662 + else 663 + cancel_delayed_work(&phydev->state_queue); 661 664 queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, 0); 662 665 } 663 666 ··· 698 693 phydev->state = PHY_HALTED; 699 694 mutex_unlock(&phydev->lock); 700 695 701 - phy_trigger_machine(phydev); 696 + phy_trigger_machine(phydev, false); 702 697 } 703 698 704 699 /** ··· 845 840 } 846 841 847 842 /* reschedule state queue work to run as soon as possible */ 848 - phy_trigger_machine(phydev); 843 + phy_trigger_machine(phydev, true); 849 844 return; 850 845 851 846 ignore: ··· 947 942 if (do_resume) 948 943 phy_resume(phydev); 949 944 950 - phy_trigger_machine(phydev); 945 + phy_trigger_machine(phydev, true); 951 946 } 952 947 EXPORT_SYMBOL(phy_start); 953 948
+7 -2
drivers/net/phy/phy_led_triggers.c
··· 12 12 */ 13 13 #include <linux/leds.h> 14 14 #include <linux/phy.h> 15 + #include <linux/phy_led_triggers.h> 15 16 #include <linux/netdevice.h> 16 17 17 18 static struct phy_led_trigger *phy_speed_to_led_trigger(struct phy_device *phy, ··· 103 102 sizeof(struct phy_led_trigger) * 104 103 phy->phy_num_led_triggers, 105 104 GFP_KERNEL); 106 - if (!phy->phy_led_triggers) 107 - return -ENOMEM; 105 + if (!phy->phy_led_triggers) { 106 + err = -ENOMEM; 107 + goto out_clear; 108 + } 108 109 109 110 for (i = 0; i < phy->phy_num_led_triggers; i++) { 110 111 err = phy_led_trigger_register(phy, &phy->phy_led_triggers[i], ··· 123 120 while (i--) 124 121 phy_led_trigger_unregister(&phy->phy_led_triggers[i]); 125 122 devm_kfree(&phy->mdio.dev, phy->phy_led_triggers); 123 + out_clear: 124 + phy->phy_num_led_triggers = 0; 126 125 return err; 127 126 } 128 127 EXPORT_SYMBOL_GPL(phy_led_triggers_register);
+1 -1
drivers/net/tun.c
··· 1360 1360 return -EINVAL; 1361 1361 1362 1362 if (virtio_net_hdr_from_skb(skb, &gso, 1363 - tun_is_little_endian(tun))) { 1363 + tun_is_little_endian(tun), true)) { 1364 1364 struct skb_shared_info *sinfo = skb_shinfo(skb); 1365 1365 pr_err("unexpected GSO type: " 1366 1366 "0x%x, gso_size %d, hdr_len %d\n",
+8
drivers/net/usb/cdc_ether.c
··· 531 531 #define SAMSUNG_VENDOR_ID 0x04e8 532 532 #define LENOVO_VENDOR_ID 0x17ef 533 533 #define NVIDIA_VENDOR_ID 0x0955 534 + #define HP_VENDOR_ID 0x03f0 534 535 535 536 static const struct usb_device_id products[] = { 536 537 /* BLACKLIST !! ··· 675 674 { 676 675 USB_DEVICE_AND_INTERFACE_INFO(NOVATEL_VENDOR_ID, 0x9011, USB_CLASS_COMM, 677 676 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 677 + .driver_info = 0, 678 + }, 679 + 680 + /* HP lt2523 (Novatel E371) - handled by qmi_wwan */ 681 + { 682 + USB_DEVICE_AND_INTERFACE_INFO(HP_VENDOR_ID, 0x421d, USB_CLASS_COMM, 683 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 678 684 .driver_info = 0, 679 685 }, 680 686
+7
drivers/net/usb/qmi_wwan.c
··· 654 654 USB_CDC_PROTO_NONE), 655 655 .driver_info = (unsigned long)&qmi_wwan_info, 656 656 }, 657 + { /* HP lt2523 (Novatel E371) */ 658 + USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x421d, 659 + USB_CLASS_COMM, 660 + USB_CDC_SUBCLASS_ETHERNET, 661 + USB_CDC_PROTO_NONE), 662 + .driver_info = (unsigned long)&qmi_wwan_info, 663 + }, 657 664 { /* HP lt4112 LTE/HSPA+ Gobi 4G Module (Huawei me906e) */ 658 665 USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7), 659 666 .driver_info = (unsigned long)&qmi_wwan_info,
+28 -6
drivers/net/usb/r8152.c
··· 32 32 #define NETNEXT_VERSION "08" 33 33 34 34 /* Information for net */ 35 - #define NET_VERSION "6" 35 + #define NET_VERSION "8" 36 36 37 37 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION 38 38 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
··· 1936 1936 napi_complete(napi); 1937 1937 if (!list_empty(&tp->rx_done)) 1938 1938 napi_schedule(napi); 1939 + else if (!skb_queue_empty(&tp->tx_queue) && 1940 + !list_empty(&tp->tx_free)) 1941 + napi_schedule(napi); 1939 1942 } 1940 1943 1941 1944 return work_done;
··· 3158 3155 if (!netif_carrier_ok(netdev)) { 3159 3156 tp->rtl_ops.enable(tp); 3160 3157 set_bit(RTL8152_SET_RX_MODE, &tp->flags); 3158 + netif_stop_queue(netdev); 3161 3159 napi_disable(&tp->napi); 3162 3160 netif_carrier_on(netdev); 3163 3161 rtl_start_rx(tp); 3164 3162 napi_enable(&tp->napi); 3163 + netif_wake_queue(netdev); 3164 + netif_info(tp, link, netdev, "carrier on\n"); 3165 3165 } 3166 3166 } else { 3167 3167 if (netif_carrier_ok(netdev)) {
··· 3172 3166 napi_disable(&tp->napi); 3173 3167 tp->rtl_ops.disable(tp); 3174 3168 napi_enable(&tp->napi); 3169 + netif_info(tp, link, netdev, "carrier off\n"); 3175 3170 } 3176 3171 } 3177 3172 }
··· 3522 3515 if (!netif_running(netdev)) 3523 3516 return 0; 3524 3517 3518 + netif_stop_queue(netdev); 3525 3519 napi_disable(&tp->napi); 3526 3520 clear_bit(WORK_ENABLE, &tp->flags); 3527 3521 usb_kill_urb(tp->intr_urb); 3528 3522 cancel_delayed_work_sync(&tp->schedule); 3529 3523 if (netif_carrier_ok(netdev)) { 3530 - netif_stop_queue(netdev); 3531 3524 mutex_lock(&tp->control); 3532 3525 tp->rtl_ops.disable(tp); 3533 3526 mutex_unlock(&tp->control);
··· 3552 3545 if (netif_carrier_ok(netdev)) { 3553 3546 mutex_lock(&tp->control); 3554 3547 tp->rtl_ops.enable(tp); 3548 + rtl_start_rx(tp); 3555 3549 rtl8152_set_rx_mode(netdev); 3556 3550 mutex_unlock(&tp->control); 3557 - netif_wake_queue(netdev); 3558 3551 } 3559 3552 3560 3553 napi_enable(&tp->napi); 3554 + netif_wake_queue(netdev); 3555 + usb_submit_urb(tp->intr_urb, GFP_KERNEL); 3556 + 3557 + if (!list_empty(&tp->rx_done)) 3558 + napi_schedule(&tp->napi); 3561 3559 3562 3560 return 0; 3563 3561 }
··· 3584 3572 */ 3585 3573 if (!sw_linking && tp->rtl_ops.in_nway(tp)) 3586 3574 return true; 3575 + else if (!skb_queue_empty(&tp->tx_queue)) 3576 + return true; 3587 3577 else 3588 3578 return false; 3589 3579 }
··· 3595 3581 struct net_device *netdev = tp->netdev; 3596 3582 int ret = 0; 3597 3583 3584 + set_bit(SELECTIVE_SUSPEND, &tp->flags); 3585 + smp_mb__after_atomic(); 3586 + 3598 3587 if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) { 3599 3588 u32 rcr = 0; 3600 3589 3601 3590 if (delay_autosuspend(tp)) { 3591 + clear_bit(SELECTIVE_SUSPEND, &tp->flags); 3592 + smp_mb__after_atomic(); 3602 3593 ret = -EBUSY; 3603 3594 goto out1; 3604 3595 }
··· 3620 3601 if (!(ocp_data & RXFIFO_EMPTY)) { 3621 3602 rxdy_gated_en(tp, false); 3622 3603 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, rcr); 3604 + clear_bit(SELECTIVE_SUSPEND, &tp->flags); 3605 + smp_mb__after_atomic(); 3623 3606 ret = -EBUSY; 3624 3607 goto out1; 3625 3608 }
··· 3640 3619 napi_enable(&tp->napi); 3641 3620 } 3642 3621 } 3643 - 3644 - set_bit(SELECTIVE_SUSPEND, &tp->flags); 3645 3622 3646 3623 out1: 3647 3624 return ret;
··· 3696 3677 if (netif_running(tp->netdev) && tp->netdev->flags & IFF_UP) { 3697 3678 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) { 3698 3679 tp->rtl_ops.autosuspend_en(tp, false); 3699 - clear_bit(SELECTIVE_SUSPEND, &tp->flags); 3700 3680 napi_disable(&tp->napi); 3701 3681 set_bit(WORK_ENABLE, &tp->flags); 3702 3682 if (netif_carrier_ok(tp->netdev)) 3703 3683 rtl_start_rx(tp); 3704 3684 napi_enable(&tp->napi); 3685 + clear_bit(SELECTIVE_SUSPEND, &tp->flags); 3686 + smp_mb__after_atomic(); 3687 + if (!list_empty(&tp->rx_done)) 3688 + napi_schedule(&tp->napi); 3705 3689 } else { 3706 3690 tp->rtl_ops.up(tp); 3707 3691 netif_carrier_off(tp->netdev);
+21 -4
drivers/net/virtio_net.c
··· 48 48 */ 49 49 DECLARE_EWMA(pkt_len, 1, 64) 50 50 51 + /* With mergeable buffers we align buffer address and use the low bits to 52 + * encode its true size. Buffer size is up to 1 page so we need to align to 53 + * square root of page size to ensure we reserve enough bits to encode the true 54 + * size. 55 + */ 56 + #define MERGEABLE_BUFFER_MIN_ALIGN_SHIFT ((PAGE_SHIFT + 1) / 2) 57 + 51 58 /* Minimum alignment for mergeable packet buffers. */ 52 - #define MERGEABLE_BUFFER_ALIGN max(L1_CACHE_BYTES, 256) 59 + #define MERGEABLE_BUFFER_ALIGN max(L1_CACHE_BYTES, \ 60 + 1 << MERGEABLE_BUFFER_MIN_ALIGN_SHIFT) 53 61 54 62 #define VIRTNET_DRIVER_VERSION "1.0.0"
··· 1112 1104 hdr = skb_vnet_hdr(skb); 1113 1105 1114 1106 if (virtio_net_hdr_from_skb(skb, &hdr->hdr, 1115 - virtio_is_little_endian(vi->vdev))) 1107 + virtio_is_little_endian(vi->vdev), false)) 1116 1108 BUG(); 1117 1109 1118 1110 if (vi->mergeable_rx_bufs)
··· 1715 1707 u16 xdp_qp = 0, curr_qp; 1716 1708 int i, err; 1717 1709 1710 + if (prog && prog->xdp_adjust_head) { 1711 + netdev_warn(dev, "Does not support bpf_xdp_adjust_head()\n"); 1712 + return -EOPNOTSUPP; 1713 + } 1714 + 1718 1715 if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || 1719 1716 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || 1720 1717 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) ||
··· 1903 1890 put_page(vi->rq[i].alloc_frag.page); 1904 1891 } 1905 1892 1906 - static bool is_xdp_queue(struct virtnet_info *vi, int q) 1893 + static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) 1907 1894 { 1895 + /* For small receive mode always use kfree_skb variants */ 1896 + if (!vi->mergeable_rx_bufs) 1897 + return false; 1898 + 1908 1899 if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs)) 1909 1900 return false; 1910 1901 else if (q < vi->curr_queue_pairs)
··· 1925 1908 for (i = 0; i < vi->max_queue_pairs; i++) { 1926 1909 struct virtqueue *vq = vi->sq[i].vq; 1927 1910 while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { 1928 - if (!is_xdp_queue(vi, i)) 1911 + if (!is_xdp_raw_buffer_queue(vi, i)) 1929 1912 dev_kfree_skb(buf); 1930 1913 else 1931 1914 put_page(virt_to_head_page(buf));
+8 -4
drivers/net/vxlan.c
··· 2268 2268 = container_of(p, struct vxlan_fdb, hlist); 2269 2269 unsigned long timeout; 2270 2270 2271 - if (f->state & NUD_PERMANENT) 2271 + if (f->state & (NUD_PERMANENT | NUD_NOARP)) 2272 2272 continue; 2273 2273 2274 2274 timeout = f->used + vxlan->cfg.age_interval * HZ; ··· 2354 2354 } 2355 2355 2356 2356 /* Purge the forwarding table */ 2357 - static void vxlan_flush(struct vxlan_dev *vxlan) 2357 + static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all) 2358 2358 { 2359 2359 unsigned int h; 2360 2360 ··· 2364 2364 hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) { 2365 2365 struct vxlan_fdb *f 2366 2366 = container_of(p, struct vxlan_fdb, hlist); 2367 + if (!do_all && (f->state & (NUD_PERMANENT | NUD_NOARP))) 2368 + continue; 2367 2369 /* the all_zeros_mac entry is deleted at vxlan_uninit */ 2368 2370 if (!is_zero_ether_addr(f->eth_addr)) 2369 2371 vxlan_fdb_destroy(vxlan, f); ··· 2387 2385 2388 2386 del_timer_sync(&vxlan->age_timer); 2389 2387 2390 - vxlan_flush(vxlan); 2388 + vxlan_flush(vxlan, false); 2391 2389 vxlan_sock_release(vxlan); 2392 2390 2393 2391 return ret; ··· 2892 2890 memcpy(&vxlan->cfg, conf, sizeof(*conf)); 2893 2891 if (!vxlan->cfg.dst_port) { 2894 2892 if (conf->flags & VXLAN_F_GPE) 2895 - vxlan->cfg.dst_port = 4790; /* IANA assigned VXLAN-GPE port */ 2893 + vxlan->cfg.dst_port = htons(4790); /* IANA VXLAN-GPE port */ 2896 2894 else 2897 2895 vxlan->cfg.dst_port = default_port; 2898 2896 } ··· 3059 3057 { 3060 3058 struct vxlan_dev *vxlan = netdev_priv(dev); 3061 3059 struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id); 3060 + 3061 + vxlan_flush(vxlan, true); 3062 3062 3063 3063 spin_lock(&vn->sock_lock); 3064 3064 if (!hlist_unhashed(&vxlan->hlist))
+4 -2
drivers/net/xen-netback/interface.c
··· 221 221 { 222 222 struct xenvif *vif = netdev_priv(dev); 223 223 struct xenvif_queue *queue = NULL; 224 - unsigned int num_queues = vif->num_queues; 225 224 unsigned long rx_bytes = 0; 226 225 unsigned long rx_packets = 0; 227 226 unsigned long tx_bytes = 0; 228 227 unsigned long tx_packets = 0; 229 228 unsigned int index; 230 229 230 + spin_lock(&vif->lock); 231 231 if (vif->queues == NULL) 232 232 goto out; 233 233 234 234 /* Aggregate tx and rx stats from each queue */ 235 - for (index = 0; index < num_queues; ++index) { 235 + for (index = 0; index < vif->num_queues; ++index) { 236 236 queue = &vif->queues[index]; 237 237 rx_bytes += queue->stats.rx_bytes; 238 238 rx_packets += queue->stats.rx_packets; ··· 241 241 } 242 242 243 243 out: 244 + spin_unlock(&vif->lock); 245 + 244 246 vif->dev->stats.rx_bytes = rx_bytes; 245 247 vif->dev->stats.rx_packets = rx_packets; 246 248 vif->dev->stats.tx_bytes = tx_bytes;
+13
drivers/net/xen-netback/xenbus.c
··· 493 493 static void backend_disconnect(struct backend_info *be) 494 494 { 495 495 if (be->vif) { 496 + unsigned int queue_index; 497 + 496 498 xen_unregister_watchers(be->vif); 497 499 #ifdef CONFIG_DEBUG_FS 498 500 xenvif_debugfs_delif(be->vif); 499 501 #endif /* CONFIG_DEBUG_FS */ 500 502 xenvif_disconnect_data(be->vif); 503 + for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) 504 + xenvif_deinit_queue(&be->vif->queues[queue_index]); 505 + 506 + spin_lock(&be->vif->lock); 507 + vfree(be->vif->queues); 508 + be->vif->num_queues = 0; 509 + be->vif->queues = NULL; 510 + spin_unlock(&be->vif->lock); 511 + 501 512 xenvif_disconnect_ctrl(be->vif); 502 513 } 503 514 } ··· 1045 1034 err: 1046 1035 if (be->vif->num_queues > 0) 1047 1036 xenvif_disconnect_data(be->vif); /* Clean up existing queues */ 1037 + for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) 1038 + xenvif_deinit_queue(&be->vif->queues[queue_index]); 1048 1039 vfree(be->vif->queues); 1049 1040 be->vif->queues = NULL; 1050 1041 be->vif->num_queues = 0;
+1 -1
drivers/net/xen-netfront.c
··· 321 321 queue->rx.req_prod_pvt = req_prod; 322 322 323 323 /* Not enough requests? Try again later. */ 324 - if (req_prod - queue->rx.rsp_cons < NET_RX_SLOTS_MIN) { 324 + if (req_prod - queue->rx.sring->req_prod < NET_RX_SLOTS_MIN) { 325 325 mod_timer(&queue->rx_refill_timer, jiffies + (HZ/10)); 326 326 return; 327 327 }
+3 -3
drivers/nvme/host/fc.c
··· 1663 1663 return 0; 1664 1664 1665 1665 freq->sg_table.sgl = freq->first_sgl; 1666 - ret = sg_alloc_table_chained(&freq->sg_table, rq->nr_phys_segments, 1667 - freq->sg_table.sgl); 1666 + ret = sg_alloc_table_chained(&freq->sg_table, 1667 + blk_rq_nr_phys_segments(rq), freq->sg_table.sgl); 1668 1668 if (ret) 1669 1669 return -ENOMEM; 1670 1670 1671 1671 op->nents = blk_rq_map_sg(rq->q, rq, freq->sg_table.sgl); 1672 - WARN_ON(op->nents > rq->nr_phys_segments); 1672 + WARN_ON(op->nents > blk_rq_nr_phys_segments(rq)); 1673 1673 dir = (rq_data_dir(rq) == WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE; 1674 1674 freq->sg_cnt = fc_dma_map_sg(ctrl->lport->dev, freq->sg_table.sgl, 1675 1675 op->nents, dir);
+1
drivers/nvme/target/configfs.c
··· 631 631 { 632 632 struct nvmet_subsys *subsys = to_subsys(item); 633 633 634 + nvmet_subsys_del_ctrls(subsys); 634 635 nvmet_subsys_put(subsys); 635 636 } 636 637
+14 -1
drivers/nvme/target/core.c
··· 200 200 pr_err("ctrl %d keep-alive timer (%d seconds) expired!\n", 201 201 ctrl->cntlid, ctrl->kato); 202 202 203 - ctrl->ops->delete_ctrl(ctrl); 203 + nvmet_ctrl_fatal_error(ctrl); 204 204 } 205 205 206 206 static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl) ··· 816 816 list_del(&ctrl->subsys_entry); 817 817 mutex_unlock(&subsys->lock); 818 818 819 + flush_work(&ctrl->async_event_work); 820 + cancel_work_sync(&ctrl->fatal_err_work); 821 + 819 822 ida_simple_remove(&subsys->cntlid_ida, ctrl->cntlid); 820 823 nvmet_subsys_put(subsys); 821 824 ··· 936 933 ida_destroy(&subsys->cntlid_ida); 937 934 kfree(subsys->subsysnqn); 938 935 kfree(subsys); 936 + } 937 + 938 + void nvmet_subsys_del_ctrls(struct nvmet_subsys *subsys) 939 + { 940 + struct nvmet_ctrl *ctrl; 941 + 942 + mutex_lock(&subsys->lock); 943 + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 944 + ctrl->ops->delete_ctrl(ctrl); 945 + mutex_unlock(&subsys->lock); 939 946 } 940 947 941 948 void nvmet_subsys_put(struct nvmet_subsys *subsys)
+22 -14
drivers/nvme/target/fc.c
··· 1314 1314 (struct fcnvme_ls_disconnect_rqst *)iod->rqstbuf; 1315 1315 struct fcnvme_ls_disconnect_acc *acc = 1316 1316 (struct fcnvme_ls_disconnect_acc *)iod->rspbuf; 1317 - struct nvmet_fc_tgt_queue *queue; 1317 + struct nvmet_fc_tgt_queue *queue = NULL; 1318 1318 struct nvmet_fc_tgt_assoc *assoc; 1319 1319 int ret = 0; 1320 1320 bool del_assoc = false; ··· 1348 1348 assoc = nvmet_fc_find_target_assoc(tgtport, 1349 1349 be64_to_cpu(rqst->associd.association_id)); 1350 1350 iod->assoc = assoc; 1351 - if (!assoc) 1351 + if (assoc) { 1352 + if (rqst->discon_cmd.scope == 1353 + FCNVME_DISCONN_CONNECTION) { 1354 + queue = nvmet_fc_find_target_queue(tgtport, 1355 + be64_to_cpu( 1356 + rqst->discon_cmd.id)); 1357 + if (!queue) { 1358 + nvmet_fc_tgt_a_put(assoc); 1359 + ret = VERR_NO_CONN; 1360 + } 1361 + } 1362 + } else 1352 1363 ret = VERR_NO_ASSOC; 1353 1364 } 1354 1365 ··· 1384 1373 FCNVME_LS_DISCONNECT); 1385 1374 1386 1375 1387 - if (rqst->discon_cmd.scope == FCNVME_DISCONN_CONNECTION) { 1388 - queue = nvmet_fc_find_target_queue(tgtport, 1389 - be64_to_cpu(rqst->discon_cmd.id)); 1390 - if (queue) { 1391 - int qid = queue->qid; 1376 + /* are we to delete a Connection ID (queue) */ 1377 + if (queue) { 1378 + int qid = queue->qid; 1392 1379 1393 - nvmet_fc_delete_target_queue(queue); 1380 + nvmet_fc_delete_target_queue(queue); 1394 1381 1395 - /* release the get taken by find_target_queue */ 1396 - nvmet_fc_tgt_q_put(queue); 1382 + /* release the get taken by find_target_queue */ 1383 + nvmet_fc_tgt_q_put(queue); 1397 1384 1398 - /* tear association down if io queue terminated */ 1399 - if (!qid) 1400 - del_assoc = true; 1401 - } 1385 + /* tear association down if io queue terminated */ 1386 + if (!qid) 1387 + del_assoc = true; 1402 1388 } 1403 1389 1404 1390 /* release get taken in nvmet_fc_find_target_assoc */
+1
drivers/nvme/target/nvmet.h
··· 282 282 struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn, 283 283 enum nvme_subsys_type type); 284 284 void nvmet_subsys_put(struct nvmet_subsys *subsys); 285 + void nvmet_subsys_del_ctrls(struct nvmet_subsys *subsys); 285 286 286 287 struct nvmet_ns *nvmet_find_namespace(struct nvmet_ctrl *ctrl, __le32 nsid); 287 288 void nvmet_put_namespace(struct nvmet_ns *ns);
+17
drivers/nvme/target/rdma.c
··· 438 438 { 439 439 struct ib_recv_wr *bad_wr; 440 440 441 + ib_dma_sync_single_for_device(ndev->device, 442 + cmd->sge[0].addr, cmd->sge[0].length, 443 + DMA_FROM_DEVICE); 444 + 441 445 if (ndev->srq) 442 446 return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr); 443 447 return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr); ··· 542 538 first_wr = &rsp->send_wr; 543 539 544 540 nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd); 541 + 542 + ib_dma_sync_single_for_device(rsp->queue->dev->device, 543 + rsp->send_sge.addr, rsp->send_sge.length, 544 + DMA_TO_DEVICE); 545 + 545 546 if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) { 546 547 pr_err("sending cmd response failed\n"); 547 548 nvmet_rdma_release_rsp(rsp); ··· 706 697 cmd->queue = queue; 707 698 cmd->n_rdma = 0; 708 699 cmd->req.port = queue->port; 700 + 701 + 702 + ib_dma_sync_single_for_cpu(queue->dev->device, 703 + cmd->cmd->sge[0].addr, cmd->cmd->sge[0].length, 704 + DMA_FROM_DEVICE); 705 + ib_dma_sync_single_for_cpu(queue->dev->device, 706 + cmd->send_sge.addr, cmd->send_sge.length, 707 + DMA_TO_DEVICE); 709 708 710 709 if (!nvmet_req_init(&cmd->req, &queue->nvme_cq, 711 710 &queue->nvme_sq, &nvmet_rdma_ops))
+4 -4
drivers/parport/parport_gsc.c
··· 293 293 p->irq = PARPORT_IRQ_NONE; 294 294 } 295 295 if (p->irq != PARPORT_IRQ_NONE) { 296 - printk(", irq %d", p->irq); 296 + pr_cont(", irq %d", p->irq); 297 297 298 298 if (p->dma == PARPORT_DMA_AUTO) { 299 299 p->dma = PARPORT_DMA_NONE; ··· 303 303 is mandatory (see above) */ 304 304 p->dma = PARPORT_DMA_NONE; 305 305 306 - printk(" ["); 307 - #define printmode(x) {if(p->modes&PARPORT_MODE_##x){printk("%s%s",f?",":"",#x);f++;}} 306 + pr_cont(" ["); 307 + #define printmode(x) {if(p->modes&PARPORT_MODE_##x){pr_cont("%s%s",f?",":"",#x);f++;}} 308 308 { 309 309 int f = 0; 310 310 printmode(PCSPP); ··· 315 315 // printmode(DMA); 316 316 } 317 317 #undef printmode 318 - printk("]\n"); 318 + pr_cont("]\n"); 319 319 320 320 if (p->irq != PARPORT_IRQ_NONE) { 321 321 if (request_irq (p->irq, parport_irq_handler,
+25 -14
drivers/pinctrl/intel/pinctrl-baytrail.c
··· 1092 1092 enum pin_config_param param = pinconf_to_config_param(*config); 1093 1093 void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG); 1094 1094 void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG); 1095 + void __iomem *db_reg = byt_gpio_reg(vg, offset, BYT_DEBOUNCE_REG); 1095 1096 unsigned long flags; 1096 1097 u32 conf, pull, val, debounce; 1097 1098 u16 arg = 0; ··· 1129 1128 return -EINVAL; 1130 1129 1131 1130 raw_spin_lock_irqsave(&vg->lock, flags); 1132 - debounce = readl(byt_gpio_reg(vg, offset, BYT_DEBOUNCE_REG)); 1131 + debounce = readl(db_reg); 1133 1132 raw_spin_unlock_irqrestore(&vg->lock, flags); 1134 1133 1135 1134 switch (debounce & BYT_DEBOUNCE_PULSE_MASK) { ··· 1177 1176 unsigned int param, arg; 1178 1177 void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG); 1179 1178 void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG); 1179 + void __iomem *db_reg = byt_gpio_reg(vg, offset, BYT_DEBOUNCE_REG); 1180 1180 unsigned long flags; 1181 1181 u32 conf, val, debounce; 1182 1182 int i, ret = 0; ··· 1240 1238 1241 1239 break; 1242 1240 case PIN_CONFIG_INPUT_DEBOUNCE: 1243 - debounce = readl(byt_gpio_reg(vg, offset, 1244 - BYT_DEBOUNCE_REG)); 1245 - conf &= ~BYT_DEBOUNCE_PULSE_MASK; 1241 + debounce = readl(db_reg); 1242 + debounce &= ~BYT_DEBOUNCE_PULSE_MASK; 1246 1243 1247 1244 switch (arg) { 1245 + case 0: 1246 + conf &= BYT_DEBOUNCE_EN; 1247 + break; 1248 1248 case 375: 1249 - conf |= BYT_DEBOUNCE_PULSE_375US; 1249 + debounce |= BYT_DEBOUNCE_PULSE_375US; 1250 1250 break; 1251 1251 case 750: 1252 - conf |= BYT_DEBOUNCE_PULSE_750US; 1252 + debounce |= BYT_DEBOUNCE_PULSE_750US; 1253 1253 break; 1254 1254 case 1500: 1255 - conf |= BYT_DEBOUNCE_PULSE_1500US; 1255 + debounce |= BYT_DEBOUNCE_PULSE_1500US; 1256 1256 break; 1257 1257 case 3000: 1258 - conf |= BYT_DEBOUNCE_PULSE_3MS; 1258 + debounce |= BYT_DEBOUNCE_PULSE_3MS; 1259 1259 break; 1260 1260 case 6000: 1261 - conf |= BYT_DEBOUNCE_PULSE_6MS; 1261 + debounce |= BYT_DEBOUNCE_PULSE_6MS; 1262 1262 break; 1263 1263 case 12000: 1264 - conf |= BYT_DEBOUNCE_PULSE_12MS; 1264 + debounce |= BYT_DEBOUNCE_PULSE_12MS; 1265 1265 break; 1266 1266 case 24000: 1267 - conf |= BYT_DEBOUNCE_PULSE_24MS; 1267 + debounce |= BYT_DEBOUNCE_PULSE_24MS; 1268 1268 break; 1269 1269 default: 1270 1270 ret = -EINVAL; 1271 1271 } 1272 1272 1273 + if (!ret) 1274 + writel(debounce, db_reg); 1273 1275 break; 1274 1276 default: 1275 1277 ret = -ENOTSUPP; ··· 1623 1617 1624 1618 static void byt_gpio_irq_init_hw(struct byt_gpio *vg) 1625 1619 { 1620 + struct gpio_chip *gc = &vg->chip; 1621 + struct device *dev = &vg->pdev->dev; 1626 1622 void __iomem *reg; 1627 1623 u32 base, value; 1628 1624 int i; ··· 1646 1638 } 1647 1639 1648 1640 value = readl(reg); 1649 - if ((value & BYT_PIN_MUX) == byt_get_gpio_mux(vg, i) && 1650 - !(value & BYT_DIRECT_IRQ_EN)) { 1641 + if (value & BYT_DIRECT_IRQ_EN) { 1642 + clear_bit(i, gc->irq_valid_mask); 1643 + dev_dbg(dev, "excluding GPIO %d from IRQ domain\n", i); 1644 + } else if ((value & BYT_PIN_MUX) == byt_get_gpio_mux(vg, i)) { 1651 1645 byt_gpio_clear_triggering(vg, i); 1652 - dev_dbg(&vg->pdev->dev, "disabling GPIO %d\n", i); 1646 + dev_dbg(dev, "disabling GPIO %d\n", i); 1653 1647 } 1654 1648 } ··· 1690 1680 gc->can_sleep = false; 1691 1681 gc->parent = &vg->pdev->dev; 1692 1682 gc->ngpio = vg->soc_data->npins; 1683 + gc->irq_need_valid_mask = true; 1693 1684 1694 1685 #ifdef CONFIG_PM_SLEEP 1695 1686 vg->saved_context = devm_kcalloc(&vg->pdev->dev, gc->ngpio,
+1 -1
drivers/pinctrl/intel/pinctrl-broxton.c
··· 19 19 20 20 #define BXT_PAD_OWN 0x020 21 21 #define BXT_HOSTSW_OWN 0x080 22 - #define BXT_PADCFGLOCK 0x090 22 + #define BXT_PADCFGLOCK 0x060 23 23 #define BXT_GPI_IE 0x110 24 24 25 25 #define BXT_COMMUNITY(s, e) \
+19 -11
drivers/pinctrl/intel/pinctrl-intel.c
··· 353 353 return 0; 354 354 } 355 355 356 + static void __intel_gpio_set_direction(void __iomem *padcfg0, bool input) 357 + { 358 + u32 value; 359 + 360 + value = readl(padcfg0); 361 + if (input) { 362 + value &= ~PADCFG0_GPIORXDIS; 363 + value |= PADCFG0_GPIOTXDIS; 364 + } else { 365 + value &= ~PADCFG0_GPIOTXDIS; 366 + value |= PADCFG0_GPIORXDIS; 367 + } 368 + writel(value, padcfg0); 369 + } 370 + 356 371 static int intel_gpio_request_enable(struct pinctrl_dev *pctldev, 357 372 struct pinctrl_gpio_range *range, 358 373 unsigned pin) ··· 390 375 /* Disable SCI/SMI/NMI generation */ 391 376 value &= ~(PADCFG0_GPIROUTIOXAPIC | PADCFG0_GPIROUTSCI); 392 377 value &= ~(PADCFG0_GPIROUTSMI | PADCFG0_GPIROUTNMI); 393 - /* Disable TX buffer and enable RX (this will be input) */ 394 - value &= ~PADCFG0_GPIORXDIS; 395 - value |= PADCFG0_GPIOTXDIS; 396 378 writel(value, padcfg0); 379 + 380 + /* Disable TX buffer and enable RX (this will be input) */ 381 + __intel_gpio_set_direction(padcfg0, true); 397 382 398 383 raw_spin_unlock_irqrestore(&pctrl->lock, flags); 399 384 ··· 407 392 struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev); 408 393 void __iomem *padcfg0; 409 394 unsigned long flags; 410 - u32 value; 411 395 412 396 raw_spin_lock_irqsave(&pctrl->lock, flags); 413 397 414 398 padcfg0 = intel_get_padcfg(pctrl, pin, PADCFG0); 415 - 416 - value = readl(padcfg0); 417 - if (input) 418 - value |= PADCFG0_GPIOTXDIS; 419 - else 420 - value &= ~PADCFG0_GPIOTXDIS; 421 - writel(value, padcfg0); 399 + __intel_gpio_set_direction(padcfg0, input); 422 400 423 401 raw_spin_unlock_irqrestore(&pctrl->lock, flags); 424 402
+3 -4
drivers/pinctrl/meson/pinctrl-meson-gxbb.c
··· 253 253 static const unsigned int uart_rx_ao_a_pins[] = { PIN(GPIOAO_1, 0) }; 254 254 static const unsigned int uart_cts_ao_a_pins[] = { PIN(GPIOAO_2, 0) }; 255 255 static const unsigned int uart_rts_ao_a_pins[] = { PIN(GPIOAO_3, 0) }; 256 - static const unsigned int uart_tx_ao_b_pins[] = { PIN(GPIOAO_0, 0) }; 257 - static const unsigned int uart_rx_ao_b_pins[] = { PIN(GPIOAO_1, 0), 258 - PIN(GPIOAO_5, 0) }; 256 + static const unsigned int uart_tx_ao_b_pins[] = { PIN(GPIOAO_4, 0) }; 257 + static const unsigned int uart_rx_ao_b_pins[] = { PIN(GPIOAO_5, 0) }; 259 258 static const unsigned int uart_cts_ao_b_pins[] = { PIN(GPIOAO_2, 0) }; 260 259 static const unsigned int uart_rts_ao_b_pins[] = { PIN(GPIOAO_3, 0) }; 261 260 ··· 497 498 GPIO_GROUP(GPIOAO_13, 0), 498 499 499 500 /* bank AO */ 500 - GROUP(uart_tx_ao_b, 0, 26), 501 + GROUP(uart_tx_ao_b, 0, 24), 501 502 GROUP(uart_rx_ao_b, 0, 25), 502 503 GROUP(uart_tx_ao_a, 0, 12), 503 504 GROUP(uart_rx_ao_a, 0, 11),
+3 -4
drivers/pinctrl/meson/pinctrl-meson-gxl.c
··· 214 214 static const unsigned int uart_rx_ao_a_pins[] = { PIN(GPIOAO_1, 0) }; 215 215 static const unsigned int uart_cts_ao_a_pins[] = { PIN(GPIOAO_2, 0) }; 216 216 static const unsigned int uart_rts_ao_a_pins[] = { PIN(GPIOAO_3, 0) }; 217 - static const unsigned int uart_tx_ao_b_pins[] = { PIN(GPIOAO_0, 0) }; 218 - static const unsigned int uart_rx_ao_b_pins[] = { PIN(GPIOAO_1, 0), 219 - PIN(GPIOAO_5, 0) }; 217 + static const unsigned int uart_tx_ao_b_pins[] = { PIN(GPIOAO_4, 0) }; 218 + static const unsigned int uart_rx_ao_b_pins[] = { PIN(GPIOAO_5, 0) }; 220 219 static const unsigned int uart_cts_ao_b_pins[] = { PIN(GPIOAO_2, 0) }; 221 220 static const unsigned int uart_rts_ao_b_pins[] = { PIN(GPIOAO_3, 0) }; 222 221 ··· 408 409 GPIO_GROUP(GPIOAO_9, 0), 409 410 410 411 /* bank AO */ 411 - GROUP(uart_tx_ao_b, 0, 26), 412 + GROUP(uart_tx_ao_b, 0, 24), 412 413 GROUP(uart_rx_ao_b, 0, 25), 413 414 GROUP(uart_tx_ao_a, 0, 12), 414 415 GROUP(uart_rx_ao_a, 0, 11),
+2
drivers/pinctrl/pinctrl-amd.c
··· 202 202 i = 128; 203 203 pin_num = AMD_GPIO_PINS_BANK2 + i; 204 204 break; 205 + default: 206 + return; 205 207 } 206 208 207 209 for (; i < pin_num; i++) {
+1 -1
drivers/pinctrl/uniphier/pinctrl-uniphier-ld20.c
··· 561 561 0, 0, 0, 0}; 562 562 static const unsigned ether_rmii_pins[] = {30, 31, 32, 33, 34, 35, 36, 37, 39, 563 563 41, 42, 45}; 564 - static const int ether_rmii_muxvals[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; 564 + static const int ether_rmii_muxvals[] = {0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1}; 565 565 static const unsigned i2c0_pins[] = {63, 64}; 566 566 static const int i2c0_muxvals[] = {0, 0}; 567 567 static const unsigned i2c1_pins[] = {65, 66};
+1
drivers/platform/x86/ideapad-laptop.c
··· 813 813 case 8: 814 814 case 7: 815 815 case 6: 816 + case 1: 816 817 ideapad_input_report(priv, vpc_bit); 817 818 break; 818 819 case 5:
+1 -1
drivers/platform/x86/intel_mid_powerbtn.c
··· 77 77 78 78 input_set_capability(input, EV_KEY, KEY_POWER); 79 79 80 - error = request_threaded_irq(irq, NULL, mfld_pb_isr, 0, 80 + error = request_threaded_irq(irq, NULL, mfld_pb_isr, IRQF_ONESHOT, 81 81 DRIVER_NAME, input); 82 82 if (error) { 83 83 dev_err(&pdev->dev, "Unable to request irq %d for mfld power"
+1 -1
drivers/platform/x86/mlx-platform.c
··· 326 326 return 0; 327 327 328 328 fail_platform_mux_register: 329 - for (i--; i > 0 ; i--) 329 + while (--i >= 0) 330 330 platform_device_unregister(priv->pdev_mux[i]); 331 331 platform_device_unregister(priv->pdev_i2c); 332 332 fail_alloc:
+2 -4
drivers/platform/x86/surface3-wmi.c
··· 139 139 140 140 static int s3_wmi_check_platform_device(struct device *dev, void *data) 141 141 { 142 - struct acpi_device *adev, *ts_adev; 142 + struct acpi_device *adev, *ts_adev = NULL; 143 143 acpi_handle handle; 144 144 acpi_status status; 145 145 ··· 244 244 return 0; 245 245 } 246 246 247 - #ifdef CONFIG_PM 248 - static int s3_wmi_resume(struct device *dev) 247 + static int __maybe_unused s3_wmi_resume(struct device *dev) 249 248 { 250 249 s3_wmi_send_lid_state(); 251 250 return 0; 252 251 } 253 - #endif 254 252 static SIMPLE_DEV_PM_OPS(s3_wmi_pm, NULL, s3_wmi_resume); 255 253 256 254 static struct platform_driver s3_wmi_driver = {
+16 -1
drivers/scsi/sd.c
··· 836 836 struct bio *bio = rq->bio; 837 837 sector_t sector = blk_rq_pos(rq); 838 838 unsigned int nr_sectors = blk_rq_sectors(rq); 839 + unsigned int nr_bytes = blk_rq_bytes(rq); 839 840 int ret; 840 841 841 842 if (sdkp->device->no_write_same) ··· 869 868 870 869 cmd->transfersize = sdp->sector_size; 871 870 cmd->allowed = SD_MAX_RETRIES; 872 - return scsi_init_io(cmd); 871 + 872 + /* 873 + * For WRITE SAME the data transferred via the DATA OUT buffer is 874 + * different from the amount of data actually written to the target. 875 + * 876 + * We set up __data_len to the amount of data transferred via the 877 + * DATA OUT buffer so that blk_rq_map_sg sets up the proper S/G list 878 + * to transfer a single sector of data first, but then reset it to 879 + * the amount of data to be written right after so that the I/O path 880 + * knows how much to actually write. 881 + */ 882 + rq->__data_len = sdp->sector_size; 883 + ret = scsi_init_io(cmd); 884 + rq->__data_len = nr_bytes; 885 + return ret; 873 886 } 874 887 875 888 static int sd_setup_flush_cmnd(struct scsi_cmnd *cmd)
+17 -3
drivers/thermal/thermal_hwmon.c
··· 59 59 static DEFINE_MUTEX(thermal_hwmon_list_lock); 60 60 61 61 static ssize_t 62 + name_show(struct device *dev, struct device_attribute *attr, char *buf) 63 + { 64 + struct thermal_hwmon_device *hwmon = dev_get_drvdata(dev); 65 + return sprintf(buf, "%s\n", hwmon->type); 66 + } 67 + static DEVICE_ATTR_RO(name); 68 + 69 + static ssize_t 62 70 temp_input_show(struct device *dev, struct device_attribute *attr, char *buf) 63 71 { 64 72 int temperature; ··· 165 157 166 158 INIT_LIST_HEAD(&hwmon->tz_list); 167 159 strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH); 168 - hwmon->device = hwmon_device_register_with_info(NULL, hwmon->type, 169 - hwmon, NULL, NULL); 160 + hwmon->device = hwmon_device_register(NULL); 170 161 if (IS_ERR(hwmon->device)) { 171 162 result = PTR_ERR(hwmon->device); 172 163 goto free_mem; 173 164 } 165 + dev_set_drvdata(hwmon->device, hwmon); 166 + result = device_create_file(hwmon->device, &dev_attr_name); 167 + if (result) 168 + goto free_mem; 174 169 175 170 register_sys_interface: 176 171 temp = kzalloc(sizeof(*temp), GFP_KERNEL); ··· 222 211 free_temp_mem: 223 212 kfree(temp); 224 213 unregister_name: 225 - if (new_hwmon_device) 214 + if (new_hwmon_device) { 215 + device_remove_file(hwmon->device, &dev_attr_name); 226 216 hwmon_device_unregister(hwmon->device); 217 + } 227 218 free_mem: 228 219 if (new_hwmon_device) 229 220 kfree(hwmon); ··· 267 254 list_del(&hwmon->node); 268 255 mutex_unlock(&thermal_hwmon_list_lock); 269 256 257 + device_remove_file(hwmon->device, &dev_attr_name); 270 258 hwmon_device_unregister(hwmon->device); 271 259 kfree(hwmon); 272 260 }
+4
drivers/vfio/vfio_iommu_spapr_tce.c
··· 1270 1270 /* pr_debug("tce_vfio: Attaching group #%u to iommu %p\n", 1271 1271 iommu_group_id(iommu_group), iommu_group); */ 1272 1272 table_group = iommu_group_get_iommudata(iommu_group); 1273 + if (!table_group) { 1274 + ret = -ENODEV; 1275 + goto unlock_exit; 1276 + } 1273 1277 1274 1278 if (tce_groups_attached(container) && (!table_group->ops || 1275 1279 !table_group->ops->take_ownership ||
+9 -4
drivers/vhost/vsock.c
··· 373 373 374 374 static int vhost_vsock_start(struct vhost_vsock *vsock) 375 375 { 376 + struct vhost_virtqueue *vq; 376 377 size_t i; 377 378 int ret; 378 379 ··· 384 383 goto err; 385 384 386 385 for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { 387 - struct vhost_virtqueue *vq = &vsock->vqs[i]; 386 + vq = &vsock->vqs[i]; 388 387 389 388 mutex_lock(&vq->mutex); 390 389 391 390 if (!vhost_vq_access_ok(vq)) { 392 391 ret = -EFAULT; 393 - mutex_unlock(&vq->mutex); 394 392 goto err_vq; 395 393 } 396 394 397 395 if (!vq->private_data) { 398 396 vq->private_data = vsock; 399 - vhost_vq_init_access(vq); 397 + ret = vhost_vq_init_access(vq); 398 + if (ret) 399 + goto err_vq; 400 400 } 401 401 402 402 mutex_unlock(&vq->mutex); ··· 407 405 return 0; 408 406 409 407 err_vq: 408 + vq->private_data = NULL; 409 + mutex_unlock(&vq->mutex); 410 + 410 411 for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { 411 - struct vhost_virtqueue *vq = &vsock->vqs[i]; 412 + vq = &vsock->vqs[i]; 412 413 413 414 mutex_lock(&vq->mutex); 414 415 vq->private_data = NULL;
+14 -12
drivers/video/fbdev/core/fbcmap.c
··· 163 163 164 164 int fb_copy_cmap(const struct fb_cmap *from, struct fb_cmap *to) 165 165 { 166 - int tooff = 0, fromoff = 0; 167 - int size; 166 + unsigned int tooff = 0, fromoff = 0; 167 + size_t size; 168 168 169 169 if (to->start > from->start) 170 170 fromoff = to->start - from->start; 171 171 else 172 172 tooff = from->start - to->start; 173 - size = to->len - tooff; 174 - if (size > (int) (from->len - fromoff)) 175 - size = from->len - fromoff; 176 - if (size <= 0) 173 + if (fromoff >= from->len || tooff >= to->len) 174 + return -EINVAL; 175 + 176 + size = min_t(size_t, to->len - tooff, from->len - fromoff); 177 + if (size == 0) 177 178 return -EINVAL; 178 179 size *= sizeof(u16); 179 180 ··· 188 187 189 188 int fb_cmap_to_user(const struct fb_cmap *from, struct fb_cmap_user *to) 190 189 { 191 - int tooff = 0, fromoff = 0; 192 - int size; 190 + unsigned int tooff = 0, fromoff = 0; 191 + size_t size; 193 192 194 193 if (to->start > from->start) 195 194 fromoff = to->start - from->start; 196 195 else 197 196 tooff = from->start - to->start; 198 - size = to->len - tooff; 199 - if (size > (int) (from->len - fromoff)) 200 - size = from->len - fromoff; 201 - if (size <= 0) 197 + if (fromoff >= from->len || tooff >= to->len) 198 + return -EINVAL; 199 + 200 + size = min_t(size_t, to->len - tooff, from->len - fromoff); 201 + if (size == 0) 202 202 return -EINVAL; 203 203 size *= sizeof(u16); 204 204
+19 -1
drivers/virtio/virtio_mmio.c
··· 59 59 #define pr_fmt(fmt) "virtio-mmio: " fmt 60 60 61 61 #include <linux/acpi.h> 62 + #include <linux/dma-mapping.h> 62 63 #include <linux/highmem.h> 63 64 #include <linux/interrupt.h> 64 65 #include <linux/io.h> ··· 499 498 struct virtio_mmio_device *vm_dev; 500 499 struct resource *mem; 501 500 unsigned long magic; 501 + int rc; 502 502 503 503 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 504 504 if (!mem) ··· 549 547 } 550 548 vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID); 551 549 552 - if (vm_dev->version == 1) 550 + if (vm_dev->version == 1) { 553 551 writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); 552 + 553 + rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 554 + /* 555 + * In the legacy case, ensure our coherently-allocated virtio 556 + * ring will be at an address expressable as a 32-bit PFN. 557 + */ 558 + if (!rc) 559 + dma_set_coherent_mask(&pdev->dev, 560 + DMA_BIT_MASK(32 + PAGE_SHIFT)); 561 + } else { 562 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 563 + } 564 + if (rc) 565 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 566 + if (rc) 567 + dev_warn(&pdev->dev, "Failed to enable 64-bit or 32-bit DMA. Trying to continue, but this might not work.\n"); 554 568 555 569 platform_set_drvdata(pdev, vm_dev); 556 570
+7
drivers/virtio/virtio_ring.c
··· 159 159 if (xen_domain()) 160 160 return true; 161 161 162 + /* 163 + * On ARM-based machines, the DMA ops will do the right thing, 164 + * so always use them with legacy devices. 165 + */ 166 + if (IS_ENABLED(CONFIG_ARM) || IS_ENABLED(CONFIG_ARM64)) 167 + return !virtio_has_feature(vdev, VIRTIO_F_VERSION_1); 168 + 162 169 return false; 163 170 } 164 171
+3 -2
drivers/xen/swiotlb-xen.c
··· 414 414 if (map == SWIOTLB_MAP_ERROR) 415 415 return DMA_ERROR_CODE; 416 416 417 + dev_addr = xen_phys_to_bus(map); 417 418 xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT), 418 419 dev_addr, map & ~PAGE_MASK, size, dir, attrs); 419 - dev_addr = xen_phys_to_bus(map); 420 420 421 421 /* 422 422 * Ensure that the address returned is DMA'ble ··· 575 575 sg_dma_len(sgl) = 0; 576 576 return 0; 577 577 } 578 + dev_addr = xen_phys_to_bus(map); 578 579 xen_dma_map_page(hwdev, pfn_to_page(map >> PAGE_SHIFT), 579 580 dev_addr, 580 581 map & ~PAGE_MASK, 581 582 sg->length, 582 583 dir, 583 584 attrs); 584 - sg->dma_address = xen_phys_to_bus(map); 585 + sg->dma_address = dev_addr; 585 586 } else { 586 587 /* we are not interested in the dma_addr returned by 587 588 * xen_dma_map_page, only in the potential cache flushes executed
+1
fs/Kconfig
··· 38 38 bool "Direct Access (DAX) support" 39 39 depends on MMU 40 40 depends on !(ARM || MIPS || SPARC) 41 + select FS_IOMAP 41 42 help 42 43 Direct Access (DAX) can be used on memory-backed block devices. 43 44 If the block device supports DAX and the filesystem supports DAX,
+3 -3
fs/block_dev.c
··· 331 331 struct blk_plug plug; 332 332 struct blkdev_dio *dio; 333 333 struct bio *bio; 334 - bool is_read = (iov_iter_rw(iter) == READ); 334 + bool is_read = (iov_iter_rw(iter) == READ), is_sync; 335 335 loff_t pos = iocb->ki_pos; 336 336 blk_qc_t qc = BLK_QC_T_NONE; 337 337 int ret; ··· 344 344 bio_get(bio); /* extra ref for the completion handler */ 345 345 346 346 dio = container_of(bio, struct blkdev_dio, bio); 347 - dio->is_sync = is_sync_kiocb(iocb); 347 + dio->is_sync = is_sync = is_sync_kiocb(iocb); 348 348 if (dio->is_sync) 349 349 dio->waiter = current; 350 350 else ··· 398 398 } 399 399 blk_finish_plug(&plug); 400 400 401 - if (!dio->is_sync) 401 + if (!is_sync) 402 402 return -EIOCBQUEUED; 403 403 404 404 for (;;) {
+17 -9
fs/btrfs/inode.c
··· 3835 3835 break; 3836 3836 case S_IFDIR: 3837 3837 inode->i_fop = &btrfs_dir_file_operations; 3838 - if (root == fs_info->tree_root) 3839 - inode->i_op = &btrfs_dir_ro_inode_operations; 3840 - else 3841 - inode->i_op = &btrfs_dir_inode_operations; 3838 + inode->i_op = &btrfs_dir_inode_operations; 3842 3839 break; 3843 3840 case S_IFLNK: 3844 3841 inode->i_op = &btrfs_symlink_inode_operations; ··· 4502 4505 if (found_type > min_type) { 4503 4506 del_item = 1; 4504 4507 } else { 4505 - if (item_end < new_size) 4508 + if (item_end < new_size) { 4509 + /* 4510 + * With NO_HOLES mode, for the following mapping 4511 + * 4512 + * [0-4k][hole][8k-12k] 4513 + * 4514 + * if truncating isize down to 6k, it ends up 4515 + * isize being 8k. 4516 + */ 4517 + if (btrfs_fs_incompat(root->fs_info, NO_HOLES)) 4518 + last_size = new_size; 4506 4519 break; 4520 + } 4507 4521 if (found_key.offset >= new_size) 4508 4522 del_item = 1; 4509 4523 else ··· 5718 5710 5719 5711 inode->i_ino = BTRFS_EMPTY_SUBVOL_DIR_OBJECTID; 5720 5712 inode->i_op = &btrfs_dir_ro_inode_operations; 5713 + inode->i_opflags &= ~IOP_XATTR; 5721 5714 inode->i_fop = &simple_dir_operations; 5722 5715 inode->i_mode = S_IFDIR | S_IRUGO | S_IWUSR | S_IXUGO; 5723 5716 inode->i_mtime = current_time(inode); ··· 7224 7215 struct extent_map *em = NULL; 7225 7216 int ret; 7226 7217 7227 - down_read(&BTRFS_I(inode)->dio_sem); 7228 7218 if (type != BTRFS_ORDERED_NOCOW) { 7229 7219 em = create_pinned_em(inode, start, len, orig_start, 7230 7220 block_start, block_len, orig_block_len, ··· 7242 7234 em = ERR_PTR(ret); 7243 7235 } 7244 7236 out: 7245 - up_read(&BTRFS_I(inode)->dio_sem); 7246 7237 7247 7238 return em; 7248 7239 } ··· 8699 8692 dio_data.unsubmitted_oe_range_start = (u64)offset; 8700 8693 dio_data.unsubmitted_oe_range_end = (u64)offset; 8701 8694 current->journal_info = &dio_data; 8695 + down_read(&BTRFS_I(inode)->dio_sem); 8702 8696 } else if (test_bit(BTRFS_INODE_READDIO_NEED_LOCK, 8703 8697 &BTRFS_I(inode)->runtime_flags)) { 8704 8698 inode_dio_end(inode); ··· 8712 8704 iter, btrfs_get_blocks_direct, NULL, 8713 8705 btrfs_submit_direct, flags); 8714 8706 if (iov_iter_rw(iter) == WRITE) { 8707 + up_read(&BTRFS_I(inode)->dio_sem); 8715 8708 current->journal_info = NULL; 8716 8709 if (ret < 0 && ret != -EIOCBQUEUED) { 8717 8710 if (dio_data.reserve) ··· 9221 9212 break; 9222 9213 } 9223 9214 9215 + btrfs_block_rsv_release(fs_info, rsv, -1); 9224 9216 ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv, 9225 9217 rsv, min_size, 0); 9226 9218 BUG_ON(ret); /* shouldn't happen */ ··· 10589 10579 static const struct inode_operations btrfs_dir_ro_inode_operations = { 10590 10580 .lookup = btrfs_lookup, 10591 10581 .permission = btrfs_permission, 10592 - .get_acl = btrfs_get_acl, 10593 - .set_acl = btrfs_set_acl, 10594 10582 .update_time = btrfs_update_time, 10595 10583 };
-2
fs/dax.c
··· 990 990 } 991 991 EXPORT_SYMBOL_GPL(__dax_zero_page_range); 992 992 993 - #ifdef CONFIG_FS_IOMAP 994 993 static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos) 995 994 { 996 995 return iomap->blkno + (((pos & PAGE_MASK) - iomap->offset) >> 9); ··· 1427 1428 } 1428 1429 EXPORT_SYMBOL_GPL(dax_iomap_pmd_fault); 1429 1430 #endif /* CONFIG_FS_DAX_PMD */ 1430 - #endif /* CONFIG_FS_IOMAP */
-1
fs/ext2/Kconfig
··· 1 1 config EXT2_FS 2 2 tristate "Second extended fs support" 3 - select FS_IOMAP if FS_DAX 4 3 help 5 4 Ext2 is a standard Linux file system for hard disks. 6 5
-1
fs/ext4/Kconfig
··· 37 37 select CRC16 38 38 select CRYPTO 39 39 select CRYPTO_CRC32C 40 - select FS_IOMAP if FS_DAX 41 40 help 42 41 This is the next generation of the ext3 filesystem. 43 42
+3 -1
fs/nfs/nfs4proc.c
··· 2700 2700 sattr->ia_valid |= ATTR_MTIME; 2701 2701 2702 2702 /* Except MODE, it seems harmless of setting twice. */ 2703 - if ((attrset[1] & FATTR4_WORD1_MODE)) 2703 + if (opendata->o_arg.createmode != NFS4_CREATE_EXCLUSIVE && 2704 + attrset[1] & FATTR4_WORD1_MODE) 2704 2705 sattr->ia_valid &= ~ATTR_MODE; 2705 2706 2706 2707 if (attrset[2] & FATTR4_WORD2_SECURITY_LABEL) ··· 8491 8490 goto out; 8492 8491 } 8493 8492 8493 + nfs4_sequence_free_slot(&lgp->res.seq_res); 8494 8494 err = nfs4_handle_exception(server, nfs4err, exception); 8495 8495 if (!status) { 8496 8496 if (exception->retry)
+1
fs/nfs/nfs4state.c
··· 1091 1091 case -NFS4ERR_BADXDR: 1092 1092 case -NFS4ERR_RESOURCE: 1093 1093 case -NFS4ERR_NOFILEHANDLE: 1094 + case -NFS4ERR_MOVED: 1094 1095 /* Non-seqid mutating errors */ 1095 1096 return; 1096 1097 };
+1 -1
fs/nfs/pnfs.c
··· 1200 1200 1201 1201 send = pnfs_prepare_layoutreturn(lo, &stateid, NULL); 1202 1202 spin_unlock(&ino->i_lock); 1203 - pnfs_free_lseg_list(&tmp_list); 1204 1203 if (send) 1205 1204 status = pnfs_send_layoutreturn(lo, &stateid, IOMODE_ANY, true); 1206 1205 out_put_layout_hdr: 1206 + pnfs_free_lseg_list(&tmp_list); 1207 1207 pnfs_put_layout_hdr(lo); 1208 1208 out: 1209 1209 dprintk("<-- %s status: %d\n", __func__, status);
+2
fs/proc/base.c
··· 3179 3179 iter.tgid += 1, iter = next_tgid(ns, iter)) { 3180 3180 char name[PROC_NUMBUF]; 3181 3181 int len; 3182 + 3183 + cond_resched(); 3182 3184 if (!has_pid_permissions(ns, iter.task, 2)) 3183 3185 continue; 3184 3186
+22 -1
fs/romfs/super.c
··· 74 74 #include <linux/highmem.h> 75 75 #include <linux/pagemap.h> 76 76 #include <linux/uaccess.h> 77 + #include <linux/major.h> 77 78 #include "internal.h" 78 79 79 80 static struct kmem_cache *romfs_inode_cachep; ··· 417 416 static int romfs_statfs(struct dentry *dentry, struct kstatfs *buf) 418 417 { 419 418 struct super_block *sb = dentry->d_sb; 420 - u64 id = huge_encode_dev(sb->s_bdev->bd_dev); 419 + u64 id = 0; 420 + 421 + /* When calling huge_encode_dev(), 422 + * use sb->s_bdev->bd_dev when, 423 + * - CONFIG_ROMFS_ON_BLOCK defined 424 + * use sb->s_dev when, 425 + * - CONFIG_ROMFS_ON_BLOCK undefined and 426 + * - CONFIG_ROMFS_ON_MTD defined 427 + * leave id as 0 when, 428 + * - CONFIG_ROMFS_ON_BLOCK undefined and 429 + * - CONFIG_ROMFS_ON_MTD undefined 430 + */ 431 + if (sb->s_bdev) 432 + id = huge_encode_dev(sb->s_bdev->bd_dev); 433 + else if (sb->s_dev) 434 + id = huge_encode_dev(sb->s_dev); 421 435 422 436 buf->f_type = ROMFS_MAGIC; 423 437 buf->f_namelen = ROMFS_MAXFN; ··· 505 489 sb->s_flags |= MS_RDONLY | MS_NOATIME; 506 490 sb->s_op = &romfs_super_ops; 507 491 492 + #ifdef CONFIG_ROMFS_ON_MTD 493 + /* Use same dev ID from the underlying mtdblock device */ 494 + if (sb->s_mtd) 495 + sb->s_dev = MKDEV(MTD_BLOCK_MAJOR, sb->s_mtd->index); 496 + #endif 508 497 /* read the image superblock and check it */ 509 498 rsb = kmalloc(512, GFP_KERNEL); 510 499 if (!rsb)
+35 -2
fs/userfaultfd.c
··· 63 63 struct uffd_msg msg; 64 64 wait_queue_t wq; 65 65 struct userfaultfd_ctx *ctx; 66 + bool waken; 66 67 }; 67 68 68 69 struct userfaultfd_wake_range { ··· 87 86 if (len && (start > uwq->msg.arg.pagefault.address || 88 87 start + len <= uwq->msg.arg.pagefault.address)) 89 88 goto out; 89 + WRITE_ONCE(uwq->waken, true); 90 + /* 91 + * The implicit smp_mb__before_spinlock in try_to_wake_up() 92 + * renders uwq->waken visible to other CPUs before the task is 93 + * woken. 94 + */ 90 95 ret = wake_up_state(wq->private, mode); 91 96 if (ret) 92 97 /* ··· 271 264 struct userfaultfd_wait_queue uwq; 272 265 int ret; 273 266 bool must_wait, return_to_userland; 267 + long blocking_state; 274 268 275 269 BUG_ON(!rwsem_is_locked(&mm->mmap_sem)); 276 270 ··· 342 334 uwq.wq.private = current; 343 335 uwq.msg = userfault_msg(vmf->address, vmf->flags, reason); 344 336 uwq.ctx = ctx; 337 + uwq.waken = false; 345 338 346 339 return_to_userland = 347 340 (vmf->flags & (FAULT_FLAG_USER|FAULT_FLAG_KILLABLE)) == 348 341 (FAULT_FLAG_USER|FAULT_FLAG_KILLABLE); 342 + blocking_state = return_to_userland ? TASK_INTERRUPTIBLE : 343 + TASK_KILLABLE; 349 344 350 345 spin_lock(&ctx->fault_pending_wqh.lock); 351 346 /* ··· 361 350 * following the spin_unlock to happen before the list_add in 362 351 * __add_wait_queue. 363 352 */ 364 - set_current_state(return_to_userland ? TASK_INTERRUPTIBLE : 365 - TASK_KILLABLE); 353 + set_current_state(blocking_state); 366 354 spin_unlock(&ctx->fault_pending_wqh.lock); 367 355 368 356 must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags, ··· 374 364 wake_up_poll(&ctx->fd_wqh, POLLIN); 375 365 schedule(); 376 366 ret |= VM_FAULT_MAJOR; 367 + 368 + /* 369 + * False wakeups can originate even from rwsem before 370 + * up_read() however userfaults will wait either for a 371 + * targeted wakeup on the specific uwq waitqueue from 372 + * wake_userfault() or for signals or for uffd 373 + * release. 
374 + */ 375 + while (!READ_ONCE(uwq.waken)) { 376 + /* 377 + * This needs the full smp_store_mb() 378 + * guarantee as the state write must be 379 + * visible to other CPUs before reading 380 + * uwq.waken from other CPUs. 381 + */ 382 + set_current_state(blocking_state); 383 + if (READ_ONCE(uwq.waken) || 384 + READ_ONCE(ctx->released) || 385 + (return_to_userland ? signal_pending(current) : 386 + fatal_signal_pending(current))) 387 + break; 388 + schedule(); 389 + } 377 390 } 378 391 379 392 __set_current_state(TASK_RUNNING);
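The new wait loop in handle_userfault() re-checks several exit conditions after each set_current_state(). That decision can be isolated as a pure predicate (a sketch with the conditions from the diff as plain booleans; this is not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>

/* Should the inner wait loop stop? Mirrors the checks made between
 * set_current_state() and schedule() in the diff above: a targeted
 * wakeup (uwq.waken), uffd release, or a pending signal -- any signal
 * if we will return to userland, only a fatal one otherwise. */
static bool uffd_should_stop_waiting(bool waken, bool released,
				     bool return_to_userland,
				     bool signal_pending,
				     bool fatal_signal_pending)
{
	if (waken || released)
		return true;
	return return_to_userland ? signal_pending : fatal_signal_pending;
}
```

In the kernel the re-check must follow a full barrier (set_current_state() with smp_store_mb semantics) so the waker's WRITE_ONCE(uwq->waken, true) cannot be missed; the predicate only captures the logic, not the ordering.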
+55 -15
fs/xfs/libxfs/xfs_ag_resv.c
··· 39 39 #include "xfs_rmap_btree.h" 40 40 #include "xfs_btree.h" 41 41 #include "xfs_refcount_btree.h" 42 + #include "xfs_ialloc_btree.h" 42 43 43 44 /* 44 45 * Per-AG Block Reservations ··· 201 200 struct xfs_mount *mp = pag->pag_mount; 202 201 struct xfs_ag_resv *resv; 203 202 int error; 203 + xfs_extlen_t reserved; 204 204 205 - resv = xfs_perag_resv(pag, type); 206 205 if (used > ask) 207 206 ask = used; 208 - resv->ar_asked = ask; 209 - resv->ar_reserved = resv->ar_orig_reserved = ask - used; 210 - mp->m_ag_max_usable -= ask; 207 + reserved = ask - used; 211 208 212 - trace_xfs_ag_resv_init(pag, type, ask); 213 - 214 - error = xfs_mod_fdblocks(mp, -(int64_t)resv->ar_reserved, true); 215 - if (error) 209 + error = xfs_mod_fdblocks(mp, -(int64_t)reserved, true); 210 + if (error) { 216 211 trace_xfs_ag_resv_init_error(pag->pag_mount, pag->pag_agno, 217 212 error, _RET_IP_); 213 + xfs_warn(mp, 214 + "Per-AG reservation for AG %u failed. Filesystem may run out of space.", 215 + pag->pag_agno); 216 + return error; 217 + } 218 218 219 - return error; 219 + mp->m_ag_max_usable -= ask; 220 + 221 + resv = xfs_perag_resv(pag, type); 222 + resv->ar_asked = ask; 223 + resv->ar_reserved = resv->ar_orig_reserved = reserved; 224 + 225 + trace_xfs_ag_resv_init(pag, type, ask); 226 + return 0; 220 227 } 221 228 222 229 /* Create a per-AG block reservation. 
*/ ··· 232 223 xfs_ag_resv_init( 233 224 struct xfs_perag *pag) 234 225 { 226 + struct xfs_mount *mp = pag->pag_mount; 227 + xfs_agnumber_t agno = pag->pag_agno; 235 228 xfs_extlen_t ask; 236 229 xfs_extlen_t used; 237 230 int error = 0; ··· 242 231 if (pag->pag_meta_resv.ar_asked == 0) { 243 232 ask = used = 0; 244 233 245 - error = xfs_refcountbt_calc_reserves(pag->pag_mount, 246 - pag->pag_agno, &ask, &used); 234 + error = xfs_refcountbt_calc_reserves(mp, agno, &ask, &used); 235 + if (error) 236 + goto out; 237 + 238 + error = xfs_finobt_calc_reserves(mp, agno, &ask, &used); 247 239 if (error) 248 240 goto out; 249 241 250 242 error = __xfs_ag_resv_init(pag, XFS_AG_RESV_METADATA, 251 243 ask, used); 252 - if (error) 253 - goto out; 244 + if (error) { 245 + /* 246 + * Because we didn't have per-AG reservations when the 247 + * finobt feature was added we might not be able to 248 + * reserve all needed blocks. Warn and fall back to the 249 + * old and potentially buggy code in that case, but 250 + * ensure we do have the reservation for the refcountbt. 
251 + */ 252 + ask = used = 0; 253 + 254 + mp->m_inotbt_nores = true; 255 + 256 + error = xfs_refcountbt_calc_reserves(mp, agno, &ask, 257 + &used); 258 + if (error) 259 + goto out; 260 + 261 + error = __xfs_ag_resv_init(pag, XFS_AG_RESV_METADATA, 262 + ask, used); 263 + if (error) 264 + goto out; 265 + } 254 266 } 255 267 256 268 /* Create the AGFL metadata reservation */ 257 269 if (pag->pag_agfl_resv.ar_asked == 0) { 258 270 ask = used = 0; 259 271 260 - error = xfs_rmapbt_calc_reserves(pag->pag_mount, pag->pag_agno, 261 - &ask, &used); 272 + error = xfs_rmapbt_calc_reserves(mp, agno, &ask, &used); 262 273 if (error) 263 274 goto out; 264 275 ··· 289 256 goto out; 290 257 } 291 258 259 + #ifdef DEBUG 260 + /* need to read in the AGF for the ASSERT below to work */ 261 + error = xfs_alloc_pagf_init(pag->pag_mount, NULL, pag->pag_agno, 0); 262 + if (error) 263 + return error; 264 + 292 265 ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + 293 266 xfs_perag_resv(pag, XFS_AG_RESV_AGFL)->ar_reserved <= 294 267 pag->pagf_freeblks + pag->pagf_flcount); 268 + #endif 295 269 out: 296 270 return error; 297 271 }
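The retry logic above first asks for a per-AG reservation covering both the refcount and finobt btrees and, on ENOSPC, sets m_inotbt_nores and falls back to the refcountbt portion alone. Sketched with a toy free-space check standing in for xfs_mod_fdblocks() (names and the -28 errno value are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_ENOSPC 28

/* Toy stand-in for xfs_mod_fdblocks(): succeed iff enough free blocks. */
static int try_reserve(unsigned int ask, unsigned int freeblks)
{
	return ask <= freeblks ? 0 : -TOY_ENOSPC;
}

/* Mirror of the fallback in xfs_ag_resv_init(): try refcountbt+finobt
 * together; on failure remember that the finobt has no reservation and
 * retry with the refcountbt ask alone. */
static int init_meta_resv(unsigned int refc_ask, unsigned int fino_ask,
			  unsigned int freeblks, bool *inotbt_nores)
{
	int error = try_reserve(refc_ask + fino_ask, freeblks);

	if (!error)
		return 0;
	*inotbt_nores = true;
	return try_reserve(refc_ask, freeblks);
}
```

The flag is sticky: once the fallback triggers, later ifree transactions know they cannot rely on the finobt reservation.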
-6
fs/xfs/libxfs/xfs_attr.c
··· 131 131 if (XFS_FORCED_SHUTDOWN(ip->i_mount)) 132 132 return -EIO; 133 133 134 - if (!xfs_inode_hasattr(ip)) 135 - return -ENOATTR; 136 - 137 134 error = xfs_attr_args_init(&args, ip, name, flags); 138 135 if (error) 139 136 return error; ··· 388 391 389 392 if (XFS_FORCED_SHUTDOWN(dp->i_mount)) 390 393 return -EIO; 391 - 392 - if (!xfs_inode_hasattr(dp)) 393 - return -ENOATTR; 394 394 395 395 error = xfs_attr_args_init(&args, dp, name, flags); 396 396 if (error)
+34 -14
fs/xfs/libxfs/xfs_bmap.c
··· 3629 3629 align = xfs_get_cowextsz_hint(ap->ip); 3630 3630 else if (xfs_alloc_is_userdata(ap->datatype)) 3631 3631 align = xfs_get_extsz_hint(ap->ip); 3632 - if (unlikely(align)) { 3632 + if (align) { 3633 3633 error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev, 3634 3634 align, 0, ap->eof, 0, ap->conv, 3635 3635 &ap->offset, &ap->length); ··· 3701 3701 args.minlen = ap->minlen; 3702 3702 } 3703 3703 /* apply extent size hints if obtained earlier */ 3704 - if (unlikely(align)) { 3704 + if (align) { 3705 3705 args.prod = align; 3706 3706 if ((args.mod = (xfs_extlen_t)do_mod(ap->offset, args.prod))) 3707 3707 args.mod = (xfs_extlen_t)(args.prod - args.mod); ··· 4514 4514 int n; /* current extent index */ 4515 4515 xfs_fileoff_t obno; /* old block number (offset) */ 4516 4516 int whichfork; /* data or attr fork */ 4517 - char inhole; /* current location is hole in file */ 4518 - char wasdelay; /* old extent was delayed */ 4519 4517 4520 4518 #ifdef DEBUG 4521 4519 xfs_fileoff_t orig_bno; /* original block number value */ ··· 4601 4603 bma.firstblock = firstblock; 4602 4604 4603 4605 while (bno < end && n < *nmap) { 4604 - inhole = eof || bma.got.br_startoff > bno; 4605 - wasdelay = !inhole && isnullstartblock(bma.got.br_startblock); 4606 + bool need_alloc = false, wasdelay = false; 4606 4607 4607 - /* 4608 - * Make sure we only reflink into a hole. 4609 - */ 4610 - if (flags & XFS_BMAPI_REMAP) 4611 - ASSERT(inhole); 4612 - if (flags & XFS_BMAPI_COWFORK) 4613 - ASSERT(!inhole); 4608 + /* in hole or beyond EOF? */ 4609 + if (eof || bma.got.br_startoff > bno) { 4610 + if (flags & XFS_BMAPI_DELALLOC) { 4611 + /* 4612 + * For the COW fork we can reasonably get a 4613 + * request for converting an extent that races 4614 + * with other threads already having converted 4615 + * part of it, as converting COW to 4616 + * regular blocks is not protected by the 4617 + * IOLOCK. 
4618 + */ 4619 + ASSERT(flags & XFS_BMAPI_COWFORK); 4620 + if (!(flags & XFS_BMAPI_COWFORK)) { 4621 + error = -EIO; 4622 + goto error0; 4623 + } 4624 + 4625 + if (eof || bno >= end) 4626 + break; 4627 + } else { 4628 + need_alloc = true; 4629 + } 4630 + } else { 4631 + /* 4632 + * Make sure we only reflink into a hole. 4633 + */ 4634 + ASSERT(!(flags & XFS_BMAPI_REMAP)); 4635 + if (isnullstartblock(bma.got.br_startblock)) 4636 + wasdelay = true; 4637 + } 4614 4638 4615 4639 /* 4616 4640 * First, deal with the hole before the allocated space 4617 4641 * that we found, if any. 4618 4642 */ 4619 - if (inhole || wasdelay) { 4643 + if (need_alloc || wasdelay) { 4620 4644 bma.eof = eof; 4621 4645 bma.conv = !!(flags & XFS_BMAPI_CONVERT); 4622 4646 bma.wasdel = wasdelay;
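The rewritten loop head in xfs_bmapi_write() now classifies each request as needing a fresh allocation, being a delalloc conversion, or neither. The classification can be expressed in isolation (a sketch; XFS types are replaced with plain integers and booleans, and the names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the new loop head: a request in a hole (or past EOF) needs
 * an allocation unless XFS_BMAPI_DELALLOC restricted us to converting
 * existing delalloc extents; inside an existing mapping we only note
 * whether the old extent was delayed. */
static void classify_request(bool eof, uint64_t got_startoff, uint64_t bno,
			     bool delalloc_only, bool got_is_delalloc,
			     bool *need_alloc, bool *wasdelay)
{
	*need_alloc = false;
	*wasdelay = false;
	if (eof || got_startoff > bno) {
		if (!delalloc_only)
			*need_alloc = true;
	} else if (got_is_delalloc) {
		*wasdelay = true;
	}
}
```

With XFS_BMAPI_DELALLOC set, a hole simply produces neither flag, which is what lets the writeback path tolerate a racing conversion instead of asserting.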
+5 -1
fs/xfs/libxfs/xfs_bmap.h
··· 110 110 /* Map something in the CoW fork. */ 111 111 #define XFS_BMAPI_COWFORK 0x200 112 112 113 + /* Only convert delalloc space, don't allocate entirely new extents */ 114 + #define XFS_BMAPI_DELALLOC 0x400 115 + 113 116 #define XFS_BMAPI_FLAGS \ 114 117 { XFS_BMAPI_ENTIRE, "ENTIRE" }, \ 115 118 { XFS_BMAPI_METADATA, "METADATA" }, \ ··· 123 120 { XFS_BMAPI_CONVERT, "CONVERT" }, \ 124 121 { XFS_BMAPI_ZERO, "ZERO" }, \ 125 122 { XFS_BMAPI_REMAP, "REMAP" }, \ 126 - { XFS_BMAPI_COWFORK, "COWFORK" } 123 + { XFS_BMAPI_COWFORK, "COWFORK" }, \ 124 + { XFS_BMAPI_DELALLOC, "DELALLOC" } 127 125 128 126 129 127 static inline int xfs_bmapi_aflag(int w)
+87 -3
fs/xfs/libxfs/xfs_ialloc_btree.c
··· 82 82 } 83 83 84 84 STATIC int 85 - xfs_inobt_alloc_block( 85 + __xfs_inobt_alloc_block( 86 86 struct xfs_btree_cur *cur, 87 87 union xfs_btree_ptr *start, 88 88 union xfs_btree_ptr *new, 89 - int *stat) 89 + int *stat, 90 + enum xfs_ag_resv_type resv) 90 91 { 91 92 xfs_alloc_arg_t args; /* block allocation args */ 92 93 int error; /* error return value */ ··· 104 103 args.maxlen = 1; 105 104 args.prod = 1; 106 105 args.type = XFS_ALLOCTYPE_NEAR_BNO; 106 + args.resv = resv; 107 107 108 108 error = xfs_alloc_vextent(&args); 109 109 if (error) { ··· 122 120 new->s = cpu_to_be32(XFS_FSB_TO_AGBNO(args.mp, args.fsbno)); 123 121 *stat = 1; 124 122 return 0; 123 + } 124 + 125 + STATIC int 126 + xfs_inobt_alloc_block( 127 + struct xfs_btree_cur *cur, 128 + union xfs_btree_ptr *start, 129 + union xfs_btree_ptr *new, 130 + int *stat) 131 + { 132 + return __xfs_inobt_alloc_block(cur, start, new, stat, XFS_AG_RESV_NONE); 133 + } 134 + 135 + STATIC int 136 + xfs_finobt_alloc_block( 137 + struct xfs_btree_cur *cur, 138 + union xfs_btree_ptr *start, 139 + union xfs_btree_ptr *new, 140 + int *stat) 141 + { 142 + return __xfs_inobt_alloc_block(cur, start, new, stat, 143 + XFS_AG_RESV_METADATA); 125 144 } 126 145 127 146 STATIC int ··· 351 328 352 329 .dup_cursor = xfs_inobt_dup_cursor, 353 330 .set_root = xfs_finobt_set_root, 354 - .alloc_block = xfs_inobt_alloc_block, 331 + .alloc_block = xfs_finobt_alloc_block, 355 332 .free_block = xfs_inobt_free_block, 356 333 .get_minrecs = xfs_inobt_get_minrecs, 357 334 .get_maxrecs = xfs_inobt_get_maxrecs, ··· 503 480 return 0; 504 481 } 505 482 #endif /* DEBUG */ 483 + 484 + static xfs_extlen_t 485 + xfs_inobt_max_size( 486 + struct xfs_mount *mp) 487 + { 488 + /* Bail out if we're uninitialized, which can happen in mkfs. 
*/ 489 + if (mp->m_inobt_mxr[0] == 0) 490 + return 0; 491 + 492 + return xfs_btree_calc_size(mp, mp->m_inobt_mnr, 493 + (uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock / 494 + XFS_INODES_PER_CHUNK); 495 + } 496 + 497 + static int 498 + xfs_inobt_count_blocks( 499 + struct xfs_mount *mp, 500 + xfs_agnumber_t agno, 501 + xfs_btnum_t btnum, 502 + xfs_extlen_t *tree_blocks) 503 + { 504 + struct xfs_buf *agbp; 505 + struct xfs_btree_cur *cur; 506 + int error; 507 + 508 + error = xfs_ialloc_read_agi(mp, NULL, agno, &agbp); 509 + if (error) 510 + return error; 511 + 512 + cur = xfs_inobt_init_cursor(mp, NULL, agbp, agno, btnum); 513 + error = xfs_btree_count_blocks(cur, tree_blocks); 514 + xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR); 515 + xfs_buf_relse(agbp); 516 + 517 + return error; 518 + } 519 + 520 + /* 521 + * Figure out how many blocks to reserve and how many are used by this btree. 522 + */ 523 + int 524 + xfs_finobt_calc_reserves( 525 + struct xfs_mount *mp, 526 + xfs_agnumber_t agno, 527 + xfs_extlen_t *ask, 528 + xfs_extlen_t *used) 529 + { 530 + xfs_extlen_t tree_len = 0; 531 + int error; 532 + 533 + if (!xfs_sb_version_hasfinobt(&mp->m_sb)) 534 + return 0; 535 + 536 + error = xfs_inobt_count_blocks(mp, agno, XFS_BTNUM_FINO, &tree_len); 537 + if (error) 538 + return error; 539 + 540 + *ask += xfs_inobt_max_size(mp); 541 + *used += tree_len; 542 + return 0; 543 + }
+3
fs/xfs/libxfs/xfs_ialloc_btree.h
··· 72 72 #define xfs_inobt_rec_check_count(mp, rec) 0 73 73 #endif /* DEBUG */ 74 74 75 + int xfs_finobt_calc_reserves(struct xfs_mount *mp, xfs_agnumber_t agno, 76 + xfs_extlen_t *ask, xfs_extlen_t *used); 77 + 75 78 #endif /* __XFS_IALLOC_BTREE_H__ */
+1 -1
fs/xfs/libxfs/xfs_sb.c
··· 242 242 sbp->sb_blocklog < XFS_MIN_BLOCKSIZE_LOG || 243 243 sbp->sb_blocklog > XFS_MAX_BLOCKSIZE_LOG || 244 244 sbp->sb_blocksize != (1 << sbp->sb_blocklog) || 245 - sbp->sb_dirblklog > XFS_MAX_BLOCKSIZE_LOG || 245 + sbp->sb_dirblklog + sbp->sb_blocklog > XFS_MAX_BLOCKSIZE_LOG || 246 246 sbp->sb_inodesize < XFS_DINODE_MIN_SIZE || 247 247 sbp->sb_inodesize > XFS_DINODE_MAX_SIZE || 248 248 sbp->sb_inodelog < XFS_DINODE_MIN_LOG ||
+18 -10
fs/xfs/xfs_bmap_util.c
··· 528 528 xfs_bmbt_irec_t *map; /* buffer for user's data */ 529 529 xfs_mount_t *mp; /* file system mount point */ 530 530 int nex; /* # of user extents can do */ 531 - int nexleft; /* # of user extents left */ 532 531 int subnex; /* # of bmapi's can do */ 533 532 int nmap; /* number of map entries */ 534 533 struct getbmapx *out; /* output structure */ ··· 685 686 goto out_free_map; 686 687 } 687 688 688 - nexleft = nex; 689 - 690 689 do { 691 - nmap = (nexleft > subnex) ? subnex : nexleft; 690 + nmap = (nex > subnex) ? subnex : nex; 692 691 error = xfs_bmapi_read(ip, XFS_BB_TO_FSBT(mp, bmv->bmv_offset), 693 692 XFS_BB_TO_FSB(mp, bmv->bmv_length), 694 693 map, &nmap, bmapi_flags); ··· 694 697 goto out_free_map; 695 698 ASSERT(nmap <= subnex); 696 699 697 - for (i = 0; i < nmap && nexleft && bmv->bmv_length && 698 - cur_ext < bmv->bmv_count; i++) { 700 + for (i = 0; i < nmap && bmv->bmv_length && 701 + cur_ext < bmv->bmv_count - 1; i++) { 699 702 out[cur_ext].bmv_oflags = 0; 700 703 if (map[i].br_state == XFS_EXT_UNWRITTEN) 701 704 out[cur_ext].bmv_oflags |= BMV_OF_PREALLOC; ··· 757 760 continue; 758 761 } 759 762 763 + /* 764 + * In order to report shared extents accurately, 765 + * we report each distinct shared/unshared part 766 + * of a single bmbt record using multiple bmap 767 + * extents. To make that happen, we iterate the 768 + * same map array item multiple times, each 769 + * time trimming out the subextent that we just 770 + * reported. 771 + * 772 + * Because of this, we must check the out array 773 + * index (cur_ext) directly against bmv_count-1 774 + * to avoid overflows. 
775 + */ 760 776 if (inject_map.br_startblock != NULLFSBLOCK) { 761 777 map[i] = inject_map; 762 778 i--; 763 - } else 764 - nexleft--; 779 + } 765 780 bmv->bmv_entries++; 766 781 cur_ext++; 767 782 } 768 - } while (nmap && nexleft && bmv->bmv_length && 769 - cur_ext < bmv->bmv_count); 783 + } while (nmap && bmv->bmv_length && cur_ext < bmv->bmv_count - 1); 770 784 771 785 out_free_map: 772 786 kmem_free(map);
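Because a single bmbt record may now be reported as several bmap extents, the loop bounds the output index against bmv_count - 1 instead of counting down remaining extents. A toy version of that bound (hypothetical helper, not a kernel function):

```c
#include <assert.h>

/* Fill at most bmv_count - 1 output slots, mirroring the new
 * "cur_ext < bmv->bmv_count - 1" guard above, which leaves the last
 * slot free and keeps the user-supplied array from overflowing even
 * when one record expands into several reported extents. */
static int fill_bmap_slots(int candidates, int bmv_count)
{
	int cur_ext = 0;
	int i;

	for (i = 0; i < candidates && cur_ext < bmv_count - 1; i++)
		cur_ext++;
	return cur_ext;
}
```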
+1
fs/xfs/xfs_buf.c
··· 422 422 out_free_pages: 423 423 for (i = 0; i < bp->b_page_count; i++) 424 424 __free_page(bp->b_pages[i]); 425 + bp->b_flags &= ~_XBF_PAGES; 425 426 return error; 426 427 } 427 428
+12 -11
fs/xfs/xfs_inode.c
··· 1792 1792 int error; 1793 1793 1794 1794 /* 1795 - * The ifree transaction might need to allocate blocks for record 1796 - * insertion to the finobt. We don't want to fail here at ENOSPC, so 1797 - * allow ifree to dip into the reserved block pool if necessary. 1798 - * 1799 - * Freeing large sets of inodes generally means freeing inode chunks, 1800 - * directory and file data blocks, so this should be relatively safe. 1801 - * Only under severe circumstances should it be possible to free enough 1802 - * inodes to exhaust the reserve block pool via finobt expansion while 1803 - * at the same time not creating free space in the filesystem. 1795 + * We try to use a per-AG reservation for any block needed by the finobt 1796 + * tree, but as the finobt feature predates the per-AG reservation 1797 + * support a degraded file system might not have enough space for the 1798 + * reservation at mount time. In that case try to dip into the reserved 1799 + * pool and pray. 1804 1800 * 1805 1801 * Send a warning if the reservation does happen to fail, as the inode 1806 1802 * now remains allocated and sits on the unlinked list until the fs is 1807 1803 * repaired. 1808 1804 */ 1809 - error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ifree, 1810 - XFS_IFREE_SPACE_RES(mp), 0, XFS_TRANS_RESERVE, &tp); 1805 + if (unlikely(mp->m_inotbt_nores)) { 1806 + error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ifree, 1807 + XFS_IFREE_SPACE_RES(mp), 0, XFS_TRANS_RESERVE, 1808 + &tp); 1809 + } else { 1810 + error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ifree, 0, 0, 0, &tp); 1811 + } 1811 1812 if (error) { 1812 1813 if (error == -ENOSPC) { 1813 1814 xfs_warn_ratelimited(mp,
+1 -1
fs/xfs/xfs_iomap.c
··· 681 681 xfs_trans_t *tp; 682 682 int nimaps; 683 683 int error = 0; 684 - int flags = 0; 684 + int flags = XFS_BMAPI_DELALLOC; 685 685 int nres; 686 686 687 687 if (whichfork == XFS_COW_FORK)
+1
fs/xfs/xfs_mount.h
··· 140 140 int m_fixedfsid[2]; /* unchanged for life of FS */ 141 141 uint m_dmevmask; /* DMI events for this FS */ 142 142 __uint64_t m_flags; /* global mount flags */ 143 + bool m_inotbt_nores; /* no per-AG finobt resv. */ 143 144 int m_ialloc_inos; /* inodes in inode allocation */ 144 145 int m_ialloc_blks; /* blocks in inode allocation */ 145 146 int m_ialloc_min_blks;/* min blocks in sparse inode
+2 -1
fs/xfs/xfs_qm.c
··· 1177 1177 * the case in all other instances. It's OK that we do this because 1178 1178 * quotacheck is done only at mount time. 1179 1179 */ 1180 - error = xfs_iget(mp, NULL, ino, 0, XFS_ILOCK_EXCL, &ip); 1180 + error = xfs_iget(mp, NULL, ino, XFS_IGET_DONTCACHE, XFS_ILOCK_EXCL, 1181 + &ip); 1181 1182 if (error) { 1182 1183 *res = BULKSTAT_RV_NOTHING; 1183 1184 return error;
+1 -1
include/drm/drm_atomic.h
··· 144 144 struct drm_crtc *ptr; 145 145 struct drm_crtc_state *state; 146 146 struct drm_crtc_commit *commit; 147 - s64 __user *out_fence_ptr; 147 + s32 __user *out_fence_ptr; 148 148 }; 149 149 150 150 struct __drm_connnectors_state {
+1 -1
include/drm/drm_mode_config.h
··· 488 488 /** 489 489 * @prop_out_fence_ptr: Sync File fd pointer representing the 490 490 * outgoing fences for a CRTC. Userspace should provide a pointer to a 491 - * value of type s64, and then cast that pointer to u64. 491 + * value of type s32, and then cast that pointer to u64. 492 492 */ 493 493 struct drm_property *prop_out_fence_ptr; 494 494 /**
+2
include/linux/bpf.h
··· 247 247 void bpf_map_put_with_uref(struct bpf_map *map); 248 248 void bpf_map_put(struct bpf_map *map); 249 249 int bpf_map_precharge_memlock(u32 pages); 250 + void *bpf_map_area_alloc(size_t size); 251 + void bpf_map_area_free(void *base); 250 252 251 253 extern int sysctl_unprivileged_bpf_disabled; 252 254
+6 -2
include/linux/fsl_ifc.h
··· 733 733 __be32 nand_erattr1; 734 734 u32 res19[0x10]; 735 735 __be32 nand_fsr; 736 - u32 res20[0x3]; 737 - __be32 nand_eccstat[6]; 736 + u32 res20; 737 + /* The V1 nand_eccstat is actually 4 words that overlap the 738 + * V2 nand_eccstat. 739 + */ 740 + __be32 v1_nand_eccstat[2]; 741 + __be32 v2_nand_eccstat[6]; 738 742 u32 res21[0x1c]; 739 743 __be32 nanndcr; 740 744 u32 res22[0x2];
+52 -22
include/linux/gpio/driver.h
··· 274 274 struct irq_chip *irqchip, 275 275 int parent_irq); 276 276 277 - int _gpiochip_irqchip_add(struct gpio_chip *gpiochip, 278 - struct irq_chip *irqchip, 279 - unsigned int first_irq, 280 - irq_flow_handler_t handler, 281 - unsigned int type, 282 - bool nested, 283 - struct lock_class_key *lock_key); 277 + int gpiochip_irqchip_add_key(struct gpio_chip *gpiochip, 278 + struct irq_chip *irqchip, 279 + unsigned int first_irq, 280 + irq_flow_handler_t handler, 281 + unsigned int type, 282 + bool nested, 283 + struct lock_class_key *lock_key); 284 284 285 - /* FIXME: I assume threaded IRQchips do not have the lockdep problem */ 285 + #ifdef CONFIG_LOCKDEP 286 + 287 + /* 288 + * Lockdep requires that each irqchip instance be created with a 289 + * unique key so as to avoid unnecessary warnings. These upfront 290 + * boilerplate static inlines provide such a key for each 291 + * unique instance. 292 + */ 293 + static inline int gpiochip_irqchip_add(struct gpio_chip *gpiochip, 294 + struct irq_chip *irqchip, 295 + unsigned int first_irq, 296 + irq_flow_handler_t handler, 297 + unsigned int type) 298 + { 299 + static struct lock_class_key key; 300 + 301 + return gpiochip_irqchip_add_key(gpiochip, irqchip, first_irq, 302 + handler, type, false, &key); 303 + } 304 + 286 305 static inline int gpiochip_irqchip_add_nested(struct gpio_chip *gpiochip, 287 306 struct irq_chip *irqchip, 288 307 unsigned int first_irq, 289 308 irq_flow_handler_t handler, 290 309 unsigned int type) 291 310 { 292 - return _gpiochip_irqchip_add(gpiochip, irqchip, first_irq, 293 - handler, type, true, NULL); 311 + 312 + static struct lock_class_key key; 313 + 314 + return gpiochip_irqchip_add_key(gpiochip, irqchip, first_irq, 315 + handler, type, true, &key); 316 + } 317 + #else 318 + static inline int gpiochip_irqchip_add(struct gpio_chip *gpiochip, 319 + struct irq_chip *irqchip, 320 + unsigned int first_irq, 321 + irq_flow_handler_t handler, 322 + unsigned int type) 323 + { 324 + return 
gpiochip_irqchip_add_key(gpiochip, irqchip, first_irq, 325 + handler, type, false, NULL); 294 326 } 295 327 296 - #ifdef CONFIG_LOCKDEP 297 - #define gpiochip_irqchip_add(...) \ 298 - ( \ 299 - ({ \ 300 - static struct lock_class_key _key; \ 301 - _gpiochip_irqchip_add(__VA_ARGS__, false, &_key); \ 302 - }) \ 303 - ) 304 - #else 305 - #define gpiochip_irqchip_add(...) \ 306 - _gpiochip_irqchip_add(__VA_ARGS__, false, NULL) 307 - #endif 328 + static inline int gpiochip_irqchip_add_nested(struct gpio_chip *gpiochip, 329 + struct irq_chip *irqchip, 330 + unsigned int first_irq, 331 + irq_flow_handler_t handler, 332 + unsigned int type) 333 + { 334 + return gpiochip_irqchip_add_key(gpiochip, irqchip, first_irq, 335 + handler, type, true, NULL); 336 + } 337 + #endif /* CONFIG_LOCKDEP */ 308 338 309 339 #endif /* CONFIG_GPIOLIB_IRQCHIP */ 310 340
+2 -2
include/linux/memory_hotplug.h
··· 284 284 unsigned long map_offset); 285 285 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map, 286 286 unsigned long pnum); 287 - extern int zone_can_shift(unsigned long pfn, unsigned long nr_pages, 288 - enum zone_type target); 287 + extern bool zone_can_shift(unsigned long pfn, unsigned long nr_pages, 288 + enum zone_type target, int *zone_shift); 289 289 290 290 #endif /* __LINUX_MEMORY_HOTPLUG_H */
+2
include/linux/micrel_phy.h
··· 35 35 #define PHY_ID_KSZ886X 0x00221430 36 36 #define PHY_ID_KSZ8863 0x00221435 37 37 38 + #define PHY_ID_KSZ8795 0x00221550 39 + 38 40 /* struct phy_device dev_flags definitions */ 39 41 #define MICREL_PHY_50MHZ_CLK 0x00000001 40 42 #define MICREL_PHY_FXEN 0x00000002
+5 -1
include/linux/mmzone.h
··· 972 972 * @zonelist - The zonelist to search for a suitable zone 973 973 * @highest_zoneidx - The zone index of the highest zone to return 974 974 * @nodes - An optional nodemask to filter the zonelist with 975 - * @zone - The first suitable zone found is returned via this parameter 975 + * @return - Zoneref pointer for the first suitable zone found (see below) 976 976 * 977 977 * This function returns the first zone at or below a given zone index that is 978 978 * within the allowed nodemask. The zoneref returned is a cursor that can be 979 979 * used to iterate the zonelist with next_zones_zonelist by advancing it by 980 980 * one before calling. 981 + * 982 + * When no eligible zone is found, zoneref->zone is NULL (zoneref itself is 983 + * never NULL). This may happen either genuinely, or due to concurrent nodemask 984 + * update due to cpuset modification. 981 985 */ 982 986 static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist, 983 987 enum zone_type highest_zoneidx,
-156
include/linux/mtd/fsmc.h
··· 1 - /* 2 - * incude/mtd/fsmc.h 3 - * 4 - * ST Microelectronics 5 - * Flexible Static Memory Controller (FSMC) 6 - * platform data interface and header file 7 - * 8 - * Copyright © 2010 ST Microelectronics 9 - * Vipin Kumar <vipin.kumar@st.com> 10 - * 11 - * This file is licensed under the terms of the GNU General Public 12 - * License version 2. This program is licensed "as is" without any 13 - * warranty of any kind, whether express or implied. 14 - */ 15 - 16 - #ifndef __MTD_FSMC_H 17 - #define __MTD_FSMC_H 18 - 19 - #include <linux/io.h> 20 - #include <linux/platform_device.h> 21 - #include <linux/mtd/physmap.h> 22 - #include <linux/types.h> 23 - #include <linux/mtd/partitions.h> 24 - #include <asm/param.h> 25 - 26 - #define FSMC_NAND_BW8 1 27 - #define FSMC_NAND_BW16 2 28 - 29 - #define FSMC_MAX_NOR_BANKS 4 30 - #define FSMC_MAX_NAND_BANKS 4 31 - 32 - #define FSMC_FLASH_WIDTH8 1 33 - #define FSMC_FLASH_WIDTH16 2 34 - 35 - /* fsmc controller registers for NOR flash */ 36 - #define CTRL 0x0 37 - /* ctrl register definitions */ 38 - #define BANK_ENABLE (1 << 0) 39 - #define MUXED (1 << 1) 40 - #define NOR_DEV (2 << 2) 41 - #define WIDTH_8 (0 << 4) 42 - #define WIDTH_16 (1 << 4) 43 - #define RSTPWRDWN (1 << 6) 44 - #define WPROT (1 << 7) 45 - #define WRT_ENABLE (1 << 12) 46 - #define WAIT_ENB (1 << 13) 47 - 48 - #define CTRL_TIM 0x4 49 - /* ctrl_tim register definitions */ 50 - 51 - #define FSMC_NOR_BANK_SZ 0x8 52 - #define FSMC_NOR_REG_SIZE 0x40 53 - 54 - #define FSMC_NOR_REG(base, bank, reg) (base + \ 55 - FSMC_NOR_BANK_SZ * (bank) + \ 56 - reg) 57 - 58 - /* fsmc controller registers for NAND flash */ 59 - #define PC 0x00 60 - /* pc register definitions */ 61 - #define FSMC_RESET (1 << 0) 62 - #define FSMC_WAITON (1 << 1) 63 - #define FSMC_ENABLE (1 << 2) 64 - #define FSMC_DEVTYPE_NAND (1 << 3) 65 - #define FSMC_DEVWID_8 (0 << 4) 66 - #define FSMC_DEVWID_16 (1 << 4) 67 - #define FSMC_ECCEN (1 << 6) 68 - #define FSMC_ECCPLEN_512 (0 << 7) 69 - #define 
FSMC_ECCPLEN_256 (1 << 7) 70 - #define FSMC_TCLR_1 (1) 71 - #define FSMC_TCLR_SHIFT (9) 72 - #define FSMC_TCLR_MASK (0xF) 73 - #define FSMC_TAR_1 (1) 74 - #define FSMC_TAR_SHIFT (13) 75 - #define FSMC_TAR_MASK (0xF) 76 - #define STS 0x04 77 - /* sts register definitions */ 78 - #define FSMC_CODE_RDY (1 << 15) 79 - #define COMM 0x08 80 - /* comm register definitions */ 81 - #define FSMC_TSET_0 0 82 - #define FSMC_TSET_SHIFT 0 83 - #define FSMC_TSET_MASK 0xFF 84 - #define FSMC_TWAIT_6 6 85 - #define FSMC_TWAIT_SHIFT 8 86 - #define FSMC_TWAIT_MASK 0xFF 87 - #define FSMC_THOLD_4 4 88 - #define FSMC_THOLD_SHIFT 16 89 - #define FSMC_THOLD_MASK 0xFF 90 - #define FSMC_THIZ_1 1 91 - #define FSMC_THIZ_SHIFT 24 92 - #define FSMC_THIZ_MASK 0xFF 93 - #define ATTRIB 0x0C 94 - #define IOATA 0x10 95 - #define ECC1 0x14 96 - #define ECC2 0x18 97 - #define ECC3 0x1C 98 - #define FSMC_NAND_BANK_SZ 0x20 99 - 100 - #define FSMC_NAND_REG(base, bank, reg) (base + FSMC_NOR_REG_SIZE + \ 101 - (FSMC_NAND_BANK_SZ * (bank)) + \ 102 - reg) 103 - 104 - #define FSMC_BUSY_WAIT_TIMEOUT (1 * HZ) 105 - 106 - struct fsmc_nand_timings { 107 - uint8_t tclr; 108 - uint8_t tar; 109 - uint8_t thiz; 110 - uint8_t thold; 111 - uint8_t twait; 112 - uint8_t tset; 113 - }; 114 - 115 - enum access_mode { 116 - USE_DMA_ACCESS = 1, 117 - USE_WORD_ACCESS, 118 - }; 119 - 120 - /** 121 - * fsmc_nand_platform_data - platform specific NAND controller config 122 - * @nand_timings: timing setup for the physical NAND interface 123 - * @partitions: partition table for the platform, use a default fallback 124 - * if this is NULL 125 - * @nr_partitions: the number of partitions in the previous entry 126 - * @options: different options for the driver 127 - * @width: bus width 128 - * @bank: default bank 129 - * @select_bank: callback to select a certain bank, this is 130 - * platform-specific. 
If the controller only supports one bank 131 - * this may be set to NULL 132 - */ 133 - struct fsmc_nand_platform_data { 134 - struct fsmc_nand_timings *nand_timings; 135 - struct mtd_partition *partitions; 136 - unsigned int nr_partitions; 137 - unsigned int options; 138 - unsigned int width; 139 - unsigned int bank; 140 - 141 - enum access_mode mode; 142 - 143 - void (*select_bank)(uint32_t bank, uint32_t busw); 144 - 145 - /* priv structures for dma accesses */ 146 - void *read_dma_priv; 147 - void *write_dma_priv; 148 - }; 149 - 150 - extern int __init fsmc_nor_init(struct platform_device *pdev, 151 - unsigned long base, uint32_t bank, uint32_t width); 152 - extern void __init fsmc_init_board_info(struct platform_device *pdev, 153 - struct mtd_partition *partitions, unsigned int nr_partitions, 154 - unsigned int width); 155 - 156 - #endif /* __MTD_FSMC_H */
+3 -1
include/linux/mtd/nand.h
··· 615 615 * @tALS_min: ALE setup time 616 616 * @tAR_min: ALE to RE# delay 617 617 * @tCEA_max: CE# access time 618 - * @tCEH_min: 618 + * @tCEH_min: CE# high hold time 619 619 * @tCH_min: CE# hold time 620 620 * @tCHZ_max: CE# high to output hi-Z 621 621 * @tCLH_min: CLE hold time ··· 804 804 * @max_bb_per_die: [INTERN] the max number of bad blocks each die of a 805 805 * this nand device will encounter their life times. 806 806 * @blocks_per_die: [INTERN] The number of PEBs in a die 807 + * @data_interface: [INTERN] NAND interface timing information 807 808 * @read_retries: [INTERN] the number of read retry modes supported 808 809 * @onfi_set_features: [REPLACEABLE] set the features for ONFI nand 809 810 * @onfi_get_features: [REPLACEABLE] get the features for ONFI nand ··· 964 963 #define NAND_MFR_SANDISK 0x45 965 964 #define NAND_MFR_INTEL 0x89 966 965 #define NAND_MFR_ATO 0x9b 966 + #define NAND_MFR_WINBOND 0xef 967 967 968 968 /* The maximum expected count of bytes in the NAND ID sequence */ 969 969 #define NAND_MAX_ID_LEN 8
+2 -1
include/linux/nfs4.h
··· 282 282 283 283 static inline bool seqid_mutating_err(u32 err) 284 284 { 285 - /* rfc 3530 section 8.1.5: */ 285 + /* See RFC 7530, section 9.1.7 */ 286 286 switch (err) { 287 287 case NFS4ERR_STALE_CLIENTID: 288 288 case NFS4ERR_STALE_STATEID: ··· 291 291 case NFS4ERR_BADXDR: 292 292 case NFS4ERR_RESOURCE: 293 293 case NFS4ERR_NOFILEHANDLE: 294 + case NFS4ERR_MOVED: 294 295 return false; 295 296 }; 296 297 return true;
+1
include/linux/nmi.h
··· 110 110 extern int watchdog_thresh; 111 111 extern unsigned long watchdog_enabled; 112 112 extern unsigned long *watchdog_cpumask_bits; 113 + extern atomic_t watchdog_park_in_progress; 113 114 #ifdef CONFIG_SMP 114 115 extern int sysctl_softlockup_all_cpu_backtrace; 115 116 extern int sysctl_hardlockup_all_cpu_backtrace;
-1
include/linux/phy.h
··· 25 25 #include <linux/timer.h> 26 26 #include <linux/workqueue.h> 27 27 #include <linux/mod_devicetable.h> 28 - #include <linux/phy_led_triggers.h> 29 28 30 29 #include <linux/atomic.h> 31 30
+2 -2
include/linux/phy_led_triggers.h
··· 18 18 #ifdef CONFIG_LED_TRIGGER_PHY 19 19 20 20 #include <linux/leds.h> 21 + #include <linux/phy.h> 21 22 22 23 #define PHY_LED_TRIGGER_SPEED_SUFFIX_SIZE 10 23 - #define PHY_MII_BUS_ID_SIZE (20 - 3) 24 24 25 - #define PHY_LINK_LED_TRIGGER_NAME_SIZE (PHY_MII_BUS_ID_SIZE + \ 25 + #define PHY_LINK_LED_TRIGGER_NAME_SIZE (MII_BUS_ID_SIZE + \ 26 26 FIELD_SIZEOF(struct mdio_device, addr)+\ 27 27 PHY_LED_TRIGGER_SPEED_SUFFIX_SIZE) 28 28
+1
include/linux/sunrpc/clnt.h
··· 216 216 void rpc_clnt_xprt_switch_add_xprt(struct rpc_clnt *, struct rpc_xprt *); 217 217 bool rpc_clnt_xprt_switch_has_addr(struct rpc_clnt *clnt, 218 218 const struct sockaddr *sap); 219 + void rpc_cleanup_clids(void); 219 220 #endif /* __KERNEL__ */ 220 221 #endif /* _LINUX_SUNRPC_CLNT_H */
-2
include/linux/suspend.h
··· 194 194 }; 195 195 196 196 #ifdef CONFIG_SUSPEND 197 - extern suspend_state_t mem_sleep_default; 198 - 199 197 /** 200 198 * suspend_set_ops - set platform dependent suspend operations 201 199 * @ops: The new suspend operations to set.
+4 -2
include/linux/virtio_net.h
··· 56 56 57 57 static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb, 58 58 struct virtio_net_hdr *hdr, 59 - bool little_endian) 59 + bool little_endian, 60 + bool has_data_valid) 60 61 { 61 62 memset(hdr, 0, sizeof(*hdr)); /* no info leak */ 62 63 ··· 92 91 skb_checksum_start_offset(skb)); 93 92 hdr->csum_offset = __cpu_to_virtio16(little_endian, 94 93 skb->csum_offset); 95 - } else if (skb->ip_summed == CHECKSUM_UNNECESSARY) { 94 + } else if (has_data_valid && 95 + skb->ip_summed == CHECKSUM_UNNECESSARY) { 96 96 hdr->flags = VIRTIO_NET_HDR_F_DATA_VALID; 97 97 } /* else everything is zero */ 98 98
+1 -1
include/net/ipv6.h
··· 871 871 * upper-layer output functions 872 872 */ 873 873 int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6, 874 - struct ipv6_txoptions *opt, int tclass); 874 + __u32 mark, struct ipv6_txoptions *opt, int tclass); 875 875 876 876 int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr); 877 877
+13
include/net/lwtunnel.h
··· 44 44 int (*get_encap_size)(struct lwtunnel_state *lwtstate); 45 45 int (*cmp_encap)(struct lwtunnel_state *a, struct lwtunnel_state *b); 46 46 int (*xmit)(struct sk_buff *skb); 47 + 48 + struct module *owner; 47 49 }; 48 50 49 51 #ifdef CONFIG_LWTUNNEL ··· 107 105 unsigned int num); 108 106 int lwtunnel_encap_del_ops(const struct lwtunnel_encap_ops *op, 109 107 unsigned int num); 108 + int lwtunnel_valid_encap_type(u16 encap_type); 109 + int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int len); 110 110 int lwtunnel_build_state(struct net_device *dev, u16 encap_type, 111 111 struct nlattr *encap, 112 112 unsigned int family, const void *cfg, ··· 168 164 169 165 static inline int lwtunnel_encap_del_ops(const struct lwtunnel_encap_ops *op, 170 166 unsigned int num) 167 + { 168 + return -EOPNOTSUPP; 169 + } 170 + 171 + static inline int lwtunnel_valid_encap_type(u16 encap_type) 172 + { 173 + return -EOPNOTSUPP; 174 + } 175 + static inline int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int len) 171 176 { 172 177 return -EOPNOTSUPP; 173 178 }
+3 -3
include/net/netfilter/nf_tables.h
··· 207 207 unsigned int skip; 208 208 int err; 209 209 int (*fn)(const struct nft_ctx *ctx, 210 - const struct nft_set *set, 210 + struct nft_set *set, 211 211 const struct nft_set_iter *iter, 212 - const struct nft_set_elem *elem); 212 + struct nft_set_elem *elem); 213 213 }; 214 214 215 215 /** ··· 301 301 void (*remove)(const struct nft_set *set, 302 302 const struct nft_set_elem *elem); 303 303 void (*walk)(const struct nft_ctx *ctx, 304 - const struct nft_set *set, 304 + struct nft_set *set, 305 305 struct nft_set_iter *iter); 306 306 307 307 unsigned int (*privsize)(const struct nlattr * const nla[]);
+6
include/net/netfilter/nft_fib.h
··· 9 9 10 10 extern const struct nla_policy nft_fib_policy[]; 11 11 12 + static inline bool 13 + nft_fib_is_loopback(const struct sk_buff *skb, const struct net_device *in) 14 + { 15 + return skb->pkt_type == PACKET_LOOPBACK || in->flags & IFF_LOOPBACK; 16 + } 17 + 12 18 int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr); 13 19 int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, 14 20 const struct nlattr * const tb[]);
+14
include/rdma/ib_verbs.h
··· 352 352 } 353 353 } 354 354 355 + static inline enum ib_mtu ib_mtu_int_to_enum(int mtu) 356 + { 357 + if (mtu >= 4096) 358 + return IB_MTU_4096; 359 + else if (mtu >= 2048) 360 + return IB_MTU_2048; 361 + else if (mtu >= 1024) 362 + return IB_MTU_1024; 363 + else if (mtu >= 512) 364 + return IB_MTU_512; 365 + else 366 + return IB_MTU_256; 367 + } 368 + 355 369 enum ib_port_state { 356 370 IB_PORT_NOP = 0, 357 371 IB_PORT_DOWN = 1,
+8 -8
include/soc/arc/mcip.h
··· 55 55 56 56 struct mcip_bcr { 57 57 #ifdef CONFIG_CPU_BIG_ENDIAN 58 - unsigned int pad3:8, 59 - idu:1, llm:1, num_cores:6, 60 - iocoh:1, gfrc:1, dbg:1, pad2:1, 61 - msg:1, sem:1, ipi:1, pad:1, 58 + unsigned int pad4:6, pw_dom:1, pad3:1, 59 + idu:1, pad2:1, num_cores:6, 60 + pad:1, gfrc:1, dbg:1, pw:1, 61 + msg:1, sem:1, ipi:1, slv:1, 62 62 ver:8; 63 63 #else 64 64 unsigned int ver:8, 65 - pad:1, ipi:1, sem:1, msg:1, 66 - pad2:1, dbg:1, gfrc:1, iocoh:1, 67 - num_cores:6, llm:1, idu:1, 68 - pad3:8; 65 + slv:1, ipi:1, sem:1, msg:1, 66 + pw:1, dbg:1, gfrc:1, pad:1, 67 + num_cores:6, pad2:1, idu:1, 68 + pad3:1, pw_dom:1, pad4:6; 69 69 #endif 70 70 }; 71 71
+7 -3
include/uapi/linux/cec-funcs.h
··· 1665 1665 __u8 audio_out_compensated, 1666 1666 __u8 audio_out_delay) 1667 1667 { 1668 - msg->len = 7; 1668 + msg->len = 6; 1669 1669 msg->msg[0] |= 0xf; /* broadcast */ 1670 1670 msg->msg[1] = CEC_MSG_REPORT_CURRENT_LATENCY; 1671 1671 msg->msg[2] = phys_addr >> 8; 1672 1672 msg->msg[3] = phys_addr & 0xff; 1673 1673 msg->msg[4] = video_latency; 1674 1674 msg->msg[5] = (low_latency_mode << 2) | audio_out_compensated; 1675 - msg->msg[6] = audio_out_delay; 1675 + if (audio_out_compensated == 3) 1676 + msg->msg[msg->len++] = audio_out_delay; 1676 1677 } 1677 1678 1678 1679 static inline void cec_ops_report_current_latency(const struct cec_msg *msg, ··· 1687 1686 *video_latency = msg->msg[4]; 1688 1687 *low_latency_mode = (msg->msg[5] >> 2) & 1; 1689 1688 *audio_out_compensated = msg->msg[5] & 3; 1690 - *audio_out_delay = msg->msg[6]; 1689 + if (*audio_out_compensated == 3 && msg->len >= 7) 1690 + *audio_out_delay = msg->msg[6]; 1691 + else 1692 + *audio_out_delay = 0; 1691 1693 } 1692 1694 1693 1695 static inline void cec_msg_request_current_latency(struct cec_msg *msg,
+2
include/uapi/linux/netfilter/nf_log.h
··· 9 9 #define NF_LOG_MACDECODE 0x20 /* Decode MAC header */ 10 10 #define NF_LOG_MASK 0x2f 11 11 12 + #define NF_LOG_PREFIXLEN 128 13 + 12 14 #endif /* _NETFILTER_NF_LOG_H */
+2 -2
include/uapi/linux/netfilter/nf_tables.h
··· 235 235 /** 236 236 * enum nft_rule_compat_attributes - nf_tables rule compat attributes 237 237 * 238 - * @NFTA_RULE_COMPAT_PROTO: numerice value of handled protocol (NLA_U32) 238 + * @NFTA_RULE_COMPAT_PROTO: numeric value of handled protocol (NLA_U32) 239 239 * @NFTA_RULE_COMPAT_FLAGS: bitmask of enum nft_rule_compat_flags (NLA_U32) 240 240 */ 241 241 enum nft_rule_compat_attributes { ··· 499 499 * enum nft_byteorder_ops - nf_tables byteorder operators 500 500 * 501 501 * @NFT_BYTEORDER_NTOH: network to host operator 502 - * @NFT_BYTEORDER_HTON: host to network opertaor 502 + * @NFT_BYTEORDER_HTON: host to network operator 503 503 */ 504 504 enum nft_byteorder_ops { 505 505 NFT_BYTEORDER_NTOH,
+1
include/uapi/rdma/Kbuild
··· 16 16 header-y += ocrdma-abi.h 17 17 header-y += hns-abi.h 18 18 header-y += vmw_pvrdma-abi.h 19 + header-y += qedr-abi.h
+1 -1
include/uapi/rdma/cxgb3-abi.h
··· 30 30 * SOFTWARE. 31 31 */ 32 32 #ifndef CXGB3_ABI_USER_H 33 - #define CXBG3_ABI_USER_H 33 + #define CXGB3_ABI_USER_H 34 34 35 35 #include <linux/types.h> 36 36
+7 -11
kernel/bpf/arraymap.c
··· 11 11 */ 12 12 #include <linux/bpf.h> 13 13 #include <linux/err.h> 14 - #include <linux/vmalloc.h> 15 14 #include <linux/slab.h> 16 15 #include <linux/mm.h> 17 16 #include <linux/filter.h> ··· 73 74 if (array_size >= U32_MAX - PAGE_SIZE) 74 75 return ERR_PTR(-ENOMEM); 75 76 76 - 77 77 /* allocate all map elements and zero-initialize them */ 78 - array = kzalloc(array_size, GFP_USER | __GFP_NOWARN); 79 - if (!array) { 80 - array = vzalloc(array_size); 81 - if (!array) 82 - return ERR_PTR(-ENOMEM); 83 - } 78 + array = bpf_map_area_alloc(array_size); 79 + if (!array) 80 + return ERR_PTR(-ENOMEM); 84 81 85 82 /* copy mandatory map attributes */ 86 83 array->map.map_type = attr->map_type; ··· 92 97 93 98 if (array_size >= U32_MAX - PAGE_SIZE || 94 99 elem_size > PCPU_MIN_UNIT_SIZE || bpf_array_alloc_percpu(array)) { 95 - kvfree(array); 100 + bpf_map_area_free(array); 96 101 return ERR_PTR(-ENOMEM); 97 102 } 98 103 out: ··· 257 262 if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY) 258 263 bpf_array_free_percpu(array); 259 264 260 - kvfree(array); 265 + bpf_map_area_free(array); 261 266 } 262 267 263 268 static const struct bpf_map_ops array_ops = { ··· 314 319 /* make sure it's empty */ 315 320 for (i = 0; i < array->map.max_entries; i++) 316 321 BUG_ON(array->ptrs[i] != NULL); 317 - kvfree(array); 322 + 323 + bpf_map_area_free(array); 318 324 } 319 325 320 326 static void *fd_array_map_lookup_elem(struct bpf_map *map, void *key)
+9 -13
kernel/bpf/hashtab.c
··· 13 13 #include <linux/bpf.h> 14 14 #include <linux/jhash.h> 15 15 #include <linux/filter.h> 16 - #include <linux/vmalloc.h> 17 16 #include "percpu_freelist.h" 18 17 #include "bpf_lru_list.h" 19 18 ··· 102 103 free_percpu(pptr); 103 104 } 104 105 free_elems: 105 - vfree(htab->elems); 106 + bpf_map_area_free(htab->elems); 106 107 } 107 108 108 109 static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key, ··· 124 125 { 125 126 int err = -ENOMEM, i; 126 127 127 - htab->elems = vzalloc(htab->elem_size * htab->map.max_entries); 128 + htab->elems = bpf_map_area_alloc(htab->elem_size * 129 + htab->map.max_entries); 128 130 if (!htab->elems) 129 131 return -ENOMEM; 130 132 ··· 320 320 goto free_htab; 321 321 322 322 err = -ENOMEM; 323 - htab->buckets = kmalloc_array(htab->n_buckets, sizeof(struct bucket), 324 - GFP_USER | __GFP_NOWARN); 325 - 326 - if (!htab->buckets) { 327 - htab->buckets = vmalloc(htab->n_buckets * sizeof(struct bucket)); 328 - if (!htab->buckets) 329 - goto free_htab; 330 - } 323 + htab->buckets = bpf_map_area_alloc(htab->n_buckets * 324 + sizeof(struct bucket)); 325 + if (!htab->buckets) 326 + goto free_htab; 331 327 332 328 for (i = 0; i < htab->n_buckets; i++) { 333 329 INIT_HLIST_HEAD(&htab->buckets[i].head); ··· 350 354 free_extra_elems: 351 355 free_percpu(htab->extra_elems); 352 356 free_buckets: 353 - kvfree(htab->buckets); 357 + bpf_map_area_free(htab->buckets); 354 358 free_htab: 355 359 kfree(htab); 356 360 return ERR_PTR(err); ··· 1010 1014 prealloc_destroy(htab); 1011 1015 1012 1016 free_percpu(htab->extra_elems); 1013 - kvfree(htab->buckets); 1017 + bpf_map_area_free(htab->buckets); 1014 1018 kfree(htab); 1015 1019 } 1016 1020
+8 -12
kernel/bpf/stackmap.c
··· 7 7 #include <linux/bpf.h> 8 8 #include <linux/jhash.h> 9 9 #include <linux/filter.h> 10 - #include <linux/vmalloc.h> 11 10 #include <linux/stacktrace.h> 12 11 #include <linux/perf_event.h> 13 12 #include "percpu_freelist.h" ··· 31 32 u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size; 32 33 int err; 33 34 34 - smap->elems = vzalloc(elem_size * smap->map.max_entries); 35 + smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries); 35 36 if (!smap->elems) 36 37 return -ENOMEM; 37 38 ··· 44 45 return 0; 45 46 46 47 free_elems: 47 - vfree(smap->elems); 48 + bpf_map_area_free(smap->elems); 48 49 return err; 49 50 } 50 51 ··· 75 76 if (cost >= U32_MAX - PAGE_SIZE) 76 77 return ERR_PTR(-E2BIG); 77 78 78 - smap = kzalloc(cost, GFP_USER | __GFP_NOWARN); 79 - if (!smap) { 80 - smap = vzalloc(cost); 81 - if (!smap) 82 - return ERR_PTR(-ENOMEM); 83 - } 79 + smap = bpf_map_area_alloc(cost); 80 + if (!smap) 81 + return ERR_PTR(-ENOMEM); 84 82 85 83 err = -E2BIG; 86 84 cost += n_buckets * (value_size + sizeof(struct stack_map_bucket)); ··· 108 112 put_buffers: 109 113 put_callchain_buffers(); 110 114 free_smap: 111 - kvfree(smap); 115 + bpf_map_area_free(smap); 112 116 return ERR_PTR(err); 113 117 } 114 118 ··· 258 262 /* wait for bpf programs to complete before freeing stack map */ 259 263 synchronize_rcu(); 260 264 261 - vfree(smap->elems); 265 + bpf_map_area_free(smap->elems); 262 266 pcpu_freelist_destroy(&smap->freelist); 263 - kvfree(smap); 267 + bpf_map_area_free(smap); 264 268 put_callchain_buffers(); 265 269 } 266 270
+26
kernel/bpf/syscall.c
··· 12 12 #include <linux/bpf.h> 13 13 #include <linux/syscalls.h> 14 14 #include <linux/slab.h> 15 + #include <linux/vmalloc.h> 16 + #include <linux/mmzone.h> 15 17 #include <linux/anon_inodes.h> 16 18 #include <linux/file.h> 17 19 #include <linux/license.h> ··· 49 47 void bpf_register_map_type(struct bpf_map_type_list *tl) 50 48 { 51 49 list_add(&tl->list_node, &bpf_map_types); 50 + } 51 + 52 + void *bpf_map_area_alloc(size_t size) 53 + { 54 + /* We definitely need __GFP_NORETRY, so OOM killer doesn't 55 + * trigger under memory pressure as we really just want to 56 + * fail instead. 57 + */ 58 + const gfp_t flags = __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO; 59 + void *area; 60 + 61 + if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) { 62 + area = kmalloc(size, GFP_USER | flags); 63 + if (area != NULL) 64 + return area; 65 + } 66 + 67 + return __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM | flags, 68 + PAGE_KERNEL); 69 + } 70 + 71 + void bpf_map_area_free(void *area) 72 + { 73 + kvfree(area); 52 74 } 53 75 54 76 int bpf_map_precharge_memlock(u32 pages)
+1 -1
kernel/panic.c
··· 249 249 * Delay timeout seconds before rebooting the machine. 250 250 * We can't use the "normal" timers since we just panicked. 251 251 */ 252 - pr_emerg("Rebooting in %d seconds..", panic_timeout); 252 + pr_emerg("Rebooting in %d seconds..\n", panic_timeout); 253 253 254 254 for (i = 0; i < panic_timeout * 1000; i += PANIC_TIMER_STEP) { 255 255 touch_nmi_watchdog();
+2 -2
kernel/power/suspend.c
··· 46 46 const char *mem_sleep_states[PM_SUSPEND_MAX]; 47 47 48 48 suspend_state_t mem_sleep_current = PM_SUSPEND_FREEZE; 49 - suspend_state_t mem_sleep_default = PM_SUSPEND_MAX; 49 + static suspend_state_t mem_sleep_default = PM_SUSPEND_MEM; 50 50 51 51 unsigned int pm_suspend_global_flags; 52 52 EXPORT_SYMBOL_GPL(pm_suspend_global_flags); ··· 168 168 } 169 169 if (valid_state(PM_SUSPEND_MEM)) { 170 170 mem_sleep_states[PM_SUSPEND_MEM] = mem_sleep_labels[PM_SUSPEND_MEM]; 171 - if (mem_sleep_default >= PM_SUSPEND_MEM) 171 + if (mem_sleep_default == PM_SUSPEND_MEM) 172 172 mem_sleep_current = PM_SUSPEND_MEM; 173 173 } 174 174
+1
kernel/sysctl.c
··· 2475 2475 break; 2476 2476 if (neg) 2477 2477 continue; 2478 + val = convmul * val / convdiv; 2478 2479 if ((min && val < *min) || (max && val > *max)) 2479 2480 continue; 2480 2481 *i = val;
+8 -6
kernel/ucount.c
··· 128 128 struct hlist_head *hashent = ucounts_hashentry(ns, uid); 129 129 struct ucounts *ucounts, *new; 130 130 131 - spin_lock(&ucounts_lock); 131 + spin_lock_irq(&ucounts_lock); 132 132 ucounts = find_ucounts(ns, uid, hashent); 133 133 if (!ucounts) { 134 - spin_unlock(&ucounts_lock); 134 + spin_unlock_irq(&ucounts_lock); 135 135 136 136 new = kzalloc(sizeof(*new), GFP_KERNEL); 137 137 if (!new) ··· 141 141 new->uid = uid; 142 142 atomic_set(&new->count, 0); 143 143 144 - spin_lock(&ucounts_lock); 144 + spin_lock_irq(&ucounts_lock); 145 145 ucounts = find_ucounts(ns, uid, hashent); 146 146 if (ucounts) { 147 147 kfree(new); ··· 152 152 } 153 153 if (!atomic_add_unless(&ucounts->count, 1, INT_MAX)) 154 154 ucounts = NULL; 155 - spin_unlock(&ucounts_lock); 155 + spin_unlock_irq(&ucounts_lock); 156 156 return ucounts; 157 157 } 158 158 159 159 static void put_ucounts(struct ucounts *ucounts) 160 160 { 161 + unsigned long flags; 162 + 161 163 if (atomic_dec_and_test(&ucounts->count)) { 162 - spin_lock(&ucounts_lock); 164 + spin_lock_irqsave(&ucounts_lock, flags); 163 165 hlist_del_init(&ucounts->node); 164 - spin_unlock(&ucounts_lock); 166 + spin_unlock_irqrestore(&ucounts_lock, flags); 165 167 166 168 kfree(ucounts); 167 169 }
+9
kernel/watchdog.c
··· 49 49 #define for_each_watchdog_cpu(cpu) \ 50 50 for_each_cpu_and((cpu), cpu_online_mask, &watchdog_cpumask) 51 51 52 + atomic_t watchdog_park_in_progress = ATOMIC_INIT(0); 53 + 52 54 /* 53 55 * The 'watchdog_running' variable is set to 1 when the watchdog threads 54 56 * are registered/started and is set to 0 when the watchdog threads are ··· 262 260 int duration; 263 261 int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace; 264 262 263 + if (atomic_read(&watchdog_park_in_progress) != 0) 264 + return HRTIMER_NORESTART; 265 + 265 266 /* kick the hardlockup detector */ 266 267 watchdog_interrupt_count(); 267 268 ··· 472 467 { 473 468 int cpu, ret = 0; 474 469 470 + atomic_set(&watchdog_park_in_progress, 1); 471 + 475 472 for_each_watchdog_cpu(cpu) { 476 473 ret = kthread_park(per_cpu(softlockup_watchdog, cpu)); 477 474 if (ret) 478 475 break; 479 476 } 477 + 478 + atomic_set(&watchdog_park_in_progress, 0); 480 479 481 480 return ret; 482 481 }
+3
kernel/watchdog_hld.c
··· 84 84 /* Ensure the watchdog never gets throttled */ 85 85 event->hw.interrupts = 0; 86 86 87 + if (atomic_read(&watchdog_park_in_progress) != 0) 88 + return; 89 + 87 90 if (__this_cpu_read(watchdog_nmi_touch) == true) { 88 91 __this_cpu_write(watchdog_nmi_touch, false); 89 92 return;
-1
lib/ioremap.c
··· 144 144 145 145 return err; 146 146 } 147 - EXPORT_SYMBOL_GPL(ioremap_page_range);
+1 -1
lib/radix-tree.c
··· 769 769 struct radix_tree_node *old = child; 770 770 offset = child->offset + 1; 771 771 child = child->parent; 772 - WARN_ON_ONCE(!list_empty(&node->private_list)); 772 + WARN_ON_ONCE(!list_empty(&old->private_list)); 773 773 radix_tree_node_free(old); 774 774 if (old == entry_to_node(node)) 775 775 return;
+17 -1
mm/huge_memory.c
··· 783 783 784 784 assert_spin_locked(pmd_lockptr(mm, pmd)); 785 785 786 + /* 787 + * When we COW a devmap PMD entry, we split it into PTEs, so we should 788 + * not be in this function with `flags & FOLL_COW` set. 789 + */ 790 + WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set"); 791 + 786 792 if (flags & FOLL_WRITE && !pmd_write(*pmd)) 787 793 return NULL; 788 794 ··· 1134 1128 return ret; 1135 1129 } 1136 1130 1131 + /* 1132 + * FOLL_FORCE can write to even unwritable pmd's, but only 1133 + * after we've gone through a COW cycle and they are dirty. 1134 + */ 1135 + static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags) 1136 + { 1137 + return pmd_write(pmd) || 1138 + ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd)); 1139 + } 1140 + 1137 1141 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, 1138 1142 unsigned long addr, 1139 1143 pmd_t *pmd, ··· 1154 1138 1155 1139 assert_spin_locked(pmd_lockptr(mm, pmd)); 1156 1140 1157 - if (flags & FOLL_WRITE && !pmd_write(*pmd)) 1141 + if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags)) 1158 1142 goto out; 1159 1143 1160 1144 /* Avoid dumping huge zero page */
+2 -2
mm/memcontrol.c
··· 4353 4353 return ret; 4354 4354 } 4355 4355 4356 - /* Try charges one by one with reclaim */ 4356 + /* Try charges one by one with reclaim, but do not retry */ 4357 4357 while (count--) { 4358 - ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_NORETRY, 1); 4358 + ret = try_charge(mc.to, GFP_KERNEL | __GFP_NORETRY, 1); 4359 4359 if (ret) 4360 4360 return ret; 4361 4361 mc.precharge++;
+17 -11
mm/memory_hotplug.c
··· 1033 1033 node_set_state(node, N_MEMORY); 1034 1034 } 1035 1035 1036 - int zone_can_shift(unsigned long pfn, unsigned long nr_pages, 1037 - enum zone_type target) 1036 + bool zone_can_shift(unsigned long pfn, unsigned long nr_pages, 1037 + enum zone_type target, int *zone_shift) 1038 1038 { 1039 1039 struct zone *zone = page_zone(pfn_to_page(pfn)); 1040 1040 enum zone_type idx = zone_idx(zone); 1041 1041 int i; 1042 1042 1043 + *zone_shift = 0; 1044 + 1043 1045 if (idx < target) { 1044 1046 /* pages must be at end of current zone */ 1045 1047 if (pfn + nr_pages != zone_end_pfn(zone)) 1046 - return 0; 1048 + return false; 1047 1049 1048 1050 /* no zones in use between current zone and target */ 1049 1051 for (i = idx + 1; i < target; i++) 1050 1052 if (zone_is_initialized(zone - idx + i)) 1051 - return 0; 1053 + return false; 1052 1054 } 1053 1055 1054 1056 if (target < idx) { 1055 1057 /* pages must be at beginning of current zone */ 1056 1058 if (pfn != zone->zone_start_pfn) 1057 - return 0; 1059 + return false; 1058 1060 1059 1061 /* no zones in use between current zone and target */ 1060 1062 for (i = target + 1; i < idx; i++) 1061 1063 if (zone_is_initialized(zone - idx + i)) 1062 - return 0; 1064 + return false; 1063 1065 } 1064 1066 1065 - return target - idx; 1067 + *zone_shift = target - idx; 1068 + return true; 1066 1069 } 1067 1070 1068 1071 /* Must be protected by mem_hotplug_begin() */ ··· 1092 1089 !can_online_high_movable(zone)) 1093 1090 return -EINVAL; 1094 1091 1095 - if (online_type == MMOP_ONLINE_KERNEL) 1096 - zone_shift = zone_can_shift(pfn, nr_pages, ZONE_NORMAL); 1097 - else if (online_type == MMOP_ONLINE_MOVABLE) 1098 - zone_shift = zone_can_shift(pfn, nr_pages, ZONE_MOVABLE); 1092 + if (online_type == MMOP_ONLINE_KERNEL) { 1093 + if (!zone_can_shift(pfn, nr_pages, ZONE_NORMAL, &zone_shift)) 1094 + return -EINVAL; 1095 + } else if (online_type == MMOP_ONLINE_MOVABLE) { 1096 + if (!zone_can_shift(pfn, nr_pages, ZONE_MOVABLE, &zone_shift)) 
1097 + return -EINVAL; 1098 + } 1099 1099 1100 1100 zone = move_pfn_range(zone_shift, pfn, pfn + nr_pages); 1101 1101 if (!zone)
+1 -1
mm/mempolicy.c
··· 2017 2017 2018 2018 nmask = policy_nodemask(gfp, pol); 2019 2019 zl = policy_zonelist(gfp, pol, node); 2020 - mpol_cond_put(pol); 2021 2020 page = __alloc_pages_nodemask(gfp, order, zl, nmask); 2021 + mpol_cond_put(pol); 2022 2022 out: 2023 2023 if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie))) 2024 2024 goto retry_cpuset;
+48 -21
mm/page_alloc.c
··· 3523 3523 struct page *page = NULL; 3524 3524 unsigned int alloc_flags; 3525 3525 unsigned long did_some_progress; 3526 - enum compact_priority compact_priority = DEF_COMPACT_PRIORITY; 3526 + enum compact_priority compact_priority; 3527 3527 enum compact_result compact_result; 3528 - int compaction_retries = 0; 3529 - int no_progress_loops = 0; 3528 + int compaction_retries; 3529 + int no_progress_loops; 3530 3530 unsigned long alloc_start = jiffies; 3531 3531 unsigned int stall_timeout = 10 * HZ; 3532 + unsigned int cpuset_mems_cookie; 3532 3533 3533 3534 /* 3534 3535 * In the slowpath, we sanity check order to avoid ever trying to ··· 3549 3548 if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) == 3550 3549 (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM))) 3551 3550 gfp_mask &= ~__GFP_ATOMIC; 3551 + 3552 + retry_cpuset: 3553 + compaction_retries = 0; 3554 + no_progress_loops = 0; 3555 + compact_priority = DEF_COMPACT_PRIORITY; 3556 + cpuset_mems_cookie = read_mems_allowed_begin(); 3557 + /* 3558 + * We need to recalculate the starting point for the zonelist iterator 3559 + * because we might have used different nodemask in the fast path, or 3560 + * there was a cpuset modification and we are retrying - otherwise we 3561 + * could end up iterating over non-eligible zones endlessly. 3562 + */ 3563 + ac->preferred_zoneref = first_zones_zonelist(ac->zonelist, 3564 + ac->high_zoneidx, ac->nodemask); 3565 + if (!ac->preferred_zoneref->zone) 3566 + goto nopage; 3567 + 3552 3568 3553 3569 /* 3554 3570 * The fast path uses conservative alloc_flags to succeed only until ··· 3726 3708 &compaction_retries)) 3727 3709 goto retry; 3728 3710 3711 + /* 3712 + * It's possible we raced with cpuset update so the OOM would be 3713 + * premature (see below the nopage: label for full explanation). 
3714 + */ 3715 + if (read_mems_allowed_retry(cpuset_mems_cookie)) 3716 + goto retry_cpuset; 3717 + 3729 3718 /* Reclaim has failed us, start killing things */ 3730 3719 page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress); 3731 3720 if (page) ··· 3745 3720 } 3746 3721 3747 3722 nopage: 3723 + /* 3724 + * When updating a task's mems_allowed or mempolicy nodemask, it is 3725 + * possible to race with parallel threads in such a way that our 3726 + * allocation can fail while the mask is being updated. If we are about 3727 + * to fail, check if the cpuset changed during allocation and if so, 3728 + * retry. 3729 + */ 3730 + if (read_mems_allowed_retry(cpuset_mems_cookie)) 3731 + goto retry_cpuset; 3732 + 3748 3733 warn_alloc(gfp_mask, 3749 3734 "page allocation failure: order:%u", order); 3750 3735 got_pg: ··· 3769 3734 struct zonelist *zonelist, nodemask_t *nodemask) 3770 3735 { 3771 3736 struct page *page; 3772 - unsigned int cpuset_mems_cookie; 3773 3737 unsigned int alloc_flags = ALLOC_WMARK_LOW; 3774 3738 gfp_t alloc_mask = gfp_mask; /* The gfp_t that was actually used for allocation */ 3775 3739 struct alloc_context ac = { ··· 3805 3771 if (IS_ENABLED(CONFIG_CMA) && ac.migratetype == MIGRATE_MOVABLE) 3806 3772 alloc_flags |= ALLOC_CMA; 3807 3773 3808 - retry_cpuset: 3809 - cpuset_mems_cookie = read_mems_allowed_begin(); 3810 - 3811 3774 /* Dirty zone balancing only done in the fast path */ 3812 3775 ac.spread_dirty_pages = (gfp_mask & __GFP_WRITE); 3813 3776 ··· 3815 3784 */ 3816 3785 ac.preferred_zoneref = first_zones_zonelist(ac.zonelist, 3817 3786 ac.high_zoneidx, ac.nodemask); 3818 - if (!ac.preferred_zoneref) { 3787 + if (!ac.preferred_zoneref->zone) { 3819 3788 page = NULL; 3789 + /* 3790 + * This might be due to race with cpuset_current_mems_allowed 3791 + * update, so make sure we retry with original nodemask in the 3792 + * slow path. 
3793 + */ 3820 3794 goto no_zone; 3821 3795 } 3822 3796 ··· 3830 3794 if (likely(page)) 3831 3795 goto out; 3832 3796 3797 + no_zone: 3833 3798 /* 3834 3799 * Runtime PM, block IO and its error handling path can deadlock 3835 3800 * because I/O on the device might not complete. ··· 3842 3805 * Restore the original nodemask if it was potentially replaced with 3843 3806 * &cpuset_current_mems_allowed to optimize the fast-path attempt. 3844 3807 */ 3845 - if (cpusets_enabled()) 3808 + if (unlikely(ac.nodemask != nodemask)) 3846 3809 ac.nodemask = nodemask; 3847 - page = __alloc_pages_slowpath(alloc_mask, order, &ac); 3848 3810 3849 - no_zone: 3850 - /* 3851 - * When updating a task's mems_allowed, it is possible to race with 3852 - * parallel threads in such a way that an allocation can fail while 3853 - * the mask is being updated. If a page allocation is about to fail, 3854 - * check if the cpuset changed during allocation and if so, retry. 3855 - */ 3856 - if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie))) { 3857 - alloc_mask = gfp_mask; 3858 - goto retry_cpuset; 3859 - } 3811 + page = __alloc_pages_slowpath(alloc_mask, order, &ac); 3860 3812 3861 3813 out: 3862 3814 if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page && ··· 7274 7248 .zone = page_zone(pfn_to_page(start)), 7275 7249 .mode = MIGRATE_SYNC, 7276 7250 .ignore_skip_hint = true, 7251 + .gfp_mask = GFP_KERNEL, 7277 7252 }; 7278 7253 INIT_LIST_HEAD(&cc.migratepages); 7279 7254
+13 -10
mm/slub.c
··· 496 496 return 1; 497 497 } 498 498 499 - static void print_section(char *text, u8 *addr, unsigned int length) 499 + static void print_section(char *level, char *text, u8 *addr, 500 + unsigned int length) 500 501 { 501 502 metadata_access_enable(); 502 - print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr, 503 + print_hex_dump(level, text, DUMP_PREFIX_ADDRESS, 16, 1, addr, 503 504 length, 1); 504 505 metadata_access_disable(); 505 506 } ··· 637 636 p, p - addr, get_freepointer(s, p)); 638 637 639 638 if (s->flags & SLAB_RED_ZONE) 640 - print_section("Redzone ", p - s->red_left_pad, s->red_left_pad); 639 + print_section(KERN_ERR, "Redzone ", p - s->red_left_pad, 640 + s->red_left_pad); 641 641 else if (p > addr + 16) 642 - print_section("Bytes b4 ", p - 16, 16); 642 + print_section(KERN_ERR, "Bytes b4 ", p - 16, 16); 643 643 644 - print_section("Object ", p, min_t(unsigned long, s->object_size, 645 - PAGE_SIZE)); 644 + print_section(KERN_ERR, "Object ", p, 645 + min_t(unsigned long, s->object_size, PAGE_SIZE)); 646 646 if (s->flags & SLAB_RED_ZONE) 647 - print_section("Redzone ", p + s->object_size, 647 + print_section(KERN_ERR, "Redzone ", p + s->object_size, 648 648 s->inuse - s->object_size); 649 649 650 650 if (s->offset) ··· 660 658 661 659 if (off != size_from_object(s)) 662 660 /* Beginning of the filler is the free pointer */ 663 - print_section("Padding ", p + off, size_from_object(s) - off); 661 + print_section(KERN_ERR, "Padding ", p + off, 662 + size_from_object(s) - off); 664 663 665 664 dump_stack(); 666 665 } ··· 823 820 end--; 824 821 825 822 slab_err(s, page, "Padding overwritten. 
0x%p-0x%p", fault, end - 1); 826 - print_section("Padding ", end - remainder, remainder); 823 + print_section(KERN_ERR, "Padding ", end - remainder, remainder); 827 824 828 825 restore_bytes(s, "slab padding", POISON_INUSE, end - remainder, end); 829 826 return 0; ··· 976 973 page->freelist); 977 974 978 975 if (!alloc) 979 - print_section("Object ", (void *)object, 976 + print_section(KERN_INFO, "Object ", (void *)object, 980 977 s->object_size); 981 978 982 979 dump_stack();
+5 -5
net/batman-adv/fragmentation.c
··· 474 474 primary_if = batadv_primary_if_get_selected(bat_priv); 475 475 if (!primary_if) { 476 476 ret = -EINVAL; 477 - goto put_primary_if; 477 + goto free_skb; 478 478 } 479 479 480 480 /* Create one header to be copied to all fragments */ ··· 502 502 skb_fragment = batadv_frag_create(skb, &frag_header, mtu); 503 503 if (!skb_fragment) { 504 504 ret = -ENOMEM; 505 - goto free_skb; 505 + goto put_primary_if; 506 506 } 507 507 508 508 batadv_inc_counter(bat_priv, BATADV_CNT_FRAG_TX); ··· 511 511 ret = batadv_send_unicast_skb(skb_fragment, neigh_node); 512 512 if (ret != NET_XMIT_SUCCESS) { 513 513 ret = NET_XMIT_DROP; 514 - goto free_skb; 514 + goto put_primary_if; 515 515 } 516 516 517 517 frag_header.no++; ··· 519 519 /* The initial check in this function should cover this case */ 520 520 if (frag_header.no == BATADV_FRAG_MAX_FRAGMENTS - 1) { 521 521 ret = -EINVAL; 522 - goto free_skb; 522 + goto put_primary_if; 523 523 } 524 524 } 525 525 ··· 527 527 if (batadv_skb_head_push(skb, header_size) < 0 || 528 528 pskb_expand_head(skb, header_size + ETH_HLEN, 0, GFP_ATOMIC) < 0) { 529 529 ret = -ENOMEM; 530 - goto free_skb; 530 + goto put_primary_if; 531 531 } 532 532 533 533 memcpy(skb->data, &frag_header, header_size);
+19 -14
net/bridge/br_netlink.c
··· 781 781 return 0; 782 782 } 783 783 784 - static int br_dev_newlink(struct net *src_net, struct net_device *dev, 785 - struct nlattr *tb[], struct nlattr *data[]) 786 - { 787 - struct net_bridge *br = netdev_priv(dev); 788 - 789 - if (tb[IFLA_ADDRESS]) { 790 - spin_lock_bh(&br->lock); 791 - br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS])); 792 - spin_unlock_bh(&br->lock); 793 - } 794 - 795 - return register_netdevice(dev); 796 - } 797 - 798 784 static int br_port_slave_changelink(struct net_device *brdev, 799 785 struct net_device *dev, 800 786 struct nlattr *tb[], ··· 1099 1113 #endif 1100 1114 1101 1115 return 0; 1116 + } 1117 + 1118 + static int br_dev_newlink(struct net *src_net, struct net_device *dev, 1119 + struct nlattr *tb[], struct nlattr *data[]) 1120 + { 1121 + struct net_bridge *br = netdev_priv(dev); 1122 + int err; 1123 + 1124 + if (tb[IFLA_ADDRESS]) { 1125 + spin_lock_bh(&br->lock); 1126 + br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS])); 1127 + spin_unlock_bh(&br->lock); 1128 + } 1129 + 1130 + err = br_changelink(dev, tb, data); 1131 + if (err) 1132 + return err; 1133 + 1134 + return register_netdevice(dev); 1102 1135 } 1103 1136 1104 1137 static size_t br_get_size(const struct net_device *brdev)
+2 -2
net/core/dev.c
··· 2795 2795 if (skb->ip_summed != CHECKSUM_NONE && 2796 2796 !can_checksum_protocol(features, type)) { 2797 2797 features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 2798 - } else if (illegal_highdma(skb->dev, skb)) { 2799 - features &= ~NETIF_F_SG; 2800 2798 } 2799 + if (illegal_highdma(skb->dev, skb)) 2800 + features &= ~NETIF_F_SG; 2801 2801 2802 2802 return features; 2803 2803 }
+1 -1
net/core/ethtool.c
··· 1712 1712 static noinline_for_stack int ethtool_set_channels(struct net_device *dev, 1713 1713 void __user *useraddr) 1714 1714 { 1715 - struct ethtool_channels channels, max; 1715 + struct ethtool_channels channels, max = { .cmd = ETHTOOL_GCHANNELS }; 1716 1716 u32 max_rx_in_use = 0; 1717 1717 1718 1718 if (!dev->ethtool_ops->set_channels || !dev->ethtool_ops->get_channels)
+1
net/core/lwt_bpf.c
··· 386 386 .fill_encap = bpf_fill_encap_info, 387 387 .get_encap_size = bpf_encap_nlsize, 388 388 .cmp_encap = bpf_encap_cmp, 389 + .owner = THIS_MODULE, 389 390 }; 390 391 391 392 static int __init bpf_lwt_init(void)
+67 -13
net/core/lwtunnel.c
··· 26 26 #include <net/lwtunnel.h> 27 27 #include <net/rtnetlink.h> 28 28 #include <net/ip6_fib.h> 29 + #include <net/nexthop.h> 29 30 30 31 #ifdef CONFIG_MODULES 31 32 ··· 115 114 ret = -EOPNOTSUPP; 116 115 rcu_read_lock(); 117 116 ops = rcu_dereference(lwtun_encaps[encap_type]); 118 - #ifdef CONFIG_MODULES 119 - if (!ops) { 120 - const char *encap_type_str = lwtunnel_encap_str(encap_type); 121 - 122 - if (encap_type_str) { 123 - rcu_read_unlock(); 124 - request_module("rtnl-lwt-%s", encap_type_str); 125 - rcu_read_lock(); 126 - ops = rcu_dereference(lwtun_encaps[encap_type]); 127 - } 128 - } 129 - #endif 130 - if (likely(ops && ops->build_state)) 117 + if (likely(ops && ops->build_state && try_module_get(ops->owner))) { 131 118 ret = ops->build_state(dev, encap, family, cfg, lws); 119 + if (ret) 120 + module_put(ops->owner); 121 + } 132 122 rcu_read_unlock(); 133 123 134 124 return ret; 135 125 } 136 126 EXPORT_SYMBOL(lwtunnel_build_state); 127 + 128 + int lwtunnel_valid_encap_type(u16 encap_type) 129 + { 130 + const struct lwtunnel_encap_ops *ops; 131 + int ret = -EINVAL; 132 + 133 + if (encap_type == LWTUNNEL_ENCAP_NONE || 134 + encap_type > LWTUNNEL_ENCAP_MAX) 135 + return ret; 136 + 137 + rcu_read_lock(); 138 + ops = rcu_dereference(lwtun_encaps[encap_type]); 139 + rcu_read_unlock(); 140 + #ifdef CONFIG_MODULES 141 + if (!ops) { 142 + const char *encap_type_str = lwtunnel_encap_str(encap_type); 143 + 144 + if (encap_type_str) { 145 + __rtnl_unlock(); 146 + request_module("rtnl-lwt-%s", encap_type_str); 147 + rtnl_lock(); 148 + 149 + rcu_read_lock(); 150 + ops = rcu_dereference(lwtun_encaps[encap_type]); 151 + rcu_read_unlock(); 152 + } 153 + } 154 + #endif 155 + return ops ? 0 : -EOPNOTSUPP; 156 + } 157 + EXPORT_SYMBOL(lwtunnel_valid_encap_type); 158 + 159 + int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int remaining) 160 + { 161 + struct rtnexthop *rtnh = (struct rtnexthop *)attr; 162 + struct nlattr *nla_entype; 163 + struct nlattr *attrs; 164 + struct nlattr *nla; 165 + u16 encap_type; 166 + int attrlen; 167 + 168 + while (rtnh_ok(rtnh, remaining)) { 169 + attrlen = rtnh_attrlen(rtnh); 170 + if (attrlen > 0) { 171 + attrs = rtnh_attrs(rtnh); 172 + nla = nla_find(attrs, attrlen, RTA_ENCAP); 173 + nla_entype = nla_find(attrs, attrlen, RTA_ENCAP_TYPE); 174 + 175 + if (nla_entype) { 176 + encap_type = nla_get_u16(nla_entype); 177 + 178 + if (lwtunnel_valid_encap_type(encap_type) != 0) 179 + return -EOPNOTSUPP; 180 + } 181 + } 182 + rtnh = rtnh_next(rtnh, &remaining); 183 + } 184 + 185 + return 0; 186 + } 187 + EXPORT_SYMBOL(lwtunnel_valid_encap_type_attr); 137 188 138 189 void lwtstate_free(struct lwtunnel_state *lws) 139 190 { ··· 197 144 } else { 198 145 kfree(lws); 199 146 } 147 + module_put(ops->owner); 200 148 } 201 149 EXPORT_SYMBOL(lwtstate_free); 202 150
+2 -2
net/dccp/ipv6.c
··· 227 227 opt = ireq->ipv6_opt; 228 228 if (!opt) 229 229 opt = rcu_dereference(np->opt); 230 - err = ip6_xmit(sk, skb, &fl6, opt, np->tclass); 230 + err = ip6_xmit(sk, skb, &fl6, sk->sk_mark, opt, np->tclass); 231 231 rcu_read_unlock(); 232 232 err = net_xmit_eval(err); 233 233 } ··· 281 281 dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL); 282 282 if (!IS_ERR(dst)) { 283 283 skb_dst_set(skb, dst); 284 - ip6_xmit(ctl_sk, skb, &fl6, NULL, 0); 284 + ip6_xmit(ctl_sk, skb, &fl6, 0, NULL, 0); 285 285 DCCP_INC_STATS(DCCP_MIB_OUTSEGS); 286 286 DCCP_INC_STATS(DCCP_MIB_OUTRSTS); 287 287 return;
+4 -4
net/dsa/slave.c
··· 1105 1105 /* Use already configured phy mode */ 1106 1106 if (p->phy_interface == PHY_INTERFACE_MODE_NA) 1107 1107 p->phy_interface = p->phy->interface; 1108 - phy_connect_direct(slave_dev, p->phy, dsa_slave_adjust_link, 1109 - p->phy_interface); 1110 - 1111 - return 0; 1108 + return phy_connect_direct(slave_dev, p->phy, dsa_slave_adjust_link, 1109 + p->phy_interface); 1112 1110 } 1113 1111 1114 1112 static int dsa_slave_phy_setup(struct dsa_slave_priv *p, ··· 1200 1202 int dsa_slave_suspend(struct net_device *slave_dev) 1201 1203 { 1202 1204 struct dsa_slave_priv *p = netdev_priv(slave_dev); 1205 + 1206 + netif_device_detach(slave_dev); 1203 1207 1204 1208 if (p->phy) { 1205 1209 phy_stop(p->phy);
+8
net/ipv4/fib_frontend.c
··· 46 46 #include <net/rtnetlink.h> 47 47 #include <net/xfrm.h> 48 48 #include <net/l3mdev.h> 49 + #include <net/lwtunnel.h> 49 50 #include <trace/events/fib.h> 50 51 51 52 #ifndef CONFIG_IP_MULTIPLE_TABLES ··· 678 677 cfg->fc_mx_len = nla_len(attr); 679 678 break; 680 679 case RTA_MULTIPATH: 680 + err = lwtunnel_valid_encap_type_attr(nla_data(attr), 681 + nla_len(attr)); 682 + if (err < 0) 683 + goto errout; 681 684 cfg->fc_mp = nla_data(attr); 682 685 cfg->fc_mp_len = nla_len(attr); 683 686 break; ··· 696 691 break; 697 692 case RTA_ENCAP_TYPE: 698 693 cfg->fc_encap_type = nla_get_u16(attr); 694 + err = lwtunnel_valid_encap_type(cfg->fc_encap_type); 695 + if (err < 0) 696 + goto errout; 699 697 break; 700 698 } 701 699 }
+1
net/ipv4/ip_output.c
··· 1629 1629 sk->sk_protocol = ip_hdr(skb)->protocol; 1630 1630 sk->sk_bound_dev_if = arg->bound_dev_if; 1631 1631 sk->sk_sndbuf = sysctl_wmem_default; 1632 + sk->sk_mark = fl4.flowi4_mark; 1632 1633 err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, 1633 1634 len, 0, &ipc, &rt, MSG_DONTWAIT); 1634 1635 if (unlikely(err)) {
+2
net/ipv4/ip_tunnel_core.c
··· 313 313 .fill_encap = ip_tun_fill_encap_info, 314 314 .get_encap_size = ip_tun_encap_nlsize, 315 315 .cmp_encap = ip_tun_cmp_encap, 316 + .owner = THIS_MODULE, 316 317 }; 317 318 318 319 static const struct nla_policy ip6_tun_policy[LWTUNNEL_IP6_MAX + 1] = { ··· 404 403 .fill_encap = ip6_tun_fill_encap_info, 405 404 .get_encap_size = ip6_tun_encap_nlsize, 406 405 .cmp_encap = ip_tun_cmp_encap, 406 + .owner = THIS_MODULE, 407 407 }; 408 408 409 409 void __init ip_tunnel_core_init(void)
+6 -1
net/ipv4/netfilter/ipt_CLUSTERIP.c
··· 144 144 rcu_read_lock_bh(); 145 145 c = __clusterip_config_find(net, clusterip); 146 146 if (c) { 147 - if (!c->pde || unlikely(!atomic_inc_not_zero(&c->refcount))) 147 + #ifdef CONFIG_PROC_FS 148 + if (!c->pde) 149 + c = NULL; 150 + else 151 + #endif 152 + if (unlikely(!atomic_inc_not_zero(&c->refcount))) 148 153 c = NULL; 149 154 else if (entry) 150 155 atomic_inc(&c->entries);
+4 -4
net/ipv4/netfilter/ipt_rpfilter.c
··· 63 63 return dev_match || flags & XT_RPFILTER_LOOSE; 64 64 } 65 65 66 - static bool rpfilter_is_local(const struct sk_buff *skb) 66 + static bool 67 + rpfilter_is_loopback(const struct sk_buff *skb, const struct net_device *in) 67 68 { 68 - const struct rtable *rt = skb_rtable(skb); 69 - return rt && (rt->rt_flags & RTCF_LOCAL); 69 + return skb->pkt_type == PACKET_LOOPBACK || in->flags & IFF_LOOPBACK; 70 70 } 71 71 72 72 static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) ··· 79 79 info = par->matchinfo; 80 80 invert = info->flags & XT_RPFILTER_INVERT; 81 81 82 - if (rpfilter_is_local(skb)) 82 + if (rpfilter_is_loopback(skb, xt_in(par))) 83 83 return true ^ invert; 84 84 85 85 iph = ip_hdr(skb);
+2
net/ipv4/netfilter/nf_reject_ipv4.c
··· 126 126 /* ip_route_me_harder expects skb->dst to be set */ 127 127 skb_dst_set_noref(nskb, skb_dst(oldskb)); 128 128 129 + nskb->mark = IP4_REPLY_MARK(net, oldskb->mark); 130 + 129 131 skb_reserve(nskb, LL_MAX_HEADER); 130 132 niph = nf_reject_iphdr_put(nskb, oldskb, IPPROTO_TCP, 131 133 ip4_dst_hoplimit(skb_dst(nskb)));
+5 -10
net/ipv4/netfilter/nft_fib_ipv4.c
··· 26 26 return addr; 27 27 } 28 28 29 - static bool fib4_is_local(const struct sk_buff *skb) 30 - { 31 - const struct rtable *rt = skb_rtable(skb); 32 - 33 - return rt && (rt->rt_flags & RTCF_LOCAL); 34 - } 35 - 36 29 #define DSCP_BITS 0xfc 37 30 38 31 void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs, ··· 88 95 else 89 96 oif = NULL; 90 97 91 - if (nft_hook(pkt) == NF_INET_PRE_ROUTING && fib4_is_local(pkt->skb)) { 92 - nft_fib_store_result(dest, priv->result, pkt, LOOPBACK_IFINDEX); 98 + if (nft_hook(pkt) == NF_INET_PRE_ROUTING && 99 + nft_fib_is_loopback(pkt->skb, nft_in(pkt))) { 100 + nft_fib_store_result(dest, priv->result, pkt, 101 + nft_in(pkt)->ifindex); 93 102 return; 94 103 } 95 104 ··· 126 131 switch (res.type) { 127 132 case RTN_UNICAST: 128 133 break; 129 - case RTN_LOCAL: /* should not appear here, see fib4_is_local() above */ 134 + case RTN_LOCAL: /* Should not see RTN_LOCAL here */ 130 135 return; 131 136 default: 132 137 break;
+1
net/ipv4/tcp_fastopen.c
··· 205 205 * scaled. So correct it appropriately. 206 206 */ 207 207 tp->snd_wnd = ntohs(tcp_hdr(skb)->window); 208 + tp->max_window = tp->snd_wnd; 208 209 209 210 /* Activate the retrans timer so that SYNACK can be retransmitted. 210 211 * The request socket is not added to the ehash
+1 -1
net/ipv4/tcp_input.c
··· 5078 5078 if (sock_flag(sk, SOCK_QUEUE_SHRUNK)) { 5079 5079 sock_reset_flag(sk, SOCK_QUEUE_SHRUNK); 5080 5080 /* pairs with tcp_poll() */ 5081 - smp_mb__after_atomic(); 5081 + smp_mb(); 5082 5082 if (sk->sk_socket && 5083 5083 test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) { 5084 5084 tcp_new_space(sk);
+1 -3
net/ipv6/addrconf.c
··· 5540 5540 struct net_device *dev; 5541 5541 struct inet6_dev *idev; 5542 5542 5543 - rcu_read_lock(); 5544 - for_each_netdev_rcu(net, dev) { 5543 + for_each_netdev(net, dev) { 5545 5544 idev = __in6_dev_get(dev); 5546 5545 if (idev) { 5547 5546 int changed = (!idev->cnf.disable_ipv6) ^ (!newf); ··· 5549 5550 dev_disable_change(idev); 5550 5551 } 5551 5552 } 5552 - rcu_read_unlock(); 5553 5553 } 5554 5554 5555 5555 static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf)
+1
net/ipv6/ila/ila_lwt.c
··· 238 238 .fill_encap = ila_fill_encap_info, 239 239 .get_encap_size = ila_encap_nlsize, 240 240 .cmp_encap = ila_encap_cmp, 241 + .owner = THIS_MODULE, 241 242 }; 242 243 243 244 int ila_lwt_init(void)
+1 -1
net/ipv6/inet6_connection_sock.c
··· 176 176 /* Restore final destination back after routing done */ 177 177 fl6.daddr = sk->sk_v6_daddr; 178 178 179 - res = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt), 179 + res = ip6_xmit(sk, skb, &fl6, sk->sk_mark, rcu_dereference(np->opt), 180 180 np->tclass); 181 181 rcu_read_unlock(); 182 182 return res;
+3
net/ipv6/ip6_gre.c
··· 582 582 return -1; 583 583 584 584 offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb)); 585 + /* ip6_tnl_parse_tlv_enc_lim() might have reallocated skb->head */ 586 + ipv6h = ipv6_hdr(skb); 587 + 585 588 if (offset > 0) { 586 589 struct ipv6_tlv_tnl_enc_lim *tel; 587 590 tel = (struct ipv6_tlv_tnl_enc_lim *)&skb_network_header(skb)[offset];
+2 -2
net/ipv6/ip6_output.c
··· 172 172 * which are using proper atomic operations or spinlocks. 173 173 */ 174 174 int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6, 175 - struct ipv6_txoptions *opt, int tclass) 175 + __u32 mark, struct ipv6_txoptions *opt, int tclass) 176 176 { 177 177 struct net *net = sock_net(sk); 178 178 const struct ipv6_pinfo *np = inet6_sk(sk); ··· 240 240 241 241 skb->protocol = htons(ETH_P_IPV6); 242 242 skb->priority = sk->sk_priority; 243 - skb->mark = sk->sk_mark; 243 + skb->mark = mark; 244 244 245 245 mtu = dst_mtu(dst); 246 246 if ((skb->len <= mtu) || skb->ignore_df || skb_is_gso(skb)) {
+24 -12
net/ipv6/ip6_tunnel.c
··· 400 400 401 401 __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw) 402 402 { 403 - const struct ipv6hdr *ipv6h = (const struct ipv6hdr *) raw; 404 - __u8 nexthdr = ipv6h->nexthdr; 405 - __u16 off = sizeof(*ipv6h); 403 + const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)raw; 404 + unsigned int nhoff = raw - skb->data; 405 + unsigned int off = nhoff + sizeof(*ipv6h); 406 + u8 next, nexthdr = ipv6h->nexthdr; 406 407 407 408 while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) { 408 - __u16 optlen = 0; 409 409 struct ipv6_opt_hdr *hdr; 410 - if (raw + off + sizeof(*hdr) > skb->data && 411 - !pskb_may_pull(skb, raw - skb->data + off + sizeof (*hdr))) 410 + u16 optlen; 411 + 412 + if (!pskb_may_pull(skb, off + sizeof(*hdr))) 412 413 break; 413 414 414 - hdr = (struct ipv6_opt_hdr *) (raw + off); 415 + hdr = (struct ipv6_opt_hdr *)(skb->data + off); 415 416 if (nexthdr == NEXTHDR_FRAGMENT) { 416 417 struct frag_hdr *frag_hdr = (struct frag_hdr *) hdr; 417 418 if (frag_hdr->frag_off) ··· 423 422 } else { 424 423 optlen = ipv6_optlen(hdr); 425 424 } 425 + /* cache hdr->nexthdr, since pskb_may_pull() might 426 + * invalidate hdr 427 + */ 428 + next = hdr->nexthdr; 426 429 if (nexthdr == NEXTHDR_DEST) { 427 - __u16 i = off + 2; 430 + u16 i = 2; 431 + 432 + /* Remember : hdr is no longer valid at this point. */ 433 + if (!pskb_may_pull(skb, off + optlen)) 434 + break; 435 + 428 436 while (1) { 429 437 struct ipv6_tlv_tnl_enc_lim *tel; 430 438 431 439 /* No more room for encapsulation limit */ 432 - if (i + sizeof (*tel) > off + optlen) 440 + if (i + sizeof(*tel) > optlen) 433 441 break; 434 442 435 - tel = (struct ipv6_tlv_tnl_enc_lim *) &raw[i]; 443 + tel = (struct ipv6_tlv_tnl_enc_lim *) skb->data + off + i; 436 444 /* return index of option if found and valid */ 437 445 if (tel->type == IPV6_TLV_TNL_ENCAP_LIMIT && 438 446 tel->length == 1) 439 - return i; 447 + return i + off - nhoff; 440 448 /* else jump to next option */ 441 449 if (tel->type) 442 450 i += tel->length + 2; ··· 453 443 i++; 454 444 } 455 445 } 456 - nexthdr = hdr->nexthdr; 446 + nexthdr = next; 457 447 off += optlen; 458 448 } 459 449 return 0; ··· 1313 1303 fl6.flowlabel = key->label; 1314 1304 } else { 1315 1305 offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb)); 1306 + /* ip6_tnl_parse_tlv_enc_lim() might have reallocated skb->head */ 1307 + ipv6h = ipv6_hdr(skb); 1316 1308 if (offset > 0) { 1317 1309 struct ipv6_tlv_tnl_enc_lim *tel; 1318 1310
+4 -4
net/ipv6/netfilter/ip6t_rpfilter.c
··· 72 72 return ret; 73 73 } 74 74 75 - static bool rpfilter_is_local(const struct sk_buff *skb) 75 + static bool 76 + rpfilter_is_loopback(const struct sk_buff *skb, const struct net_device *in) 76 77 { 77 - const struct rt6_info *rt = (const void *) skb_dst(skb); 78 - return rt && (rt->rt6i_flags & RTF_LOCAL); 78 + return skb->pkt_type == PACKET_LOOPBACK || in->flags & IFF_LOOPBACK; 79 79 } 80 80 81 81 static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) ··· 85 85 struct ipv6hdr *iph; 86 86 bool invert = info->flags & XT_RPFILTER_INVERT; 87 87 88 - if (rpfilter_is_local(skb)) 88 + if (rpfilter_is_loopback(skb, xt_in(par))) 89 89 return true ^ invert; 90 90 91 91 iph = ipv6_hdr(skb);
+3
net/ipv6/netfilter/nf_reject_ipv6.c
··· 157 157 fl6.fl6_sport = otcph->dest; 158 158 fl6.fl6_dport = otcph->source; 159 159 fl6.flowi6_oif = l3mdev_master_ifindex(skb_dst(oldskb)->dev); 160 + fl6.flowi6_mark = IP6_REPLY_MARK(net, oldskb->mark); 160 161 security_skb_classify_flow(oldskb, flowi6_to_flowi(&fl6)); 161 162 dst = ip6_route_output(net, NULL, &fl6); 162 163 if (dst->error) { ··· 180 179 } 181 180 182 181 skb_dst_set(nskb, dst); 182 + 183 + nskb->mark = fl6.flowi6_mark; 183 184 184 185 skb_reserve(nskb, hh_len + dst->header_len); 185 186 ip6h = nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP,
+4 -9
net/ipv6/netfilter/nft_fib_ipv6.c
··· 18 18 #include <net/ip6_fib.h> 19 19 #include <net/ip6_route.h> 20 20 21 - static bool fib6_is_local(const struct sk_buff *skb) 22 - { 23 - const struct rt6_info *rt = (const void *)skb_dst(skb); 24 - 25 - return rt && (rt->rt6i_flags & RTF_LOCAL); 26 - } 27 - 28 21 static int get_ifindex(const struct net_device *dev) 29 22 { 30 23 return dev ? dev->ifindex : 0; ··· 157 164 158 165 lookup_flags = nft_fib6_flowi_init(&fl6, priv, pkt, oif); 159 166 160 - if (nft_hook(pkt) == NF_INET_PRE_ROUTING && fib6_is_local(pkt->skb)) { 161 - nft_fib_store_result(dest, priv->result, pkt, LOOPBACK_IFINDEX); 167 + if (nft_hook(pkt) == NF_INET_PRE_ROUTING && 168 + nft_fib_is_loopback(pkt->skb, nft_in(pkt))) { 169 + nft_fib_store_result(dest, priv->result, pkt, 170 + nft_in(pkt)->ifindex); 162 171 return; 163 172 } 164 173
+11 -1
net/ipv6/route.c
··· 2896 2896 if (tb[RTA_MULTIPATH]) { 2897 2897 cfg->fc_mp = nla_data(tb[RTA_MULTIPATH]); 2898 2898 cfg->fc_mp_len = nla_len(tb[RTA_MULTIPATH]); 2899 + 2900 + err = lwtunnel_valid_encap_type_attr(cfg->fc_mp, 2901 + cfg->fc_mp_len); 2902 + if (err < 0) 2903 + goto errout; 2899 2904 } 2900 2905 2901 2906 if (tb[RTA_PREF]) { ··· 2914 2909 if (tb[RTA_ENCAP]) 2915 2910 cfg->fc_encap = tb[RTA_ENCAP]; 2916 2911 2917 - if (tb[RTA_ENCAP_TYPE]) 2912 + if (tb[RTA_ENCAP_TYPE]) { 2918 2913 cfg->fc_encap_type = nla_get_u16(tb[RTA_ENCAP_TYPE]); 2914 + 2915 + err = lwtunnel_valid_encap_type(cfg->fc_encap_type); 2916 + if (err < 0) 2917 + goto errout; 2918 + } 2919 2919 2920 2920 if (tb[RTA_EXPIRES]) { 2921 2921 unsigned long timeout = addrconf_timeout_fixup(nla_get_u32(tb[RTA_EXPIRES]), HZ);
+2
net/ipv6/seg6.c
··· 176 176 177 177 val = nla_data(info->attrs[SEG6_ATTR_DST]); 178 178 t_new = kmemdup(val, sizeof(*val), GFP_KERNEL); 179 + if (!t_new) 180 + return -ENOMEM; 179 181 180 182 mutex_lock(&sdata->lock); 181 183
+1
net/ipv6/seg6_iptunnel.c
··· 422 422 .fill_encap = seg6_fill_encap_info, 423 423 .get_encap_size = seg6_encap_nlsize, 424 424 .cmp_encap = seg6_encap_cmp, 425 + .owner = THIS_MODULE, 425 426 }; 426 427 427 428 int __init seg6_iptunnel_init(void)
+2 -2
net/ipv6/tcp_ipv6.c
··· 469 469 opt = ireq->ipv6_opt; 470 470 if (!opt) 471 471 opt = rcu_dereference(np->opt); 472 - err = ip6_xmit(sk, skb, fl6, opt, np->tclass); 472 + err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt, np->tclass); 473 473 rcu_read_unlock(); 474 474 err = net_xmit_eval(err); 475 475 } ··· 840 840 dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL); 841 841 if (!IS_ERR(dst)) { 842 842 skb_dst_set(buff, dst); 843 - ip6_xmit(ctl_sk, buff, &fl6, NULL, tclass); 843 + ip6_xmit(ctl_sk, buff, &fl6, fl6.flowi6_mark, NULL, tclass); 844 844 TCP_INC_STATS(net, TCP_MIB_OUTSEGS); 845 845 if (rst) 846 846 TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
-2
net/mac80211/rate.c
··· 40 40 41 41 ieee80211_sta_set_rx_nss(sta); 42 42 43 - ieee80211_recalc_min_chandef(sta->sdata); 44 - 45 43 if (!ref) 46 44 return; 47 45
+25 -23
net/mpls/af_mpls.c
··· 98 98 } 99 99 EXPORT_SYMBOL_GPL(mpls_pkt_too_big); 100 100 101 - static u32 mpls_multipath_hash(struct mpls_route *rt, 102 - struct sk_buff *skb, bool bos) 101 + static u32 mpls_multipath_hash(struct mpls_route *rt, struct sk_buff *skb) 103 102 { 104 103 struct mpls_entry_decoded dec; 104 + unsigned int mpls_hdr_len = 0; 105 105 struct mpls_shim_hdr *hdr; 106 106 bool eli_seen = false; 107 107 int label_index; 108 108 u32 hash = 0; 109 109 110 - for (label_index = 0; label_index < MAX_MP_SELECT_LABELS && !bos; 110 + for (label_index = 0; label_index < MAX_MP_SELECT_LABELS; 111 111 label_index++) { 112 - if (!pskb_may_pull(skb, sizeof(*hdr) * label_index)) 112 + mpls_hdr_len += sizeof(*hdr); 113 + if (!pskb_may_pull(skb, mpls_hdr_len)) 113 114 break; 114 115 115 116 /* Read and decode the current label */ ··· 135 134 eli_seen = true; 136 135 } 137 136 138 - bos = dec.bos; 139 - if (bos && pskb_may_pull(skb, sizeof(*hdr) * label_index + 140 - sizeof(struct iphdr))) { 137 + if (!dec.bos) 138 + continue; 139 + 140 + /* found bottom label; does skb have room for a header? */ 141 + if (pskb_may_pull(skb, mpls_hdr_len + sizeof(struct iphdr))) { 141 142 const struct iphdr *v4hdr; 142 143 143 - v4hdr = (const struct iphdr *)(mpls_hdr(skb) + 144 - label_index); 144 + v4hdr = (const struct iphdr *)(hdr + 1); 145 145 if (v4hdr->version == 4) { 146 146 hash = jhash_3words(ntohl(v4hdr->saddr), 147 147 ntohl(v4hdr->daddr), 148 148 v4hdr->protocol, hash); 149 149 } else if (v4hdr->version == 6 && 150 - pskb_may_pull(skb, sizeof(*hdr) * label_index + 151 - sizeof(struct ipv6hdr))) { 150 + pskb_may_pull(skb, mpls_hdr_len + 151 + sizeof(struct ipv6hdr))) { 152 152 const struct ipv6hdr *v6hdr; 153 153 154 - v6hdr = (const struct ipv6hdr *)(mpls_hdr(skb) + 155 - label_index); 156 - 154 + v6hdr = (const struct ipv6hdr *)(hdr + 1); 157 155 hash = __ipv6_addr_jhash(&v6hdr->saddr, hash); 158 156 hash = __ipv6_addr_jhash(&v6hdr->daddr, hash); 159 157 hash = jhash_1word(v6hdr->nexthdr, hash); 160 158 } 161 159 } 160 + 161 + break; 162 162 } 163 163 164 164 return hash; 165 165 } 166 166 167 167 static struct mpls_nh *mpls_select_multipath(struct mpls_route *rt, 168 - struct sk_buff *skb, bool bos) 168 + struct sk_buff *skb) 169 169 { 170 170 int alive = ACCESS_ONCE(rt->rt_nhn_alive); 171 171 u32 hash = 0; ··· 182 180 if (alive <= 0) 183 181 return NULL; 184 182 185 - hash = mpls_multipath_hash(rt, skb, bos); 183 + hash = mpls_multipath_hash(rt, skb); 186 184 nh_index = hash % alive; 187 185 if (alive == rt->rt_nhn) 188 186 goto out; ··· 280 278 hdr = mpls_hdr(skb); 281 279 dec = mpls_entry_decode(hdr); 282 280 283 - /* Pop the label */ 284 - skb_pull(skb, sizeof(*hdr)); 285 - skb_reset_network_header(skb); 286 - 287 - skb_orphan(skb); 288 - 289 281 rt = mpls_route_input_rcu(net, dec.label); 290 282 if (!rt) 291 283 goto drop; 292 284 293 - nh = mpls_select_multipath(rt, skb, dec.bos); 285 + nh = mpls_select_multipath(rt, skb); 294 286 if (!nh) 295 287 goto drop; 296 288 ··· 292 296 out_dev = rcu_dereference(nh->nh_dev); 293 297 if (!mpls_output_possible(out_dev)) 294 298 goto drop; 299 + 300 + /* Pop the label */ 301 + skb_pull(skb, sizeof(*hdr)); 302 + skb_reset_network_header(skb); 303 + 304 + skb_orphan(skb); 295 305 296 306 if (skb_warn_if_lro(skb)) 297 307 goto drop;
+1
net/mpls/mpls_iptunnel.c
··· 215 215 .fill_encap = mpls_fill_encap_info, 216 216 .get_encap_size = mpls_encap_nlsize, 217 217 .cmp_encap = mpls_encap_cmp, 218 + .owner = THIS_MODULE, 218 219 }; 219 220 220 221 static int __init mpls_iptunnel_init(void)
+1 -1
net/netfilter/Kconfig
··· 494 494 depends on NF_CONNTRACK 495 495 tristate "Netfilter nf_tables conntrack module" 496 496 help 497 - This option adds the "meta" expression that you can use to match 497 + This option adds the "ct" expression that you can use to match 498 498 connection tracking information such as the flow state. 499 499 500 500 config NFT_SET_RBTREE
+21 -23
net/netfilter/nf_conntrack_core.c
··· 85 85 static __read_mostly bool nf_conntrack_locks_all; 86 86 87 87 /* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */ 88 - #define GC_MAX_BUCKETS_DIV 64u 89 - /* upper bound of scan intervals */ 90 - #define GC_INTERVAL_MAX (2 * HZ) 91 - /* maximum conntracks to evict per gc run */ 92 - #define GC_MAX_EVICTS 256u 88 + #define GC_MAX_BUCKETS_DIV 128u 89 + /* upper bound of full table scan */ 90 + #define GC_MAX_SCAN_JIFFIES (16u * HZ) 91 + /* desired ratio of entries found to be expired */ 92 + #define GC_EVICT_RATIO 50u 93 93 94 94 static struct conntrack_gc_work conntrack_gc_work; 95 95 ··· 938 938 939 939 static void gc_worker(struct work_struct *work) 940 940 { 941 + unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u); 941 942 unsigned int i, goal, buckets = 0, expired_count = 0; 942 943 struct conntrack_gc_work *gc_work; 943 944 unsigned int ratio, scanned = 0; ··· 980 979 */ 981 980 rcu_read_unlock(); 982 981 cond_resched_rcu_qs(); 983 - } while (++buckets < goal && 984 - expired_count < GC_MAX_EVICTS); 982 + } while (++buckets < goal); 985 983 986 984 if (gc_work->exiting) 987 985 return; ··· 997 997 * 1. Minimize time until we notice a stale entry 998 998 * 2. Maximize scan intervals to not waste cycles 999 999 * 1000 - * Normally, expired_count will be 0, this increases the next_run time 1001 - * to priorize 2) above. 1000 + * Normally, expire ratio will be close to 0. 1002 1001 * 1003 - * As soon as a timed-out entry is found, move towards 1) and increase 1004 - * the scan frequency. 1005 - * In case we have lots of evictions next scan is done immediately. 1002 + * As soon as a sizeable fraction of the entries have expired 1003 + * increase scan frequency. 1006 1004 */ 1007 1005 ratio = scanned ? expired_count * 100 / scanned : 0; 1008 - if (ratio >= 90 || expired_count == GC_MAX_EVICTS) { 1009 - gc_work->next_gc_run = 0; 1010 - next_run = 0; 1011 - } else if (expired_count) { 1012 - gc_work->next_gc_run /= 2U; 1013 - next_run = msecs_to_jiffies(1); 1006 + if (ratio > GC_EVICT_RATIO) { 1007 + gc_work->next_gc_run = min_interval; 1014 1008 } else { 1015 - if (gc_work->next_gc_run < GC_INTERVAL_MAX) 1016 - gc_work->next_gc_run += msecs_to_jiffies(1); 1009 + unsigned int max = GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV; 1017 1010 1018 - next_run = gc_work->next_gc_run; 1011 + BUILD_BUG_ON((GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV) == 0); 1012 + 1013 + gc_work->next_gc_run += min_interval; 1014 + if (gc_work->next_gc_run > max) 1015 + gc_work->next_gc_run = max; 1019 1016 } 1020 1017 1018 + next_run = gc_work->next_gc_run; 1021 1019 gc_work->last_bucket = i; 1022 1020 queue_delayed_work(system_long_wq, &gc_work->dwork, next_run); 1023 1021 } ··· 1023 1025 static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work) 1024 1026 { 1025 1027 INIT_DELAYED_WORK(&gc_work->dwork, gc_worker); 1026 - gc_work->next_gc_run = GC_INTERVAL_MAX; 1028 + gc_work->next_gc_run = HZ; 1027 1029 gc_work->exiting = false; 1028 1030 } 1029 1031 ··· 1915 1917 nf_ct_untracked_status_or(IPS_CONFIRMED | IPS_UNTRACKED); 1916 1918 1917 1919 conntrack_gc_work_init(&conntrack_gc_work); 1918 - queue_delayed_work(system_long_wq, &conntrack_gc_work.dwork, GC_INTERVAL_MAX); 1920 + queue_delayed_work(system_long_wq, &conntrack_gc_work.dwork, HZ); 1919 1921 1920 1922 return 0; 1921 1923
-1
net/netfilter/nf_log.c
··· 13 13 /* Internal logging interface, which relies on the real 14 14 LOG target modules */ 15 15 16 - #define NF_LOG_PREFIXLEN 128 17 16 #define NFLOGGER_NAME_LEN 64 18 17 19 18 static struct nf_logger __rcu *loggers[NFPROTO_NUMPROTO][NF_LOG_TYPE_MAX] __read_mostly;
+39 -28
net/netfilter/nf_tables_api.c
··· 928 928 } 929 929 930 930 static const struct nla_policy nft_chain_policy[NFTA_CHAIN_MAX + 1] = { 931 - [NFTA_CHAIN_TABLE] = { .type = NLA_STRING }, 931 + [NFTA_CHAIN_TABLE] = { .type = NLA_STRING, 932 + .len = NFT_TABLE_MAXNAMELEN - 1 }, 932 933 [NFTA_CHAIN_HANDLE] = { .type = NLA_U64 }, 933 934 [NFTA_CHAIN_NAME] = { .type = NLA_STRING, 934 935 .len = NFT_CHAIN_MAXNAMELEN - 1 }, ··· 1855 1854 } 1856 1855 1857 1856 static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = { 1858 - [NFTA_RULE_TABLE] = { .type = NLA_STRING }, 1857 + [NFTA_RULE_TABLE] = { .type = NLA_STRING, 1858 + .len = NFT_TABLE_MAXNAMELEN - 1 }, 1859 1859 [NFTA_RULE_CHAIN] = { .type = NLA_STRING, 1860 1860 .len = NFT_CHAIN_MAXNAMELEN - 1 }, 1861 1861 [NFTA_RULE_HANDLE] = { .type = NLA_U64 }, ··· 2445 2443 } 2446 2444 2447 2445 static const struct nla_policy nft_set_policy[NFTA_SET_MAX + 1] = { 2448 - [NFTA_SET_TABLE] = { .type = NLA_STRING }, 2446 + [NFTA_SET_TABLE] = { .type = NLA_STRING, 2447 + .len = NFT_TABLE_MAXNAMELEN - 1 }, 2449 2448 [NFTA_SET_NAME] = { .type = NLA_STRING, 2450 2449 .len = NFT_SET_MAXNAMELEN - 1 }, 2451 2450 [NFTA_SET_FLAGS] = { .type = NLA_U32 }, ··· 3087 3084 } 3088 3085 3089 3086 static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx, 3090 - const struct nft_set *set, 3087 + struct nft_set *set, 3091 3088 const struct nft_set_iter *iter, 3092 - const struct nft_set_elem *elem) 3089 + struct nft_set_elem *elem) 3093 3090 { 3094 3091 const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3095 3092 enum nft_registers dreg; ··· 3195 3192 }; 3196 3193 3197 3194 static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = { 3198 - [NFTA_SET_ELEM_LIST_TABLE] = { .type = NLA_STRING }, 3199 - [NFTA_SET_ELEM_LIST_SET] = { .type = NLA_STRING }, 3195 + [NFTA_SET_ELEM_LIST_TABLE] = { .type = NLA_STRING, 3196 + .len = NFT_TABLE_MAXNAMELEN - 1 }, 3197 + [NFTA_SET_ELEM_LIST_SET] = { .type = NLA_STRING, 3198 + .len = NFT_SET_MAXNAMELEN - 1 }, 3200 3199 [NFTA_SET_ELEM_LIST_ELEMENTS] = { .type = NLA_NESTED }, 3201 3200 [NFTA_SET_ELEM_LIST_SET_ID] = { .type = NLA_U32 }, 3202 3201 }; ··· 3308 3303 }; 3309 3304 3310 3305 static int nf_tables_dump_setelem(const struct nft_ctx *ctx, 3311 - const struct nft_set *set, 3306 + struct nft_set *set, 3312 3307 const struct nft_set_iter *iter, 3313 - const struct nft_set_elem *elem) 3308 + struct nft_set_elem *elem) 3314 3309 { 3315 3310 struct nft_set_dump_args *args; 3316 3311 ··· 3322 3317 { 3323 3318 struct net *net = sock_net(skb->sk); 3324 3319 u8 genmask = nft_genmask_cur(net); 3325 - const struct nft_set *set; 3320 + struct nft_set *set; 3326 3321 struct nft_set_dump_args args; 3327 3322 struct nft_ctx ctx; 3328 3323 struct nlattr *nla[NFTA_SET_ELEM_LIST_MAX + 1]; ··· 3745 3740 goto err5; 3746 3741 } 3747 3742 3743 + if (set->size && 3744 + !atomic_add_unless(&set->nelems, 1, set->size + set->ndeact)) { 3745 + err = -ENFILE; 3746 + goto err6; 3747 + } 3748 + 3748 3749 nft_trans_elem(trans) = elem; 3749 3750 list_add_tail(&trans->list, &ctx->net->nft.commit_list); 3750 3751 return 0; 3751 3752 3753 + err6: 3754 + set->ops->remove(set, &elem); 3752 3755 err5: 3753 3756 kfree(trans); 3754 3757 err4: ··· 3803 3790 return -EBUSY; 3804 3791 3805 3792 nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) { 3806 - if (set->size && 3807 - !atomic_add_unless(&set->nelems, 1, set->size + set->ndeact)) 3808 - return -ENFILE; 3809 - 3810 3793 err = nft_add_set_elem(&ctx, set, attr, nlh->nlmsg_flags); 3811 - if (err < 0) { 3812 - atomic_dec(&set->nelems); 3794 + if (err < 0) 3813 3795 break; 3814 - } 3815 3796 } 3816 3797 return err; 3817 3798 } ··· 3890 3883 } 3891 3884 3892 3885 static int nft_flush_set(const struct nft_ctx *ctx, 3893 - const struct nft_set *set, 3886 + struct nft_set *set, 3894 3887 const struct nft_set_iter *iter, 3895 - const struct nft_set_elem *elem) 3888 + struct nft_set_elem *elem) 3896 3889 { 3897 3890 struct nft_trans *trans; 3898 3891 int err; ··· 3906 3899 err = -ENOENT; 3907 3900 goto err1; 3908 3901 } 3902 + set->ndeact++; 3909 3903 3910 - nft_trans_elem_set(trans) = (struct nft_set *)set; 3911 - nft_trans_elem(trans) = *((struct nft_set_elem *)elem); 3904 + nft_trans_elem_set(trans) = set; 3905 + nft_trans_elem(trans) = *elem; 3912 3906 list_add_tail(&trans->list, &ctx->net->nft.commit_list); 3913 3907 3914 3908 return 0; ··· 4040 4032 EXPORT_SYMBOL_GPL(nf_tables_obj_lookup); 4041 4033 4042 4034 static const struct nla_policy nft_obj_policy[NFTA_OBJ_MAX + 1] = { 4043 - [NFTA_OBJ_TABLE] = { .type = NLA_STRING }, 4044 - [NFTA_OBJ_NAME] = { .type = NLA_STRING }, 4035 + [NFTA_OBJ_TABLE] = { .type = NLA_STRING, 4036 + .len = NFT_TABLE_MAXNAMELEN - 1 }, 4037 + [NFTA_OBJ_NAME] = { .type = NLA_STRING, 4038 + .len = NFT_OBJ_MAXNAMELEN - 1 }, 4045 4039 [NFTA_OBJ_TYPE] = { .type = NLA_U32 }, 4046 4040 [NFTA_OBJ_DATA] = { .type = NLA_NESTED }, 4047 4041 }; ··· 4272 4262 if (idx > s_idx) 4273 4263 memset(&cb->args[1], 0, 4274 4264 sizeof(cb->args) - sizeof(cb->args[0])); 4275 - if (filter->table[0] && 4265 + if (filter && filter->table[0] && 4276 4266 strcmp(filter->table, table->name)) 4277 4267 goto cont; 4278 - if (filter->type != NFT_OBJECT_UNSPEC && 4268 + if (filter && 4269 + filter->type != NFT_OBJECT_UNSPEC && 4279 4270 obj->type->type != filter->type) 4280 4271 goto cont; 4281 4272 ··· 5020 5009 const struct nft_chain *chain); 5021 5010 5022 5011 static int nf_tables_loop_check_setelem(const struct nft_ctx *ctx, 5023 - const struct nft_set *set, 5012 + struct nft_set *set, 5024 5013 const struct nft_set_iter *iter, 5025 - const struct nft_set_elem *elem) 5014 + struct nft_set_elem *elem) 5026 5015 { 5027 5016 const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 5028 5017 const struct nft_data *data; ··· 5046 5035 { 5047 5036 const struct nft_rule *rule; 5048 5037 const struct nft_expr *expr, *last; 5049 - const struct nft_set *set; 5038 + struct nft_set *set; 5050 5039 struct nft_set_binding *binding; 5051 5040 struct nft_set_iter iter; 5052 5041
+2 -1
net/netfilter/nft_dynset.c
··· 98 98 } 99 99 100 100 static const struct nla_policy nft_dynset_policy[NFTA_DYNSET_MAX + 1] = { 101 - [NFTA_DYNSET_SET_NAME] = { .type = NLA_STRING }, 101 + [NFTA_DYNSET_SET_NAME] = { .type = NLA_STRING, 102 + .len = NFT_SET_MAXNAMELEN - 1 }, 102 103 [NFTA_DYNSET_SET_ID] = { .type = NLA_U32 }, 103 104 [NFTA_DYNSET_OP] = { .type = NLA_U32 }, 104 105 [NFTA_DYNSET_SREG_KEY] = { .type = NLA_U32 },
+2 -1
net/netfilter/nft_log.c
··· 39 39 40 40 static const struct nla_policy nft_log_policy[NFTA_LOG_MAX + 1] = { 41 41 [NFTA_LOG_GROUP] = { .type = NLA_U16 }, 42 - [NFTA_LOG_PREFIX] = { .type = NLA_STRING }, 42 + [NFTA_LOG_PREFIX] = { .type = NLA_STRING, 43 + .len = NF_LOG_PREFIXLEN - 1 }, 43 44 [NFTA_LOG_SNAPLEN] = { .type = NLA_U32 }, 44 45 [NFTA_LOG_QTHRESHOLD] = { .type = NLA_U16 }, 45 46 [NFTA_LOG_LEVEL] = { .type = NLA_U32 },
+2 -1
net/netfilter/nft_lookup.c
··· 49 49 } 50 50 51 51 static const struct nla_policy nft_lookup_policy[NFTA_LOOKUP_MAX + 1] = { 52 - [NFTA_LOOKUP_SET] = { .type = NLA_STRING }, 52 + [NFTA_LOOKUP_SET] = { .type = NLA_STRING, 53 + .len = NFT_SET_MAXNAMELEN - 1 }, 53 54 [NFTA_LOOKUP_SET_ID] = { .type = NLA_U32 }, 54 55 [NFTA_LOOKUP_SREG] = { .type = NLA_U32 }, 55 56 [NFTA_LOOKUP_DREG] = { .type = NLA_U32 },
+4 -2
net/netfilter/nft_objref.c
··· 193 193 } 194 194 195 195 static const struct nla_policy nft_objref_policy[NFTA_OBJREF_MAX + 1] = { 196 - [NFTA_OBJREF_IMM_NAME] = { .type = NLA_STRING }, 196 + [NFTA_OBJREF_IMM_NAME] = { .type = NLA_STRING, 197 + .len = NFT_OBJ_MAXNAMELEN - 1 }, 197 198 [NFTA_OBJREF_IMM_TYPE] = { .type = NLA_U32 }, 198 199 [NFTA_OBJREF_SET_SREG] = { .type = NLA_U32 }, 199 - [NFTA_OBJREF_SET_NAME] = { .type = NLA_STRING }, 200 + [NFTA_OBJREF_SET_NAME] = { .type = NLA_STRING, 201 + .len = NFT_SET_MAXNAMELEN - 1 }, 200 202 [NFTA_OBJREF_SET_ID] = { .type = NLA_U32 }, 201 203 }; 202 204
+1 -1
net/netfilter/nft_set_hash.c
··· 212 212 rhashtable_remove_fast(&priv->ht, &he->node, nft_hash_params); 213 213 } 214 214 215 - static void nft_hash_walk(const struct nft_ctx *ctx, const struct nft_set *set, 215 + static void nft_hash_walk(const struct nft_ctx *ctx, struct nft_set *set, 216 216 struct nft_set_iter *iter) 217 217 { 218 218 struct nft_hash *priv = nft_set_priv(set);
+1 -1
net/netfilter/nft_set_rbtree.c
··· 221 221 } 222 222 223 223 static void nft_rbtree_walk(const struct nft_ctx *ctx, 224 - const struct nft_set *set, 224 + struct nft_set *set, 225 225 struct nft_set_iter *iter) 226 226 { 227 227 const struct nft_rbtree *priv = nft_set_priv(set);
+2 -2
net/packet/af_packet.c
··· 1976 1976 return -EINVAL; 1977 1977 *len -= sizeof(vnet_hdr); 1978 1978 1979 - if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le())) 1979 + if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le(), true)) 1980 1980 return -EINVAL; 1981 1981 1982 1982 return memcpy_to_msg(msg, (void *)&vnet_hdr, sizeof(vnet_hdr)); ··· 2237 2237 if (po->has_vnet_hdr) { 2238 2238 if (virtio_net_hdr_from_skb(skb, h.raw + macoff - 2239 2239 sizeof(struct virtio_net_hdr), 2240 - vio_le())) { 2240 + vio_le(), true)) { 2241 2241 spin_lock(&sk->sk_receive_queue.lock); 2242 2242 goto drop_n_account; 2243 2243 }
+2 -1
net/sctp/ipv6.c
··· 222 222 SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS); 223 223 224 224 rcu_read_lock(); 225 - res = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt), np->tclass); 225 + res = ip6_xmit(sk, skb, fl6, sk->sk_mark, rcu_dereference(np->opt), 226 + np->tclass); 226 227 rcu_read_unlock(); 227 228 return res; 228 229 }
+1 -1
net/sctp/offload.c
··· 68 68 goto out; 69 69 } 70 70 71 - segs = skb_segment(skb, features | NETIF_F_HW_CSUM); 71 + segs = skb_segment(skb, features | NETIF_F_HW_CSUM | NETIF_F_SG); 72 72 if (IS_ERR(segs)) 73 73 goto out; 74 74
+5 -1
net/sctp/socket.c
··· 235 235 sctp_assoc_t id) 236 236 { 237 237 struct sctp_association *addr_asoc = NULL, *id_asoc = NULL; 238 - struct sctp_transport *transport; 238 + struct sctp_af *af = sctp_get_af_specific(addr->ss_family); 239 239 union sctp_addr *laddr = (union sctp_addr *)addr; 240 + struct sctp_transport *transport; 241 + 242 + if (sctp_verify_addr(sk, laddr, af->sockaddr_len)) 243 + return NULL; 240 244 241 245 addr_asoc = sctp_endpoint_lookup_assoc(sctp_sk(sk)->ep, 242 246 laddr,
+5
net/sunrpc/clnt.c
··· 336 336 337 337 static DEFINE_IDA(rpc_clids); 338 338 339 + void rpc_cleanup_clids(void) 340 + { 341 + ida_destroy(&rpc_clids); 342 + } 343 + 339 344 static int rpc_alloc_clid(struct rpc_clnt *clnt) 340 345 { 341 346 int clid;
+1
net/sunrpc/sunrpc_syms.c
··· 119 119 static void __exit 120 120 cleanup_sunrpc(void) 121 121 { 122 + rpc_cleanup_clids(); 122 123 rpcauth_remove_module(); 123 124 cleanup_socket_xprt(); 124 125 svc_cleanup_xprt_sock();
+7 -2
net/tipc/node.c
··· 263 263 write_lock_bh(&n->lock); 264 264 } 265 265 266 + static void tipc_node_write_unlock_fast(struct tipc_node *n) 267 + { 268 + write_unlock_bh(&n->lock); 269 + } 270 + 266 271 static void tipc_node_write_unlock(struct tipc_node *n) 267 272 { 268 273 struct net *net = n->net; ··· 422 417 } 423 418 tipc_node_write_lock(n); 424 419 list_add_tail(subscr, &n->publ_list); 425 - tipc_node_write_unlock(n); 420 + tipc_node_write_unlock_fast(n); 426 421 tipc_node_put(n); 427 422 } 428 423 ··· 440 435 } 441 436 tipc_node_write_lock(n); 442 437 list_del_init(subscr); 443 - tipc_node_write_unlock(n); 438 + tipc_node_write_unlock_fast(n); 444 439 tipc_node_put(n); 445 440 } 446 441
+21 -27
net/tipc/server.c
··· 86 86 static void tipc_recv_work(struct work_struct *work); 87 87 static void tipc_send_work(struct work_struct *work); 88 88 static void tipc_clean_outqueues(struct tipc_conn *con); 89 - static void tipc_sock_release(struct tipc_conn *con); 90 89 91 90 static void tipc_conn_kref_release(struct kref *kref) 92 91 { 93 92 struct tipc_conn *con = container_of(kref, struct tipc_conn, kref); 94 - struct sockaddr_tipc *saddr = con->server->saddr; 93 + struct tipc_server *s = con->server; 94 + struct sockaddr_tipc *saddr = s->saddr; 95 95 struct socket *sock = con->sock; 96 96 struct sock *sk; 97 97 ··· 103 103 } 104 104 saddr->scope = -TIPC_NODE_SCOPE; 105 105 kernel_bind(sock, (struct sockaddr *)saddr, sizeof(*saddr)); 106 - tipc_sock_release(con); 107 106 sock_release(sock); 108 107 con->sock = NULL; 108 + 109 + spin_lock_bh(&s->idr_lock); 110 + idr_remove(&s->conn_idr, con->conid); 111 + s->idr_in_use--; 112 + spin_unlock_bh(&s->idr_lock); 109 113 } 110 114 111 115 tipc_clean_outqueues(con); ··· 132 128 133 129 spin_lock_bh(&s->idr_lock); 134 130 con = idr_find(&s->conn_idr, conid); 135 - if (con) 131 + if (con && test_bit(CF_CONNECTED, &con->flags)) 136 132 conn_get(con); 133 + else 134 + con = NULL; 137 135 spin_unlock_bh(&s->idr_lock); 138 136 return con; 139 137 } ··· 192 186 write_unlock_bh(&sk->sk_callback_lock); 193 187 } 194 188 195 - static void tipc_sock_release(struct tipc_conn *con) 196 - { 197 - struct tipc_server *s = con->server; 198 - 199 - if (con->conid) 200 - s->tipc_conn_release(con->conid, con->usr_data); 201 - 202 - tipc_unregister_callbacks(con); 203 - } 204 - 205 189 static void tipc_close_conn(struct tipc_conn *con) 206 190 { 207 191 struct tipc_server *s = con->server; 208 192 209 193 if (test_and_clear_bit(CF_CONNECTED, &con->flags)) { 194 + tipc_unregister_callbacks(con); 210 195 211 - spin_lock_bh(&s->idr_lock); 212 - idr_remove(&s->conn_idr, con->conid); 213 - s->idr_in_use--; 214 - spin_unlock_bh(&s->idr_lock); 196 + if (con->conid)
197 + s->tipc_conn_release(con->conid, con->usr_data); 215 198 216 199 /* We shouldn't flush pending works as we may be in the 217 200 * thread. In fact the races with pending rx/tx work structs ··· 453 458 if (!con) 454 459 return -EINVAL; 455 460 461 + if (!test_bit(CF_CONNECTED, &con->flags)) { 462 + conn_put(con); 463 + return 0; 464 + } 465 + 456 466 e = tipc_alloc_entry(data, len); 457 467 if (!e) { 458 468 conn_put(con); ··· 471 471 list_add_tail(&e->list, &con->outqueue); 472 472 spin_unlock_bh(&con->outqueue_lock); 473 473 474 - if (test_bit(CF_CONNECTED, &con->flags)) { 475 - if (!queue_work(s->send_wq, &con->swork)) 476 - conn_put(con); 477 - } else { 474 + if (!queue_work(s->send_wq, &con->swork)) 478 475 conn_put(con); 479 - } 480 476 return 0; 481 477 } 482 478 ··· 496 500 int ret; 497 501 498 502 spin_lock_bh(&con->outqueue_lock); 499 - while (1) { 503 + while (test_bit(CF_CONNECTED, &con->flags)) { 500 504 e = list_entry(con->outqueue.next, struct outqueue_entry, 501 505 list); 502 506 if ((struct list_head *) e == &con->outqueue) ··· 619 623 void tipc_server_stop(struct tipc_server *s) 620 624 { 621 625 struct tipc_conn *con; 622 - int total = 0; 623 626 int id; 624 627 625 628 spin_lock_bh(&s->idr_lock); 626 - for (id = 0; total < s->idr_in_use; id++) { 629 + for (id = 0; s->idr_in_use; id++) { 627 630 con = idr_find(&s->conn_idr, id); 628 631 if (con) { 629 - total++; 630 632 spin_unlock_bh(&s->idr_lock); 631 633 tipc_close_conn(con); 632 634 spin_lock_bh(&s->idr_lock);
+71 -55
net/tipc/subscr.c
··· 54 54 55 55 static void tipc_subscrp_delete(struct tipc_subscription *sub); 56 56 static void tipc_subscrb_put(struct tipc_subscriber *subscriber); 57 + static void tipc_subscrp_put(struct tipc_subscription *subscription); 58 + static void tipc_subscrp_get(struct tipc_subscription *subscription); 57 59 58 60 /** 59 61 * htohl - convert value to endianness used by destination ··· 125 123 { 126 124 struct tipc_name_seq seq; 127 125 126 + tipc_subscrp_get(sub); 128 127 tipc_subscrp_convert_seq(&sub->evt.s.seq, sub->swap, &seq); 129 128 if (!tipc_subscrp_check_overlap(&seq, found_lower, found_upper)) 130 129 return; ··· 135 132 136 133 tipc_subscrp_send_event(sub, found_lower, found_upper, event, port_ref, 137 134 node); 135 + tipc_subscrp_put(sub); 138 136 } 139 137 140 138 static void tipc_subscrp_timeout(unsigned long data) 141 139 { 142 140 struct tipc_subscription *sub = (struct tipc_subscription *)data; 143 - struct tipc_subscriber *subscriber = sub->subscriber; 144 141 145 142 /* Notify subscriber of timeout */ 146 143 tipc_subscrp_send_event(sub, sub->evt.s.seq.lower, sub->evt.s.seq.upper, 147 144 TIPC_SUBSCR_TIMEOUT, 0, 0); 148 145 149 - spin_lock_bh(&subscriber->lock); 150 - tipc_subscrp_delete(sub); 151 - spin_unlock_bh(&subscriber->lock); 152 - 153 - tipc_subscrb_put(subscriber); 146 + tipc_subscrp_put(sub); 154 147 } 155 148 156 149 static void tipc_subscrb_kref_release(struct kref *kref) 157 150 { 158 - struct tipc_subscriber *subcriber = container_of(kref, 159 - struct tipc_subscriber, kref); 160 - 161 - kfree(subcriber); 151 + kfree(container_of(kref,struct tipc_subscriber, kref)); 162 152 } 163 153 164 154 static void tipc_subscrb_put(struct tipc_subscriber *subscriber) ··· 164 168 kref_get(&subscriber->kref); 165 169 } 166 170 171 + static void tipc_subscrp_kref_release(struct kref *kref) 172 + { 173 + struct tipc_subscription *sub = container_of(kref, 174 + struct tipc_subscription, 175 + kref); 176 + struct tipc_net *tn = net_generic(sub->net, 
tipc_net_id); 177 + struct tipc_subscriber *subscriber = sub->subscriber; 178 + 179 + spin_lock_bh(&subscriber->lock); 180 + tipc_nametbl_unsubscribe(sub); 181 + list_del(&sub->subscrp_list); 182 + atomic_dec(&tn->subscription_count); 183 + spin_unlock_bh(&subscriber->lock); 184 + kfree(sub); 185 + tipc_subscrb_put(subscriber); 186 + } 187 + 188 + static void tipc_subscrp_put(struct tipc_subscription *subscription) 189 + { 190 + kref_put(&subscription->kref, tipc_subscrp_kref_release); 191 + } 192 + 193 + static void tipc_subscrp_get(struct tipc_subscription *subscription) 194 + { 195 + kref_get(&subscription->kref); 196 + } 197 + 198 + /* tipc_subscrb_subscrp_delete - delete a specific subscription or all 199 + * subscriptions for a given subscriber. 200 + */ 201 + static void tipc_subscrb_subscrp_delete(struct tipc_subscriber *subscriber, 202 + struct tipc_subscr *s) 203 + { 204 + struct list_head *subscription_list = &subscriber->subscrp_list; 205 + struct tipc_subscription *sub, *temp; 206 + 207 + spin_lock_bh(&subscriber->lock); 208 + list_for_each_entry_safe(sub, temp, subscription_list, subscrp_list) { 209 + if (s && memcmp(s, &sub->evt.s, sizeof(struct tipc_subscr))) 210 + continue; 211 + 212 + tipc_subscrp_get(sub); 213 + spin_unlock_bh(&subscriber->lock); 214 + tipc_subscrp_delete(sub); 215 + tipc_subscrp_put(sub); 216 + spin_lock_bh(&subscriber->lock); 217 + 218 + if (s) 219 + break; 220 + } 221 + spin_unlock_bh(&subscriber->lock); 222 + } 223 + 167 224 static struct tipc_subscriber *tipc_subscrb_create(int conid) 168 225 { 169 226 struct tipc_subscriber *subscriber; ··· 226 177 pr_warn("Subscriber rejected, no memory\n"); 227 178 return NULL; 228 179 } 229 - kref_init(&subscriber->kref); 230 180 INIT_LIST_HEAD(&subscriber->subscrp_list); 181 + kref_init(&subscriber->kref); 231 182 subscriber->conid = conid; 232 183 spin_lock_init(&subscriber->lock); 233 184 ··· 236 187 237 188 static void tipc_subscrb_delete(struct tipc_subscriber *subscriber) 238 189 { 
239 - struct tipc_subscription *sub, *temp; 240 - u32 timeout; 241 - 242 - spin_lock_bh(&subscriber->lock); 243 - /* Destroy any existing subscriptions for subscriber */ 244 - list_for_each_entry_safe(sub, temp, &subscriber->subscrp_list, 245 - subscrp_list) { 246 - timeout = htohl(sub->evt.s.timeout, sub->swap); 247 - if ((timeout == TIPC_WAIT_FOREVER) || del_timer(&sub->timer)) { 248 - tipc_subscrp_delete(sub); 249 - tipc_subscrb_put(subscriber); 250 - } 251 - } 252 - spin_unlock_bh(&subscriber->lock); 253 - 190 + tipc_subscrb_subscrp_delete(subscriber, NULL); 254 191 tipc_subscrb_put(subscriber); 255 192 } 256 193 257 194 static void tipc_subscrp_delete(struct tipc_subscription *sub) 258 195 { 259 - struct tipc_net *tn = net_generic(sub->net, tipc_net_id); 196 + u32 timeout = htohl(sub->evt.s.timeout, sub->swap); 260 197 261 - tipc_nametbl_unsubscribe(sub); 262 - list_del(&sub->subscrp_list); 263 - kfree(sub); 264 - atomic_dec(&tn->subscription_count); 198 + if (timeout == TIPC_WAIT_FOREVER || del_timer(&sub->timer)) 199 + tipc_subscrp_put(sub); 265 200 } 266 201 267 202 static void tipc_subscrp_cancel(struct tipc_subscr *s, 268 203 struct tipc_subscriber *subscriber) 269 204 { 270 - struct tipc_subscription *sub, *temp; 271 - u32 timeout; 272 - 273 - spin_lock_bh(&subscriber->lock); 274 - /* Find first matching subscription, exit if not found */ 275 - list_for_each_entry_safe(sub, temp, &subscriber->subscrp_list, 276 - subscrp_list) { 277 - if (!memcmp(s, &sub->evt.s, sizeof(struct tipc_subscr))) { 278 - timeout = htohl(sub->evt.s.timeout, sub->swap); 279 - if ((timeout == TIPC_WAIT_FOREVER) || 280 - del_timer(&sub->timer)) { 281 - tipc_subscrp_delete(sub); 282 - tipc_subscrb_put(subscriber); 283 - } 284 - break; 285 - } 286 - } 287 - spin_unlock_bh(&subscriber->lock); 205 + tipc_subscrb_subscrp_delete(subscriber, s); 288 206 } 289 207 290 208 static struct tipc_subscription *tipc_subscrp_create(struct net *net, ··· 288 272 sub->swap = swap; 289 273 
memcpy(&sub->evt.s, s, sizeof(*s)); 290 274 atomic_inc(&tn->subscription_count); 275 + kref_init(&sub->kref); 291 276 return sub; 292 277 } 293 278 ··· 305 288 306 289 spin_lock_bh(&subscriber->lock); 307 290 list_add(&sub->subscrp_list, &subscriber->subscrp_list); 308 - tipc_subscrb_get(subscriber); 309 291 sub->subscriber = subscriber; 310 292 tipc_nametbl_subscribe(sub); 293 + tipc_subscrb_get(subscriber); 311 294 spin_unlock_bh(&subscriber->lock); 312 295 313 - timeout = htohl(sub->evt.s.timeout, swap); 314 - if (timeout == TIPC_WAIT_FOREVER) 315 - return; 316 - 317 296 setup_timer(&sub->timer, tipc_subscrp_timeout, (unsigned long)sub); 318 - mod_timer(&sub->timer, jiffies + msecs_to_jiffies(timeout)); 297 + timeout = htohl(sub->evt.s.timeout, swap); 298 + 299 + if (timeout != TIPC_WAIT_FOREVER) 300 + mod_timer(&sub->timer, jiffies + msecs_to_jiffies(timeout)); 319 301 } 320 302 321 303 /* Handle one termination request for the subscriber */
+1
net/tipc/subscr.h
··· 57 57 * @evt: template for events generated by subscription 58 58 */ 59 59 struct tipc_subscription { 60 + struct kref kref; 60 61 struct tipc_subscriber *subscriber; 61 62 struct net *net; 62 63 struct timer_list timer;
+16 -11
net/unix/af_unix.c
··· 995 995 unsigned int hash; 996 996 struct unix_address *addr; 997 997 struct hlist_head *list; 998 + struct path path = { NULL, NULL }; 998 999 999 1000 err = -EINVAL; 1000 1001 if (sunaddr->sun_family != AF_UNIX) ··· 1011 1010 goto out; 1012 1011 addr_len = err; 1013 1012 1013 + if (sun_path[0]) { 1014 + umode_t mode = S_IFSOCK | 1015 + (SOCK_INODE(sock)->i_mode & ~current_umask()); 1016 + err = unix_mknod(sun_path, mode, &path); 1017 + if (err) { 1018 + if (err == -EEXIST) 1019 + err = -EADDRINUSE; 1020 + goto out; 1021 + } 1022 + } 1023 + 1014 1024 err = mutex_lock_interruptible(&u->bindlock); 1015 1025 if (err) 1016 - goto out; 1026 + goto out_put; 1017 1027 1018 1028 err = -EINVAL; 1019 1029 if (u->addr) ··· 1041 1029 atomic_set(&addr->refcnt, 1); 1042 1030 1043 1031 if (sun_path[0]) { 1044 - struct path path; 1045 - umode_t mode = S_IFSOCK | 1046 - (SOCK_INODE(sock)->i_mode & ~current_umask()); 1047 - err = unix_mknod(sun_path, mode, &path); 1048 - if (err) { 1049 - if (err == -EEXIST) 1050 - err = -EADDRINUSE; 1051 - unix_release_addr(addr); 1052 - goto out_up; 1053 - } 1054 1032 addr->hash = UNIX_HASH_SIZE; 1055 1033 hash = d_backing_inode(path.dentry)->i_ino & (UNIX_HASH_SIZE - 1); 1056 1034 spin_lock(&unix_table_lock); ··· 1067 1065 spin_unlock(&unix_table_lock); 1068 1066 out_up: 1069 1067 mutex_unlock(&u->bindlock); 1068 + out_put: 1069 + if (err) 1070 + path_put(&path); 1070 1071 out: 1071 1072 return err; 1072 1073 }
+1
samples/bpf/tc_l2_redirect_kern.c
··· 4 4 * modify it under the terms of version 2 of the GNU General Public 5 5 * License as published by the Free Software Foundation. 6 6 */ 7 + #define KBUILD_MODNAME "foo" 7 8 #include <uapi/linux/bpf.h> 8 9 #include <uapi/linux/if_ether.h> 9 10 #include <uapi/linux/if_packet.h>
+1
samples/bpf/xdp_tx_iptunnel_kern.c
··· 8 8 * encapsulating the incoming packet in an IPv4/v6 header 9 9 * and then XDP_TX it out. 10 10 */ 11 + #define KBUILD_MODNAME "foo" 11 12 #include <uapi/linux/bpf.h> 12 13 #include <linux/in.h> 13 14 #include <linux/if_ether.h>
+27 -26
tools/testing/selftests/bpf/test_lru_map.c
··· 67 67 return map_subset(lru_map, expected) && map_subset(expected, lru_map); 68 68 } 69 69 70 - static int sched_next_online(int pid, int next_to_try) 70 + static int sched_next_online(int pid, int *next_to_try) 71 71 { 72 72 cpu_set_t cpuset; 73 + int next = *next_to_try; 74 + int ret = -1; 73 75 74 - if (next_to_try == nr_cpus) 75 - return -1; 76 - 77 - while (next_to_try < nr_cpus) { 76 + while (next < nr_cpus) { 78 77 CPU_ZERO(&cpuset); 79 - CPU_SET(next_to_try++, &cpuset); 80 - if (!sched_setaffinity(pid, sizeof(cpuset), &cpuset)) 78 + CPU_SET(next++, &cpuset); 79 + if (!sched_setaffinity(pid, sizeof(cpuset), &cpuset)) { 80 + ret = 0; 81 81 break; 82 + } 82 83 } 83 84 84 - return next_to_try; 85 + *next_to_try = next; 86 + return ret; 85 87 } 86 88 87 89 /* Size of the LRU amp is 2 ··· 98 96 { 99 97 unsigned long long key, value[nr_cpus]; 100 98 int lru_map_fd, expected_map_fd; 99 + int next_cpu = 0; 101 100 102 101 printf("%s (map_type:%d map_flags:0x%X): ", __func__, map_type, 103 102 map_flags); 104 103 105 - assert(sched_next_online(0, 0) != -1); 104 + assert(sched_next_online(0, &next_cpu) != -1); 106 105 107 106 if (map_flags & BPF_F_NO_COMMON_LRU) 108 107 lru_map_fd = create_map(map_type, map_flags, 2 * nr_cpus); ··· 186 183 int lru_map_fd, expected_map_fd; 187 184 unsigned int batch_size; 188 185 unsigned int map_size; 186 + int next_cpu = 0; 189 187 190 188 if (map_flags & BPF_F_NO_COMMON_LRU) 191 189 /* Ther percpu lru list (i.e each cpu has its own LRU ··· 200 196 printf("%s (map_type:%d map_flags:0x%X): ", __func__, map_type, 201 197 map_flags); 202 198 203 - assert(sched_next_online(0, 0) != -1); 199 + assert(sched_next_online(0, &next_cpu) != -1); 204 200 205 201 batch_size = tgt_free / 2; 206 202 assert(batch_size * 2 == tgt_free); ··· 266 262 int lru_map_fd, expected_map_fd; 267 263 unsigned int batch_size; 268 264 unsigned int map_size; 265 + int next_cpu = 0; 269 266 270 267 if (map_flags & BPF_F_NO_COMMON_LRU) 271 268 /* Ther percpu lru 
list (i.e each cpu has its own LRU ··· 280 275 printf("%s (map_type:%d map_flags:0x%X): ", __func__, map_type, 281 276 map_flags); 282 277 283 - assert(sched_next_online(0, 0) != -1); 278 + assert(sched_next_online(0, &next_cpu) != -1); 284 279 285 280 batch_size = tgt_free / 2; 286 281 assert(batch_size * 2 == tgt_free); ··· 375 370 int lru_map_fd, expected_map_fd; 376 371 unsigned int batch_size; 377 372 unsigned int map_size; 373 + int next_cpu = 0; 378 374 379 375 printf("%s (map_type:%d map_flags:0x%X): ", __func__, map_type, 380 376 map_flags); 381 377 382 - assert(sched_next_online(0, 0) != -1); 378 + assert(sched_next_online(0, &next_cpu) != -1); 383 379 384 380 batch_size = tgt_free / 2; 385 381 assert(batch_size * 2 == tgt_free); ··· 436 430 int lru_map_fd, expected_map_fd; 437 431 unsigned long long key, value[nr_cpus]; 438 432 unsigned long long end_key; 433 + int next_cpu = 0; 439 434 440 435 printf("%s (map_type:%d map_flags:0x%X): ", __func__, map_type, 441 436 map_flags); 442 437 443 - assert(sched_next_online(0, 0) != -1); 438 + assert(sched_next_online(0, &next_cpu) != -1); 444 439 445 440 if (map_flags & BPF_F_NO_COMMON_LRU) 446 441 lru_map_fd = create_map(map_type, map_flags, ··· 509 502 static void test_lru_sanity5(int map_type, int map_flags) 510 503 { 511 504 unsigned long long key, value[nr_cpus]; 512 - int next_sched_cpu = 0; 505 + int next_cpu = 0; 513 506 int map_fd; 514 - int i; 515 507 516 508 if (map_flags & BPF_F_NO_COMMON_LRU) 517 509 return; ··· 525 519 key = 0; 526 520 assert(!bpf_map_update(map_fd, &key, value, BPF_NOEXIST)); 527 521 528 - for (i = 0; i < nr_cpus; i++) { 522 + while (sched_next_online(0, &next_cpu) != -1) { 529 523 pid_t pid; 530 524 531 525 pid = fork(); 532 526 if (pid == 0) { 533 - next_sched_cpu = sched_next_online(0, next_sched_cpu); 534 - if (next_sched_cpu != -1) 535 - do_test_lru_sanity5(key, map_fd); 527 + do_test_lru_sanity5(key, map_fd); 536 528 exit(0); 537 529 } else if (pid == -1) { 538 - 
printf("couldn't spawn #%d process\n", i); 530 + printf("couldn't spawn process to test key:%llu\n", 531 + key); 539 532 exit(1); 540 533 } else { 541 534 int status; 542 - 543 - /* It is mostly redundant and just allow the parent 544 - * process to update next_shced_cpu for the next child 545 - * process 546 - */ 547 - next_sched_cpu = sched_next_online(pid, next_sched_cpu); 548 535 549 536 assert(waitpid(pid, &status, 0) == pid); 550 537 assert(status == 0); ··· 546 547 } 547 548 548 549 close(map_fd); 550 + /* At least one key should be tested */ 551 + assert(key > 0); 549 552 550 553 printf("Pass\n"); 551 554 }