Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge remote-tracking branch 'torvalds/master' into perf/urgent

To check if more kernel API sync is needed and also to see if the perf
build tests continue to pass.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

+4749 -2207
+3
.mailmap
···
 Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
 Christian Borntraeger <borntraeger@linux.ibm.com> <cborntra@de.ibm.com>
 Christian Borntraeger <borntraeger@linux.ibm.com> <borntrae@de.ibm.com>
+Christian Brauner <brauner@kernel.org> <christian@brauner.io>
+Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com>
+Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com>
 Christophe Ricard <christophe.ricard@gmail.com>
 Christoph Hellwig <hch@lst.de>
 Colin Ian King <colin.king@intel.com> <colin.king@canonical.com>
+2
Documentation/arm64/silicon-errata.rst
···
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A510     | #2051678        | ARM64_ERRATUM_2051678       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A510     | #2077057        | ARM64_ERRATUM_2077057       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+8
Documentation/dev-tools/kselftest.rst
···
 paths in the kernel. Tests are intended to be run after building, installing
 and booting a kernel.

+Kselftest from mainline can be run on older stable kernels. Running tests
+from mainline offers the best coverage. Several test rings run mainline
+kselftest suite on stable releases. The reason is that when a new test
+gets added to test existing code to regression test a bug, we should be
+able to run that test on an older kernel. Hence, it is important to keep
+code that can still test an older kernel and make sure it skips the test
+gracefully on newer releases.
+
 You can find additional information on Kselftest framework, how to
 write new tests using the framework on Kselftest wiki:
+6
Documentation/devicetree/bindings/net/qcom,ipa.yaml
···
       - const: imem
       - const: config

+  qcom,qmp:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: phandle to the AOSS side-channel message RAM
+
   qcom,smem-states:
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description: State bits used in by the AP to signal the modem.
···
         interconnect-names = "memory",
                              "imem",
                              "config";
+
+        qcom,qmp = <&aoss_qmp>;

         qcom,smem-states = <&ipa_smp2p_out 0>,
                            <&ipa_smp2p_out 1>;
+3 -2
Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
···
     minItems: 1
     maxItems: 256
     items:
-      minimum: 0
-      maximum: 256
+      items:
+        - minimum: 0
+          maximum: 256
     description:
       Chip select used by the device.
+16
Documentation/filesystems/netfs_library.rst
···
                               struct iov_iter *iter,
                               netfs_io_terminated_t term_func,
                               void *term_func_priv);
+
+        int (*query_occupancy)(struct netfs_cache_resources *cres,
+                               loff_t start, size_t len, size_t granularity,
+                               loff_t *_data_start, size_t *_data_len);
 };

 With a termination handler function pointer::
···
    with the number of bytes transferred or an error code, plus a flag
    indicating whether the termination is definitely happening in the caller's
    context.
+
+ * ``query_occupancy()``
+
+   [Required] Called to find out where the next piece of data is within a
+   particular region of the cache. The start and length of the region to be
+   queried are passed in, along with the granularity to which the answer needs
+   to be aligned. The function passes back the start and length of the data,
+   if any, available within that region. Note that there may be a hole at the
+   front.
+
+   It returns 0 if some data was found, -ENODATA if there was no usable data
+   within the region or -ENOBUFS if there is no caching on this file.

 Note that these methods are passed a pointer to the cache resource structure,
 not the read request structure as they could be used in other situations where
-24
Documentation/gpu/todo.rst
···

 Level: Advanced

-Garbage collect fbdev scrolling acceleration
---------------------------------------------
-
-Scroll acceleration has been disabled in fbcon. Now it works as the old
-SCROLL_REDRAW mode. A ton of code was removed in fbcon.c and the hook bmove was
-removed from fbcon_ops.
-Remaining tasks:
-
-- a bunch of the hooks in fbcon_ops could be removed or simplified by calling
-  directly instead of the function table (with a switch on p->rotate)
-
-- fb_copyarea is unused after this, and can be deleted from all drivers
-
-- after that, fb_copyarea can be deleted from fb_ops in include/linux/fb.h as
-  well as cfb_copyarea
-
-Note that not all acceleration code can be deleted, since clearing and cursor
-support is still accelerated, which might be good candidates for further
-deletion projects.
-
-Contact: Daniel Vetter
-
-Level: Intermediate
-
 idr_init_base()
 ---------------
+3
Documentation/userspace-api/ioctl/ioctl-number.rst
···
 'B'   00-1F   linux/cciss_ioctl.h                     conflict!
 'B'   00-0F   include/linux/pmu.h                     conflict!
 'B'   C0-FF   advanced bbus                           <mailto:maassen@uni-freiburg.de>
+'B'   00-0F   xen/xenbus_dev.h                        conflict!
 'C'   all     linux/soundcard.h                       conflict!
 'C'   01-2F   linux/capi.h                            conflict!
 'C'   F0-FF   drivers/net/wan/cosa.h                  conflict!
···
 'F'   80-8F   linux/arcfb.h                           conflict!
 'F'   DD      video/sstfb.h                           conflict!
 'G'   00-3F   drivers/misc/sgi-gru/grulib.h           conflict!
+'G'   00-0F   xen/gntalloc.h, xen/gntdev.h            conflict!
 'H'   00-7F   linux/hiddev.h                          conflict!
 'H'   00-0F   linux/hidraw.h                          conflict!
 'H'   01      linux/mei.h                             conflict!
···
 'P'   60-6F   sound/sscape_ioctl.h                    conflict!
 'P'   00-0F   drivers/usb/class/usblp.c               conflict!
 'P'   01-09   drivers/misc/pci_endpoint_test.c        conflict!
+'P'   00-0F   xen/privcmd.h                           conflict!
 'Q'   all     linux/soundcard.h
 'R'   00-1F   linux/random.h                          conflict!
 'R'   01      linux/rfkill.h                          conflict!
+16 -3
MAINTAINERS
···
 K:    csky

 CA8210 IEEE-802.15.4 RADIO DRIVER
-M:    Harry Morris <h.morris@cascoda.com>
 L:    linux-wpan@vger.kernel.org
-S:    Maintained
+S:    Orphan
 W:    https://github.com/Cascoda/ca8210-linux.git
 F:    Documentation/devicetree/bindings/net/ieee802154/ca8210.txt
 F:    drivers/net/ieee802154/ca8210.c
···
 F:    drivers/ata/pata_arasan_cf.c
 F:    include/linux/pata_arasan_cf_data.h

+LIBATA PATA DRIVERS
+R:    Sergey Shtylyov <s.shtylyov@omp.ru>
+L:    linux-ide@vger.kernel.org
+F:    drivers/ata/ata_*.c
+F:    drivers/ata/pata_*.c
+
 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
 M:    Linus Walleij <linus.walleij@linaro.org>
 L:    linux-ide@vger.kernel.org
···
 F:    kernel/sched/membarrier.c

 MEMBLOCK
-M:    Mike Rapoport <rppt@linux.ibm.com>
+M:    Mike Rapoport <rppt@kernel.org>
 L:    linux-mm@kvack.org
 S:    Maintained
 F:    Documentation/core-api/boot-time-mm.rst
···
 F:    Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
 F:    drivers/i2c/busses/i2c-rcar.c
 F:    drivers/i2c/busses/i2c-sh_mobile.c
+
+RENESAS R-CAR SATA DRIVER
+R:    Sergey Shtylyov <s.shtylyov@omp.ru>
+S:    Supported
+L:    linux-ide@vger.kernel.org
+L:    linux-renesas-soc@vger.kernel.org
+F:    Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
+F:    drivers/ata/sata_rcar.c

 RENESAS R-CAR THERMAL DRIVERS
 M:    Niklas Söderlund <niklas.soderlund@ragnatech.se>
+2 -2
arch/arm/crypto/blake2s-shash.c
···
 static int crypto_blake2s_update_arm(struct shash_desc *desc,
                                      const u8 *in, unsigned int inlen)
 {
-    return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
+    return crypto_blake2s_update(desc, in, inlen, false);
 }

 static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out)
 {
-    return crypto_blake2s_final(desc, out, blake2s_compress);
+    return crypto_blake2s_final(desc, out, false);
 }

 #define BLAKE2S_ALG(name, driver_name, digest_size) \
+16
arch/arm64/Kconfig
···

       If unsure, say Y.

+config ARM64_ERRATUM_2077057
+    bool "Cortex-A510: 2077057: workaround software-step corrupting SPSR_EL2"
+    help
+      This option adds the workaround for ARM Cortex-A510 erratum 2077057.
+      Affected Cortex-A510 may corrupt SPSR_EL2 when a step exception is
+      expected, but a Pointer Authentication trap is taken instead. The
+      erratum causes SPSR_EL1 to be copied to SPSR_EL2, which could allow
+      EL1 to cause a return to EL2 with a guest controlled ELR_EL2.
+
+      This can only happen when EL2 is stepping EL1.
+
+      When these conditions occur, the SPSR_EL2 value is unchanged from the
+      previous guest entry, and can be restored from the in-memory copy.
+
+      If unsure, say Y.
+
 config ARM64_ERRATUM_2119858
     bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
     default y
+8
arch/arm64/kernel/cpu_errata.c
···
         CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
     },
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_2077057
+    {
+        .desc = "ARM erratum 2077057",
+        .capability = ARM64_WORKAROUND_2077057,
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+        ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
+    },
+#endif
 #ifdef CONFIG_ARM64_ERRATUM_2064142
     {
         .desc = "ARM erratum 2064142",
+33 -18
arch/arm64/kvm/arm.c
···
         xfer_to_guest_mode_work_pending();
 }

+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static int noinstr kvm_arm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+    int ret;
+
+    guest_state_enter_irqoff();
+    ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+    guest_state_exit_irqoff();
+
+    return ret;
+}
+
 /**
  * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
  * @vcpu: The VCPU pointer
···
          * Enter the guest
          */
         trace_kvm_entry(*vcpu_pc(vcpu));
-        guest_enter_irqoff();
+        guest_timing_enter_irqoff();

-        ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+        ret = kvm_arm_vcpu_enter_exit(vcpu);

         vcpu->mode = OUTSIDE_GUEST_MODE;
         vcpu->stat.exits++;
···
         kvm_arch_vcpu_ctxsync_fp(vcpu);

         /*
-         * We may have taken a host interrupt in HYP mode (ie
-         * while executing the guest). This interrupt is still
-         * pending, as we haven't serviced it yet!
+         * We must ensure that any pending interrupts are taken before
+         * we exit guest timing so that timer ticks are accounted as
+         * guest time. Transiently unmask interrupts so that any
+         * pending interrupts are taken.
          *
-         * We're now back in SVC mode, with interrupts
-         * disabled. Enabling the interrupts now will have
-         * the effect of taking the interrupt again, in SVC
-         * mode this time.
+         * Per ARM DDI 0487G.b section D1.13.4, an ISB (or other
+         * context synchronization event) is necessary to ensure that
+         * pending interrupts are taken.
          */
         local_irq_enable();
+        isb();
+        local_irq_disable();

-        /*
-         * We do local_irq_enable() before calling guest_exit() so
-         * that if a timer interrupt hits while running the guest we
-         * account that tick as being spent in the guest. We enable
-         * preemption after calling guest_exit() so that if we get
-         * preempted we make sure ticks after that is not counted as
-         * guest time.
-         */
-        guest_exit();
+        guest_timing_exit_irqoff();
+
+        local_irq_enable();
+
         trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));

         /* Exit types that need handling before we can be preempted */
+8
arch/arm64/kvm/handle_exit.c
···
 {
     struct kvm_run *run = vcpu->run;

+    if (ARM_SERROR_PENDING(exception_index)) {
+        /*
+         * The SError is handled by handle_exit_early(). If the guest
+         * survives it will re-execute the original instruction.
+         */
+        return 1;
+    }
+
     exception_index = ARM_EXCEPTION_CODE(exception_index);

     switch (exception_index) {
+21 -2
arch/arm64/kvm/hyp/include/hyp/switch.h
···
     return false;
 }

+static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+    /*
+     * Check for the conditions of Cortex-A510's #2077057. When these occur
+     * SPSR_EL2 can't be trusted, but isn't needed either as it is
+     * unchanged from the value in vcpu_gp_regs(vcpu)->pstate.
+     * Are we single-stepping the guest, and took a PAC exception from the
+     * active-not-pending state?
+     */
+    if (cpus_have_final_cap(ARM64_WORKAROUND_2077057) &&
+        vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP &&
+        *vcpu_cpsr(vcpu) & DBG_SPSR_SS &&
+        ESR_ELx_EC(read_sysreg_el2(SYS_ESR)) == ESR_ELx_EC_PAC)
+        write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+
+    vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
···
      * Save PSTATE early so that we can evaluate the vcpu mode
      * early on.
      */
-    vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+    synchronize_vcpu_pstate(vcpu, exit_code);

     /*
      * Check whether we want to repaint the state one way or
···
     if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
         vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);

-    if (ARM_SERROR_PENDING(*exit_code)) {
+    if (ARM_SERROR_PENDING(*exit_code) &&
+        ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
         u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);

         /*
+3 -2
arch/arm64/tools/cpucaps
···
 WORKAROUND_1463225
 WORKAROUND_1508412
 WORKAROUND_1542419
-WORKAROUND_2064142
-WORKAROUND_2038923
 WORKAROUND_1902691
+WORKAROUND_2038923
+WORKAROUND_2064142
+WORKAROUND_2077057
 WORKAROUND_TRBE_OVERWRITE_FILL_MODE
 WORKAROUND_TSB_FLUSH_FAILURE
 WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+1 -1
arch/mips/cavium-octeon/octeon-memcpy.S
···
 #define EXC(inst_reg,addr,handler)      \
 9:  inst_reg, addr;                     \
     .section __ex_table,"a";            \
-    PTR     9b, handler;                \
+    PTR_WD  9b, handler;                \
     .previous

 /*
+46 -4
arch/mips/kvm/mips.c
···
     return -ENOIOCTLCMD;
 }

+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static int noinstr kvm_mips_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+    int ret;
+
+    guest_state_enter_irqoff();
+    ret = kvm_mips_callbacks->vcpu_run(vcpu);
+    guest_state_exit_irqoff();
+
+    return ret;
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
     int r = -EINTR;
···
     lose_fpu(1);

     local_irq_disable();
-    guest_enter_irqoff();
+    guest_timing_enter_irqoff();
     trace_kvm_enter(vcpu);

     /*
···
      */
     smp_store_mb(vcpu->mode, IN_GUEST_MODE);

-    r = kvm_mips_callbacks->vcpu_run(vcpu);
+    r = kvm_mips_vcpu_enter_exit(vcpu);
+
+    /*
+     * We must ensure that any pending interrupts are taken before
+     * we exit guest timing so that timer ticks are accounted as
+     * guest time. Transiently unmask interrupts so that any
+     * pending interrupts are taken.
+     *
+     * TODO: is there a barrier which ensures that pending interrupts are
+     * recognised? Currently this just hopes that the CPU takes any pending
+     * interrupts between the enable and disable.
+     */
+    local_irq_enable();
+    local_irq_disable();

     trace_kvm_out(vcpu);
-    guest_exit_irqoff();
+    guest_timing_exit_irqoff();
     local_irq_enable();

 out:
···
 /*
  * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
  */
-int kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
+static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
 {
     struct kvm_run *run = vcpu->run;
     u32 cause = vcpu->arch.host_cp0_cause;
···
         read_c0_config5() & MIPS_CONF5_MSAEN)
         __kvm_restore_msacsr(&vcpu->arch);
     }
+    return ret;
+}
+
+int noinstr kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
+{
+    int ret;
+
+    guest_state_exit_irqoff();
+    ret = __kvm_mips_handle_exit(vcpu);
+    guest_state_enter_irqoff();
+
     return ret;
 }
+9 -3
arch/mips/kvm/vz.c
···
 /**
  * _kvm_vz_save_htimer() - Switch to software emulation of guest timer.
  * @vcpu:       Virtual CPU.
- * @compare:    Pointer to write compare value to.
- * @cause:      Pointer to write cause value to.
+ * @out_compare: Pointer to write compare value to.
+ * @out_cause:  Pointer to write cause value to.
  *
  * Save VZ guest timer state and switch to software emulation of guest CP0
  * timer. The hard timer must already be in use, so preemption should be
···
 }

 /**
- * kvm_trap_vz_handle_cop_unusuable() - Guest used unusable coprocessor.
+ * kvm_trap_vz_handle_cop_unusable() - Guest used unusable coprocessor.
  * @vcpu:       Virtual CPU context.
  *
  * Handle when the guest attempts to use a coprocessor which hasn't been allowed
  * by the root context.
+ *
+ * Return:      value indicating whether to resume the host or the guest
+ *              (RESUME_HOST or RESUME_GUEST)
  */
 static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
···
  *
  * Handle when the guest attempts to use MSA when it is disabled in the root
  * context.
+ *
+ * Return:      value indicating whether to resume the host or the guest
+ *              (RESUME_HOST or RESUME_GUEST)
  */
 static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
 {
+31 -17
arch/riscv/kvm/vcpu.c
···
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
     struct kvm_cpu_context *cntx;
+    struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;

     /* Mark this VCPU never ran */
     vcpu->arch.ran_atleast_once = false;
···
     cntx->hstatus |= HSTATUS_VTW;
     cntx->hstatus |= HSTATUS_SPVP;
     cntx->hstatus |= HSTATUS_SPV;
+
+    /* By default, make CY, TM, and IR counters accessible in VU mode */
+    reset_csr->scounteren = 0x7;

     /* Setup VCPU timer */
     kvm_riscv_vcpu_timer_init(vcpu);
···
     csr_write(CSR_HVIP, csr->hvip);
 }

+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+    guest_state_enter_irqoff();
+    __kvm_riscv_switch_to(&vcpu->arch);
+    guest_state_exit_irqoff();
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
     int ret;
···
             continue;
         }

-        guest_enter_irqoff();
+        guest_timing_enter_irqoff();

-        __kvm_riscv_switch_to(&vcpu->arch);
+        kvm_riscv_vcpu_enter_exit(vcpu);

         vcpu->mode = OUTSIDE_GUEST_MODE;
         vcpu->stat.exits++;
···
         kvm_riscv_vcpu_sync_interrupts(vcpu);

         /*
-         * We may have taken a host interrupt in VS/VU-mode (i.e.
-         * while executing the guest). This interrupt is still
-         * pending, as we haven't serviced it yet!
+         * We must ensure that any pending interrupts are taken before
+         * we exit guest timing so that timer ticks are accounted as
+         * guest time. Transiently unmask interrupts so that any
+         * pending interrupts are taken.
          *
-         * We're now back in HS-mode with interrupts disabled
-         * so enabling the interrupts now will have the effect
-         * of taking the interrupt again, in HS-mode this time.
+         * There's no barrier which ensures that pending interrupts are
+         * recognised, so we just hope that the CPU takes any pending
+         * interrupts between the enable and disable.
          */
         local_irq_enable();
+        local_irq_disable();

-        /*
-         * We do local_irq_enable() before calling guest_exit() so
-         * that if a timer interrupt hits while running the guest
-         * we account that tick as being spent in the guest. We
-         * enable preemption after calling guest_exit() so that if
-         * we get preempted we make sure ticks after that is not
-         * counted as guest time.
-         */
-        guest_exit();
+        guest_timing_exit_irqoff();
+
+        local_irq_enable();

         preempt_enable();
+2 -1
arch/riscv/kvm/vcpu_sbi_base.c
···
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kvm_host.h>
+#include <linux/version.h>
 #include <asm/csr.h>
 #include <asm/sbi.h>
 #include <asm/kvm_vcpu_timer.h>
···
         *out_val = KVM_SBI_IMPID;
         break;
     case SBI_EXT_BASE_GET_IMP_VERSION:
-        *out_val = 0;
+        *out_val = LINUX_VERSION_CODE;
         break;
     case SBI_EXT_BASE_PROBE_EXT:
         if ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START &&
+2 -2
arch/x86/crypto/blake2s-shash.c
···
 static int crypto_blake2s_update_x86(struct shash_desc *desc,
                                      const u8 *in, unsigned int inlen)
 {
-    return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
+    return crypto_blake2s_update(desc, in, inlen, false);
 }

 static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
 {
-    return crypto_blake2s_final(desc, out, blake2s_compress);
+    return crypto_blake2s_final(desc, out, false);
 }

 #define BLAKE2S_ALG(name, driver_name, digest_size) \
+1 -1
arch/x86/include/asm/kvm-x86-ops.h
···
 KVM_X86_OP(load_eoi_exitmap)
 KVM_X86_OP(set_virtual_apic_mode)
 KVM_X86_OP_NULL(set_apic_access_page_addr)
-KVM_X86_OP(deliver_posted_interrupt)
+KVM_X86_OP(deliver_interrupt)
 KVM_X86_OP_NULL(sync_pir_to_irr)
 KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
+2 -1
arch/x86/include/asm/kvm_host.h
···
     void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
     void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
     void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
-    int (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
+    void (*deliver_interrupt)(struct kvm_lapic *apic, int delivery_mode,
+                              int trig_mode, int vector);
     int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
     int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
     int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
-14
arch/x86/include/asm/xen/hypervisor.h
···
     return hypervisor_cpuid_base("XenVMMXenVMM", 2);
 }

-#ifdef CONFIG_XEN
-extern bool __init xen_hvm_need_lapic(void);
-
-static inline bool __init xen_x2apic_para_available(void)
-{
-    return xen_hvm_need_lapic();
-}
-#else
-static inline bool __init xen_x2apic_para_available(void)
-{
-    return (xen_cpuid_base() != 0);
-}
-#endif
-
 struct pci_dev;

 #ifdef CONFIG_XEN_PV_DOM0
+7 -6
arch/x86/kvm/cpuid.c
···
     );

     kvm_cpu_cap_mask(CPUID_7_0_EBX,
-        F(FSGSBASE) | F(SGX) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
-        F(BMI2) | F(ERMS) | F(INVPCID) | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
-        F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
-        F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
-        F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/
-    );
+        F(FSGSBASE) | F(SGX) | F(BMI1) | F(HLE) | F(AVX2) |
+        F(FDP_EXCPTN_ONLY) | F(SMEP) | F(BMI2) | F(ERMS) | F(INVPCID) |
+        F(RTM) | F(ZERO_FCS_FDS) | 0 /*MPX*/ | F(AVX512F) |
+        F(AVX512DQ) | F(RDSEED) | F(ADX) | F(SMAP) | F(AVX512IFMA) |
+        F(CLFLUSHOPT) | F(CLWB) | 0 /*INTEL_PT*/ | F(AVX512PF) |
+        F(AVX512ER) | F(AVX512CD) | F(SHA_NI) | F(AVX512BW) |
+        F(AVX512VL));

     kvm_cpu_cap_mask(CPUID_7_ECX,
         F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
+2 -8
arch/x86/kvm/lapic.c
···
                    apic->regs + APIC_TMR);
         }

-        if (static_call(kvm_x86_deliver_posted_interrupt)(vcpu, vector)) {
-            kvm_lapic_set_irr(vector, apic);
-            kvm_make_request(KVM_REQ_EVENT, vcpu);
-            kvm_vcpu_kick(vcpu);
-        } else {
-            trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
-                                       trig_mode, vector);
-        }
+        static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
+                                               trig_mode, vector);
         break;

     case APIC_DM_REMRD:
+18 -3
arch/x86/kvm/svm/svm.c
···
             SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
 }

+static void svm_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+                                  int trig_mode, int vector)
+{
+    struct kvm_vcpu *vcpu = apic->vcpu;
+
+    if (svm_deliver_avic_intr(vcpu, vector)) {
+        kvm_lapic_set_irr(vector, apic);
+        kvm_make_request(KVM_REQ_EVENT, vcpu);
+        kvm_vcpu_kick(vcpu);
+    } else {
+        trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
+                                   trig_mode, vector);
+    }
+}
+
 static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
···
     struct vcpu_svm *svm = to_svm(vcpu);
     unsigned long vmcb_pa = svm->current_vmcb->pa;

-    kvm_guest_enter_irqoff();
+    guest_state_enter_irqoff();

     if (sev_es_guest(vcpu->kvm)) {
         __svm_sev_es_vcpu_run(vmcb_pa);
···
         vmload(__sme_page_pa(sd->save_area));
     }

-    kvm_guest_exit_irqoff();
+    guest_state_exit_irqoff();
 }

 static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
···
     .pmu_ops = &amd_pmu_ops,
     .nested_ops = &svm_nested_ops,

-    .deliver_posted_interrupt = svm_deliver_avic_intr,
+    .deliver_interrupt = svm_deliver_interrupt,
     .dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
     .update_pi_irte = svm_update_pi_irte,
     .setup_mce = svm_setup_mce,
+18 -3
arch/x86/kvm/vmx/vmx.c
···
     return 0;
 }

+static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+                                  int trig_mode, int vector)
+{
+    struct kvm_vcpu *vcpu = apic->vcpu;
+
+    if (vmx_deliver_posted_interrupt(vcpu, vector)) {
+        kvm_lapic_set_irr(vector, apic);
+        kvm_make_request(KVM_REQ_EVENT, vcpu);
+        kvm_vcpu_kick(vcpu);
+    } else {
+        trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
+                                   trig_mode, vector);
+    }
+}
+
 /*
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
···
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
                                         struct vcpu_vmx *vmx)
 {
-    kvm_guest_enter_irqoff();
+    guest_state_enter_irqoff();

     /* L1D Flush includes CPU buffer clear to mitigate MDS */
     if (static_branch_unlikely(&vmx_l1d_should_flush))
···

     vcpu->arch.cr2 = native_read_cr2();

-    kvm_guest_exit_irqoff();
+    guest_state_exit_irqoff();
 }

 static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
···
     .hwapic_isr_update = vmx_hwapic_isr_update,
     .guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
     .sync_pir_to_irr = vmx_sync_pir_to_irr,
-    .deliver_posted_interrupt = vmx_deliver_posted_interrupt,
+    .deliver_interrupt = vmx_deliver_interrupt,
     .dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,

     .set_tss_addr = vmx_set_tss_addr,
+6 -4
arch/x86/kvm/x86.c
···
 u64 __read_mostly kvm_mce_cap_supported = MCG_CTL_P | MCG_SER_P;
 EXPORT_SYMBOL_GPL(kvm_mce_cap_supported);

+#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))
+
 #define emul_to_vcpu(ctxt) \
     ((struct kvm_vcpu *)(ctxt)->vcpu)

···
     void __user *uaddr = (void __user*)(unsigned long)attr->addr;

     if ((u64)(unsigned long)uaddr != attr->addr)
-        return ERR_PTR(-EFAULT);
+        return ERR_PTR_USR(-EFAULT);
     return uaddr;
 }

···
         set_debugreg(0, 7);
     }

+    guest_timing_enter_irqoff();
+
     for (;;) {
         /*
          * Assert that vCPU vs. VM APICv state is consistent. An APICv
···
      * of accounting via context tracking, but the loss of accuracy is
      * acceptable for all known use cases.
      */
-    vtime_account_guest_exit();
+    guest_timing_exit_irqoff();

     if (lapic_in_kernel(vcpu)) {
         s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
···
     cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
     kvm_free_pit(kvm);
 }
-
-#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))

 /**
  * __x86_set_memory_region: Setup KVM internal memory slot
-45
arch/x86/kvm/x86.h
···

 void kvm_spurious_fault(void);

-static __always_inline void kvm_guest_enter_irqoff(void)
-{
-    /*
-     * VMENTER enables interrupts (host state), but the kernel state is
-     * interrupts disabled when this is invoked. Also tell RCU about
-     * it. This is the same logic as for exit_to_user_mode().
-     *
-     * This ensures that e.g. latency analysis on the host observes
-     * guest mode as interrupt enabled.
-     *
-     * guest_enter_irqoff() informs context tracking about the
-     * transition to guest mode and if enabled adjusts RCU state
-     * accordingly.
-     */
-    instrumentation_begin();
-    trace_hardirqs_on_prepare();
-    lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-    instrumentation_end();
-
-    guest_enter_irqoff();
-    lockdep_hardirqs_on(CALLER_ADDR0);
-}
-
-static __always_inline void kvm_guest_exit_irqoff(void)
-{
-    /*
-     * VMEXIT disables interrupts (host state), but tracing and lockdep
-     * have them in state 'on' as recorded before entering guest mode.
-     * Same as enter_from_user_mode().
-     *
-     * context_tracking_guest_exit() restores host context and reinstates
-     * RCU if enabled and required.
-     *
-     * This needs to be done immediately after VM-Exit, before any code
-     * that might contain tracepoints or call out to the greater world,
-     * e.g. before x86_spec_ctrl_restore_host().
-     */
-    lockdep_hardirqs_off(CALLER_ADDR0);
-    context_tracking_guest_exit();
-
-    instrumentation_begin();
-    trace_hardirqs_off_finish();
-    instrumentation_end();
-}
-
 #define KVM_NESTED_VMENTER_CONSISTENCY_CHECK(consistency_check) \
 ({ \
     bool failed = (consistency_check); \
+4 -9
arch/x86/xen/enlighten_hvm.c
··· 9 9 #include <xen/events.h> 10 10 #include <xen/interface/memory.h> 11 11 12 + #include <asm/apic.h> 12 13 #include <asm/cpu.h> 13 14 #include <asm/smp.h> 14 15 #include <asm/io_apic.h> ··· 243 242 } 244 243 early_param("xen_no_vector_callback", xen_parse_no_vector_callback); 245 244 246 - bool __init xen_hvm_need_lapic(void) 245 + static __init bool xen_x2apic_available(void) 247 246 { 248 - if (xen_pv_domain()) 249 - return false; 250 - if (!xen_hvm_domain()) 251 - return false; 252 - if (xen_feature(XENFEAT_hvm_pirqs) && xen_have_vector_callback) 253 - return false; 254 - return true; 247 + return x2apic_supported(); 255 248 } 256 249 257 250 static __init void xen_hvm_guest_late_init(void) ··· 307 312 .detect = xen_platform_hvm, 308 313 .type = X86_HYPER_XEN_HVM, 309 314 .init.init_platform = xen_hvm_guest_init, 310 - .init.x2apic_available = xen_x2apic_para_available, 315 + .init.x2apic_available = xen_x2apic_available, 311 316 .init.init_mem_mapping = xen_hvm_init_mem_mapping, 312 317 .init.guest_late_init = xen_hvm_guest_late_init, 313 318 .runtime.pin_vcpu = xen_pin_vcpu,
-4
arch/x86/xen/enlighten_pv.c
··· 1341 1341 1342 1342 xen_acpi_sleep_register(); 1343 1343 1344 - /* Avoid searching for BIOS MP tables */ 1345 - x86_init.mpparse.find_smp_config = x86_init_noop; 1346 - x86_init.mpparse.get_smp_config = x86_init_uint_noop; 1347 - 1348 1344 xen_boot_params_init_edd(); 1349 1345 1350 1346 #ifdef CONFIG_ACPI
+6 -20
arch/x86/xen/smp_pv.c
··· 148 148 return rc; 149 149 } 150 150 151 - static void __init xen_fill_possible_map(void) 152 - { 153 - int i, rc; 154 - 155 - if (xen_initial_domain()) 156 - return; 157 - 158 - for (i = 0; i < nr_cpu_ids; i++) { 159 - rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL); 160 - if (rc >= 0) { 161 - num_processors++; 162 - set_cpu_possible(i, true); 163 - } 164 - } 165 - } 166 - 167 - static void __init xen_filter_cpu_maps(void) 151 + static void __init _get_smp_config(unsigned int early) 168 152 { 169 153 int i, rc; 170 154 unsigned int subtract = 0; 171 155 172 - if (!xen_initial_domain()) 156 + if (early) 173 157 return; 174 158 175 159 num_processors = 0; ··· 194 210 * sure the old memory can be recycled. */ 195 211 make_lowmem_page_readwrite(xen_initial_gdt); 196 212 197 - xen_filter_cpu_maps(); 198 213 xen_setup_vcpu_info_placement(); 199 214 200 215 /* ··· 459 476 void __init xen_smp_init(void) 460 477 { 461 478 smp_ops = xen_smp_ops; 462 - xen_fill_possible_map(); 479 + 480 + /* Avoid searching for BIOS MP tables */ 481 + x86_init.mpparse.find_smp_config = x86_init_noop; 482 + x86_init.mpparse.get_smp_config = _get_smp_config; 463 483 }
+1 -1
block/bio-integrity.c
··· 373 373 struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk); 374 374 unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9); 375 375 376 - bip->bip_iter.bi_sector += bytes_done >> 9; 376 + bip->bip_iter.bi_sector += bio_integrity_intervals(bi, bytes_done >> 9); 377 377 bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes); 378 378 } 379 379
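The bio-integrity fix above matters because the integrity iterator's `bi_sector` counts protection intervals, not raw 512-byte sectors; when the protection interval is larger than a sector (e.g. 4KB-formatted devices), advancing by raw sectors overshoots the metadata. A minimal sketch of the conversion, modeled on the kernel's `bio_integrity_intervals()` helper (the function name here is a stand-in):

```c
#include <assert.h>

/* Protection intervals covered by `sectors` 512-byte sectors, for an
 * interval of (1 << interval_exp) bytes; 9 is log2(512). Stand-in for
 * the kernel's bio_integrity_intervals(). */
unsigned int integrity_intervals(unsigned int interval_exp, unsigned int sectors)
{
    return sectors >> (interval_exp - 9);
}
```

For a 4096-byte interval (`interval_exp` = 12), eight data sectors map to a single interval, so the iterator must advance by 1, not 8 — exactly the difference between the old and new line in the hunk.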
+19 -14
block/fops.c
··· 566 566 { 567 567 struct block_device *bdev = iocb->ki_filp->private_data; 568 568 loff_t size = bdev_nr_bytes(bdev); 569 - size_t count = iov_iter_count(to); 570 569 loff_t pos = iocb->ki_pos; 571 570 size_t shorted = 0; 572 571 ssize_t ret = 0; 572 + size_t count; 573 573 574 - if (unlikely(pos + count > size)) { 574 + if (unlikely(pos + iov_iter_count(to) > size)) { 575 575 if (pos >= size) 576 576 return 0; 577 577 size -= pos; 578 - if (count > size) { 579 - shorted = count - size; 580 - iov_iter_truncate(to, size); 581 - } 578 + shorted = iov_iter_count(to) - size; 579 + iov_iter_truncate(to, size); 582 580 } 581 + 582 + count = iov_iter_count(to); 583 + if (!count) 584 + goto reexpand; /* skip atime */ 583 585 584 586 if (iocb->ki_flags & IOCB_DIRECT) { 585 587 struct address_space *mapping = iocb->ki_filp->f_mapping; 586 588 587 589 if (iocb->ki_flags & IOCB_NOWAIT) { 588 - if (filemap_range_needs_writeback(mapping, iocb->ki_pos, 589 - iocb->ki_pos + count - 1)) 590 - return -EAGAIN; 590 + if (filemap_range_needs_writeback(mapping, pos, 591 + pos + count - 1)) { 592 + ret = -EAGAIN; 593 + goto reexpand; 594 + } 591 595 } else { 592 - ret = filemap_write_and_wait_range(mapping, 593 - iocb->ki_pos, 594 - iocb->ki_pos + count - 1); 596 + ret = filemap_write_and_wait_range(mapping, pos, 597 + pos + count - 1); 595 598 if (ret < 0) 596 - return ret; 599 + goto reexpand; 597 600 } 598 601 599 602 file_accessed(iocb->ki_filp); ··· 606 603 iocb->ki_pos += ret; 607 604 count -= ret; 608 605 } 606 + iov_iter_revert(to, count - iov_iter_count(to)); 609 607 if (ret < 0 || !count) 610 - return ret; 608 + goto reexpand; 611 609 } 612 610 613 611 ret = filemap_read(iocb, to, ret); 614 612 613 + reexpand: 615 614 if (unlikely(shorted)) 616 615 iov_iter_reexpand(to, iov_iter_count(to) + shorted); 617 616 return ret;
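The block/fops.c rework above hinges on one invariant: if the read is clamped at end-of-device with `iov_iter_truncate()`, the shaved-off byte count ("shorted") must be restored via `iov_iter_reexpand()` on *every* exit path, which is why the early returns become `goto reexpand`. A small model of that bookkeeping (types and names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the patched blkdev read clamping: cap a request that runs
 * past end-of-device, and return how many bytes were shaved off so the
 * caller can re-add them (iov_iter_reexpand()) before returning. */
typedef struct { uint64_t count; } iter_t;

uint64_t clamp_to_device(iter_t *it, uint64_t pos, uint64_t dev_size)
{
    uint64_t shorted = 0;

    if (pos + it->count > dev_size) {
        uint64_t avail = pos >= dev_size ? 0 : dev_size - pos;

        shorted = it->count - avail;
        it->count = avail;      /* iov_iter_truncate() */
    }
    return shorted;             /* re-add on every exit path */
}
```

E.g. a 30-byte read at offset 90 of a 100-byte device is truncated to 10 bytes with 20 bytes shorted; skipping the reexpand on an error path would leave the caller's iterator permanently shrunk.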
+2 -2
crypto/blake2s_generic.c
··· 15 15 static int crypto_blake2s_update_generic(struct shash_desc *desc, 16 16 const u8 *in, unsigned int inlen) 17 17 { 18 - return crypto_blake2s_update(desc, in, inlen, blake2s_compress_generic); 18 + return crypto_blake2s_update(desc, in, inlen, true); 19 19 } 20 20 21 21 static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out) 22 22 { 23 - return crypto_blake2s_final(desc, out, blake2s_compress_generic); 23 + return crypto_blake2s_final(desc, out, true); 24 24 } 25 25 26 26 #define BLAKE2S_ALG(name, driver_name, digest_size) \
+1
drivers/acpi/Kconfig
··· 11 11 depends on ARCH_SUPPORTS_ACPI 12 12 select PNP 13 13 select NLS 14 + select CRC32 14 15 default y if X86 15 16 help 16 17 Advanced Configuration and Power Interface (ACPI) support for
+10
drivers/ata/libata-core.c
··· 2007 2007 { 2008 2008 struct ata_port *ap = dev->link->ap; 2009 2009 2010 + if (dev->horkage & ATA_HORKAGE_NO_LOG_DIR) 2011 + return false; 2012 + 2010 2013 if (ata_read_log_page(dev, ATA_LOG_DIRECTORY, 0, ap->sector_buf, 1)) 2011 2014 return false; 2012 2015 return get_unaligned_le16(&ap->sector_buf[log * 2]) ? true : false; ··· 4075 4072 { "WDC WD2500JD-*", NULL, ATA_HORKAGE_WD_BROKEN_LPM }, 4076 4073 { "WDC WD3000JD-*", NULL, ATA_HORKAGE_WD_BROKEN_LPM }, 4077 4074 { "WDC WD3200JD-*", NULL, ATA_HORKAGE_WD_BROKEN_LPM }, 4075 + 4076 + /* 4077 + * This sata dom device goes on a walkabout when the ATA_LOG_DIRECTORY 4078 + * log page is accessed. Ensure we never ask for this log page with 4079 + * these devices. 4080 + */ 4081 + { "SATADOM-ML 3ME", NULL, ATA_HORKAGE_NO_LOG_DIR }, 4078 4082 4079 4083 /* End Marker */ 4080 4084 { }
+22 -17
drivers/char/random.c
··· 762 762 return arch_init; 763 763 } 764 764 765 - static bool __init crng_init_try_arch_early(struct crng_state *crng) 765 + static bool __init crng_init_try_arch_early(void) 766 766 { 767 767 int i; 768 768 bool arch_init = true; ··· 774 774 rv = random_get_entropy(); 775 775 arch_init = false; 776 776 } 777 - crng->state[i] ^= rv; 777 + primary_crng.state[i] ^= rv; 778 778 } 779 779 780 780 return arch_init; ··· 788 788 crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; 789 789 } 790 790 791 - static void __init crng_initialize_primary(struct crng_state *crng) 791 + static void __init crng_initialize_primary(void) 792 792 { 793 - _extract_entropy(&crng->state[4], sizeof(u32) * 12); 794 - if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { 793 + _extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); 794 + if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { 795 795 invalidate_batched_entropy(); 796 796 numa_crng_init(); 797 797 crng_init = 2; 798 798 pr_notice("crng init done (trusting CPU's manufacturer)\n"); 799 799 } 800 - crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; 800 + primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; 801 801 } 802 802 803 - static void crng_finalize_init(struct crng_state *crng) 803 + static void crng_finalize_init(void) 804 804 { 805 - if (crng != &primary_crng || crng_init >= 2) 806 - return; 807 805 if (!system_wq) { 808 806 /* We can't call numa_crng_init until we have workqueues, 809 807 * so mark this for processing later. 
*/ ··· 812 814 invalidate_batched_entropy(); 813 815 numa_crng_init(); 814 816 crng_init = 2; 817 + crng_need_final_init = false; 815 818 process_random_ready_list(); 816 819 wake_up_interruptible(&crng_init_wait); 817 820 kill_fasync(&fasync, SIGIO, POLL_IN); ··· 979 980 memzero_explicit(&buf, sizeof(buf)); 980 981 WRITE_ONCE(crng->init_time, jiffies); 981 982 spin_unlock_irqrestore(&crng->lock, flags); 982 - crng_finalize_init(crng); 983 + if (crng == &primary_crng && crng_init < 2) 984 + crng_finalize_init(); 983 985 } 984 986 985 987 static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]) ··· 1697 1697 { 1698 1698 init_std_data(); 1699 1699 if (crng_need_final_init) 1700 - crng_finalize_init(&primary_crng); 1701 - crng_initialize_primary(&primary_crng); 1700 + crng_finalize_init(); 1701 + crng_initialize_primary(); 1702 1702 crng_global_init_time = jiffies; 1703 1703 if (ratelimit_disable) { 1704 1704 urandom_warning.interval = 0; ··· 1856 1856 */ 1857 1857 if (!capable(CAP_SYS_ADMIN)) 1858 1858 return -EPERM; 1859 - input_pool.entropy_count = 0; 1859 + if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) { 1860 + wake_up_interruptible(&random_write_wait); 1861 + kill_fasync(&fasync, SIGIO, POLL_OUT); 1862 + } 1860 1863 return 0; 1861 1864 case RNDRESEEDCRNG: 1862 1865 if (!capable(CAP_SYS_ADMIN)) ··· 2208 2205 return; 2209 2206 } 2210 2207 2211 - /* Suspend writing if we're above the trickle threshold. 2208 + /* Throttle writing if we're above the trickle threshold. 2212 2209 * We'll be woken up again once below random_write_wakeup_thresh, 2213 - * or when the calling thread is about to terminate. 2210 + * when the calling thread is about to terminate, or once 2211 + * CRNG_RESEED_INTERVAL has lapsed. 
2214 2212 */ 2215 - wait_event_interruptible(random_write_wait, 2213 + wait_event_interruptible_timeout(random_write_wait, 2216 2214 !system_wq || kthread_should_stop() || 2217 - POOL_ENTROPY_BITS() <= random_write_wakeup_bits); 2215 + POOL_ENTROPY_BITS() <= random_write_wakeup_bits, 2216 + CRNG_RESEED_INTERVAL); 2218 2217 mix_pool_bytes(buffer, count); 2219 2218 credit_entropy_bits(entropy); 2220 2219 }
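The RNDCLEARPOOL change in the random.c hunk swaps a plain store of `entropy_count = 0` for an `xchg`, so "was the pool non-empty?" and the reset are one atomic step, and blocked writers are only woken when entropy was actually discarded. A hedged sketch of that pattern using C11 atomics (`atomic_exchange` plays the role of the kernel's `xchg`; names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Atomically zero the entropy count and report whether writers blocked
 * on a full pool should be woken. A plain store followed by a separate
 * read could race with a concurrent credit; the exchange cannot. */
bool clear_pool_needs_wakeup(atomic_int *entropy_count, int wakeup_bits)
{
    return atomic_exchange(entropy_count, 0) != 0 && wakeup_bits != 0;
}
```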
+2
drivers/dma-buf/dma-heap.c
··· 14 14 #include <linux/xarray.h> 15 15 #include <linux/list.h> 16 16 #include <linux/slab.h> 17 + #include <linux/nospec.h> 17 18 #include <linux/uaccess.h> 18 19 #include <linux/syscalls.h> 19 20 #include <linux/dma-heap.h> ··· 136 135 if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds)) 137 136 return -EINVAL; 138 137 138 + nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds)); 139 139 /* Get the kernel ioctl cmd that matches */ 140 140 kcmd = dma_heap_ioctl_cmds[nr]; 141 141
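The dma-heap hunk is a Spectre-v1 hardening: after the `nr >= ARRAY_SIZE(...)` bounds check, `array_index_nospec()` clamps `nr` with a branchless mask so a mispredicted branch cannot speculatively index past the table. A simplified sketch of that clamp (this is not the kernel implementation, and the arithmetic right shift of a negative value is technically implementation-defined, though universal on mainstream compilers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Branchless clamp in the spirit of array_index_nospec(): returns idx
 * when idx < size and 0 otherwise, with no conditional branch for the
 * CPU to speculate past. mask is all-ones iff idx < size. */
size_t index_nospec(size_t idx, size_t size)
{
    size_t mask = (size_t)(~(intptr_t)(idx | (size - 1 - idx)) >>
                           (sizeof(intptr_t) * 8 - 1));
    return idx & mask;
}
```

The bounds check still rejects bad indices architecturally; the mask only guarantees that, under speculation, an out-of-range `nr` reads element 0 instead of attacker-chosen memory.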
+8 -2
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1408 1408 int amdgpu_acpi_pcie_notify_device_ready(struct amdgpu_device *adev); 1409 1409 1410 1410 void amdgpu_acpi_get_backlight_caps(struct amdgpu_dm_backlight_caps *caps); 1411 - bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev); 1412 1411 void amdgpu_acpi_detect(void); 1413 1412 #else 1414 1413 static inline int amdgpu_acpi_init(struct amdgpu_device *adev) { return 0; } 1415 1414 static inline void amdgpu_acpi_fini(struct amdgpu_device *adev) { } 1416 - static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; } 1417 1415 static inline void amdgpu_acpi_detect(void) { } 1418 1416 static inline bool amdgpu_acpi_is_power_shift_control_supported(void) { return false; } 1419 1417 static inline int amdgpu_acpi_power_shift_control(struct amdgpu_device *adev, 1420 1418 u8 dev_state, bool drv_state) { return 0; } 1421 1419 static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev, 1422 1420 enum amdgpu_ss ss_state) { return 0; } 1421 + #endif 1422 + 1423 + #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND) 1424 + bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev); 1425 + bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev); 1426 + #else 1427 + static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; } 1428 + static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; } 1423 1429 #endif 1424 1430 1425 1431 int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+32 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1031 1031 } 1032 1032 } 1033 1033 1034 + #if IS_ENABLED(CONFIG_SUSPEND) 1035 + /** 1036 + * amdgpu_acpi_is_s3_active 1037 + * 1038 + * @adev: amdgpu_device_pointer 1039 + * 1040 + * returns true if supported, false if not. 1041 + */ 1042 + bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) 1043 + { 1044 + return !(adev->flags & AMD_IS_APU) || 1045 + (pm_suspend_target_state == PM_SUSPEND_MEM); 1046 + } 1047 + 1034 1048 /** 1035 1049 * amdgpu_acpi_is_s0ix_active 1036 1050 * ··· 1054 1040 */ 1055 1041 bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) 1056 1042 { 1057 - #if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_SUSPEND) 1058 - if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) { 1059 - if (adev->flags & AMD_IS_APU) 1060 - return pm_suspend_target_state == PM_SUSPEND_TO_IDLE; 1043 + if (!(adev->flags & AMD_IS_APU) || 1044 + (pm_suspend_target_state != PM_SUSPEND_TO_IDLE)) 1045 + return false; 1046 + 1047 + if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) { 1048 + dev_warn_once(adev->dev, 1049 + "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n" 1050 + "To use suspend-to-idle change the sleep mode in BIOS setup.\n"); 1051 + return false; 1061 1052 } 1062 - #endif 1053 + 1054 + #if !IS_ENABLED(CONFIG_AMD_PMC) 1055 + dev_warn_once(adev->dev, 1056 + "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n"); 1063 1057 return false; 1058 + #else 1059 + return true; 1060 + #endif /* CONFIG_AMD_PMC */ 1064 1061 } 1062 + 1063 + #endif /* CONFIG_SUSPEND */
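The reworked `amdgpu_acpi_is_s0ix_active()` above is a chain of gating conditions; condensed as a pure predicate it reads as below (a sketch only — the real function also emits `dev_warn_once()` for the last two failures rather than silently returning false):

```c
#include <assert.h>
#include <stdbool.h>

/* s0ix is only usable on an APU actually targeting suspend-to-idle,
 * with the FADT low-power-S0 flag advertised by the BIOS and the AMD
 * PMC driver (CONFIG_AMD_PMC) built in. */
bool s0ix_usable(bool is_apu, bool target_s2idle, bool fadt_lps0, bool has_amd_pmc)
{
    if (!is_apu || !target_s2idle)
        return false;
    if (!fadt_lps0)
        return false;   /* BIOS not configured for suspend-to-idle */
    return has_amd_pmc; /* without PMC support, fall back (with warning) */
}
```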
+9 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 2246 2246 static int amdgpu_pmops_prepare(struct device *dev) 2247 2247 { 2248 2248 struct drm_device *drm_dev = dev_get_drvdata(dev); 2249 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 2249 2250 2250 2251 /* Return a positive number here so 2251 2252 * DPM_FLAG_SMART_SUSPEND works properly 2252 2253 */ 2253 2254 if (amdgpu_device_supports_boco(drm_dev)) 2254 - return pm_runtime_suspended(dev) && 2255 - pm_suspend_via_firmware(); 2255 + return pm_runtime_suspended(dev); 2256 + 2257 + /* if we will not support s3 or s2i for the device 2258 + * then skip suspend 2259 + */ 2260 + if (!amdgpu_acpi_is_s0ix_active(adev) && 2261 + !amdgpu_acpi_is_s3_active(adev)) 2262 + return 1; 2256 2263 2257 2264 return 0; 2258 2265 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 1904 1904 unsigned i; 1905 1905 int r; 1906 1906 1907 - if (direct_submit && !ring->sched.ready) { 1907 + if (!direct_submit && !ring->sched.ready) { 1908 1908 DRM_ERROR("Trying to move memory with ring turned off.\n"); 1909 1909 return -EINVAL; 1910 1910 }
+3
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
··· 1140 1140 { 1141 1141 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1142 1142 1143 + if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 3)) 1144 + return; 1145 + 1143 1146 adev->mmhub.funcs->get_clockgating(adev, flags); 1144 1147 1145 1148 if (adev->ip_versions[ATHUB_HWIP][0] >= IP_VERSION(2, 1, 0))
+8 -8
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
··· 570 570 .wm_inst = WM_A, 571 571 .wm_type = WM_TYPE_PSTATE_CHG, 572 572 .pstate_latency_us = 11.65333, 573 - .sr_exit_time_us = 7.95, 574 - .sr_enter_plus_exit_time_us = 9, 573 + .sr_exit_time_us = 13.5, 574 + .sr_enter_plus_exit_time_us = 16.5, 575 575 .valid = true, 576 576 }, 577 577 { 578 578 .wm_inst = WM_B, 579 579 .wm_type = WM_TYPE_PSTATE_CHG, 580 580 .pstate_latency_us = 11.65333, 581 - .sr_exit_time_us = 9.82, 582 - .sr_enter_plus_exit_time_us = 11.196, 581 + .sr_exit_time_us = 13.5, 582 + .sr_enter_plus_exit_time_us = 16.5, 583 583 .valid = true, 584 584 }, 585 585 { 586 586 .wm_inst = WM_C, 587 587 .wm_type = WM_TYPE_PSTATE_CHG, 588 588 .pstate_latency_us = 11.65333, 589 - .sr_exit_time_us = 9.89, 590 - .sr_enter_plus_exit_time_us = 11.24, 589 + .sr_exit_time_us = 13.5, 590 + .sr_enter_plus_exit_time_us = 16.5, 591 591 .valid = true, 592 592 }, 593 593 { 594 594 .wm_inst = WM_D, 595 595 .wm_type = WM_TYPE_PSTATE_CHG, 596 596 .pstate_latency_us = 11.65333, 597 - .sr_exit_time_us = 9.748, 598 - .sr_enter_plus_exit_time_us = 11.102, 597 + .sr_exit_time_us = 13.5, 598 + .sr_enter_plus_exit_time_us = 16.5, 599 599 .valid = true, 600 600 }, 601 601 }
+10 -10
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
··· 329 329 330 330 }; 331 331 332 - static struct wm_table ddr4_wm_table = { 332 + static struct wm_table ddr5_wm_table = { 333 333 .entries = { 334 334 { 335 335 .wm_inst = WM_A, 336 336 .wm_type = WM_TYPE_PSTATE_CHG, 337 337 .pstate_latency_us = 11.72, 338 - .sr_exit_time_us = 6.09, 339 - .sr_enter_plus_exit_time_us = 7.14, 338 + .sr_exit_time_us = 9, 339 + .sr_enter_plus_exit_time_us = 11, 340 340 .valid = true, 341 341 }, 342 342 { 343 343 .wm_inst = WM_B, 344 344 .wm_type = WM_TYPE_PSTATE_CHG, 345 345 .pstate_latency_us = 11.72, 346 - .sr_exit_time_us = 10.12, 347 - .sr_enter_plus_exit_time_us = 11.48, 346 + .sr_exit_time_us = 9, 347 + .sr_enter_plus_exit_time_us = 11, 348 348 .valid = true, 349 349 }, 350 350 { 351 351 .wm_inst = WM_C, 352 352 .wm_type = WM_TYPE_PSTATE_CHG, 353 353 .pstate_latency_us = 11.72, 354 - .sr_exit_time_us = 10.12, 355 - .sr_enter_plus_exit_time_us = 11.48, 354 + .sr_exit_time_us = 9, 355 + .sr_enter_plus_exit_time_us = 11, 356 356 .valid = true, 357 357 }, 358 358 { 359 359 .wm_inst = WM_D, 360 360 .wm_type = WM_TYPE_PSTATE_CHG, 361 361 .pstate_latency_us = 11.72, 362 - .sr_exit_time_us = 10.12, 363 - .sr_enter_plus_exit_time_us = 11.48, 362 + .sr_exit_time_us = 9, 363 + .sr_enter_plus_exit_time_us = 11, 364 364 .valid = true, 365 365 }, 366 366 } ··· 687 687 if (ctx->dc_bios->integrated_info->memory_type == LpDdr5MemType) { 688 688 dcn31_bw_params.wm_table = lpddr5_wm_table; 689 689 } else { 690 - dcn31_bw_params.wm_table = ddr4_wm_table; 690 + dcn31_bw_params.wm_table = ddr5_wm_table; 691 691 } 692 692 /* Saved clocks configured at boot for debug purposes */ 693 693 dcn31_dump_clk_registers(&clk_mgr->base.base.boot_snapshot, &clk_mgr->base.base, &log_info);
-5
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 1608 1608 pipe_ctx->stream_res.stream_enc, 1609 1609 pipe_ctx->stream_res.tg->inst); 1610 1610 1611 - if (dc_is_embedded_signal(pipe_ctx->stream->signal) && 1612 - pipe_ctx->stream_res.stream_enc->funcs->reset_fifo) 1613 - pipe_ctx->stream_res.stream_enc->funcs->reset_fifo( 1614 - pipe_ctx->stream_res.stream_enc); 1615 - 1616 1611 if (dc_is_dp_signal(pipe_ctx->stream->signal)) 1617 1612 dp_source_sequence_trace(link, DPCD_SOURCE_SEQ_AFTER_CONNECT_DIG_FE_OTG); 1618 1613
-15
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
··· 902 902 903 903 } 904 904 905 - void enc1_stream_encoder_reset_fifo( 906 - struct stream_encoder *enc) 907 - { 908 - struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc); 909 - 910 - /* set DIG_START to 0x1 to reset FIFO */ 911 - REG_UPDATE(DIG_FE_CNTL, DIG_START, 1); 912 - udelay(100); 913 - 914 - /* write 0 to take the FIFO out of reset */ 915 - REG_UPDATE(DIG_FE_CNTL, DIG_START, 0); 916 - } 917 - 918 905 void enc1_stream_encoder_dp_blank( 919 906 struct dc_link *link, 920 907 struct stream_encoder *enc) ··· 1587 1600 enc1_stream_encoder_send_immediate_sdp_message, 1588 1601 .stop_dp_info_packets = 1589 1602 enc1_stream_encoder_stop_dp_info_packets, 1590 - .reset_fifo = 1591 - enc1_stream_encoder_reset_fifo, 1592 1603 .dp_blank = 1593 1604 enc1_stream_encoder_dp_blank, 1594 1605 .dp_unblank =
-3
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
··· 626 626 void enc1_stream_encoder_stop_dp_info_packets( 627 627 struct stream_encoder *enc); 628 628 629 - void enc1_stream_encoder_reset_fifo( 630 - struct stream_encoder *enc); 631 - 632 629 void enc1_stream_encoder_dp_blank( 633 630 struct dc_link *link, 634 631 struct stream_encoder *enc);
-2
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_stream_encoder.c
··· 593 593 enc1_stream_encoder_send_immediate_sdp_message, 594 594 .stop_dp_info_packets = 595 595 enc1_stream_encoder_stop_dp_info_packets, 596 - .reset_fifo = 597 - enc1_stream_encoder_reset_fifo, 598 596 .dp_blank = 599 597 enc1_stream_encoder_dp_blank, 600 598 .dp_unblank =
-2
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
··· 789 789 enc3_stream_encoder_update_dp_info_packets, 790 790 .stop_dp_info_packets = 791 791 enc1_stream_encoder_stop_dp_info_packets, 792 - .reset_fifo = 793 - enc1_stream_encoder_reset_fifo, 794 792 .dp_blank = 795 793 enc1_stream_encoder_dp_blank, 796 794 .dp_unblank =
-4
drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
··· 164 164 void (*stop_dp_info_packets)( 165 165 struct stream_encoder *enc); 166 166 167 - void (*reset_fifo)( 168 - struct stream_encoder *enc 169 - ); 170 - 171 167 void (*dp_blank)( 172 168 struct dc_link *link, 173 169 struct stream_encoder *enc);
+3 -3
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 3696 3696 3697 3697 static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu) 3698 3698 { 3699 - struct smu_table_context *table_context = &smu->smu_table; 3700 - PPTable_t *smc_pptable = table_context->driver_pptable; 3699 + uint16_t *mgpu_fan_boost_limit_rpm; 3701 3700 3701 + GET_PPTABLE_MEMBER(MGpuFanBoostLimitRpm, &mgpu_fan_boost_limit_rpm); 3702 3702 /* 3703 3703 * Skip the MGpuFanBoost setting for those ASICs 3704 3704 * which do not support it 3705 3705 */ 3706 - if (!smc_pptable->MGpuFanBoostLimitRpm) 3706 + if (*mgpu_fan_boost_limit_rpm == 0) 3707 3707 return 0; 3708 3708 3709 3709 return smu_cmn_send_smc_msg_with_param(smu,
+3
drivers/gpu/drm/i915/display/intel_overlay.c
··· 959 959 const struct intel_crtc_state *pipe_config = 960 960 overlay->crtc->config; 961 961 962 + if (rec->dst_height == 0 || rec->dst_width == 0) 963 + return -EINVAL; 964 + 962 965 if (rec->dst_x < pipe_config->pipe_src_w && 963 966 rec->dst_x + rec->dst_width <= pipe_config->pipe_src_w && 964 967 rec->dst_y < pipe_config->pipe_src_h &&
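The intel_overlay hunk rejects zero-sized destination rectangles before the existing bounds checks, which a degenerate rect could otherwise pass. The validation order can be sketched as follows (names hypothetical, `-1` standing in for `-EINVAL`):

```c
#include <assert.h>

/* Overlay destination-window validation after the fix: reject a
 * zero-width or zero-height rectangle up front, then require the window
 * to lie fully inside the pipe source. */
int check_overlay_dst(unsigned x, unsigned y, unsigned w, unsigned h,
                      unsigned src_w, unsigned src_h)
{
    if (w == 0 || h == 0)
        return -1;
    if (x < src_w && x + w <= src_w && y < src_h && y + h <= src_h)
        return 0;
    return -1;
}
```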
+2 -1
drivers/gpu/drm/i915/display/intel_tc.c
··· 345 345 static bool adl_tc_phy_status_complete(struct intel_digital_port *dig_port) 346 346 { 347 347 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 348 + enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port); 348 349 struct intel_uncore *uncore = &i915->uncore; 349 350 u32 val; 350 351 351 - val = intel_uncore_read(uncore, TCSS_DDI_STATUS(dig_port->tc_phy_fia_idx)); 352 + val = intel_uncore_read(uncore, TCSS_DDI_STATUS(tc_port)); 352 353 if (val == 0xffffffff) { 353 354 drm_dbg_kms(&i915->drm, 354 355 "Port %s: PHY in TCCOLD, assuming not complete\n",
+7 -2
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 2505 2505 timeout) < 0) { 2506 2506 i915_request_put(rq); 2507 2507 2508 - tl = intel_context_timeline_lock(ce); 2508 + /* 2509 + * Error path, cannot use intel_context_timeline_lock as 2510 + * that is user interruptable and this clean up step 2511 + * must be done. 2512 + */ 2513 + mutex_lock(&ce->timeline->mutex); 2509 2514 intel_context_exit(ce); 2510 - intel_context_timeline_unlock(tl); 2515 + mutex_unlock(&ce->timeline->mutex); 2511 2516 2512 2517 if (nonblock) 2513 2518 return -EWOULDBLOCK;
+5
drivers/gpu/drm/i915/gt/uc/intel_guc.h
··· 206 206 * context usage for overflows. 207 207 */ 208 208 struct delayed_work work; 209 + 210 + /** 211 + * @shift: Right shift value for the gpm timestamp 212 + */ 213 + u32 shift; 209 214 } timestamp; 210 215 211 216 #ifdef CONFIG_DRM_I915_SELFTEST
+97 -17
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1113 1113 if (new_start == lower_32_bits(*prev_start)) 1114 1114 return; 1115 1115 1116 + /* 1117 + * When gt is unparked, we update the gt timestamp and start the ping 1118 + * worker that updates the gt_stamp every POLL_TIME_CLKS. As long as gt 1119 + * is unparked, all switched in contexts will have a start time that is 1120 + * within +/- POLL_TIME_CLKS of the most recent gt_stamp. 1121 + * 1122 + * If neither gt_stamp nor new_start has rolled over, then the 1123 + * gt_stamp_hi does not need to be adjusted, however if one of them has 1124 + * rolled over, we need to adjust gt_stamp_hi accordingly. 1125 + * 1126 + * The below conditions address the cases of new_start rollover and 1127 + * gt_stamp_last rollover respectively. 1128 + */ 1116 1129 if (new_start < gt_stamp_last && 1117 1130 (new_start - gt_stamp_last) <= POLL_TIME_CLKS) 1118 1131 gt_stamp_hi++; ··· 1137 1124 *prev_start = ((u64)gt_stamp_hi << 32) | new_start; 1138 1125 } 1139 1126 1140 - static void guc_update_engine_gt_clks(struct intel_engine_cs *engine) 1127 + /* 1128 + * GuC updates shared memory and KMD reads it. Since this is not synchronized, 1129 + * we run into a race where the value read is inconsistent. Sometimes the 1130 + * inconsistency is in reading the upper MSB bytes of the last_in value when 1131 + * this race occurs. 2 types of cases are seen - upper 8 bits are zero and upper 1132 + * 24 bits are zero. Since these are non-zero values, it is non-trivial to 1133 + * determine validity of these values. Instead we read the values multiple times 1134 + * until they are consistent. In test runs, 3 attempts results in consistent 1135 + * values. The upper bound is set to 6 attempts and may need to be tuned as per 1136 + * any new occurences. 
1137 + */ 1138 + static void __get_engine_usage_record(struct intel_engine_cs *engine, 1139 + u32 *last_in, u32 *id, u32 *total) 1141 1140 { 1142 1141 struct guc_engine_usage_record *rec = intel_guc_engine_usage(engine); 1142 + int i = 0; 1143 + 1144 + do { 1145 + *last_in = READ_ONCE(rec->last_switch_in_stamp); 1146 + *id = READ_ONCE(rec->current_context_index); 1147 + *total = READ_ONCE(rec->total_runtime); 1148 + 1149 + if (READ_ONCE(rec->last_switch_in_stamp) == *last_in && 1150 + READ_ONCE(rec->current_context_index) == *id && 1151 + READ_ONCE(rec->total_runtime) == *total) 1152 + break; 1153 + } while (++i < 6); 1154 + } 1155 + 1156 + static void guc_update_engine_gt_clks(struct intel_engine_cs *engine) 1157 + { 1143 1158 struct intel_engine_guc_stats *stats = &engine->stats.guc; 1144 1159 struct intel_guc *guc = &engine->gt->uc.guc; 1145 - u32 last_switch = rec->last_switch_in_stamp; 1146 - u32 ctx_id = rec->current_context_index; 1147 - u32 total = rec->total_runtime; 1160 + u32 last_switch, ctx_id, total; 1148 1161 1149 1162 lockdep_assert_held(&guc->timestamp.lock); 1163 + 1164 + __get_engine_usage_record(engine, &last_switch, &ctx_id, &total); 1150 1165 1151 1166 stats->running = ctx_id != ~0U && last_switch; 1152 1167 if (stats->running) ··· 1190 1149 } 1191 1150 } 1192 1151 1193 - static void guc_update_pm_timestamp(struct intel_guc *guc, 1194 - struct intel_engine_cs *engine, 1195 - ktime_t *now) 1152 + static u32 gpm_timestamp_shift(struct intel_gt *gt) 1196 1153 { 1197 - u32 gt_stamp_now, gt_stamp_hi; 1154 + intel_wakeref_t wakeref; 1155 + u32 reg, shift; 1156 + 1157 + with_intel_runtime_pm(gt->uncore->rpm, wakeref) 1158 + reg = intel_uncore_read(gt->uncore, RPM_CONFIG0); 1159 + 1160 + shift = (reg & GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_MASK) >> 1161 + GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_SHIFT; 1162 + 1163 + return 3 - shift; 1164 + } 1165 + 1166 + static u64 gpm_timestamp(struct intel_gt *gt) 1167 + { 1168 + u32 lo, hi, old_hi, loop = 0; 1169 + 
1170 + hi = intel_uncore_read(gt->uncore, MISC_STATUS1); 1171 + do { 1172 + lo = intel_uncore_read(gt->uncore, MISC_STATUS0); 1173 + old_hi = hi; 1174 + hi = intel_uncore_read(gt->uncore, MISC_STATUS1); 1175 + } while (old_hi != hi && loop++ < 2); 1176 + 1177 + return ((u64)hi << 32) | lo; 1178 + } 1179 + 1180 + static void guc_update_pm_timestamp(struct intel_guc *guc, ktime_t *now) 1181 + { 1182 + struct intel_gt *gt = guc_to_gt(guc); 1183 + u32 gt_stamp_lo, gt_stamp_hi; 1184 + u64 gpm_ts; 1198 1185 1199 1186 lockdep_assert_held(&guc->timestamp.lock); 1200 1187 1201 1188 gt_stamp_hi = upper_32_bits(guc->timestamp.gt_stamp); 1202 - gt_stamp_now = intel_uncore_read(engine->uncore, 1203 - RING_TIMESTAMP(engine->mmio_base)); 1189 + gpm_ts = gpm_timestamp(gt) >> guc->timestamp.shift; 1190 + gt_stamp_lo = lower_32_bits(gpm_ts); 1204 1191 *now = ktime_get(); 1205 1192 1206 - if (gt_stamp_now < lower_32_bits(guc->timestamp.gt_stamp)) 1193 + if (gt_stamp_lo < lower_32_bits(guc->timestamp.gt_stamp)) 1207 1194 gt_stamp_hi++; 1208 1195 1209 - guc->timestamp.gt_stamp = ((u64)gt_stamp_hi << 32) | gt_stamp_now; 1196 + guc->timestamp.gt_stamp = ((u64)gt_stamp_hi << 32) | gt_stamp_lo; 1210 1197 } 1211 1198 1212 1199 /* ··· 1277 1208 if (!in_reset && intel_gt_pm_get_if_awake(gt)) { 1278 1209 stats_saved = *stats; 1279 1210 gt_stamp_saved = guc->timestamp.gt_stamp; 1211 + /* 1212 + * Update gt_clks, then gt timestamp to simplify the 'gt_stamp - 1213 + * start_gt_clk' calculation below for active engines. 
1214 + */ 1280 1215 guc_update_engine_gt_clks(engine); 1281 - guc_update_pm_timestamp(guc, engine, now); 1216 + guc_update_pm_timestamp(guc, now); 1282 1217 intel_gt_pm_put_async(gt); 1283 1218 if (i915_reset_count(gpu_error) != reset_count) { 1284 1219 *stats = stats_saved; ··· 1314 1241 1315 1242 spin_lock_irqsave(&guc->timestamp.lock, flags); 1316 1243 1244 + guc_update_pm_timestamp(guc, &unused); 1317 1245 for_each_engine(engine, gt, id) { 1318 - guc_update_pm_timestamp(guc, engine, &unused); 1319 1246 guc_update_engine_gt_clks(engine); 1320 1247 engine->stats.guc.prev_total = 0; 1321 1248 } ··· 1332 1259 ktime_t unused; 1333 1260 1334 1261 spin_lock_irqsave(&guc->timestamp.lock, flags); 1335 - for_each_engine(engine, gt, id) { 1336 - guc_update_pm_timestamp(guc, engine, &unused); 1262 + 1263 + guc_update_pm_timestamp(guc, &unused); 1264 + for_each_engine(engine, gt, id) 1337 1265 guc_update_engine_gt_clks(engine); 1338 - } 1266 + 1339 1267 spin_unlock_irqrestore(&guc->timestamp.lock, flags); 1340 1268 } 1341 1269 ··· 1409 1335 void intel_guc_busyness_unpark(struct intel_gt *gt) 1410 1336 { 1411 1337 struct intel_guc *guc = &gt->uc.guc; 1338 + unsigned long flags; 1339 + ktime_t unused; 1412 1340 1413 1341 if (!guc_submission_initialized(guc)) 1414 1342 return; 1415 1343 1344 + spin_lock_irqsave(&guc->timestamp.lock, flags); 1345 + guc_update_pm_timestamp(guc, &unused); 1346 + spin_unlock_irqrestore(&guc->timestamp.lock, flags); 1416 1347 mod_delayed_work(system_highpri_wq, &guc->timestamp.work, 1417 1348 guc->timestamp.ping_delay); 1418 1349 } ··· 1862 1783 spin_lock_init(&guc->timestamp.lock); 1863 1784 INIT_DELAYED_WORK(&guc->timestamp.work, guc_timestamp_ping); 1864 1785 guc->timestamp.ping_delay = (POLL_TIME_CLKS / gt->clock_frequency + 1) * HZ; 1786 + guc->timestamp.shift = gpm_timestamp_shift(gt); 1865 1787 1866 1788 return 0; 1867 1789 }
+1 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1522 1522 struct i915_request *rq = NULL; 1523 1523 unsigned long flags; 1524 1524 1525 - ee = intel_engine_coredump_alloc(engine, GFP_KERNEL); 1525 + ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL); 1526 1526 if (!ee) 1527 1527 return NULL; 1528 1528
+2 -1
drivers/gpu/drm/i915/i915_reg.h
··· 2684 2684 #define RING_WAIT (1 << 11) /* gen3+, PRBx_CTL */ 2685 2685 #define RING_WAIT_SEMAPHORE (1 << 10) /* gen6+ */ 2686 2686 2687 - #define GUCPMTIMESTAMP _MMIO(0xC3E8) 2687 + #define MISC_STATUS0 _MMIO(0xA500) 2688 + #define MISC_STATUS1 _MMIO(0xA504) 2688 2689 2689 2690 /* There are 16 64-bit CS General Purpose Registers per-engine on Gen8+ */ 2690 2691 #define GEN8_RING_CS_GPR(base, n) _MMIO((base) + 0x600 + (n) * 8)
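The `MISC_STATUS0`/`MISC_STATUS1` pair defined above is read as one 64-bit GPM timestamp by `gpm_timestamp()` in the GuC submission hunk, using the classic hi/lo/hi pattern: re-read the high word until it is stable, so a carry out of the low word between the two reads cannot produce a torn value. The same loop against a pair of mocked 32-bit registers (the mock globals are purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Mock stand-ins for the two 32-bit halves of a hardware counter. */
uint32_t mock_lo, mock_hi;

/* Read a 64-bit counter exposed as two 32-bit registers: if the high
 * word changed while we read the low word, retry (bounded, as in the
 * kernel's gpm_timestamp()). */
uint64_t read_counter64(void)
{
    uint32_t lo, hi, old_hi;
    int loop = 0;

    hi = mock_hi;
    do {
        lo = mock_lo;
        old_hi = hi;
        hi = mock_hi;
    } while (old_hi != hi && loop++ < 2);

    return ((uint64_t)hi << 32) | lo;
}
```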
-6
drivers/gpu/drm/kmb/kmb_plane.c
··· 158 158 case LAYER_1: 159 159 kmb->plane_status[plane_id].ctrl = LCD_CTRL_VL2_ENABLE; 160 160 break; 161 - case LAYER_2: 162 - kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL1_ENABLE; 163 - break; 164 - case LAYER_3: 165 - kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL2_ENABLE; 166 - break; 167 161 } 168 162 169 163 kmb->plane_status[plane_id].disable = true;
+5 -1
drivers/gpu/drm/mxsfb/mxsfb_kms.c
··· 361 361 bridge_state = 362 362 drm_atomic_get_new_bridge_state(state, 363 363 mxsfb->bridge); 364 - bus_format = bridge_state->input_bus_cfg.format; 364 + if (!bridge_state) 365 + bus_format = MEDIA_BUS_FMT_FIXED; 366 + else 367 + bus_format = bridge_state->input_bus_cfg.format; 368 + 365 369 if (bus_format == MEDIA_BUS_FMT_FIXED) { 366 370 dev_warn_once(drm->dev, 367 371 "Bridge does not provide bus format, assuming MEDIA_BUS_FMT_RGB888_1X24.\n"
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
··· 38 38 *addr += bios->imaged_addr; 39 39 } 40 40 41 - if (unlikely(*addr + size >= bios->size)) { 41 + if (unlikely(*addr + size > bios->size)) { 42 42 nvkm_error(&bios->subdev, "OOB %d %08x %08x\n", size, p, *addr); 43 43 return false; 44 44 }
+1 -1
drivers/infiniband/core/cm.c
··· 3322 3322 ret = cm_init_av_by_path(param->alternate_path, NULL, &alt_av); 3323 3323 if (ret) { 3324 3324 rdma_destroy_ah_attr(&ah_attr); 3325 - return -EINVAL; 3325 + goto deref; 3326 3326 } 3327 3327 3328 3328 spin_lock_irq(&cm_id_priv->lock);
+12 -10
drivers/infiniband/core/cma.c
··· 67 67 [RDMA_CM_EVENT_TIMEWAIT_EXIT] = "timewait exit", 68 68 }; 69 69 70 - static void cma_set_mgid(struct rdma_id_private *id_priv, struct sockaddr *addr, 71 - union ib_gid *mgid); 70 + static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid, 71 + enum ib_gid_type gid_type); 72 72 73 73 const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event) 74 74 { ··· 1846 1846 if (dev_addr->bound_dev_if) 1847 1847 ndev = dev_get_by_index(dev_addr->net, 1848 1848 dev_addr->bound_dev_if); 1849 - if (ndev) { 1849 + if (ndev && !send_only) { 1850 + enum ib_gid_type gid_type; 1850 1851 union ib_gid mgid; 1851 1852 1852 - cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr, 1853 - &mgid); 1854 - 1855 - if (!send_only) 1856 - cma_igmp_send(ndev, &mgid, false); 1857 - 1858 - dev_put(ndev); 1853 + gid_type = id_priv->cma_dev->default_gid_type 1854 + [id_priv->id.port_num - 1855 + rdma_start_port( 1856 + id_priv->cma_dev->device)]; 1857 + cma_iboe_set_mgid((struct sockaddr *)&mc->addr, &mgid, 1858 + gid_type); 1859 + cma_igmp_send(ndev, &mgid, false); 1859 1860 } 1861 + dev_put(ndev); 1860 1862 1861 1863 cancel_work_sync(&mc->iboe_join.work); 1862 1864 }
+23 -11
drivers/infiniband/core/ucma.c
··· 95 95 u64 uid; 96 96 97 97 struct list_head list; 98 + struct list_head mc_list; 98 99 struct work_struct close_work; 99 100 }; 100 101 ··· 106 105 107 106 u64 uid; 108 107 u8 join_state; 108 + struct list_head list; 109 109 struct sockaddr_storage addr; 110 110 }; 111 111 ··· 200 198 201 199 INIT_WORK(&ctx->close_work, ucma_close_id); 202 200 init_completion(&ctx->comp); 201 + INIT_LIST_HEAD(&ctx->mc_list); 203 202 /* So list_del() will work if we don't do ucma_finish_ctx() */ 204 203 INIT_LIST_HEAD(&ctx->list); 205 204 ctx->file = file; ··· 487 484 488 485 static void ucma_cleanup_multicast(struct ucma_context *ctx) 489 486 { 490 - struct ucma_multicast *mc; 491 - unsigned long index; 487 + struct ucma_multicast *mc, *tmp; 492 488 493 - xa_for_each(&multicast_table, index, mc) { 494 - if (mc->ctx != ctx) 495 - continue; 489 + xa_lock(&multicast_table); 490 + list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) { 491 + list_del(&mc->list); 496 492 /* 497 493 * At this point mc->ctx->ref is 0 so the mc cannot leave the 498 494 * lock on the reader and this is enough serialization 499 495 */ 500 - xa_erase(&multicast_table, index); 496 + __xa_erase(&multicast_table, mc->id); 501 497 kfree(mc); 502 498 } 499 + xa_unlock(&multicast_table); 503 500 } 504 501 505 502 static void ucma_cleanup_mc_events(struct ucma_multicast *mc) ··· 1472 1469 mc->uid = cmd->uid; 1473 1470 memcpy(&mc->addr, addr, cmd->addr_size); 1474 1471 1475 - if (xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b, 1472 + xa_lock(&multicast_table); 1473 + if (__xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b, 1476 1474 GFP_KERNEL)) { 1477 1475 ret = -ENOMEM; 1478 1476 goto err_free_mc; 1479 1477 } 1478 + 1479 + list_add_tail(&mc->list, &ctx->mc_list); 1480 + xa_unlock(&multicast_table); 1480 1481 1481 1482 mutex_lock(&ctx->mutex); 1482 1483 ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr, ··· 1507 1500 mutex_unlock(&ctx->mutex); 1508 1501 ucma_cleanup_mc_events(mc); 
1509 1502 err_xa_erase: 1510 - xa_erase(&multicast_table, mc->id); 1503 + xa_lock(&multicast_table); 1504 + list_del(&mc->list); 1505 + __xa_erase(&multicast_table, mc->id); 1511 1506 err_free_mc: 1507 + xa_unlock(&multicast_table); 1512 1508 kfree(mc); 1513 1509 err_put_ctx: 1514 1510 ucma_put_ctx(ctx); ··· 1579 1569 mc = ERR_PTR(-EINVAL); 1580 1570 else if (!refcount_inc_not_zero(&mc->ctx->ref)) 1581 1571 mc = ERR_PTR(-ENXIO); 1582 - else 1583 - __xa_erase(&multicast_table, mc->id); 1584 - xa_unlock(&multicast_table); 1585 1572 1586 1573 if (IS_ERR(mc)) { 1574 + xa_unlock(&multicast_table); 1587 1575 ret = PTR_ERR(mc); 1588 1576 goto out; 1589 1577 } 1578 + 1579 + list_del(&mc->list); 1580 + __xa_erase(&multicast_table, mc->id); 1581 + xa_unlock(&multicast_table); 1590 1582 1591 1583 mutex_lock(&mc->ctx->mutex); 1592 1584 rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
+1 -1
drivers/infiniband/hw/hfi1/ipoib.h
··· 55 55 */ 56 56 struct ipoib_txreq { 57 57 struct sdma_txreq txreq; 58 - struct hfi1_sdma_header sdma_hdr; 58 + struct hfi1_sdma_header *sdma_hdr; 59 59 int sdma_status; 60 60 int complete; 61 61 struct hfi1_ipoib_dev_priv *priv;
+15 -12
drivers/infiniband/hw/hfi1/ipoib_main.c
··· 22 22 int ret; 23 23 24 24 dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 25 + if (!dev->tstats) 26 + return -ENOMEM; 25 27 26 28 ret = priv->netdev_ops->ndo_init(dev); 27 29 if (ret) 28 - return ret; 30 + goto out_ret; 29 31 30 32 ret = hfi1_netdev_add_data(priv->dd, 31 33 qpn_from_mac(priv->netdev->dev_addr), 32 34 dev); 33 35 if (ret < 0) { 34 36 priv->netdev_ops->ndo_uninit(dev); 35 - return ret; 37 + goto out_ret; 36 38 } 37 39 38 40 return 0; 41 + out_ret: 42 + free_percpu(dev->tstats); 43 + dev->tstats = NULL; 44 + return ret; 39 45 } 40 46 41 47 static void hfi1_ipoib_dev_uninit(struct net_device *dev) 42 48 { 43 49 struct hfi1_ipoib_dev_priv *priv = hfi1_ipoib_priv(dev); 50 + 51 + free_percpu(dev->tstats); 52 + dev->tstats = NULL; 44 53 45 54 hfi1_netdev_remove_data(priv->dd, qpn_from_mac(priv->netdev->dev_addr)); 46 55 ··· 175 166 hfi1_ipoib_rxq_deinit(priv->netdev); 176 167 177 168 free_percpu(dev->tstats); 178 - } 179 - 180 - static void hfi1_ipoib_free_rdma_netdev(struct net_device *dev) 181 - { 182 - hfi1_ipoib_netdev_dtor(dev); 183 - free_netdev(dev); 169 + dev->tstats = NULL; 184 170 } 185 171 186 172 static void hfi1_ipoib_set_id(struct net_device *dev, int id) ··· 215 211 priv->port_num = port_num; 216 212 priv->netdev_ops = netdev->netdev_ops; 217 213 218 - netdev->netdev_ops = &hfi1_ipoib_netdev_ops; 219 - 220 214 ib_query_pkey(device, port_num, priv->pkey_index, &priv->pkey); 221 215 222 216 rc = hfi1_ipoib_txreq_init(priv); 223 217 if (rc) { 224 218 dd_dev_err(dd, "IPoIB netdev TX init - failed(%d)\n", rc); 225 - hfi1_ipoib_free_rdma_netdev(netdev); 226 219 return rc; 227 220 } 228 221 229 222 rc = hfi1_ipoib_rxq_init(netdev); 230 223 if (rc) { 231 224 dd_dev_err(dd, "IPoIB netdev RX init - failed(%d)\n", rc); 232 - hfi1_ipoib_free_rdma_netdev(netdev); 225 + hfi1_ipoib_txreq_deinit(priv); 233 226 return rc; 234 227 } 228 + 229 + netdev->netdev_ops = &hfi1_ipoib_netdev_ops; 235 230 236 231 netdev->priv_destructor = 
hfi1_ipoib_netdev_dtor; 237 232 netdev->needs_free_netdev = true;
+26 -12
drivers/infiniband/hw/hfi1/ipoib_tx.c
··· 122 122 dd_dev_warn(priv->dd, 123 123 "%s: Status = 0x%x pbc 0x%llx txq = %d sde = %d\n", 124 124 __func__, tx->sdma_status, 125 - le64_to_cpu(tx->sdma_hdr.pbc), tx->txq->q_idx, 125 + le64_to_cpu(tx->sdma_hdr->pbc), tx->txq->q_idx, 126 126 tx->txq->sde->this_idx); 127 127 } 128 128 ··· 231 231 { 232 232 struct hfi1_devdata *dd = txp->dd; 233 233 struct sdma_txreq *txreq = &tx->txreq; 234 - struct hfi1_sdma_header *sdma_hdr = &tx->sdma_hdr; 234 + struct hfi1_sdma_header *sdma_hdr = tx->sdma_hdr; 235 235 u16 pkt_bytes = 236 236 sizeof(sdma_hdr->pbc) + (txp->hdr_dwords << 2) + tx->skb->len; 237 237 int ret; ··· 256 256 struct ipoib_txparms *txp) 257 257 { 258 258 struct hfi1_ipoib_dev_priv *priv = tx->txq->priv; 259 - struct hfi1_sdma_header *sdma_hdr = &tx->sdma_hdr; 259 + struct hfi1_sdma_header *sdma_hdr = tx->sdma_hdr; 260 260 struct sk_buff *skb = tx->skb; 261 261 struct hfi1_pportdata *ppd = ppd_from_ibp(txp->ibp); 262 262 struct rdma_ah_attr *ah_attr = txp->ah_attr; ··· 483 483 if (likely(!ret)) { 484 484 tx_ok: 485 485 trace_sdma_output_ibhdr(txq->priv->dd, 486 - &tx->sdma_hdr.hdr, 486 + &tx->sdma_hdr->hdr, 487 487 ib_is_sc5(txp->flow.sc5)); 488 488 hfi1_ipoib_check_queue_depth(txq); 489 489 return NETDEV_TX_OK; ··· 547 547 hfi1_ipoib_check_queue_depth(txq); 548 548 549 549 trace_sdma_output_ibhdr(txq->priv->dd, 550 - &tx->sdma_hdr.hdr, 550 + &tx->sdma_hdr->hdr, 551 551 ib_is_sc5(txp->flow.sc5)); 552 552 553 553 if (!netdev_xmit_more()) ··· 683 683 { 684 684 struct net_device *dev = priv->netdev; 685 685 u32 tx_ring_size, tx_item_size; 686 - int i; 686 + struct hfi1_ipoib_circ_buf *tx_ring; 687 + int i, j; 687 688 688 689 /* 689 690 * Ring holds 1 less than tx_ring_size ··· 702 701 703 702 for (i = 0; i < dev->num_tx_queues; i++) { 704 703 struct hfi1_ipoib_txq *txq = &priv->txqs[i]; 704 + struct ipoib_txreq *tx; 705 705 706 + tx_ring = &txq->tx_ring; 706 707 iowait_init(&txq->wait, 707 708 0, 708 709 hfi1_ipoib_flush_txq, ··· 728 725 priv->dd->node); 729 
726 730 727 txq->tx_ring.items = 731 - kcalloc_node(tx_ring_size, tx_item_size, 732 - GFP_KERNEL, priv->dd->node); 728 + kvzalloc_node(array_size(tx_ring_size, tx_item_size), 729 + GFP_KERNEL, priv->dd->node); 733 730 if (!txq->tx_ring.items) 734 731 goto free_txqs; 735 732 736 733 txq->tx_ring.max_items = tx_ring_size; 737 - txq->tx_ring.shift = ilog2(tx_ring_size); 734 + txq->tx_ring.shift = ilog2(tx_item_size); 738 735 txq->tx_ring.avail = hfi1_ipoib_ring_hwat(txq); 736 + tx_ring = &txq->tx_ring; 737 + for (j = 0; j < tx_ring_size; j++) 738 + hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr = 739 + kzalloc_node(sizeof(*tx->sdma_hdr), 740 + GFP_KERNEL, priv->dd->node); 739 741 740 742 netif_tx_napi_add(dev, &txq->napi, 741 743 hfi1_ipoib_poll_tx_ring, ··· 754 746 struct hfi1_ipoib_txq *txq = &priv->txqs[i]; 755 747 756 748 netif_napi_del(&txq->napi); 757 - kfree(txq->tx_ring.items); 749 + tx_ring = &txq->tx_ring; 750 + for (j = 0; j < tx_ring_size; j++) 751 + kfree(hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr); 752 + kvfree(tx_ring->items); 758 753 } 759 754 760 755 kfree(priv->txqs); ··· 791 780 792 781 void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv) 793 782 { 794 - int i; 783 + int i, j; 795 784 796 785 for (i = 0; i < priv->netdev->num_tx_queues; i++) { 797 786 struct hfi1_ipoib_txq *txq = &priv->txqs[i]; 787 + struct hfi1_ipoib_circ_buf *tx_ring = &txq->tx_ring; 798 788 799 789 iowait_cancel_work(&txq->wait); 800 790 iowait_sdma_drain(&txq->wait); 801 791 hfi1_ipoib_drain_tx_list(txq); 802 792 netif_napi_del(&txq->napi); 803 793 hfi1_ipoib_drain_tx_ring(txq); 804 - kfree(txq->tx_ring.items); 794 + for (j = 0; j < tx_ring->max_items; j++) 795 + kfree(hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr); 796 + kvfree(tx_ring->items); 805 797 } 806 798 807 799 kfree(priv->txqs);
+1 -1
drivers/infiniband/hw/mlx4/main.c
··· 3237 3237 case MLX4_DEV_EVENT_PORT_MGMT_CHANGE: 3238 3238 ew = kmalloc(sizeof *ew, GFP_ATOMIC); 3239 3239 if (!ew) 3240 - break; 3240 + return; 3241 3241 3242 3242 INIT_WORK(&ew->work, handle_port_mgmt_change_event); 3243 3243 memcpy(&ew->ib_eqe, eqe, sizeof *eqe);
+2
drivers/infiniband/sw/rdmavt/qp.c
··· 3073 3073 case IB_WR_ATOMIC_FETCH_AND_ADD: 3074 3074 if (unlikely(!(qp->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC))) 3075 3075 goto inv_err; 3076 + if (unlikely(wqe->atomic_wr.remote_addr & (sizeof(u64) - 1))) 3077 + goto inv_err; 3076 3078 if (unlikely(!rvt_rkey_ok(qp, &qp->r_sge.sge, sizeof(u64), 3077 3079 wqe->atomic_wr.remote_addr, 3078 3080 wqe->atomic_wr.rkey,
+1 -6
drivers/infiniband/sw/siw/siw.h
··· 644 644 return &qp->orq[qp->orq_get % qp->attrs.orq_size]; 645 645 } 646 646 647 - static inline struct siw_sqe *orq_get_tail(struct siw_qp *qp) 648 - { 649 - return &qp->orq[qp->orq_put % qp->attrs.orq_size]; 650 - } 651 - 652 647 static inline struct siw_sqe *orq_get_free(struct siw_qp *qp) 653 648 { 654 - struct siw_sqe *orq_e = orq_get_tail(qp); 649 + struct siw_sqe *orq_e = &qp->orq[qp->orq_put % qp->attrs.orq_size]; 655 650 656 651 if (READ_ONCE(orq_e->flags) == 0) 657 652 return orq_e;
+11 -9
drivers/infiniband/sw/siw/siw_qp_rx.c
··· 1153 1153 1154 1154 spin_lock_irqsave(&qp->orq_lock, flags); 1155 1155 1156 - rreq = orq_get_current(qp); 1157 - 1158 1156 /* free current orq entry */ 1157 + rreq = orq_get_current(qp); 1159 1158 WRITE_ONCE(rreq->flags, 0); 1159 + 1160 + qp->orq_get++; 1160 1161 1161 1162 if (qp->tx_ctx.orq_fence) { 1162 1163 if (unlikely(tx_waiting->wr_status != SIW_WR_QUEUED)) { ··· 1166 1165 rv = -EPROTO; 1167 1166 goto out; 1168 1167 } 1169 - /* resume SQ processing */ 1168 + /* resume SQ processing, if possible */ 1170 1169 if (tx_waiting->sqe.opcode == SIW_OP_READ || 1171 1170 tx_waiting->sqe.opcode == SIW_OP_READ_LOCAL_INV) { 1172 - rreq = orq_get_tail(qp); 1171 + 1172 + /* SQ processing was stopped because of a full ORQ */ 1173 + rreq = orq_get_free(qp); 1173 1174 if (unlikely(!rreq)) { 1174 1175 pr_warn("siw: [QP %u]: no ORQE\n", qp_id(qp)); 1175 1176 rv = -EPROTO; ··· 1184 1181 resume_tx = 1; 1185 1182 1186 1183 } else if (siw_orq_empty(qp)) { 1184 + /* 1185 + * SQ processing was stopped by fenced work request. 1186 + * Resume since all previous Read's are now completed. 1187 + */ 1187 1188 qp->tx_ctx.orq_fence = 0; 1188 1189 resume_tx = 1; 1189 - } else { 1190 - pr_warn("siw: [QP %u]: fence resume: orq idx: %d:%d\n", 1191 - qp_id(qp), qp->orq_get, qp->orq_put); 1192 - rv = -EPROTO; 1193 1190 } 1194 1191 } 1195 - qp->orq_get++; 1196 1192 out: 1197 1193 spin_unlock_irqrestore(&qp->orq_lock, flags); 1198 1194
+2 -1
drivers/infiniband/sw/siw/siw_verbs.c
··· 313 313 314 314 if (atomic_inc_return(&sdev->num_qp) > SIW_MAX_QP) { 315 315 siw_dbg(base_dev, "too many QP's\n"); 316 - return -ENOMEM; 316 + rv = -ENOMEM; 317 + goto err_atomic; 317 318 } 318 319 if (attrs->qp_type != IB_QPT_RC) { 319 320 siw_dbg(base_dev, "only RC QP's supported\n");
+3 -9
drivers/input/touchscreen/wm97xx-core.c
··· 615 615 * extensions) 616 616 */ 617 617 wm->touch_dev = platform_device_alloc("wm97xx-touch", -1); 618 - if (!wm->touch_dev) { 619 - ret = -ENOMEM; 620 - goto touch_err; 621 - } 618 + if (!wm->touch_dev) 619 + return -ENOMEM; 620 + 622 621 platform_set_drvdata(wm->touch_dev, wm); 623 622 wm->touch_dev->dev.parent = wm->dev; 624 623 wm->touch_dev->dev.platform_data = pdata; ··· 628 629 return 0; 629 630 touch_reg_err: 630 631 platform_device_put(wm->touch_dev); 631 - touch_err: 632 - input_unregister_device(wm->input_dev); 633 - wm->input_dev = NULL; 634 632 635 633 return ret; 636 634 } ··· 635 639 static void wm97xx_unregister_touch(struct wm97xx *wm) 636 640 { 637 641 platform_device_unregister(wm->touch_dev); 638 - input_unregister_device(wm->input_dev); 639 - wm->input_dev = NULL; 640 642 } 641 643 642 644 static int _wm97xx_probe(struct wm97xx *wm)
+2
drivers/iommu/amd/init.c
··· 21 21 #include <linux/export.h> 22 22 #include <linux/kmemleak.h> 23 23 #include <linux/cc_platform.h> 24 + #include <linux/iopoll.h> 24 25 #include <asm/pci-direct.h> 25 26 #include <asm/iommu.h> 26 27 #include <asm/apic.h> ··· 835 834 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET); 836 835 if (status & (MMIO_STATUS_GALOG_RUN_MASK)) 837 836 break; 837 + udelay(10); 838 838 } 839 839 840 840 if (WARN_ON(i >= LOOP_TIMEOUT))
+10 -3
drivers/iommu/intel/irq_remapping.c
··· 569 569 fn, &intel_ir_domain_ops, 570 570 iommu); 571 571 if (!iommu->ir_domain) { 572 - irq_domain_free_fwnode(fn); 573 572 pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id); 574 - goto out_free_bitmap; 573 + goto out_free_fwnode; 575 574 } 576 575 iommu->ir_msi_domain = 577 576 arch_create_remap_msi_irq_domain(iommu->ir_domain, ··· 594 595 595 596 if (dmar_enable_qi(iommu)) { 596 597 pr_err("Failed to enable queued invalidation\n"); 597 - goto out_free_bitmap; 598 + goto out_free_ir_domain; 598 599 } 599 600 } 600 601 ··· 618 619 619 620 return 0; 620 621 622 + out_free_ir_domain: 623 + if (iommu->ir_msi_domain) 624 + irq_domain_remove(iommu->ir_msi_domain); 625 + iommu->ir_msi_domain = NULL; 626 + irq_domain_remove(iommu->ir_domain); 627 + iommu->ir_domain = NULL; 628 + out_free_fwnode: 629 + irq_domain_free_fwnode(fn); 621 630 out_free_bitmap: 622 631 bitmap_free(bitmap); 623 632 out_free_pages:
+1
drivers/iommu/ioasid.c
··· 349 349 350 350 /** 351 351 * ioasid_get - obtain a reference to the IOASID 352 + * @ioasid: the ID to get 352 353 */ 353 354 void ioasid_get(ioasid_t ioasid) 354 355 {
+19 -14
drivers/iommu/iommu.c
··· 207 207 208 208 static void dev_iommu_free(struct device *dev) 209 209 { 210 - iommu_fwspec_free(dev); 211 - kfree(dev->iommu); 210 + struct dev_iommu *param = dev->iommu; 211 + 212 212 dev->iommu = NULL; 213 + if (param->fwspec) { 214 + fwnode_handle_put(param->fwspec->iommu_fwnode); 215 + kfree(param->fwspec); 216 + } 217 + kfree(param); 213 218 } 214 219 215 220 static int __iommu_probe_device(struct device *dev, struct list_head *group_list) ··· 985 980 return ret; 986 981 } 987 982 988 - /** 989 - * iommu_group_for_each_dev - iterate over each device in the group 990 - * @group: the group 991 - * @data: caller opaque data to be passed to callback function 992 - * @fn: caller supplied callback function 993 - * 994 - * This function is called by group users to iterate over group devices. 995 - * Callers should hold a reference count to the group during callback. 996 - * The group->mutex is held across callbacks, which will block calls to 997 - * iommu_group_add/remove_device. 998 - */ 999 983 static int __iommu_group_for_each_dev(struct iommu_group *group, void *data, 1000 984 int (*fn)(struct device *, void *)) 1001 985 { ··· 999 1005 return ret; 1000 1006 } 1001 1007 1002 - 1008 + /** 1009 + * iommu_group_for_each_dev - iterate over each device in the group 1010 + * @group: the group 1011 + * @data: caller opaque data to be passed to callback function 1012 + * @fn: caller supplied callback function 1013 + * 1014 + * This function is called by group users to iterate over group devices. 1015 + * Callers should hold a reference count to the group during callback. 1016 + * The group->mutex is held across callbacks, which will block calls to 1017 + * iommu_group_add/remove_device. 
1018 + */ 1003 1019 int iommu_group_for_each_dev(struct iommu_group *group, void *data, 1004 1020 int (*fn)(struct device *, void *)) 1005 1021 { ··· 3036 3032 * iommu_sva_bind_device() - Bind a process address space to a device 3037 3033 * @dev: the device 3038 3034 * @mm: the mm to bind, caller must hold a reference to it 3035 + * @drvdata: opaque data pointer to pass to bind callback 3039 3036 * 3040 3037 * Create a bond between device and address space, allowing the device to access 3041 3038 * the mm using the returned PASID. If a bond already exists between @device and
+1 -1
drivers/iommu/omap-iommu.c
··· 1085 1085 } 1086 1086 1087 1087 /** 1088 - * omap_iommu_suspend_prepare - prepare() dev_pm_ops implementation 1088 + * omap_iommu_prepare - prepare() dev_pm_ops implementation 1089 1089 * @dev: iommu device 1090 1090 * 1091 1091 * This function performs the necessary checks to determine if the IOMMU
+4 -4
drivers/md/md.c
··· 5869 5869 nowait = nowait && blk_queue_nowait(bdev_get_queue(rdev->bdev)); 5870 5870 } 5871 5871 5872 - /* Set the NOWAIT flags if all underlying devices support it */ 5873 - if (nowait) 5874 - blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue); 5875 - 5876 5872 if (!bioset_initialized(&mddev->bio_set)) { 5877 5873 err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); 5878 5874 if (err) ··· 6006 6010 else 6007 6011 blk_queue_flag_clear(QUEUE_FLAG_NONROT, mddev->queue); 6008 6012 blk_queue_flag_set(QUEUE_FLAG_IO_STAT, mddev->queue); 6013 + 6014 + /* Set the NOWAIT flags if all underlying devices support it */ 6015 + if (nowait) 6016 + blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue); 6009 6017 } 6010 6018 if (pers->sync_request) { 6011 6019 if (mddev->kobj.sd &&
+1
drivers/net/dsa/Kconfig
··· 36 36 config NET_DSA_MT7530 37 37 tristate "MediaTek MT753x and MT7621 Ethernet switch support" 38 38 select NET_DSA_TAG_MTK 39 + select MEDIATEK_GE_PHY 39 40 help 40 41 This enables support for the MediaTek MT7530, MT7531, and MT7621 41 42 Ethernet switch chips.
+13 -1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 721 721 if (!channel->tx_ring) 722 722 break; 723 723 724 + /* Deactivate the Tx timer */ 724 725 del_timer_sync(&channel->tx_timer); 726 + channel->tx_timer_active = 0; 725 727 } 726 728 } 727 729 ··· 2552 2550 buf2_len = xgbe_rx_buf2_len(rdata, packet, len); 2553 2551 len += buf2_len; 2554 2552 2553 + if (buf2_len > rdata->rx.buf.dma_len) { 2554 + /* Hardware inconsistency within the descriptors 2555 + * that has resulted in a length underflow. 2556 + */ 2557 + error = 1; 2558 + goto skip_data; 2559 + } 2560 + 2555 2561 if (!skb) { 2556 2562 skb = xgbe_create_skb(pdata, napi, rdata, 2557 2563 buf1_len); ··· 2589 2579 if (!last || context_next) 2590 2580 goto read_again; 2591 2581 2592 - if (!skb) 2582 + if (!skb || error) { 2583 + dev_kfree_skb(skb); 2593 2584 goto next_packet; 2585 + } 2594 2586 2595 2587 /* Be sure we don't exceed the configured MTU */ 2596 2588 max_len = netdev->mtu + ETH_HLEN;
+1 -1
drivers/net/ethernet/google/gve/gve_adminq.c
··· 301 301 */ 302 302 static int gve_adminq_kick_and_wait(struct gve_priv *priv) 303 303 { 304 - u32 tail, head; 304 + int tail, head; 305 305 int i; 306 306 307 307 tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+3 -1
drivers/net/ethernet/intel/e1000e/e1000.h
··· 115 115 board_pch_lpt, 116 116 board_pch_spt, 117 117 board_pch_cnp, 118 - board_pch_tgp 118 + board_pch_tgp, 119 + board_pch_adp 119 120 }; 120 121 121 122 struct e1000_ps_page { ··· 503 502 extern const struct e1000_info e1000_pch_spt_info; 504 503 extern const struct e1000_info e1000_pch_cnp_info; 505 504 extern const struct e1000_info e1000_pch_tgp_info; 505 + extern const struct e1000_info e1000_pch_adp_info; 506 506 extern const struct e1000_info e1000_es2_info; 507 507 508 508 void e1000e_ptp_init(struct e1000_adapter *adapter);
+20
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 6021 6021 .phy_ops = &ich8_phy_ops, 6022 6022 .nvm_ops = &spt_nvm_ops, 6023 6023 }; 6024 + 6025 + const struct e1000_info e1000_pch_adp_info = { 6026 + .mac = e1000_pch_adp, 6027 + .flags = FLAG_IS_ICH 6028 + | FLAG_HAS_WOL 6029 + | FLAG_HAS_HW_TIMESTAMP 6030 + | FLAG_HAS_CTRLEXT_ON_LOAD 6031 + | FLAG_HAS_AMT 6032 + | FLAG_HAS_FLASH 6033 + | FLAG_HAS_JUMBO_FRAMES 6034 + | FLAG_APME_IN_WUC, 6035 + .flags2 = FLAG2_HAS_PHY_STATS 6036 + | FLAG2_HAS_EEE, 6037 + .pba = 26, 6038 + .max_hw_frame_size = 9022, 6039 + .get_variants = e1000_get_variants_ich8lan, 6040 + .mac_ops = &ich8_mac_ops, 6041 + .phy_ops = &ich8_phy_ops, 6042 + .nvm_ops = &spt_nvm_ops, 6043 + };
+21 -18
drivers/net/ethernet/intel/e1000e/netdev.c
··· 52 52 [board_pch_spt] = &e1000_pch_spt_info, 53 53 [board_pch_cnp] = &e1000_pch_cnp_info, 54 54 [board_pch_tgp] = &e1000_pch_tgp_info, 55 + [board_pch_adp] = &e1000_pch_adp_info, 55 56 }; 56 57 57 58 struct e1000_reg_info { ··· 6342 6341 u32 mac_data; 6343 6342 u16 phy_data; 6344 6343 6345 - if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) { 6344 + if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID && 6345 + hw->mac.type >= e1000_pch_adp) { 6346 6346 /* Request ME configure the device for S0ix */ 6347 6347 mac_data = er32(H2ME); 6348 6348 mac_data |= E1000_H2ME_START_DPG; ··· 6492 6490 u16 phy_data; 6493 6491 u32 i = 0; 6494 6492 6495 - if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) { 6493 + if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID && 6494 + hw->mac.type >= e1000_pch_adp) { 6496 6495 /* Request ME unconfigure the device from S0ix */ 6497 6496 mac_data = er32(H2ME); 6498 6497 mac_data &= ~E1000_H2ME_START_DPG; ··· 7901 7898 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp }, 7902 7899 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp }, 7903 7900 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp }, 7904 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_tgp }, 7905 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_tgp }, 7906 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp }, 7907 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp }, 7908 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp }, 7909 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp }, 7910 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_tgp }, 7911 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_tgp }, 7912 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp }, 7913 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp }, 7914 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp 
}, 7915 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp }, 7916 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_tgp }, 7917 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_tgp }, 7918 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_tgp }, 7919 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_tgp }, 7901 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_adp }, 7902 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_adp }, 7903 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_adp }, 7904 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_adp }, 7905 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_adp }, 7906 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_adp }, 7907 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_adp }, 7908 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_adp }, 7909 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_adp }, 7910 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_adp }, 7911 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_adp }, 7912 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_adp }, 7913 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_adp }, 7914 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_adp }, 7915 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_adp }, 7916 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_adp }, 7920 7917 7921 7918 { 0, 0, 0, 0, 0, 0, 0 } /* terminate list */ 7922 7919 };
+1
drivers/net/ethernet/intel/i40e/i40e.h
··· 144 144 __I40E_VIRTCHNL_OP_PENDING, 145 145 __I40E_RECOVERY_MODE, 146 146 __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ 147 + __I40E_IN_REMOVE, 147 148 __I40E_VFS_RELEASING, 148 149 /* This must be last as it determines the size of the BITMAP */ 149 150 __I40E_STATE_SIZE__,
+29 -2
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 5372 5372 /* There is no need to reset BW when mqprio mode is on. */ 5373 5373 if (pf->flags & I40E_FLAG_TC_MQPRIO) 5374 5374 return 0; 5375 - if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) { 5375 + 5376 + if (!vsi->mqprio_qopt.qopt.hw) { 5377 + if (pf->flags & I40E_FLAG_DCB_ENABLED) 5378 + goto skip_reset; 5379 + 5380 + if (IS_ENABLED(CONFIG_I40E_DCB) && 5381 + i40e_dcb_hw_get_num_tc(&pf->hw) == 1) 5382 + goto skip_reset; 5383 + 5376 5384 ret = i40e_set_bw_limit(vsi, vsi->seid, 0); 5377 5385 if (ret) 5378 5386 dev_info(&pf->pdev->dev, ··· 5388 5380 vsi->seid); 5389 5381 return ret; 5390 5382 } 5383 + 5384 + skip_reset: 5391 5385 memset(&bw_data, 0, sizeof(bw_data)); 5392 5386 bw_data.tc_valid_bits = enabled_tc; 5393 5387 for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) ··· 10863 10853 bool lock_acquired) 10864 10854 { 10865 10855 int ret; 10856 + 10857 + if (test_bit(__I40E_IN_REMOVE, pf->state)) 10858 + return; 10866 10859 /* Now we wait for GRST to settle out. 10867 10860 * We don't have to delete the VEBs or VSIs from the hw switch 10868 10861 * because the reset will make them disappear. 
··· 12225 12212 12226 12213 vsi->req_queue_pairs = queue_count; 12227 12214 i40e_prep_for_reset(pf); 12215 + if (test_bit(__I40E_IN_REMOVE, pf->state)) 12216 + return pf->alloc_rss_size; 12228 12217 12229 12218 pf->alloc_rss_size = new_rss_size; 12230 12219 ··· 13052 13037 13053 13038 if (need_reset) 13054 13039 i40e_prep_for_reset(pf); 13040 + 13041 + /* VSI shall be deleted in a moment, just return EINVAL */ 13042 + if (test_bit(__I40E_IN_REMOVE, pf->state)) 13043 + return -EINVAL; 13055 13044 13056 13045 old_prog = xchg(&vsi->xdp_prog, prog); 13057 13046 ··· 15947 15928 i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0); 15948 15929 i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0); 15949 15930 15950 - while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) 15931 + /* Grab __I40E_RESET_RECOVERY_PENDING and set __I40E_IN_REMOVE 15932 + * flags, once they are set, i40e_rebuild should not be called as 15933 + * i40e_prep_for_reset always returns early. 15934 + */ 15935 + while (test_and_set_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) 15951 15936 usleep_range(1000, 2000); 15937 + set_bit(__I40E_IN_REMOVE, pf->state); 15952 15938 15953 15939 if (pf->flags & I40E_FLAG_SRIOV_ENABLED) { 15954 15940 set_bit(__I40E_VF_RESETS_DISABLED, pf->state); ··· 16151 16127 static void i40e_pci_error_reset_done(struct pci_dev *pdev) 16152 16128 { 16153 16129 struct i40e_pf *pf = pci_get_drvdata(pdev); 16130 + 16131 + if (test_bit(__I40E_IN_REMOVE, pf->state)) 16132 + return; 16154 16133 16155 16134 i40e_reset_and_rebuild(pf, false, false); 16156 16135 }
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 224 224 struct mlx5e_tx_wqe { 225 225 struct mlx5_wqe_ctrl_seg ctrl; 226 226 struct mlx5_wqe_eth_seg eth; 227 - struct mlx5_wqe_data_seg data[0]; 227 + struct mlx5_wqe_data_seg data[]; 228 228 }; 229 229 230 230 struct mlx5e_rx_wqe_ll { ··· 241 241 struct mlx5_wqe_umr_ctrl_seg uctrl; 242 242 struct mlx5_mkey_seg mkc; 243 243 union { 244 - struct mlx5_mtt inline_mtts[0]; 245 - struct mlx5_klm inline_klms[0]; 244 + DECLARE_FLEX_ARRAY(struct mlx5_mtt, inline_mtts); 245 + DECLARE_FLEX_ARRAY(struct mlx5_klm, inline_klms); 246 246 }; 247 247 }; 248 248
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
··· 570 570 571 571 static void mlx5e_htb_convert_ceil(struct mlx5e_priv *priv, u64 ceil, u32 *max_average_bw) 572 572 { 573 - *max_average_bw = div_u64(ceil, BYTES_IN_MBIT); 573 + /* Hardware treats 0 as "unlimited", set at least 1. */ 574 + *max_average_bw = max_t(u32, div_u64(ceil, BYTES_IN_MBIT), 1); 574 575 575 576 qos_dbg(priv->mdev, "Convert: ceil %llu -> max_average_bw %u\n", 576 577 ceil, *max_average_bw);
+14 -18
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
··· 183 183 184 184 static bool mlx5e_rep_is_lag_netdev(struct net_device *netdev) 185 185 { 186 - struct mlx5e_rep_priv *rpriv; 187 - struct mlx5e_priv *priv; 188 - 189 - /* A given netdev is not a representor or not a slave of LAG configuration */ 190 - if (!mlx5e_eswitch_rep(netdev) || !netif_is_lag_port(netdev)) 191 - return false; 192 - 193 - priv = netdev_priv(netdev); 194 - rpriv = priv->ppriv; 195 - 196 - /* Egress acl forward to vport is supported only non-uplink representor */ 197 - return rpriv->rep->vport != MLX5_VPORT_UPLINK; 186 + return netif_is_lag_port(netdev) && mlx5e_eswitch_vf_rep(netdev); 198 187 } 199 188 200 189 static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *ptr) ··· 198 209 u16 acl_vport_num; 199 210 u16 fwd_vport_num; 200 211 int err; 201 - 202 - if (!mlx5e_rep_is_lag_netdev(netdev)) 203 - return; 204 212 205 213 info = ptr; 206 214 lag_info = info->lower_state_info; ··· 252 266 struct net_device *lag_dev; 253 267 struct mlx5e_priv *priv; 254 268 255 - if (!mlx5e_rep_is_lag_netdev(netdev)) 256 - return; 257 - 258 269 priv = netdev_priv(netdev); 259 270 rpriv = priv->ppriv; 260 271 lag_dev = info->upper_dev; ··· 276 293 unsigned long event, void *ptr) 277 294 { 278 295 struct net_device *netdev = netdev_notifier_info_to_dev(ptr); 296 + struct mlx5e_rep_priv *rpriv; 297 + struct mlx5e_rep_bond *bond; 298 + struct mlx5e_priv *priv; 299 + 300 + if (!mlx5e_rep_is_lag_netdev(netdev)) 301 + return NOTIFY_DONE; 302 + 303 + bond = container_of(nb, struct mlx5e_rep_bond, nb); 304 + priv = netdev_priv(netdev); 305 + rpriv = mlx5_eswitch_get_uplink_priv(priv->mdev->priv.eswitch, REP_ETH); 306 + /* Verify VF representor is on the same device of the bond handling the netevent. */ 307 + if (rpriv->uplink_priv.bond != bond) 308 + return NOTIFY_DONE; 279 309 280 310 switch (event) { 281 311 case NETDEV_CHANGELOWERSTATE:
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
··· 491 491 } 492 492 493 493 br_offloads->netdev_nb.notifier_call = mlx5_esw_bridge_switchdev_port_event; 494 - err = register_netdevice_notifier(&br_offloads->netdev_nb); 494 + err = register_netdevice_notifier_net(&init_net, &br_offloads->netdev_nb); 495 495 if (err) { 496 496 esw_warn(mdev, "Failed to register bridge offloads netdevice notifier (err=%d)\n", 497 497 err); ··· 509 509 err_register_swdev: 510 510 destroy_workqueue(br_offloads->wq); 511 511 err_alloc_wq: 512 + rtnl_lock(); 512 513 mlx5_esw_bridge_cleanup(esw); 514 + rtnl_unlock(); 513 515 } 514 516 515 517 void mlx5e_rep_bridge_cleanup(struct mlx5e_priv *priv) ··· 526 524 return; 527 525 528 526 cancel_delayed_work_sync(&br_offloads->update_work); 529 - unregister_netdevice_notifier(&br_offloads->netdev_nb); 527 + unregister_netdevice_notifier_net(&init_net, &br_offloads->netdev_nb); 530 528 unregister_switchdev_blocking_notifier(&br_offloads->nb_blk); 531 529 unregister_switchdev_notifier(&br_offloads->nb); 532 530 destroy_workqueue(br_offloads->wq);
+5
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 167 167 return pi; 168 168 } 169 169 170 + static inline u16 mlx5e_shampo_get_cqe_header_index(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) 171 + { 172 + return be16_to_cpu(cqe->shampo.header_entry_index) & (rq->mpwqe.shampo->hd_per_wq - 1); 173 + } 174 + 170 175 struct mlx5e_shampo_umr { 171 176 u16 len; 172 177 };
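The new helper masks the CQE header index into the ring: `hd_per_wq` is a power of two, so `index & (hd_per_wq - 1)` is a branch-free modulo. A stand-alone sketch of the same masking idiom (names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* For a power-of-two ring size, "index & (size - 1)" equals
 * "index % size" without a division - the idiom used by
 * mlx5e_shampo_get_cqe_header_index() above. */
static uint16_t ring_index(uint16_t raw, uint16_t ring_size_pow2)
{
	return raw & (ring_size_pow2 - 1);
}
```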
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 341 341 342 342 /* copy the inline part if required */ 343 343 if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) { 344 - memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE); 344 + memcpy(eseg->inline_hdr.start, xdptxd->data, sizeof(eseg->inline_hdr.start)); 345 345 eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE); 346 + memcpy(dseg, xdptxd->data + sizeof(eseg->inline_hdr.start), 347 + MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start)); 346 348 dma_len -= MLX5E_XDP_MIN_INLINE; 347 349 dma_addr += MLX5E_XDP_MIN_INLINE; 348 350 dseg++;
+11 -2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
··· 157 157 /* Tunnel mode */ 158 158 if (mode == XFRM_MODE_TUNNEL) { 159 159 eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2; 160 - eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2; 161 160 if (xo->proto == IPPROTO_IPV6) 162 161 eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6; 163 - if (inner_ip_hdr(skb)->protocol == IPPROTO_UDP) 162 + 163 + switch (xo->inner_ipproto) { 164 + case IPPROTO_UDP: 164 165 eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP; 166 + fallthrough; 167 + case IPPROTO_TCP: 168 + /* IP | ESP | IP | [TCP | UDP] */ 169 + eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2; 170 + break; 171 + default: 172 + break; 173 + } 165 174 return; 166 175 } 167 176
+6 -3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
··· 131 131 mlx5e_ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, 132 132 struct mlx5_wqe_eth_seg *eseg) 133 133 { 134 - struct xfrm_offload *xo = xfrm_offload(skb); 134 + u8 inner_ipproto; 135 135 136 136 if (!mlx5e_ipsec_eseg_meta(eseg)) 137 137 return false; 138 138 139 139 eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM; 140 - if (xo->inner_ipproto) { 141 - eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM | MLX5_ETH_WQE_L3_INNER_CSUM; 140 + inner_ipproto = xfrm_offload(skb)->inner_ipproto; 141 + if (inner_ipproto) { 142 + eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM; 143 + if (inner_ipproto == IPPROTO_TCP || inner_ipproto == IPPROTO_UDP) 144 + eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM; 142 145 } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { 143 146 eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM; 144 147 sq->stats->csum_partial_inner++;
+19 -11
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1117 1117 static void mlx5e_shampo_update_fin_psh_flags(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, 1118 1118 struct tcphdr *skb_tcp_hd) 1119 1119 { 1120 - u16 header_index = be16_to_cpu(cqe->shampo.header_entry_index); 1120 + u16 header_index = mlx5e_shampo_get_cqe_header_index(rq, cqe); 1121 1121 struct tcphdr *last_tcp_hd; 1122 1122 void *last_hd_addr; 1123 1123 ··· 1871 1871 return skb; 1872 1872 } 1873 1873 1874 - static void 1874 + static struct sk_buff * 1875 1875 mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, 1876 1876 struct mlx5_cqe64 *cqe, u16 header_index) 1877 1877 { ··· 1895 1895 skb = mlx5e_build_linear_skb(rq, hdr, frag_size, rx_headroom, head_size); 1896 1896 1897 1897 if (unlikely(!skb)) 1898 - return; 1898 + return NULL; 1899 1899 1900 1900 /* queue up for recycling/reuse */ 1901 1901 page_ref_inc(head->page); ··· 1907 1907 ALIGN(head_size, sizeof(long))); 1908 1908 if (unlikely(!skb)) { 1909 1909 rq->stats->buff_alloc_err++; 1910 - return; 1910 + return NULL; 1911 1911 } 1912 1912 1913 1913 prefetchw(skb->data); ··· 1918 1918 skb->tail += head_size; 1919 1919 skb->len += head_size; 1920 1920 } 1921 - rq->hw_gro_data->skb = skb; 1922 - NAPI_GRO_CB(skb)->count = 1; 1923 - skb_shinfo(skb)->gso_size = mpwrq_get_cqe_byte_cnt(cqe) - head_size; 1921 + return skb; 1924 1922 } 1925 1923 1926 1924 static void ··· 1971 1973 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) 1972 1974 { 1973 1975 u16 data_bcnt = mpwrq_get_cqe_byte_cnt(cqe) - cqe->shampo.header_size; 1974 - u16 header_index = be16_to_cpu(cqe->shampo.header_entry_index); 1976 + u16 header_index = mlx5e_shampo_get_cqe_header_index(rq, cqe); 1975 1977 u32 wqe_offset = be32_to_cpu(cqe->shampo.data_offset); 1976 1978 u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe); 1977 1979 u32 data_offset = wqe_offset & (PAGE_SIZE - 1); 1978 1980 u32 cqe_bcnt = mpwrq_get_cqe_byte_cnt(cqe); 1979 1981 u16 wqe_id = be16_to_cpu(cqe->wqe_id); 
1980 1982 u32 page_idx = wqe_offset >> PAGE_SHIFT; 1983 + u16 head_size = cqe->shampo.header_size; 1981 1984 struct sk_buff **skb = &rq->hw_gro_data->skb; 1982 1985 bool flush = cqe->shampo.flush; 1983 1986 bool match = cqe->shampo.match; ··· 2010 2011 } 2011 2012 2012 2013 if (!*skb) { 2013 - mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index); 2014 + if (likely(head_size)) 2015 + *skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index); 2016 + else 2017 + *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset, 2018 + page_idx); 2014 2019 if (unlikely(!*skb)) 2015 2020 goto free_hd_entry; 2021 + 2022 + NAPI_GRO_CB(*skb)->count = 1; 2023 + skb_shinfo(*skb)->gso_size = cqe_bcnt - head_size; 2016 2024 } else { 2017 2025 NAPI_GRO_CB(*skb)->count++; 2018 2026 if (NAPI_GRO_CB(*skb)->count == 2 && ··· 2033 2027 } 2034 2028 } 2035 2029 2036 - di = &wi->umr.dma_info[page_idx]; 2037 - mlx5e_fill_skb_data(*skb, rq, di, data_bcnt, data_offset); 2030 + if (likely(head_size)) { 2031 + di = &wi->umr.dma_info[page_idx]; 2032 + mlx5e_fill_skb_data(*skb, rq, di, data_bcnt, data_offset); 2033 + } 2038 2034 2039 2035 mlx5e_shampo_complete_rx_cqe(rq, cqe, cqe_bcnt, *skb); 2040 2036 if (flush)
+14 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1414 1414 if (err) 1415 1415 goto err_out; 1416 1416 1417 - if (!attr->chain && esw_attr->int_port) { 1417 + if (!attr->chain && esw_attr->int_port && 1418 + attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { 1418 1419 /* If decap route device is internal port, change the 1419 1420 * source vport value in reg_c0 back to uplink just in 1420 1421 * case the rule performs goto chain > 0. If we have a miss ··· 3189 3188 if (!(actions & 3190 3189 (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) { 3191 3190 NL_SET_ERR_MSG_MOD(extack, "Rule must have at least one forward/drop action"); 3191 + return false; 3192 + } 3193 + 3194 + if (!(~actions & 3195 + (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) { 3196 + NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action"); 3197 + return false; 3198 + } 3199 + 3200 + if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && 3201 + actions & MLX5_FLOW_CONTEXT_ACTION_DROP) { 3202 + NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported"); 3192 3203 return false; 3193 3204 } 3194 3205
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 208 208 int cpy1_sz = 2 * ETH_ALEN; 209 209 int cpy2_sz = ihs - cpy1_sz; 210 210 211 - memcpy(vhdr, skb->data, cpy1_sz); 211 + memcpy(&vhdr->addrs, skb->data, cpy1_sz); 212 212 vhdr->h_vlan_proto = skb->vlan_proto; 213 213 vhdr->h_vlan_TCI = cpu_to_be16(skb_vlan_tag_get(skb)); 214 214 memcpy(&vhdr->h_vlan_encapsulated_proto, skb->data + cpy1_sz, cpy2_sz);
+4
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
··· 1574 1574 { 1575 1575 struct mlx5_esw_bridge_offloads *br_offloads; 1576 1576 1577 + ASSERT_RTNL(); 1578 + 1577 1579 br_offloads = kvzalloc(sizeof(*br_offloads), GFP_KERNEL); 1578 1580 if (!br_offloads) 1579 1581 return ERR_PTR(-ENOMEM); ··· 1591 1589 void mlx5_esw_bridge_cleanup(struct mlx5_eswitch *esw) 1592 1590 { 1593 1591 struct mlx5_esw_bridge_offloads *br_offloads = esw->br_offloads; 1592 + 1593 + ASSERT_RTNL(); 1594 1594 1595 1595 if (!br_offloads) 1596 1596 return;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h
··· 21 21 __field(unsigned int, used) 22 22 ), 23 23 TP_fast_assign( 24 - strncpy(__entry->dev_name, 24 + strscpy(__entry->dev_name, 25 25 netdev_name(fdb->dev), 26 26 IFNAMSIZ); 27 27 memcpy(__entry->addr, fdb->key.addr, ETH_ALEN);
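The `strncpy` to `strscpy` switch above matters because `strncpy` does not guarantee NUL-termination when the source fills the buffer. A userspace sketch of the *contract* `strscpy` provides (this is not the kernel implementation, only its semantics; truncation is reported here as -1 rather than -E2BIG):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Always NUL-terminates and reports truncation, unlike strncpy(). */
static long strscpy_like(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;
	len = strnlen(src, size);
	if (len == size) {		/* src does not fit: truncate */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;
	}
	memcpy(dst, src, len + 1);	/* copy includes the NUL */
	return (long)len;
}
```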
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 132 132 { 133 133 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 134 134 135 - del_timer(&fw_reset->timer); 135 + del_timer_sync(&fw_reset->timer); 136 136 } 137 137 138 138 static void mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 121 121 122 122 u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains) 123 123 { 124 - if (!mlx5_chains_prios_supported(chains)) 125 - return 1; 126 - 127 124 if (mlx5_chains_ignore_flow_level_supported(chains)) 128 125 return UINT_MAX; 126 + 127 + if (!chains->dev->priv.eswitch || 128 + chains->dev->priv.eswitch->mode != MLX5_ESWITCH_OFFLOADS) 129 + return 1; 129 130 130 131 /* We should get here only for eswitch case */ 131 132 return FDB_TC_MAX_PRIO; ··· 212 211 create_chain_restore(struct fs_chain *chain) 213 212 { 214 213 struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch; 215 - char modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)]; 214 + u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; 216 215 struct mlx5_fs_chains *chains = chain->chains; 217 216 enum mlx5e_tc_attr_to_reg chain_to_reg; 218 217 struct mlx5_modify_hdr *mod_hdr;
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 406 406 407 407 switch (module_id) { 408 408 case MLX5_MODULE_ID_SFP: 409 - mlx5_sfp_eeprom_params_set(&query.i2c_address, &query.page, &query.offset); 409 + mlx5_sfp_eeprom_params_set(&query.i2c_address, &query.page, &offset); 410 410 break; 411 411 case MLX5_MODULE_ID_QSFP: 412 412 case MLX5_MODULE_ID_QSFP_PLUS: 413 413 case MLX5_MODULE_ID_QSFP28: 414 - mlx5_qsfp_eeprom_params_set(&query.i2c_address, &query.page, &query.offset); 414 + mlx5_qsfp_eeprom_params_set(&query.i2c_address, &query.page, &offset); 415 415 break; 416 416 default: 417 417 mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id); 418 418 return -EINVAL; 419 419 } 420 420 421 - if (query.offset + size > MLX5_EEPROM_PAGE_LENGTH) 421 + if (offset + size > MLX5_EEPROM_PAGE_LENGTH) 422 422 /* Cross pages read, read until offset 256 in low page */ 423 - size -= offset + size - MLX5_EEPROM_PAGE_LENGTH; 423 + size = MLX5_EEPROM_PAGE_LENGTH - offset; 424 424 425 425 query.size = size; 426 + query.offset = offset; 426 427 427 428 return mlx5_query_mcia(dev, &query, data); 428 429 }
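The port.c change replaces the buggy `size -= offset + size - PAGE_LENGTH` with a direct clamp so a read never runs past the 256-byte EEPROM page boundary. A minimal sketch of the clamp (the constant name is a stand-in for MLX5_EEPROM_PAGE_LENGTH):

```c
#include <assert.h>

#define PAGE_LEN 256	/* stand-in for MLX5_EEPROM_PAGE_LENGTH */

/* A read crossing the page boundary is shortened to end exactly at it. */
static int clamp_eeprom_read(int offset, int size)
{
	if (offset + size > PAGE_LEN)
		size = PAGE_LEN - offset;
	return size;
}
```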
+1 -1
drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
··· 145 145 skb_put(skb, byte_cnt - ETH_FCS_LEN); 146 146 eth_skb_pad(skb); 147 147 skb->protocol = eth_type_trans(skb, netdev); 148 - netif_rx(skb); 149 148 netdev->stats.rx_bytes += skb->len; 150 149 netdev->stats.rx_packets++; 150 + netif_rx(skb); 151 151 } 152 152 153 153 static int sparx5_inject(struct sparx5 *sparx5,
+4 -4
drivers/net/ethernet/smsc/smc911x.c
··· 1648 1648 return ret; 1649 1649 if ((ret=smc911x_ethtool_read_eeprom_byte(dev, &eebuf[i]))!=0) 1650 1650 return ret; 1651 - } 1651 + } 1652 1652 memcpy(data, eebuf+eeprom->offset, eeprom->len); 1653 1653 return 0; 1654 1654 } ··· 1667 1667 return ret; 1668 1668 /* write byte */ 1669 1669 if ((ret=smc911x_ethtool_write_eeprom_byte(dev, *data))!=0) 1670 - return ret; 1670 + return ret; 1671 1671 if ((ret=smc911x_ethtool_write_eeprom_cmd(dev, E2P_CMD_EPC_CMD_WRITE_, i ))!=0) 1672 1672 return ret; 1673 - } 1674 - return 0; 1673 + } 1674 + return 0; 1675 1675 } 1676 1676 1677 1677 static int smc911x_ethtool_geteeprom_len(struct net_device *dev)
+7 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
··· 49 49 void __iomem *reg; 50 50 u32 phy_intf_sel; 51 51 struct clk *phy_ref_clk; 52 + struct device *dev; 52 53 spinlock_t lock; /* lock to protect register update */ 53 54 }; 54 55 55 56 static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed) 56 57 { 57 58 struct visconti_eth *dwmac = priv; 58 - unsigned int val, clk_sel_val; 59 + struct net_device *netdev = dev_get_drvdata(dwmac->dev); 60 + unsigned int val, clk_sel_val = 0; 59 61 unsigned long flags; 60 62 61 63 spin_lock_irqsave(&dwmac->lock, flags); ··· 87 85 break; 88 86 default: 89 87 /* No bit control */ 90 - break; 88 + netdev_err(netdev, "Unsupported speed request (%d)", speed); 89 + spin_unlock_irqrestore(&dwmac->lock, flags); 90 + return; 91 91 } 92 92 93 93 writel(val, dwmac->reg + MAC_CTRL_REG); ··· 233 229 234 230 spin_lock_init(&dwmac->lock); 235 231 dwmac->reg = stmmac_res.addr; 232 + dwmac->dev = &pdev->dev; 236 233 plat_dat->bsp_priv = dwmac; 237 234 plat_dat->fix_mac_speed = visconti_eth_fix_mac_speed; 238 235
+1
drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
··· 150 150 151 151 #define NUM_DWMAC100_DMA_REGS 9 152 152 #define NUM_DWMAC1000_DMA_REGS 23 153 + #define NUM_DWMAC4_DMA_REGS 27 153 154 154 155 void dwmac_enable_dma_transmission(void __iomem *ioaddr); 155 156 void dwmac_enable_dma_irq(void __iomem *ioaddr, u32 chan, bool rx, bool tx);
+17 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
··· 21 21 #include "dwxgmac2.h" 22 22 23 23 #define REG_SPACE_SIZE 0x1060 24 + #define GMAC4_REG_SPACE_SIZE 0x116C 24 25 #define MAC100_ETHTOOL_NAME "st_mac100" 25 26 #define GMAC_ETHTOOL_NAME "st_gmac" 26 27 #define XGMAC_ETHTOOL_NAME "st_xgmac" 28 + 29 + /* Same as DMA_CHAN_BASE_ADDR defined in dwmac4_dma.h 30 + * 31 + * It is here because dwmac_dma.h and dwmac4_dam.h can not be included at the 32 + * same time due to the conflicting macro names. 33 + */ 34 + #define GMAC4_DMA_CHAN_BASE_ADDR 0x00001100 27 35 28 36 #define ETHTOOL_DMA_OFFSET 55 29 37 ··· 442 434 443 435 if (priv->plat->has_xgmac) 444 436 return XGMAC_REGSIZE * 4; 437 + else if (priv->plat->has_gmac4) 438 + return GMAC4_REG_SPACE_SIZE; 445 439 return REG_SPACE_SIZE; 446 440 } 447 441 ··· 456 446 stmmac_dump_mac_regs(priv, priv->hw, reg_space); 457 447 stmmac_dump_dma_regs(priv, priv->ioaddr, reg_space); 458 448 459 - if (!priv->plat->has_xgmac) { 460 - /* Copy DMA registers to where ethtool expects them */ 449 + /* Copy DMA registers to where ethtool expects them */ 450 + if (priv->plat->has_gmac4) { 451 + /* GMAC4 dumps its DMA registers at its DMA_CHAN_BASE_ADDR */ 452 + memcpy(&reg_space[ETHTOOL_DMA_OFFSET], 453 + &reg_space[GMAC4_DMA_CHAN_BASE_ADDR / 4], 454 + NUM_DWMAC4_DMA_REGS * 4); 455 + } else if (!priv->plat->has_xgmac) { 461 456 memcpy(&reg_space[ETHTOOL_DMA_OFFSET], 462 457 &reg_space[DMA_BUS_MODE / 4], 463 458 NUM_DWMAC1000_DMA_REGS * 4);
+11 -6
drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
··· 145 145 146 146 static void get_systime(void __iomem *ioaddr, u64 *systime) 147 147 { 148 - u64 ns; 148 + u64 ns, sec0, sec1; 149 149 150 - /* Get the TSSS value */ 151 - ns = readl(ioaddr + PTP_STNSR); 152 - /* Get the TSS and convert sec time value to nanosecond */ 153 - ns += readl(ioaddr + PTP_STSR) * 1000000000ULL; 150 + /* Get the TSS value */ 151 + sec1 = readl_relaxed(ioaddr + PTP_STSR); 152 + do { 153 + sec0 = sec1; 154 + /* Get the TSSS value */ 155 + ns = readl_relaxed(ioaddr + PTP_STNSR); 156 + /* Get the TSS value */ 157 + sec1 = readl_relaxed(ioaddr + PTP_STSR); 158 + } while (sec0 != sec1); 154 159 155 160 if (systime) 156 - *systime = ns; 161 + *systime = ns + (sec1 * 1000000000ULL); 157 162 } 158 163 159 164 static void get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
+4 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7252 7252 7253 7253 netdev_info(priv->dev, "%s: removing driver", __func__); 7254 7254 7255 + pm_runtime_get_sync(dev); 7256 + pm_runtime_disable(dev); 7257 + pm_runtime_put_noidle(dev); 7258 + 7255 7259 stmmac_stop_all_dma(priv); 7256 7260 stmmac_mac_set(priv, priv->ioaddr, false); 7257 7261 netif_carrier_off(ndev); ··· 7274 7270 if (priv->plat->stmmac_rst) 7275 7271 reset_control_assert(priv->plat->stmmac_rst); 7276 7272 reset_control_assert(priv->plat->stmmac_ahb_rst); 7277 - pm_runtime_put(dev); 7278 - pm_runtime_disable(dev); 7279 7273 if (priv->hw->pcs != STMMAC_PCS_TBI && 7280 7274 priv->hw->pcs != STMMAC_PCS_RTBI) 7281 7275 stmmac_mdio_unregister(ndev);
+11 -2
drivers/net/ieee802154/at86rf230.c
··· 100 100 unsigned long cal_timeout; 101 101 bool is_tx; 102 102 bool is_tx_from_off; 103 + bool was_tx; 103 104 u8 tx_retry; 104 105 struct sk_buff *tx_skb; 105 106 struct at86rf230_state_change tx; ··· 344 343 if (ctx->free) 345 344 kfree(ctx); 346 345 347 - ieee802154_wake_queue(lp->hw); 346 + if (lp->was_tx) { 347 + lp->was_tx = 0; 348 + dev_kfree_skb_any(lp->tx_skb); 349 + ieee802154_wake_queue(lp->hw); 350 + } 348 351 } 349 352 350 353 static void ··· 357 352 struct at86rf230_state_change *ctx = context; 358 353 struct at86rf230_local *lp = ctx->lp; 359 354 360 - lp->is_tx = 0; 355 + if (lp->is_tx) { 356 + lp->was_tx = 1; 357 + lp->is_tx = 0; 358 + } 359 + 361 360 at86rf230_async_state_change(lp, ctx, STATE_RX_AACK_ON, 362 361 at86rf230_async_error_recover_complete); 363 362 }
+1
drivers/net/ieee802154/ca8210.c
··· 1771 1771 status 1772 1772 ); 1773 1773 if (status != MAC_TRANSACTION_OVERFLOW) { 1774 + dev_kfree_skb_any(priv->tx_skb); 1774 1775 ieee802154_wake_queue(priv->hw); 1775 1776 return 0; 1776 1777 }
+1
drivers/net/ieee802154/mac802154_hwsim.c
··· 786 786 goto err_pib; 787 787 } 788 788 789 + pib->channel = 13; 789 790 rcu_assign_pointer(phy->pib, pib); 790 791 phy->idx = idx; 791 792 INIT_LIST_HEAD(&phy->edges);
+2 -2
drivers/net/ieee802154/mcr20a.c
··· 976 976 dev_dbg(printdev(lp), "%s\n", __func__); 977 977 978 978 phy->symbol_duration = 16; 979 - phy->lifs_period = 40; 980 - phy->sifs_period = 12; 979 + phy->lifs_period = 40 * phy->symbol_duration; 980 + phy->sifs_period = 12 * phy->symbol_duration; 981 981 982 982 hw->flags = IEEE802154_HW_TX_OMIT_CKSUM | 983 983 IEEE802154_HW_AFILT |
+52
drivers/net/ipa/ipa_power.c
··· 11 11 #include <linux/pm_runtime.h>
12 12 #include <linux/bitops.h>
13 13 
14 + #include "linux/soc/qcom/qcom_aoss.h"
15 + 
14 16 #include "ipa.h"
15 17 #include "ipa_power.h"
16 18 #include "ipa_endpoint.h"
··· 66 64 * struct ipa_power - IPA power management information
67 65 * @dev: IPA device pointer
68 66 * @core: IPA core clock
67 + * @qmp: QMP handle for AOSS communication
69 68 * @spinlock: Protects modem TX queue enable/disable
70 69 * @flags: Boolean state flags
71 70 * @interconnect_count: Number of elements in interconnect[]
··· 75 72 struct ipa_power {
76 73 struct device *dev;
77 74 struct clk *core;
75 + struct qmp *qmp;
78 76 spinlock_t spinlock; /* used with STOPPED/STARTED power flags */
79 77 DECLARE_BITMAP(flags, IPA_POWER_FLAG_COUNT);
80 78 u32 interconnect_count;
··· 386 382 clear_bit(IPA_POWER_FLAG_STARTED, ipa->power->flags);
387 383 }
388 384 
385 + static int ipa_power_retention_init(struct ipa_power *power)
386 + {
387 + struct qmp *qmp = qmp_get(power->dev);
388 + 
389 + if (IS_ERR(qmp)) {
390 + if (PTR_ERR(qmp) == -EPROBE_DEFER)
391 + return -EPROBE_DEFER;
392 + 
393 + /* We assume any other error means it's not defined/needed */
394 + qmp = NULL;
395 + }
396 + power->qmp = qmp;
397 + 
398 + return 0;
399 + }
400 + 
401 + static void ipa_power_retention_exit(struct ipa_power *power)
402 + {
403 + qmp_put(power->qmp);
404 + power->qmp = NULL;
405 + }
406 + 
407 + /* Control register retention on power collapse */
408 + void ipa_power_retention(struct ipa *ipa, bool enable)
409 + {
410 + static const char fmt[] = "{ class: bcm, res: ipa_pc, val: %c }";
411 + struct ipa_power *power = ipa->power;
412 + char buf[36]; /* Exactly enough for fmt[]; size a multiple of 4 */
413 + int ret;
414 + 
415 + if (!power->qmp)
416 + return; /* Not needed on this platform */
417 + 
418 + (void)snprintf(buf, sizeof(buf), fmt, enable ? '1' : '0');
419 + 
420 + ret = qmp_send(power->qmp, buf, sizeof(buf));
421 + if (ret)
422 + dev_err(power->dev, "error %d sending QMP %sable request\n",
423 + ret, enable ? "en" : "dis");
424 + }
425 + 
389 426 int ipa_power_setup(struct ipa *ipa)
390 427 {
391 428 int ret;
··· 483 438 if (ret)
484 439 goto err_kfree;
485 440 
441 + ret = ipa_power_retention_init(power);
442 + if (ret)
443 + goto err_interconnect_exit;
444 + 
486 445 pm_runtime_set_autosuspend_delay(dev, IPA_AUTOSUSPEND_DELAY);
487 446 pm_runtime_use_autosuspend(dev);
488 447 pm_runtime_enable(dev);
489 448 
490 449 return power;
491 450 
451 + err_interconnect_exit:
452 + ipa_interconnect_exit(power);
492 453 err_kfree:
493 454 kfree(power);
494 455 err_clk_put:
··· 511 460 
512 461 pm_runtime_disable(dev);
513 462 pm_runtime_dont_use_autosuspend(dev);
463 + ipa_power_retention_exit(power);
514 464 ipa_interconnect_exit(power);
515 465 kfree(power);
516 466 clk_put(clk);
+7
drivers/net/ipa/ipa_power.h
··· 41 41 void ipa_power_modem_queue_active(struct ipa *ipa); 42 42 43 43 /** 44 + * ipa_power_retention() - Control register retention on power collapse 45 + * @ipa: IPA pointer 46 + * @enable: Whether retention should be enabled or disabled 47 + */ 48 + void ipa_power_retention(struct ipa *ipa, bool enable); 49 + 50 + /** 44 51 * ipa_power_setup() - Set up IPA power management 45 52 * @ipa: IPA pointer 46 53 *
+5
drivers/net/ipa/ipa_uc.c
··· 11 11 12 12 #include "ipa.h" 13 13 #include "ipa_uc.h" 14 + #include "ipa_power.h" 14 15 15 16 /** 16 17 * DOC: The IPA embedded microcontroller ··· 155 154 case IPA_UC_RESPONSE_INIT_COMPLETED: 156 155 if (ipa->uc_powered) { 157 156 ipa->uc_loaded = true; 157 + ipa_power_retention(ipa, true); 158 158 pm_runtime_mark_last_busy(dev); 159 159 (void)pm_runtime_put_autosuspend(dev); 160 160 ipa->uc_powered = false; ··· 186 184 187 185 ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_1); 188 186 ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_0); 187 + if (ipa->uc_loaded) 188 + ipa_power_retention(ipa, false); 189 + 189 190 if (!ipa->uc_powered) 190 191 return; 191 192
+21 -12
drivers/net/macsec.c
··· 3870 3870 struct macsec_dev *macsec = macsec_priv(dev); 3871 3871 struct net_device *real_dev = macsec->real_dev; 3872 3872 3873 + /* If h/w offloading is available, propagate to the device */ 3874 + if (macsec_is_offloaded(macsec)) { 3875 + const struct macsec_ops *ops; 3876 + struct macsec_context ctx; 3877 + 3878 + ops = macsec_get_ops(netdev_priv(dev), &ctx); 3879 + if (ops) { 3880 + ctx.secy = &macsec->secy; 3881 + macsec_offload(ops->mdo_del_secy, &ctx); 3882 + } 3883 + } 3884 + 3873 3885 unregister_netdevice_queue(dev, head); 3874 3886 list_del_rcu(&macsec->secys); 3875 3887 macsec_del_dev(macsec); ··· 3895 3883 struct macsec_dev *macsec = macsec_priv(dev); 3896 3884 struct net_device *real_dev = macsec->real_dev; 3897 3885 struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev); 3898 - 3899 - /* If h/w offloading is available, propagate to the device */ 3900 - if (macsec_is_offloaded(macsec)) { 3901 - const struct macsec_ops *ops; 3902 - struct macsec_context ctx; 3903 - 3904 - ops = macsec_get_ops(netdev_priv(dev), &ctx); 3905 - if (ops) { 3906 - ctx.secy = &macsec->secy; 3907 - macsec_offload(ops->mdo_del_secy, &ctx); 3908 - } 3909 - } 3910 3886 3911 3887 macsec_common_dellink(dev, head); 3912 3888 ··· 4017 4017 if (macsec->offload != MACSEC_OFFLOAD_OFF && 4018 4018 !macsec_check_offload(macsec->offload, macsec)) 4019 4019 return -EOPNOTSUPP; 4020 + 4021 + /* send_sci must be set to true when transmit sci explicitly is set */ 4022 + if ((data && data[IFLA_MACSEC_SCI]) && 4023 + (data && data[IFLA_MACSEC_INC_SCI])) { 4024 + u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]); 4025 + 4026 + if (!send_sci) 4027 + return -EINVAL; 4028 + } 4020 4029 4021 4030 if (data && data[IFLA_MACSEC_ICV_LEN]) 4022 4031 icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+13 -13
drivers/net/phy/at803x.c
··· 1688 1688 if (ret < 0) 1689 1689 return ret; 1690 1690 1691 - if (phydev->link && phydev->speed == SPEED_2500) 1692 - phydev->interface = PHY_INTERFACE_MODE_2500BASEX; 1693 - else 1694 - phydev->interface = PHY_INTERFACE_MODE_SMII; 1695 - 1696 - /* generate seed as a lower random value to make PHY linked as SLAVE easily, 1697 - * except for master/slave configuration fault detected. 1698 - * the reason for not putting this code into the function link_change_notify is 1699 - * the corner case where the link partner is also the qca8081 PHY and the seed 1700 - * value is configured as the same value, the link can't be up and no link change 1701 - * occurs. 1702 - */ 1703 - if (!phydev->link) { 1691 + if (phydev->link) { 1692 + if (phydev->speed == SPEED_2500) 1693 + phydev->interface = PHY_INTERFACE_MODE_2500BASEX; 1694 + else 1695 + phydev->interface = PHY_INTERFACE_MODE_SGMII; 1696 + } else { 1697 + /* generate seed as a lower random value to make PHY linked as SLAVE easily, 1698 + * except for master/slave configuration fault detected. 1699 + * the reason for not putting this code into the function link_change_notify is 1700 + * the corner case where the link partner is also the qca8081 PHY and the seed 1701 + * value is configured as the same value, the link can't be up and no link change 1702 + * occurs. 1703 + */ 1704 1704 if (phydev->master_slave_state == MASTER_SLAVE_STATE_ERR) { 1705 1705 qca808x_phy_ms_seed_enable(phydev, false); 1706 1706 } else {
+3 -3
drivers/net/usb/ipheth.c
··· 121 121 if (tx_buf == NULL) 122 122 goto free_rx_urb; 123 123 124 - rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE, 124 + rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, 125 125 GFP_KERNEL, &rx_urb->transfer_dma); 126 126 if (rx_buf == NULL) 127 127 goto free_tx_buf; ··· 146 146 147 147 static void ipheth_free_urbs(struct ipheth_device *iphone) 148 148 { 149 - usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->rx_buf, 149 + usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, iphone->rx_buf, 150 150 iphone->rx_urb->transfer_dma); 151 151 usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->tx_buf, 152 152 iphone->tx_urb->transfer_dma); ··· 317 317 318 318 usb_fill_bulk_urb(dev->rx_urb, udev, 319 319 usb_rcvbulkpipe(udev, dev->bulk_in), 320 - dev->rx_buf, IPHETH_BUF_SIZE, 320 + dev->rx_buf, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, 321 321 ipheth_rcvbulk_callback, 322 322 dev); 323 323 dev->rx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+8 -1
drivers/nvme/host/core.c
··· 4253 4253 container_of(work, struct nvme_ctrl, async_event_work); 4254 4254 4255 4255 nvme_aen_uevent(ctrl); 4256 - ctrl->ops->submit_async_event(ctrl); 4256 + 4257 + /* 4258 + * The transport drivers must guarantee AER submission here is safe by 4259 + * flushing ctrl async_event_work after changing the controller state 4260 + * from LIVE and before freeing the admin queue. 4261 + */ 4262 + if (ctrl->state == NVME_CTRL_LIVE) 4263 + ctrl->ops->submit_async_event(ctrl); 4257 4264 } 4258 4265 4259 4266 static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
+1
drivers/nvme/host/fabrics.h
··· 170 170 struct nvmf_ctrl_options *opts) 171 171 { 172 172 if (ctrl->state == NVME_CTRL_DELETING || 173 + ctrl->state == NVME_CTRL_DELETING_NOIO || 173 174 ctrl->state == NVME_CTRL_DEAD || 174 175 strcmp(opts->subsysnqn, ctrl->opts->subsysnqn) || 175 176 strcmp(opts->host->nqn, ctrl->opts->host->nqn) ||
+1
drivers/nvme/host/rdma.c
··· 1200 1200 struct nvme_rdma_ctrl, err_work); 1201 1201 1202 1202 nvme_stop_keep_alive(&ctrl->ctrl); 1203 + flush_work(&ctrl->ctrl.async_event_work); 1203 1204 nvme_rdma_teardown_io_queues(ctrl, false); 1204 1205 nvme_start_queues(&ctrl->ctrl); 1205 1206 nvme_rdma_teardown_admin_queue(ctrl, false);
+1
drivers/nvme/host/tcp.c
··· 2096 2096 struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl; 2097 2097 2098 2098 nvme_stop_keep_alive(ctrl); 2099 + flush_work(&ctrl->async_event_work); 2099 2100 nvme_tcp_teardown_io_queues(ctrl, false); 2100 2101 /* unquiesce to fail fast pending requests */ 2101 2102 nvme_start_queues(ctrl);
+42 -43
drivers/pci/controller/cadence/pci-j721e.c
··· 356 356 const struct j721e_pcie_data *data;
357 357 struct cdns_pcie *cdns_pcie;
358 358 struct j721e_pcie *pcie;
359 - struct cdns_pcie_rc *rc;
360 - struct cdns_pcie_ep *ep;
359 + struct cdns_pcie_rc *rc = NULL;
360 + struct cdns_pcie_ep *ep = NULL;
361 361 struct gpio_desc *gpiod;
362 362 void __iomem *base;
363 363 struct clk *clk;
··· 375 375 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
376 376 if (!pcie)
377 377 return -ENOMEM;
378 378 
379 + switch (mode) {
380 + case PCI_MODE_RC:
381 + if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST))
382 + return -ENODEV;
383 + 
384 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
385 + if (!bridge)
386 + return -ENOMEM;
387 + 
388 + if (!data->byte_access_allowed)
389 + bridge->ops = &cdns_ti_pcie_host_ops;
390 + rc = pci_host_bridge_priv(bridge);
391 + rc->quirk_retrain_flag = data->quirk_retrain_flag;
392 + rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
393 + 
394 + cdns_pcie = &rc->pcie;
395 + cdns_pcie->dev = dev;
396 + cdns_pcie->ops = &j721e_pcie_ops;
397 + pcie->cdns_pcie = cdns_pcie;
398 + break;
399 + case PCI_MODE_EP:
400 + if (!IS_ENABLED(CONFIG_PCIE_CADENCE_EP))
401 + return -ENODEV;
402 + 
403 + ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
404 + if (!ep)
405 + return -ENOMEM;
406 + 
407 + ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
408 + 
409 + cdns_pcie = &ep->pcie;
410 + cdns_pcie->dev = dev;
411 + cdns_pcie->ops = &j721e_pcie_ops;
412 + pcie->cdns_pcie = cdns_pcie;
413 + break;
414 + default:
415 + dev_err(dev, "INVALID device type %d\n", mode);
416 + return 0;
417 + }
378 418 
379 419 pcie->mode = mode;
380 420 pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;
··· 466 426 
467 427 switch (mode) {
468 428 case PCI_MODE_RC:
469 - if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST)) {
470 - ret = -ENODEV;
471 - goto err_get_sync;
472 - }
473 - 
474 - bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
475 - if (!bridge) {
476 - ret = -ENOMEM;
477 - goto err_get_sync;
478 - }
479 - 
480 - if (!data->byte_access_allowed)
481 - bridge->ops = &cdns_ti_pcie_host_ops;
482 - rc = pci_host_bridge_priv(bridge);
483 - rc->quirk_retrain_flag = data->quirk_retrain_flag;
484 - rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
485 - 
486 - cdns_pcie = &rc->pcie;
487 - cdns_pcie->dev = dev;
488 - cdns_pcie->ops = &j721e_pcie_ops;
489 - pcie->cdns_pcie = cdns_pcie;
490 - 
491 429 gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
492 430 if (IS_ERR(gpiod)) {
493 431 ret = PTR_ERR(gpiod);
··· 515 497 
516 498 break;
517 499 case PCI_MODE_EP:
518 - if (!IS_ENABLED(CONFIG_PCIE_CADENCE_EP)) {
519 - ret = -ENODEV;
520 - goto err_get_sync;
521 - }
522 - 
523 - ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
524 - if (!ep) {
525 - ret = -ENOMEM;
526 - goto err_get_sync;
527 - }
528 - ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
529 - 
530 - cdns_pcie = &ep->pcie;
531 - cdns_pcie->dev = dev;
532 - cdns_pcie->ops = &j721e_pcie_ops;
533 - pcie->cdns_pcie = cdns_pcie;
534 - 
535 500 ret = cdns_pcie_init_phy(dev, cdns_pcie);
536 501 if (ret) {
537 502 dev_err(dev, "Failed to init phy\n");
··· 526 525 goto err_pcie_setup;
527 526 
528 527 break;
529 - default:
530 - dev_err(dev, "INVALID device type %d\n", mode);
531 528 }
532 529 
533 530 return 0;
+18 -13
drivers/pci/controller/dwc/pcie-kirin.c
··· 756 756 return 0; 757 757 } 758 758 759 + struct kirin_pcie_data { 760 + enum pcie_kirin_phy_type phy_type; 761 + }; 762 + 763 + static const struct kirin_pcie_data kirin_960_data = { 764 + .phy_type = PCIE_KIRIN_INTERNAL_PHY, 765 + }; 766 + 767 + static const struct kirin_pcie_data kirin_970_data = { 768 + .phy_type = PCIE_KIRIN_EXTERNAL_PHY, 769 + }; 770 + 759 771 static const struct of_device_id kirin_pcie_match[] = { 760 - { 761 - .compatible = "hisilicon,kirin960-pcie", 762 - .data = (void *)PCIE_KIRIN_INTERNAL_PHY 763 - }, 764 - { 765 - .compatible = "hisilicon,kirin970-pcie", 766 - .data = (void *)PCIE_KIRIN_EXTERNAL_PHY 767 - }, 772 + { .compatible = "hisilicon,kirin960-pcie", .data = &kirin_960_data }, 773 + { .compatible = "hisilicon,kirin970-pcie", .data = &kirin_970_data }, 768 774 {}, 769 775 }; 770 776 771 777 static int kirin_pcie_probe(struct platform_device *pdev) 772 778 { 773 - enum pcie_kirin_phy_type phy_type; 774 779 struct device *dev = &pdev->dev; 780 + const struct kirin_pcie_data *data; 775 781 struct kirin_pcie *kirin_pcie; 776 782 struct dw_pcie *pci; 777 783 int ret; ··· 787 781 return -EINVAL; 788 782 } 789 783 790 - phy_type = (long)of_device_get_match_data(dev); 791 - if (!phy_type) { 784 + data = of_device_get_match_data(dev); 785 + if (!data) { 792 786 dev_err(dev, "OF data missing\n"); 793 787 return -EINVAL; 794 788 } 795 - 796 789 797 790 kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL); 798 791 if (!kirin_pcie) ··· 805 800 pci->ops = &kirin_dw_pcie_ops; 806 801 pci->pp.ops = &kirin_pcie_host_ops; 807 802 kirin_pcie->pci = pci; 808 - kirin_pcie->type = phy_type; 803 + kirin_pcie->type = data->phy_type; 809 804 810 805 ret = kirin_pcie_get_resource(kirin_pcie, pdev); 811 806 if (ret)
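The kirin hunk replaces an enum cast through the `of_device_id` `.data` pointer with pointers to const per-variant structs, which scales cleanly when more per-SoC fields are added later. A sketch of the same lookup pattern outside the kernel (table contents copied from the hunk, helper names hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum phy_type { PHY_INTERNAL = 1, PHY_EXTERNAL = 2 };

/* Per-variant data: extensible without any more pointer casts */
struct variant_data { enum phy_type phy_type; };

static const struct variant_data v960 = { .phy_type = PHY_INTERNAL };
static const struct variant_data v970 = { .phy_type = PHY_EXTERNAL };

struct match { const char *compatible; const void *data; };

static const struct match match_table[] = {
	{ "hisilicon,kirin960-pcie", &v960 },
	{ "hisilicon,kirin970-pcie", &v970 },
	{ NULL, NULL },
};

/* Analogous to of_device_get_match_data(): returns .data or NULL */
static const struct variant_data *get_match_data(const char *compatible)
{
	for (const struct match *m = match_table; m->compatible; m++)
		if (!strcmp(m->compatible, compatible))
			return m->data;
	return NULL;
}
```

Probe then reads `data->phy_type` instead of decoding a casted integer, and a NULL return cleanly signals missing OF data.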
+1 -1
drivers/pinctrl/Makefile
··· 42 42 obj-$(CONFIG_PINCTRL_RK805) += pinctrl-rk805.o 43 43 obj-$(CONFIG_PINCTRL_ROCKCHIP) += pinctrl-rockchip.o 44 44 obj-$(CONFIG_PINCTRL_SINGLE) += pinctrl-single.o 45 + obj-$(CONFIG_PINCTRL_ST) += pinctrl-st.o 45 46 obj-$(CONFIG_PINCTRL_STARFIVE) += pinctrl-starfive.o 46 47 obj-$(CONFIG_PINCTRL_STMFX) += pinctrl-stmfx.o 47 - obj-$(CONFIG_PINCTRL_ST) += pinctrl-st.o 48 48 obj-$(CONFIG_PINCTRL_SX150X) += pinctrl-sx150x.o 49 49 obj-$(CONFIG_PINCTRL_TB10X) += pinctrl-tb10x.o 50 50 obj-$(CONFIG_PINCTRL_THUNDERBAY) += pinctrl-thunderbay.o
+1
drivers/pinctrl/bcm/Kconfig
··· 35 35 select PINCONF 36 36 select GENERIC_PINCONF 37 37 select GPIOLIB 38 + select REGMAP 38 39 select GPIO_REGMAP 39 40 40 41 config PINCTRL_BCM6318
+15 -8
drivers/pinctrl/bcm/pinctrl-bcm2835.c
··· 1269 1269 sizeof(*girq->parents), 1270 1270 GFP_KERNEL); 1271 1271 if (!girq->parents) { 1272 - pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1273 - return -ENOMEM; 1272 + err = -ENOMEM; 1273 + goto out_remove; 1274 1274 } 1275 1275 1276 1276 if (is_7211) { 1277 1277 pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS, 1278 1278 sizeof(*pc->wake_irq), 1279 1279 GFP_KERNEL); 1280 - if (!pc->wake_irq) 1281 - return -ENOMEM; 1280 + if (!pc->wake_irq) { 1281 + err = -ENOMEM; 1282 + goto out_remove; 1283 + } 1282 1284 } 1283 1285 1284 1286 /* ··· 1308 1306 1309 1307 len = strlen(dev_name(pc->dev)) + 16; 1310 1308 name = devm_kzalloc(pc->dev, len, GFP_KERNEL); 1311 - if (!name) 1312 - return -ENOMEM; 1309 + if (!name) { 1310 + err = -ENOMEM; 1311 + goto out_remove; 1312 + } 1313 1313 1314 1314 snprintf(name, len, "%s:bank%d", dev_name(pc->dev), i); 1315 1315 ··· 1330 1326 err = gpiochip_add_data(&pc->gpio_chip, pc); 1331 1327 if (err) { 1332 1328 dev_err(dev, "could not add GPIO chip\n"); 1333 - pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1334 - return err; 1329 + goto out_remove; 1335 1330 } 1336 1331 1337 1332 return 0; 1333 + 1334 + out_remove: 1335 + pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1336 + return err; 1338 1337 } 1339 1338 1340 1339 static struct platform_driver bcm2835_pinctrl_driver = {
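The bcm2835 hunk collapses four copies of `pinctrl_remove_gpio_range()` into a single `out_remove` label. A user-space sketch of that single-exit unwind pattern, with `malloc` standing in for the kernel allocations (names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

static int range_registered;

static void remove_range(void) { range_registered = 0; }

/*
 * fail_at injects a failure after the range is registered; every such
 * failure funnels through one label that undoes the registration,
 * instead of repeating the cleanup call at each error site.
 */
static int setup(int fail_at)
{
	void *parents = NULL, *name = NULL;
	int err = 0;

	range_registered = 1;		/* resource needing cleanup */

	parents = malloc(16);
	if (!parents || fail_at == 1) { err = -1; goto out_remove; }

	name = malloc(32);
	if (!name || fail_at == 2) { err = -2; goto out_remove; }

	free(parents);
	free(name);
	return 0;			/* success keeps the range registered */

out_remove:
	free(parents);			/* free(NULL) is a no-op */
	free(name);
	remove_range();
	return err;
}
```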
+3 -2
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1471 1471 1472 1472 offset = cctx->intr_lines[intr_line]; 1473 1473 if (offset == CHV_INVALID_HWIRQ) { 1474 - dev_err(dev, "interrupt on unused interrupt line %u\n", intr_line); 1475 - continue; 1474 + dev_warn_once(dev, "interrupt on unmapped interrupt line %u\n", intr_line); 1475 + /* Some boards expect hwirq 0 to trigger in this case */ 1476 + offset = 0; 1476 1477 } 1477 1478 1478 1479 generic_handle_domain_irq(gc->irq.domain, offset);
+36 -28
drivers/pinctrl/intel/pinctrl-intel.c
··· 451 451 value &= ~PADCFG0_PMODE_MASK; 452 452 value |= PADCFG0_PMODE_GPIO; 453 453 454 - /* Disable input and output buffers */ 455 - value |= PADCFG0_GPIORXDIS; 454 + /* Disable TX buffer and enable RX (this will be input) */ 455 + value &= ~PADCFG0_GPIORXDIS; 456 456 value |= PADCFG0_GPIOTXDIS; 457 457 458 458 /* Disable SCI/SMI/NMI generation */ ··· 496 496 } 497 497 498 498 intel_gpio_set_gpio_mode(padcfg0); 499 - 500 - /* Disable TX buffer and enable RX (this will be input) */ 501 - __intel_gpio_set_direction(padcfg0, true); 502 499 503 500 raw_spin_unlock_irqrestore(&pctrl->lock, flags); 504 501 ··· 1112 1115 1113 1116 intel_gpio_set_gpio_mode(reg); 1114 1117 1115 - /* Disable TX buffer and enable RX (this will be input) */ 1116 - __intel_gpio_set_direction(reg, true); 1117 - 1118 1118 value = readl(reg); 1119 1119 1120 1120 value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV); ··· 1208 1214 } 1209 1215 1210 1216 return IRQ_RETVAL(ret); 1217 + } 1218 + 1219 + static void intel_gpio_irq_init(struct intel_pinctrl *pctrl) 1220 + { 1221 + int i; 1222 + 1223 + for (i = 0; i < pctrl->ncommunities; i++) { 1224 + const struct intel_community *community; 1225 + void __iomem *base; 1226 + unsigned int gpp; 1227 + 1228 + community = &pctrl->communities[i]; 1229 + base = community->regs; 1230 + 1231 + for (gpp = 0; gpp < community->ngpps; gpp++) { 1232 + /* Mask and clear all interrupts */ 1233 + writel(0, base + community->ie_offset + gpp * 4); 1234 + writel(0xffff, base + community->is_offset + gpp * 4); 1235 + } 1236 + } 1237 + } 1238 + 1239 + static int intel_gpio_irq_init_hw(struct gpio_chip *gc) 1240 + { 1241 + struct intel_pinctrl *pctrl = gpiochip_get_data(gc); 1242 + 1243 + /* 1244 + * Make sure the interrupt lines are in a proper state before 1245 + * further configuration. 
1246 + */ 1247 + intel_gpio_irq_init(pctrl); 1248 + 1249 + return 0; 1211 1250 } 1212 1251 1213 1252 static int intel_gpio_add_community_ranges(struct intel_pinctrl *pctrl, ··· 1347 1320 girq->num_parents = 0; 1348 1321 girq->default_type = IRQ_TYPE_NONE; 1349 1322 girq->handler = handle_bad_irq; 1323 + girq->init_hw = intel_gpio_irq_init_hw; 1350 1324 1351 1325 ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl); 1352 1326 if (ret) { ··· 1722 1694 return 0; 1723 1695 } 1724 1696 EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq); 1725 - 1726 - static void intel_gpio_irq_init(struct intel_pinctrl *pctrl) 1727 - { 1728 - size_t i; 1729 - 1730 - for (i = 0; i < pctrl->ncommunities; i++) { 1731 - const struct intel_community *community; 1732 - void __iomem *base; 1733 - unsigned int gpp; 1734 - 1735 - community = &pctrl->communities[i]; 1736 - base = community->regs; 1737 - 1738 - for (gpp = 0; gpp < community->ngpps; gpp++) { 1739 - /* Mask and clear all interrupts */ 1740 - writel(0, base + community->ie_offset + gpp * 4); 1741 - writel(0xffff, base + community->is_offset + gpp * 4); 1742 - } 1743 - } 1744 - } 1745 1697 1746 1698 static bool intel_gpio_update_reg(void __iomem *reg, u32 mask, u32 value) 1747 1699 {
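The intel pinctrl hunk moves `intel_gpio_irq_init()` into a `gpio_irq_chip` `.init_hw` callback, so every pad interrupt is masked (IE written to 0) and any latched status cleared (IS written all-ones) before the irqchip is used. A sketch with an array standing in for the MMIO registers (layout and offsets hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define NGPPS 4
#define IE_OFF 0	/* interrupt-enable "registers" */
#define IS_OFF NGPPS	/* interrupt-status "registers" */

static uint32_t regs[2 * NGPPS];

/*
 * Mask every group and clear any latched status. On real hardware the
 * all-ones store to IS is write-1-to-clear; in this sketch the stored
 * value simply records that the write happened.
 */
static void irq_init_hw(void)
{
	for (int gpp = 0; gpp < NGPPS; gpp++) {
		regs[IE_OFF + gpp] = 0;		/* mask all */
		regs[IS_OFF + gpp] = 0xffff;	/* clear pending status */
	}
}
```

Running this before any IRQ is requested removes the window in which firmware-left-over enables could fire.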
+2 -1
drivers/pinctrl/pinctrl-microchip-sgpio.c
··· 137 137 138 138 static inline u32 sgpio_get_addr(struct sgpio_priv *priv, u32 rno, u32 off) 139 139 { 140 - return priv->properties->regoff[rno] + off; 140 + return (priv->properties->regoff[rno] + off) * 141 + regmap_get_reg_stride(priv->regs); 141 142 } 142 143 143 144 static u32 sgpio_readl(struct sgpio_priv *priv, u32 rno, u32 off)
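The microchip-sgpio fix multiplies the register offset by the regmap stride, since regmap addresses count in stride units rather than raw array indices. The address math in isolation (stride value in the test is an assumption; 32-bit regmaps commonly use 4):

```c
#include <assert.h>
#include <stdint.h>

/* Scale a register-number offset into a regmap address */
static uint32_t sgpio_addr(uint32_t regoff, uint32_t off, uint32_t stride)
{
	return (regoff + off) * stride;
}
```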
+33 -53
drivers/pinctrl/pinctrl-thunderbay.c
··· 773 773 774 774 static int thunderbay_add_functions(struct thunderbay_pinctrl *tpc, struct function_desc *funcs) 775 775 { 776 - struct function_desc *function = funcs; 777 776 int i; 778 777 779 778 /* Assign the groups for each function */ 780 - for (i = 0; i < tpc->soc->npins; i++) { 781 - const struct pinctrl_pin_desc *pin_info = thunderbay_pins + i; 782 - struct thunderbay_mux_desc *pin_mux = pin_info->drv_data; 779 + for (i = 0; i < tpc->nfuncs; i++) { 780 + struct function_desc *func = &funcs[i]; 781 + const char **group_names; 782 + unsigned int grp_idx = 0; 783 + int j; 783 784 784 - while (pin_mux->name) { 785 - const char **grp; 786 - int j, grp_num, match = 0; 787 - size_t grp_size; 788 - struct function_desc *func; 785 + group_names = devm_kcalloc(tpc->dev, func->num_group_names, 786 + sizeof(*group_names), GFP_KERNEL); 787 + if (!group_names) 788 + return -ENOMEM; 789 789 790 - for (j = 0; j < tpc->nfuncs; j++) { 791 - if (!strcmp(pin_mux->name, function[j].name)) { 792 - match = 1; 793 - break; 794 - } 790 + for (j = 0; j < tpc->soc->npins; j++) { 791 + const struct pinctrl_pin_desc *pin_info = &thunderbay_pins[j]; 792 + struct thunderbay_mux_desc *pin_mux; 793 + 794 + for (pin_mux = pin_info->drv_data; pin_mux->name; pin_mux++) { 795 + if (!strcmp(pin_mux->name, func->name)) 796 + group_names[grp_idx++] = pin_info->name; 795 797 } 796 - 797 - if (!match) 798 - return -EINVAL; 799 - 800 - func = function + j; 801 - grp_num = func->num_group_names; 802 - grp_size = sizeof(*func->group_names); 803 - 804 - if (!func->group_names) { 805 - func->group_names = devm_kcalloc(tpc->dev, 806 - grp_num, 807 - grp_size, 808 - GFP_KERNEL); 809 - if (!func->group_names) { 810 - kfree(func); 811 - return -ENOMEM; 812 - } 813 - } 814 - 815 - grp = func->group_names; 816 - while (*grp) 817 - grp++; 818 - 819 - *grp = pin_info->name; 820 - pin_mux++; 821 798 } 799 + 800 + func->group_names = group_names; 822 801 } 823 802 824 803 /* Add all functions */ 825 804 for 
(i = 0; i < tpc->nfuncs; i++) { 826 805 pinmux_generic_add_function(tpc->pctrl, 827 - function[i].name, 828 - function[i].group_names, 829 - function[i].num_group_names, 830 - function[i].data); 806 + funcs[i].name, 807 + funcs[i].group_names, 808 + funcs[i].num_group_names, 809 + funcs[i].data); 831 810 } 832 - kfree(function); 811 + kfree(funcs); 833 812 return 0; 834 813 } 835 814 ··· 818 839 void *ptr; 819 840 int pin; 820 841 821 - /* Total number of functions is unknown at this point. Allocate first. */ 842 + /* 843 + * Allocate maximum possible number of functions. Assume every pin 844 + * being part of 8 (hw maximum) globally unique muxes. 845 + */ 822 846 tpc->nfuncs = 0; 823 847 thunderbay_funcs = kcalloc(tpc->soc->npins * 8, 824 848 sizeof(*thunderbay_funcs), GFP_KERNEL); 825 849 if (!thunderbay_funcs) 826 850 return -ENOMEM; 827 851 828 - /* Find total number of functions and each's properties */ 852 + /* Setup 1 function for each unique mux */ 829 853 for (pin = 0; pin < tpc->soc->npins; pin++) { 830 854 const struct pinctrl_pin_desc *pin_info = thunderbay_pins + pin; 831 - struct thunderbay_mux_desc *pin_mux = pin_info->drv_data; 855 + struct thunderbay_mux_desc *pin_mux; 832 856 833 - while (pin_mux->name) { 834 - struct function_desc *func = thunderbay_funcs; 857 + for (pin_mux = pin_info->drv_data; pin_mux->name; pin_mux++) { 858 + struct function_desc *func; 835 859 836 - while (func->name) { 860 + /* Check if we already have function for this mux */ 861 + for (func = thunderbay_funcs; func->name; func++) { 837 862 if (!strcmp(pin_mux->name, func->name)) { 838 863 func->num_group_names++; 839 864 break; 840 865 } 841 - func++; 842 866 } 843 867 844 868 if (!func->name) { ··· 850 868 func->data = (int *)&pin_mux->mode; 851 869 tpc->nfuncs++; 852 870 } 853 - 854 - pin_mux++; 855 871 } 856 872 } 857 873
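The thunderbay rewrite is a two-pass dedup: the first pass walks every pin's mux list to build a table of unique function names while counting member groups, so the second pass can allocate `group_names` of exactly the right size per function. A compact sketch of that first pass (data and names hypothetical; `funcs[]` must be zero-initialized by the caller):

```c
#include <assert.h>
#include <string.h>

struct func { const char *name; int ngroups; };

/* Pass 1: build a unique-function table, counting member groups */
static int count_funcs(const char *const muxes[], int n,
		       struct func *funcs, int max)
{
	int nfuncs = 0;

	for (int i = 0; i < n; i++) {
		int j;

		/* Check if we already have a function for this mux */
		for (j = 0; j < nfuncs; j++)
			if (!strcmp(muxes[i], funcs[j].name))
				break;

		if (j == nfuncs) {		/* new unique function */
			if (nfuncs == max)
				return -1;
			funcs[nfuncs++].name = muxes[i];
		}
		funcs[j].ngroups++;
	}
	return nfuncs;
}
```

With the counts known up front, the per-function group arrays can be sized exactly, replacing the original open-ended `while (*grp) grp++;` scan.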
+4 -6
drivers/pinctrl/pinctrl-zynqmp.c
··· 809 809 unsigned int *npins) 810 810 { 811 811 struct pinctrl_pin_desc *pins, *pin; 812 - char **pin_names; 813 812 int ret; 814 813 int i; 815 814 ··· 820 821 if (!pins) 821 822 return -ENOMEM; 822 823 823 - pin_names = devm_kasprintf_strarray(dev, ZYNQMP_PIN_PREFIX, *npins); 824 - if (IS_ERR(pin_names)) 825 - return PTR_ERR(pin_names); 826 - 827 824 for (i = 0; i < *npins; i++) { 828 825 pin = &pins[i]; 829 826 pin->number = i; 830 - pin->name = pin_names[i]; 827 + pin->name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", 828 + ZYNQMP_PIN_PREFIX, i); 829 + if (!pin->name) 830 + return -ENOMEM; 831 831 } 832 832 833 833 *zynqmp_pins = pins;
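The zynqmp hunk drops `devm_kasprintf_strarray()` in favor of formatting each `"%s%d"` pin name individually, checking every allocation. A minimal user-space sketch of that per-pin naming ("MIO" prefix taken as an assumption from `ZYNQMP_PIN_PREFIX`):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PIN_PREFIX "MIO"

/* Allocate one "MIO<n>" name per pin; caller frees. NULL on failure. */
static char *pin_name(unsigned int i)
{
	/* snprintf(NULL, 0, ...) returns the length needed, sans NUL */
	int len = snprintf(NULL, 0, "%s%u", PIN_PREFIX, i);
	char *name = malloc(len + 1);

	if (name)
		snprintf(name, len + 1, "%s%u", PIN_PREFIX, i);
	return name;
}
```

Each allocation is checked at the point of use, so a failure maps directly to `-ENOMEM` without tearing down a shared string array.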
+4 -4
drivers/pinctrl/sunxi/pinctrl-sun50i-h616.c
··· 363 363 SUNXI_FUNCTION(0x0, "gpio_in"), 364 364 SUNXI_FUNCTION(0x1, "gpio_out"), 365 365 SUNXI_FUNCTION(0x2, "uart2"), /* CTS */ 366 - SUNXI_FUNCTION(0x3, "i2s3"), /* DO0 */ 366 + SUNXI_FUNCTION(0x3, "i2s3_dout0"), /* DO0 */ 367 367 SUNXI_FUNCTION(0x4, "spi1"), /* MISO */ 368 - SUNXI_FUNCTION(0x5, "i2s3"), /* DI1 */ 368 + SUNXI_FUNCTION(0x5, "i2s3_din1"), /* DI1 */ 369 369 SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 8)), /* PH_EINT8 */ 370 370 SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 9), 371 371 SUNXI_FUNCTION(0x0, "gpio_in"), 372 372 SUNXI_FUNCTION(0x1, "gpio_out"), 373 - SUNXI_FUNCTION(0x3, "i2s3"), /* DI0 */ 373 + SUNXI_FUNCTION(0x3, "i2s3_din0"), /* DI0 */ 374 374 SUNXI_FUNCTION(0x4, "spi1"), /* CS1 */ 375 - SUNXI_FUNCTION(0x3, "i2s3"), /* DO1 */ 375 + SUNXI_FUNCTION(0x5, "i2s3_dout1"), /* DO1 */ 376 376 SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 9)), /* PH_EINT9 */ 377 377 SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 10), 378 378 SUNXI_FUNCTION(0x0, "gpio_in"),
+1
drivers/platform/surface/Kconfig
··· 5 5 6 6 menuconfig SURFACE_PLATFORMS 7 7 bool "Microsoft Surface Platform-Specific Device Drivers" 8 + depends on ARM64 || X86 || COMPILE_TEST 8 9 default y 9 10 help 10 11 Say Y here to get to see options for platform-specific device drivers
+9 -6
drivers/platform/x86/amd-pmc.c
··· 124 124 u32 cpu_id; 125 125 u32 active_ips; 126 126 /* SMU version information */ 127 - u16 major; 128 - u16 minor; 129 - u16 rev; 127 + u8 smu_program; 128 + u8 major; 129 + u8 minor; 130 + u8 rev; 130 131 struct device *dev; 131 132 struct pci_dev *rdev; 132 133 struct mutex lock; /* generic mutex lock */ ··· 181 180 if (rc) 182 181 return rc; 183 182 184 - dev->major = (val >> 16) & GENMASK(15, 0); 183 + dev->smu_program = (val >> 24) & GENMASK(7, 0); 184 + dev->major = (val >> 16) & GENMASK(7, 0); 185 185 dev->minor = (val >> 8) & GENMASK(7, 0); 186 186 dev->rev = (val >> 0) & GENMASK(7, 0); 187 187 188 - dev_dbg(dev->dev, "SMU version is %u.%u.%u\n", dev->major, dev->minor, dev->rev); 188 + dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n", 189 + dev->smu_program, dev->major, dev->minor, dev->rev); 189 190 190 191 return 0; 191 192 } ··· 229 226 return 0; 230 227 } 231 228 232 - const struct file_operations amd_pmc_stb_debugfs_fops = { 229 + static const struct file_operations amd_pmc_stb_debugfs_fops = { 233 230 .owner = THIS_MODULE, 234 231 .open = amd_pmc_stb_debugfs_open, 235 232 .read = amd_pmc_stb_debugfs_read,
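The amd-pmc hunk narrows the version fields to `u8` and adds the SMU program byte from bits 31:24. The unpacking can be checked in isolation; a sketch of the same shifts, masking with `0xff` as `GENMASK(7, 0)` does (the packed value in the test is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

struct smu_ver { uint8_t program, major, minor, rev; };

/* Split a packed 32-bit SMU version: [31:24]=program, [23:16]=major,
 * [15:8]=minor, [7:0]=rev */
static struct smu_ver unpack_smu_version(uint32_t val)
{
	return (struct smu_ver){
		.program = (val >> 24) & 0xff,
		.major   = (val >> 16) & 0xff,
		.minor   = (val >> 8)  & 0xff,
		.rev     = (val >> 0)  & 0xff,
	};
}
```

The original code masked `major` with `GENMASK(15, 0)` into a `u16`, which would have folded the program byte into the major version; narrowing the mask to 8 bits fixes that.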
+2 -2
drivers/platform/x86/asus-tf103c-dock.c
··· 250 250 return 0; 251 251 } 252 252 253 - struct hid_ll_driver tf103c_dock_hid_ll_driver = { 253 + static struct hid_ll_driver tf103c_dock_hid_ll_driver = { 254 254 .parse = tf103c_dock_hid_parse, 255 255 .start = tf103c_dock_hid_start, 256 256 .stop = tf103c_dock_hid_stop, ··· 921 921 return 0; 922 922 } 923 923 924 - SIMPLE_DEV_PM_OPS(tf103c_dock_pm_ops, tf103c_dock_suspend, tf103c_dock_resume); 924 + static SIMPLE_DEV_PM_OPS(tf103c_dock_pm_ops, tf103c_dock_suspend, tf103c_dock_resume); 925 925 926 926 static const struct acpi_device_id tf103c_dock_acpi_match[] = { 927 927 {"NPCE69A"},
+13 -13
drivers/platform/x86/intel/crystal_cove_charger.c
··· 17 17 #include <linux/regmap.h> 18 18 19 19 #define CHGRIRQ_REG 0x0a 20 + #define MCHGRIRQ_REG 0x17 20 21 21 22 struct crystal_cove_charger_data { 22 23 struct mutex buslock; /* irq_bus_lock */ ··· 26 25 struct irq_domain *irq_domain; 27 26 int irq; 28 27 int charger_irq; 29 - bool irq_enabled; 30 - bool irq_is_enabled; 28 + u8 mask; 29 + u8 new_mask; 31 30 }; 32 31 33 32 static irqreturn_t crystal_cove_charger_irq(int irq, void *data) ··· 54 53 { 55 54 struct crystal_cove_charger_data *charger = irq_data_get_irq_chip_data(data); 56 55 57 - if (charger->irq_is_enabled != charger->irq_enabled) { 58 - if (charger->irq_enabled) 59 - enable_irq(charger->irq); 60 - else 61 - disable_irq(charger->irq); 62 - 63 - charger->irq_is_enabled = charger->irq_enabled; 56 + if (charger->mask != charger->new_mask) { 57 + regmap_write(charger->regmap, MCHGRIRQ_REG, charger->new_mask); 58 + charger->mask = charger->new_mask; 64 59 } 65 60 66 61 mutex_unlock(&charger->buslock); ··· 66 69 { 67 70 struct crystal_cove_charger_data *charger = irq_data_get_irq_chip_data(data); 68 71 69 - charger->irq_enabled = true; 72 + charger->new_mask &= ~BIT(data->hwirq); 70 73 } 71 74 72 75 static void crystal_cove_charger_irq_mask(struct irq_data *data) 73 76 { 74 77 struct crystal_cove_charger_data *charger = irq_data_get_irq_chip_data(data); 75 78 76 - charger->irq_enabled = false; 79 + charger->new_mask |= BIT(data->hwirq); 77 80 } 78 81 79 82 static void crystal_cove_charger_rm_irq_domain(void *data) ··· 127 130 irq_set_nested_thread(charger->charger_irq, true); 128 131 irq_set_noprobe(charger->charger_irq); 129 132 133 + /* Mask the single 2nd level IRQ before enabling the 1st level IRQ */ 134 + charger->mask = charger->new_mask = BIT(0); 135 + regmap_write(charger->regmap, MCHGRIRQ_REG, charger->mask); 136 + 130 137 ret = devm_request_threaded_irq(&pdev->dev, charger->irq, NULL, 131 138 crystal_cove_charger_irq, 132 - IRQF_ONESHOT | IRQF_NO_AUTOEN, 133 - KBUILD_MODNAME, charger); 139 + 
IRQF_ONESHOT, KBUILD_MODNAME, charger); 134 140 if (ret) 135 141 return dev_err_probe(&pdev->dev, ret, "requesting irq\n"); 136 142
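The crystal_cove hunk replaces enabling/disabling the parent IRQ with a shadowed MCHGRIRQ mask: `irq_mask`/`irq_unmask` only edit `new_mask` under the bus lock, and `irq_bus_sync_unlock` writes the register once, only if it changed. A sketch of that shadow-mask pattern with the register modeled as a plain variable (locking elided):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

static uint8_t hw_reg;		/* stands in for the MCHGRIRQ register */
static unsigned int hw_writes;	/* counts actual register writes */

struct chg { uint8_t mask, new_mask; };

/* Called under irq_bus_lock: cheap, touch only the shadow copy */
static void irq_mask(struct chg *c, unsigned int hwirq)
{
	c->new_mask |= BIT(hwirq);
}

static void irq_unmask(struct chg *c, unsigned int hwirq)
{
	c->new_mask &= ~BIT(hwirq);
}

/* Flush the shadow to hardware once, only when it actually changed */
static void irq_bus_sync_unlock(struct chg *c)
{
	if (c->mask != c->new_mask) {
		hw_reg = c->new_mask;	/* the one regmap_write() */
		hw_writes++;
		c->mask = c->new_mask;
	}
}
```

This matches the driver's probe, which starts with the second-level IRQ masked (`mask = new_mask = BIT(0)`) before the first-level IRQ is requested.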
+63 -34
drivers/platform/x86/intel/speed_select_if/isst_if_common.c
··· 596 596 return ret; 597 597 } 598 598 599 - static DEFINE_MUTEX(punit_misc_dev_lock); 599 + /* Lock to prevent module registration when already opened by user space */ 600 + static DEFINE_MUTEX(punit_misc_dev_open_lock); 601 + /* Lock to allow one share misc device for all ISST interace */ 602 + static DEFINE_MUTEX(punit_misc_dev_reg_lock); 600 603 static int misc_usage_count; 601 604 static int misc_device_ret; 602 605 static int misc_device_open; ··· 609 606 int i, ret = 0; 610 607 611 608 /* Fail open, if a module is going away */ 612 - mutex_lock(&punit_misc_dev_lock); 609 + mutex_lock(&punit_misc_dev_open_lock); 613 610 for (i = 0; i < ISST_IF_DEV_MAX; ++i) { 614 611 struct isst_if_cmd_cb *cb = &punit_callbacks[i]; 615 612 ··· 631 628 } else { 632 629 misc_device_open++; 633 630 } 634 - mutex_unlock(&punit_misc_dev_lock); 631 + mutex_unlock(&punit_misc_dev_open_lock); 635 632 636 633 return ret; 637 634 } ··· 640 637 { 641 638 int i; 642 639 643 - mutex_lock(&punit_misc_dev_lock); 640 + mutex_lock(&punit_misc_dev_open_lock); 644 641 misc_device_open--; 645 642 for (i = 0; i < ISST_IF_DEV_MAX; ++i) { 646 643 struct isst_if_cmd_cb *cb = &punit_callbacks[i]; ··· 648 645 if (cb->registered) 649 646 module_put(cb->owner); 650 647 } 651 - mutex_unlock(&punit_misc_dev_lock); 648 + mutex_unlock(&punit_misc_dev_open_lock); 652 649 653 650 return 0; 654 651 } ··· 664 661 .name = "isst_interface", 665 662 .fops = &isst_if_char_driver_ops, 666 663 }; 664 + 665 + static int isst_misc_reg(void) 666 + { 667 + mutex_lock(&punit_misc_dev_reg_lock); 668 + if (misc_device_ret) 669 + goto unlock_exit; 670 + 671 + if (!misc_usage_count) { 672 + misc_device_ret = isst_if_cpu_info_init(); 673 + if (misc_device_ret) 674 + goto unlock_exit; 675 + 676 + misc_device_ret = misc_register(&isst_if_char_driver); 677 + if (misc_device_ret) { 678 + isst_if_cpu_info_exit(); 679 + goto unlock_exit; 680 + } 681 + } 682 + misc_usage_count++; 683 + 684 + unlock_exit: 685 + 
mutex_unlock(&punit_misc_dev_reg_lock); 686 + 687 + return misc_device_ret; 688 + } 689 + 690 + static void isst_misc_unreg(void) 691 + { 692 + mutex_lock(&punit_misc_dev_reg_lock); 693 + if (misc_usage_count) 694 + misc_usage_count--; 695 + if (!misc_usage_count && !misc_device_ret) { 696 + misc_deregister(&isst_if_char_driver); 697 + isst_if_cpu_info_exit(); 698 + } 699 + mutex_unlock(&punit_misc_dev_reg_lock); 700 + } 667 701 668 702 /** 669 703 * isst_if_cdev_register() - Register callback for IOCTL ··· 719 679 */ 720 680 int isst_if_cdev_register(int device_type, struct isst_if_cmd_cb *cb) 721 681 { 722 - if (misc_device_ret) 723 - return misc_device_ret; 682 + int ret; 724 683 725 684 if (device_type >= ISST_IF_DEV_MAX) 726 685 return -EINVAL; 727 686 728 - mutex_lock(&punit_misc_dev_lock); 687 + mutex_lock(&punit_misc_dev_open_lock); 688 + /* Device is already open, we don't want to add new callbacks */ 729 689 if (misc_device_open) { 730 - mutex_unlock(&punit_misc_dev_lock); 690 + mutex_unlock(&punit_misc_dev_open_lock); 731 691 return -EAGAIN; 732 - } 733 - if (!misc_usage_count) { 734 - int ret; 735 - 736 - misc_device_ret = misc_register(&isst_if_char_driver); 737 - if (misc_device_ret) 738 - goto unlock_exit; 739 - 740 - ret = isst_if_cpu_info_init(); 741 - if (ret) { 742 - misc_deregister(&isst_if_char_driver); 743 - misc_device_ret = ret; 744 - goto unlock_exit; 745 - } 746 692 } 747 693 memcpy(&punit_callbacks[device_type], cb, sizeof(*cb)); 748 694 punit_callbacks[device_type].registered = 1; 749 - misc_usage_count++; 750 - unlock_exit: 751 - mutex_unlock(&punit_misc_dev_lock); 695 + mutex_unlock(&punit_misc_dev_open_lock); 752 696 753 - return misc_device_ret; 697 + ret = isst_misc_reg(); 698 + if (ret) { 699 + /* 700 + * No need of mutex as the misc device register failed 701 + * as no one can open device yet. Hence no contention. 
702 + */ 703 + punit_callbacks[device_type].registered = 0; 704 + return ret; 705 + } 706 + return 0; 754 707 } 755 708 EXPORT_SYMBOL_GPL(isst_if_cdev_register); 756 709 ··· 758 725 */ 759 726 void isst_if_cdev_unregister(int device_type) 760 727 { 761 - mutex_lock(&punit_misc_dev_lock); 762 - misc_usage_count--; 728 + isst_misc_unreg(); 729 + mutex_lock(&punit_misc_dev_open_lock); 763 730 punit_callbacks[device_type].registered = 0; 764 731 if (device_type == ISST_IF_DEV_MBOX) 765 732 isst_delete_hash(); 766 - if (!misc_usage_count && !misc_device_ret) { 767 - misc_deregister(&isst_if_char_driver); 768 - isst_if_cpu_info_exit(); 769 - } 770 - mutex_unlock(&punit_misc_dev_lock); 733 + mutex_unlock(&punit_misc_dev_open_lock); 771 734 } 772 735 EXPORT_SYMBOL_GPL(isst_if_cdev_unregister); 773 736
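The ISST hunk splits one mutex into an open-lock and a registration-lock, with `isst_misc_reg()`/`isst_misc_unreg()` refcounting the shared misc device across client modules. A sketch of the register-on-first-user / deregister-on-last-user core, with flags standing in for `misc_register()` and locking elided for brevity:

```c
#include <assert.h>

static int usage_count;
static int device_registered;

static int misc_reg(void)
{
	if (!usage_count)
		device_registered = 1;	/* misc_register() on first user */
	usage_count++;
	return 0;
}

static void misc_unreg(void)
{
	if (usage_count)
		usage_count--;
	if (!usage_count)
		device_registered = 0;	/* misc_deregister() on last user */
}
```

Keeping this refcounting under its own lock, separate from the open-path lock, is what lets registration proceed without contending with `open()`/`release()` of the character device.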
+22 -3
drivers/platform/x86/thinkpad_acpi.c
··· 8679 8679 .attrs = fan_driver_attributes, 8680 8680 }; 8681 8681 8682 - #define TPACPI_FAN_Q1 0x0001 /* Unitialized HFSP */ 8683 - #define TPACPI_FAN_2FAN 0x0002 /* EC 0x31 bit 0 selects fan2 */ 8684 - #define TPACPI_FAN_2CTL 0x0004 /* selects fan2 control */ 8682 + #define TPACPI_FAN_Q1 0x0001 /* Uninitialized HFSP */ 8683 + #define TPACPI_FAN_2FAN 0x0002 /* EC 0x31 bit 0 selects fan2 */ 8684 + #define TPACPI_FAN_2CTL 0x0004 /* selects fan2 control */ 8685 + #define TPACPI_FAN_NOFAN 0x0008 /* no fan available */ 8685 8686 8686 8687 static const struct tpacpi_quirk fan_quirk_table[] __initconst = { 8687 8688 TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1), ··· 8703 8702 TPACPI_Q_LNV3('N', '4', '0', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (4nd gen) */ 8704 8703 TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL), /* P15 (1st gen) / P15v (1st gen) */ 8705 8704 TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL), /* X1 Carbon (9th gen) */ 8705 + TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */ 8706 8706 }; 8707 8707 8708 8708 static int __init fan_init(struct ibm_init_struct *iibm) ··· 8731 8729 8732 8730 quirks = tpacpi_check_quirks(fan_quirk_table, 8733 8731 ARRAY_SIZE(fan_quirk_table)); 8732 + 8733 + if (quirks & TPACPI_FAN_NOFAN) { 8734 + pr_info("No integrated ThinkPad fan available\n"); 8735 + return -ENODEV; 8736 + } 8734 8737 8735 8738 if (gfan_handle) { 8736 8739 /* 570, 600e/x, 770e, 770x */ ··· 10119 10112 #define DYTC_CMD_MMC_GET 8 /* To get current MMC function and mode */ 10120 10113 #define DYTC_CMD_RESET 0x1ff /* To reset back to default */ 10121 10114 10115 + #define DYTC_CMD_FUNC_CAP 3 /* To get DYTC capabilities */ 10116 + #define DYTC_FC_MMC 27 /* MMC Mode supported */ 10117 + 10122 10118 #define DYTC_GET_FUNCTION_BIT 8 /* Bits 8-11 - function setting */ 10123 10119 #define DYTC_GET_MODE_BIT 12 /* Bits 12-15 - mode setting */ 10124 10120 ··· 10333 10323 /* Check DYTC is enabled and supports mode setting */ 10334 10324 if (dytc_version < 5) 
10335 10325 return -ENODEV; 10326 + 10327 + /* Check what capabilities are supported. Currently MMC is needed */ 10328 + err = dytc_command(DYTC_CMD_FUNC_CAP, &output); 10329 + if (err) 10330 + return err; 10331 + if (!(output & BIT(DYTC_FC_MMC))) { 10332 + dbg_printk(TPACPI_DBG_INIT, " DYTC MMC mode not supported\n"); 10333 + return -ENODEV; 10334 + } 10336 10335 10337 10336 dbg_printk(TPACPI_DBG_INIT, 10338 10337 "DYTC version %d: thermal mode available\n", dytc_version);
+24
drivers/platform/x86/touchscreen_dmi.c
··· 770 770 .properties = predia_basic_props, 771 771 }; 772 772 773 + static const struct property_entry rwc_nanote_p8_props[] = { 774 + PROPERTY_ENTRY_U32("touchscreen-min-y", 46), 775 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1728), 776 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 777 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 778 + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-p8.fw"), 779 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 780 + { } 781 + }; 782 + 783 + static const struct ts_dmi_data rwc_nanote_p8_data = { 784 + .acpi_name = "MSSL1680:00", 785 + .properties = rwc_nanote_p8_props, 786 + }; 787 + 773 788 static const struct property_entry schneider_sct101ctm_props[] = { 774 789 PROPERTY_ENTRY_U32("touchscreen-size-x", 1715), 775 790 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), ··· 1407 1392 /* Note 105b is Foxcon's USB/PCI vendor id */ 1408 1393 DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "105B"), 1409 1394 DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"), 1395 + }, 1396 + }, 1397 + { 1398 + /* RWC NANOTE P8 */ 1399 + .driver_data = (void *)&rwc_nanote_p8_data, 1400 + .matches = { 1401 + DMI_MATCH(DMI_BOARD_VENDOR, "Default string"), 1402 + DMI_MATCH(DMI_PRODUCT_NAME, "AY07J"), 1403 + DMI_MATCH(DMI_PRODUCT_SKU, "0001") 1410 1404 }, 1411 1405 }, 1412 1406 {
+95 -10
drivers/platform/x86/x86-android-tablets.c
··· 26 26 #include <linux/string.h> 27 27 /* For gpio_get_desc() which is EXPORT_SYMBOL_GPL() */ 28 28 #include "../../gpio/gpiolib.h" 29 + #include "../../gpio/gpiolib-acpi.h" 29 30 30 31 /* 31 32 * Helper code to get Linux IRQ numbers given a description of the IRQ source ··· 48 47 int polarity; /* ACPI_ACTIVE_HIGH / ACPI_ACTIVE_LOW / ACPI_ACTIVE_BOTH */ 49 48 }; 50 49 51 - static int x86_acpi_irq_helper_gpiochip_find(struct gpio_chip *gc, void *data) 50 + static int gpiochip_find_match_label(struct gpio_chip *gc, void *data) 52 51 { 53 52 return gc->label && !strcmp(gc->label, data); 54 53 } ··· 74 73 return irq; 75 74 case X86_ACPI_IRQ_TYPE_GPIOINT: 76 75 /* Like acpi_dev_gpio_irq_get(), but without parsing ACPI resources */ 77 - chip = gpiochip_find(data->chip, x86_acpi_irq_helper_gpiochip_find); 76 + chip = gpiochip_find(data->chip, gpiochip_find_match_label); 78 77 if (!chip) { 79 78 pr_err("error cannot find GPIO chip %s\n", data->chip); 80 79 return -ENODEV; ··· 144 143 }; 145 144 146 145 struct x86_dev_info { 146 + char *invalid_aei_gpiochip; 147 147 const char * const *modules; 148 - struct gpiod_lookup_table **gpiod_lookup_tables; 148 + struct gpiod_lookup_table * const *gpiod_lookup_tables; 149 149 const struct x86_i2c_client_info *i2c_client_info; 150 150 const struct platform_device_info *pdev_info; 151 151 const struct x86_serdev_info *serdev_info; 152 152 int i2c_client_count; 153 153 int pdev_count; 154 154 int serdev_count; 155 + int (*init)(void); 156 + void (*exit)(void); 155 157 }; 156 158 157 159 /* Generic / shared bq24190 settings */ ··· 191 187 }; 192 188 193 189 static const char * const bq24190_modules[] __initconst = { 194 - "crystal_cove_charger", /* For the bq24190 IRQ */ 195 - "bq24190_charger", /* For the Vbus regulator for intel-int3496 */ 190 + "intel_crystal_cove_charger", /* For the bq24190 IRQ */ 191 + "bq24190_charger", /* For the Vbus regulator for intel-int3496 */ 196 192 NULL 197 193 }; 198 194 ··· 306 302 }, 307 303 }; 308 
304 309 - static struct gpiod_lookup_table *asus_me176c_gpios[] = { 305 + static struct gpiod_lookup_table * const asus_me176c_gpios[] = { 310 306 &int3496_gpo2_pin22_gpios, 311 307 &asus_me176c_goodix_gpios, 312 308 NULL ··· 321 317 .serdev_count = ARRAY_SIZE(asus_me176c_serdevs), 322 318 .gpiod_lookup_tables = asus_me176c_gpios, 323 319 .modules = bq24190_modules, 320 + .invalid_aei_gpiochip = "INT33FC:02", 324 321 }; 325 322 326 323 /* Asus TF103C tablets have an Android factory img with everything hardcoded */ ··· 410 405 }, 411 406 }; 412 407 413 - static struct gpiod_lookup_table *asus_tf103c_gpios[] = { 408 + static struct gpiod_lookup_table * const asus_tf103c_gpios[] = { 414 409 &int3496_gpo2_pin22_gpios, 415 410 NULL 416 411 }; ··· 422 417 .pdev_count = ARRAY_SIZE(int3496_pdevs), 423 418 .gpiod_lookup_tables = asus_tf103c_gpios, 424 419 .modules = bq24190_modules, 420 + .invalid_aei_gpiochip = "INT33FC:02", 425 421 }; 426 422 427 423 /* ··· 496 490 .i2c_client_count = ARRAY_SIZE(chuwi_hi8_i2c_clients), 497 491 }; 498 492 493 + #define CZC_EC_EXTRA_PORT 0x68 494 + #define CZC_EC_ANDROID_KEYS 0x63 495 + 496 + static int __init czc_p10t_init(void) 497 + { 498 + /* 499 + * The device boots up in "Windows 7" mode, when the home button sends a 500 + * Windows specific key sequence (Left Meta + D) and the second button 501 + * sends an unknown one while also toggling the Radio Kill Switch. 502 + * This is a surprising behavior when the second button is labeled "Back". 503 + * 504 + * The vendor-supplied Android-x86 build switches the device to a "Android" 505 + * mode by writing value 0x63 to the I/O port 0x68. This just seems to just 506 + * set bit 6 on address 0x96 in the EC region; switching the bit directly 507 + * seems to achieve the same result. It uses a "p10t_switcher" to do the 508 + * job. It doesn't seem to be able to do anything else, and no other use 509 + * of the port 0x68 is known. 
510 + * 511 + * In the Android mode, the home button sends just a single scancode, 512 + * which can be handled in Linux userspace more reasonably and the back 513 + * button only sends a scancode without toggling the kill switch. 514 + * The scancode can then be mapped either to Back or RF Kill functionality 515 + * in userspace, depending on how the button is labeled on that particular 516 + * model. 517 + */ 518 + outb(CZC_EC_ANDROID_KEYS, CZC_EC_EXTRA_PORT); 519 + return 0; 520 + } 521 + 522 + static const struct x86_dev_info czc_p10t __initconst = { 523 + .init = czc_p10t_init, 524 + }; 525 + 499 526 /* 500 527 * Whitelabel (sold as various brands) TM800A550L tablets. 501 528 * These tablet's DSDT contains a whole bunch of bogus ACPI I2C devices ··· 598 559 }, 599 560 }; 600 561 601 - static struct gpiod_lookup_table *whitelabel_tm800a550l_gpios[] = { 562 + static struct gpiod_lookup_table * const whitelabel_tm800a550l_gpios[] = { 602 563 &whitelabel_tm800a550l_goodix_gpios, 603 564 NULL 604 565 }; ··· 681 642 .driver_data = (void *)&chuwi_hi8_info, 682 643 }, 683 644 { 645 + /* CZC P10T */ 646 + .ident = "CZC ODEON TPC-10 (\"P10T\")", 647 + .matches = { 648 + DMI_MATCH(DMI_SYS_VENDOR, "CZC"), 649 + DMI_MATCH(DMI_PRODUCT_NAME, "ODEON*TPC-10"), 650 + }, 651 + .driver_data = (void *)&czc_p10t, 652 + }, 653 + { 654 + /* A variant of CZC P10T */ 655 + .ident = "ViewSonic ViewPad 10", 656 + .matches = { 657 + DMI_MATCH(DMI_SYS_VENDOR, "ViewSonic"), 658 + DMI_MATCH(DMI_PRODUCT_NAME, "VPAD10"), 659 + }, 660 + .driver_data = (void *)&czc_p10t, 661 + }, 662 + { 684 663 /* Whitelabel (sold as various brands) TM800A550L */ 685 664 .matches = { 686 665 DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), ··· 726 669 static struct i2c_client **i2c_clients; 727 670 static struct platform_device **pdevs; 728 671 static struct serdev_device **serdevs; 729 - static struct gpiod_lookup_table **gpiod_lookup_tables; 672 + static struct gpiod_lookup_table * const *gpiod_lookup_tables; 
673 + static void (*exit_handler)(void); 730 674 731 675 static __init int x86_instantiate_i2c_client(const struct x86_dev_info *dev_info, 732 676 int idx) ··· 845 787 846 788 kfree(i2c_clients); 847 789 790 + if (exit_handler) 791 + exit_handler(); 792 + 848 793 for (i = 0; gpiod_lookup_tables && gpiod_lookup_tables[i]; i++) 849 794 gpiod_remove_lookup_table(gpiod_lookup_tables[i]); 850 795 } ··· 856 795 { 857 796 const struct x86_dev_info *dev_info; 858 797 const struct dmi_system_id *id; 798 + struct gpio_chip *chip; 859 799 int i, ret = 0; 860 800 861 801 id = dmi_first_match(x86_android_tablet_ids); ··· 864 802 return -ENODEV; 865 803 866 804 dev_info = id->driver_data; 805 + 806 + /* 807 + * The broken DSDTs on these devices often also include broken 808 + * _AEI (ACPI Event Interrupt) handlers, disable these. 809 + */ 810 + if (dev_info->invalid_aei_gpiochip) { 811 + chip = gpiochip_find(dev_info->invalid_aei_gpiochip, 812 + gpiochip_find_match_label); 813 + if (!chip) { 814 + pr_err("error cannot find GPIO chip %s\n", dev_info->invalid_aei_gpiochip); 815 + return -ENODEV; 816 + } 817 + acpi_gpiochip_free_interrupts(chip); 818 + } 867 819 868 820 /* 869 821 * Since this runs from module_init() it cannot use -EPROBE_DEFER, ··· 889 813 gpiod_lookup_tables = dev_info->gpiod_lookup_tables; 890 814 for (i = 0; gpiod_lookup_tables && gpiod_lookup_tables[i]; i++) 891 815 gpiod_add_lookup_table(gpiod_lookup_tables[i]); 816 + 817 + if (dev_info->init) { 818 + ret = dev_info->init(); 819 + if (ret < 0) { 820 + x86_android_tablet_cleanup(); 821 + return ret; 822 + } 823 + exit_handler = dev_info->exit; 824 + } 892 825 893 826 i2c_clients = kcalloc(dev_info->i2c_client_count, sizeof(*i2c_clients), GFP_KERNEL); 894 827 if (!i2c_clients) { ··· 950 865 module_init(x86_android_tablet_init); 951 866 module_exit(x86_android_tablet_cleanup); 952 867 953 - MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com"); 868 + MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 954 869 
MODULE_DESCRIPTION("X86 Android tablets DSDT fixups driver"); 955 870 MODULE_LICENSE("GPL");
+2 -1
drivers/regulator/max20086-regulator.c
··· 7 7 8 8 #include <linux/err.h> 9 9 #include <linux/gpio.h> 10 + #include <linux/gpio/consumer.h> 10 11 #include <linux/i2c.h> 11 12 #include <linux/module.h> 12 13 #include <linux/regmap.h> ··· 141 140 node = of_get_child_by_name(chip->dev->of_node, "regulators"); 142 141 if (!node) { 143 142 dev_err(chip->dev, "regulators node not found\n"); 144 - return PTR_ERR(node); 143 + return -ENODEV; 145 144 } 146 145 147 146 for (i = 0; i < chip->info->num_outputs; ++i)
+13 -8
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 508 508 509 509 static void bnx2fc_recv_frame(struct sk_buff *skb) 510 510 { 511 - u32 fr_len; 511 + u64 crc_err; 512 + u32 fr_len, fr_crc; 512 513 struct fc_lport *lport; 513 514 struct fcoe_rcv_info *fr; 514 515 struct fc_stats *stats; ··· 542 541 fh = (struct fc_frame_header *) skb_transport_header(skb); 543 542 skb_pull(skb, sizeof(struct fcoe_hdr)); 544 543 fr_len = skb->len - sizeof(struct fcoe_crc_eof); 544 + 545 + stats = per_cpu_ptr(lport->stats, get_cpu()); 546 + stats->RxFrames++; 547 + stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; 548 + put_cpu(); 545 549 546 550 fp = (struct fc_frame *)skb; 547 551 fc_frame_init(fp); ··· 630 624 return; 631 625 } 632 626 633 - stats = per_cpu_ptr(lport->stats, smp_processor_id()); 634 - stats->RxFrames++; 635 - stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; 627 + fr_crc = le32_to_cpu(fr_crc(fp)); 636 628 637 - if (le32_to_cpu(fr_crc(fp)) != 638 - ~crc32(~0, skb->data, fr_len)) { 639 - if (stats->InvalidCRCCount < 5) 629 + if (unlikely(fr_crc != ~crc32(~0, skb->data, fr_len))) { 630 + stats = per_cpu_ptr(lport->stats, get_cpu()); 631 + crc_err = (stats->InvalidCRCCount++); 632 + put_cpu(); 633 + if (crc_err < 5) 640 634 printk(KERN_WARNING PFX "dropping frame with " 641 635 "CRC error\n"); 642 - stats->InvalidCRCCount++; 643 636 kfree_skb(skb); 644 637 return; 645 638 }
+6 -8
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 400 400 struct hisi_sas_slot *slot, 401 401 struct hisi_sas_dq *dq, 402 402 struct hisi_sas_device *sas_dev, 403 - struct hisi_sas_internal_abort *abort, 404 - struct hisi_sas_tmf_task *tmf) 403 + struct hisi_sas_internal_abort *abort) 405 404 { 406 405 struct hisi_sas_cmd_hdr *cmd_hdr_base; 407 406 int dlvry_queue_slot, dlvry_queue; ··· 426 427 cmd_hdr_base = hisi_hba->cmd_hdr[dlvry_queue]; 427 428 slot->cmd_hdr = &cmd_hdr_base[dlvry_queue_slot]; 428 429 429 - slot->tmf = tmf; 430 - slot->is_internal = tmf; 431 430 task->lldd_task = slot; 432 431 433 432 memset(slot->cmd_hdr, 0, sizeof(struct hisi_sas_cmd_hdr)); ··· 584 587 slot->is_internal = tmf; 585 588 586 589 /* protect task_prep and start_delivery sequence */ 587 - hisi_sas_task_deliver(hisi_hba, slot, dq, sas_dev, NULL, tmf); 590 + hisi_sas_task_deliver(hisi_hba, slot, dq, sas_dev, NULL); 588 591 589 592 return 0; 590 593 ··· 1377 1380 struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); 1378 1381 struct device *dev = hisi_hba->dev; 1379 1382 int s = sizeof(struct host_to_dev_fis); 1383 + struct hisi_sas_tmf_task tmf = {}; 1380 1384 1381 1385 ata_for_each_link(link, ap, EDGE) { 1382 1386 int pmp = sata_srst_pmp(link); 1383 1387 1384 1388 hisi_sas_fill_ata_reset_cmd(link->device, 1, pmp, fis); 1385 - rc = hisi_sas_exec_internal_tmf_task(device, fis, s, NULL); 1389 + rc = hisi_sas_exec_internal_tmf_task(device, fis, s, &tmf); 1386 1390 if (rc != TMF_RESP_FUNC_COMPLETE) 1387 1391 break; 1388 1392 } ··· 1394 1396 1395 1397 hisi_sas_fill_ata_reset_cmd(link->device, 0, pmp, fis); 1396 1398 rc = hisi_sas_exec_internal_tmf_task(device, fis, 1397 - s, NULL); 1399 + s, &tmf); 1398 1400 if (rc != TMF_RESP_FUNC_COMPLETE) 1399 1401 dev_err(dev, "ata disk %016llx de-reset failed\n", 1400 1402 SAS_ADDR(device->sas_addr)); ··· 2065 2067 slot->port = port; 2066 2068 slot->is_internal = true; 2067 2069 2068 - hisi_sas_task_deliver(hisi_hba, slot, dq, sas_dev, abort, NULL); 2070 + hisi_sas_task_deliver(hisi_hba, slot, dq, 
sas_dev, abort); 2069 2071 2070 2072 return 0; 2071 2073
-18
drivers/scsi/pm8001/pm8001_hwi.c
··· 2692 2692 u32 tag = le32_to_cpu(psataPayload->tag); 2693 2693 u32 port_id = le32_to_cpu(psataPayload->port_id); 2694 2694 u32 dev_id = le32_to_cpu(psataPayload->device_id); 2695 - unsigned long flags; 2696 2695 2697 2696 if (event) 2698 2697 pm8001_dbg(pm8001_ha, FAIL, "SATA EVENT 0x%x\n", event); ··· 2723 2724 ts->resp = SAS_TASK_COMPLETE; 2724 2725 ts->stat = SAS_DATA_OVERRUN; 2725 2726 ts->residual = 0; 2726 - if (pm8001_dev) 2727 - atomic_dec(&pm8001_dev->running_req); 2728 2727 break; 2729 2728 case IO_XFER_ERROR_BREAK: 2730 2729 pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n"); ··· 2764 2767 IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); 2765 2768 ts->resp = SAS_TASK_COMPLETE; 2766 2769 ts->stat = SAS_QUEUE_FULL; 2767 - pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag); 2768 2770 return; 2769 2771 } 2770 2772 break; ··· 2848 2852 ts->resp = SAS_TASK_COMPLETE; 2849 2853 ts->stat = SAS_OPEN_TO; 2850 2854 break; 2851 - } 2852 - spin_lock_irqsave(&t->task_state_lock, flags); 2853 - t->task_state_flags &= ~SAS_TASK_STATE_PENDING; 2854 - t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 2855 - t->task_state_flags |= SAS_TASK_STATE_DONE; 2856 - if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { 2857 - spin_unlock_irqrestore(&t->task_state_lock, flags); 2858 - pm8001_dbg(pm8001_ha, FAIL, 2859 - "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n", 2860 - t, event, ts->resp, ts->stat); 2861 - pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2862 - } else { 2863 - spin_unlock_irqrestore(&t->task_state_lock, flags); 2864 - pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag); 2865 2855 } 2866 2856 } 2867 2857
+5
drivers/scsi/pm8001/pm8001_sas.c
··· 769 769 res = -TMF_RESP_FUNC_FAILED; 770 770 /* Even TMF timed out, return direct. */ 771 771 if (task->task_state_flags & SAS_TASK_STATE_ABORTED) { 772 + struct pm8001_ccb_info *ccb = task->lldd_task; 773 + 772 774 pm8001_dbg(pm8001_ha, FAIL, "TMF task[%x]timeout.\n", 773 775 tmf->tmf); 776 + 777 + if (ccb) 778 + ccb->task = NULL; 774 779 goto ex_err; 775 780 } 776 781
+3 -28
drivers/scsi/pm8001/pm80xx_hwi.c
··· 2185 2185 pm8001_dbg(pm8001_ha, FAIL, 2186 2186 "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n", 2187 2187 t, status, ts->resp, ts->stat); 2188 + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2188 2189 if (t->slow_task) 2189 2190 complete(&t->slow_task->completion); 2190 - pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2191 2191 } else { 2192 2192 spin_unlock_irqrestore(&t->task_state_lock, flags); 2193 2193 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); ··· 2794 2794 pm8001_dbg(pm8001_ha, FAIL, 2795 2795 "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n", 2796 2796 t, status, ts->resp, ts->stat); 2797 + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2797 2798 if (t->slow_task) 2798 2799 complete(&t->slow_task->completion); 2799 - pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2800 2800 } else { 2801 2801 spin_unlock_irqrestore(&t->task_state_lock, flags); 2802 2802 spin_unlock_irqrestore(&circularQ->oq_lock, ··· 2821 2821 u32 tag = le32_to_cpu(psataPayload->tag); 2822 2822 u32 port_id = le32_to_cpu(psataPayload->port_id); 2823 2823 u32 dev_id = le32_to_cpu(psataPayload->device_id); 2824 - unsigned long flags; 2825 2824 2826 2825 if (event) 2827 2826 pm8001_dbg(pm8001_ha, FAIL, "SATA EVENT 0x%x\n", event); ··· 2853 2854 ts->resp = SAS_TASK_COMPLETE; 2854 2855 ts->stat = SAS_DATA_OVERRUN; 2855 2856 ts->residual = 0; 2856 - if (pm8001_dev) 2857 - atomic_dec(&pm8001_dev->running_req); 2858 2857 break; 2859 2858 case IO_XFER_ERROR_BREAK: 2860 2859 pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n"); ··· 2901 2904 IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); 2902 2905 ts->resp = SAS_TASK_COMPLETE; 2903 2906 ts->stat = SAS_QUEUE_FULL; 2904 - spin_unlock_irqrestore(&circularQ->oq_lock, 2905 - circularQ->lock_flags); 2906 - pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag); 2907 - spin_lock_irqsave(&circularQ->oq_lock, 2908 - circularQ->lock_flags); 2909 2907 return; 2910 2908 } 2911 2909 break; ··· 2999 3007 
ts->resp = SAS_TASK_COMPLETE; 3000 3008 ts->stat = SAS_OPEN_TO; 3001 3009 break; 3002 - } 3003 - spin_lock_irqsave(&t->task_state_lock, flags); 3004 - t->task_state_flags &= ~SAS_TASK_STATE_PENDING; 3005 - t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 3006 - t->task_state_flags |= SAS_TASK_STATE_DONE; 3007 - if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { 3008 - spin_unlock_irqrestore(&t->task_state_lock, flags); 3009 - pm8001_dbg(pm8001_ha, FAIL, 3010 - "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n", 3011 - t, event, ts->resp, ts->stat); 3012 - pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 3013 - } else { 3014 - spin_unlock_irqrestore(&t->task_state_lock, flags); 3015 - spin_unlock_irqrestore(&circularQ->oq_lock, 3016 - circularQ->lock_flags); 3017 - pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag); 3018 - spin_lock_irqsave(&circularQ->oq_lock, 3019 - circularQ->lock_flags); 3020 3010 } 3021 3011 } 3022 3012 ··· 3905 3931 /** 3906 3932 * process_one_iomb - process one outbound Queue memory block 3907 3933 * @pm8001_ha: our hba card information 3934 + * @circularQ: outbound circular queue 3908 3935 * @piomb: IO message buffer 3909 3936 */ 3910 3937 static void process_one_iomb(struct pm8001_hba_info *pm8001_ha,
+50 -5
drivers/scsi/scsi_scan.c
··· 214 214 SCSI_TIMEOUT, 3, NULL); 215 215 } 216 216 217 + static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev, 218 + unsigned int depth) 219 + { 220 + int new_shift = sbitmap_calculate_shift(depth); 221 + bool need_alloc = !sdev->budget_map.map; 222 + bool need_free = false; 223 + int ret; 224 + struct sbitmap sb_backup; 225 + 226 + /* 227 + * realloc if new shift is calculated, which is caused by setting 228 + * up one new default queue depth after calling ->slave_configure 229 + */ 230 + if (!need_alloc && new_shift != sdev->budget_map.shift) 231 + need_alloc = need_free = true; 232 + 233 + if (!need_alloc) 234 + return 0; 235 + 236 + /* 237 + * Request queue has to be frozen for reallocating budget map, 238 + * and here disk isn't added yet, so freezing is pretty fast 239 + */ 240 + if (need_free) { 241 + blk_mq_freeze_queue(sdev->request_queue); 242 + sb_backup = sdev->budget_map; 243 + } 244 + ret = sbitmap_init_node(&sdev->budget_map, 245 + scsi_device_max_queue_depth(sdev), 246 + new_shift, GFP_KERNEL, 247 + sdev->request_queue->node, false, true); 248 + if (need_free) { 249 + if (ret) 250 + sdev->budget_map = sb_backup; 251 + else 252 + sbitmap_free(&sb_backup); 253 + ret = 0; 254 + blk_mq_unfreeze_queue(sdev->request_queue); 255 + } 256 + return ret; 257 + } 258 + 217 259 /** 218 260 * scsi_alloc_sdev - allocate and setup a scsi_Device 219 261 * @starget: which target to allocate a &scsi_device for ··· 348 306 * default device queue depth to figure out sbitmap shift 349 307 * since we use this queue depth most of times. 
350 308 */ 351 - if (sbitmap_init_node(&sdev->budget_map, 352 - scsi_device_max_queue_depth(sdev), 353 - sbitmap_calculate_shift(depth), 354 - GFP_KERNEL, sdev->request_queue->node, 355 - false, true)) { 309 + if (scsi_realloc_sdev_budget_map(sdev, depth)) { 356 310 put_device(&starget->dev); 357 311 kfree(sdev); 358 312 goto out; ··· 1055 1017 } 1056 1018 return SCSI_SCAN_NO_RESPONSE; 1057 1019 } 1020 + 1021 + /* 1022 + * The queue_depth is often changed in ->slave_configure. 1023 + * Set up budget map again since memory consumption of 1024 + * the map depends on actual queue depth. 1025 + */ 1026 + scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth); 1058 1027 } 1059 1028 1060 1029 if (sdev->scsi_level >= SCSI_3)
+1 -1
drivers/spi/spi-bcm-qspi.c
··· 585 585 u32 rd = 0; 586 586 u32 wr = 0; 587 587 588 - if (qspi->base[CHIP_SELECT]) { 588 + if (cs >= 0 && qspi->base[CHIP_SELECT]) { 589 589 rd = bcm_qspi_read(qspi, CHIP_SELECT, 0); 590 590 wr = (rd & ~0xff) | (1 << cs); 591 591 if (rd == wr)
+5
drivers/spi/spi-meson-spicc.c
··· 693 693 writel_relaxed(0, spicc->base + SPICC_INTREG); 694 694 695 695 irq = platform_get_irq(pdev, 0); 696 + if (irq < 0) { 697 + ret = irq; 698 + goto out_master; 699 + } 700 + 696 701 ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq, 697 702 0, NULL, spicc); 698 703 if (ret) {
+1 -1
drivers/spi/spi-mt65xx.c
··· 624 624 else 625 625 mdata->state = MTK_SPI_IDLE; 626 626 627 - if (!master->can_dma(master, master->cur_msg->spi, trans)) { 627 + if (!master->can_dma(master, NULL, trans)) { 628 628 if (trans->rx_buf) { 629 629 cnt = mdata->xfer_len / 4; 630 630 ioread32_rep(mdata->base + SPI_RX_DATA_REG,
+17 -30
drivers/spi/spi-stm32-qspi.c
··· 688 688 struct resource *res; 689 689 int ret, irq; 690 690 691 - ctrl = spi_alloc_master(dev, sizeof(*qspi)); 691 + ctrl = devm_spi_alloc_master(dev, sizeof(*qspi)); 692 692 if (!ctrl) 693 693 return -ENOMEM; 694 694 ··· 697 697 698 698 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi"); 699 699 qspi->io_base = devm_ioremap_resource(dev, res); 700 - if (IS_ERR(qspi->io_base)) { 701 - ret = PTR_ERR(qspi->io_base); 702 - goto err_master_put; 703 - } 700 + if (IS_ERR(qspi->io_base)) 701 + return PTR_ERR(qspi->io_base); 704 702 705 703 qspi->phys_base = res->start; 706 704 707 705 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm"); 708 706 qspi->mm_base = devm_ioremap_resource(dev, res); 709 - if (IS_ERR(qspi->mm_base)) { 710 - ret = PTR_ERR(qspi->mm_base); 711 - goto err_master_put; 712 - } 707 + if (IS_ERR(qspi->mm_base)) 708 + return PTR_ERR(qspi->mm_base); 713 709 714 710 qspi->mm_size = resource_size(res); 715 - if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) { 716 - ret = -EINVAL; 717 - goto err_master_put; 718 - } 711 + if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) 712 + return -EINVAL; 719 713 720 714 irq = platform_get_irq(pdev, 0); 721 - if (irq < 0) { 722 - ret = irq; 723 - goto err_master_put; 724 - } 715 + if (irq < 0) 716 + return irq; 725 717 726 718 ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0, 727 719 dev_name(dev), qspi); 728 720 if (ret) { 729 721 dev_err(dev, "failed to request irq\n"); 730 - goto err_master_put; 722 + return ret; 731 723 } 732 724 733 725 init_completion(&qspi->data_completion); 734 726 init_completion(&qspi->match_completion); 735 727 736 728 qspi->clk = devm_clk_get(dev, NULL); 737 - if (IS_ERR(qspi->clk)) { 738 - ret = PTR_ERR(qspi->clk); 739 - goto err_master_put; 740 - } 729 + if (IS_ERR(qspi->clk)) 730 + return PTR_ERR(qspi->clk); 741 731 742 732 qspi->clk_rate = clk_get_rate(qspi->clk); 743 - if (!qspi->clk_rate) { 744 - ret = -EINVAL; 745 - goto err_master_put; 746 - } 733 + if 
(!qspi->clk_rate) 734 + return -EINVAL; 747 735 748 736 ret = clk_prepare_enable(qspi->clk); 749 737 if (ret) { 750 738 dev_err(dev, "can not enable the clock\n"); 751 - goto err_master_put; 739 + return ret; 752 740 } 753 741 754 742 rstc = devm_reset_control_get_exclusive(dev, NULL); ··· 772 784 pm_runtime_enable(dev); 773 785 pm_runtime_get_noresume(dev); 774 786 775 - ret = devm_spi_register_master(dev, ctrl); 787 + ret = spi_register_master(ctrl); 776 788 if (ret) 777 789 goto err_pm_runtime_free; 778 790 ··· 794 806 stm32_qspi_dma_free(qspi); 795 807 err_clk_disable: 796 808 clk_disable_unprepare(qspi->clk); 797 - err_master_put: 798 - spi_master_put(qspi->ctrl); 799 809 800 810 return ret; 801 811 } ··· 803 817 struct stm32_qspi *qspi = platform_get_drvdata(pdev); 804 818 805 819 pm_runtime_get_sync(qspi->dev); 820 + spi_unregister_master(qspi->ctrl); 806 821 /* disable qspi */ 807 822 writel_relaxed(0, qspi->io_base + QSPI_CR); 808 823 stm32_qspi_dma_free(qspi);
+4 -3
drivers/spi/spi-stm32.c
··· 221 221 * time between frames (if driver has this functionality) 222 222 * @set_number_of_data: optional routine to configure registers to desired 223 223 * number of data (if driver has this functionality) 224 - * @can_dma: routine to determine if the transfer is eligible for DMA use 225 224 * @transfer_one_dma_start: routine to start transfer a single spi_transfer 226 225 * using DMA 227 226 * @dma_rx_cb: routine to call after DMA RX channel operation is complete ··· 231 232 * @baud_rate_div_min: minimum baud rate divisor 232 233 * @baud_rate_div_max: maximum baud rate divisor 233 234 * @has_fifo: boolean to know if fifo is used for driver 234 - * @has_startbit: boolean to know if start bit is used to start transfer 235 + * @flags: compatible specific SPI controller flags used at registration time 235 236 */ 236 237 struct stm32_spi_cfg { 237 238 const struct stm32_spi_regspec *regs; ··· 252 253 unsigned int baud_rate_div_min; 253 254 unsigned int baud_rate_div_max; 254 255 bool has_fifo; 256 + u16 flags; 255 257 }; 256 258 257 259 /** ··· 1722 1722 .baud_rate_div_min = STM32F4_SPI_BR_DIV_MIN, 1723 1723 .baud_rate_div_max = STM32F4_SPI_BR_DIV_MAX, 1724 1724 .has_fifo = false, 1725 + .flags = SPI_MASTER_MUST_TX, 1725 1726 }; 1726 1727 1727 1728 static const struct stm32_spi_cfg stm32h7_spi_cfg = { ··· 1855 1854 master->prepare_message = stm32_spi_prepare_msg; 1856 1855 master->transfer_one = stm32_spi_transfer_one; 1857 1856 master->unprepare_message = stm32_spi_unprepare_msg; 1858 - master->flags = SPI_MASTER_MUST_TX; 1857 + master->flags = spi->cfg->flags; 1859 1858 1860 1859 spi->dma_tx = dma_request_chan(spi->dev, "tx"); 1861 1860 if (IS_ERR(spi->dma_tx)) {
+14 -4
drivers/spi/spi-uniphier.c
··· 726 726 if (ret) { 727 727 dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n", 728 728 ret); 729 - goto out_disable_clk; 729 + goto out_release_dma; 730 730 } 731 731 dma_tx_burst = caps.max_burst; 732 732 } ··· 735 735 if (IS_ERR_OR_NULL(master->dma_rx)) { 736 736 if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) { 737 737 ret = -EPROBE_DEFER; 738 - goto out_disable_clk; 738 + goto out_release_dma; 739 739 } 740 740 master->dma_rx = NULL; 741 741 dma_rx_burst = INT_MAX; ··· 744 744 if (ret) { 745 745 dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n", 746 746 ret); 747 - goto out_disable_clk; 747 + goto out_release_dma; 748 748 } 749 749 dma_rx_burst = caps.max_burst; 750 750 } ··· 753 753 754 754 ret = devm_spi_register_master(&pdev->dev, master); 755 755 if (ret) 756 - goto out_disable_clk; 756 + goto out_release_dma; 757 757 758 758 return 0; 759 + 760 + out_release_dma: 761 + if (!IS_ERR_OR_NULL(master->dma_rx)) { 762 + dma_release_channel(master->dma_rx); 763 + master->dma_rx = NULL; 764 + } 765 + if (!IS_ERR_OR_NULL(master->dma_tx)) { 766 + dma_release_channel(master->dma_tx); 767 + master->dma_tx = NULL; 768 + } 759 769 760 770 out_disable_clk: 761 771 clk_disable_unprepare(priv->clk);
+20
drivers/video/console/Kconfig
··· 78 78 help 79 79 Low-level framebuffer-based console driver. 80 80 81 + config FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION 82 + bool "Enable legacy fbcon hardware acceleration code" 83 + depends on FRAMEBUFFER_CONSOLE 84 + default y if PARISC 85 + default n 86 + help 87 + This option enables the fbcon (framebuffer text-based) hardware 88 + acceleration for graphics drivers which were written for the fbdev 89 + graphics interface. 90 + 91 + On modern, mainstream machines (such as x86-64) and on current 92 + Linux distributions, these fbdev drivers are usually not in use, 93 + so enabling this option has no effect; on such machines you will 94 + want to disable it. 95 + 96 + If you compile this kernel for older machines which still require the 97 + fbdev drivers, you may want to say Y. 98 + 99 + If unsure, select n. 100 + 81 101 config FRAMEBUFFER_CONSOLE_DETECT_PRIMARY 82 102 bool "Map the console to the primary display device" 83 103 depends on FRAMEBUFFER_CONSOLE
+16
drivers/video/fbdev/core/bitblit.c
··· 43 43 } 44 44 } 45 45 46 + static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy, 47 + int sx, int dy, int dx, int height, int width) 48 + { 49 + struct fb_copyarea area; 50 + 51 + area.sx = sx * vc->vc_font.width; 52 + area.sy = sy * vc->vc_font.height; 53 + area.dx = dx * vc->vc_font.width; 54 + area.dy = dy * vc->vc_font.height; 55 + area.height = height * vc->vc_font.height; 56 + area.width = width * vc->vc_font.width; 57 + 58 + info->fbops->fb_copyarea(info, &area); 59 + } 60 + 46 61 static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy, 47 62 int sx, int height, int width) 48 63 { ··· 393 378 394 379 void fbcon_set_bitops(struct fbcon_ops *ops) 395 380 { 381 + ops->bmove = bit_bmove; 396 382 ops->clear = bit_clear; 397 383 ops->putcs = bit_putcs; 398 384 ops->clear_margins = bit_clear_margins;
+536 -21
drivers/video/fbdev/core/fbcon.c
··· 173 173 int count, int ypos, int xpos); 174 174 static void fbcon_clear_margins(struct vc_data *vc, int bottom_only); 175 175 static void fbcon_cursor(struct vc_data *vc, int mode); 176 + static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx, 177 + int height, int width); 176 178 static int fbcon_switch(struct vc_data *vc); 177 179 static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch); 178 180 static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table); ··· 182 180 /* 183 181 * Internal routines 184 182 */ 183 + static __inline__ void ywrap_up(struct vc_data *vc, int count); 184 + static __inline__ void ywrap_down(struct vc_data *vc, int count); 185 + static __inline__ void ypan_up(struct vc_data *vc, int count); 186 + static __inline__ void ypan_down(struct vc_data *vc, int count); 187 + static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx, 188 + int dy, int dx, int height, int width, u_int y_break); 185 189 static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var, 186 190 int unit); 191 + static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p, 192 + int line, int count, int dy); 187 193 static void fbcon_modechanged(struct fb_info *info); 188 194 static void fbcon_set_all_vcs(struct fb_info *info); 189 195 static void fbcon_start(void); ··· 1025 1015 struct vc_data *svc = *default_mode; 1026 1016 struct fbcon_display *t, *p = &fb_display[vc->vc_num]; 1027 1017 int logo = 1, new_rows, new_cols, rows, cols; 1028 - int ret; 1018 + int cap, ret; 1029 1019 1030 1020 if (WARN_ON(info_idx == -1)) 1031 1021 return; ··· 1034 1024 con2fb_map[vc->vc_num] = info_idx; 1035 1025 1036 1026 info = registered_fb[con2fb_map[vc->vc_num]]; 1027 + cap = info->flags; 1037 1028 1038 1029 if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET) 1039 1030 logo_shown = FBCON_LOGO_DONTSHOW; ··· 1136 1125 1137 1126 ops->graphics = 0; 1138 1127 1128 
+ #ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION 1129 + if ((cap & FBINFO_HWACCEL_COPYAREA) && 1130 + !(cap & FBINFO_HWACCEL_DISABLED)) 1131 + p->scrollmode = SCROLL_MOVE; 1132 + else /* default to something safe */ 1133 + p->scrollmode = SCROLL_REDRAW; 1134 + #endif 1135 + 1139 1136 /* 1140 1137 * ++guenther: console.c:vc_allocate() relies on initializing 1141 1138 * vc_{cols,rows}, but we must not set those if we are only ··· 1230 1211 * This system is now divided into two levels because of complications 1231 1212 * caused by hardware scrolling. Top level functions: 1232 1213 * 1233 - * fbcon_clear(), fbcon_putc(), fbcon_clear_margins() 1214 + * fbcon_bmove(), fbcon_clear(), fbcon_putc(), fbcon_clear_margins() 1234 1215 * 1235 1216 * handles y values in range [0, scr_height-1] that correspond to real 1236 1217 * screen positions. y_wrap shift means that first line of bitmap may be 1237 1218 * anywhere on this display. These functions convert lineoffsets to 1238 1219 * bitmap offsets and deal with the wrap-around case by splitting blits. 1239 1220 * 1221 + * fbcon_bmove_physical_8() -- These functions fast implementations 1240 1222 * fbcon_clear_physical_8() -- of original fbcon_XXX fns. 
1241 1223 * fbcon_putc_physical_8() -- (font width != 8) may be added later 1242 1224 * ··· 1410 1390 } 1411 1391 } 1412 1392 1393 + static __inline__ void ywrap_up(struct vc_data *vc, int count) 1394 + { 1395 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1396 + struct fbcon_ops *ops = info->fbcon_par; 1397 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1398 + 1399 + p->yscroll += count; 1400 + if (p->yscroll >= p->vrows) /* Deal with wrap */ 1401 + p->yscroll -= p->vrows; 1402 + ops->var.xoffset = 0; 1403 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1404 + ops->var.vmode |= FB_VMODE_YWRAP; 1405 + ops->update_start(info); 1406 + scrollback_max += count; 1407 + if (scrollback_max > scrollback_phys_max) 1408 + scrollback_max = scrollback_phys_max; 1409 + scrollback_current = 0; 1410 + } 1411 + 1412 + static __inline__ void ywrap_down(struct vc_data *vc, int count) 1413 + { 1414 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1415 + struct fbcon_ops *ops = info->fbcon_par; 1416 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1417 + 1418 + p->yscroll -= count; 1419 + if (p->yscroll < 0) /* Deal with wrap */ 1420 + p->yscroll += p->vrows; 1421 + ops->var.xoffset = 0; 1422 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1423 + ops->var.vmode |= FB_VMODE_YWRAP; 1424 + ops->update_start(info); 1425 + scrollback_max -= count; 1426 + if (scrollback_max < 0) 1427 + scrollback_max = 0; 1428 + scrollback_current = 0; 1429 + } 1430 + 1431 + static __inline__ void ypan_up(struct vc_data *vc, int count) 1432 + { 1433 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1434 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1435 + struct fbcon_ops *ops = info->fbcon_par; 1436 + 1437 + p->yscroll += count; 1438 + if (p->yscroll > p->vrows - vc->vc_rows) { 1439 + ops->bmove(vc, info, p->vrows - vc->vc_rows, 1440 + 0, 0, 0, vc->vc_rows, vc->vc_cols); 1441 + p->yscroll -= p->vrows - vc->vc_rows; 1442 + } 1443 + 
1444 + ops->var.xoffset = 0; 1445 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1446 + ops->var.vmode &= ~FB_VMODE_YWRAP; 1447 + ops->update_start(info); 1448 + fbcon_clear_margins(vc, 1); 1449 + scrollback_max += count; 1450 + if (scrollback_max > scrollback_phys_max) 1451 + scrollback_max = scrollback_phys_max; 1452 + scrollback_current = 0; 1453 + } 1454 + 1455 + static __inline__ void ypan_up_redraw(struct vc_data *vc, int t, int count) 1456 + { 1457 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1458 + struct fbcon_ops *ops = info->fbcon_par; 1459 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1460 + 1461 + p->yscroll += count; 1462 + 1463 + if (p->yscroll > p->vrows - vc->vc_rows) { 1464 + p->yscroll -= p->vrows - vc->vc_rows; 1465 + fbcon_redraw_move(vc, p, t + count, vc->vc_rows - count, t); 1466 + } 1467 + 1468 + ops->var.xoffset = 0; 1469 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1470 + ops->var.vmode &= ~FB_VMODE_YWRAP; 1471 + ops->update_start(info); 1472 + fbcon_clear_margins(vc, 1); 1473 + scrollback_max += count; 1474 + if (scrollback_max > scrollback_phys_max) 1475 + scrollback_max = scrollback_phys_max; 1476 + scrollback_current = 0; 1477 + } 1478 + 1479 + static __inline__ void ypan_down(struct vc_data *vc, int count) 1480 + { 1481 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1482 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1483 + struct fbcon_ops *ops = info->fbcon_par; 1484 + 1485 + p->yscroll -= count; 1486 + if (p->yscroll < 0) { 1487 + ops->bmove(vc, info, 0, 0, p->vrows - vc->vc_rows, 1488 + 0, vc->vc_rows, vc->vc_cols); 1489 + p->yscroll += p->vrows - vc->vc_rows; 1490 + } 1491 + 1492 + ops->var.xoffset = 0; 1493 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1494 + ops->var.vmode &= ~FB_VMODE_YWRAP; 1495 + ops->update_start(info); 1496 + fbcon_clear_margins(vc, 1); 1497 + scrollback_max -= count; 1498 + if (scrollback_max < 0) 1499 + scrollback_max = 0; 1500 + 
scrollback_current = 0; 1501 + } 1502 + 1503 + static __inline__ void ypan_down_redraw(struct vc_data *vc, int t, int count) 1504 + { 1505 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1506 + struct fbcon_ops *ops = info->fbcon_par; 1507 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1508 + 1509 + p->yscroll -= count; 1510 + 1511 + if (p->yscroll < 0) { 1512 + p->yscroll += p->vrows - vc->vc_rows; 1513 + fbcon_redraw_move(vc, p, t, vc->vc_rows - count, t + count); 1514 + } 1515 + 1516 + ops->var.xoffset = 0; 1517 + ops->var.yoffset = p->yscroll * vc->vc_font.height; 1518 + ops->var.vmode &= ~FB_VMODE_YWRAP; 1519 + ops->update_start(info); 1520 + fbcon_clear_margins(vc, 1); 1521 + scrollback_max -= count; 1522 + if (scrollback_max < 0) 1523 + scrollback_max = 0; 1524 + scrollback_current = 0; 1525 + } 1526 + 1527 + static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p, 1528 + int line, int count, int dy) 1529 + { 1530 + unsigned short *s = (unsigned short *) 1531 + (vc->vc_origin + vc->vc_size_row * line); 1532 + 1533 + while (count--) { 1534 + unsigned short *start = s; 1535 + unsigned short *le = advance_row(s, 1); 1536 + unsigned short c; 1537 + int x = 0; 1538 + unsigned short attr = 1; 1539 + 1540 + do { 1541 + c = scr_readw(s); 1542 + if (attr != (c & 0xff00)) { 1543 + attr = c & 0xff00; 1544 + if (s > start) { 1545 + fbcon_putcs(vc, start, s - start, 1546 + dy, x); 1547 + x += s - start; 1548 + start = s; 1549 + } 1550 + } 1551 + console_conditional_schedule(); 1552 + s++; 1553 + } while (s < le); 1554 + if (s > start) 1555 + fbcon_putcs(vc, start, s - start, dy, x); 1556 + console_conditional_schedule(); 1557 + dy++; 1558 + } 1559 + } 1560 + 1561 + static void fbcon_redraw_blit(struct vc_data *vc, struct fb_info *info, 1562 + struct fbcon_display *p, int line, int count, int ycount) 1563 + { 1564 + int offset = ycount * vc->vc_cols; 1565 + unsigned short *d = (unsigned short *) 1566 + (vc->vc_origin + 
vc->vc_size_row * line); 1567 + unsigned short *s = d + offset; 1568 + struct fbcon_ops *ops = info->fbcon_par; 1569 + 1570 + while (count--) { 1571 + unsigned short *start = s; 1572 + unsigned short *le = advance_row(s, 1); 1573 + unsigned short c; 1574 + int x = 0; 1575 + 1576 + do { 1577 + c = scr_readw(s); 1578 + 1579 + if (c == scr_readw(d)) { 1580 + if (s > start) { 1581 + ops->bmove(vc, info, line + ycount, x, 1582 + line, x, 1, s-start); 1583 + x += s - start + 1; 1584 + start = s + 1; 1585 + } else { 1586 + x++; 1587 + start++; 1588 + } 1589 + } 1590 + 1591 + scr_writew(c, d); 1592 + console_conditional_schedule(); 1593 + s++; 1594 + d++; 1595 + } while (s < le); 1596 + if (s > start) 1597 + ops->bmove(vc, info, line + ycount, x, line, x, 1, 1598 + s-start); 1599 + console_conditional_schedule(); 1600 + if (ycount > 0) 1601 + line++; 1602 + else { 1603 + line--; 1604 + /* NOTE: We subtract two lines from these pointers */ 1605 + s -= vc->vc_size_row; 1606 + d -= vc->vc_size_row; 1607 + } 1608 + } 1609 + } 1610 + 1413 1611 static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p, 1414 1612 int line, int count, int offset) 1415 1613 { ··· 1688 1450 { 1689 1451 struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1690 1452 struct fbcon_display *p = &fb_display[vc->vc_num]; 1453 + int scroll_partial = info->flags & FBINFO_PARTIAL_PAN_OK; 1691 1454 1692 1455 if (fbcon_is_inactive(vc, info)) 1693 1456 return true; ··· 1705 1466 case SM_UP: 1706 1467 if (count > vc->vc_rows) /* Maximum realistic size */ 1707 1468 count = vc->vc_rows; 1708 - fbcon_redraw(vc, p, t, b - t - count, 1709 - count * vc->vc_cols); 1710 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1711 - scr_memsetw((unsigned short *) (vc->vc_origin + 1712 - vc->vc_size_row * 1713 - (b - count)), 1714 - vc->vc_video_erase_char, 1715 - vc->vc_size_row * count); 1716 - return true; 1469 + if (logo_shown >= 0) 1470 + goto redraw_up; 1471 + switch (fb_scrollmode(p)) { 1472 + case 
SCROLL_MOVE: 1473 + fbcon_redraw_blit(vc, info, p, t, b - t - count, 1474 + count); 1475 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1476 + scr_memsetw((unsigned short *) (vc->vc_origin + 1477 + vc->vc_size_row * 1478 + (b - count)), 1479 + vc->vc_video_erase_char, 1480 + vc->vc_size_row * count); 1481 + return true; 1482 + 1483 + case SCROLL_WRAP_MOVE: 1484 + if (b - t - count > 3 * vc->vc_rows >> 2) { 1485 + if (t > 0) 1486 + fbcon_bmove(vc, 0, 0, count, 0, t, 1487 + vc->vc_cols); 1488 + ywrap_up(vc, count); 1489 + if (vc->vc_rows - b > 0) 1490 + fbcon_bmove(vc, b - count, 0, b, 0, 1491 + vc->vc_rows - b, 1492 + vc->vc_cols); 1493 + } else if (info->flags & FBINFO_READS_FAST) 1494 + fbcon_bmove(vc, t + count, 0, t, 0, 1495 + b - t - count, vc->vc_cols); 1496 + else 1497 + goto redraw_up; 1498 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1499 + break; 1500 + 1501 + case SCROLL_PAN_REDRAW: 1502 + if ((p->yscroll + count <= 1503 + 2 * (p->vrows - vc->vc_rows)) 1504 + && ((!scroll_partial && (b - t == vc->vc_rows)) 1505 + || (scroll_partial 1506 + && (b - t - count > 1507 + 3 * vc->vc_rows >> 2)))) { 1508 + if (t > 0) 1509 + fbcon_redraw_move(vc, p, 0, t, count); 1510 + ypan_up_redraw(vc, t, count); 1511 + if (vc->vc_rows - b > 0) 1512 + fbcon_redraw_move(vc, p, b, 1513 + vc->vc_rows - b, b); 1514 + } else 1515 + fbcon_redraw_move(vc, p, t + count, b - t - count, t); 1516 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1517 + break; 1518 + 1519 + case SCROLL_PAN_MOVE: 1520 + if ((p->yscroll + count <= 1521 + 2 * (p->vrows - vc->vc_rows)) 1522 + && ((!scroll_partial && (b - t == vc->vc_rows)) 1523 + || (scroll_partial 1524 + && (b - t - count > 1525 + 3 * vc->vc_rows >> 2)))) { 1526 + if (t > 0) 1527 + fbcon_bmove(vc, 0, 0, count, 0, t, 1528 + vc->vc_cols); 1529 + ypan_up(vc, count); 1530 + if (vc->vc_rows - b > 0) 1531 + fbcon_bmove(vc, b - count, 0, b, 0, 1532 + vc->vc_rows - b, 1533 + vc->vc_cols); 1534 + } else if (info->flags & 
FBINFO_READS_FAST) 1535 + fbcon_bmove(vc, t + count, 0, t, 0, 1536 + b - t - count, vc->vc_cols); 1537 + else 1538 + goto redraw_up; 1539 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1540 + break; 1541 + 1542 + case SCROLL_REDRAW: 1543 + redraw_up: 1544 + fbcon_redraw(vc, p, t, b - t - count, 1545 + count * vc->vc_cols); 1546 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1547 + scr_memsetw((unsigned short *) (vc->vc_origin + 1548 + vc->vc_size_row * 1549 + (b - count)), 1550 + vc->vc_video_erase_char, 1551 + vc->vc_size_row * count); 1552 + return true; 1553 + } 1554 + break; 1717 1555 1718 1556 case SM_DOWN: 1719 1557 if (count > vc->vc_rows) /* Maximum realistic size */ 1720 1558 count = vc->vc_rows; 1721 - fbcon_redraw(vc, p, b - 1, b - t - count, 1722 - -count * vc->vc_cols); 1723 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1724 - scr_memsetw((unsigned short *) (vc->vc_origin + 1725 - vc->vc_size_row * 1726 - t), 1727 - vc->vc_video_erase_char, 1728 - vc->vc_size_row * count); 1729 - return true; 1559 + if (logo_shown >= 0) 1560 + goto redraw_down; 1561 + switch (fb_scrollmode(p)) { 1562 + case SCROLL_MOVE: 1563 + fbcon_redraw_blit(vc, info, p, b - 1, b - t - count, 1564 + -count); 1565 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1566 + scr_memsetw((unsigned short *) (vc->vc_origin + 1567 + vc->vc_size_row * 1568 + t), 1569 + vc->vc_video_erase_char, 1570 + vc->vc_size_row * count); 1571 + return true; 1572 + 1573 + case SCROLL_WRAP_MOVE: 1574 + if (b - t - count > 3 * vc->vc_rows >> 2) { 1575 + if (vc->vc_rows - b > 0) 1576 + fbcon_bmove(vc, b, 0, b - count, 0, 1577 + vc->vc_rows - b, 1578 + vc->vc_cols); 1579 + ywrap_down(vc, count); 1580 + if (t > 0) 1581 + fbcon_bmove(vc, count, 0, 0, 0, t, 1582 + vc->vc_cols); 1583 + } else if (info->flags & FBINFO_READS_FAST) 1584 + fbcon_bmove(vc, t, 0, t + count, 0, 1585 + b - t - count, vc->vc_cols); 1586 + else 1587 + goto redraw_down; 1588 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1589 + break; 
1590 + 1591 + case SCROLL_PAN_MOVE: 1592 + if ((count - p->yscroll <= p->vrows - vc->vc_rows) 1593 + && ((!scroll_partial && (b - t == vc->vc_rows)) 1594 + || (scroll_partial 1595 + && (b - t - count > 1596 + 3 * vc->vc_rows >> 2)))) { 1597 + if (vc->vc_rows - b > 0) 1598 + fbcon_bmove(vc, b, 0, b - count, 0, 1599 + vc->vc_rows - b, 1600 + vc->vc_cols); 1601 + ypan_down(vc, count); 1602 + if (t > 0) 1603 + fbcon_bmove(vc, count, 0, 0, 0, t, 1604 + vc->vc_cols); 1605 + } else if (info->flags & FBINFO_READS_FAST) 1606 + fbcon_bmove(vc, t, 0, t + count, 0, 1607 + b - t - count, vc->vc_cols); 1608 + else 1609 + goto redraw_down; 1610 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1611 + break; 1612 + 1613 + case SCROLL_PAN_REDRAW: 1614 + if ((count - p->yscroll <= p->vrows - vc->vc_rows) 1615 + && ((!scroll_partial && (b - t == vc->vc_rows)) 1616 + || (scroll_partial 1617 + && (b - t - count > 1618 + 3 * vc->vc_rows >> 2)))) { 1619 + if (vc->vc_rows - b > 0) 1620 + fbcon_redraw_move(vc, p, b, vc->vc_rows - b, 1621 + b - count); 1622 + ypan_down_redraw(vc, t, count); 1623 + if (t > 0) 1624 + fbcon_redraw_move(vc, p, count, t, 0); 1625 + } else 1626 + fbcon_redraw_move(vc, p, t, b - t - count, t + count); 1627 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1628 + break; 1629 + 1630 + case SCROLL_REDRAW: 1631 + redraw_down: 1632 + fbcon_redraw(vc, p, b - 1, b - t - count, 1633 + -count * vc->vc_cols); 1634 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1635 + scr_memsetw((unsigned short *) (vc->vc_origin + 1636 + vc->vc_size_row * 1637 + t), 1638 + vc->vc_video_erase_char, 1639 + vc->vc_size_row * count); 1640 + return true; 1641 + } 1730 1642 } 1731 1643 return false; 1644 + } 1645 + 1646 + 1647 + static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx, 1648 + int height, int width) 1649 + { 1650 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1651 + struct fbcon_display *p = &fb_display[vc->vc_num]; 1652 + 1653 + if (fbcon_is_inactive(vc, 
info)) 1654 + return; 1655 + 1656 + if (!width || !height) 1657 + return; 1658 + 1659 + /* Split blits that cross physical y_wrap case. 1660 + * Pathological case involves 4 blits, better to use recursive 1661 + * code rather than unrolled case 1662 + * 1663 + * Recursive invocations don't need to erase the cursor over and 1664 + * over again, so we use fbcon_bmove_rec() 1665 + */ 1666 + fbcon_bmove_rec(vc, p, sy, sx, dy, dx, height, width, 1667 + p->vrows - p->yscroll); 1668 + } 1669 + 1670 + static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx, 1671 + int dy, int dx, int height, int width, u_int y_break) 1672 + { 1673 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1674 + struct fbcon_ops *ops = info->fbcon_par; 1675 + u_int b; 1676 + 1677 + if (sy < y_break && sy + height > y_break) { 1678 + b = y_break - sy; 1679 + if (dy < sy) { /* Avoid trashing self */ 1680 + fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1681 + y_break); 1682 + fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1683 + height - b, width, y_break); 1684 + } else { 1685 + fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1686 + height - b, width, y_break); 1687 + fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1688 + y_break); 1689 + } 1690 + return; 1691 + } 1692 + 1693 + if (dy < y_break && dy + height > y_break) { 1694 + b = y_break - dy; 1695 + if (dy < sy) { /* Avoid trashing self */ 1696 + fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1697 + y_break); 1698 + fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1699 + height - b, width, y_break); 1700 + } else { 1701 + fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1702 + height - b, width, y_break); 1703 + fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1704 + y_break); 1705 + } 1706 + return; 1707 + } 1708 + ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx, 1709 + height, width); 1710 + } 1711 + 1712 + static void updatescrollmode_accel(struct fbcon_display *p, 1713 + struct fb_info 
*info, 1714 + struct vc_data *vc) 1715 + { 1716 + #ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION 1717 + struct fbcon_ops *ops = info->fbcon_par; 1718 + int cap = info->flags; 1719 + u16 t = 0; 1720 + int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep, 1721 + info->fix.xpanstep); 1722 + int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t); 1723 + int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres); 1724 + int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual, 1725 + info->var.xres_virtual); 1726 + int good_pan = (cap & FBINFO_HWACCEL_YPAN) && 1727 + divides(ypan, vc->vc_font.height) && vyres > yres; 1728 + int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) && 1729 + divides(ywrap, vc->vc_font.height) && 1730 + divides(vc->vc_font.height, vyres) && 1731 + divides(vc->vc_font.height, yres); 1732 + int reading_fast = cap & FBINFO_READS_FAST; 1733 + int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) && 1734 + !(cap & FBINFO_HWACCEL_DISABLED); 1735 + int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) && 1736 + !(cap & FBINFO_HWACCEL_DISABLED); 1737 + 1738 + if (good_wrap || good_pan) { 1739 + if (reading_fast || fast_copyarea) 1740 + p->scrollmode = good_wrap ? 1741 + SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE; 1742 + else 1743 + p->scrollmode = good_wrap ? 
SCROLL_REDRAW : 1744 + SCROLL_PAN_REDRAW; 1745 + } else { 1746 + if (reading_fast || (fast_copyarea && !fast_imageblit)) 1747 + p->scrollmode = SCROLL_MOVE; 1748 + else 1749 + p->scrollmode = SCROLL_REDRAW; 1750 + } 1751 + #endif 1732 1752 } 1733 1753 1734 1754 static void updatescrollmode(struct fbcon_display *p, ··· 2005 1507 p->vrows -= (yres - (fh * vc->vc_rows)) / fh; 2006 1508 if ((yres % fh) && (vyres % fh < yres % fh)) 2007 1509 p->vrows--; 1510 + 1511 + /* update scrollmode in case hardware acceleration is used */ 1512 + updatescrollmode_accel(p, info, vc); 2008 1513 } 2009 1514 2010 1515 #define PITCH(w) (((w) + 7) >> 3) ··· 2165 1664 2166 1665 updatescrollmode(p, info, vc); 2167 1666 2168 - scrollback_phys_max = 0; 1667 + switch (fb_scrollmode(p)) { 1668 + case SCROLL_WRAP_MOVE: 1669 + scrollback_phys_max = p->vrows - vc->vc_rows; 1670 + break; 1671 + case SCROLL_PAN_MOVE: 1672 + case SCROLL_PAN_REDRAW: 1673 + scrollback_phys_max = p->vrows - 2 * vc->vc_rows; 1674 + if (scrollback_phys_max < 0) 1675 + scrollback_phys_max = 0; 1676 + break; 1677 + default: 1678 + scrollback_phys_max = 0; 1679 + break; 1680 + } 1681 + 2169 1682 scrollback_max = 0; 2170 1683 scrollback_current = 0; 2171 1684
+72
drivers/video/fbdev/core/fbcon.h
··· 29 29 /* Filled in by the low-level console driver */ 30 30 const u_char *fontdata; 31 31 int userfont; /* != 0 if fontdata kmalloc()ed */ 32 + #ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION 33 + u_short scrollmode; /* Scroll Method, use fb_scrollmode() */ 34 + #endif 32 35 u_short inverse; /* != 0 text black on white as default */ 33 36 short yscroll; /* Hardware scrolling */ 34 37 int vrows; /* number of virtual rows */ ··· 54 51 }; 55 52 56 53 struct fbcon_ops { 54 + void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy, 55 + int sx, int dy, int dx, int height, int width); 57 56 void (*clear)(struct vc_data *vc, struct fb_info *info, int sy, 58 57 int sx, int height, int width); 59 58 void (*putcs)(struct vc_data *vc, struct fb_info *info, ··· 153 148 154 149 #define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0) 155 150 #define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1) 151 + 152 + /* 153 + * Scroll Method 154 + */ 155 + 156 + /* There are several methods fbcon can use to move text around the screen: 157 + * 158 + * Operation Pan Wrap 159 + *--------------------------------------------- 160 + * SCROLL_MOVE copyarea No No 161 + * SCROLL_PAN_MOVE copyarea Yes No 162 + * SCROLL_WRAP_MOVE copyarea No Yes 163 + * SCROLL_REDRAW imageblit No No 164 + * SCROLL_PAN_REDRAW imageblit Yes No 165 + * SCROLL_WRAP_REDRAW imageblit No Yes 166 + * 167 + * (SCROLL_WRAP_REDRAW is not implemented yet) 168 + * 169 + * In general, fbcon will choose the best scrolling 170 + * method based on the rule below: 171 + * 172 + * Pan/Wrap > accel imageblit > accel copyarea > 173 + * soft imageblit > (soft copyarea) 174 + * 175 + * Exception to the rule: Pan + accel copyarea is 176 + * preferred over Pan + accel imageblit. 177 + * 178 + * The above is typical for PCI/AGP cards. Unless 179 + * overridden, fbcon will never use soft copyarea. 
180 + * 181 + * If you need to override the above rule, set the 182 + * appropriate flags in fb_info->flags. For example, 183 + * to prefer copyarea over imageblit, set 184 + * FBINFO_READS_FAST. 185 + * 186 + * Other notes: 187 + * + use the hardware engine to move the text 188 + * (hw-accelerated copyarea() and fillrect()) 189 + * + use hardware-supported panning on a large virtual screen 190 + * + amifb can not only pan, but also wrap the display by N lines 191 + * (i.e. visible line i = physical line (i+N) % yres). 192 + * + read what's already rendered on the screen and 193 + * write it in a different place (this is cfb_copyarea()) 194 + * + re-render the text to the screen 195 + * 196 + * Whether to use wrapping or panning can only be figured out at 197 + * runtime (when we know whether our font height is a multiple 198 + * of the pan/wrap step) 199 + * 200 + */ 201 + 202 + #define SCROLL_MOVE 0x001 203 + #define SCROLL_PAN_MOVE 0x002 204 + #define SCROLL_WRAP_MOVE 0x003 205 + #define SCROLL_REDRAW 0x004 206 + #define SCROLL_PAN_REDRAW 0x005 207 + 208 + static inline u_short fb_scrollmode(struct fbcon_display *fb) 209 + { 210 + #ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION 211 + return fb->scrollmode; 212 + #else 213 + /* hardcoded to SCROLL_REDRAW if acceleration was disabled. */ 214 + return SCROLL_REDRAW; 215 + #endif 216 + } 217 + 156 218 157 219 #ifdef CONFIG_FB_TILEBLITTING 158 220 extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
+24 -4
drivers/video/fbdev/core/fbcon_ccw.c
··· 59 59 } 60 60 } 61 61 62 + 63 + static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy, 64 + int sx, int dy, int dx, int height, int width) 65 + { 66 + struct fbcon_ops *ops = info->fbcon_par; 67 + struct fb_copyarea area; 68 + u32 vyres = GETVYRES(ops->p, info); 69 + 70 + area.sx = sy * vc->vc_font.height; 71 + area.sy = vyres - ((sx + width) * vc->vc_font.width); 72 + area.dx = dy * vc->vc_font.height; 73 + area.dy = vyres - ((dx + width) * vc->vc_font.width); 74 + area.width = height * vc->vc_font.height; 75 + area.height = width * vc->vc_font.width; 76 + 77 + info->fbops->fb_copyarea(info, &area); 78 + } 79 + 62 80 static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy, 63 81 int sx, int height, int width) 64 82 { 83 + struct fbcon_ops *ops = info->fbcon_par; 65 84 struct fb_fillrect region; 66 85 int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; 67 - u32 vyres = info->var.yres; 86 + u32 vyres = GETVYRES(ops->p, info); 68 87 69 88 region.color = attr_bgcol_ec(bgshift,vc,info); 70 89 region.dx = sy * vc->vc_font.height; ··· 140 121 u32 cnt, pitch, size; 141 122 u32 attribute = get_attribute(info, scr_readw(s)); 142 123 u8 *dst, *buf = NULL; 143 - u32 vyres = info->var.yres; 124 + u32 vyres = GETVYRES(ops->p, info); 144 125 145 126 if (!ops->fontbuffer) 146 127 return; ··· 229 210 int attribute, use_sw = vc->vc_cursor_type & CUR_SW; 230 211 int err = 1, dx, dy; 231 212 char *src; 232 - u32 vyres = info->var.yres; 213 + u32 vyres = GETVYRES(ops->p, info); 233 214 234 215 if (!ops->fontbuffer) 235 216 return; ··· 387 368 { 388 369 struct fbcon_ops *ops = info->fbcon_par; 389 370 u32 yoffset; 390 - u32 vyres = info->var.yres; 371 + u32 vyres = GETVYRES(ops->p, info); 391 372 int err; 392 373 393 374 yoffset = (vyres - info->var.yres) - ops->var.xoffset; ··· 402 383 403 384 void fbcon_rotate_ccw(struct fbcon_ops *ops) 404 385 { 386 + ops->bmove = ccw_bmove; 405 387 ops->clear = ccw_clear; 406 388 ops->putcs = ccw_putcs; 407 389 
ops->clear_margins = ccw_clear_margins;
+24 -4
drivers/video/fbdev/core/fbcon_cw.c
··· 44 44 } 45 45 } 46 46 47 + 48 + static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy, 49 + int sx, int dy, int dx, int height, int width) 50 + { 51 + struct fbcon_ops *ops = info->fbcon_par; 52 + struct fb_copyarea area; 53 + u32 vxres = GETVXRES(ops->p, info); 54 + 55 + area.sx = vxres - ((sy + height) * vc->vc_font.height); 56 + area.sy = sx * vc->vc_font.width; 57 + area.dx = vxres - ((dy + height) * vc->vc_font.height); 58 + area.dy = dx * vc->vc_font.width; 59 + area.width = height * vc->vc_font.height; 60 + area.height = width * vc->vc_font.width; 61 + 62 + info->fbops->fb_copyarea(info, &area); 63 + } 64 + 47 65 static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy, 48 66 int sx, int height, int width) 49 67 { 68 + struct fbcon_ops *ops = info->fbcon_par; 50 69 struct fb_fillrect region; 51 70 int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; 52 - u32 vxres = info->var.xres; 71 + u32 vxres = GETVXRES(ops->p, info); 53 72 54 73 region.color = attr_bgcol_ec(bgshift,vc,info); 55 74 region.dx = vxres - ((sy + height) * vc->vc_font.height); ··· 125 106 u32 cnt, pitch, size; 126 107 u32 attribute = get_attribute(info, scr_readw(s)); 127 108 u8 *dst, *buf = NULL; 128 - u32 vxres = info->var.xres; 109 + u32 vxres = GETVXRES(ops->p, info); 129 110 130 111 if (!ops->fontbuffer) 131 112 return; ··· 212 193 int attribute, use_sw = vc->vc_cursor_type & CUR_SW; 213 194 int err = 1, dx, dy; 214 195 char *src; 215 - u32 vxres = info->var.xres; 196 + u32 vxres = GETVXRES(ops->p, info); 216 197 217 198 if (!ops->fontbuffer) 218 199 return; ··· 369 350 static int cw_update_start(struct fb_info *info) 370 351 { 371 352 struct fbcon_ops *ops = info->fbcon_par; 372 - u32 vxres = info->var.xres; 353 + u32 vxres = GETVXRES(ops->p, info); 373 354 u32 xoffset; 374 355 int err; 375 356 ··· 385 366 386 367 void fbcon_rotate_cw(struct fbcon_ops *ops) 387 368 { 369 + ops->bmove = cw_bmove; 388 370 ops->clear = cw_clear; 389 371 ops->putcs = cw_putcs; 390 
372 ops->clear_margins = cw_clear_margins;
+9
drivers/video/fbdev/core/fbcon_rotate.h
··· 11 11 #ifndef _FBCON_ROTATE_H 12 12 #define _FBCON_ROTATE_H 13 13 14 + #define GETVYRES(s,i) ({ \ 15 + (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE) ? \ 16 + (i)->var.yres : (i)->var.yres_virtual; }) 17 + 18 + #define GETVXRES(s,i) ({ \ 19 + (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE || !(i)->fix.xpanstep) ? \ 20 + (i)->var.xres : (i)->var.xres_virtual; }) 21 + 22 + 14 23 static inline int pattern_test_bit(u32 x, u32 y, u32 pitch, const char *pat) 15 24 { 16 25 u32 tmp = (y * pitch) + x, index = tmp / 8, bit = tmp % 8;
+29 -8
drivers/video/fbdev/core/fbcon_ud.c
··· 44 44 } 45 45 } 46 46 47 + 48 + static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy, 49 + int sx, int dy, int dx, int height, int width) 50 + { 51 + struct fbcon_ops *ops = info->fbcon_par; 52 + struct fb_copyarea area; 53 + u32 vyres = GETVYRES(ops->p, info); 54 + u32 vxres = GETVXRES(ops->p, info); 55 + 56 + area.sy = vyres - ((sy + height) * vc->vc_font.height); 57 + area.sx = vxres - ((sx + width) * vc->vc_font.width); 58 + area.dy = vyres - ((dy + height) * vc->vc_font.height); 59 + area.dx = vxres - ((dx + width) * vc->vc_font.width); 60 + area.height = height * vc->vc_font.height; 61 + area.width = width * vc->vc_font.width; 62 + 63 + info->fbops->fb_copyarea(info, &area); 64 + } 65 + 47 66 static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy, 48 67 int sx, int height, int width) 49 68 { 69 + struct fbcon_ops *ops = info->fbcon_par; 50 70 struct fb_fillrect region; 51 71 int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; 52 - u32 vyres = info->var.yres; 53 - u32 vxres = info->var.xres; 72 + u32 vyres = GETVYRES(ops->p, info); 73 + u32 vxres = GETVXRES(ops->p, info); 54 74 55 75 region.color = attr_bgcol_ec(bgshift,vc,info); 56 76 region.dy = vyres - ((sy + height) * vc->vc_font.height); ··· 162 142 u32 mod = vc->vc_font.width % 8, cnt, pitch, size; 163 143 u32 attribute = get_attribute(info, scr_readw(s)); 164 144 u8 *dst, *buf = NULL; 165 - u32 vyres = info->var.yres; 166 - u32 vxres = info->var.xres; 145 + u32 vyres = GETVYRES(ops->p, info); 146 + u32 vxres = GETVXRES(ops->p, info); 167 147 168 148 if (!ops->fontbuffer) 169 149 return; ··· 259 239 int attribute, use_sw = vc->vc_cursor_type & CUR_SW; 260 240 int err = 1, dx, dy; 261 241 char *src; 262 - u32 vyres = info->var.yres; 263 - u32 vxres = info->var.xres; 242 + u32 vyres = GETVYRES(ops->p, info); 243 + u32 vxres = GETVXRES(ops->p, info); 264 244 265 245 if (!ops->fontbuffer) 266 246 return; ··· 410 390 { 411 391 struct fbcon_ops *ops = info->fbcon_par; 412 392 int 
xoffset, yoffset; 413 - u32 vyres = info->var.yres; 414 - u32 vxres = info->var.xres; 393 + u32 vyres = GETVYRES(ops->p, info); 394 + u32 vxres = GETVXRES(ops->p, info); 415 395 int err; 416 396 417 397 xoffset = vxres - info->var.xres - ops->var.xoffset; ··· 429 409 430 410 void fbcon_rotate_ud(struct fbcon_ops *ops) 431 411 { 412 + ops->bmove = ud_bmove; 432 413 ops->clear = ud_clear; 433 414 ops->putcs = ud_putcs; 434 415 ops->clear_margins = ud_clear_margins;
+16
drivers/video/fbdev/core/tileblit.c
··· 16 16 #include <asm/types.h> 17 17 #include "fbcon.h" 18 18 19 + static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy, 20 + int sx, int dy, int dx, int height, int width) 21 + { 22 + struct fb_tilearea area; 23 + 24 + area.sx = sx; 25 + area.sy = sy; 26 + area.dx = dx; 27 + area.dy = dy; 28 + area.height = height; 29 + area.width = width; 30 + 31 + info->tileops->fb_tilecopy(info, &area); 32 + } 33 + 19 34 static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy, 20 35 int sx, int height, int width) 21 36 { ··· 133 118 struct fb_tilemap map; 134 119 struct fbcon_ops *ops = info->fbcon_par; 135 120 121 + ops->bmove = tile_bmove; 136 122 ops->clear = tile_clear; 137 123 ops->putcs = tile_putcs; 138 124 ops->clear_margins = tile_clear_margins;
+6 -6
drivers/video/fbdev/skeletonfb.c
··· 505 505 } 506 506 507 507 /** 508 - * xxxfb_copyarea - OBSOLETE function. 508 + * xxxfb_copyarea - REQUIRED function. Can use generic routines if 509 + * non acclerated hardware and packed pixel based. 509 510 * Copies one area of the screen to another area. 510 - * Will be deleted in a future version 511 511 * 512 512 * @info: frame buffer structure that represents a single frame buffer 513 513 * @area: Structure providing the data to copy the framebuffer contents 514 514 * from one region to another. 515 515 * 516 - * This drawing operation copied a rectangular area from one area of the 516 + * This drawing operation copies a rectangular area from one area of the 517 517 * screen to another area. 518 518 */ 519 519 void xxxfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) ··· 645 645 .fb_setcolreg = xxxfb_setcolreg, 646 646 .fb_blank = xxxfb_blank, 647 647 .fb_pan_display = xxxfb_pan_display, 648 - .fb_fillrect = xxxfb_fillrect, /* Needed !!! */ 649 - .fb_copyarea = xxxfb_copyarea, /* Obsolete */ 650 - .fb_imageblit = xxxfb_imageblit, /* Needed !!! */ 648 + .fb_fillrect = xxxfb_fillrect, /* Needed !!! */ 649 + .fb_copyarea = xxxfb_copyarea, /* Needed !!! */ 650 + .fb_imageblit = xxxfb_imageblit, /* Needed !!! */ 651 651 .fb_cursor = xxxfb_cursor, /* Optional !!! */ 652 652 .fb_sync = xxxfb_sync, 653 653 .fb_ioctl = xxxfb_ioctl,
+4 -5
fs/9p/fid.c
··· 96 96 dentry, dentry, from_kuid(&init_user_ns, uid), 97 97 any); 98 98 ret = NULL; 99 - 100 - if (d_inode(dentry)) 101 - ret = v9fs_fid_find_inode(d_inode(dentry), uid); 102 - 103 99 /* we'll recheck under lock if there's anything to look in */ 104 - if (!ret && dentry->d_fsdata) { 100 + if (dentry->d_fsdata) { 105 101 struct hlist_head *h = (struct hlist_head *)&dentry->d_fsdata; 106 102 107 103 spin_lock(&dentry->d_lock); ··· 109 113 } 110 114 } 111 115 spin_unlock(&dentry->d_lock); 116 + } else { 117 + if (dentry->d_inode) 118 + ret = v9fs_fid_find_inode(dentry->d_inode, uid); 112 119 } 113 120 114 121 return ret;
+1 -1
fs/Makefile
··· 96 96 obj-$(CONFIG_NFSD) += nfsd/ 97 97 obj-$(CONFIG_LOCKD) += lockd/ 98 98 obj-$(CONFIG_NLS) += nls/ 99 - obj-$(CONFIG_UNICODE) += unicode/ 99 + obj-y += unicode/ 100 100 obj-$(CONFIG_SYSV_FS) += sysv/ 101 101 obj-$(CONFIG_SMBFS_COMMON) += smbfs_common/ 102 102 obj-$(CONFIG_CIFS) += cifs/
+37 -2
fs/btrfs/block-group.c
··· 124 124 { 125 125 if (refcount_dec_and_test(&cache->refs)) { 126 126 WARN_ON(cache->pinned > 0); 127 - WARN_ON(cache->reserved > 0); 127 + /* 128 + * If there was a failure to cleanup a log tree, very likely due 129 + * to an IO failure on a writeback attempt of one or more of its 130 + * extent buffers, we could not do proper (and cheap) unaccounting 131 + * of their reserved space, so don't warn on reserved > 0 in that 132 + * case. 133 + */ 134 + if (!(cache->flags & BTRFS_BLOCK_GROUP_METADATA) || 135 + !BTRFS_FS_LOG_CLEANUP_ERROR(cache->fs_info)) 136 + WARN_ON(cache->reserved > 0); 128 137 129 138 /* 130 139 * A block_group shouldn't be on the discard_list anymore. ··· 2553 2544 int ret; 2554 2545 bool dirty_bg_running; 2555 2546 2547 + /* 2548 + * This can only happen when we are doing read-only scrub on read-only 2549 + * mount. 2550 + * In that case we should not start a new transaction on read-only fs. 2551 + * Thus here we skip all chunk allocations. 2552 + */ 2553 + if (sb_rdonly(fs_info->sb)) { 2554 + mutex_lock(&fs_info->ro_block_group_mutex); 2555 + ret = inc_block_group_ro(cache, 0); 2556 + mutex_unlock(&fs_info->ro_block_group_mutex); 2557 + return ret; 2558 + } 2559 + 2556 2560 do { 2557 2561 trans = btrfs_join_transaction(root); 2558 2562 if (IS_ERR(trans)) ··· 3996 3974 * important and indicates a real bug if this happens. 3997 3975 */ 3998 3976 if (WARN_ON(space_info->bytes_pinned > 0 || 3999 - space_info->bytes_reserved > 0 || 4000 3977 space_info->bytes_may_use > 0)) 4001 3978 btrfs_dump_space_info(info, space_info, 0, 0); 3979 + 3980 + /* 3981 + * If there was a failure to cleanup a log tree, very likely due 3982 + * to an IO failure on a writeback attempt of one or more of its 3983 + * extent buffers, we could not do proper (and cheap) unaccounting 3984 + * of their reserved space, so don't warn on bytes_reserved > 0 in 3985 + * that case. 
3986 + */ 3987 + if (!(space_info->flags & BTRFS_BLOCK_GROUP_METADATA) || 3988 + !BTRFS_FS_LOG_CLEANUP_ERROR(info)) { 3989 + if (WARN_ON(space_info->bytes_reserved > 0)) 3990 + btrfs_dump_space_info(info, space_info, 0, 0); 3991 + } 3992 + 4002 3993 WARN_ON(space_info->reclaim_size > 0); 4003 3994 list_del(&space_info->list); 4004 3995 btrfs_sysfs_remove_space_info(space_info);
+6
fs/btrfs/ctree.h
··· 145 145 BTRFS_FS_STATE_DUMMY_FS_INFO, 146 146 147 147 BTRFS_FS_STATE_NO_CSUMS, 148 + 149 + /* Indicates there was an error cleaning up a log tree. */ 150 + BTRFS_FS_STATE_LOG_CLEANUP_ERROR, 148 151 }; 149 152 150 153 #define BTRFS_BACKREF_REV_MAX 256 ··· 3596 3593 3597 3594 #define BTRFS_FS_ERROR(fs_info) (unlikely(test_bit(BTRFS_FS_STATE_ERROR, \ 3598 3595 &(fs_info)->fs_state))) 3596 + #define BTRFS_FS_LOG_CLEANUP_ERROR(fs_info) \ 3597 + (unlikely(test_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR, \ 3598 + &(fs_info)->fs_state))) 3599 3599 3600 3600 __printf(5, 6) 3601 3601 __cold
+2 -5
fs/btrfs/ioctl.c
··· 805 805 goto fail; 806 806 } 807 807 808 - spin_lock(&fs_info->trans_lock); 809 - list_add(&pending_snapshot->list, 810 - &trans->transaction->pending_snapshots); 811 - spin_unlock(&fs_info->trans_lock); 808 + trans->pending_snapshot = pending_snapshot; 812 809 813 810 ret = btrfs_commit_transaction(trans); 814 811 if (ret) ··· 3351 3354 struct block_device *bdev = NULL; 3352 3355 fmode_t mode; 3353 3356 int ret; 3354 - bool cancel; 3357 + bool cancel = false; 3355 3358 3356 3359 if (!capable(CAP_SYS_ADMIN)) 3357 3360 return -EPERM;
+19 -2
fs/btrfs/qgroup.c
··· 1185 1185 struct btrfs_trans_handle *trans = NULL; 1186 1186 int ret = 0; 1187 1187 1188 + /* 1189 + * We need to have subvol_sem write locked, to prevent races between 1190 + * concurrent tasks trying to disable quotas, because we will unlock 1191 + * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes. 1192 + */ 1193 + lockdep_assert_held_write(&fs_info->subvol_sem); 1194 + 1188 1195 mutex_lock(&fs_info->qgroup_ioctl_lock); 1189 1196 if (!fs_info->quota_root) 1190 1197 goto out; 1198 + 1199 + /* 1200 + * Request qgroup rescan worker to complete and wait for it. This wait 1201 + * must be done before transaction start for quota disable since it may 1202 + * deadlock with transaction by the qgroup rescan worker. 1203 + */ 1204 + clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1205 + btrfs_qgroup_wait_for_completion(fs_info, false); 1191 1206 mutex_unlock(&fs_info->qgroup_ioctl_lock); 1192 1207 1193 1208 /* ··· 1220 1205 if (IS_ERR(trans)) { 1221 1206 ret = PTR_ERR(trans); 1222 1207 trans = NULL; 1208 + set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1223 1209 goto out; 1224 1210 } 1225 1211 1226 1212 if (!fs_info->quota_root) 1227 1213 goto out; 1228 1214 1229 - clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1230 - btrfs_qgroup_wait_for_completion(fs_info, false); 1231 1215 spin_lock(&fs_info->qgroup_lock); 1232 1216 quota_root = fs_info->quota_root; 1233 1217 fs_info->quota_root = NULL; ··· 3397 3383 btrfs_warn(fs_info, 3398 3384 "qgroup rescan init failed, qgroup is not enabled"); 3399 3385 ret = -EINVAL; 3386 + } else if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) { 3387 + /* Quota disable is in progress */ 3388 + ret = -EBUSY; 3400 3389 } 3401 3390 3402 3391 if (ret) {
+24
fs/btrfs/transaction.c
··· 2000 2000 btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1); 2001 2001 } 2002 2002 2003 + /* 2004 + * Add a pending snapshot associated with the given transaction handle to the 2005 + * respective handle. This must be called after the transaction commit started 2006 + * and while holding fs_info->trans_lock. 2007 + * This serves to guarantee a caller of btrfs_commit_transaction() that it can 2008 + * safely free the pending snapshot pointer in case btrfs_commit_transaction() 2009 + * returns an error. 2010 + */ 2011 + static void add_pending_snapshot(struct btrfs_trans_handle *trans) 2012 + { 2013 + struct btrfs_transaction *cur_trans = trans->transaction; 2014 + 2015 + if (!trans->pending_snapshot) 2016 + return; 2017 + 2018 + lockdep_assert_held(&trans->fs_info->trans_lock); 2019 + ASSERT(cur_trans->state >= TRANS_STATE_COMMIT_START); 2020 + 2021 + list_add(&trans->pending_snapshot->list, &cur_trans->pending_snapshots); 2022 + } 2023 + 2003 2024 int btrfs_commit_transaction(struct btrfs_trans_handle *trans) 2004 2025 { 2005 2026 struct btrfs_fs_info *fs_info = trans->fs_info; ··· 2093 2072 spin_lock(&fs_info->trans_lock); 2094 2073 if (cur_trans->state >= TRANS_STATE_COMMIT_START) { 2095 2074 enum btrfs_trans_state want_state = TRANS_STATE_COMPLETED; 2075 + 2076 + add_pending_snapshot(trans); 2096 2077 2097 2078 spin_unlock(&fs_info->trans_lock); 2098 2079 refcount_inc(&cur_trans->use_count); ··· 2186 2163 * COMMIT_DOING so make sure to wait for num_writers to == 1 again. 2187 2164 */ 2188 2165 spin_lock(&fs_info->trans_lock); 2166 + add_pending_snapshot(trans); 2189 2167 cur_trans->state = TRANS_STATE_COMMIT_DOING; 2190 2168 spin_unlock(&fs_info->trans_lock); 2191 2169 wait_event(cur_trans->writer_wait,
+2
fs/btrfs/transaction.h
··· 123 123 struct btrfs_transaction *transaction; 124 124 struct btrfs_block_rsv *block_rsv; 125 125 struct btrfs_block_rsv *orig_rsv; 126 + /* Set by a task that wants to create a snapshot. */ 127 + struct btrfs_pending_snapshot *pending_snapshot; 126 128 refcount_t use_count; 127 129 unsigned int type; 128 130 /*
+15
fs/btrfs/tree-checker.c
··· 965 965 struct btrfs_key *key, int slot) 966 966 { 967 967 struct btrfs_dev_item *ditem; 968 + const u32 item_size = btrfs_item_size(leaf, slot); 968 969 969 970 if (unlikely(key->objectid != BTRFS_DEV_ITEMS_OBJECTID)) { 970 971 dev_item_err(leaf, slot, ··· 973 972 key->objectid, BTRFS_DEV_ITEMS_OBJECTID); 974 973 return -EUCLEAN; 975 974 } 975 + 976 + if (unlikely(item_size != sizeof(*ditem))) { 977 + dev_item_err(leaf, slot, "invalid item size: has %u expect %zu", 978 + item_size, sizeof(*ditem)); 979 + return -EUCLEAN; 980 + } 981 + 976 982 ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item); 977 983 if (unlikely(btrfs_device_id(leaf, ditem) != key->offset)) { 978 984 dev_item_err(leaf, slot, ··· 1015 1007 struct btrfs_inode_item *iitem; 1016 1008 u64 super_gen = btrfs_super_generation(fs_info->super_copy); 1017 1009 u32 valid_mask = (S_IFMT | S_ISUID | S_ISGID | S_ISVTX | 0777); 1010 + const u32 item_size = btrfs_item_size(leaf, slot); 1018 1011 u32 mode; 1019 1012 int ret; 1020 1013 u32 flags; ··· 1024 1015 ret = check_inode_key(leaf, key, slot); 1025 1016 if (unlikely(ret < 0)) 1026 1017 return ret; 1018 + 1019 + if (unlikely(item_size != sizeof(*iitem))) { 1020 + generic_err(leaf, slot, "invalid item size: has %u expect %zu", 1021 + item_size, sizeof(*iitem)); 1022 + return -EUCLEAN; 1023 + } 1027 1024 1028 1025 iitem = btrfs_item_ptr(leaf, slot, struct btrfs_inode_item); 1029 1026
+23
fs/btrfs/tree-log.c
··· 3414 3414 if (log->node) { 3415 3415 ret = walk_log_tree(trans, log, &wc); 3416 3416 if (ret) { 3417 + /* 3418 + * We weren't able to traverse the entire log tree, the 3419 + * typical scenario is getting an -EIO when reading an 3420 + * extent buffer of the tree, due to a previous writeback 3421 + * failure of it. 3422 + */ 3423 + set_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR, 3424 + &log->fs_info->fs_state); 3425 + 3426 + /* 3427 + * Some extent buffers of the log tree may still be dirty 3428 + * and not yet written back to storage, because we may 3429 + * have updates to a log tree without syncing a log tree, 3430 + * such as during rename and link operations. So flush 3431 + * them out and wait for their writeback to complete, so 3432 + * that we properly cleanup their state and pages. 3433 + */ 3434 + btrfs_write_marked_extents(log->fs_info, 3435 + &log->dirty_log_pages, 3436 + EXTENT_DIRTY | EXTENT_NEW); 3437 + btrfs_wait_tree_log_extents(log, 3438 + EXTENT_DIRTY | EXTENT_NEW); 3439 + 3417 3440 if (trans) 3418 3441 btrfs_abort_transaction(trans, ret); 3419 3442 else
+59
fs/cachefiles/io.c
··· 192 192 } 193 193 194 194 /* 195 + * Query the occupancy of the cache in a region, returning where the next chunk 196 + * of data starts and how long it is. 197 + */ 198 + static int cachefiles_query_occupancy(struct netfs_cache_resources *cres, 199 + loff_t start, size_t len, size_t granularity, 200 + loff_t *_data_start, size_t *_data_len) 201 + { 202 + struct cachefiles_object *object; 203 + struct file *file; 204 + loff_t off, off2; 205 + 206 + *_data_start = -1; 207 + *_data_len = 0; 208 + 209 + if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ)) 210 + return -ENOBUFS; 211 + 212 + object = cachefiles_cres_object(cres); 213 + file = cachefiles_cres_file(cres); 214 + granularity = max_t(size_t, object->volume->cache->bsize, granularity); 215 + 216 + _enter("%pD,%li,%llx,%zx/%llx", 217 + file, file_inode(file)->i_ino, start, len, 218 + i_size_read(file_inode(file))); 219 + 220 + off = cachefiles_inject_read_error(); 221 + if (off == 0) 222 + off = vfs_llseek(file, start, SEEK_DATA); 223 + if (off == -ENXIO) 224 + return -ENODATA; /* Beyond EOF */ 225 + if (off < 0 && off >= (loff_t)-MAX_ERRNO) 226 + return -ENOBUFS; /* Error. */ 227 + if (round_up(off, granularity) >= start + len) 228 + return -ENODATA; /* No data in range */ 229 + 230 + off2 = cachefiles_inject_read_error(); 231 + if (off2 == 0) 232 + off2 = vfs_llseek(file, off, SEEK_HOLE); 233 + if (off2 == -ENXIO) 234 + return -ENODATA; /* Beyond EOF */ 235 + if (off2 < 0 && off2 >= (loff_t)-MAX_ERRNO) 236 + return -ENOBUFS; /* Error. */ 237 + 238 + /* Round away partial blocks */ 239 + off = round_up(off, granularity); 240 + off2 = round_down(off2, granularity); 241 + if (off2 <= off) 242 + return -ENODATA; 243 + 244 + *_data_start = off; 245 + if (off2 > start + len) 246 + *_data_len = len; 247 + else 248 + *_data_len = off2 - off; 249 + return 0; 250 + } 251 + 252 + /* 195 253 * Handle completion of a write to the cache. 
196 254 */ 197 255 static void cachefiles_write_complete(struct kiocb *iocb, long ret) ··· 603 545 .write = cachefiles_write, 604 546 .prepare_read = cachefiles_prepare_read, 605 547 .prepare_write = cachefiles_prepare_write, 548 + .query_occupancy = cachefiles_query_occupancy, 606 549 }; 607 550 608 551 /*
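Editor's note: `cachefiles_query_occupancy()` above trims the raw `SEEK_DATA`/`SEEK_HOLE` offsets to whole cache blocks: the start of data rounds up to the granularity, the end rounds down, and if nothing whole remains it reports `-ENODATA`. That rounding step can be sketched on its own (power-of-two rounding helpers reimplemented in the spirit of the kernel's `round_up()`/`round_down()`):

```c
#include <assert.h>

/* Power-of-two rounding, as in the kernel's round_up()/round_down(). */
static long round_up_l(long x, long g)   { return (x + g - 1) & ~(g - 1); }
static long round_down_l(long x, long g) { return x & ~(g - 1); }

/*
 * Given the raw offsets from SEEK_DATA (data) and SEEK_HOLE (hole),
 * compute the whole-block data extent; -61 stands in for -ENODATA when
 * no complete block of data lies in between.
 */
static int trim_to_granularity(long data, long hole, long granularity,
			       long *start, long *len)
{
	long off = round_up_l(data, granularity);
	long off2 = round_down_l(hole, granularity);

	if (off2 <= off)
		return -61; /* no whole block of data in range */
	*start = off;
	*len = off2 - off;
	return 0;
}
```

For example, data starting at byte 100 and a hole at byte 9000 with 4096-byte blocks yields exactly one usable block at offset 4096.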
+16 -7
fs/cifs/connect.c
··· 162 162 mutex_unlock(&server->srv_mutex); 163 163 } 164 164 165 - /** 165 + /* 166 166 * Mark all sessions and tcons for reconnect. 167 167 * 168 168 * @server needs to be previously set to CifsNeedReconnect. ··· 1831 1831 int i; 1832 1832 1833 1833 for (i = 1; i < chan_count; i++) { 1834 - /* 1835 - * note: for now, we're okay accessing ses->chans 1836 - * without chan_lock. But when chans can go away, we'll 1837 - * need to introduce ref counting to make sure that chan 1838 - * is not freed from under us. 1839 - */ 1834 + spin_unlock(&ses->chan_lock); 1840 1835 cifs_put_tcp_session(ses->chans[i].server, 0); 1836 + spin_lock(&ses->chan_lock); 1841 1837 ses->chans[i].server = NULL; 1842 1838 } 1843 1839 } ··· 1975 1979 ctx->password = NULL; 1976 1980 goto out_key_put; 1977 1981 } 1982 + } 1983 + 1984 + ctx->workstation_name = kstrdup(ses->workstation_name, GFP_KERNEL); 1985 + if (!ctx->workstation_name) { 1986 + cifs_dbg(FYI, "Unable to allocate memory for workstation_name\n"); 1987 + rc = -ENOMEM; 1988 + kfree(ctx->username); 1989 + ctx->username = NULL; 1990 + kfree_sensitive(ctx->password); 1991 + ctx->password = NULL; 1992 + kfree(ctx->domainname); 1993 + ctx->domainname = NULL; 1994 + goto out_key_put; 1978 1995 } 1979 1996 1980 1997 out_key_put:
+83 -140
fs/cifs/file.c
··· 4269 4269 for (i = 0; i < rdata->nr_pages; i++) { 4270 4270 struct page *page = rdata->pages[i]; 4271 4271 4272 - lru_cache_add(page); 4273 - 4274 4272 if (rdata->result == 0 || 4275 4273 (rdata->result == -EAGAIN && got_bytes)) { 4276 4274 flush_dcache_page(page); ··· 4276 4278 } else 4277 4279 SetPageError(page); 4278 4280 4279 - unlock_page(page); 4280 - 4281 4281 if (rdata->result == 0 || 4282 4282 (rdata->result == -EAGAIN && got_bytes)) 4283 4283 cifs_readpage_to_fscache(rdata->mapping->host, page); 4284 + 4285 + unlock_page(page); 4284 4286 4285 4287 got_bytes -= min_t(unsigned int, PAGE_SIZE, got_bytes); 4286 4288 ··· 4338 4340 * fill them until the writes are flushed. 4339 4341 */ 4340 4342 zero_user(page, 0, PAGE_SIZE); 4341 - lru_cache_add(page); 4342 4343 flush_dcache_page(page); 4343 4344 SetPageUptodate(page); 4344 4345 unlock_page(page); ··· 4347 4350 continue; 4348 4351 } else { 4349 4352 /* no need to hold page hostage */ 4350 - lru_cache_add(page); 4351 4353 unlock_page(page); 4352 4354 put_page(page); 4353 4355 rdata->pages[i] = NULL; ··· 4389 4393 return readpages_fill_pages(server, rdata, iter, iter->count); 4390 4394 } 4391 4395 4392 - static int 4393 - readpages_get_pages(struct address_space *mapping, struct list_head *page_list, 4394 - unsigned int rsize, struct list_head *tmplist, 4395 - unsigned int *nr_pages, loff_t *offset, unsigned int *bytes) 4396 - { 4397 - struct page *page, *tpage; 4398 - unsigned int expected_index; 4399 - int rc; 4400 - gfp_t gfp = readahead_gfp_mask(mapping); 4401 - 4402 - INIT_LIST_HEAD(tmplist); 4403 - 4404 - page = lru_to_page(page_list); 4405 - 4406 - /* 4407 - * Lock the page and put it in the cache. Since no one else 4408 - * should have access to this page, we're safe to simply set 4409 - * PG_locked without checking it first. 
4410 - */ 4411 - __SetPageLocked(page); 4412 - rc = add_to_page_cache_locked(page, mapping, 4413 - page->index, gfp); 4414 - 4415 - /* give up if we can't stick it in the cache */ 4416 - if (rc) { 4417 - __ClearPageLocked(page); 4418 - return rc; 4419 - } 4420 - 4421 - /* move first page to the tmplist */ 4422 - *offset = (loff_t)page->index << PAGE_SHIFT; 4423 - *bytes = PAGE_SIZE; 4424 - *nr_pages = 1; 4425 - list_move_tail(&page->lru, tmplist); 4426 - 4427 - /* now try and add more pages onto the request */ 4428 - expected_index = page->index + 1; 4429 - list_for_each_entry_safe_reverse(page, tpage, page_list, lru) { 4430 - /* discontinuity ? */ 4431 - if (page->index != expected_index) 4432 - break; 4433 - 4434 - /* would this page push the read over the rsize? */ 4435 - if (*bytes + PAGE_SIZE > rsize) 4436 - break; 4437 - 4438 - __SetPageLocked(page); 4439 - rc = add_to_page_cache_locked(page, mapping, page->index, gfp); 4440 - if (rc) { 4441 - __ClearPageLocked(page); 4442 - break; 4443 - } 4444 - list_move_tail(&page->lru, tmplist); 4445 - (*bytes) += PAGE_SIZE; 4446 - expected_index++; 4447 - (*nr_pages)++; 4448 - } 4449 - return rc; 4450 - } 4451 - 4452 - static int cifs_readpages(struct file *file, struct address_space *mapping, 4453 - struct list_head *page_list, unsigned num_pages) 4396 + static void cifs_readahead(struct readahead_control *ractl) 4454 4397 { 4455 4398 int rc; 4456 - int err = 0; 4457 - struct list_head tmplist; 4458 - struct cifsFileInfo *open_file = file->private_data; 4459 - struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 4399 + struct cifsFileInfo *open_file = ractl->file->private_data; 4400 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file); 4460 4401 struct TCP_Server_Info *server; 4461 4402 pid_t pid; 4462 - unsigned int xid; 4403 + unsigned int xid, nr_pages, last_batch_size = 0, cache_nr_pages = 0; 4404 + pgoff_t next_cached = ULONG_MAX; 4405 + bool caching = 
fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) && 4406 + cifs_inode_cookie(ractl->mapping->host)->cache_priv; 4407 + bool check_cache = caching; 4463 4408 4464 4409 xid = get_xid(); 4465 - /* 4466 - * Reads as many pages as possible from fscache. Returns -ENOBUFS 4467 - * immediately if the cookie is negative 4468 - * 4469 - * After this point, every page in the list might have PG_fscache set, 4470 - * so we will need to clean that up off of every page we don't use. 4471 - */ 4472 - rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list, 4473 - &num_pages); 4474 - if (rc == 0) { 4475 - free_xid(xid); 4476 - return rc; 4477 - } 4478 4410 4479 4411 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) 4480 4412 pid = open_file->pid; ··· 4413 4489 server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); 4414 4490 4415 4491 cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n", 4416 - __func__, file, mapping, num_pages); 4492 + __func__, ractl->file, ractl->mapping, readahead_count(ractl)); 4417 4493 4418 4494 /* 4419 - * Start with the page at end of list and move it to private 4420 - * list. Do the same with any following pages until we hit 4421 - * the rsize limit, hit an index discontinuity, or run out of 4422 - * pages. Issue the async read and then start the loop again 4423 - * until the list is empty. 4424 - * 4425 - * Note that list order is important. The page_list is in 4426 - * the order of declining indexes. When we put the pages in 4427 - * the rdata->pages, then we want them in increasing order. 4495 + * Chop the readahead request up into rsize-sized read requests. 
4428 4496 */ 4429 - while (!list_empty(page_list) && !err) { 4430 - unsigned int i, nr_pages, bytes, rsize; 4431 - loff_t offset; 4432 - struct page *page, *tpage; 4497 + while ((nr_pages = readahead_count(ractl) - last_batch_size)) { 4498 + unsigned int i, got, rsize; 4499 + struct page *page; 4433 4500 struct cifs_readdata *rdata; 4434 4501 struct cifs_credits credits_on_stack; 4435 4502 struct cifs_credits *credits = &credits_on_stack; 4503 + pgoff_t index = readahead_index(ractl) + last_batch_size; 4504 + 4505 + /* 4506 + * Find out if we have anything cached in the range of 4507 + * interest, and if so, where the next chunk of cached data is. 4508 + */ 4509 + if (caching) { 4510 + if (check_cache) { 4511 + rc = cifs_fscache_query_occupancy( 4512 + ractl->mapping->host, index, nr_pages, 4513 + &next_cached, &cache_nr_pages); 4514 + if (rc < 0) 4515 + caching = false; 4516 + check_cache = false; 4517 + } 4518 + 4519 + if (index == next_cached) { 4520 + /* 4521 + * TODO: Send a whole batch of pages to be read 4522 + * by the cache. 4523 + */ 4524 + page = readahead_page(ractl); 4525 + last_batch_size = 1 << thp_order(page); 4526 + if (cifs_readpage_from_fscache(ractl->mapping->host, 4527 + page) < 0) { 4528 + /* 4529 + * TODO: Deal with cache read failure 4530 + * here, but for the moment, delegate 4531 + * that to readpage. 
4532 + */ 4533 + caching = false; 4534 + } 4535 + unlock_page(page); 4536 + next_cached++; 4537 + cache_nr_pages--; 4538 + if (cache_nr_pages == 0) 4539 + check_cache = true; 4540 + continue; 4541 + } 4542 + } 4436 4543 4437 4544 if (open_file->invalidHandle) { 4438 4545 rc = cifs_reopen_file(open_file, true); 4439 - if (rc == -EAGAIN) 4440 - continue; 4441 - else if (rc) 4546 + if (rc) { 4547 + if (rc == -EAGAIN) 4548 + continue; 4442 4549 break; 4550 + } 4443 4551 } 4444 4552 4445 4553 rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, 4446 4554 &rsize, credits); 4447 4555 if (rc) 4448 4556 break; 4557 + nr_pages = min_t(size_t, rsize / PAGE_SIZE, readahead_count(ractl)); 4558 + nr_pages = min_t(size_t, nr_pages, next_cached - index); 4449 4559 4450 4560 /* 4451 4561 * Give up immediately if rsize is too small to read an entire ··· 4487 4529 * reach this point however since we set ra_pages to 0 when the 4488 4530 * rsize is smaller than a cache page. 4489 4531 */ 4490 - if (unlikely(rsize < PAGE_SIZE)) { 4491 - add_credits_and_wake_if(server, credits, 0); 4492 - free_xid(xid); 4493 - return 0; 4494 - } 4495 - 4496 - nr_pages = 0; 4497 - err = readpages_get_pages(mapping, page_list, rsize, &tmplist, 4498 - &nr_pages, &offset, &bytes); 4499 - if (!nr_pages) { 4532 + if (unlikely(!nr_pages)) { 4500 4533 add_credits_and_wake_if(server, credits, 0); 4501 4534 break; 4502 4535 } ··· 4495 4546 rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete); 4496 4547 if (!rdata) { 4497 4548 /* best to give up if we're out of mem */ 4498 - list_for_each_entry_safe(page, tpage, &tmplist, lru) { 4499 - list_del(&page->lru); 4500 - lru_cache_add(page); 4501 - unlock_page(page); 4502 - put_page(page); 4503 - } 4504 - rc = -ENOMEM; 4505 4549 add_credits_and_wake_if(server, credits, 0); 4506 4550 break; 4507 4551 } 4508 4552 4509 - rdata->cfile = cifsFileInfo_get(open_file); 4510 - rdata->server = server; 4511 - rdata->mapping = mapping; 4512 - rdata->offset = offset; 
4513 - rdata->bytes = bytes; 4514 - rdata->pid = pid; 4515 - rdata->pagesz = PAGE_SIZE; 4516 - rdata->tailsz = PAGE_SIZE; 4517 - rdata->read_into_pages = cifs_readpages_read_into_pages; 4518 - rdata->copy_into_pages = cifs_readpages_copy_into_pages; 4519 - rdata->credits = credits_on_stack; 4520 - 4521 - list_for_each_entry_safe(page, tpage, &tmplist, lru) { 4522 - list_del(&page->lru); 4523 - rdata->pages[rdata->nr_pages++] = page; 4553 + got = __readahead_batch(ractl, rdata->pages, nr_pages); 4554 + if (got != nr_pages) { 4555 + pr_warn("__readahead_batch() returned %u/%u\n", 4556 + got, nr_pages); 4557 + nr_pages = got; 4524 4558 } 4525 4559 4526 - rc = adjust_credits(server, &rdata->credits, rdata->bytes); 4560 + rdata->nr_pages = nr_pages; 4561 + rdata->bytes = readahead_batch_length(ractl); 4562 + rdata->cfile = cifsFileInfo_get(open_file); 4563 + rdata->server = server; 4564 + rdata->mapping = ractl->mapping; 4565 + rdata->offset = readahead_pos(ractl); 4566 + rdata->pid = pid; 4567 + rdata->pagesz = PAGE_SIZE; 4568 + rdata->tailsz = PAGE_SIZE; 4569 + rdata->read_into_pages = cifs_readpages_read_into_pages; 4570 + rdata->copy_into_pages = cifs_readpages_copy_into_pages; 4571 + rdata->credits = credits_on_stack; 4527 4572 4573 + rc = adjust_credits(server, &rdata->credits, rdata->bytes); 4528 4574 if (!rc) { 4529 4575 if (rdata->cfile->invalidHandle) 4530 4576 rc = -EAGAIN; ··· 4531 4587 add_credits_and_wake_if(server, &rdata->credits, 0); 4532 4588 for (i = 0; i < rdata->nr_pages; i++) { 4533 4589 page = rdata->pages[i]; 4534 - lru_cache_add(page); 4535 4590 unlock_page(page); 4536 4591 put_page(page); 4537 4592 } ··· 4540 4597 } 4541 4598 4542 4599 kref_put(&rdata->refcount, cifs_readdata_release); 4600 + last_batch_size = nr_pages; 4543 4601 } 4544 4602 4545 4603 free_xid(xid); 4546 - return rc; 4547 4604 } 4548 4605 4549 4606 /* ··· 4867 4924 * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests 4868 4925 * so this 
method should never be called. 4869 4926 * 4870 - * Direct IO is not yet supported in the cached mode. 4927 + * Direct IO is not yet supported in the cached mode. 4871 4928 */ 4872 4929 static ssize_t 4873 4930 cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter) ··· 4949 5006 4950 5007 const struct address_space_operations cifs_addr_ops = { 4951 5008 .readpage = cifs_readpage, 4952 - .readpages = cifs_readpages, 5009 + .readahead = cifs_readahead, 4953 5010 .writepage = cifs_writepage, 4954 5011 .writepages = cifs_writepages, 4955 5012 .write_begin = cifs_write_begin,
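Editor's note: the new `cifs_readahead()` above clamps each batch to three limits at once: what the negotiated rsize allows, what remains in the readahead window, and the distance to the next page already held by the cache. The sizing arithmetic in isolation (a hypothetical helper, not a function from the patch):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/*
 * Pages to issue in one read: bounded by rsize, by the pages remaining
 * in the readahead window, and by the first already-cached page.
 */
static size_t batch_pages(size_t rsize, size_t remaining,
			  unsigned long index, unsigned long next_cached)
{
	size_t nr = min_sz(rsize / PAGE_SIZE, remaining);

	/* Never read past the first page the cache already holds. */
	return min_sz(nr, next_cached - index);
}
```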
+111 -21
fs/cifs/fscache.c
··· 134 134 } 135 135 } 136 136 137 + static inline void fscache_end_operation(struct netfs_cache_resources *cres) 138 + { 139 + const struct netfs_cache_ops *ops = fscache_operation_valid(cres); 140 + 141 + if (ops) 142 + ops->end_operation(cres); 143 + } 144 + 145 + /* 146 + * Fallback page reading interface. 147 + */ 148 + static int fscache_fallback_read_page(struct inode *inode, struct page *page) 149 + { 150 + struct netfs_cache_resources cres; 151 + struct fscache_cookie *cookie = cifs_inode_cookie(inode); 152 + struct iov_iter iter; 153 + struct bio_vec bvec[1]; 154 + int ret; 155 + 156 + memset(&cres, 0, sizeof(cres)); 157 + bvec[0].bv_page = page; 158 + bvec[0].bv_offset = 0; 159 + bvec[0].bv_len = PAGE_SIZE; 160 + iov_iter_bvec(&iter, READ, bvec, ARRAY_SIZE(bvec), PAGE_SIZE); 161 + 162 + ret = fscache_begin_read_operation(&cres, cookie); 163 + if (ret < 0) 164 + return ret; 165 + 166 + ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL, 167 + NULL, NULL); 168 + fscache_end_operation(&cres); 169 + return ret; 170 + } 171 + 172 + /* 173 + * Fallback page writing interface. 
174 + */ 175 + static int fscache_fallback_write_page(struct inode *inode, struct page *page, 176 + bool no_space_allocated_yet) 177 + { 178 + struct netfs_cache_resources cres; 179 + struct fscache_cookie *cookie = cifs_inode_cookie(inode); 180 + struct iov_iter iter; 181 + struct bio_vec bvec[1]; 182 + loff_t start = page_offset(page); 183 + size_t len = PAGE_SIZE; 184 + int ret; 185 + 186 + memset(&cres, 0, sizeof(cres)); 187 + bvec[0].bv_page = page; 188 + bvec[0].bv_offset = 0; 189 + bvec[0].bv_len = PAGE_SIZE; 190 + iov_iter_bvec(&iter, WRITE, bvec, ARRAY_SIZE(bvec), PAGE_SIZE); 191 + 192 + ret = fscache_begin_write_operation(&cres, cookie); 193 + if (ret < 0) 194 + return ret; 195 + 196 + ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode), 197 + no_space_allocated_yet); 198 + if (ret == 0) 199 + ret = fscache_write(&cres, page_offset(page), &iter, NULL, NULL); 200 + fscache_end_operation(&cres); 201 + return ret; 202 + } 203 + 137 204 /* 138 205 * Retrieve a page from FS-Cache 139 206 */ 140 207 int __cifs_readpage_from_fscache(struct inode *inode, struct page *page) 141 208 { 142 - cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n", 143 - __func__, CIFS_I(inode)->fscache, page, inode); 144 - return -ENOBUFS; // Needs conversion to using netfslib 145 - } 209 + int ret; 146 210 147 - /* 148 - * Retrieve a set of pages from FS-Cache 149 - */ 150 - int __cifs_readpages_from_fscache(struct inode *inode, 151 - struct address_space *mapping, 152 - struct list_head *pages, 153 - unsigned *nr_pages) 154 - { 155 - cifs_dbg(FYI, "%s: (0x%p/%u/0x%p)\n", 156 - __func__, CIFS_I(inode)->fscache, *nr_pages, inode); 157 - return -ENOBUFS; // Needs conversion to using netfslib 211 + cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n", 212 + __func__, cifs_inode_cookie(inode), page, inode); 213 + 214 + ret = fscache_fallback_read_page(inode, page); 215 + if (ret < 0) 216 + return ret; 217 + 218 + /* Read completed synchronously */ 219 + SetPageUptodate(page); 220 + return 
0; 158 221 } 159 222 160 223 void __cifs_readpage_to_fscache(struct inode *inode, struct page *page) 161 224 { 162 - struct cifsInodeInfo *cifsi = CIFS_I(inode); 163 - 164 - WARN_ON(!cifsi->fscache); 165 - 166 225 cifs_dbg(FYI, "%s: (fsc: %p, p: %p, i: %p)\n", 167 - __func__, cifsi->fscache, page, inode); 226 + __func__, cifs_inode_cookie(inode), page, inode); 168 227 169 - // Needs conversion to using netfslib 228 + fscache_fallback_write_page(inode, page, true); 229 + } 230 + 231 + /* 232 + * Query the cache occupancy. 233 + */ 234 + int __cifs_fscache_query_occupancy(struct inode *inode, 235 + pgoff_t first, unsigned int nr_pages, 236 + pgoff_t *_data_first, 237 + unsigned int *_data_nr_pages) 238 + { 239 + struct netfs_cache_resources cres; 240 + struct fscache_cookie *cookie = cifs_inode_cookie(inode); 241 + loff_t start, data_start; 242 + size_t len, data_len; 243 + int ret; 244 + 245 + ret = fscache_begin_read_operation(&cres, cookie); 246 + if (ret < 0) 247 + return ret; 248 + 249 + start = first * PAGE_SIZE; 250 + len = nr_pages * PAGE_SIZE; 251 + ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE, 252 + &data_start, &data_len); 253 + if (ret == 0) { 254 + *_data_first = data_start / PAGE_SIZE; 255 + *_data_nr_pages = len / PAGE_SIZE; 256 + } 257 + 258 + fscache_end_operation(&cres); 259 + return ret; 170 260 }
+50 -31
fs/cifs/fscache.h
··· 9 9 #ifndef _CIFS_FSCACHE_H 10 10 #define _CIFS_FSCACHE_H 11 11 12 + #include <linux/swap.h> 12 13 #include <linux/fscache.h> 13 14 14 15 #include "cifsglob.h" ··· 59 58 } 60 59 61 60 62 - extern int cifs_fscache_release_page(struct page *page, gfp_t gfp); 63 - extern int __cifs_readpage_from_fscache(struct inode *, struct page *); 64 - extern int __cifs_readpages_from_fscache(struct inode *, 65 - struct address_space *, 66 - struct list_head *, 67 - unsigned *); 68 - extern void __cifs_readpage_to_fscache(struct inode *, struct page *); 69 - 70 61 static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) 71 62 { 72 63 return CIFS_I(inode)->fscache; ··· 73 80 i_size_read(inode), flags); 74 81 } 75 82 83 + extern int __cifs_fscache_query_occupancy(struct inode *inode, 84 + pgoff_t first, unsigned int nr_pages, 85 + pgoff_t *_data_first, 86 + unsigned int *_data_nr_pages); 87 + 88 + static inline int cifs_fscache_query_occupancy(struct inode *inode, 89 + pgoff_t first, unsigned int nr_pages, 90 + pgoff_t *_data_first, 91 + unsigned int *_data_nr_pages) 92 + { 93 + if (!cifs_inode_cookie(inode)) 94 + return -ENOBUFS; 95 + return __cifs_fscache_query_occupancy(inode, first, nr_pages, 96 + _data_first, _data_nr_pages); 97 + } 98 + 99 + extern int __cifs_readpage_from_fscache(struct inode *pinode, struct page *ppage); 100 + extern void __cifs_readpage_to_fscache(struct inode *pinode, struct page *ppage); 101 + 102 + 76 103 static inline int cifs_readpage_from_fscache(struct inode *inode, 77 104 struct page *page) 78 105 { 79 - if (CIFS_I(inode)->fscache) 106 + if (cifs_inode_cookie(inode)) 80 107 return __cifs_readpage_from_fscache(inode, page); 81 - 82 - return -ENOBUFS; 83 - } 84 - 85 - static inline int cifs_readpages_from_fscache(struct inode *inode, 86 - struct address_space *mapping, 87 - struct list_head *pages, 88 - unsigned *nr_pages) 89 - { 90 - if (CIFS_I(inode)->fscache) 91 - return __cifs_readpages_from_fscache(inode, mapping, pages, 92 
- nr_pages); 93 108 return -ENOBUFS; 94 109 } 95 110 96 111 static inline void cifs_readpage_to_fscache(struct inode *inode, 97 112 struct page *page) 98 113 { 99 - if (PageFsCache(page)) 114 + if (cifs_inode_cookie(inode)) 100 115 __cifs_readpage_to_fscache(inode, page); 116 + } 117 + 118 + static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp) 119 + { 120 + if (PageFsCache(page)) { 121 + if (current_is_kswapd() || !(gfp & __GFP_FS)) 122 + return false; 123 + wait_on_page_fscache(page); 124 + fscache_note_page_release(cifs_inode_cookie(page->mapping->host)); 125 + } 126 + return true; 101 127 } 102 128 103 129 #else /* CONFIG_CIFS_FSCACHE */ ··· 135 123 static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; } 136 124 static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {} 137 125 126 + static inline int cifs_fscache_query_occupancy(struct inode *inode, 127 + pgoff_t first, unsigned int nr_pages, 128 + pgoff_t *_data_first, 129 + unsigned int *_data_nr_pages) 130 + { 131 + *_data_first = ULONG_MAX; 132 + *_data_nr_pages = 0; 133 + return -ENOBUFS; 134 + } 135 + 138 136 static inline int 139 137 cifs_readpage_from_fscache(struct inode *inode, struct page *page) 140 138 { 141 139 return -ENOBUFS; 142 140 } 143 141 144 - static inline int cifs_readpages_from_fscache(struct inode *inode, 145 - struct address_space *mapping, 146 - struct list_head *pages, 147 - unsigned *nr_pages) 148 - { 149 - return -ENOBUFS; 150 - } 142 + static inline 143 + void cifs_readpage_to_fscache(struct inode *inode, struct page *page) {} 151 144 152 - static inline void cifs_readpage_to_fscache(struct inode *inode, 153 - struct page *page) {} 145 + static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp) 146 + { 147 + return true; /* May release page */ 148 + } 154 149 155 150 #endif /* CONFIG_CIFS_FSCACHE */ 156 151
+4 -4
fs/cifs/inode.c
··· 83 83 static void 84 84 cifs_revalidate_cache(struct inode *inode, struct cifs_fattr *fattr) 85 85 { 86 + struct cifs_fscache_inode_coherency_data cd; 86 87 struct cifsInodeInfo *cifs_i = CIFS_I(inode); 87 88 88 89 cifs_dbg(FYI, "%s: revalidating inode %llu\n", ··· 114 113 cifs_dbg(FYI, "%s: invalidating inode %llu mapping\n", 115 114 __func__, cifs_i->uniqueid); 116 115 set_bit(CIFS_INO_INVALID_MAPPING, &cifs_i->flags); 116 + /* Invalidate fscache cookie */ 117 + cifs_fscache_fill_coherency(&cifs_i->vfs_inode, &cd); 118 + fscache_invalidate(cifs_inode_cookie(inode), &cd, i_size_read(inode), 0); 117 119 } 118 120 119 121 /* ··· 2265 2261 int 2266 2262 cifs_invalidate_mapping(struct inode *inode) 2267 2263 { 2268 - struct cifs_fscache_inode_coherency_data cd; 2269 - struct cifsInodeInfo *cifsi = CIFS_I(inode); 2270 2264 int rc = 0; 2271 2265 2272 2266 if (inode->i_mapping && inode->i_mapping->nrpages != 0) { ··· 2274 2272 __func__, inode); 2275 2273 } 2276 2274 2277 - cifs_fscache_fill_coherency(&cifsi->vfs_inode, &cd); 2278 - fscache_invalidate(cifs_inode_cookie(inode), &cd, i_size_read(inode), 0); 2279 2275 return rc; 2280 2276 } 2281 2277
+5 -1
fs/cifs/sess.c
··· 713 713 else 714 714 sz += sizeof(__le16); 715 715 716 - sz += sizeof(__le16) * strnlen(ses->workstation_name, CIFS_MAX_WORKSTATION_LEN); 716 + if (ses->workstation_name) 717 + sz += sizeof(__le16) * strnlen(ses->workstation_name, 718 + CIFS_MAX_WORKSTATION_LEN); 719 + else 720 + sz += sizeof(__le16); 717 721 718 722 return sz; 719 723 }
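Editor's note: the sess.c fix above sizes the UTF-16 workstation name without dereferencing a possibly-NULL pointer, falling back to a single `__le16` for the empty name. The accounting on its own (the length cap below is a made-up value for the sketch, not the real `CIFS_MAX_WORKSTATION_LEN`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_WORKSTATION_LEN 16 /* hypothetical cap for this sketch */

/*
 * Bytes the workstation name contributes to the NTLMSSP blob size when
 * encoded as UTF-16LE; a NULL name still reserves one 16-bit unit.
 */
static size_t workstation_name_size(const char *name)
{
	if (name)
		return sizeof(uint16_t) * strnlen(name, MAX_WORKSTATION_LEN);
	return sizeof(uint16_t);
}
```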
+4 -4
fs/erofs/data.c
··· 252 252 return ret; 253 253 254 254 iomap->offset = map.m_la; 255 - if (flags & IOMAP_DAX) { 255 + if (flags & IOMAP_DAX) 256 256 iomap->dax_dev = mdev.m_daxdev; 257 - iomap->offset += mdev.m_dax_part_off; 258 - } else { 257 + else 259 258 iomap->bdev = mdev.m_bdev; 260 - } 261 259 iomap->length = map.m_llen; 262 260 iomap->flags = 0; 263 261 iomap->private = NULL; ··· 282 284 } else { 283 285 iomap->type = IOMAP_MAPPED; 284 286 iomap->addr = mdev.m_pa; 287 + if (flags & IOMAP_DAX) 288 + iomap->addr += mdev.m_dax_part_off; 285 289 } 286 290 return 0; 287 291 }
+56 -57
fs/erofs/zdata.c
··· 810 810 return false; 811 811 } 812 812 813 - static void z_erofs_decompressqueue_work(struct work_struct *work); 814 - static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io, 815 - bool sync, int bios) 816 - { 817 - struct erofs_sb_info *const sbi = EROFS_SB(io->sb); 818 - 819 - /* wake up the caller thread for sync decompression */ 820 - if (sync) { 821 - unsigned long flags; 822 - 823 - spin_lock_irqsave(&io->u.wait.lock, flags); 824 - if (!atomic_add_return(bios, &io->pending_bios)) 825 - wake_up_locked(&io->u.wait); 826 - spin_unlock_irqrestore(&io->u.wait.lock, flags); 827 - return; 828 - } 829 - 830 - if (atomic_add_return(bios, &io->pending_bios)) 831 - return; 832 - /* Use workqueue and sync decompression for atomic contexts only */ 833 - if (in_atomic() || irqs_disabled()) { 834 - queue_work(z_erofs_workqueue, &io->u.work); 835 - /* enable sync decompression for readahead */ 836 - if (sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO) 837 - sbi->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON; 838 - return; 839 - } 840 - z_erofs_decompressqueue_work(&io->u.work); 841 - } 842 - 843 813 static bool z_erofs_page_is_invalidated(struct page *page) 844 814 { 845 815 return !page->mapping && !z_erofs_is_shortlived_page(page); 846 - } 847 - 848 - static void z_erofs_decompressqueue_endio(struct bio *bio) 849 - { 850 - tagptr1_t t = tagptr_init(tagptr1_t, bio->bi_private); 851 - struct z_erofs_decompressqueue *q = tagptr_unfold_ptr(t); 852 - blk_status_t err = bio->bi_status; 853 - struct bio_vec *bvec; 854 - struct bvec_iter_all iter_all; 855 - 856 - bio_for_each_segment_all(bvec, bio, iter_all) { 857 - struct page *page = bvec->bv_page; 858 - 859 - DBG_BUGON(PageUptodate(page)); 860 - DBG_BUGON(z_erofs_page_is_invalidated(page)); 861 - 862 - if (err) 863 - SetPageError(page); 864 - 865 - if (erofs_page_is_managed(EROFS_SB(q->sb), page)) { 866 - if (!err) 867 - SetPageUptodate(page); 868 - unlock_page(page); 869 - } 870 - } 871 - 
z_erofs_decompress_kickoff(q, tagptr_unfold_tags(t), -1); 872 - bio_put(bio); 873 816 } 874 817 875 818 static int z_erofs_decompress_pcluster(struct super_block *sb, ··· 1066 1123 kvfree(bgq); 1067 1124 } 1068 1125 1126 + static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io, 1127 + bool sync, int bios) 1128 + { 1129 + struct erofs_sb_info *const sbi = EROFS_SB(io->sb); 1130 + 1131 + /* wake up the caller thread for sync decompression */ 1132 + if (sync) { 1133 + unsigned long flags; 1134 + 1135 + spin_lock_irqsave(&io->u.wait.lock, flags); 1136 + if (!atomic_add_return(bios, &io->pending_bios)) 1137 + wake_up_locked(&io->u.wait); 1138 + spin_unlock_irqrestore(&io->u.wait.lock, flags); 1139 + return; 1140 + } 1141 + 1142 + if (atomic_add_return(bios, &io->pending_bios)) 1143 + return; 1144 + /* Use workqueue and sync decompression for atomic contexts only */ 1145 + if (in_atomic() || irqs_disabled()) { 1146 + queue_work(z_erofs_workqueue, &io->u.work); 1147 + /* enable sync decompression for readahead */ 1148 + if (sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO) 1149 + sbi->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON; 1150 + return; 1151 + } 1152 + z_erofs_decompressqueue_work(&io->u.work); 1153 + } 1154 + 1069 1155 static struct page *pickup_page_for_submission(struct z_erofs_pcluster *pcl, 1070 1156 unsigned int nr, 1071 1157 struct page **pagepool, ··· 1270 1298 WRITE_ONCE(*bypass_qtail, &pcl->next); 1271 1299 1272 1300 qtail[JQ_BYPASS] = &pcl->next; 1301 + } 1302 + 1303 + static void z_erofs_decompressqueue_endio(struct bio *bio) 1304 + { 1305 + tagptr1_t t = tagptr_init(tagptr1_t, bio->bi_private); 1306 + struct z_erofs_decompressqueue *q = tagptr_unfold_ptr(t); 1307 + blk_status_t err = bio->bi_status; 1308 + struct bio_vec *bvec; 1309 + struct bvec_iter_all iter_all; 1310 + 1311 + bio_for_each_segment_all(bvec, bio, iter_all) { 1312 + struct page *page = bvec->bv_page; 1313 + 1314 + DBG_BUGON(PageUptodate(page)); 1315 + 
DBG_BUGON(z_erofs_page_is_invalidated(page)); 1316 + 1317 + if (err) 1318 + SetPageError(page); 1319 + 1320 + if (erofs_page_is_managed(EROFS_SB(q->sb), page)) { 1321 + if (!err) 1322 + SetPageUptodate(page); 1323 + unlock_page(page); 1324 + } 1325 + } 1326 + z_erofs_decompress_kickoff(q, tagptr_unfold_tags(t), -1); 1327 + bio_put(bio); 1273 1328 } 1274 1329 1275 1330 static void z_erofs_submit_queue(struct super_block *sb,
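Editor's note: `z_erofs_decompress_kickoff()` in the hunk above relies on an atomic counter: submission primes `pending_bios` with the bio count, each completion adds -1, and whichever caller observes the counter reach zero wakes the synchronous waiter or queues the decompression work. A single-threaded userspace model of that accounting with C11 atomics (types and the `kicked` flag are stand-ins):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model of io->pending_bios accounting. */
struct decompressqueue {
	atomic_int pending_bios;
	bool kicked; /* stands in for wake_up()/queue_work() */
};

/*
 * atomic_add_return()-style bookkeeping: only the caller that brings
 * the counter to zero kicks off decompression.
 */
static void kickoff(struct decompressqueue *q, int bios)
{
	if (atomic_fetch_add(&q->pending_bios, bios) + bios == 0)
		q->kicked = true;
}
```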
+7
fs/erofs/zmap.c
··· 630 630 if (endoff >= m.clusterofs) { 631 631 m.headtype = m.type; 632 632 map->m_la = (m.lcn << lclusterbits) | m.clusterofs; 633 + /* 634 + * For ztailpacking files, in order to inline data more 635 + * effectively, special EOF lclusters are now supported 636 + * which can have three parts at most. 637 + */ 638 + if (ztailpacking && end > inode->i_size) 639 + end = inode->i_size; 633 640 break; 634 641 } 635 642 /* m.lcn should be >= 1 if endoff < m.clusterofs */
+7 -7
fs/ext4/ext4.h
··· 2485 2485 #ifdef CONFIG_FS_ENCRYPTION 2486 2486 struct fscrypt_str crypto_buf; 2487 2487 #endif 2488 - #ifdef CONFIG_UNICODE 2488 + #if IS_ENABLED(CONFIG_UNICODE) 2489 2489 struct fscrypt_str cf_name; 2490 2490 #endif 2491 2491 }; ··· 2721 2721 struct ext4_group_desc *gdp); 2722 2722 ext4_fsblk_t ext4_inode_to_goal_block(struct inode *); 2723 2723 2724 - #ifdef CONFIG_UNICODE 2724 + #if IS_ENABLED(CONFIG_UNICODE) 2725 2725 extern int ext4_fname_setup_ci_filename(struct inode *dir, 2726 2726 const struct qstr *iname, 2727 2727 struct ext4_filename *fname); ··· 2754 2754 2755 2755 ext4_fname_from_fscrypt_name(fname, &name); 2756 2756 2757 - #ifdef CONFIG_UNICODE 2757 + #if IS_ENABLED(CONFIG_UNICODE) 2758 2758 err = ext4_fname_setup_ci_filename(dir, iname, fname); 2759 2759 #endif 2760 2760 return err; ··· 2773 2773 2774 2774 ext4_fname_from_fscrypt_name(fname, &name); 2775 2775 2776 - #ifdef CONFIG_UNICODE 2776 + #if IS_ENABLED(CONFIG_UNICODE) 2777 2777 err = ext4_fname_setup_ci_filename(dir, &dentry->d_name, fname); 2778 2778 #endif 2779 2779 return err; ··· 2790 2790 fname->usr_fname = NULL; 2791 2791 fname->disk_name.name = NULL; 2792 2792 2793 - #ifdef CONFIG_UNICODE 2793 + #if IS_ENABLED(CONFIG_UNICODE) 2794 2794 kfree(fname->cf_name.name); 2795 2795 fname->cf_name.name = NULL; 2796 2796 #endif ··· 2806 2806 fname->disk_name.name = (unsigned char *) iname->name; 2807 2807 fname->disk_name.len = iname->len; 2808 2808 2809 - #ifdef CONFIG_UNICODE 2809 + #if IS_ENABLED(CONFIG_UNICODE) 2810 2810 err = ext4_fname_setup_ci_filename(dir, iname, fname); 2811 2811 #endif 2812 2812 ··· 2822 2822 2823 2823 static inline void ext4_fname_free_filename(struct ext4_filename *fname) 2824 2824 { 2825 - #ifdef CONFIG_UNICODE 2825 + #if IS_ENABLED(CONFIG_UNICODE) 2826 2826 kfree(fname->cf_name.name); 2827 2827 fname->cf_name.name = NULL; 2828 2828 #endif
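Editor's note: the ext4 hunks above replace `#ifdef CONFIG_UNICODE` with `#if IS_ENABLED(CONFIG_UNICODE)`, which is also true when the option is built as a module (only `CONFIG_UNICODE_MODULE` is defined). The preprocessor trick behind `IS_ENABLED()` can be reproduced standalone, in the spirit of `include/linux/kconfig.h`:

```c
/* Re-creation of the IS_ENABLED() machinery from include/linux/kconfig.h. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_BUILTIN(option) __is_defined(option)
#define IS_MODULE(option) __is_defined(option##_MODULE)
#define IS_ENABLED(option) (IS_BUILTIN(option) || IS_MODULE(option))

/* Pretend CONFIG_UNICODE=m: only the _MODULE symbol exists. */
#define CONFIG_UNICODE_MODULE 1
```

With this configuration, `#ifdef CONFIG_UNICODE` compiles the code out, while `IS_ENABLED(CONFIG_UNICODE)` evaluates to 1, which is exactly the case the ext4 change fixes for modular UTF-8 support.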
+1 -1
fs/ext4/hash.c
··· 290 290 int ext4fs_dirhash(const struct inode *dir, const char *name, int len, 291 291 struct dx_hash_info *hinfo) 292 292 { 293 - #ifdef CONFIG_UNICODE 293 + #if IS_ENABLED(CONFIG_UNICODE) 294 294 const struct unicode_map *um = dir->i_sb->s_encoding; 295 295 int r, dlen; 296 296 unsigned char *buff;
+6 -6
fs/ext4/namei.c
··· 1317 1317 dx_set_count(entries, count + 1); 1318 1318 } 1319 1319 1320 - #ifdef CONFIG_UNICODE 1320 + #if IS_ENABLED(CONFIG_UNICODE) 1321 1321 /* 1322 1322 * Test whether a case-insensitive directory entry matches the filename 1323 1323 * being searched for. If quick is set, assume the name being looked up ··· 1428 1428 f.crypto_buf = fname->crypto_buf; 1429 1429 #endif 1430 1430 1431 - #ifdef CONFIG_UNICODE 1431 + #if IS_ENABLED(CONFIG_UNICODE) 1432 1432 if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent) && 1433 1433 (!IS_ENCRYPTED(parent) || fscrypt_has_encryption_key(parent))) { 1434 1434 if (fname->cf_name.name) { ··· 1800 1800 } 1801 1801 } 1802 1802 1803 - #ifdef CONFIG_UNICODE 1803 + #if IS_ENABLED(CONFIG_UNICODE) 1804 1804 if (!inode && IS_CASEFOLDED(dir)) { 1805 1805 /* Eventually we want to call d_add_ci(dentry, NULL) 1806 1806 * for negative dentries in the encoding case as ··· 2308 2308 if (fscrypt_is_nokey_name(dentry)) 2309 2309 return -ENOKEY; 2310 2310 2311 - #ifdef CONFIG_UNICODE 2311 + #if IS_ENABLED(CONFIG_UNICODE) 2312 2312 if (sb_has_strict_encoding(sb) && IS_CASEFOLDED(dir) && 2313 2313 sb->s_encoding && utf8_validate(sb->s_encoding, &dentry->d_name)) 2314 2314 return -EINVAL; ··· 3126 3126 ext4_fc_track_unlink(handle, dentry); 3127 3127 retval = ext4_mark_inode_dirty(handle, dir); 3128 3128 3129 - #ifdef CONFIG_UNICODE 3129 + #if IS_ENABLED(CONFIG_UNICODE) 3130 3130 /* VFS negative dentries are incompatible with Encoding and 3131 3131 * Case-insensitiveness. Eventually we'll want avoid 3132 3132 * invalidating the dentries here, alongside with returning the ··· 3231 3231 retval = __ext4_unlink(handle, dir, &dentry->d_name, d_inode(dentry)); 3232 3232 if (!retval) 3233 3233 ext4_fc_track_unlink(handle, dentry); 3234 - #ifdef CONFIG_UNICODE 3234 + #if IS_ENABLED(CONFIG_UNICODE) 3235 3235 /* VFS negative dentries are incompatible with Encoding and 3236 3236 * Case-insensitiveness. 
Eventually we'll want avoid 3237 3237 * invalidating the dentries here, alongside with returning the
+5 -5
fs/ext4/super.c
··· 1301 1301 kfree(sbi->s_blockgroup_lock); 1302 1302 fs_put_dax(sbi->s_daxdev); 1303 1303 fscrypt_free_dummy_policy(&sbi->s_dummy_enc_policy); 1304 - #ifdef CONFIG_UNICODE 1304 + #if IS_ENABLED(CONFIG_UNICODE) 1305 1305 utf8_unload(sb->s_encoding); 1306 1306 #endif 1307 1307 kfree(sbi); ··· 1961 1961 {Opt_err, 0, 0} 1962 1962 }; 1963 1963 1964 - #ifdef CONFIG_UNICODE 1964 + #if IS_ENABLED(CONFIG_UNICODE) 1965 1965 static const struct ext4_sb_encodings { 1966 1966 __u16 magic; 1967 1967 char *name; ··· 3606 3606 return 0; 3607 3607 } 3608 3608 3609 - #ifndef CONFIG_UNICODE 3609 + #if !IS_ENABLED(CONFIG_UNICODE) 3610 3610 if (ext4_has_feature_casefold(sb)) { 3611 3611 ext4_msg(sb, KERN_ERR, 3612 3612 "Filesystem with casefold feature cannot be " ··· 4610 4610 if (err < 0) 4611 4611 goto failed_mount; 4612 4612 4613 - #ifdef CONFIG_UNICODE 4613 + #if IS_ENABLED(CONFIG_UNICODE) 4614 4614 if (ext4_has_feature_casefold(sb) && !sb->s_encoding) { 4615 4615 const struct ext4_sb_encodings *encoding_info; 4616 4616 struct unicode_map *encoding; ··· 5514 5514 if (sbi->s_chksum_driver) 5515 5515 crypto_free_shash(sbi->s_chksum_driver); 5516 5516 5517 - #ifdef CONFIG_UNICODE 5517 + #if IS_ENABLED(CONFIG_UNICODE) 5518 5518 utf8_unload(sb->s_encoding); 5519 5519 #endif 5520 5520
+4 -4
fs/ext4/sysfs.c
··· 309 309 EXT4_ATTR_FEATURE(encryption); 310 310 EXT4_ATTR_FEATURE(test_dummy_encryption_v2); 311 311 #endif 312 - #ifdef CONFIG_UNICODE 312 + #if IS_ENABLED(CONFIG_UNICODE) 313 313 EXT4_ATTR_FEATURE(casefold); 314 314 #endif 315 315 #ifdef CONFIG_FS_VERITY ··· 317 317 #endif 318 318 EXT4_ATTR_FEATURE(metadata_csum_seed); 319 319 EXT4_ATTR_FEATURE(fast_commit); 320 - #if defined(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 320 + #if IS_ENABLED(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 321 321 EXT4_ATTR_FEATURE(encrypted_casefold); 322 322 #endif 323 323 ··· 329 329 ATTR_LIST(encryption), 330 330 ATTR_LIST(test_dummy_encryption_v2), 331 331 #endif 332 - #ifdef CONFIG_UNICODE 332 + #if IS_ENABLED(CONFIG_UNICODE) 333 333 ATTR_LIST(casefold), 334 334 #endif 335 335 #ifdef CONFIG_FS_VERITY ··· 337 337 #endif 338 338 ATTR_LIST(metadata_csum_seed), 339 339 ATTR_LIST(fast_commit), 340 - #if defined(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 340 + #if IS_ENABLED(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 341 341 ATTR_LIST(encrypted_casefold), 342 342 #endif 343 343 NULL,
+5 -5
fs/f2fs/dir.c
··· 16 16 #include "xattr.h" 17 17 #include <trace/events/f2fs.h> 18 18 19 - #ifdef CONFIG_UNICODE 19 + #if IS_ENABLED(CONFIG_UNICODE) 20 20 extern struct kmem_cache *f2fs_cf_name_slab; 21 21 #endif 22 22 ··· 79 79 int f2fs_init_casefolded_name(const struct inode *dir, 80 80 struct f2fs_filename *fname) 81 81 { 82 - #ifdef CONFIG_UNICODE 82 + #if IS_ENABLED(CONFIG_UNICODE) 83 83 struct super_block *sb = dir->i_sb; 84 84 85 85 if (IS_CASEFOLDED(dir)) { ··· 174 174 kfree(fname->crypto_buf.name); 175 175 fname->crypto_buf.name = NULL; 176 176 #endif 177 - #ifdef CONFIG_UNICODE 177 + #if IS_ENABLED(CONFIG_UNICODE) 178 178 if (fname->cf_name.name) { 179 179 kmem_cache_free(f2fs_cf_name_slab, fname->cf_name.name); 180 180 fname->cf_name.name = NULL; ··· 208 208 return f2fs_find_target_dentry(&d, fname, max_slots); 209 209 } 210 210 211 - #ifdef CONFIG_UNICODE 211 + #if IS_ENABLED(CONFIG_UNICODE) 212 212 /* 213 213 * Test whether a case-insensitive directory entry matches the filename 214 214 * being searched for. ··· 266 266 { 267 267 struct fscrypt_name f; 268 268 269 - #ifdef CONFIG_UNICODE 269 + #if IS_ENABLED(CONFIG_UNICODE) 270 270 if (fname->cf_name.name) { 271 271 struct qstr cf = FSTR_TO_QSTR(&fname->cf_name); 272 272
+1 -1
fs/f2fs/f2fs.h
··· 488 488 */ 489 489 struct fscrypt_str crypto_buf; 490 490 #endif 491 - #ifdef CONFIG_UNICODE 491 + #if IS_ENABLED(CONFIG_UNICODE) 492 492 /* 493 493 * For casefolded directories: the casefolded name, but it's left NULL 494 494 * if the original name is not valid Unicode, if the directory is both
+1 -1
fs/f2fs/hash.c
··· 105 105 return; 106 106 } 107 107 108 - #ifdef CONFIG_UNICODE 108 + #if IS_ENABLED(CONFIG_UNICODE) 109 109 if (IS_CASEFOLDED(dir)) { 110 110 /* 111 111 * If the casefolded name is provided, hash it instead of the
+2 -2
fs/f2fs/namei.c
··· 561 561 goto out_iput; 562 562 } 563 563 out_splice: 564 - #ifdef CONFIG_UNICODE 564 + #if IS_ENABLED(CONFIG_UNICODE) 565 565 if (!inode && IS_CASEFOLDED(dir)) { 566 566 /* Eventually we want to call d_add_ci(dentry, NULL) 567 567 * for negative dentries in the encoding case as ··· 622 622 goto fail; 623 623 } 624 624 f2fs_delete_entry(de, page, dir, inode); 625 - #ifdef CONFIG_UNICODE 625 + #if IS_ENABLED(CONFIG_UNICODE) 626 626 /* VFS negative dentries are incompatible with Encoding and 627 627 * Case-insensitiveness. Eventually we'll want avoid 628 628 * invalidating the dentries here, alongside with returning the
+2 -2
fs/f2fs/recovery.c
··· 46 46 47 47 static struct kmem_cache *fsync_entry_slab; 48 48 49 - #ifdef CONFIG_UNICODE 49 + #if IS_ENABLED(CONFIG_UNICODE) 50 50 extern struct kmem_cache *f2fs_cf_name_slab; 51 51 #endif 52 52 ··· 149 149 if (err) 150 150 return err; 151 151 f2fs_hash_filename(dir, fname); 152 - #ifdef CONFIG_UNICODE 152 + #if IS_ENABLED(CONFIG_UNICODE) 153 153 /* Case-sensitive match is fine for recovery */ 154 154 kmem_cache_free(f2fs_cf_name_slab, fname->cf_name.name); 155 155 fname->cf_name.name = NULL;
+5 -5
fs/f2fs/super.c
··· 257 257 va_end(args); 258 258 } 259 259 260 - #ifdef CONFIG_UNICODE 260 + #if IS_ENABLED(CONFIG_UNICODE) 261 261 static const struct f2fs_sb_encodings { 262 262 __u16 magic; 263 263 char *name; ··· 1259 1259 return -EINVAL; 1260 1260 } 1261 1261 #endif 1262 - #ifndef CONFIG_UNICODE 1262 + #if !IS_ENABLED(CONFIG_UNICODE) 1263 1263 if (f2fs_sb_has_casefold(sbi)) { 1264 1264 f2fs_err(sbi, 1265 1265 "Filesystem with casefold feature cannot be mounted without CONFIG_UNICODE"); ··· 1619 1619 f2fs_destroy_iostat(sbi); 1620 1620 for (i = 0; i < NR_PAGE_TYPE; i++) 1621 1621 kvfree(sbi->write_io[i]); 1622 - #ifdef CONFIG_UNICODE 1622 + #if IS_ENABLED(CONFIG_UNICODE) 1623 1623 utf8_unload(sb->s_encoding); 1624 1624 #endif 1625 1625 kfree(sbi); ··· 3903 3903 3904 3904 static int f2fs_setup_casefold(struct f2fs_sb_info *sbi) 3905 3905 { 3906 - #ifdef CONFIG_UNICODE 3906 + #if IS_ENABLED(CONFIG_UNICODE) 3907 3907 if (f2fs_sb_has_casefold(sbi) && !sbi->sb->s_encoding) { 3908 3908 const struct f2fs_sb_encodings *encoding_info; 3909 3909 struct unicode_map *encoding; ··· 4458 4458 for (i = 0; i < NR_PAGE_TYPE; i++) 4459 4459 kvfree(sbi->write_io[i]); 4460 4460 4461 - #ifdef CONFIG_UNICODE 4461 + #if IS_ENABLED(CONFIG_UNICODE) 4462 4462 utf8_unload(sb->s_encoding); 4463 4463 sb->s_encoding = NULL; 4464 4464 #endif
+5 -5
fs/f2fs/sysfs.c
··· 201 201 static ssize_t encoding_show(struct f2fs_attr *a, 202 202 struct f2fs_sb_info *sbi, char *buf) 203 203 { 204 - #ifdef CONFIG_UNICODE 204 + #if IS_ENABLED(CONFIG_UNICODE) 205 205 struct super_block *sb = sbi->sb; 206 206 207 207 if (f2fs_sb_has_casefold(sbi)) ··· 778 778 #ifdef CONFIG_FS_ENCRYPTION 779 779 F2FS_FEATURE_RO_ATTR(encryption); 780 780 F2FS_FEATURE_RO_ATTR(test_dummy_encryption_v2); 781 - #ifdef CONFIG_UNICODE 781 + #if IS_ENABLED(CONFIG_UNICODE) 782 782 F2FS_FEATURE_RO_ATTR(encrypted_casefold); 783 783 #endif 784 784 #endif /* CONFIG_FS_ENCRYPTION */ ··· 797 797 F2FS_FEATURE_RO_ATTR(verity); 798 798 #endif 799 799 F2FS_FEATURE_RO_ATTR(sb_checksum); 800 - #ifdef CONFIG_UNICODE 800 + #if IS_ENABLED(CONFIG_UNICODE) 801 801 F2FS_FEATURE_RO_ATTR(casefold); 802 802 #endif 803 803 F2FS_FEATURE_RO_ATTR(readonly); ··· 910 910 #ifdef CONFIG_FS_ENCRYPTION 911 911 ATTR_LIST(encryption), 912 912 ATTR_LIST(test_dummy_encryption_v2), 913 - #ifdef CONFIG_UNICODE 913 + #if IS_ENABLED(CONFIG_UNICODE) 914 914 ATTR_LIST(encrypted_casefold), 915 915 #endif 916 916 #endif /* CONFIG_FS_ENCRYPTION */ ··· 929 929 ATTR_LIST(verity), 930 930 #endif 931 931 ATTR_LIST(sb_checksum), 932 - #ifdef CONFIG_UNICODE 932 + #if IS_ENABLED(CONFIG_UNICODE) 933 933 ATTR_LIST(casefold), 934 934 #endif 935 935 ATTR_LIST(readonly),
+48 -4
fs/iomap/buffered-io.c
··· 21 21 22 22 #include "../internal.h" 23 23 24 + #define IOEND_BATCH_SIZE 4096 25 + 24 26 /* 25 27 * Structure allocated for each folio when block size < folio size 26 28 * to track sub-folio uptodate status and I/O completions. ··· 1041 1039 * state, release holds on bios, and finally free up memory. Do not use the 1042 1040 * ioend after this. 1043 1041 */ 1044 - static void 1042 + static u32 1045 1043 iomap_finish_ioend(struct iomap_ioend *ioend, int error) 1046 1044 { 1047 1045 struct inode *inode = ioend->io_inode; ··· 1050 1048 u64 start = bio->bi_iter.bi_sector; 1051 1049 loff_t offset = ioend->io_offset; 1052 1050 bool quiet = bio_flagged(bio, BIO_QUIET); 1051 + u32 folio_count = 0; 1053 1052 1054 1053 for (bio = &ioend->io_inline_bio; bio; bio = next) { 1055 1054 struct folio_iter fi; ··· 1065 1062 next = bio->bi_private; 1066 1063 1067 1064 /* walk all folios in bio, ending page IO on them */ 1068 - bio_for_each_folio_all(fi, bio) 1065 + bio_for_each_folio_all(fi, bio) { 1069 1066 iomap_finish_folio_write(inode, fi.folio, fi.length, 1070 1067 error); 1068 + folio_count++; 1069 + } 1071 1070 bio_put(bio); 1072 1071 } 1073 1072 /* The ioend has been freed by bio_put() */ ··· 1079 1074 "%s: writeback error on inode %lu, offset %lld, sector %llu", 1080 1075 inode->i_sb->s_id, inode->i_ino, offset, start); 1081 1076 } 1077 + return folio_count; 1082 1078 } 1083 1079 1080 + /* 1081 + * Ioend completion routine for merged bios. This can only be called from task 1082 + * contexts as merged ioends can be of unbound length. Hence we have to break up 1083 + * the writeback completions into manageable chunks to avoid long scheduler 1084 + * holdoffs. We aim to keep scheduler holdoffs down below 10ms so that we get 1085 + * good batch processing throughput without creating adverse scheduler latency 1086 + * conditions. 
1087 + */ 1084 1088 void 1085 1089 iomap_finish_ioends(struct iomap_ioend *ioend, int error) 1086 1090 { 1087 1091 struct list_head tmp; 1092 + u32 completions; 1093 + 1094 + might_sleep(); 1088 1095 1089 1096 list_replace_init(&ioend->io_list, &tmp); 1090 - iomap_finish_ioend(ioend, error); 1097 + completions = iomap_finish_ioend(ioend, error); 1091 1098 1092 1099 while (!list_empty(&tmp)) { 1100 + if (completions > IOEND_BATCH_SIZE * 8) { 1101 + cond_resched(); 1102 + completions = 0; 1103 + } 1093 1104 ioend = list_first_entry(&tmp, struct iomap_ioend, io_list); 1094 1105 list_del_init(&ioend->io_list); 1095 - iomap_finish_ioend(ioend, error); 1106 + completions += iomap_finish_ioend(ioend, error); 1096 1107 } 1097 1108 } 1098 1109 EXPORT_SYMBOL_GPL(iomap_finish_ioends); ··· 1128 1107 (next->io_type == IOMAP_UNWRITTEN)) 1129 1108 return false; 1130 1109 if (ioend->io_offset + ioend->io_size != next->io_offset) 1110 + return false; 1111 + /* 1112 + * Do not merge physically discontiguous ioends. The filesystem 1113 + * completion functions will have to iterate the physical 1114 + * discontiguities even if we merge the ioends at a logical level, so 1115 + * we don't gain anything by merging physical discontiguities here. 1116 + * 1117 + * We cannot use bio->bi_iter.bi_sector here as it is modified during 1118 + * submission so does not point to the start sector of the bio at 1119 + * completion. 
1120 + */ 1121 + if (ioend->io_sector + (ioend->io_size >> 9) != next->io_sector) 1131 1122 return false; 1132 1123 return true; 1133 1124 } ··· 1242 1209 ioend->io_flags = wpc->iomap.flags; 1243 1210 ioend->io_inode = inode; 1244 1211 ioend->io_size = 0; 1212 + ioend->io_folios = 0; 1245 1213 ioend->io_offset = offset; 1246 1214 ioend->io_bio = bio; 1215 + ioend->io_sector = sector; 1247 1216 return ioend; 1248 1217 } 1249 1218 ··· 1285 1250 if (offset != wpc->ioend->io_offset + wpc->ioend->io_size) 1286 1251 return false; 1287 1252 if (sector != bio_end_sector(wpc->ioend->io_bio)) 1253 + return false; 1254 + /* 1255 + * Limit ioend bio chain lengths to minimise IO completion latency. This 1256 + * also prevents long tight loops ending page writeback on all the 1257 + * folios in the ioend. 1258 + */ 1259 + if (wpc->ioend->io_folios >= IOEND_BATCH_SIZE) 1288 1260 return false; 1289 1261 return true; 1290 1262 } ··· 1377 1335 &submit_list); 1378 1336 count++; 1379 1337 } 1338 + if (count) 1339 + wpc->ioend->io_folios++; 1380 1340 1381 1341 WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list)); 1382 1342 WARN_ON_ONCE(!folio_test_locked(folio));
+5 -5
fs/libfs.c
··· 1379 1379 (inode->i_op == &empty_dir_inode_operations); 1380 1380 } 1381 1381 1382 - #ifdef CONFIG_UNICODE 1382 + #if IS_ENABLED(CONFIG_UNICODE) 1383 1383 /* 1384 1384 * Determine if the name of a dentry should be casefolded. 1385 1385 * ··· 1473 1473 }; 1474 1474 #endif 1475 1475 1476 - #if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE) 1476 + #if defined(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_UNICODE) 1477 1477 static const struct dentry_operations generic_encrypted_ci_dentry_ops = { 1478 1478 .d_hash = generic_ci_d_hash, 1479 1479 .d_compare = generic_ci_d_compare, ··· 1508 1508 #ifdef CONFIG_FS_ENCRYPTION 1509 1509 bool needs_encrypt_ops = dentry->d_flags & DCACHE_NOKEY_NAME; 1510 1510 #endif 1511 - #ifdef CONFIG_UNICODE 1511 + #if IS_ENABLED(CONFIG_UNICODE) 1512 1512 bool needs_ci_ops = dentry->d_sb->s_encoding; 1513 1513 #endif 1514 - #if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE) 1514 + #if defined(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_UNICODE) 1515 1515 if (needs_encrypt_ops && needs_ci_ops) { 1516 1516 d_set_d_op(dentry, &generic_encrypted_ci_dentry_ops); 1517 1517 return; ··· 1523 1523 return; 1524 1524 } 1525 1525 #endif 1526 - #ifdef CONFIG_UNICODE 1526 + #if IS_ENABLED(CONFIG_UNICODE) 1527 1527 if (needs_ci_ops) { 1528 1528 d_set_d_op(dentry, &generic_ci_dentry_ops); 1529 1529 return;
+10 -8
fs/lockd/svcsubs.c
··· 179 179 static int nlm_unlock_files(struct nlm_file *file) 180 180 { 181 181 struct file_lock lock; 182 - struct file *f; 183 182 183 + locks_init_lock(&lock); 184 184 lock.fl_type = F_UNLCK; 185 185 lock.fl_start = 0; 186 186 lock.fl_end = OFFSET_MAX; 187 - for (f = file->f_file[0]; f <= file->f_file[1]; f++) { 188 - if (f && vfs_lock_file(f, F_SETLK, &lock, NULL) < 0) { 189 - pr_warn("lockd: unlock failure in %s:%d\n", 190 - __FILE__, __LINE__); 191 - return 1; 192 - } 193 - } 187 + if (file->f_file[O_RDONLY] && 188 + vfs_lock_file(file->f_file[O_RDONLY], F_SETLK, &lock, NULL)) 189 + goto out_err; 190 + if (file->f_file[O_WRONLY] && 191 + vfs_lock_file(file->f_file[O_WRONLY], F_SETLK, &lock, NULL)) 192 + goto out_err; 194 193 return 0; 194 + out_err: 195 + pr_warn("lockd: unlock failure in %s:%d\n", __FILE__, __LINE__); 196 + return 1; 195 197 } 196 198 197 199 /*
+3 -1
fs/nfsd/nfs4state.c
··· 4130 4130 status = nfserr_clid_inuse; 4131 4131 if (client_has_state(old) 4132 4132 && !same_creds(&unconf->cl_cred, 4133 - &old->cl_cred)) 4133 + &old->cl_cred)) { 4134 + old = NULL; 4134 4135 goto out; 4136 + } 4135 4137 status = mark_client_expired_locked(old); 4136 4138 if (status) { 4137 4139 old = NULL;
+3 -3
fs/notify/fanotify/fanotify_user.c
··· 701 701 if (fanotify_is_perm_event(event->mask)) 702 702 FANOTIFY_PERM(event)->fd = fd; 703 703 704 - if (f) 705 - fd_install(fd, f); 706 - 707 704 if (info_mode) { 708 705 ret = copy_info_records_to_user(event, info, info_mode, pidfd, 709 706 buf, count); 710 707 if (ret < 0) 711 708 goto out_close_fd; 712 709 } 710 + 711 + if (f) 712 + fd_install(fd, f); 713 713 714 714 return metadata.event_len; 715 715
+13 -3
fs/overlayfs/copy_up.c
··· 145 145 if (err == -ENOTTY || err == -EINVAL) 146 146 return 0; 147 147 pr_warn("failed to retrieve lower fileattr (%pd2, err=%i)\n", 148 - old, err); 148 + old->dentry, err); 149 149 return err; 150 150 } 151 151 ··· 157 157 */ 158 158 if (oldfa.flags & OVL_PROT_FS_FLAGS_MASK) { 159 159 err = ovl_set_protattr(inode, new->dentry, &oldfa); 160 - if (err) 160 + if (err == -EPERM) 161 + pr_warn_once("copying fileattr: no xattr on upper\n"); 162 + else if (err) 161 163 return err; 162 164 } 163 165 ··· 169 167 170 168 err = ovl_real_fileattr_get(new, &newfa); 171 169 if (err) { 170 + /* 171 + * Returning an error if upper doesn't support fileattr will 172 + * result in a regression, so revert to the old behavior. 173 + */ 174 + if (err == -ENOTTY || err == -EINVAL) { 175 + pr_warn_once("copying fileattr: no support on upper\n"); 176 + return 0; 177 + } 172 178 pr_warn("failed to retrieve upper fileattr (%pd2, err=%i)\n", 173 - new, err); 179 + new->dentry, err); 174 180 return err; 175 181 } 176 182
+8 -3
fs/quota/dquot.c
··· 690 690 /* This is not very clever (and fast) but currently I don't know about 691 691 * any other simple way of getting quota data to disk and we must get 692 692 * them there for userspace to be visible... */ 693 - if (sb->s_op->sync_fs) 694 - sb->s_op->sync_fs(sb, 1); 695 - sync_blockdev(sb->s_bdev); 693 + if (sb->s_op->sync_fs) { 694 + ret = sb->s_op->sync_fs(sb, 1); 695 + if (ret) 696 + return ret; 697 + } 698 + ret = sync_blockdev(sb->s_bdev); 699 + if (ret) 700 + return ret; 696 701 697 702 /* 698 703 * Now when everything is written we can discard the pagecache so
+12 -7
fs/super.c
··· 1616 1616 percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_); 1617 1617 } 1618 1618 1619 - static void sb_freeze_unlock(struct super_block *sb) 1619 + static void sb_freeze_unlock(struct super_block *sb, int level) 1620 1620 { 1621 - int level; 1622 - 1623 - for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--) 1621 + for (level--; level >= 0; level--) 1624 1622 percpu_up_write(sb->s_writers.rw_sem + level); 1625 1623 } 1626 1624 ··· 1689 1691 sb_wait_write(sb, SB_FREEZE_PAGEFAULT); 1690 1692 1691 1693 /* All writers are done so after syncing there won't be dirty data */ 1692 - sync_filesystem(sb); 1694 + ret = sync_filesystem(sb); 1695 + if (ret) { 1696 + sb->s_writers.frozen = SB_UNFROZEN; 1697 + sb_freeze_unlock(sb, SB_FREEZE_PAGEFAULT); 1698 + wake_up(&sb->s_writers.wait_unfrozen); 1699 + deactivate_locked_super(sb); 1700 + return ret; 1701 + } 1693 1702 1694 1703 /* Now wait for internal filesystem counter */ 1695 1704 sb->s_writers.frozen = SB_FREEZE_FS; ··· 1708 1703 printk(KERN_ERR 1709 1704 "VFS:Filesystem freeze failed\n"); 1710 1705 sb->s_writers.frozen = SB_UNFROZEN; 1711 - sb_freeze_unlock(sb); 1706 + sb_freeze_unlock(sb, SB_FREEZE_FS); 1712 1707 wake_up(&sb->s_writers.wait_unfrozen); 1713 1708 deactivate_locked_super(sb); 1714 1709 return ret; ··· 1753 1748 } 1754 1749 1755 1750 sb->s_writers.frozen = SB_UNFROZEN; 1756 - sb_freeze_unlock(sb); 1751 + sb_freeze_unlock(sb, SB_FREEZE_FS); 1757 1752 out: 1758 1753 wake_up(&sb->s_writers.wait_unfrozen); 1759 1754 deactivate_locked_super(sb);
+12 -6
fs/sync.c
··· 29 29 */ 30 30 int sync_filesystem(struct super_block *sb) 31 31 { 32 - int ret; 32 + int ret = 0; 33 33 34 34 /* 35 35 * We need to be protected against the filesystem going from ··· 52 52 * at a time. 53 53 */ 54 54 writeback_inodes_sb(sb, WB_REASON_SYNC); 55 - if (sb->s_op->sync_fs) 56 - sb->s_op->sync_fs(sb, 0); 55 + if (sb->s_op->sync_fs) { 56 + ret = sb->s_op->sync_fs(sb, 0); 57 + if (ret) 58 + return ret; 59 + } 57 60 ret = sync_blockdev_nowait(sb->s_bdev); 58 - if (ret < 0) 61 + if (ret) 59 62 return ret; 60 63 61 64 sync_inodes_sb(sb); 62 - if (sb->s_op->sync_fs) 63 - sb->s_op->sync_fs(sb, 1); 65 + if (sb->s_op->sync_fs) { 66 + ret = sb->s_op->sync_fs(sb, 1); 67 + if (ret) 68 + return ret; 69 + } 64 70 return sync_blockdev(sb->s_bdev); 65 71 } 66 72 EXPORT_SYMBOL(sync_filesystem);
+5 -13
fs/unicode/Kconfig
··· 3 3 # UTF-8 normalization 4 4 # 5 5 config UNICODE 6 - bool "UTF-8 normalization and casefolding support" 6 + tristate "UTF-8 normalization and casefolding support" 7 7 help 8 8 Say Y here to enable UTF-8 NFD normalization and NFD+CF casefolding 9 - support. 10 - 11 - config UNICODE_UTF8_DATA 12 - tristate "UTF-8 normalization and casefolding tables" 13 - depends on UNICODE 14 - default UNICODE 15 - help 16 - This contains a large table of case foldings, which can be loaded as 17 - a separate module if you say M here. To be on the safe side stick 18 - to the default of Y. Saying N here makes no sense, if you do not want 19 - utf8 casefolding support, disable CONFIG_UNICODE instead. 9 + support. If you say M here the large table of case foldings will 10 + be a separate loadable module that gets requested only when a file 11 + system actually use it. 20 12 21 13 config UNICODE_NORMALIZATION_SELFTEST 22 14 tristate "Test UTF-8 normalization support" 23 - depends on UNICODE_UTF8_DATA 15 + depends on UNICODE
+4 -2
fs/unicode/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - obj-$(CONFIG_UNICODE) += unicode.o 3 + ifneq ($(CONFIG_UNICODE),) 4 + obj-y += unicode.o 5 + endif 6 + obj-$(CONFIG_UNICODE) += utf8data.o 4 7 obj-$(CONFIG_UNICODE_NORMALIZATION_SELFTEST) += utf8-selftest.o 5 - obj-$(CONFIG_UNICODE_UTF8_DATA) += utf8data.o 6 8 7 9 unicode-y := utf8-norm.o utf8-core.o 8 10
+15 -1
fs/xfs/xfs_aops.c
··· 136 136 memalloc_nofs_restore(nofs_flag); 137 137 } 138 138 139 - /* Finish all pending io completions. */ 139 + /* 140 + * Finish all pending IO completions that require transactional modifications. 141 + * 142 + * We try to merge physical and logically contiguous ioends before completion to 143 + * minimise the number of transactions we need to perform during IO completion. 144 + * Both unwritten extent conversion and COW remapping need to iterate and modify 145 + * one physical extent at a time, so we gain nothing by merging physically 146 + * discontiguous extents here. 147 + * 148 + * The ioend chain length that we can be processing here is largely unbound in 149 + * length and we may have to perform significant amounts of work on each ioend 150 + * to complete it. Hence we have to be careful about holding the CPU for too 151 + * long in this loop. 152 + */ 140 153 void 141 154 xfs_end_io( 142 155 struct work_struct *work) ··· 170 157 list_del_init(&ioend->io_list); 171 158 iomap_ioend_try_merge(ioend, &tmp); 172 159 xfs_end_ioend(ioend); 160 + cond_resched(); 173 161 } 174 162 } 175 163
+3 -6
fs/xfs/xfs_bmap_util.c
··· 850 850 rblocks = 0; 851 851 } 852 852 853 - /* 854 - * Allocate and setup the transaction. 855 - */ 856 853 error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write, 857 854 dblocks, rblocks, false, &tp); 858 855 if (error) ··· 866 869 if (error) 867 870 goto error; 868 871 869 - /* 870 - * Complete the transaction 871 - */ 872 + ip->i_diflags |= XFS_DIFLAG_PREALLOC; 873 + xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); 874 + 872 875 error = xfs_trans_commit(tp); 873 876 xfs_iunlock(ip, XFS_ILOCK_EXCL); 874 877 if (error)
+26 -60
fs/xfs/xfs_file.c
··· 66 66 return !((pos | len) & mask); 67 67 } 68 68 69 - int 70 - xfs_update_prealloc_flags( 71 - struct xfs_inode *ip, 72 - enum xfs_prealloc_flags flags) 73 - { 74 - struct xfs_trans *tp; 75 - int error; 76 - 77 - error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_writeid, 78 - 0, 0, 0, &tp); 79 - if (error) 80 - return error; 81 - 82 - xfs_ilock(ip, XFS_ILOCK_EXCL); 83 - xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL); 84 - 85 - if (!(flags & XFS_PREALLOC_INVISIBLE)) { 86 - VFS_I(ip)->i_mode &= ~S_ISUID; 87 - if (VFS_I(ip)->i_mode & S_IXGRP) 88 - VFS_I(ip)->i_mode &= ~S_ISGID; 89 - xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG); 90 - } 91 - 92 - if (flags & XFS_PREALLOC_SET) 93 - ip->i_diflags |= XFS_DIFLAG_PREALLOC; 94 - if (flags & XFS_PREALLOC_CLEAR) 95 - ip->i_diflags &= ~XFS_DIFLAG_PREALLOC; 96 - 97 - xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); 98 - if (flags & XFS_PREALLOC_SYNC) 99 - xfs_trans_set_sync(tp); 100 - return xfs_trans_commit(tp); 101 - } 102 - 103 69 /* 104 70 * Fsync operations on directories are much simpler than on regular files, 105 71 * as there is no file data to flush, and thus also no need for explicit ··· 861 895 return error; 862 896 } 863 897 898 + /* Does this file, inode, or mount want synchronous writes? 
*/ 899 + static inline bool xfs_file_sync_writes(struct file *filp) 900 + { 901 + struct xfs_inode *ip = XFS_I(file_inode(filp)); 902 + 903 + if (xfs_has_wsync(ip->i_mount)) 904 + return true; 905 + if (filp->f_flags & (__O_SYNC | O_DSYNC)) 906 + return true; 907 + if (IS_SYNC(file_inode(filp))) 908 + return true; 909 + 910 + return false; 911 + } 912 + 864 913 #define XFS_FALLOC_FL_SUPPORTED \ 865 914 (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \ 866 915 FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE | \ ··· 891 910 struct inode *inode = file_inode(file); 892 911 struct xfs_inode *ip = XFS_I(inode); 893 912 long error; 894 - enum xfs_prealloc_flags flags = 0; 895 913 uint iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL; 896 914 loff_t new_size = 0; 897 915 bool do_file_insert = false; ··· 934 954 if (error) 935 955 goto out_unlock; 936 956 } 957 + 958 + error = file_modified(file); 959 + if (error) 960 + goto out_unlock; 937 961 938 962 if (mode & FALLOC_FL_PUNCH_HOLE) { 939 963 error = xfs_free_file_space(ip, offset, len); ··· 988 1004 } 989 1005 do_file_insert = true; 990 1006 } else { 991 - flags |= XFS_PREALLOC_SET; 992 - 993 1007 if (!(mode & FALLOC_FL_KEEP_SIZE) && 994 1008 offset + len > i_size_read(inode)) { 995 1009 new_size = offset + len; ··· 1039 1057 } 1040 1058 } 1041 1059 1042 - if (file->f_flags & O_DSYNC) 1043 - flags |= XFS_PREALLOC_SYNC; 1044 - 1045 - error = xfs_update_prealloc_flags(ip, flags); 1046 - if (error) 1047 - goto out_unlock; 1048 - 1049 1060 /* Change file size if needed */ 1050 1061 if (new_size) { 1051 1062 struct iattr iattr; ··· 1057 1082 * leave shifted extents past EOF and hence losing access to 1058 1083 * the data that is contained within them. 
1059 1084 */ 1060 - if (do_file_insert) 1085 + if (do_file_insert) { 1061 1086 error = xfs_insert_file_space(ip, offset, len); 1087 + if (error) 1088 + goto out_unlock; 1089 + } 1090 + 1091 + if (xfs_file_sync_writes(file)) 1092 + error = xfs_log_force_inode(ip); 1062 1093 1063 1094 out_unlock: 1064 1095 xfs_iunlock(ip, iolock); ··· 1094 1113 if (lockflags) 1095 1114 xfs_iunlock(ip, lockflags); 1096 1115 return ret; 1097 - } 1098 - 1099 - /* Does this file, inode, or mount want synchronous writes? */ 1100 - static inline bool xfs_file_sync_writes(struct file *filp) 1101 - { 1102 - struct xfs_inode *ip = XFS_I(file_inode(filp)); 1103 - 1104 - if (xfs_has_wsync(ip->i_mount)) 1105 - return true; 1106 - if (filp->f_flags & (__O_SYNC | O_DSYNC)) 1107 - return true; 1108 - if (IS_SYNC(file_inode(filp))) 1109 - return true; 1110 - 1111 - return false; 1112 1116 } 1113 1117 1114 1118 STATIC loff_t
-9
fs/xfs/xfs_inode.h
··· 462 462 } 463 463 464 464 /* from xfs_file.c */ 465 - enum xfs_prealloc_flags { 466 - XFS_PREALLOC_SET = (1 << 1), 467 - XFS_PREALLOC_CLEAR = (1 << 2), 468 - XFS_PREALLOC_SYNC = (1 << 3), 469 - XFS_PREALLOC_INVISIBLE = (1 << 4), 470 - }; 471 - 472 - int xfs_update_prealloc_flags(struct xfs_inode *ip, 473 - enum xfs_prealloc_flags flags); 474 465 int xfs_break_layouts(struct inode *inode, uint *iolock, 475 466 enum layout_break_reason reason); 476 467
+1 -1
fs/xfs/xfs_ioctl.c
··· 1464 1464 1465 1465 if (bmx.bmv_count < 2) 1466 1466 return -EINVAL; 1467 - if (bmx.bmv_count > ULONG_MAX / recsize) 1467 + if (bmx.bmv_count >= INT_MAX / recsize) 1468 1468 return -ENOMEM; 1469 1469 1470 1470 buf = kvcalloc(bmx.bmv_count, sizeof(*buf), GFP_KERNEL);
+39 -3
fs/xfs/xfs_pnfs.c
··· 71 71 } 72 72 73 73 /* 74 + * We cannot use file based VFS helpers such as file_modified() to update 75 + * inode state as we modify the data/metadata in the inode here. Hence we have 76 + * to open code the timestamp updates and SUID/SGID stripping. We also need 77 + * to set the inode prealloc flag to ensure that the extents we allocate are not 78 + * removed if the inode is reclaimed from memory before xfs_fs_block_commit() 79 + * is from the client to indicate that data has been written and the file size 80 + * can be extended. 81 + */ 82 + static int 83 + xfs_fs_map_update_inode( 84 + struct xfs_inode *ip) 85 + { 86 + struct xfs_trans *tp; 87 + int error; 88 + 89 + error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_writeid, 90 + 0, 0, 0, &tp); 91 + if (error) 92 + return error; 93 + 94 + xfs_ilock(ip, XFS_ILOCK_EXCL); 95 + xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL); 96 + 97 + VFS_I(ip)->i_mode &= ~S_ISUID; 98 + if (VFS_I(ip)->i_mode & S_IXGRP) 99 + VFS_I(ip)->i_mode &= ~S_ISGID; 100 + xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG); 101 + ip->i_diflags |= XFS_DIFLAG_PREALLOC; 102 + 103 + xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); 104 + return xfs_trans_commit(tp); 105 + } 106 + 107 + /* 74 108 * Get a layout for the pNFS client. 75 109 */ 76 110 int ··· 198 164 * that the blocks allocated and handed out to the client are 199 165 * guaranteed to be present even after a server crash. 200 166 */ 201 - error = xfs_update_prealloc_flags(ip, 202 - XFS_PREALLOC_SET | XFS_PREALLOC_SYNC); 167 + error = xfs_fs_map_update_inode(ip); 168 + if (!error) 169 + error = xfs_log_force_inode(ip); 203 170 if (error) 204 171 goto out_unlock; 172 + 205 173 } else { 206 174 xfs_iunlock(ip, lock_flags); 207 175 } ··· 291 255 length = end - start; 292 256 if (!length) 293 257 continue; 294 - 258 + 295 259 /* 296 260 * Make sure reads through the pagecache see the new data. 297 261 */
+5 -1
fs/xfs/xfs_super.c
··· 735 735 int wait) 736 736 { 737 737 struct xfs_mount *mp = XFS_M(sb); 738 + int error; 738 739 739 740 trace_xfs_fs_sync_fs(mp, __return_address); 740 741 ··· 745 744 if (!wait) 746 745 return 0; 747 746 748 - xfs_log_force(mp, XFS_LOG_SYNC); 747 + error = xfs_log_force(mp, XFS_LOG_SYNC); 748 + if (error) 749 + return error; 750 + 749 751 if (laptop_mode) { 750 752 /* 751 753 * The disk must be active because we're syncing.
+25 -15
include/crypto/internal/blake2s.h
··· 24 24 state->f[0] = -1; 25 25 } 26 26 27 - typedef void (*blake2s_compress_t)(struct blake2s_state *state, 28 - const u8 *block, size_t nblocks, u32 inc); 29 - 30 27 /* Helper functions for BLAKE2s shared by the library and shash APIs */ 31 28 32 - static inline void __blake2s_update(struct blake2s_state *state, 33 - const u8 *in, size_t inlen, 34 - blake2s_compress_t compress) 29 + static __always_inline void 30 + __blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen, 31 + bool force_generic) 35 32 { 36 33 const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; 37 34 ··· 36 39 return; 37 40 if (inlen > fill) { 38 41 memcpy(state->buf + state->buflen, in, fill); 39 - (*compress)(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); 42 + if (force_generic) 43 + blake2s_compress_generic(state, state->buf, 1, 44 + BLAKE2S_BLOCK_SIZE); 45 + else 46 + blake2s_compress(state, state->buf, 1, 47 + BLAKE2S_BLOCK_SIZE); 40 48 state->buflen = 0; 41 49 in += fill; 42 50 inlen -= fill; ··· 49 47 if (inlen > BLAKE2S_BLOCK_SIZE) { 50 48 const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); 51 49 /* Hash one less (full) block than strictly possible */ 52 - (*compress)(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); 50 + if (force_generic) 51 + blake2s_compress_generic(state, in, nblocks - 1, 52 + BLAKE2S_BLOCK_SIZE); 53 + else 54 + blake2s_compress(state, in, nblocks - 1, 55 + BLAKE2S_BLOCK_SIZE); 53 56 in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); 54 57 inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); 55 58 } ··· 62 55 state->buflen += inlen; 63 56 } 64 57 65 - static inline void __blake2s_final(struct blake2s_state *state, u8 *out, 66 - blake2s_compress_t compress) 58 + static __always_inline void 59 + __blake2s_final(struct blake2s_state *state, u8 *out, bool force_generic) 67 60 { 68 61 blake2s_set_lastblock(state); 69 62 memset(state->buf + state->buflen, 0, 70 63 BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ 71 - (*compress)(state, state->buf, 1, 
state->buflen); 64 + if (force_generic) 65 + blake2s_compress_generic(state, state->buf, 1, state->buflen); 66 + else 67 + blake2s_compress(state, state->buf, 1, state->buflen); 72 68 cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); 73 69 memcpy(out, state->h, state->outlen); 74 70 } ··· 109 99 110 100 static inline int crypto_blake2s_update(struct shash_desc *desc, 111 101 const u8 *in, unsigned int inlen, 112 - blake2s_compress_t compress) 102 + bool force_generic) 113 103 { 114 104 struct blake2s_state *state = shash_desc_ctx(desc); 115 105 116 - __blake2s_update(state, in, inlen, compress); 106 + __blake2s_update(state, in, inlen, force_generic); 117 107 return 0; 118 108 } 119 109 120 110 static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out, 121 - blake2s_compress_t compress) 111 + bool force_generic) 122 112 { 123 113 struct blake2s_state *state = shash_desc_ctx(desc); 124 114 125 - __blake2s_final(state, out, compress); 115 + __blake2s_final(state, out, force_generic); 126 116 return 0; 127 117 } 128 118
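The refactor above swaps the blake2s_compress_t function pointer for a force_generic flag, but the buffering shape of __blake2s_update() is unchanged. A self-contained sketch of that shape with a toy compress function that only counts bytes (all names here are illustrative stand-ins, not the kernel symbols):

```c
#include <stddef.h>
#include <string.h>

#define BLK 64  /* stand-in for BLAKE2S_BLOCK_SIZE */

struct acc {
    unsigned char buf[BLK];
    size_t buflen;
    size_t compressed;   /* bytes fed to "compress", for the demo */
};

/* toy stand-in for blake2s_compress(): just counts consumed bytes */
static void toy_compress(struct acc *s, const unsigned char *in,
                         size_t nblocks, size_t inc)
{
    (void)in;
    s->compressed += nblocks * inc;
}

/* Same buffering shape as __blake2s_update(): top up the partial
 * buffer, compress full blocks straight from the input, but always
 * hold back at least one byte so the final block is hashed by the
 * finalization path with the "last block" flag and exact length. */
static void acc_update(struct acc *s, const unsigned char *in, size_t inlen)
{
    const size_t fill = BLK - s->buflen;

    if (!inlen)
        return;
    if (inlen > fill) {
        memcpy(s->buf + s->buflen, in, fill);
        toy_compress(s, s->buf, 1, BLK);
        s->buflen = 0;
        in += fill;
        inlen -= fill;
    }
    if (inlen > BLK) {
        const size_t nblocks = (inlen + BLK - 1) / BLK;
        /* hash one less (full) block than strictly possible */
        toy_compress(s, in, nblocks - 1, BLK);
        in += BLK * (nblocks - 1);
        inlen -= BLK * (nblocks - 1);
    }
    memcpy(s->buf + s->buflen, in, inlen);
    s->buflen += inlen;
}
```

The "one less block" trick is why the buffer is never left empty after an update: the finalization path must always have data to stamp with the last-block flag.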
+1
include/linux/ceph/libceph.h
··· 35 35 #define CEPH_OPT_TCP_NODELAY (1<<4) /* TCP_NODELAY on TCP sockets */ 36 36 #define CEPH_OPT_NOMSGSIGN (1<<5) /* don't sign msgs (msgr1) */ 37 37 #define CEPH_OPT_ABORT_ON_FULL (1<<6) /* abort w/ ENOSPC when full */ 38 + #define CEPH_OPT_RXBOUNCE (1<<7) /* double-buffer read data */ 38 39 39 40 #define CEPH_OPT_DEFAULT (CEPH_OPT_TCP_NODELAY) 40 41
+5
include/linux/ceph/messenger.h
··· 383 383 struct ceph_gcm_nonce in_gcm_nonce; 384 384 struct ceph_gcm_nonce out_gcm_nonce; 385 385 386 + struct page **in_enc_pages; 387 + int in_enc_page_cnt; 388 + int in_enc_resid; 389 + int in_enc_i; 386 390 struct page **out_enc_pages; 387 391 int out_enc_page_cnt; 388 392 int out_enc_resid; ··· 461 457 struct ceph_msg *out_msg; /* sending message (== tail of 462 458 out_sent) */ 463 459 460 + struct page *bounce_page; 464 461 u32 in_front_crc, in_middle_crc, in_data_crc; /* calculated crc */ 465 462 466 463 struct timespec64 last_keepalive_ack; /* keepalive2 ack stamp */
+1 -1
include/linux/fb.h
··· 262 262 263 263 /* Draws a rectangle */ 264 264 void (*fb_fillrect) (struct fb_info *info, const struct fb_fillrect *rect); 265 - /* Copy data from area to another. Obsolete. */ 265 + /* Copy data from area to another */ 266 266 void (*fb_copyarea) (struct fb_info *info, const struct fb_copyarea *region); 267 267 /* Draws a image to the display */ 268 268 void (*fb_imageblit) (struct fb_info *info, const struct fb_image *image);
+1 -1
include/linux/fs.h
··· 1483 1483 #ifdef CONFIG_FS_VERITY 1484 1484 const struct fsverity_operations *s_vop; 1485 1485 #endif 1486 - #ifdef CONFIG_UNICODE 1486 + #if IS_ENABLED(CONFIG_UNICODE) 1487 1487 struct unicode_map *s_encoding; 1488 1488 __u16 s_encoding_flags; 1489 1489 #endif
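The `#ifdef CONFIG_UNICODE` to `IS_ENABLED(CONFIG_UNICODE)` change matters because Kconfig emits `#define CONFIG_FOO 1` for built-in (=y) options but `#define CONFIG_FOO_MODULE 1` for modular (=m) ones, so a plain `#ifdef` silently drops the fields when the option is modular. A simplified copy of the IS_ENABLED() machinery from include/linux/kconfig.h, exercised with hypothetical CONFIG_DEMO_* options:

```c
/* Simplified version of the IS_ENABLED() machinery in
 * include/linux/kconfig.h. If the option macro is defined to 1,
 * __ARG_PLACEHOLDER_##val expands to "0," and the second argument
 * selected is 1; otherwise the pasted token is junk and 0 wins. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_BUILTIN(option) __is_defined(option)
#define IS_MODULE(option) __is_defined(option##_MODULE)
#define IS_ENABLED(option) (IS_BUILTIN(option) || IS_MODULE(option))

/* hypothetical options for the demo */
#define CONFIG_DEMO_Y 1          /* built in (=y)          */
#define CONFIG_DEMO_M_MODULE 1   /* built as a module (=m) */
                                 /* CONFIG_DEMO_N: unset   */

static int demo_y_enabled(void) { return IS_ENABLED(CONFIG_DEMO_Y); }
static int demo_m_enabled(void) { return IS_ENABLED(CONFIG_DEMO_M); }
static int demo_n_enabled(void) { return IS_ENABLED(CONFIG_DEMO_N); }
```

With the real macros, `#if IS_ENABLED(CONFIG_UNICODE)` keeps s_encoding in the superblock for both =y and =m builds.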
+4 -2
include/linux/if_vlan.h
··· 46 46 * @h_vlan_encapsulated_proto: packet type ID or len 47 47 */ 48 48 struct vlan_ethhdr { 49 - unsigned char h_dest[ETH_ALEN]; 50 - unsigned char h_source[ETH_ALEN]; 49 + struct_group(addrs, 50 + unsigned char h_dest[ETH_ALEN]; 51 + unsigned char h_source[ETH_ALEN]; 52 + ); 51 53 __be16 h_vlan_proto; 52 54 __be16 h_vlan_TCI; 53 55 __be16 h_vlan_encapsulated_proto;
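struct_group() (from include/linux/stddef.h) wraps the two MAC addresses so that a fortified memcpy() sees a single 12-byte destination named `addrs` instead of an intentional 6-byte overrun of h_dest. A simplified stand-in for the macro (the real one also supports struct tags and attributes) applied to a trimmed-down header:

```c
#include <string.h>

#define ETH_ALEN 6

/* Simplified version of the kernel's struct_group(): an anonymous
 * union keeps the members addressable by their original names while
 * also exposing them as one named, correctly sized object. */
#define struct_group(NAME, ...) \
    union { \
        struct { __VA_ARGS__ }; \
        struct { __VA_ARGS__ } NAME; \
    }

struct vlan_ethhdr_demo {
    struct_group(addrs,
        unsigned char h_dest[ETH_ALEN];
        unsigned char h_source[ETH_ALEN];
    );
    unsigned short h_vlan_proto;
};

/* One bounded memcpy() covers both addresses: FORTIFY_SOURCE-style
 * checks see a 12-byte destination rather than h_dest being overrun. */
static void copy_addrs(struct vlan_ethhdr_demo *dst,
                       const struct vlan_ethhdr_demo *src)
{
    memcpy(&dst->addrs, &src->addrs, sizeof(dst->addrs));
}
```

Callers that only touch h_dest or h_source are unaffected, which is why this change needs no follow-up at the use sites.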
+2
include/linux/iomap.h
··· 263 263 struct list_head io_list; /* next ioend in chain */ 264 264 u16 io_type; 265 265 u16 io_flags; /* IOMAP_F_* */ 266 + u32 io_folios; /* folios added to ioend */ 266 267 struct inode *io_inode; /* file being written to */ 267 268 size_t io_size; /* size of the extent */ 268 269 loff_t io_offset; /* offset in the file */ 270 + sector_t io_sector; /* start sector of ioend */ 269 271 struct bio *io_bio; /* bio being built */ 270 272 struct bio io_inline_bio; /* MUST BE LAST! */ 271 273 };
+109 -3
include/linux/kvm_host.h
··· 29 29 #include <linux/refcount.h> 30 30 #include <linux/nospec.h> 31 31 #include <linux/notifier.h> 32 + #include <linux/ftrace.h> 32 33 #include <linux/hashtable.h> 34 + #include <linux/instrumentation.h> 33 35 #include <linux/interval_tree.h> 34 36 #include <linux/rbtree.h> 35 37 #include <linux/xarray.h> ··· 370 368 u64 last_used_slot_gen; 371 369 }; 372 370 373 - /* must be called with irqs disabled */ 374 - static __always_inline void guest_enter_irqoff(void) 371 + /* 372 + * Start accounting time towards a guest. 373 + * Must be called before entering guest context. 374 + */ 375 + static __always_inline void guest_timing_enter_irqoff(void) 375 376 { 376 377 /* 377 378 * This is running in ioctl context so its safe to assume that it's the ··· 383 378 instrumentation_begin(); 384 379 vtime_account_guest_enter(); 385 380 instrumentation_end(); 381 + } 386 382 383 + /* 384 + * Enter guest context and enter an RCU extended quiescent state. 385 + * 386 + * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is 387 + * unsafe to use any code which may directly or indirectly use RCU, tracing 388 + * (including IRQ flag tracing), or lockdep. All code in this period must be 389 + * non-instrumentable. 390 + */ 391 + static __always_inline void guest_context_enter_irqoff(void) 392 + { 387 393 /* 388 394 * KVM does not hold any references to rcu protected data when it 389 395 * switches CPU into a guest mode. In fact switching to a guest mode ··· 410 394 } 411 395 } 412 396 413 - static __always_inline void guest_exit_irqoff(void) 397 + /* 398 + * Deprecated. Architectures should move to guest_timing_enter_irqoff() and 399 + * guest_state_enter_irqoff(). 
400 + */ 401 + static __always_inline void guest_enter_irqoff(void) 402 + { 403 + guest_timing_enter_irqoff(); 404 + guest_context_enter_irqoff(); 405 + } 406 + 407 + /** 408 + * guest_state_enter_irqoff - Fixup state when entering a guest 409 + * 410 + * Entry to a guest will enable interrupts, but the kernel state is interrupts 411 + * disabled when this is invoked. Also tell RCU about it. 412 + * 413 + * 1) Trace interrupts on state 414 + * 2) Invoke context tracking if enabled to adjust RCU state 415 + * 3) Tell lockdep that interrupts are enabled 416 + * 417 + * Invoked from architecture specific code before entering a guest. 418 + * Must be called with interrupts disabled and the caller must be 419 + * non-instrumentable. 420 + * The caller has to invoke guest_timing_enter_irqoff() before this. 421 + * 422 + * Note: this is analogous to exit_to_user_mode(). 423 + */ 424 + static __always_inline void guest_state_enter_irqoff(void) 425 + { 426 + instrumentation_begin(); 427 + trace_hardirqs_on_prepare(); 428 + lockdep_hardirqs_on_prepare(CALLER_ADDR0); 429 + instrumentation_end(); 430 + 431 + guest_context_enter_irqoff(); 432 + lockdep_hardirqs_on(CALLER_ADDR0); 433 + } 434 + 435 + /* 436 + * Exit guest context and exit an RCU extended quiescent state. 437 + * 438 + * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is 439 + * unsafe to use any code which may directly or indirectly use RCU, tracing 440 + * (including IRQ flag tracing), or lockdep. All code in this period must be 441 + * non-instrumentable. 442 + */ 443 + static __always_inline void guest_context_exit_irqoff(void) 414 444 { 415 445 context_tracking_guest_exit(); 446 + } 416 447 448 + /* 449 + * Stop accounting time towards a guest. 450 + * Must be called after exiting guest context. 
451 + */ 452 + static __always_inline void guest_timing_exit_irqoff(void) 453 + { 417 454 instrumentation_begin(); 418 455 /* Flush the guest cputime we spent on the guest */ 419 456 vtime_account_guest_exit(); 420 457 instrumentation_end(); 458 + } 459 + 460 + /* 461 + * Deprecated. Architectures should move to guest_state_exit_irqoff() and 462 + * guest_timing_exit_irqoff(). 463 + */ 464 + static __always_inline void guest_exit_irqoff(void) 465 + { 466 + guest_context_exit_irqoff(); 467 + guest_timing_exit_irqoff(); 421 468 } 422 469 423 470 static inline void guest_exit(void) ··· 490 411 local_irq_save(flags); 491 412 guest_exit_irqoff(); 492 413 local_irq_restore(flags); 414 + } 415 + 416 + /** 417 + * guest_state_exit_irqoff - Establish state when returning from guest mode 418 + * 419 + * Entry from a guest disables interrupts, but guest mode is traced as 420 + * interrupts enabled. Also with NO_HZ_FULL RCU might be idle. 421 + * 422 + * 1) Tell lockdep that interrupts are disabled 423 + * 2) Invoke context tracking if enabled to reactivate RCU 424 + * 3) Trace interrupts off state 425 + * 426 + * Invoked from architecture specific code after exiting a guest. 427 + * Must be invoked with interrupts disabled and the caller must be 428 + * non-instrumentable. 429 + * The caller has to invoke guest_timing_exit_irqoff() after this. 430 + * 431 + * Note: this is analogous to enter_from_user_mode(). 432 + */ 433 + static __always_inline void guest_state_exit_irqoff(void) 434 + { 435 + lockdep_hardirqs_off(CALLER_ADDR0); 436 + guest_context_exit_irqoff(); 437 + 438 + instrumentation_begin(); 439 + trace_hardirqs_off_finish(); 440 + instrumentation_end(); 493 441 } 494 442 495 443 static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
+1
include/linux/libata.h
··· 380 380 ATA_HORKAGE_MAX_TRIM_128M = (1 << 26), /* Limit max trim size to 128M */ 381 381 ATA_HORKAGE_NO_NCQ_ON_ATI = (1 << 27), /* Disable NCQ on ATI chipset */ 382 382 ATA_HORKAGE_NO_ID_DEV_LOG = (1 << 28), /* Identify device log missing */ 383 + ATA_HORKAGE_NO_LOG_DIR = (1 << 29), /* Do not read log directory */ 383 384 384 385 /* DMA mask for user DMA control: User visible values; DO NOT 385 386 renumber */
+7
include/linux/netfs.h
··· 244 244 int (*prepare_write)(struct netfs_cache_resources *cres, 245 245 loff_t *_start, size_t *_len, loff_t i_size, 246 246 bool no_space_allocated_yet); 247 + 248 + /* Query the occupancy of the cache in a region, returning where the 249 + * next chunk of data starts and how long it is. 250 + */ 251 + int (*query_occupancy)(struct netfs_cache_resources *cres, 252 + loff_t start, size_t len, size_t granularity, 253 + loff_t *_data_start, size_t *_data_len); 247 254 }; 248 255 249 256 struct readahead_control;
+19
include/linux/page_table_check.h
··· 26 26 pmd_t *pmdp, pmd_t pmd); 27 27 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr, 28 28 pud_t *pudp, pud_t pud); 29 + void __page_table_check_pte_clear_range(struct mm_struct *mm, 30 + unsigned long addr, 31 + pmd_t pmd); 29 32 30 33 static inline void page_table_check_alloc(struct page *page, unsigned int order) 31 34 { ··· 103 100 __page_table_check_pud_set(mm, addr, pudp, pud); 104 101 } 105 102 103 + static inline void page_table_check_pte_clear_range(struct mm_struct *mm, 104 + unsigned long addr, 105 + pmd_t pmd) 106 + { 107 + if (static_branch_likely(&page_table_check_disabled)) 108 + return; 109 + 110 + __page_table_check_pte_clear_range(mm, addr, pmd); 111 + } 112 + 106 113 #else 107 114 108 115 static inline void page_table_check_alloc(struct page *page, unsigned int order) ··· 153 140 static inline void page_table_check_pud_set(struct mm_struct *mm, 154 141 unsigned long addr, pud_t *pudp, 155 142 pud_t pud) 143 + { 144 + } 145 + 146 + static inline void page_table_check_pte_clear_range(struct mm_struct *mm, 147 + unsigned long addr, 148 + pmd_t pmd) 156 149 { 157 150 } 158 151
+1
include/linux/pgtable.h
··· 62 62 { 63 63 return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1); 64 64 } 65 + #define pte_index pte_index 65 66 66 67 #ifndef pmd_index 67 68 static inline unsigned long pmd_index(unsigned long address)
-1
include/linux/sched.h
··· 1680 1680 #define PF_MEMALLOC 0x00000800 /* Allocating memory */ 1681 1681 #define PF_NPROC_EXCEEDED 0x00001000 /* set_user() noticed that RLIMIT_NPROC was exceeded */ 1682 1682 #define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */ 1683 - #define PF_USED_ASYNC 0x00004000 /* Used async_schedule*(), used by module init */ 1684 1683 #define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */ 1685 1684 #define PF_FROZEN 0x00010000 /* Frozen for system suspend */ 1686 1685 #define PF_KSWAPD 0x00020000 /* I am kswapd */
+12
include/net/ax25.h
··· 239 239 #if defined(CONFIG_AX25_DAMA_SLAVE) || defined(CONFIG_AX25_DAMA_MASTER) 240 240 ax25_dama_info dama; 241 241 #endif 242 + refcount_t refcount; 242 243 } ax25_dev; 243 244 244 245 typedef struct ax25_cb { ··· 294 293 } 295 294 } 296 295 296 + static inline void ax25_dev_hold(ax25_dev *ax25_dev) 297 + { 298 + refcount_inc(&ax25_dev->refcount); 299 + } 300 + 301 + static inline void ax25_dev_put(ax25_dev *ax25_dev) 302 + { 303 + if (refcount_dec_and_test(&ax25_dev->refcount)) { 304 + kfree(ax25_dev); 305 + } 306 + } 297 307 static inline __be16 ax25_type_trans(struct sk_buff *skb, struct net_device *dev) 298 308 { 299 309 skb->dev = dev;
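The new ax25_dev_hold()/ax25_dev_put() pair is the standard last-reference-frees pattern built on refcount_t. A minimal userspace model using a plain counter (refcount_t additionally saturates on overflow/underflow, which this sketch does not model; all names are illustrative):

```c
#include <stdlib.h>

/* Every lookup takes a reference, every user drops one, and the
 * object is freed exactly when the last reference goes away. */
struct dev_demo {
    int refcount;
    int *freed_flag;   /* set when the object is destroyed */
};

static struct dev_demo *dev_alloc(int *freed_flag)
{
    struct dev_demo *d = malloc(sizeof(*d));
    d->refcount = 1;            /* the creator holds the first reference */
    d->freed_flag = freed_flag;
    *freed_flag = 0;
    return d;
}

static void dev_hold(struct dev_demo *d)    /* cf. ax25_dev_hold() */
{
    d->refcount++;
}

static void dev_put(struct dev_demo *d)     /* cf. ax25_dev_put() */
{
    if (--d->refcount == 0) {
        *d->freed_flag = 1;
        free(d);
    }
}
```

This is what makes the kfree() in ax25_dev_put() safe: it can only run once no lookup path still holds the device.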
+13 -5
include/net/neighbour.h
··· 350 350 return __neigh_create(tbl, pkey, dev, true); 351 351 } 352 352 void neigh_destroy(struct neighbour *neigh); 353 - int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb); 353 + int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb, 354 + const bool immediate_ok); 354 355 int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new, u32 flags, 355 356 u32 nlmsg_pid); 356 357 void __neigh_set_probe_once(struct neighbour *neigh); ··· 461 460 462 461 #define neigh_hold(n) refcount_inc(&(n)->refcnt) 463 462 464 - static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) 463 + static __always_inline int neigh_event_send_probe(struct neighbour *neigh, 464 + struct sk_buff *skb, 465 + const bool immediate_ok) 465 466 { 466 467 unsigned long now = jiffies; 467 - 468 + 468 469 if (READ_ONCE(neigh->used) != now) 469 470 WRITE_ONCE(neigh->used, now); 470 - if (!(neigh->nud_state&(NUD_CONNECTED|NUD_DELAY|NUD_PROBE))) 471 - return __neigh_event_send(neigh, skb); 471 + if (!(neigh->nud_state & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE))) 472 + return __neigh_event_send(neigh, skb, immediate_ok); 472 473 return 0; 474 + } 475 + 476 + static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) 477 + { 478 + return neigh_event_send_probe(neigh, skb, true); 473 479 } 474 480 475 481 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+15
include/sound/pcm.h
··· 617 617 void snd_pcm_stream_lock_irq(struct snd_pcm_substream *substream); 618 618 void snd_pcm_stream_unlock_irq(struct snd_pcm_substream *substream); 619 619 unsigned long _snd_pcm_stream_lock_irqsave(struct snd_pcm_substream *substream); 620 + unsigned long _snd_pcm_stream_lock_irqsave_nested(struct snd_pcm_substream *substream); 620 621 621 622 /** 622 623 * snd_pcm_stream_lock_irqsave - Lock the PCM stream ··· 635 634 } while (0) 636 635 void snd_pcm_stream_unlock_irqrestore(struct snd_pcm_substream *substream, 637 636 unsigned long flags); 637 + 638 + /** 639 + * snd_pcm_stream_lock_irqsave_nested - Single-nested PCM stream locking 640 + * @substream: PCM substream 641 + * @flags: irq flags 642 + * 643 + * This locks the PCM stream like snd_pcm_stream_lock_irqsave() but with 644 + * the single-depth lockdep subclass. 645 + */ 646 + #define snd_pcm_stream_lock_irqsave_nested(substream, flags) \ 647 + do { \ 648 + typecheck(unsigned long, flags); \ 649 + flags = _snd_pcm_stream_lock_irqsave_nested(substream); \ 650 + } while (0) 638 651 639 652 /** 640 653 * snd_pcm_group_for_each_entry - iterate over the linked substreams
+3 -3
include/uapi/linux/kvm.h
··· 1624 1624 #define KVM_S390_NORMAL_RESET _IO(KVMIO, 0xc3) 1625 1625 #define KVM_S390_CLEAR_RESET _IO(KVMIO, 0xc4) 1626 1626 1627 - /* Available with KVM_CAP_XSAVE2 */ 1628 - #define KVM_GET_XSAVE2 _IOR(KVMIO, 0xcf, struct kvm_xsave) 1629 - 1630 1627 struct kvm_s390_pv_sec_parm { 1631 1628 __u64 origin; 1632 1629 __u64 length; ··· 2044 2047 }; 2045 2048 2046 2049 #define KVM_GET_STATS_FD _IO(KVMIO, 0xce) 2050 + 2051 + /* Available with KVM_CAP_XSAVE2 */ 2052 + #define KVM_GET_XSAVE2 _IOR(KVMIO, 0xcf, struct kvm_xsave) 2047 2053 2048 2054 #endif /* __LINUX_KVM_H */
+5 -6
include/uapi/linux/smc_diag.h
··· 84 84 /* SMC_DIAG_LINKINFO */ 85 85 86 86 struct smc_diag_linkinfo { 87 - __u8 link_id; /* link identifier */ 88 - __u8 ibname[IB_DEVICE_NAME_MAX]; /* name of the RDMA device */ 89 - __u8 ibport; /* RDMA device port number */ 90 - __u8 gid[40]; /* local GID */ 91 - __u8 peer_gid[40]; /* peer GID */ 92 - __aligned_u64 net_cookie; /* RDMA device net namespace */ 87 + __u8 link_id; /* link identifier */ 88 + __u8 ibname[IB_DEVICE_NAME_MAX]; /* name of the RDMA device */ 89 + __u8 ibport; /* RDMA device port number */ 90 + __u8 gid[40]; /* local GID */ 91 + __u8 peer_gid[40]; /* peer GID */ 93 92 }; 94 93 95 94 struct smc_diag_lgrinfo {
+3 -1
include/uapi/sound/asound.h
··· 56 56 * * 57 57 ****************************************************************************/ 58 58 59 + #define AES_IEC958_STATUS_SIZE 24 60 + 59 61 struct snd_aes_iec958 { 60 - unsigned char status[24]; /* AES/IEC958 channel status bits */ 62 + unsigned char status[AES_IEC958_STATUS_SIZE]; /* AES/IEC958 channel status bits */ 61 63 unsigned char subcode[147]; /* AES/IEC958 subcode bits */ 62 64 unsigned char pad; /* nothing */ 63 65 unsigned char dig_subframe[4]; /* AES/IEC958 subframe bits */
+7 -1
include/uapi/xen/gntdev.h
··· 47 47 /* 48 48 * Inserts the grant references into the mapping table of an instance 49 49 * of gntdev. N.B. This does not perform the mapping, which is deferred 50 - * until mmap() is called with @index as the offset. 50 + * until mmap() is called with @index as the offset. @index should be 51 + * considered opaque to userspace, with one exception: if no grant 52 + * references have ever been inserted into the mapping table of this 53 + * instance, @index will be set to 0. This is necessary to use gntdev 54 + * with userspace APIs that expect a file descriptor that can be 55 + * mmap()'d at offset 0, such as Wayland. If @count is set to 0, this 56 + * ioctl will fail. 51 57 */ 52 58 #define IOCTL_GNTDEV_MAP_GRANT_REF \ 53 59 _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_gntdev_map_grant_ref))
-2
include/xen/xenbus_dev.h
··· 1 1 /****************************************************************************** 2 - * evtchn.h 3 - * 4 2 * Interface to /dev/xen/xenbus_backend. 5 3 * 6 4 * Copyright (c) 2011 Bastian Blank <waldi@debian.org>
+2 -2
ipc/sem.c
··· 1964 1964 */ 1965 1965 un = lookup_undo(ulp, semid); 1966 1966 if (un) { 1967 + spin_unlock(&ulp->lock); 1967 1968 kvfree(new); 1968 1969 goto success; 1969 1970 } ··· 1977 1976 ipc_assert_locked_object(&sma->sem_perm); 1978 1977 list_add(&new->list_id, &sma->list_id); 1979 1978 un = new; 1980 - 1981 - success: 1982 1979 spin_unlock(&ulp->lock); 1980 + success: 1983 1981 sem_unlock(sma, -1); 1984 1982 out: 1985 1983 return un;
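The ipc/sem.c fix moves the spin_unlock(&ulp->lock) so that every path reaching the success label has already dropped the lock; previously the lookup-hit path jumped there still holding it once the label stopped unlocking. The corrected control flow can be modelled with a depth counter (all names here are illustrative):

```c
/* Toy model of the corrected find_alloc_undo() flow: both the
 * "already exists" shortcut and the allocation path must drop the
 * lock themselves before reaching the shared success label. */
static int lock_depth;

static void demo_lock(void)   { lock_depth++; }
static void demo_unlock(void) { lock_depth--; }

static int find_or_alloc(int found)
{
    demo_lock();
    if (found) {
        demo_unlock();      /* the fix: unlock before the goto */
        goto success;
    }
    /* ... allocation path ... */
    demo_unlock();
success:
    return lock_depth;      /* must be 0 on every path */
}
```

Putting the unlock before each jump, rather than at the label, keeps the lock's scope explicit on both branches.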
-3
kernel/async.c
··· 205 205 atomic_inc(&entry_count); 206 206 spin_unlock_irqrestore(&async_lock, flags); 207 207 208 - /* mark that this task has queued an async job, used by module init */ 209 - current->flags |= PF_USED_ASYNC; 210 - 211 208 /* schedule for execution */ 212 209 queue_work_node(node, system_unbound_wq, &entry->work); 213 210
+43 -19
kernel/audit.c
··· 541 541 /** 542 542 * kauditd_rehold_skb - Handle a audit record send failure in the hold queue 543 543 * @skb: audit record 544 + * @error: error code (unused) 544 545 * 545 546 * Description: 546 547 * This should only be used by the kauditd_thread when it fails to flush the 547 548 * hold queue. 548 549 */ 549 - static void kauditd_rehold_skb(struct sk_buff *skb) 550 + static void kauditd_rehold_skb(struct sk_buff *skb, __always_unused int error) 550 551 { 551 - /* put the record back in the queue at the same place */ 552 - skb_queue_head(&audit_hold_queue, skb); 552 + /* put the record back in the queue */ 553 + skb_queue_tail(&audit_hold_queue, skb); 553 554 } 554 555 555 556 /** 556 557 * kauditd_hold_skb - Queue an audit record, waiting for auditd 557 558 * @skb: audit record 559 + * @error: error code 558 560 * 559 561 * Description: 560 562 * Queue the audit record, waiting for an instance of auditd. When this ··· 566 564 * and queue it, if we have room. If we want to hold on to the record, but we 567 565 * don't have room, record a record lost message. 568 566 */ 569 - static void kauditd_hold_skb(struct sk_buff *skb) 567 + static void kauditd_hold_skb(struct sk_buff *skb, int error) 570 568 { 571 569 /* at this point it is uncertain if we will ever send this to auditd so 572 570 * try to send the message via printk before we go any further */ 573 571 kauditd_printk_skb(skb); 574 572 575 573 /* can we just silently drop the message? 
*/ 576 - if (!audit_default) { 577 - kfree_skb(skb); 578 - return; 574 + if (!audit_default) 575 + goto drop; 576 + 577 + /* the hold queue is only for when the daemon goes away completely, 578 + * not -EAGAIN failures; if we are in a -EAGAIN state requeue the 579 + * record on the retry queue unless it's full, in which case drop it 580 + */ 581 + if (error == -EAGAIN) { 582 + if (!audit_backlog_limit || 583 + skb_queue_len(&audit_retry_queue) < audit_backlog_limit) { 584 + skb_queue_tail(&audit_retry_queue, skb); 585 + return; 586 + } 587 + audit_log_lost("kauditd retry queue overflow"); 588 + goto drop; 579 589 } 580 590 581 - /* if we have room, queue the message */ 591 + /* if we have room in the hold queue, queue the message */ 582 592 if (!audit_backlog_limit || 583 593 skb_queue_len(&audit_hold_queue) < audit_backlog_limit) { 584 594 skb_queue_tail(&audit_hold_queue, skb); ··· 599 585 600 586 /* we have no other options - drop the message */ 601 587 audit_log_lost("kauditd hold queue overflow"); 588 + drop: 602 589 kfree_skb(skb); 603 590 } 604 591 605 592 /** 606 593 * kauditd_retry_skb - Queue an audit record, attempt to send again to auditd 607 594 * @skb: audit record 595 + * @error: error code (unused) 608 596 * 609 597 * Description: 610 598 * Not as serious as kauditd_hold_skb() as we still have a connected auditd, 611 599 * but for some reason we are having problems sending it audit records so 612 600 * queue the given record and attempt to resend. 
613 601 */ 614 - static void kauditd_retry_skb(struct sk_buff *skb) 602 + static void kauditd_retry_skb(struct sk_buff *skb, __always_unused int error) 615 603 { 616 - /* NOTE: because records should only live in the retry queue for a 617 - * short period of time, before either being sent or moved to the hold 618 - * queue, we don't currently enforce a limit on this queue */ 619 - skb_queue_tail(&audit_retry_queue, skb); 604 + if (!audit_backlog_limit || 605 + skb_queue_len(&audit_retry_queue) < audit_backlog_limit) { 606 + skb_queue_tail(&audit_retry_queue, skb); 607 + return; 608 + } 609 + 610 + /* we have to drop the record, send it via printk as a last effort */ 611 + kauditd_printk_skb(skb); 612 + audit_log_lost("kauditd retry queue overflow"); 613 + kfree_skb(skb); 620 614 } 621 615 622 616 /** ··· 662 640 /* flush the retry queue to the hold queue, but don't touch the main 663 641 * queue since we need to process that normally for multicast */ 664 642 while ((skb = skb_dequeue(&audit_retry_queue))) 665 - kauditd_hold_skb(skb); 643 + kauditd_hold_skb(skb, -ECONNREFUSED); 666 644 } 667 645 668 646 /** ··· 736 714 struct sk_buff_head *queue, 737 715 unsigned int retry_limit, 738 716 void (*skb_hook)(struct sk_buff *skb), 739 - void (*err_hook)(struct sk_buff *skb)) 717 + void (*err_hook)(struct sk_buff *skb, int error)) 740 718 { 741 719 int rc = 0; 742 - struct sk_buff *skb; 720 + struct sk_buff *skb = NULL; 721 + struct sk_buff *skb_tail; 743 722 unsigned int failed = 0; 744 723 745 724 /* NOTE: kauditd_thread takes care of all our locking, we just use 746 725 * the netlink info passed to us (e.g. sk and portid) */ 747 726 748 - while ((skb = skb_dequeue(queue))) { 727 + skb_tail = skb_peek_tail(queue); 728 + while ((skb != skb_tail) && (skb = skb_dequeue(queue))) { 749 729 /* call the skb_hook for each skb we touch */ 750 730 if (skb_hook) 751 731 (*skb_hook)(skb); ··· 755 731 /* can we send to anyone via unicast? 
*/ 756 732 if (!sk) { 757 733 if (err_hook) 758 - (*err_hook)(skb); 734 + (*err_hook)(skb, -ECONNREFUSED); 759 735 continue; 760 736 } 761 737 ··· 769 745 rc == -ECONNREFUSED || rc == -EPERM) { 770 746 sk = NULL; 771 747 if (err_hook) 772 - (*err_hook)(skb); 748 + (*err_hook)(skb, rc); 773 749 if (rc == -EAGAIN) 774 750 rc = 0; 775 751 /* continue to drain the queue */
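The audit changes bound the retry queue by audit_backlog_limit the same way the hold queue already was: below the limit (with 0 meaning "no limit") a record is queued, otherwise it is dropped and the loss is accounted via audit_log_lost(). A stripped-down model of that admission rule (structure and names are illustrative):

```c
#include <stddef.h>

/* Bounded-queue admission as used for the audit retry/hold queues:
 * a limit of 0 disables the bound; on overflow the record is dropped
 * and a loss counter bumped instead of growing without bound. */
struct bounded_queue {
    size_t len;
    size_t limit;     /* 0 == unlimited */
    size_t lost;
};

static int bq_enqueue(struct bounded_queue *q)
{
    if (!q->limit || q->len < q->limit) {
        q->len++;
        return 0;
    }
    q->lost++;        /* cf. audit_log_lost() on queue overflow */
    return -1;
}
```

The same check appearing in both kauditd_hold_skb() and kauditd_retry_skb() is what closes the previously unbounded growth of the retry queue.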
+1 -1
kernel/bpf/bpf_lsm.c
··· 207 207 208 208 BTF_ID(func, bpf_lsm_syslog) 209 209 BTF_ID(func, bpf_lsm_task_alloc) 210 - BTF_ID(func, bpf_lsm_task_getsecid_subj) 210 + BTF_ID(func, bpf_lsm_current_getsecid_subj) 211 211 BTF_ID(func, bpf_lsm_task_getsecid_obj) 212 212 BTF_ID(func, bpf_lsm_task_prctl) 213 213 BTF_ID(func, bpf_lsm_task_setscheduler)
+1 -1
kernel/bpf/ringbuf.c
··· 104 104 } 105 105 106 106 rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages, 107 - VM_ALLOC | VM_USERMAP, PAGE_KERNEL); 107 + VM_MAP | VM_USERMAP, PAGE_KERNEL); 108 108 if (rb) { 109 109 kmemleak_not_leak(pages); 110 110 rb->pages = pages;
+3 -2
kernel/bpf/trampoline.c
··· 550 550 static void notrace inc_misses_counter(struct bpf_prog *prog) 551 551 { 552 552 struct bpf_prog_stats *stats; 553 + unsigned int flags; 553 554 554 555 stats = this_cpu_ptr(prog->stats); 555 - u64_stats_update_begin(&stats->syncp); 556 + flags = u64_stats_update_begin_irqsave(&stats->syncp); 556 557 u64_stats_inc(&stats->misses); 557 - u64_stats_update_end(&stats->syncp); 558 + u64_stats_update_end_irqrestore(&stats->syncp, flags); 558 559 } 559 560 560 561 /* The logic is similar to bpf_prog_run(), but with an explicit
+14
kernel/cgroup/cgroup-v1.c
··· 549 549 550 550 BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX); 551 551 552 + /* 553 + * Release agent gets called with all capabilities, 554 + * require capabilities to set release agent. 555 + */ 556 + if ((of->file->f_cred->user_ns != &init_user_ns) || 557 + !capable(CAP_SYS_ADMIN)) 558 + return -EPERM; 559 + 552 560 cgrp = cgroup_kn_lock_live(of->kn, false); 553 561 if (!cgrp) 554 562 return -ENODEV; ··· 962 954 /* Specifying two release agents is forbidden */ 963 955 if (ctx->release_agent) 964 956 return invalfc(fc, "release_agent respecified"); 957 + /* 958 + * Release agent gets called with all capabilities, 959 + * require capabilities to set release agent. 960 + */ 961 + if ((fc->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN)) 962 + return invalfc(fc, "Setting release_agent not allowed"); 965 963 ctx->release_agent = param->string; 966 964 param->string = NULL; 967 965 break;
+51 -14
kernel/cgroup/cpuset.c
··· 591 591 } 592 592 593 593 /* 594 + * validate_change_legacy() - Validate conditions specific to legacy (v1) 595 + * behavior. 596 + */ 597 + static int validate_change_legacy(struct cpuset *cur, struct cpuset *trial) 598 + { 599 + struct cgroup_subsys_state *css; 600 + struct cpuset *c, *par; 601 + int ret; 602 + 603 + WARN_ON_ONCE(!rcu_read_lock_held()); 604 + 605 + /* Each of our child cpusets must be a subset of us */ 606 + ret = -EBUSY; 607 + cpuset_for_each_child(c, css, cur) 608 + if (!is_cpuset_subset(c, trial)) 609 + goto out; 610 + 611 + /* On legacy hierarchy, we must be a subset of our parent cpuset. */ 612 + ret = -EACCES; 613 + par = parent_cs(cur); 614 + if (par && !is_cpuset_subset(trial, par)) 615 + goto out; 616 + 617 + ret = 0; 618 + out: 619 + return ret; 620 + } 621 + 622 + /* 594 623 * validate_change() - Used to validate that any proposed cpuset change 595 624 * follows the structural rules for cpusets. 596 625 * ··· 643 614 { 644 615 struct cgroup_subsys_state *css; 645 616 struct cpuset *c, *par; 646 - int ret; 647 - 648 - /* The checks don't apply to root cpuset */ 649 - if (cur == &top_cpuset) 650 - return 0; 617 + int ret = 0; 651 618 652 619 rcu_read_lock(); 653 - par = parent_cs(cur); 654 620 655 - /* On legacy hierarchy, we must be a subset of our parent cpuset. */ 656 - ret = -EACCES; 657 - if (!is_in_v2_mode() && !is_cpuset_subset(trial, par)) 621 + if (!is_in_v2_mode()) 622 + ret = validate_change_legacy(cur, trial); 623 + if (ret) 658 624 goto out; 625 + 626 + /* Remaining checks don't apply to root cpuset */ 627 + if (cur == &top_cpuset) 628 + goto out; 629 + 630 + par = parent_cs(cur); 659 631 660 632 /* 661 633 * If either I or some sibling (!= me) is exclusive, we can't ··· 1205 1175 * 1206 1176 * Because of the implicit cpu exclusive nature of a partition root, 1207 1177 * cpumask changes that violates the cpu exclusivity rule will not be 1208 - * permitted when checked by validate_change(). 
The validate_change() 1209 - * function will also prevent any changes to the cpu list if it is not 1210 - * a superset of children's cpu lists. 1178 + * permitted when checked by validate_change(). 1211 1179 */ 1212 1180 static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd, 1213 1181 struct cpumask *newmask, ··· 1550 1522 struct cpuset *sibling; 1551 1523 struct cgroup_subsys_state *pos_css; 1552 1524 1525 + percpu_rwsem_assert_held(&cpuset_rwsem); 1526 + 1553 1527 /* 1554 1528 * Check all its siblings and call update_cpumasks_hier() 1555 1529 * if their use_parent_ecpus flag is set in order for them 1556 1530 * to use the right effective_cpus value. 1531 + * 1532 + * The update_cpumasks_hier() function may sleep. So we have to 1533 + * release the RCU read lock before calling it. 1557 1534 */ 1558 1535 rcu_read_lock(); 1559 1536 cpuset_for_each_child(sibling, pos_css, parent) { ··· 1566 1533 continue; 1567 1534 if (!sibling->use_parent_ecpus) 1568 1535 continue; 1536 + if (!css_tryget_online(&sibling->css)) 1537 + continue; 1569 1538 1539 + rcu_read_unlock(); 1570 1540 update_cpumasks_hier(sibling, tmp); 1541 + rcu_read_lock(); 1542 + css_put(&sibling->css); 1571 1543 } 1572 1544 rcu_read_unlock(); 1573 1545 } ··· 1645 1607 * Make sure that subparts_cpus is a subset of cpus_allowed. 1646 1608 */ 1647 1609 if (cs->nr_subparts_cpus) { 1648 - cpumask_andnot(cs->subparts_cpus, cs->subparts_cpus, 1649 - cs->cpus_allowed); 1610 + cpumask_and(cs->subparts_cpus, cs->subparts_cpus, cs->cpus_allowed); 1650 1611 cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus); 1651 1612 } 1652 1613 spin_unlock_irq(&callback_lock);
+5 -20
kernel/module.c
··· 3725 3725 } 3726 3726 freeinit->module_init = mod->init_layout.base; 3727 3727 3728 - /* 3729 - * We want to find out whether @mod uses async during init. Clear 3730 - * PF_USED_ASYNC. async_schedule*() will set it. 3731 - */ 3732 - current->flags &= ~PF_USED_ASYNC; 3733 - 3734 3728 do_mod_ctors(mod); 3735 3729 /* Start the module */ 3736 3730 if (mod->init != NULL) ··· 3750 3756 3751 3757 /* 3752 3758 * We need to finish all async code before the module init sequence 3753 - * is done. This has potential to deadlock. For example, a newly 3754 - * detected block device can trigger request_module() of the 3755 - * default iosched from async probing task. Once userland helper 3756 - * reaches here, async_synchronize_full() will wait on the async 3757 - * task waiting on request_module() and deadlock. 3759 + * is done. This has potential to deadlock if synchronous module 3760 + * loading is requested from async (which is not allowed!). 3758 3761 * 3759 - * This deadlock is avoided by perfomring async_synchronize_full() 3760 - * iff module init queued any async jobs. This isn't a full 3761 - * solution as it will deadlock the same if module loading from 3762 - * async jobs nests more than once; however, due to the various 3763 - * constraints, this hack seems to be the best option for now. 3764 - * Please refer to the following thread for details. 3765 - * 3766 - * http://thread.gmane.org/gmane.linux.kernel/1420814 3762 + * See commit 0fdff3ec6d87 ("async, kmod: warn on synchronous 3763 + * request_module() from async workers") for more details. 3767 3764 */ 3768 - if (!mod->async_probe_requested && (current->flags & PF_USED_ASYNC)) 3765 + if (!mod->async_probe_requested) 3769 3766 async_synchronize_full(); 3770 3767 3771 3768 ftrace_free_mem(mod, mod->init_layout.base, mod->init_layout.base +
+1 -1
kernel/printk/sysctl.c
··· 12 12 static const int ten_thousand = 10000; 13 13 14 14 static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write, 15 - void __user *buffer, size_t *lenp, loff_t *ppos) 15 + void *buffer, size_t *lenp, loff_t *ppos) 16 16 { 17 17 if (write && !capable(CAP_SYS_ADMIN)) 18 18 return -EPERM;
+2 -3
kernel/stackleak.c
··· 70 70 #define skip_erasing() false 71 71 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */ 72 72 73 - asmlinkage void notrace stackleak_erase(void) 73 + asmlinkage void noinstr stackleak_erase(void) 74 74 { 75 75 /* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */ 76 76 unsigned long kstack_ptr = current->lowest_stack; ··· 124 124 /* Reset the 'lowest_stack' value for the next syscall */ 125 125 current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64; 126 126 } 127 - NOKPROBE_SYMBOL(stackleak_erase); 128 127 129 - void __used __no_caller_saved_registers notrace stackleak_track_stack(void) 128 + void __used __no_caller_saved_registers noinstr stackleak_track_stack(void) 130 129 { 131 130 unsigned long sp = current_stack_pointer; 132 131
+2 -2
lib/crypto/blake2s.c
··· 18 18 19 19 void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) 20 20 { 21 - __blake2s_update(state, in, inlen, blake2s_compress); 21 + __blake2s_update(state, in, inlen, false); 22 22 } 23 23 EXPORT_SYMBOL(blake2s_update); 24 24 25 25 void blake2s_final(struct blake2s_state *state, u8 *out) 26 26 { 27 27 WARN_ON(IS_ENABLED(DEBUG) && !out); 28 - __blake2s_final(state, out, blake2s_compress); 28 + __blake2s_final(state, out, false); 29 29 memzero_explicit(state, sizeof(*state)); 30 30 } 31 31 EXPORT_SYMBOL(blake2s_final);
+2
mm/debug_vm_pgtable.c
··· 171 171 ptep_test_and_clear_young(args->vma, args->vaddr, args->ptep); 172 172 pte = ptep_get(args->ptep); 173 173 WARN_ON(pte_young(pte)); 174 + 175 + ptep_get_and_clear_full(args->mm, args->vaddr, args->ptep, 1); 174 176 } 175 177 176 178 static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
+30 -5
mm/gup.c
··· 124 124 * considered failure, and furthermore, a likely bug in the caller, so a warning 125 125 * is also emitted. 126 126 */ 127 - struct page *try_grab_compound_head(struct page *page, 128 - int refs, unsigned int flags) 127 + __maybe_unused struct page *try_grab_compound_head(struct page *page, 128 + int refs, unsigned int flags) 129 129 { 130 130 if (flags & FOLL_GET) 131 131 return try_get_compound_head(page, refs); ··· 208 208 */ 209 209 bool __must_check try_grab_page(struct page *page, unsigned int flags) 210 210 { 211 - if (!(flags & (FOLL_GET | FOLL_PIN))) 212 - return true; 211 + WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN)); 213 212 214 - return try_grab_compound_head(page, 1, flags); 213 + if (flags & FOLL_GET) 214 + return try_get_page(page); 215 + else if (flags & FOLL_PIN) { 216 + int refs = 1; 217 + 218 + page = compound_head(page); 219 + 220 + if (WARN_ON_ONCE(page_ref_count(page) <= 0)) 221 + return false; 222 + 223 + if (hpage_pincount_available(page)) 224 + hpage_pincount_add(page, 1); 225 + else 226 + refs = GUP_PIN_COUNTING_BIAS; 227 + 228 + /* 229 + * Similar to try_grab_compound_head(): even if using the 230 + * hpage_pincount_add/_sub() routines, be sure to 231 + * *also* increment the normal page refcount field at least 232 + * once, so that the page really is pinned. 233 + */ 234 + page_ref_add(page, refs); 235 + 236 + mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED, 1); 237 + } 238 + 239 + return true; 215 240 } 216 241 217 242 /**
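The try_grab_page() rewrite above open-codes the FOLL_PIN path: when no separate huge-page pincount field is available, a pin is recorded by adding GUP_PIN_COUNTING_BIAS (1024 in this kernel) to the ordinary page refcount, while FOLL_GET adds just 1. A toy model of that encoding (struct fake_page and the helpers are illustrative stand-ins, not kernel API):

```c
#include <assert.h>

/* GUP_PIN_COUNTING_BIAS really is 1024 in the kernel; everything else
 * here is an illustrative stand-in for the mm/gup.c logic. */
#define GUP_PIN_COUNTING_BIAS 1024

struct fake_page {
    int refcount;
};

static void fake_get_page(struct fake_page *p)  /* FOLL_GET: one plain ref */
{
    p->refcount += 1;
}

static void fake_pin_page(struct fake_page *p)  /* FOLL_PIN, no pincount field */
{
    p->refcount += GUP_PIN_COUNTING_BIAS;
}

/* Heuristic for such pages: a refcount at or above the bias is treated as
 * "maybe pinned" (many plain gets can false-positive, by design). */
static int fake_page_maybe_dma_pinned(const struct fake_page *p)
{
    return p->refcount >= GUP_PIN_COUNTING_BIAS;
}

/* Refcount of a fresh page (refcount 1) after the given gets and pins. */
static int refcount_after(int gets, int pins)
{
    struct fake_page p = { .refcount = 1 };

    for (int i = 0; i < gets; i++)
        fake_get_page(&p);
    for (int i = 0; i < pins; i++)
        fake_pin_page(&p);
    return p.refcount;
}
```

This is why the diff also increments NR_FOLL_PIN_ACQUIRED only on the pin path: gets and pins share one counter field but are accounted differently.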
+21 -16
mm/khugepaged.c
··· 16 16 #include <linux/hashtable.h> 17 17 #include <linux/userfaultfd_k.h> 18 18 #include <linux/page_idle.h> 19 + #include <linux/page_table_check.h> 19 20 #include <linux/swapops.h> 20 21 #include <linux/shmem_fs.h> 21 22 ··· 1417 1416 return 0; 1418 1417 } 1419 1418 1419 + static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma, 1420 + unsigned long addr, pmd_t *pmdp) 1421 + { 1422 + spinlock_t *ptl; 1423 + pmd_t pmd; 1424 + 1425 + mmap_assert_write_locked(mm); 1426 + ptl = pmd_lock(vma->vm_mm, pmdp); 1427 + pmd = pmdp_collapse_flush(vma, addr, pmdp); 1428 + spin_unlock(ptl); 1429 + mm_dec_nr_ptes(mm); 1430 + page_table_check_pte_clear_range(mm, addr, pmd); 1431 + pte_free(mm, pmd_pgtable(pmd)); 1432 + } 1433 + 1420 1434 /** 1421 1435 * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at 1422 1436 * address haddr. ··· 1449 1433 struct vm_area_struct *vma = find_vma(mm, haddr); 1450 1434 struct page *hpage; 1451 1435 pte_t *start_pte, *pte; 1452 - pmd_t *pmd, _pmd; 1436 + pmd_t *pmd; 1453 1437 spinlock_t *ptl; 1454 1438 int count = 0; 1455 1439 int i; ··· 1525 1509 } 1526 1510 1527 1511 /* step 4: collapse pmd */ 1528 - ptl = pmd_lock(vma->vm_mm, pmd); 1529 - _pmd = pmdp_collapse_flush(vma, haddr, pmd); 1530 - spin_unlock(ptl); 1531 - mm_dec_nr_ptes(mm); 1532 - pte_free(mm, pmd_pgtable(_pmd)); 1533 - 1512 + collapse_and_free_pmd(mm, vma, haddr, pmd); 1534 1513 drop_hpage: 1535 1514 unlock_page(hpage); 1536 1515 put_page(hpage); ··· 1563 1552 struct vm_area_struct *vma; 1564 1553 struct mm_struct *mm; 1565 1554 unsigned long addr; 1566 - pmd_t *pmd, _pmd; 1555 + pmd_t *pmd; 1567 1556 1568 1557 i_mmap_lock_write(mapping); 1569 1558 vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) { ··· 1602 1591 * reverse order. Trylock is a way to avoid deadlock. 
1603 1592 */ 1604 1593 if (mmap_write_trylock(mm)) { 1605 - if (!khugepaged_test_exit(mm)) { 1606 - spinlock_t *ptl = pmd_lock(mm, pmd); 1607 - /* assume page table is clear */ 1608 - _pmd = pmdp_collapse_flush(vma, addr, pmd); 1609 - spin_unlock(ptl); 1610 - mm_dec_nr_ptes(mm); 1611 - pte_free(mm, pmd_pgtable(_pmd)); 1612 - } 1594 + if (!khugepaged_test_exit(mm)) 1595 + collapse_and_free_pmd(mm, vma, addr, pmd); 1613 1596 mmap_write_unlock(mm); 1614 1597 } else { 1615 1598 /* Try again later */
+7 -6
mm/kmemleak.c
··· 1410 1410 { 1411 1411 unsigned long flags; 1412 1412 struct kmemleak_object *object; 1413 - int i; 1413 + struct zone *zone; 1414 + int __maybe_unused i; 1414 1415 int new_leaks = 0; 1415 1416 1416 1417 jiffies_last_scan = jiffies; ··· 1451 1450 * Struct page scanning for each node. 1452 1451 */ 1453 1452 get_online_mems(); 1454 - for_each_online_node(i) { 1455 - unsigned long start_pfn = node_start_pfn(i); 1456 - unsigned long end_pfn = node_end_pfn(i); 1453 + for_each_populated_zone(zone) { 1454 + unsigned long start_pfn = zone->zone_start_pfn; 1455 + unsigned long end_pfn = zone_end_pfn(zone); 1457 1456 unsigned long pfn; 1458 1457 1459 1458 for (pfn = start_pfn; pfn < end_pfn; pfn++) { ··· 1462 1461 if (!page) 1463 1462 continue; 1464 1463 1465 - /* only scan pages belonging to this node */ 1466 - if (page_to_nid(page) != i) 1464 + /* only scan pages belonging to this zone */ 1465 + if (page_zone(page) != zone) 1467 1466 continue; 1468 1467 /* only scan if page is in use */ 1469 1468 if (page_count(page) == 0)
+1 -1
mm/page_isolation.c
··· 115 115 * onlining - just onlined memory won't immediately be considered for 116 116 * allocation. 117 117 */ 118 - if (!isolated_page && PageBuddy(page)) { 118 + if (!isolated_page) { 119 119 nr_pages = move_freepages_block(zone, page, migratetype, NULL); 120 120 __mod_zone_freepage_state(zone, nr_pages, migratetype); 121 121 }
+27 -28
mm/page_table_check.c
··· 86 86 { 87 87 struct page_ext *page_ext; 88 88 struct page *page; 89 + unsigned long i; 89 90 bool anon; 90 - int i; 91 91 92 92 if (!pfn_valid(pfn)) 93 93 return; ··· 121 121 { 122 122 struct page_ext *page_ext; 123 123 struct page *page; 124 + unsigned long i; 124 125 bool anon; 125 - int i; 126 126 127 127 if (!pfn_valid(pfn)) 128 128 return; ··· 152 152 void __page_table_check_zero(struct page *page, unsigned int order) 153 153 { 154 154 struct page_ext *page_ext = lookup_page_ext(page); 155 - int i; 155 + unsigned long i; 156 156 157 157 BUG_ON(!page_ext); 158 - for (i = 0; i < (1 << order); i++) { 158 + for (i = 0; i < (1ul << order); i++) { 159 159 struct page_table_check *ptc = get_page_table_check(page_ext); 160 160 161 161 BUG_ON(atomic_read(&ptc->anon_map_count)); ··· 206 206 void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr, 207 207 pte_t *ptep, pte_t pte) 208 208 { 209 - pte_t old_pte; 210 - 211 209 if (&init_mm == mm) 212 210 return; 213 211 214 - old_pte = *ptep; 215 - if (pte_user_accessible_page(old_pte)) { 216 - page_table_check_clear(mm, addr, pte_pfn(old_pte), 217 - PAGE_SIZE >> PAGE_SHIFT); 218 - } 219 - 212 + __page_table_check_pte_clear(mm, addr, *ptep); 220 213 if (pte_user_accessible_page(pte)) { 221 214 page_table_check_set(mm, addr, pte_pfn(pte), 222 215 PAGE_SIZE >> PAGE_SHIFT, ··· 221 228 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr, 222 229 pmd_t *pmdp, pmd_t pmd) 223 230 { 224 - pmd_t old_pmd; 225 - 226 231 if (&init_mm == mm) 227 232 return; 228 233 229 - old_pmd = *pmdp; 230 - if (pmd_user_accessible_page(old_pmd)) { 231 - page_table_check_clear(mm, addr, pmd_pfn(old_pmd), 232 - PMD_PAGE_SIZE >> PAGE_SHIFT); 233 - } 234 - 234 + __page_table_check_pmd_clear(mm, addr, *pmdp); 235 235 if (pmd_user_accessible_page(pmd)) { 236 236 page_table_check_set(mm, addr, pmd_pfn(pmd), 237 237 PMD_PAGE_SIZE >> PAGE_SHIFT, ··· 236 250 void __page_table_check_pud_set(struct mm_struct *mm, 
unsigned long addr, 237 251 pud_t *pudp, pud_t pud) 238 252 { 239 - pud_t old_pud; 240 - 241 253 if (&init_mm == mm) 242 254 return; 243 255 244 - old_pud = *pudp; 245 - if (pud_user_accessible_page(old_pud)) { 246 - page_table_check_clear(mm, addr, pud_pfn(old_pud), 247 - PUD_PAGE_SIZE >> PAGE_SHIFT); 248 - } 249 - 256 + __page_table_check_pud_clear(mm, addr, *pudp); 250 257 if (pud_user_accessible_page(pud)) { 251 258 page_table_check_set(mm, addr, pud_pfn(pud), 252 259 PUD_PAGE_SIZE >> PAGE_SHIFT, ··· 247 268 } 248 269 } 249 270 EXPORT_SYMBOL(__page_table_check_pud_set); 271 + 272 + void __page_table_check_pte_clear_range(struct mm_struct *mm, 273 + unsigned long addr, 274 + pmd_t pmd) 275 + { 276 + if (&init_mm == mm) 277 + return; 278 + 279 + if (!pmd_bad(pmd) && !pmd_leaf(pmd)) { 280 + pte_t *ptep = pte_offset_map(&pmd, addr); 281 + unsigned long i; 282 + 283 + pte_unmap(ptep); 284 + for (i = 0; i < PTRS_PER_PTE; i++) { 285 + __page_table_check_pte_clear(mm, addr, *ptep); 286 + addr += PAGE_SIZE; 287 + ptep++; 288 + } 289 + } 290 + }
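The loop-counter changes in __page_table_check_zero() above matter because `1 << order` is computed in int, which overflows for order >= 31, while `1ul << order` is well-defined up to the width of unsigned long. A small sketch of the widened shift (the helper name is illustrative):

```c
#include <assert.h>
#include <limits.h>

/* Number of base pages in an order-N allocation. The shift must happen in
 * the wide type: with a plain `1 << order`, order >= 31 overflows int. */
static unsigned long pages_for_order(unsigned int order)
{
    return 1ul << order;   /* well-defined while order < bits in long */
}
```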
+16 -7
net/ax25/af_ax25.c
··· 77 77 { 78 78 ax25_dev *ax25_dev; 79 79 ax25_cb *s; 80 + struct sock *sk; 80 81 81 82 if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL) 82 83 return; ··· 86 85 again: 87 86 ax25_for_each(s, &ax25_list) { 88 87 if (s->ax25_dev == ax25_dev) { 88 + sk = s->sk; 89 + sock_hold(sk); 89 90 spin_unlock_bh(&ax25_list_lock); 90 - lock_sock(s->sk); 91 + lock_sock(sk); 91 92 s->ax25_dev = NULL; 92 - release_sock(s->sk); 93 + ax25_dev_put(ax25_dev); 94 + release_sock(sk); 93 95 ax25_disconnect(s, ENETUNREACH); 94 96 spin_lock_bh(&ax25_list_lock); 95 - 97 + sock_put(sk); 96 98 /* The entry could have been deleted from the 97 99 * list meanwhile and thus the next pointer is 98 100 * no longer valid. Play it safe and restart ··· 359 355 if (copy_from_user(&ax25_ctl, arg, sizeof(ax25_ctl))) 360 356 return -EFAULT; 361 357 362 - if ((ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr)) == NULL) 363 - return -ENODEV; 364 - 365 358 if (ax25_ctl.digi_count > AX25_MAX_DIGIS) 366 359 return -EINVAL; 367 360 368 361 if (ax25_ctl.arg > ULONG_MAX / HZ && ax25_ctl.cmd != AX25_KILL) 369 362 return -EINVAL; 370 363 364 + ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr); 365 + if (!ax25_dev) 366 + return -ENODEV; 367 + 371 368 digi.ndigi = ax25_ctl.digi_count; 372 369 for (k = 0; k < digi.ndigi; k++) 373 370 digi.calls[k] = ax25_ctl.digi_addr[k]; 374 371 375 - if ((ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev)) == NULL) 372 + ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev); 373 + if (!ax25) { 374 + ax25_dev_put(ax25_dev); 376 375 return -ENOTCONN; 376 + } 377 377 378 378 switch (ax25_ctl.cmd) { 379 379 case AX25_KILL: ··· 444 436 } 445 437 446 438 out_put: 439 + ax25_dev_put(ax25_dev); 447 440 ax25_cb_put(ax25); 448 441 return ret; 449 442
+23 -5
net/ax25/ax25_dev.c
··· 37 37 for (ax25_dev = ax25_dev_list; ax25_dev != NULL; ax25_dev = ax25_dev->next) 38 38 if (ax25cmp(addr, (const ax25_address *)ax25_dev->dev->dev_addr) == 0) { 39 39 res = ax25_dev; 40 + ax25_dev_hold(ax25_dev); 40 41 } 41 42 spin_unlock_bh(&ax25_dev_lock); 42 43 ··· 57 56 return; 58 57 } 59 58 59 + refcount_set(&ax25_dev->refcount, 1); 60 60 dev->ax25_ptr = ax25_dev; 61 61 ax25_dev->dev = dev; 62 62 dev_hold_track(dev, &ax25_dev->dev_tracker, GFP_ATOMIC); ··· 86 84 ax25_dev->next = ax25_dev_list; 87 85 ax25_dev_list = ax25_dev; 88 86 spin_unlock_bh(&ax25_dev_lock); 87 + ax25_dev_hold(ax25_dev); 89 88 90 89 ax25_register_dev_sysctl(ax25_dev); 91 90 } ··· 116 113 if ((s = ax25_dev_list) == ax25_dev) { 117 114 ax25_dev_list = s->next; 118 115 spin_unlock_bh(&ax25_dev_lock); 116 + ax25_dev_put(ax25_dev); 119 117 dev->ax25_ptr = NULL; 120 118 dev_put_track(dev, &ax25_dev->dev_tracker); 121 - kfree(ax25_dev); 119 + ax25_dev_put(ax25_dev); 122 120 return; 123 121 } 124 122 ··· 127 123 if (s->next == ax25_dev) { 128 124 s->next = ax25_dev->next; 129 125 spin_unlock_bh(&ax25_dev_lock); 126 + ax25_dev_put(ax25_dev); 130 127 dev->ax25_ptr = NULL; 131 128 dev_put_track(dev, &ax25_dev->dev_tracker); 132 - kfree(ax25_dev); 129 + ax25_dev_put(ax25_dev); 133 130 return; 134 131 } 135 132 ··· 138 133 } 139 134 spin_unlock_bh(&ax25_dev_lock); 140 135 dev->ax25_ptr = NULL; 136 + ax25_dev_put(ax25_dev); 141 137 } 142 138 143 139 int ax25_fwd_ioctl(unsigned int cmd, struct ax25_fwd_struct *fwd) ··· 150 144 151 145 switch (cmd) { 152 146 case SIOCAX25ADDFWD: 153 - if ((fwd_dev = ax25_addr_ax25dev(&fwd->port_to)) == NULL) 147 + fwd_dev = ax25_addr_ax25dev(&fwd->port_to); 148 + if (!fwd_dev) { 149 + ax25_dev_put(ax25_dev); 154 150 return -EINVAL; 155 - if (ax25_dev->forward != NULL) 151 + } 152 + if (ax25_dev->forward) { 153 + ax25_dev_put(fwd_dev); 154 + ax25_dev_put(ax25_dev); 156 155 return -EINVAL; 156 + } 157 157 ax25_dev->forward = fwd_dev->dev; 158 + ax25_dev_put(fwd_dev); 
159 + ax25_dev_put(ax25_dev); 158 160 break; 159 161 160 162 case SIOCAX25DELFWD: 161 - if (ax25_dev->forward == NULL) 163 + if (!ax25_dev->forward) { 164 + ax25_dev_put(ax25_dev); 162 165 return -EINVAL; 166 + } 163 167 ax25_dev->forward = NULL; 168 + ax25_dev_put(ax25_dev); 164 169 break; 165 170 166 171 default: 172 + ax25_dev_put(ax25_dev); 167 173 return -EINVAL; 168 174 } 169 175
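The ax25_dev diff above converts the device to reference counting: a lookup now returns a held reference, every exit path balances it with ax25_dev_put(), and the object is freed only on the final put. A sketch of that lifetime rule (names illustrative; a plain int models the kernel's refcount_t, and a counter stands in for kfree() so the effect is observable):

```c
#include <assert.h>

/* Illustrative stand-ins: refcount is a plain int (the kernel uses
 * refcount_t), free_count counts "kfree()" calls so the effect shows. */
struct dev_obj {
    int refcount;
};

static int free_count;

static void dev_hold(struct dev_obj *d)
{
    d->refcount++;
}

static void dev_put(struct dev_obj *d)
{
    assert(d->refcount > 0);
    if (--d->refcount == 0)
        free_count++;           /* kfree(d) in the kernel */
}

/* A lookup returns a *held* reference, as ax25_addr_ax25dev() does after
 * this change; every caller must balance it with a put. */
static struct dev_obj *dev_lookup(struct dev_obj *d)
{
    dev_hold(d);
    return d;
}

/* Exercise the rule: object created with one reference, looked up once.
 * Encodes (freed after final put) * 10 + (freed after lookup's put). */
static int lookup_then_drop_all(void)
{
    struct dev_obj d = { .refcount = 1 };   /* creation reference */
    int before = free_count;
    int mid;

    dev_put(dev_lookup(&d));                /* lookup's reference: 2 -> 1 */
    mid = free_count - before;              /* must still be alive: 0 */
    dev_put(&d);                            /* final put frees */
    return (free_count - before) * 10 + mid;
}
```

The error paths in ax25_fwd_ioctl() above are exactly the places where a missed put would leak: each early `return -EINVAL` now drops the reference the lookup took.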
+11 -2
net/ax25/ax25_route.c
··· 75 75 ax25_dev *ax25_dev; 76 76 int i; 77 77 78 - if ((ax25_dev = ax25_addr_ax25dev(&route->port_addr)) == NULL) 79 - return -EINVAL; 80 78 if (route->digi_count > AX25_MAX_DIGIS) 79 + return -EINVAL; 80 + 81 + ax25_dev = ax25_addr_ax25dev(&route->port_addr); 82 + if (!ax25_dev) 81 83 return -EINVAL; 82 84 83 85 write_lock_bh(&ax25_route_lock); ··· 93 91 if (route->digi_count != 0) { 94 92 if ((ax25_rt->digipeat = kmalloc(sizeof(ax25_digi), GFP_ATOMIC)) == NULL) { 95 93 write_unlock_bh(&ax25_route_lock); 94 + ax25_dev_put(ax25_dev); 96 95 return -ENOMEM; 97 96 } 98 97 ax25_rt->digipeat->lastrepeat = -1; ··· 104 101 } 105 102 } 106 103 write_unlock_bh(&ax25_route_lock); 104 + ax25_dev_put(ax25_dev); 107 105 return 0; 108 106 } 109 107 ax25_rt = ax25_rt->next; ··· 112 108 113 109 if ((ax25_rt = kmalloc(sizeof(ax25_route), GFP_ATOMIC)) == NULL) { 114 110 write_unlock_bh(&ax25_route_lock); 111 + ax25_dev_put(ax25_dev); 115 112 return -ENOMEM; 116 113 } 117 114 ··· 125 120 if ((ax25_rt->digipeat = kmalloc(sizeof(ax25_digi), GFP_ATOMIC)) == NULL) { 126 121 write_unlock_bh(&ax25_route_lock); 127 122 kfree(ax25_rt); 123 + ax25_dev_put(ax25_dev); 128 124 return -ENOMEM; 129 125 } 130 126 ax25_rt->digipeat->lastrepeat = -1; ··· 138 132 ax25_rt->next = ax25_route_list; 139 133 ax25_route_list = ax25_rt; 140 134 write_unlock_bh(&ax25_route_lock); 135 + ax25_dev_put(ax25_dev); 141 136 142 137 return 0; 143 138 } ··· 180 173 } 181 174 } 182 175 write_unlock_bh(&ax25_route_lock); 176 + ax25_dev_put(ax25_dev); 183 177 184 178 return 0; 185 179 } ··· 223 215 224 216 out: 225 217 write_unlock_bh(&ax25_route_lock); 218 + ax25_dev_put(ax25_dev); 226 219 return err; 227 220 } 228 221
+4 -4
net/bridge/netfilter/nft_reject_bridge.c
··· 49 49 { 50 50 struct sk_buff *nskb; 51 51 52 - nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, dev, hook); 52 + nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, NULL, hook); 53 53 if (!nskb) 54 54 return; 55 55 ··· 65 65 { 66 66 struct sk_buff *nskb; 67 67 68 - nskb = nf_reject_skb_v4_unreach(net, oldskb, dev, hook, code); 68 + nskb = nf_reject_skb_v4_unreach(net, oldskb, NULL, hook, code); 69 69 if (!nskb) 70 70 return; 71 71 ··· 81 81 { 82 82 struct sk_buff *nskb; 83 83 84 - nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, dev, hook); 84 + nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, NULL, hook); 85 85 if (!nskb) 86 86 return; 87 87 ··· 98 98 { 99 99 struct sk_buff *nskb; 100 100 101 - nskb = nf_reject_skb_v6_unreach(net, oldskb, dev, hook, code); 101 + nskb = nf_reject_skb_v6_unreach(net, oldskb, NULL, hook, code); 102 102 if (!nskb) 103 103 return; 104 104
+7
net/ceph/ceph_common.c
··· 246 246 Opt_cephx_sign_messages, 247 247 Opt_tcp_nodelay, 248 248 Opt_abort_on_full, 249 + Opt_rxbounce, 249 250 }; 250 251 251 252 enum { ··· 296 295 fsparam_u32 ("osdkeepalive", Opt_osdkeepalivetimeout), 297 296 fsparam_enum ("read_from_replica", Opt_read_from_replica, 298 297 ceph_param_read_from_replica), 298 + fsparam_flag ("rxbounce", Opt_rxbounce), 299 299 fsparam_enum ("ms_mode", Opt_ms_mode, 300 300 ceph_param_ms_mode), 301 301 fsparam_string ("secret", Opt_secret), ··· 586 584 case Opt_abort_on_full: 587 585 opt->flags |= CEPH_OPT_ABORT_ON_FULL; 588 586 break; 587 + case Opt_rxbounce: 588 + opt->flags |= CEPH_OPT_RXBOUNCE; 589 + break; 589 590 590 591 default: 591 592 BUG(); ··· 665 660 seq_puts(m, "notcp_nodelay,"); 666 661 if (show_all && (opt->flags & CEPH_OPT_ABORT_ON_FULL)) 667 662 seq_puts(m, "abort_on_full,"); 663 + if (opt->flags & CEPH_OPT_RXBOUNCE) 664 + seq_puts(m, "rxbounce,"); 668 665 669 666 if (opt->mount_timeout != CEPH_MOUNT_TIMEOUT_DEFAULT) 670 667 seq_printf(m, "mount_timeout=%d,",
+4
net/ceph/messenger.c
··· 515 515 ceph_msg_put(con->out_msg); 516 516 con->out_msg = NULL; 517 517 } 518 + if (con->bounce_page) { 519 + __free_page(con->bounce_page); 520 + con->bounce_page = NULL; 521 + } 518 522 519 523 if (ceph_msgr2(from_msgr(con->msgr))) 520 524 ceph_con_v2_reset_protocol(con);
+48 -6
net/ceph/messenger_v1.c
··· 992 992 993 993 static int read_partial_msg_data(struct ceph_connection *con) 994 994 { 995 - struct ceph_msg *msg = con->in_msg; 996 - struct ceph_msg_data_cursor *cursor = &msg->cursor; 995 + struct ceph_msg_data_cursor *cursor = &con->in_msg->cursor; 997 996 bool do_datacrc = !ceph_test_opt(from_msgr(con->msgr), NOCRC); 998 997 struct page *page; 999 998 size_t page_offset; 1000 999 size_t length; 1001 1000 u32 crc = 0; 1002 1001 int ret; 1003 - 1004 - if (!msg->num_data_items) 1005 - return -EIO; 1006 1002 1007 1003 if (do_datacrc) 1008 1004 crc = con->in_data_crc; ··· 1023 1027 } 1024 1028 if (do_datacrc) 1025 1029 con->in_data_crc = crc; 1030 + 1031 + return 1; /* must return > 0 to indicate success */ 1032 + } 1033 + 1034 + static int read_partial_msg_data_bounce(struct ceph_connection *con) 1035 + { 1036 + struct ceph_msg_data_cursor *cursor = &con->in_msg->cursor; 1037 + struct page *page; 1038 + size_t off, len; 1039 + u32 crc; 1040 + int ret; 1041 + 1042 + if (unlikely(!con->bounce_page)) { 1043 + con->bounce_page = alloc_page(GFP_NOIO); 1044 + if (!con->bounce_page) { 1045 + pr_err("failed to allocate bounce page\n"); 1046 + return -ENOMEM; 1047 + } 1048 + } 1049 + 1050 + crc = con->in_data_crc; 1051 + while (cursor->total_resid) { 1052 + if (!cursor->resid) { 1053 + ceph_msg_data_advance(cursor, 0); 1054 + continue; 1055 + } 1056 + 1057 + page = ceph_msg_data_next(cursor, &off, &len, NULL); 1058 + ret = ceph_tcp_recvpage(con->sock, con->bounce_page, 0, len); 1059 + if (ret <= 0) { 1060 + con->in_data_crc = crc; 1061 + return ret; 1062 + } 1063 + 1064 + crc = crc32c(crc, page_address(con->bounce_page), ret); 1065 + memcpy_to_page(page, off, page_address(con->bounce_page), ret); 1066 + 1067 + ceph_msg_data_advance(cursor, ret); 1068 + } 1069 + con->in_data_crc = crc; 1026 1070 1027 1071 return 1; /* must return > 0 to indicate success */ 1028 1072 } ··· 1177 1141 1178 1142 /* (page) data */ 1179 1143 if (data_len) { 1180 - ret = 
read_partial_msg_data(con); 1144 + if (!m->num_data_items) 1145 + return -EIO; 1146 + 1147 + if (ceph_test_opt(from_msgr(con->msgr), RXBOUNCE)) 1148 + ret = read_partial_msg_data_bounce(con); 1149 + else 1150 + ret = read_partial_msg_data(con); 1181 1151 if (ret <= 0) 1182 1152 return ret; 1183 1153 }
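The new read_partial_msg_data_bounce() above stages every chunk in a single preallocated bounce page, computes the CRC over that private copy, and only then publishes the bytes to the destination page, mirroring its use of ceph_tcp_recvpage() followed by memcpy_to_page(). The checksum is therefore taken over data that cannot change underneath it. A userspace sketch of the staging idea (toy checksum, fixed-size buffer; all names illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy 16-byte "bounce page" and toy checksum; all names illustrative. */
#define BOUNCE_SIZE 16

static unsigned int toy_crc(unsigned int crc, const unsigned char *p, size_t n)
{
    while (n--)
        crc = crc * 131u + *p++;
    return crc;
}

/* "Receive" len bytes from src into dst, staging each chunk in the bounce
 * buffer: checksum the private copy first, publish to dst second.
 * Returns the running checksum. */
static unsigned int recv_via_bounce(unsigned char *dst, const unsigned char *src,
                                    size_t len)
{
    unsigned char bounce[BOUNCE_SIZE];
    unsigned int crc = 0;

    while (len) {
        size_t chunk = len < BOUNCE_SIZE ? len : BOUNCE_SIZE;

        memcpy(bounce, src, chunk);         /* the recvpage() analogue */
        crc = toy_crc(crc, bounce, chunk);  /* CRC over the stable copy */
        memcpy(dst, bounce, chunk);         /* then hand off to the page */
        src += chunk;
        dst += chunk;
        len -= chunk;
    }
    return crc;
}

/* Self-check: a 40-byte transfer round-trips and matches a direct CRC. */
static int bounce_roundtrip_ok(void)
{
    unsigned char src[40], dst[40] = { 0 };

    for (int i = 0; i < 40; i++)
        src[i] = (unsigned char)(i * 7);
    return recv_via_bounce(dst, src, sizeof(src)) == toy_crc(0, src, sizeof(src))
        && memcmp(src, dst, sizeof(src)) == 0;
}
```

The cost of the extra copy is why this stays behind the opt-in `rxbounce` mount flag added in net/ceph/ceph_common.c above.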
+186 -64
net/ceph/messenger_v2.c
··· 57 57 #define IN_S_HANDLE_CONTROL_REMAINDER 3 58 58 #define IN_S_PREPARE_READ_DATA 4 59 59 #define IN_S_PREPARE_READ_DATA_CONT 5 60 - #define IN_S_HANDLE_EPILOGUE 6 61 - #define IN_S_FINISH_SKIP 7 60 + #define IN_S_PREPARE_READ_ENC_PAGE 6 61 + #define IN_S_HANDLE_EPILOGUE 7 62 + #define IN_S_FINISH_SKIP 8 62 63 63 64 #define OUT_S_QUEUE_DATA 1 64 65 #define OUT_S_QUEUE_DATA_CONT 2 ··· 1033 1032 padded_len(rem_len) + CEPH_GCM_TAG_LEN); 1034 1033 } 1035 1034 1036 - static int decrypt_message(struct ceph_connection *con) 1035 + static int decrypt_tail(struct ceph_connection *con) 1037 1036 { 1037 + struct sg_table enc_sgt = {}; 1038 1038 struct sg_table sgt = {}; 1039 + int tail_len; 1039 1040 int ret; 1041 + 1042 + tail_len = tail_onwire_len(con->in_msg, true); 1043 + ret = sg_alloc_table_from_pages(&enc_sgt, con->v2.in_enc_pages, 1044 + con->v2.in_enc_page_cnt, 0, tail_len, 1045 + GFP_NOIO); 1046 + if (ret) 1047 + goto out; 1040 1048 1041 1049 ret = setup_message_sgs(&sgt, con->in_msg, FRONT_PAD(con->v2.in_buf), 1042 1050 MIDDLE_PAD(con->v2.in_buf), DATA_PAD(con->v2.in_buf), ··· 1053 1043 if (ret) 1054 1044 goto out; 1055 1045 1056 - ret = gcm_crypt(con, false, sgt.sgl, sgt.sgl, 1057 - tail_onwire_len(con->in_msg, true)); 1046 + dout("%s con %p msg %p enc_page_cnt %d sg_cnt %d\n", __func__, con, 1047 + con->in_msg, con->v2.in_enc_page_cnt, sgt.orig_nents); 1048 + ret = gcm_crypt(con, false, enc_sgt.sgl, sgt.sgl, tail_len); 1049 + if (ret) 1050 + goto out; 1051 + 1052 + WARN_ON(!con->v2.in_enc_page_cnt); 1053 + ceph_release_page_vector(con->v2.in_enc_pages, 1054 + con->v2.in_enc_page_cnt); 1055 + con->v2.in_enc_pages = NULL; 1056 + con->v2.in_enc_page_cnt = 0; 1058 1057 1059 1058 out: 1060 1059 sg_free_table(&sgt); 1060 + sg_free_table(&enc_sgt); 1061 1061 return ret; 1062 1062 } 1063 1063 ··· 1753 1733 return 0; 1754 1734 } 1755 1735 1756 - static void prepare_read_data(struct ceph_connection *con) 1736 + static int prepare_read_data(struct ceph_connection *con) 
1757 1737 { 1758 1738 struct bio_vec bv; 1759 1739 1760 - if (!con_secure(con)) 1761 - con->in_data_crc = -1; 1740 + con->in_data_crc = -1; 1762 1741 ceph_msg_data_cursor_init(&con->v2.in_cursor, con->in_msg, 1763 1742 data_len(con->in_msg)); 1764 1743 1765 1744 get_bvec_at(&con->v2.in_cursor, &bv); 1766 - set_in_bvec(con, &bv); 1745 + if (ceph_test_opt(from_msgr(con->msgr), RXBOUNCE)) { 1746 + if (unlikely(!con->bounce_page)) { 1747 + con->bounce_page = alloc_page(GFP_NOIO); 1748 + if (!con->bounce_page) { 1749 + pr_err("failed to allocate bounce page\n"); 1750 + return -ENOMEM; 1751 + } 1752 + } 1753 + 1754 + bv.bv_page = con->bounce_page; 1755 + bv.bv_offset = 0; 1756 + set_in_bvec(con, &bv); 1757 + } else { 1758 + set_in_bvec(con, &bv); 1759 + } 1767 1760 con->v2.in_state = IN_S_PREPARE_READ_DATA_CONT; 1761 + return 0; 1768 1762 } 1769 1763 1770 1764 static void prepare_read_data_cont(struct ceph_connection *con) 1771 1765 { 1772 1766 struct bio_vec bv; 1773 1767 1774 - if (!con_secure(con)) 1768 + if (ceph_test_opt(from_msgr(con->msgr), RXBOUNCE)) { 1769 + con->in_data_crc = crc32c(con->in_data_crc, 1770 + page_address(con->bounce_page), 1771 + con->v2.in_bvec.bv_len); 1772 + 1773 + get_bvec_at(&con->v2.in_cursor, &bv); 1774 + memcpy_to_page(bv.bv_page, bv.bv_offset, 1775 + page_address(con->bounce_page), 1776 + con->v2.in_bvec.bv_len); 1777 + } else { 1775 1778 con->in_data_crc = ceph_crc32c_page(con->in_data_crc, 1776 1779 con->v2.in_bvec.bv_page, 1777 1780 con->v2.in_bvec.bv_offset, 1778 1781 con->v2.in_bvec.bv_len); 1782 + } 1779 1783 1780 1784 ceph_msg_data_advance(&con->v2.in_cursor, con->v2.in_bvec.bv_len); 1781 1785 if (con->v2.in_cursor.total_resid) { 1782 1786 get_bvec_at(&con->v2.in_cursor, &bv); 1783 - set_in_bvec(con, &bv); 1787 + if (ceph_test_opt(from_msgr(con->msgr), RXBOUNCE)) { 1788 + bv.bv_page = con->bounce_page; 1789 + bv.bv_offset = 0; 1790 + set_in_bvec(con, &bv); 1791 + } else { 1792 + set_in_bvec(con, &bv); 1793 + } 1784 1794 
WARN_ON(con->v2.in_state != IN_S_PREPARE_READ_DATA_CONT); 1785 1795 return; 1786 1796 } 1787 1797 1788 1798 /* 1789 - * We've read all data. Prepare to read data padding (if any) 1790 - * and epilogue. 1799 + * We've read all data. Prepare to read epilogue. 1791 1800 */ 1792 1801 reset_in_kvecs(con); 1793 - if (con_secure(con)) { 1794 - if (need_padding(data_len(con->in_msg))) 1795 - add_in_kvec(con, DATA_PAD(con->v2.in_buf), 1796 - padding_len(data_len(con->in_msg))); 1797 - add_in_kvec(con, con->v2.in_buf, CEPH_EPILOGUE_SECURE_LEN); 1802 + add_in_kvec(con, con->v2.in_buf, CEPH_EPILOGUE_PLAIN_LEN); 1803 + con->v2.in_state = IN_S_HANDLE_EPILOGUE; 1804 + } 1805 + 1806 + static int prepare_read_tail_plain(struct ceph_connection *con) 1807 + { 1808 + struct ceph_msg *msg = con->in_msg; 1809 + 1810 + if (!front_len(msg) && !middle_len(msg)) { 1811 + WARN_ON(!data_len(msg)); 1812 + return prepare_read_data(con); 1813 + } 1814 + 1815 + reset_in_kvecs(con); 1816 + if (front_len(msg)) { 1817 + add_in_kvec(con, msg->front.iov_base, front_len(msg)); 1818 + WARN_ON(msg->front.iov_len != front_len(msg)); 1819 + } 1820 + if (middle_len(msg)) { 1821 + add_in_kvec(con, msg->middle->vec.iov_base, middle_len(msg)); 1822 + WARN_ON(msg->middle->vec.iov_len != middle_len(msg)); 1823 + } 1824 + 1825 + if (data_len(msg)) { 1826 + con->v2.in_state = IN_S_PREPARE_READ_DATA; 1798 1827 } else { 1799 1828 add_in_kvec(con, con->v2.in_buf, CEPH_EPILOGUE_PLAIN_LEN); 1829 + con->v2.in_state = IN_S_HANDLE_EPILOGUE; 1800 1830 } 1831 + return 0; 1832 + } 1833 + 1834 + static void prepare_read_enc_page(struct ceph_connection *con) 1835 + { 1836 + struct bio_vec bv; 1837 + 1838 + dout("%s con %p i %d resid %d\n", __func__, con, con->v2.in_enc_i, 1839 + con->v2.in_enc_resid); 1840 + WARN_ON(!con->v2.in_enc_resid); 1841 + 1842 + bv.bv_page = con->v2.in_enc_pages[con->v2.in_enc_i]; 1843 + bv.bv_offset = 0; 1844 + bv.bv_len = min(con->v2.in_enc_resid, (int)PAGE_SIZE); 1845 + 1846 + set_in_bvec(con, &bv); 
1847 + con->v2.in_enc_i++; 1848 + con->v2.in_enc_resid -= bv.bv_len; 1849 + 1850 + if (con->v2.in_enc_resid) { 1851 + con->v2.in_state = IN_S_PREPARE_READ_ENC_PAGE; 1852 + return; 1853 + } 1854 + 1855 + /* 1856 + * We are set to read the last piece of ciphertext (ending 1857 + * with epilogue) + auth tag. 1858 + */ 1859 + WARN_ON(con->v2.in_enc_i != con->v2.in_enc_page_cnt); 1801 1860 con->v2.in_state = IN_S_HANDLE_EPILOGUE; 1861 + } 1862 + 1863 + static int prepare_read_tail_secure(struct ceph_connection *con) 1864 + { 1865 + struct page **enc_pages; 1866 + int enc_page_cnt; 1867 + int tail_len; 1868 + 1869 + tail_len = tail_onwire_len(con->in_msg, true); 1870 + WARN_ON(!tail_len); 1871 + 1872 + enc_page_cnt = calc_pages_for(0, tail_len); 1873 + enc_pages = ceph_alloc_page_vector(enc_page_cnt, GFP_NOIO); 1874 + if (IS_ERR(enc_pages)) 1875 + return PTR_ERR(enc_pages); 1876 + 1877 + WARN_ON(con->v2.in_enc_pages || con->v2.in_enc_page_cnt); 1878 + con->v2.in_enc_pages = enc_pages; 1879 + con->v2.in_enc_page_cnt = enc_page_cnt; 1880 + con->v2.in_enc_resid = tail_len; 1881 + con->v2.in_enc_i = 0; 1882 + 1883 + prepare_read_enc_page(con); 1884 + return 0; 1802 1885 } 1803 1886 1804 1887 static void __finish_skip(struct ceph_connection *con) ··· 2712 2589 } 2713 2590 2714 2591 msg = con->in_msg; /* set in process_message_header() */ 2715 - if (!front_len(msg) && !middle_len(msg)) { 2716 - if (!data_len(msg)) 2717 - return process_message(con); 2718 - 2719 - prepare_read_data(con); 2720 - return 0; 2721 - } 2722 - 2723 - reset_in_kvecs(con); 2724 2592 if (front_len(msg)) { 2725 2593 WARN_ON(front_len(msg) > msg->front_alloc_len); 2726 - add_in_kvec(con, msg->front.iov_base, front_len(msg)); 2727 2594 msg->front.iov_len = front_len(msg); 2728 - 2729 - if (con_secure(con) && need_padding(front_len(msg))) 2730 - add_in_kvec(con, FRONT_PAD(con->v2.in_buf), 2731 - padding_len(front_len(msg))); 2732 2595 } else { 2733 2596 msg->front.iov_len = 0; 2734 2597 } 2735 2598 if 
(middle_len(msg)) { 2736 2599 WARN_ON(middle_len(msg) > msg->middle->alloc_len); 2737 - add_in_kvec(con, msg->middle->vec.iov_base, middle_len(msg)); 2738 2600 msg->middle->vec.iov_len = middle_len(msg); 2739 - 2740 - if (con_secure(con) && need_padding(middle_len(msg))) 2741 - add_in_kvec(con, MIDDLE_PAD(con->v2.in_buf), 2742 - padding_len(middle_len(msg))); 2743 2601 } else if (msg->middle) { 2744 2602 msg->middle->vec.iov_len = 0; 2745 2603 } 2746 2604 2747 - if (data_len(msg)) { 2748 - con->v2.in_state = IN_S_PREPARE_READ_DATA; 2749 - } else { 2750 - add_in_kvec(con, con->v2.in_buf, 2751 - con_secure(con) ? CEPH_EPILOGUE_SECURE_LEN : 2752 - CEPH_EPILOGUE_PLAIN_LEN); 2753 - con->v2.in_state = IN_S_HANDLE_EPILOGUE; 2754 - } 2755 - return 0; 2605 + if (!front_len(msg) && !middle_len(msg) && !data_len(msg)) 2606 + return process_message(con); 2607 + 2608 + if (con_secure(con)) 2609 + return prepare_read_tail_secure(con); 2610 + 2611 + return prepare_read_tail_plain(con); 2756 2612 } 2757 2613 2758 2614 static int handle_preamble(struct ceph_connection *con) ··· 2819 2717 int ret; 2820 2718 2821 2719 if (con_secure(con)) { 2822 - ret = decrypt_message(con); 2720 + ret = decrypt_tail(con); 2823 2721 if (ret) { 2824 2722 if (ret == -EBADMSG) 2825 2723 con->error_msg = "integrity error, bad epilogue auth tag"; ··· 2887 2785 ret = handle_control_remainder(con); 2888 2786 break; 2889 2787 case IN_S_PREPARE_READ_DATA: 2890 - prepare_read_data(con); 2891 - ret = 0; 2788 + ret = prepare_read_data(con); 2892 2789 break; 2893 2790 case IN_S_PREPARE_READ_DATA_CONT: 2894 2791 prepare_read_data_cont(con); 2792 + ret = 0; 2793 + break; 2794 + case IN_S_PREPARE_READ_ENC_PAGE: 2795 + prepare_read_enc_page(con); 2895 2796 ret = 0; 2896 2797 break; 2897 2798 case IN_S_HANDLE_EPILOGUE: ··· 3431 3326 3432 3327 static void revoke_at_prepare_read_data(struct ceph_connection *con) 3433 3328 { 3434 - int remaining; /* data + [data padding] + epilogue */ 3329 + int remaining; 3435 3330 int 
resid; 3436 3331 3332 + WARN_ON(con_secure(con)); 3437 3333 WARN_ON(!data_len(con->in_msg)); 3438 3334 WARN_ON(!iov_iter_is_kvec(&con->v2.in_iter)); 3439 3335 resid = iov_iter_count(&con->v2.in_iter); 3440 3336 WARN_ON(!resid); 3441 3337 3442 - if (con_secure(con)) 3443 - remaining = padded_len(data_len(con->in_msg)) + 3444 - CEPH_EPILOGUE_SECURE_LEN; 3445 - else 3446 - remaining = data_len(con->in_msg) + CEPH_EPILOGUE_PLAIN_LEN; 3447 - 3338 + remaining = data_len(con->in_msg) + CEPH_EPILOGUE_PLAIN_LEN; 3448 3339 dout("%s con %p resid %d remaining %d\n", __func__, con, resid, 3449 3340 remaining); 3450 3341 con->v2.in_iter.count -= resid; ··· 3451 3350 static void revoke_at_prepare_read_data_cont(struct ceph_connection *con) 3452 3351 { 3453 3352 int recved, resid; /* current piece of data */ 3454 - int remaining; /* [data padding] + epilogue */ 3353 + int remaining; 3455 3354 3355 + WARN_ON(con_secure(con)); 3456 3356 WARN_ON(!data_len(con->in_msg)); 3457 3357 WARN_ON(!iov_iter_is_bvec(&con->v2.in_iter)); 3458 3358 resid = iov_iter_count(&con->v2.in_iter); ··· 3465 3363 ceph_msg_data_advance(&con->v2.in_cursor, recved); 3466 3364 WARN_ON(resid > con->v2.in_cursor.total_resid); 3467 3365 3468 - if (con_secure(con)) 3469 - remaining = padding_len(data_len(con->in_msg)) + 3470 - CEPH_EPILOGUE_SECURE_LEN; 3471 - else 3472 - remaining = CEPH_EPILOGUE_PLAIN_LEN; 3473 - 3366 + remaining = CEPH_EPILOGUE_PLAIN_LEN; 3474 3367 dout("%s con %p total_resid %zu remaining %d\n", __func__, con, 3475 3368 con->v2.in_cursor.total_resid, remaining); 3476 3369 con->v2.in_iter.count -= resid; ··· 3473 3376 con->v2.in_state = IN_S_FINISH_SKIP; 3474 3377 } 3475 3378 3379 + static void revoke_at_prepare_read_enc_page(struct ceph_connection *con) 3380 + { 3381 + int resid; /* current enc page (not necessarily data) */ 3382 + 3383 + WARN_ON(!con_secure(con)); 3384 + WARN_ON(!iov_iter_is_bvec(&con->v2.in_iter)); 3385 + resid = iov_iter_count(&con->v2.in_iter); 3386 + WARN_ON(!resid || resid 
> con->v2.in_bvec.bv_len); 3387 + 3388 + dout("%s con %p resid %d enc_resid %d\n", __func__, con, resid, 3389 + con->v2.in_enc_resid); 3390 + con->v2.in_iter.count -= resid; 3391 + set_in_skip(con, resid + con->v2.in_enc_resid); 3392 + con->v2.in_state = IN_S_FINISH_SKIP; 3393 + } 3394 + 3476 3395 static void revoke_at_handle_epilogue(struct ceph_connection *con) 3477 3396 { 3478 3397 int resid; 3479 3398 3480 - WARN_ON(!iov_iter_is_kvec(&con->v2.in_iter)); 3481 3399 resid = iov_iter_count(&con->v2.in_iter); 3482 3400 WARN_ON(!resid); 3483 3401 ··· 3510 3398 break; 3511 3399 case IN_S_PREPARE_READ_DATA_CONT: 3512 3400 revoke_at_prepare_read_data_cont(con); 3401 + break; 3402 + case IN_S_PREPARE_READ_ENC_PAGE: 3403 + revoke_at_prepare_read_enc_page(con); 3513 3404 break; 3514 3405 case IN_S_HANDLE_EPILOGUE: 3515 3406 revoke_at_handle_epilogue(con); ··· 3547 3432 clear_out_sign_kvecs(con); 3548 3433 free_conn_bufs(con); 3549 3434 3435 + if (con->v2.in_enc_pages) { 3436 + WARN_ON(!con->v2.in_enc_page_cnt); 3437 + ceph_release_page_vector(con->v2.in_enc_pages, 3438 + con->v2.in_enc_page_cnt); 3439 + con->v2.in_enc_pages = NULL; 3440 + con->v2.in_enc_page_cnt = 0; 3441 + } 3550 3442 if (con->v2.out_enc_pages) { 3551 3443 WARN_ON(!con->v2.out_enc_page_cnt); 3552 3444 ceph_release_page_vector(con->v2.out_enc_pages,
+12 -6
net/core/neighbour.c
··· 1133 1133 neigh_release(neigh); 1134 1134 } 1135 1135 1136 - int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) 1136 + int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb, 1137 + const bool immediate_ok) 1137 1138 { 1138 1139 int rc; 1139 1140 bool immediate_probe = false; ··· 1155 1154 atomic_set(&neigh->probes, 1156 1155 NEIGH_VAR(neigh->parms, UCAST_PROBES)); 1157 1156 neigh_del_timer(neigh); 1158 - neigh->nud_state = NUD_INCOMPLETE; 1157 + neigh->nud_state = NUD_INCOMPLETE; 1159 1158 neigh->updated = now; 1160 - next = now + max(NEIGH_VAR(neigh->parms, RETRANS_TIME), 1161 - HZ/100); 1159 + if (!immediate_ok) { 1160 + next = now + 1; 1161 + } else { 1162 + immediate_probe = true; 1163 + next = now + max(NEIGH_VAR(neigh->parms, 1164 + RETRANS_TIME), 1165 + HZ / 100); 1166 + } 1162 1167 neigh_add_timer(neigh, next); 1163 - immediate_probe = true; 1164 1168 } else { 1165 1169 neigh->nud_state = NUD_FAILED; 1166 1170 neigh->updated = jiffies; ··· 1577 1571 1578 1572 write_lock_bh(&tbl->lock); 1579 1573 list_for_each_entry(neigh, &tbl->managed_list, managed_list) 1580 - neigh_event_send(neigh, NULL); 1574 + neigh_event_send_probe(neigh, NULL, false); 1581 1575 queue_delayed_work(system_power_efficient_wq, &tbl->managed_work, 1582 1576 NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME)); 1583 1577 write_unlock_bh(&tbl->lock);
+4 -2
net/core/rtnetlink.c
··· 3275 3275 struct nlattr *slave_attr[RTNL_SLAVE_MAX_TYPE + 1]; 3276 3276 unsigned char name_assign_type = NET_NAME_USER; 3277 3277 struct nlattr *linkinfo[IFLA_INFO_MAX + 1]; 3278 - const struct rtnl_link_ops *m_ops = NULL; 3279 - struct net_device *master_dev = NULL; 3278 + const struct rtnl_link_ops *m_ops; 3279 + struct net_device *master_dev; 3280 3280 struct net *net = sock_net(skb->sk); 3281 3281 const struct rtnl_link_ops *ops; 3282 3282 struct nlattr *tb[IFLA_MAX + 1]; ··· 3314 3314 else 3315 3315 dev = NULL; 3316 3316 3317 + master_dev = NULL; 3318 + m_ops = NULL; 3317 3319 if (dev) { 3318 3320 master_dev = netdev_master_upper_dev_get(dev); 3319 3321 if (master_dev)
+4 -4
net/ieee802154/nl802154.c
··· 1441 1441 1442 1442 hdr = nl802154hdr_put(msg, portid, seq, flags, cmd); 1443 1443 if (!hdr) 1444 - return -1; 1444 + return -ENOBUFS; 1445 1445 1446 1446 if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex)) 1447 1447 goto nla_put_failure; ··· 1634 1634 1635 1635 hdr = nl802154hdr_put(msg, portid, seq, flags, cmd); 1636 1636 if (!hdr) 1637 - return -1; 1637 + return -ENOBUFS; 1638 1638 1639 1639 if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex)) 1640 1640 goto nla_put_failure; ··· 1812 1812 1813 1813 hdr = nl802154hdr_put(msg, portid, seq, flags, cmd); 1814 1814 if (!hdr) 1815 - return -1; 1815 + return -ENOBUFS; 1816 1816 1817 1817 if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex)) 1818 1818 goto nla_put_failure; ··· 1988 1988 1989 1989 hdr = nl802154hdr_put(msg, portid, seq, flags, cmd); 1990 1990 if (!hdr) 1991 - return -1; 1991 + return -ENOBUFS; 1992 1992 1993 1993 if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex)) 1994 1994 goto nla_put_failure;
-4
net/ipv4/netfilter/Kconfig
··· 58 58 59 59 endif # NF_TABLES 60 60 61 - config NF_FLOW_TABLE_IPV4 62 - tristate 63 - select NF_FLOW_TABLE_INET 64 - 65 61 config NF_DUP_IPV4 66 62 tristate "Netfilter IPv4 packet duplication to alternate destination" 67 63 depends on !NF_CONNTRACK || NF_CONNTRACK
+5 -2
net/ipv4/tcp.c
··· 1322 1322 1323 1323 /* skb changing from pure zc to mixed, must charge zc */ 1324 1324 if (unlikely(skb_zcopy_pure(skb))) { 1325 - if (!sk_wmem_schedule(sk, skb->data_len)) 1325 + u32 extra = skb->truesize - 1326 + SKB_TRUESIZE(skb_end_offset(skb)); 1327 + 1328 + if (!sk_wmem_schedule(sk, extra)) 1326 1329 goto wait_for_space; 1327 1330 1328 - sk_mem_charge(sk, skb->data_len); 1331 + sk_mem_charge(sk, extra); 1329 1332 skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY; 1330 1333 } 1331 1334
+2
net/ipv4/tcp_input.c
··· 1660 1660 (mss != tcp_skb_seglen(skb))) 1661 1661 goto out; 1662 1662 1663 + if (!tcp_skb_can_collapse(prev, skb)) 1664 + goto out; 1663 1665 len = skb->len; 1664 1666 pcount = tcp_skb_pcount(skb); 1665 1667 if (tcp_skb_shift(prev, skb, pcount, len))
-4
net/ipv6/netfilter/Kconfig
··· 47 47 endif # NF_TABLES_IPV6 48 48 endif # NF_TABLES 49 49 50 - config NF_FLOW_TABLE_IPV6 51 - tristate 52 - select NF_FLOW_TABLE_INET 53 - 54 50 config NF_DUP_IPV6 55 51 tristate "Netfilter IPv6 packet duplication to alternate destination" 56 52 depends on !NF_CONNTRACK || NF_CONNTRACK
-3
net/ipv6/netfilter/Makefile
··· 28 28 obj-$(CONFIG_NFT_DUP_IPV6) += nft_dup_ipv6.o 29 29 obj-$(CONFIG_NFT_FIB_IPV6) += nft_fib_ipv6.o 30 30 31 - # flow table support 32 - obj-$(CONFIG_NF_FLOW_TABLE_IPV6) += nf_flow_table_ipv6.o 33 - 34 31 # matches 35 32 obj-$(CONFIG_IP6_NF_MATCH_AH) += ip6t_ah.o 36 33 obj-$(CONFIG_IP6_NF_MATCH_EUI64) += ip6t_eui64.o
net/ipv6/netfilter/nf_flow_table_ipv6.c
-1
net/netfilter/nf_tables_api.c
··· 2011 2011 2012 2012 prule = (struct nft_rule_dp *)ptr; 2013 2013 prule->is_last = 1; 2014 - ptr += offsetof(struct nft_rule_dp, data); 2015 2014 /* blob size does not include the trailer rule */ 2016 2015 } 2017 2016
+12
net/netfilter/nft_byteorder.c
··· 167 167 return -1; 168 168 } 169 169 170 + static bool nft_byteorder_reduce(struct nft_regs_track *track, 171 + const struct nft_expr *expr) 172 + { 173 + struct nft_byteorder *priv = nft_expr_priv(expr); 174 + 175 + track->regs[priv->dreg].selector = NULL; 176 + track->regs[priv->dreg].bitwise = NULL; 177 + 178 + return false; 179 + } 180 + 170 181 static const struct nft_expr_ops nft_byteorder_ops = { 171 182 .type = &nft_byteorder_type, 172 183 .size = NFT_EXPR_SIZE(sizeof(struct nft_byteorder)), 173 184 .eval = nft_byteorder_eval, 174 185 .init = nft_byteorder_init, 175 186 .dump = nft_byteorder_dump, 187 + .reduce = nft_byteorder_reduce, 176 188 }; 177 189 178 190 struct nft_expr_type nft_byteorder_type __read_mostly = {
+4 -1
net/netfilter/nft_ct.c
··· 260 260 ct = this_cpu_read(nft_ct_pcpu_template); 261 261 262 262 if (likely(refcount_read(&ct->ct_general.use) == 1)) { 263 + refcount_inc(&ct->ct_general.use); 263 264 nf_ct_zone_add(ct, &zone); 264 265 } else { 265 - /* previous skb got queued to userspace */ 266 + /* previous skb got queued to userspace, allocate temporary 267 + * one until percpu template can be reused. 268 + */ 266 269 ct = nf_ct_tmpl_alloc(nft_net(pkt), &zone, GFP_ATOMIC); 267 270 if (!ct) { 268 271 regs->verdict.code = NF_DROP;
+6 -2
net/packet/af_packet.c
··· 1789 1789 err = -ENOSPC; 1790 1790 if (refcount_read(&match->sk_ref) < match->max_num_members) { 1791 1791 __dev_remove_pack(&po->prot_hook); 1792 - po->fanout = match; 1792 + 1793 + /* Paired with packet_setsockopt(PACKET_FANOUT_DATA) */ 1794 + WRITE_ONCE(po->fanout, match); 1795 + 1793 1796 po->rollover = rollover; 1794 1797 rollover = NULL; 1795 1798 refcount_set(&match->sk_ref, refcount_read(&match->sk_ref) + 1); ··· 3937 3934 } 3938 3935 case PACKET_FANOUT_DATA: 3939 3936 { 3940 - if (!po->fanout) 3937 + /* Paired with the WRITE_ONCE() in fanout_add() */ 3938 + if (!READ_ONCE(po->fanout)) 3941 3939 return -EINVAL; 3942 3940 3943 3941 return fanout_set_data(po, optval, optlen);
+7 -4
net/sched/cls_api.c
··· 1945 1945 bool prio_allocate; 1946 1946 u32 parent; 1947 1947 u32 chain_index; 1948 - struct Qdisc *q = NULL; 1948 + struct Qdisc *q; 1949 1949 struct tcf_chain_info chain_info; 1950 - struct tcf_chain *chain = NULL; 1950 + struct tcf_chain *chain; 1951 1951 struct tcf_block *block; 1952 1952 struct tcf_proto *tp; 1953 1953 unsigned long cl; ··· 1976 1976 tp = NULL; 1977 1977 cl = 0; 1978 1978 block = NULL; 1979 + q = NULL; 1980 + chain = NULL; 1979 1981 flags = 0; 1980 1982 1981 1983 if (prio == 0) { ··· 2800 2798 struct tcmsg *t; 2801 2799 u32 parent; 2802 2800 u32 chain_index; 2803 - struct Qdisc *q = NULL; 2804 - struct tcf_chain *chain = NULL; 2801 + struct Qdisc *q; 2802 + struct tcf_chain *chain; 2805 2803 struct tcf_block *block; 2806 2804 unsigned long cl; 2807 2805 int err; ··· 2811 2809 return -EPERM; 2812 2810 2813 2811 replay: 2812 + q = NULL; 2814 2813 err = nlmsg_parse_deprecated(n, sizeof(*t), tca, TCA_MAX, 2815 2814 rtm_tca_policy, extack); 2816 2815 if (err < 0)
+118 -15
net/smc/af_smc.c
··· 566 566 mutex_unlock(&net->smc.mutex_fback_rsn); 567 567 } 568 568 569 + /* must be called under rcu read lock */ 570 + static void smc_fback_wakeup_waitqueue(struct smc_sock *smc, void *key) 571 + { 572 + struct socket_wq *wq; 573 + __poll_t flags; 574 + 575 + wq = rcu_dereference(smc->sk.sk_wq); 576 + if (!skwq_has_sleeper(wq)) 577 + return; 578 + 579 + /* wake up smc sk->sk_wq */ 580 + if (!key) { 581 + /* sk_state_change */ 582 + wake_up_interruptible_all(&wq->wait); 583 + } else { 584 + flags = key_to_poll(key); 585 + if (flags & (EPOLLIN | EPOLLOUT)) 586 + /* sk_data_ready or sk_write_space */ 587 + wake_up_interruptible_sync_poll(&wq->wait, flags); 588 + else if (flags & EPOLLERR) 589 + /* sk_error_report */ 590 + wake_up_interruptible_poll(&wq->wait, flags); 591 + } 592 + } 593 + 594 + static int smc_fback_mark_woken(wait_queue_entry_t *wait, 595 + unsigned int mode, int sync, void *key) 596 + { 597 + struct smc_mark_woken *mark = 598 + container_of(wait, struct smc_mark_woken, wait_entry); 599 + 600 + mark->woken = true; 601 + mark->key = key; 602 + return 0; 603 + } 604 + 605 + static void smc_fback_forward_wakeup(struct smc_sock *smc, struct sock *clcsk, 606 + void (*clcsock_callback)(struct sock *sk)) 607 + { 608 + struct smc_mark_woken mark = { .woken = false }; 609 + struct socket_wq *wq; 610 + 611 + init_waitqueue_func_entry(&mark.wait_entry, 612 + smc_fback_mark_woken); 613 + rcu_read_lock(); 614 + wq = rcu_dereference(clcsk->sk_wq); 615 + if (!wq) 616 + goto out; 617 + add_wait_queue(sk_sleep(clcsk), &mark.wait_entry); 618 + clcsock_callback(clcsk); 619 + remove_wait_queue(sk_sleep(clcsk), &mark.wait_entry); 620 + 621 + if (mark.woken) 622 + smc_fback_wakeup_waitqueue(smc, mark.key); 623 + out: 624 + rcu_read_unlock(); 625 + } 626 + 627 + static void smc_fback_state_change(struct sock *clcsk) 628 + { 629 + struct smc_sock *smc = 630 + smc_clcsock_user_data(clcsk); 631 + 632 + if (!smc) 633 + return; 634 + smc_fback_forward_wakeup(smc, clcsk, 
smc->clcsk_state_change); 635 + } 636 + 637 + static void smc_fback_data_ready(struct sock *clcsk) 638 + { 639 + struct smc_sock *smc = 640 + smc_clcsock_user_data(clcsk); 641 + 642 + if (!smc) 643 + return; 644 + smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_data_ready); 645 + } 646 + 647 + static void smc_fback_write_space(struct sock *clcsk) 648 + { 649 + struct smc_sock *smc = 650 + smc_clcsock_user_data(clcsk); 651 + 652 + if (!smc) 653 + return; 654 + smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_write_space); 655 + } 656 + 657 + static void smc_fback_error_report(struct sock *clcsk) 658 + { 659 + struct smc_sock *smc = 660 + smc_clcsock_user_data(clcsk); 661 + 662 + if (!smc) 663 + return; 664 + smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_error_report); 665 + } 666 + 569 667 static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code) 570 668 { 571 - wait_queue_head_t *smc_wait = sk_sleep(&smc->sk); 572 - wait_queue_head_t *clc_wait; 573 - unsigned long flags; 669 + struct sock *clcsk; 574 670 575 671 mutex_lock(&smc->clcsock_release_lock); 576 672 if (!smc->clcsock) { 577 673 mutex_unlock(&smc->clcsock_release_lock); 578 674 return -EBADF; 579 675 } 676 + clcsk = smc->clcsock->sk; 677 + 580 678 smc->use_fallback = true; 581 679 smc->fallback_rsn = reason_code; 582 680 smc_stat_fallback(smc); ··· 685 587 smc->clcsock->wq.fasync_list = 686 588 smc->sk.sk_socket->wq.fasync_list; 687 589 688 - /* There may be some entries remaining in 689 - * smc socket->wq, which should be removed 690 - * to clcsocket->wq during the fallback. 590 + /* There might be some wait entries remaining 591 + * in smc sk->sk_wq and they should be woken up 592 + * as clcsock's wait queue is woken up. 
691 593 */ 692 - clc_wait = sk_sleep(smc->clcsock->sk); 693 - spin_lock_irqsave(&smc_wait->lock, flags); 694 - spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING); 695 - list_splice_init(&smc_wait->head, &clc_wait->head); 696 - spin_unlock(&clc_wait->lock); 697 - spin_unlock_irqrestore(&smc_wait->lock, flags); 594 + smc->clcsk_state_change = clcsk->sk_state_change; 595 + smc->clcsk_data_ready = clcsk->sk_data_ready; 596 + smc->clcsk_write_space = clcsk->sk_write_space; 597 + smc->clcsk_error_report = clcsk->sk_error_report; 598 + 599 + clcsk->sk_state_change = smc_fback_state_change; 600 + clcsk->sk_data_ready = smc_fback_data_ready; 601 + clcsk->sk_write_space = smc_fback_write_space; 602 + clcsk->sk_error_report = smc_fback_error_report; 603 + 604 + smc->clcsock->sk->sk_user_data = 605 + (void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY); 698 606 } 699 607 mutex_unlock(&smc->clcsock_release_lock); 700 608 return 0; ··· 2219 2115 2220 2116 static void smc_clcsock_data_ready(struct sock *listen_clcsock) 2221 2117 { 2222 - struct smc_sock *lsmc; 2118 + struct smc_sock *lsmc = 2119 + smc_clcsock_user_data(listen_clcsock); 2223 2120 2224 - lsmc = (struct smc_sock *) 2225 - ((uintptr_t)listen_clcsock->sk_user_data & ~SK_USER_DATA_NOCOPY); 2226 2121 if (!lsmc) 2227 2122 return; 2228 2123 lsmc->clcsk_data_ready(listen_clcsock);
+19 -1
net/smc/smc.h
··· 139 139 SMC_URG_READ = 3, /* data was already read */ 140 140 }; 141 141 142 + struct smc_mark_woken { 143 + bool woken; 144 + void *key; 145 + wait_queue_entry_t wait_entry; 146 + }; 147 + 142 148 struct smc_connection { 143 149 struct rb_node alert_node; 144 150 struct smc_link_group *lgr; /* link group of connection */ ··· 234 228 struct smc_sock { /* smc sock container */ 235 229 struct sock sk; 236 230 struct socket *clcsock; /* internal tcp socket */ 231 + void (*clcsk_state_change)(struct sock *sk); 232 + /* original stat_change fct. */ 237 233 void (*clcsk_data_ready)(struct sock *sk); 238 - /* original data_ready fct. **/ 234 + /* original data_ready fct. */ 235 + void (*clcsk_write_space)(struct sock *sk); 236 + /* original write_space fct. */ 237 + void (*clcsk_error_report)(struct sock *sk); 238 + /* original error_report fct. */ 239 239 struct smc_connection conn; /* smc connection */ 240 240 struct smc_sock *listen_smc; /* listen parent */ 241 241 struct work_struct connect_work; /* handle non-blocking connect*/ ··· 274 262 static inline struct smc_sock *smc_sk(const struct sock *sk) 275 263 { 276 264 return (struct smc_sock *)sk; 265 + } 266 + 267 + static inline struct smc_sock *smc_clcsock_user_data(struct sock *clcsk) 268 + { 269 + return (struct smc_sock *) 270 + ((uintptr_t)clcsk->sk_user_data & ~SK_USER_DATA_NOCOPY); 277 271 } 278 272 279 273 extern struct workqueue_struct *smc_hs_wq; /* wq for handshake work */
-2
net/smc/smc_diag.c
··· 146 146 (req->diag_ext & (1 << (SMC_DIAG_LGRINFO - 1))) && 147 147 !list_empty(&smc->conn.lgr->list)) { 148 148 struct smc_link *link = smc->conn.lnk; 149 - struct net *net = read_pnet(&link->smcibdev->ibdev->coredev.rdma_net); 150 149 151 150 struct smc_diag_lgrinfo linfo = { 152 151 .role = smc->conn.lgr->role, 153 152 .lnk[0].ibport = link->ibport, 154 153 .lnk[0].link_id = link->link_id, 155 - .lnk[0].net_cookie = net->net_cookie, 156 154 }; 157 155 158 156 memcpy(linfo.lnk[0].ibname,
+2 -1
security/selinux/ss/conditional.c
··· 152 152 for (i = 0; i < p->cond_list_len; i++) 153 153 cond_node_destroy(&p->cond_list[i]); 154 154 kfree(p->cond_list); 155 + p->cond_list = NULL; 156 + p->cond_list_len = 0; 155 157 } 156 158 157 159 void cond_policydb_destroy(struct policydb *p) ··· 443 441 return 0; 444 442 err: 445 443 cond_list_destroy(p); 446 - p->cond_list = NULL; 447 444 return rc; 448 445 } 449 446
+13
sound/core/pcm_native.c
··· 172 172 } 173 173 EXPORT_SYMBOL_GPL(_snd_pcm_stream_lock_irqsave); 174 174 175 + unsigned long _snd_pcm_stream_lock_irqsave_nested(struct snd_pcm_substream *substream) 176 + { 177 + unsigned long flags = 0; 178 + if (substream->pcm->nonatomic) 179 + mutex_lock_nested(&substream->self_group.mutex, 180 + SINGLE_DEPTH_NESTING); 181 + else 182 + spin_lock_irqsave_nested(&substream->self_group.lock, flags, 183 + SINGLE_DEPTH_NESTING); 184 + return flags; 185 + } 186 + EXPORT_SYMBOL_GPL(_snd_pcm_stream_lock_irqsave_nested); 187 + 175 188 /** 176 189 * snd_pcm_stream_unlock_irqrestore - Unlock the PCM stream 177 190 * @substream: PCM substream
+3 -4
sound/hda/intel-sdw-acpi.c
··· 50 50 static int 51 51 sdw_intel_scan_controller(struct sdw_intel_acpi_info *info) 52 52 { 53 - struct acpi_device *adev; 53 + struct acpi_device *adev = acpi_fetch_acpi_dev(info->handle); 54 54 int ret, i; 55 55 u8 count; 56 56 57 - if (acpi_bus_get_device(info->handle, &adev)) 57 + if (!adev) 58 58 return -EINVAL; 59 59 60 60 /* Found controller, find links supported */ ··· 119 119 void *cdata, void **return_value) 120 120 { 121 121 struct sdw_intel_acpi_info *info = cdata; 122 - struct acpi_device *adev; 123 122 acpi_status status; 124 123 u64 adr; 125 124 ··· 126 127 if (ACPI_FAILURE(status)) 127 128 return AE_OK; /* keep going */ 128 129 129 - if (acpi_bus_get_device(handle, &adev)) { 130 + if (!acpi_fetch_acpi_dev(handle)) { 130 131 pr_err("%s: Couldn't find ACPI handle\n", __func__); 131 132 return AE_NOT_FOUND; 132 133 }
+1 -1
sound/pci/hda/hda_auto_parser.c
··· 981 981 int id = HDA_FIXUP_ID_NOT_SET; 982 982 const char *name = NULL; 983 983 const char *type = NULL; 984 - int vendor, device; 984 + unsigned int vendor, device; 985 985 986 986 if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET) 987 987 return;
+4
sound/pci/hda/hda_codec.c
··· 3000 3000 { 3001 3001 struct hda_pcm *cpcm; 3002 3002 3003 + /* Skip the shutdown if codec is not registered */ 3004 + if (!codec->registered) 3005 + return; 3006 + 3003 3007 list_for_each_entry(cpcm, &codec->pcm_list_head, list) 3004 3008 snd_pcm_suspend_all(cpcm->pcm); 3005 3009
+15 -2
sound/pci/hda/hda_generic.c
··· 91 91 free_kctls(spec); 92 92 snd_array_free(&spec->paths); 93 93 snd_array_free(&spec->loopback_list); 94 + #ifdef CONFIG_SND_HDA_GENERIC_LEDS 95 + if (spec->led_cdevs[LED_AUDIO_MUTE]) 96 + led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MUTE]); 97 + if (spec->led_cdevs[LED_AUDIO_MICMUTE]) 98 + led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MICMUTE]); 99 + #endif 94 100 } 95 101 96 102 /* ··· 3928 3922 enum led_brightness), 3929 3923 bool micmute) 3930 3924 { 3925 + struct hda_gen_spec *spec = codec->spec; 3931 3926 struct led_classdev *cdev; 3927 + int idx = micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE; 3928 + int err; 3932 3929 3933 3930 cdev = devm_kzalloc(&codec->core.dev, sizeof(*cdev), GFP_KERNEL); 3934 3931 if (!cdev) ··· 3941 3932 cdev->max_brightness = 1; 3942 3933 cdev->default_trigger = micmute ? "audio-micmute" : "audio-mute"; 3943 3934 cdev->brightness_set_blocking = callback; 3944 - cdev->brightness = ledtrig_audio_get(micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE); 3935 + cdev->brightness = ledtrig_audio_get(idx); 3945 3936 cdev->flags = LED_CORE_SUSPENDRESUME; 3946 3937 3947 - return devm_led_classdev_register(&codec->core.dev, cdev); 3938 + err = led_classdev_register(&codec->core.dev, cdev); 3939 + if (err < 0) 3940 + return err; 3941 + spec->led_cdevs[idx] = cdev; 3942 + return 0; 3948 3943 } 3949 3944 3950 3945 /**
+3
sound/pci/hda/hda_generic.h
··· 294 294 struct hda_jack_callback *cb); 295 295 void (*mic_autoswitch_hook)(struct hda_codec *codec, 296 296 struct hda_jack_callback *cb); 297 + 298 + /* leds */ 299 + struct led_classdev *led_cdevs[NUM_AUDIO_LEDS]; 297 300 }; 298 301 299 302 /* values for add_stereo_mix_input flag */
+55 -12
sound/pci/hda/patch_realtek.c
··· 98 98 unsigned int gpio_mic_led_mask; 99 99 struct alc_coef_led mute_led_coef; 100 100 struct alc_coef_led mic_led_coef; 101 + struct mutex coef_mutex; 101 102 102 103 hda_nid_t headset_mic_pin; 103 104 hda_nid_t headphone_mic_pin; ··· 138 137 * COEF access helper functions 139 138 */ 140 139 141 - static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 142 - unsigned int coef_idx) 140 + static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 141 + unsigned int coef_idx) 143 142 { 144 143 unsigned int val; 145 144 ··· 148 147 return val; 149 148 } 150 149 150 + static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 151 + unsigned int coef_idx) 152 + { 153 + struct alc_spec *spec = codec->spec; 154 + unsigned int val; 155 + 156 + mutex_lock(&spec->coef_mutex); 157 + val = __alc_read_coefex_idx(codec, nid, coef_idx); 158 + mutex_unlock(&spec->coef_mutex); 159 + return val; 160 + } 161 + 151 162 #define alc_read_coef_idx(codec, coef_idx) \ 152 163 alc_read_coefex_idx(codec, 0x20, coef_idx) 153 164 154 - static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 155 - unsigned int coef_idx, unsigned int coef_val) 165 + static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 166 + unsigned int coef_idx, unsigned int coef_val) 156 167 { 157 168 snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_COEF_INDEX, coef_idx); 158 169 snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_PROC_COEF, coef_val); 159 170 } 160 171 172 + static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 173 + unsigned int coef_idx, unsigned int coef_val) 174 + { 175 + struct alc_spec *spec = codec->spec; 176 + 177 + mutex_lock(&spec->coef_mutex); 178 + __alc_write_coefex_idx(codec, nid, coef_idx, coef_val); 179 + mutex_unlock(&spec->coef_mutex); 180 + } 181 + 161 182 #define alc_write_coef_idx(codec, coef_idx, coef_val) \ 162 183 alc_write_coefex_idx(codec, 0x20, coef_idx, coef_val) 184 + 185 + static void 
__alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 186 + unsigned int coef_idx, unsigned int mask, 187 + unsigned int bits_set) 188 + { 189 + unsigned int val = __alc_read_coefex_idx(codec, nid, coef_idx); 190 + 191 + if (val != -1) 192 + __alc_write_coefex_idx(codec, nid, coef_idx, 193 + (val & ~mask) | bits_set); 194 + } 163 195 164 196 static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid, 165 197 unsigned int coef_idx, unsigned int mask, 166 198 unsigned int bits_set) 167 199 { 168 - unsigned int val = alc_read_coefex_idx(codec, nid, coef_idx); 200 + struct alc_spec *spec = codec->spec; 169 201 170 - if (val != -1) 171 - alc_write_coefex_idx(codec, nid, coef_idx, 172 - (val & ~mask) | bits_set); 202 + mutex_lock(&spec->coef_mutex); 203 + __alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set); 204 + mutex_unlock(&spec->coef_mutex); 173 205 } 174 206 175 207 #define alc_update_coef_idx(codec, coef_idx, mask, bits_set) \ ··· 235 201 static void alc_process_coef_fw(struct hda_codec *codec, 236 202 const struct coef_fw *fw) 237 203 { 204 + struct alc_spec *spec = codec->spec; 205 + 206 + mutex_lock(&spec->coef_mutex); 238 207 for (; fw->nid; fw++) { 239 208 if (fw->mask == (unsigned short)-1) 240 - alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val); 209 + __alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val); 241 210 else 242 - alc_update_coefex_idx(codec, fw->nid, fw->idx, 243 - fw->mask, fw->val); 211 + __alc_update_coefex_idx(codec, fw->nid, fw->idx, 212 + fw->mask, fw->val); 244 213 } 214 + mutex_unlock(&spec->coef_mutex); 245 215 } 246 216 247 217 /* ··· 1191 1153 codec->spdif_status_reset = 1; 1192 1154 codec->forced_resume = 1; 1193 1155 codec->patch_ops = alc_patch_ops; 1156 + mutex_init(&spec->coef_mutex); 1194 1157 1195 1158 err = alc_codec_rename_from_preset(codec); 1196 1159 if (err < 0) { ··· 2164 2125 { 2165 2126 static const hda_nid_t conn1[] = { 0x0c }; 2166 2127 static const struct coef_fw 
gb_x570_coefs[] = { 2128 + WRITE_COEF(0x07, 0x03c0), 2167 2129 WRITE_COEF(0x1a, 0x01c1), 2168 2130 WRITE_COEF(0x1b, 0x0202), 2169 2131 WRITE_COEF(0x43, 0x3005), ··· 2591 2551 SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE), 2592 2552 SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), 2593 2553 SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570), 2594 - SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950), 2554 + SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_GB_X570), 2555 + SND_PCI_QUIRK(0x1458, 0xa0d5, "Gigabyte X570S Aorus Master", ALC1220_FIXUP_GB_X570), 2595 2556 SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950), 2596 2557 SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950), 2597 2558 SND_PCI_QUIRK(0x1462, 0x1229, "MSI-GP73", ALC1220_FIXUP_CLEVO_P950), ··· 2667 2626 {.id = ALC882_FIXUP_NO_PRIMARY_HP, .name = "no-primary-hp"}, 2668 2627 {.id = ALC887_FIXUP_ASUS_BASS, .name = "asus-bass"}, 2669 2628 {.id = ALC1220_FIXUP_GB_DUAL_CODECS, .name = "dual-codecs"}, 2629 + {.id = ALC1220_FIXUP_GB_X570, .name = "gb-x570"}, 2670 2630 {.id = ALC1220_FIXUP_CLEVO_P950, .name = "clevo-p950"}, 2671 2631 {} 2672 2632 }; ··· 9011 8969 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 9012 8970 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 9013 8971 SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401), 8972 + SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), 9014 8973 SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), 9015 8974 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC), 9016 8975 SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+2 -2
sound/soc/amd/acp/acp-mach-common.c
··· 303 303 304 304 static struct snd_soc_codec_conf rt1019_conf[] = { 305 305 { 306 - .dlc = COMP_CODEC_CONF("i2c-10EC1019:00"), 306 + .dlc = COMP_CODEC_CONF("i2c-10EC1019:01"), 307 307 .name_prefix = "Left", 308 308 }, 309 309 { 310 - .dlc = COMP_CODEC_CONF("i2c-10EC1019:01"), 310 + .dlc = COMP_CODEC_CONF("i2c-10EC1019:00"), 311 311 .name_prefix = "Right", 312 312 }, 313 313 };
+2
sound/soc/codecs/cpcap.c
··· 1667 1667 { 1668 1668 struct device_node *codec_node = 1669 1669 of_get_child_by_name(pdev->dev.parent->of_node, "audio-codec"); 1670 + if (!codec_node) 1671 + return -ENODEV; 1670 1672 1671 1673 pdev->dev.of_node = codec_node; 1672 1674
+1 -1
sound/soc/codecs/hdmi-codec.c
··· 277 277 bool busy; 278 278 struct snd_soc_jack *jack; 279 279 unsigned int jack_status; 280 - u8 iec_status[5]; 280 + u8 iec_status[AES_IEC958_STATUS_SIZE]; 281 281 }; 282 282 283 283 static const struct snd_soc_dapm_widget hdmi_widgets[] = {
+4 -4
sound/soc/codecs/lpass-rx-macro.c
··· 2688 2688 int reg, b2_reg; 2689 2689 2690 2690 /* Address does not automatically update if reading */ 2691 - reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 16 * iir_idx; 2692 - b2_reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 16 * iir_idx; 2691 + reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 0x80 * iir_idx; 2692 + b2_reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 0x80 * iir_idx; 2693 2693 2694 2694 snd_soc_component_write(component, reg, 2695 2695 ((band_idx * BAND_MAX + coeff_idx) * ··· 2718 2718 static void set_iir_band_coeff(struct snd_soc_component *component, 2719 2719 int iir_idx, int band_idx, uint32_t value) 2720 2720 { 2721 - int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 16 * iir_idx; 2721 + int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 0x80 * iir_idx; 2722 2722 2723 2723 snd_soc_component_write(component, reg, (value & 0xFF)); 2724 2724 snd_soc_component_write(component, reg, (value >> 8) & 0xFF); ··· 2739 2739 int iir_idx = ctl->iir_idx; 2740 2740 int band_idx = ctl->band_idx; 2741 2741 u32 coeff[BAND_MAX]; 2742 - int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 16 * iir_idx; 2742 + int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 0x80 * iir_idx; 2743 2743 2744 2744 memcpy(&coeff[0], ucontrol->value.bytes.data, params->max); 2745 2745
+2 -1
sound/soc/codecs/max9759.c
··· 64 64 struct snd_soc_component *c = snd_soc_kcontrol_component(kcontrol); 65 65 struct max9759 *priv = snd_soc_component_get_drvdata(c); 66 66 67 - if (ucontrol->value.integer.value[0] > 3) 67 + if (ucontrol->value.integer.value[0] < 0 || 68 + ucontrol->value.integer.value[0] > 3) 68 69 return -EINVAL; 69 70 70 71 priv->gain = ucontrol->value.integer.value[0];
+4 -11
sound/soc/codecs/rt5682-i2c.c
··· 59 59 struct rt5682_priv *rt5682 = container_of(work, struct rt5682_priv, 60 60 jd_check_work.work); 61 61 62 - if (snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL) 63 - & RT5682_JDH_RS_MASK) { 62 + if (snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL) & RT5682_JDH_RS_MASK) 64 63 /* jack out */ 65 - rt5682->jack_type = rt5682_headset_detect(rt5682->component, 0); 66 - 67 - snd_soc_jack_report(rt5682->hs_jack, rt5682->jack_type, 68 - SND_JACK_HEADSET | 69 - SND_JACK_BTN_0 | SND_JACK_BTN_1 | 70 - SND_JACK_BTN_2 | SND_JACK_BTN_3); 71 - } else { 64 + mod_delayed_work(system_power_efficient_wq, 65 + &rt5682->jack_detect_work, 0); 66 + else 72 67 schedule_delayed_work(&rt5682->jd_check_work, 500); 73 - } 74 68 } 75 69 76 70 static irqreturn_t rt5682_irq(int irq, void *data) ··· 192 198 } 193 199 194 200 mutex_init(&rt5682->calibrate_mutex); 195 - mutex_init(&rt5682->jdet_mutex); 196 201 rt5682_calibrate(rt5682); 197 202 198 203 rt5682_apply_patch_list(rt5682, &i2c->dev);
+8 -16
sound/soc/codecs/rt5682.c
··· 922 922 * 923 923 * Returns detect status. 924 924 */ 925 - int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert) 925 + static int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert) 926 926 { 927 927 struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component); 928 928 struct snd_soc_dapm_context *dapm = &component->dapm; 929 929 unsigned int val, count; 930 930 931 931 if (jack_insert) { 932 - snd_soc_dapm_mutex_lock(dapm); 933 - 934 932 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 935 933 RT5682_PWR_VREF2 | RT5682_PWR_MB, 936 934 RT5682_PWR_VREF2 | RT5682_PWR_MB); ··· 979 981 snd_soc_component_update_bits(component, RT5682_MICBIAS_2, 980 982 RT5682_PWR_CLK25M_MASK | RT5682_PWR_CLK1M_MASK, 981 983 RT5682_PWR_CLK25M_PU | RT5682_PWR_CLK1M_PU); 982 - 983 - snd_soc_dapm_mutex_unlock(dapm); 984 984 } else { 985 985 rt5682_enable_push_button_irq(component, false); 986 986 snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1, ··· 1007 1011 dev_dbg(component->dev, "jack_type = %d\n", rt5682->jack_type); 1008 1012 return rt5682->jack_type; 1009 1013 } 1010 - EXPORT_SYMBOL_GPL(rt5682_headset_detect); 1011 1014 1012 1015 static int rt5682_set_jack_detect(struct snd_soc_component *component, 1013 1016 struct snd_soc_jack *hs_jack, void *data) ··· 1089 1094 { 1090 1095 struct rt5682_priv *rt5682 = 1091 1096 container_of(work, struct rt5682_priv, jack_detect_work.work); 1097 + struct snd_soc_dapm_context *dapm; 1092 1098 int val, btn_type; 1093 1099 1094 1100 while (!rt5682->component) ··· 1098 1102 while (!rt5682->component->card->instantiated) 1099 1103 usleep_range(10000, 15000); 1100 1104 1101 - mutex_lock(&rt5682->jdet_mutex); 1105 + dapm = snd_soc_component_get_dapm(rt5682->component); 1106 + 1107 + snd_soc_dapm_mutex_lock(dapm); 1102 1108 mutex_lock(&rt5682->calibrate_mutex); 1103 1109 1104 1110 val = snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL) ··· 1160 1162 
rt5682->irq_work_delay_time = 50; 1161 1163 } 1162 1164 1165 + mutex_unlock(&rt5682->calibrate_mutex); 1166 + snd_soc_dapm_mutex_unlock(dapm); 1167 + 1163 1168 snd_soc_jack_report(rt5682->hs_jack, rt5682->jack_type, 1164 1169 SND_JACK_HEADSET | 1165 1170 SND_JACK_BTN_0 | SND_JACK_BTN_1 | ··· 1175 1174 else 1176 1175 cancel_delayed_work_sync(&rt5682->jd_check_work); 1177 1176 } 1178 - 1179 - mutex_unlock(&rt5682->calibrate_mutex); 1180 - mutex_unlock(&rt5682->jdet_mutex); 1181 1177 } 1182 1178 EXPORT_SYMBOL_GPL(rt5682_jack_detect_handler); 1183 1179 ··· 1524 1526 { 1525 1527 struct snd_soc_component *component = 1526 1528 snd_soc_dapm_to_component(w->dapm); 1527 - struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component); 1528 1529 1529 1530 switch (event) { 1530 1531 case SND_SOC_DAPM_PRE_PMU: ··· 1535 1538 RT5682_DEPOP_1, 0x60, 0x60); 1536 1539 snd_soc_component_update_bits(component, 1537 1540 RT5682_DAC_ADC_DIG_VOL1, 0x00c0, 0x0080); 1538 - 1539 - mutex_lock(&rt5682->jdet_mutex); 1540 - 1541 1541 snd_soc_component_update_bits(component, RT5682_HP_CTRL_2, 1542 1542 RT5682_HP_C2_DAC_L_EN | RT5682_HP_C2_DAC_R_EN, 1543 1543 RT5682_HP_C2_DAC_L_EN | RT5682_HP_C2_DAC_R_EN); 1544 1544 usleep_range(5000, 10000); 1545 1545 snd_soc_component_update_bits(component, RT5682_CHARGE_PUMP_1, 1546 1546 RT5682_CP_SW_SIZE_MASK, RT5682_CP_SW_SIZE_L); 1547 - 1548 - mutex_unlock(&rt5682->jdet_mutex); 1549 1547 break; 1550 1548 1551 1549 case SND_SOC_DAPM_POST_PMD:
-2
sound/soc/codecs/rt5682.h
··· 1463 1463 1464 1464 int jack_type; 1465 1465 int irq_work_delay_time; 1466 - struct mutex jdet_mutex; 1467 1466 }; 1468 1467 1469 1468 extern const char *rt5682_supply_names[RT5682_NUM_SUPPLIES]; ··· 1472 1473 1473 1474 void rt5682_apply_patch_list(struct rt5682_priv *rt5682, struct device *dev); 1474 1475 1475 - int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert); 1476 1476 void rt5682_jack_detect_handler(struct work_struct *work); 1477 1477 1478 1478 bool rt5682_volatile_register(struct device *dev, unsigned int reg);
+19 -16
sound/soc/codecs/wcd938x.c
··· 1432 1432 return 0; 1433 1433 } 1434 1434 1435 - static int wcd938x_connect_port(struct wcd938x_sdw_priv *wcd, u8 ch_id, u8 enable) 1435 + static int wcd938x_connect_port(struct wcd938x_sdw_priv *wcd, u8 port_num, u8 ch_id, u8 enable) 1436 1436 { 1437 - u8 port_num; 1438 - 1439 - port_num = wcd->ch_info[ch_id].port_num; 1440 - 1441 1437 return wcd938x_sdw_connect_port(&wcd->ch_info[ch_id], 1442 - &wcd->port_config[port_num], 1438 + &wcd->port_config[port_num - 1], 1443 1439 enable); 1444 1440 } 1445 1441 ··· 2559 2563 WCD938X_EAR_GAIN_MASK, 2560 2564 ucontrol->value.integer.value[0]); 2561 2565 2562 - return 0; 2566 + return 1; 2563 2567 } 2564 2568 2565 2569 static int wcd938x_get_compander(struct snd_kcontrol *kcontrol, ··· 2589 2593 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2590 2594 struct wcd938x_sdw_priv *wcd; 2591 2595 int value = ucontrol->value.integer.value[0]; 2596 + int portidx; 2592 2597 struct soc_mixer_control *mc; 2593 2598 bool hphr; 2594 2599 ··· 2603 2606 else 2604 2607 wcd938x->comp1_enable = value; 2605 2608 2606 - if (value) 2607 - wcd938x_connect_port(wcd, mc->reg, true); 2608 - else 2609 - wcd938x_connect_port(wcd, mc->reg, false); 2609 + portidx = wcd->ch_info[mc->reg].port_num; 2610 2610 2611 - return 0; 2611 + if (value) 2612 + wcd938x_connect_port(wcd, portidx, mc->reg, true); 2613 + else 2614 + wcd938x_connect_port(wcd, portidx, mc->reg, false); 2615 + 2616 + return 1; 2612 2617 } 2613 2618 2614 2619 static int wcd938x_ldoh_get(struct snd_kcontrol *kcontrol, ··· 2881 2882 struct wcd938x_sdw_priv *wcd; 2882 2883 struct soc_mixer_control *mixer = (struct soc_mixer_control *)kcontrol->private_value; 2883 2884 int dai_id = mixer->shift; 2884 - int portidx = mixer->reg; 2885 + int portidx, ch_idx = mixer->reg; 2886 + 2885 2887 2886 2888 wcd = wcd938x->sdw_priv[dai_id]; 2889 + portidx = wcd->ch_info[ch_idx].port_num; 2887 2890 2888 2891 ucontrol->value.integer.value[0] = wcd->port_enable[portidx]; 2889 2892 
··· 2900 2899 struct wcd938x_sdw_priv *wcd; 2901 2900 struct soc_mixer_control *mixer = 2902 2901 (struct soc_mixer_control *)kcontrol->private_value; 2903 - int portidx = mixer->reg; 2902 + int ch_idx = mixer->reg; 2903 + int portidx; 2904 2904 int dai_id = mixer->shift; 2905 2905 bool enable; 2906 2906 2907 2907 wcd = wcd938x->sdw_priv[dai_id]; 2908 2908 2909 + portidx = wcd->ch_info[ch_idx].port_num; 2909 2910 if (ucontrol->value.integer.value[0]) 2910 2911 enable = true; 2911 2912 else ··· 2915 2912 2916 2913 wcd->port_enable[portidx] = enable; 2917 2914 2918 - wcd938x_connect_port(wcd, portidx, enable); 2915 + wcd938x_connect_port(wcd, portidx, ch_idx, enable); 2919 2916 2920 - return 0; 2917 + return 1; 2921 2918 2922 2919 } 2923 2920
+8 -3
sound/soc/fsl/pcm030-audio-fabric.c
··· 93 93 dev_err(&op->dev, "platform_device_alloc() failed\n"); 94 94 95 95 ret = platform_device_add(pdata->codec_device); 96 - if (ret) 96 + if (ret) { 97 97 dev_err(&op->dev, "platform_device_add() failed: %d\n", ret); 98 + platform_device_put(pdata->codec_device); 99 + } 98 100 99 101 ret = snd_soc_register_card(card); 100 - if (ret) 102 + if (ret) { 101 103 dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret); 104 + platform_device_del(pdata->codec_device); 105 + platform_device_put(pdata->codec_device); 106 + } 102 107 103 108 platform_set_drvdata(op, pdata); 104 - 105 109 return ret; 110 + 106 111 } 107 112 108 113 static int pcm030_fabric_remove(struct platform_device *op)
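The pcm030 hunk above pairs each failure path with the matching teardown: `platform_device_put()` after a failed add, `platform_device_del()` plus `put()` after a failed card registration. A minimal standalone sketch of that add/del/put refcount discipline, using hypothetical stand-in types rather than the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for platform_device refcounting (not the kernel API). */
struct fake_pdev {
	int refs;   /* reference count, dropped by fake_pdev_put() */
	int added;  /* registered with the bus, set by fake_pdev_add() */
};

struct fake_pdev *fake_pdev_alloc(void)
{
	struct fake_pdev *p = calloc(1, sizeof(*p));

	if (p)
		p->refs = 1;		/* alloc hands back one reference */
	return p;
}

int fake_pdev_add(struct fake_pdev *p, int should_fail)
{
	if (should_fail)
		return -1;		/* caller still owns the reference and must put() it */
	p->added = 1;
	return 0;
}

void fake_pdev_del(struct fake_pdev *p)
{
	p->added = 0;			/* undoes add, but does not drop the reference */
}

int fake_pdev_put(struct fake_pdev *p)
{
	int refs = --p->refs;		/* drop one reference */

	if (refs == 0)
		free(p);
	return refs;
}
```

The point of the fix is that `add` failing (or a later step failing after a successful `add`) must still end with exactly one `put()` so the allocation is released.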
+25 -1
sound/soc/generic/simple-card.c
··· 28 28 .hw_params = asoc_simple_hw_params, 29 29 }; 30 30 31 + static int asoc_simple_parse_platform(struct device_node *node, 32 + struct snd_soc_dai_link_component *dlc) 33 + { 34 + struct of_phandle_args args; 35 + int ret; 36 + 37 + if (!node) 38 + return 0; 39 + 40 + /* 41 + * Get node via "sound-dai = <&phandle port>" 42 + * it will be used as xxx_of_node on soc_bind_dai_link() 43 + */ 44 + ret = of_parse_phandle_with_args(node, DAI, CELL, 0, &args); 45 + if (ret) 46 + return ret; 47 + 48 + /* dai_name is not required and may not exist for plat component */ 49 + 50 + dlc->of_node = args.np; 51 + 52 + return 0; 53 + } 54 + 31 55 static int asoc_simple_parse_dai(struct device_node *node, 32 56 struct snd_soc_dai_link_component *dlc, 33 57 int *is_single_link) ··· 313 289 if (ret < 0) 314 290 goto dai_link_of_err; 315 291 316 - ret = asoc_simple_parse_dai(plat, platforms, NULL); 292 + ret = asoc_simple_parse_platform(plat, platforms); 317 293 if (ret < 0) 318 294 goto dai_link_of_err; 319 295
+1 -1
sound/soc/mediatek/Kconfig
··· 216 216 217 217 config SND_SOC_MT8195_MT6359_RT1011_RT5682 218 218 tristate "ASoC Audio driver for MT8195 with MT6359 RT1011 RT5682 codec" 219 - depends on I2C 219 + depends on I2C && GPIOLIB 220 220 depends on SND_SOC_MT8195 && MTK_PMIC_WRAP 221 221 select SND_SOC_MT6359 222 222 select SND_SOC_RT1011
+5 -2
sound/soc/qcom/qdsp6/q6apm-dai.c
··· 308 308 struct snd_pcm_runtime *runtime = substream->runtime; 309 309 struct q6apm_dai_rtd *prtd = runtime->private_data; 310 310 311 - q6apm_graph_stop(prtd->graph); 312 - q6apm_unmap_memory_regions(prtd->graph, substream->stream); 311 + if (prtd->state) { /* only stop graph that is started */ 312 + q6apm_graph_stop(prtd->graph); 313 + q6apm_unmap_memory_regions(prtd->graph, substream->stream); 314 + } 315 + 313 316 q6apm_graph_close(prtd->graph); 314 317 prtd->graph = NULL; 315 318 kfree(prtd);
+2 -5
sound/soc/soc-acpi.c
··· 55 55 static acpi_status snd_soc_acpi_find_package(acpi_handle handle, u32 level, 56 56 void *context, void **ret) 57 57 { 58 - struct acpi_device *adev; 58 + struct acpi_device *adev = acpi_fetch_acpi_dev(handle); 59 59 acpi_status status; 60 60 struct snd_soc_acpi_package_context *pkg_ctx = context; 61 61 62 62 pkg_ctx->data_valid = false; 63 63 64 - if (acpi_bus_get_device(handle, &adev)) 65 - return AE_OK; 66 - 67 - if (adev->status.present && adev->status.functional) { 64 + if (adev && adev->status.present && adev->status.functional) { 68 65 struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; 69 66 union acpi_object *myobj = NULL; 70 67
+26 -3
sound/soc/soc-ops.c
··· 316 316 if (sign_bit) 317 317 mask = BIT(sign_bit + 1) - 1; 318 318 319 - val = ((ucontrol->value.integer.value[0] + min) & mask); 319 + if (ucontrol->value.integer.value[0] < 0) 320 + return -EINVAL; 321 + val = ucontrol->value.integer.value[0]; 322 + if (mc->platform_max && val > mc->platform_max) 323 + return -EINVAL; 324 + if (val > max - min) 325 + return -EINVAL; 326 + val = (val + min) & mask; 320 327 if (invert) 321 328 val = max - val; 322 329 val_mask = mask << shift; 323 330 val = val << shift; 324 331 if (snd_soc_volsw_is_stereo(mc)) { 325 - val2 = ((ucontrol->value.integer.value[1] + min) & mask); 332 + if (ucontrol->value.integer.value[1] < 0) 333 + return -EINVAL; 334 + val2 = ucontrol->value.integer.value[1]; 335 + if (mc->platform_max && val2 > mc->platform_max) 336 + return -EINVAL; 337 + if (val2 > max - min) 338 + return -EINVAL; 339 + val2 = (val2 + min) & mask; 326 340 if (invert) 327 341 val2 = max - val2; 328 342 if (reg == reg2) { ··· 423 409 int err = 0; 424 410 unsigned int val, val_mask; 425 411 412 + if (ucontrol->value.integer.value[0] < 0) 413 + return -EINVAL; 414 + val = ucontrol->value.integer.value[0]; 415 + if (mc->platform_max && val > mc->platform_max) 416 + return -EINVAL; 417 + if (val > max - min) 418 + return -EINVAL; 426 419 val_mask = mask << shift; 427 - val = (ucontrol->value.integer.value[0] + min) & mask; 420 + val = (val + min) & mask; 428 421 val = val << shift; 429 422 430 423 err = snd_soc_component_update_bits(component, reg, val_mask, val); ··· 879 858 long val = ucontrol->value.integer.value[0]; 880 859 unsigned int i; 881 860 861 + if (val < mc->min || val > mc->max) 862 + return -EINVAL; 882 863 if (invert) 883 864 val = max - val; 884 865 val &= mask;
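The soc-ops.c hunk rejects out-of-range userspace control values before applying the `min` offset and mask, in a fixed order: negative values first, then the optional `platform_max` cap, then the `min..max` span. A sketch of that validation order with a hypothetical helper (not the kernel function):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical validator mirroring the order of checks added in
 * snd_soc_put_volsw(): reject negatives, then an optional platform cap,
 * then the min..max span, before offsetting by min and masking. */
static int validate_ctl_val(long in, int min, int max,
			    int platform_max, unsigned int mask,
			    unsigned int *out)
{
	if (in < 0)
		return -EINVAL;
	if (platform_max && in > platform_max)
		return -EINVAL;
	if (in > max - min)
		return -EINVAL;
	*out = ((unsigned int)in + min) & mask;	/* only reached for sane input */
	return 0;
}
```

Without these checks a crafted negative or oversized value survives the masking step and lands in the register write, which is the bug class the hunk closes.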
+13 -7
sound/soc/soc-pcm.c
··· 46 46 snd_pcm_stream_lock_irq(snd_soc_dpcm_get_substream(rtd, stream)); 47 47 } 48 48 49 - #define snd_soc_dpcm_stream_lock_irqsave(rtd, stream, flags) \ 50 - snd_pcm_stream_lock_irqsave(snd_soc_dpcm_get_substream(rtd, stream), flags) 49 + #define snd_soc_dpcm_stream_lock_irqsave_nested(rtd, stream, flags) \ 50 + snd_pcm_stream_lock_irqsave_nested(snd_soc_dpcm_get_substream(rtd, stream), flags) 51 51 52 52 static inline void snd_soc_dpcm_stream_unlock_irq(struct snd_soc_pcm_runtime *rtd, 53 53 int stream) ··· 1268 1268 void dpcm_be_disconnect(struct snd_soc_pcm_runtime *fe, int stream) 1269 1269 { 1270 1270 struct snd_soc_dpcm *dpcm, *d; 1271 + LIST_HEAD(deleted_dpcms); 1271 1272 1272 1273 snd_soc_dpcm_mutex_assert_held(fe); 1273 1274 ··· 1288 1287 /* BEs still alive need new FE */ 1289 1288 dpcm_be_reparent(fe, dpcm->be, stream); 1290 1289 1291 - dpcm_remove_debugfs_state(dpcm); 1292 - 1293 1290 list_del(&dpcm->list_be); 1294 - list_del(&dpcm->list_fe); 1295 - kfree(dpcm); 1291 + list_move(&dpcm->list_fe, &deleted_dpcms); 1296 1292 } 1297 1293 snd_soc_dpcm_stream_unlock_irq(fe, stream); 1294 + 1295 + while (!list_empty(&deleted_dpcms)) { 1296 + dpcm = list_first_entry(&deleted_dpcms, struct snd_soc_dpcm, 1297 + list_fe); 1298 + list_del(&dpcm->list_fe); 1299 + dpcm_remove_debugfs_state(dpcm); 1300 + kfree(dpcm); 1301 + } 1298 1302 } 1299 1303 1300 1304 /* get BE for DAI widget and stream */ ··· 2100 2094 be = dpcm->be; 2101 2095 be_substream = snd_soc_dpcm_get_substream(be, stream); 2102 2096 2103 - snd_soc_dpcm_stream_lock_irqsave(be, stream, flags); 2097 + snd_soc_dpcm_stream_lock_irqsave_nested(be, stream, flags); 2104 2098 2105 2099 /* is this op for this BE ? */ 2106 2100 if (!snd_soc_dpcm_be_can_update(fe, be, stream))
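The soc-pcm.c hunk moves `kfree()` and the debugfs teardown out of the irq-disabled stream lock: entries are moved onto a local `deleted_dpcms` list while the lock is held and destroyed only after unlock. A userspace sketch of that collect-under-lock, free-after-unlock pattern, with a pthread mutex standing in for the stream lock (hypothetical types, not the DPCM code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int id;
};

/* Move every node from *head onto a local list while the lock is held;
 * free them only after the lock is dropped, mirroring the deleted_dpcms
 * list in dpcm_be_disconnect(). */
static void disconnect_all(pthread_mutex_t *lock, struct node **head,
			   int *freed)
{
	struct node *deleted = NULL;

	pthread_mutex_lock(lock);
	while (*head) {
		struct node *n = *head;

		*head = n->next;
		n->next = deleted;	/* the list_move() equivalent */
		deleted = n;
	}
	pthread_mutex_unlock(lock);

	while (deleted) {		/* safe: no lock held around free() */
		struct node *n = deleted;

		deleted = n->next;
		free(n);
		(*freed)++;
	}
}
```

The design choice is that the expensive or sleep-prone cleanup never runs inside the lock, while list consistency is still guaranteed because the unlink happens atomically under it.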
+24 -3
sound/soc/xilinx/xlnx_formatter_pcm.c
··· 37 37 #define XLNX_AUD_XFER_COUNT 0x28 38 38 #define XLNX_AUD_CH_STS_START 0x2C 39 39 #define XLNX_BYTES_PER_CH 0x44 40 + #define XLNX_AUD_ALIGN_BYTES 64 40 41 41 42 #define AUD_STS_IOC_IRQ_MASK BIT(31) 42 43 #define AUD_STS_CH_STS_MASK BIT(29) ··· 369 368 snd_soc_set_runtime_hwparams(substream, &xlnx_pcm_hardware); 370 369 runtime->private_data = stream_data; 371 370 372 - /* Resize the period size divisible by 64 */ 371 + /* Resize the period bytes as divisible by 64 */ 373 372 err = snd_pcm_hw_constraint_step(runtime, 0, 374 - SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64); 373 + SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 374 + XLNX_AUD_ALIGN_BYTES); 375 375 if (err) { 376 376 dev_err(component->dev, 377 - "unable to set constraint on period bytes\n"); 377 + "Unable to set constraint on period bytes\n"); 378 + return err; 379 + } 380 + 381 + /* Resize the buffer bytes as divisible by 64 */ 382 + err = snd_pcm_hw_constraint_step(runtime, 0, 383 + SNDRV_PCM_HW_PARAM_BUFFER_BYTES, 384 + XLNX_AUD_ALIGN_BYTES); 385 + if (err) { 386 + dev_err(component->dev, 387 + "Unable to set constraint on buffer bytes\n"); 388 + return err; 389 + } 390 + 391 + /* Set periods as integer multiple */ 392 + err = snd_pcm_hw_constraint_integer(runtime, 393 + SNDRV_PCM_HW_PARAM_PERIODS); 394 + if (err < 0) { 395 + dev_err(component->dev, 396 + "Unable to set constraint on periods to be integer\n"); 378 397 return err; 379 398 } 380 399
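The xlnx_formatter_pcm.c hunk constrains both period bytes and buffer bytes to multiples of `XLNX_AUD_ALIGN_BYTES` (64). A quick sketch of what such a step constraint does to a requested size, using hypothetical rounding helpers; in the driver the real work is done by `snd_pcm_hw_constraint_step()` refining the hw_params interval:

```c
#include <assert.h>

/* 64-byte alignment the formatter DMA requires (XLNX_AUD_ALIGN_BYTES).
 * Hypothetical helpers for illustration only. */
#define ALIGN_BYTES 64u

static unsigned int align_down(unsigned int bytes)
{
	return bytes & ~(ALIGN_BYTES - 1);	/* largest multiple <= bytes */
}

static unsigned int align_up(unsigned int bytes)
{
	return (bytes + ALIGN_BYTES - 1) & ~(ALIGN_BYTES - 1);	/* smallest multiple >= bytes */
}
```

The added integer-periods constraint serves the same goal from the other side: buffer bytes stay an exact multiple of period bytes, so every DMA transfer lands on a 64-byte boundary.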
+4
sound/usb/mixer.c
··· 1527 1527 usb_audio_err(chip, 1528 1528 "cannot get connectors status: req = %#x, wValue = %#x, wIndex = %#x, type = %d\n", 1529 1529 UAC_GET_CUR, validx, idx, cval->val_type); 1530 + 1531 + if (val) 1532 + *val = 0; 1533 + 1530 1534 return filter_error(cval, ret); 1531 1535 } 1532 1536
+1 -1
sound/usb/quirks-table.h
··· 84 84 * combination. 85 85 */ 86 86 { 87 - USB_DEVICE(0x041e, 0x4095), 87 + USB_AUDIO_DEVICE(0x041e, 0x4095), 88 88 .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 89 89 .ifnum = QUIRK_ANY_INTERFACE, 90 90 .type = QUIRK_COMPOSITE,
+5 -1
tools/bpf/resolve_btfids/Makefile
··· 9 9 msg = 10 10 else 11 11 Q = @ 12 - msg = @printf ' %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))"; 12 + ifeq ($(silent),1) 13 + msg = 14 + else 15 + msg = @printf ' %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))"; 16 + endif 13 17 MAKEFLAGS=--no-print-directory 14 18 endif 15 19
-229
tools/include/uapi/linux/lirc.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - /* 3 - * lirc.h - linux infrared remote control header file 4 - * last modified 2010/07/13 by Jarod Wilson 5 - */ 6 - 7 - #ifndef _LINUX_LIRC_H 8 - #define _LINUX_LIRC_H 9 - 10 - #include <linux/types.h> 11 - #include <linux/ioctl.h> 12 - 13 - #define PULSE_BIT 0x01000000 14 - #define PULSE_MASK 0x00FFFFFF 15 - 16 - #define LIRC_MODE2_SPACE 0x00000000 17 - #define LIRC_MODE2_PULSE 0x01000000 18 - #define LIRC_MODE2_FREQUENCY 0x02000000 19 - #define LIRC_MODE2_TIMEOUT 0x03000000 20 - 21 - #define LIRC_VALUE_MASK 0x00FFFFFF 22 - #define LIRC_MODE2_MASK 0xFF000000 23 - 24 - #define LIRC_SPACE(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_SPACE) 25 - #define LIRC_PULSE(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_PULSE) 26 - #define LIRC_FREQUENCY(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_FREQUENCY) 27 - #define LIRC_TIMEOUT(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_TIMEOUT) 28 - 29 - #define LIRC_VALUE(val) ((val)&LIRC_VALUE_MASK) 30 - #define LIRC_MODE2(val) ((val)&LIRC_MODE2_MASK) 31 - 32 - #define LIRC_IS_SPACE(val) (LIRC_MODE2(val) == LIRC_MODE2_SPACE) 33 - #define LIRC_IS_PULSE(val) (LIRC_MODE2(val) == LIRC_MODE2_PULSE) 34 - #define LIRC_IS_FREQUENCY(val) (LIRC_MODE2(val) == LIRC_MODE2_FREQUENCY) 35 - #define LIRC_IS_TIMEOUT(val) (LIRC_MODE2(val) == LIRC_MODE2_TIMEOUT) 36 - 37 - /* used heavily by lirc userspace */ 38 - #define lirc_t int 39 - 40 - /*** lirc compatible hardware features ***/ 41 - 42 - #define LIRC_MODE2SEND(x) (x) 43 - #define LIRC_SEND2MODE(x) (x) 44 - #define LIRC_MODE2REC(x) ((x) << 16) 45 - #define LIRC_REC2MODE(x) ((x) >> 16) 46 - 47 - #define LIRC_MODE_RAW 0x00000001 48 - #define LIRC_MODE_PULSE 0x00000002 49 - #define LIRC_MODE_MODE2 0x00000004 50 - #define LIRC_MODE_SCANCODE 0x00000008 51 - #define LIRC_MODE_LIRCCODE 0x00000010 52 - 53 - 54 - #define LIRC_CAN_SEND_RAW LIRC_MODE2SEND(LIRC_MODE_RAW) 55 - #define LIRC_CAN_SEND_PULSE LIRC_MODE2SEND(LIRC_MODE_PULSE) 56 - 
#define LIRC_CAN_SEND_MODE2 LIRC_MODE2SEND(LIRC_MODE_MODE2) 57 - #define LIRC_CAN_SEND_LIRCCODE LIRC_MODE2SEND(LIRC_MODE_LIRCCODE) 58 - 59 - #define LIRC_CAN_SEND_MASK 0x0000003f 60 - 61 - #define LIRC_CAN_SET_SEND_CARRIER 0x00000100 62 - #define LIRC_CAN_SET_SEND_DUTY_CYCLE 0x00000200 63 - #define LIRC_CAN_SET_TRANSMITTER_MASK 0x00000400 64 - 65 - #define LIRC_CAN_REC_RAW LIRC_MODE2REC(LIRC_MODE_RAW) 66 - #define LIRC_CAN_REC_PULSE LIRC_MODE2REC(LIRC_MODE_PULSE) 67 - #define LIRC_CAN_REC_MODE2 LIRC_MODE2REC(LIRC_MODE_MODE2) 68 - #define LIRC_CAN_REC_SCANCODE LIRC_MODE2REC(LIRC_MODE_SCANCODE) 69 - #define LIRC_CAN_REC_LIRCCODE LIRC_MODE2REC(LIRC_MODE_LIRCCODE) 70 - 71 - #define LIRC_CAN_REC_MASK LIRC_MODE2REC(LIRC_CAN_SEND_MASK) 72 - 73 - #define LIRC_CAN_SET_REC_CARRIER (LIRC_CAN_SET_SEND_CARRIER << 16) 74 - #define LIRC_CAN_SET_REC_DUTY_CYCLE (LIRC_CAN_SET_SEND_DUTY_CYCLE << 16) 75 - 76 - #define LIRC_CAN_SET_REC_DUTY_CYCLE_RANGE 0x40000000 77 - #define LIRC_CAN_SET_REC_CARRIER_RANGE 0x80000000 78 - #define LIRC_CAN_GET_REC_RESOLUTION 0x20000000 79 - #define LIRC_CAN_SET_REC_TIMEOUT 0x10000000 80 - #define LIRC_CAN_SET_REC_FILTER 0x08000000 81 - 82 - #define LIRC_CAN_MEASURE_CARRIER 0x02000000 83 - #define LIRC_CAN_USE_WIDEBAND_RECEIVER 0x04000000 84 - 85 - #define LIRC_CAN_SEND(x) ((x)&LIRC_CAN_SEND_MASK) 86 - #define LIRC_CAN_REC(x) ((x)&LIRC_CAN_REC_MASK) 87 - 88 - #define LIRC_CAN_NOTIFY_DECODE 0x01000000 89 - 90 - /*** IOCTL commands for lirc driver ***/ 91 - 92 - #define LIRC_GET_FEATURES _IOR('i', 0x00000000, __u32) 93 - 94 - #define LIRC_GET_SEND_MODE _IOR('i', 0x00000001, __u32) 95 - #define LIRC_GET_REC_MODE _IOR('i', 0x00000002, __u32) 96 - #define LIRC_GET_REC_RESOLUTION _IOR('i', 0x00000007, __u32) 97 - 98 - #define LIRC_GET_MIN_TIMEOUT _IOR('i', 0x00000008, __u32) 99 - #define LIRC_GET_MAX_TIMEOUT _IOR('i', 0x00000009, __u32) 100 - 101 - /* code length in bits, currently only for LIRC_MODE_LIRCCODE */ 102 - #define LIRC_GET_LENGTH _IOR('i', 
0x0000000f, __u32) 103 - 104 - #define LIRC_SET_SEND_MODE _IOW('i', 0x00000011, __u32) 105 - #define LIRC_SET_REC_MODE _IOW('i', 0x00000012, __u32) 106 - /* Note: these can reset the according pulse_width */ 107 - #define LIRC_SET_SEND_CARRIER _IOW('i', 0x00000013, __u32) 108 - #define LIRC_SET_REC_CARRIER _IOW('i', 0x00000014, __u32) 109 - #define LIRC_SET_SEND_DUTY_CYCLE _IOW('i', 0x00000015, __u32) 110 - #define LIRC_SET_TRANSMITTER_MASK _IOW('i', 0x00000017, __u32) 111 - 112 - /* 113 - * when a timeout != 0 is set the driver will send a 114 - * LIRC_MODE2_TIMEOUT data packet, otherwise LIRC_MODE2_TIMEOUT is 115 - * never sent, timeout is disabled by default 116 - */ 117 - #define LIRC_SET_REC_TIMEOUT _IOW('i', 0x00000018, __u32) 118 - 119 - /* 1 enables, 0 disables timeout reports in MODE2 */ 120 - #define LIRC_SET_REC_TIMEOUT_REPORTS _IOW('i', 0x00000019, __u32) 121 - 122 - /* 123 - * if enabled from the next key press on the driver will send 124 - * LIRC_MODE2_FREQUENCY packets 125 - */ 126 - #define LIRC_SET_MEASURE_CARRIER_MODE _IOW('i', 0x0000001d, __u32) 127 - 128 - /* 129 - * to set a range use LIRC_SET_REC_CARRIER_RANGE with the 130 - * lower bound first and later LIRC_SET_REC_CARRIER with the upper bound 131 - */ 132 - #define LIRC_SET_REC_CARRIER_RANGE _IOW('i', 0x0000001f, __u32) 133 - 134 - #define LIRC_SET_WIDEBAND_RECEIVER _IOW('i', 0x00000023, __u32) 135 - 136 - /* 137 - * Return the recording timeout, which is either set by 138 - * the ioctl LIRC_SET_REC_TIMEOUT or by the kernel after setting the protocols. 139 - */ 140 - #define LIRC_GET_REC_TIMEOUT _IOR('i', 0x00000024, __u32) 141 - 142 - /* 143 - * struct lirc_scancode - decoded scancode with protocol for use with 144 - * LIRC_MODE_SCANCODE 145 - * 146 - * @timestamp: Timestamp in nanoseconds using CLOCK_MONOTONIC when IR 147 - * was decoded. 148 - * @flags: should be 0 for transmit. 
When receiving scancodes, 149 - * LIRC_SCANCODE_FLAG_TOGGLE or LIRC_SCANCODE_FLAG_REPEAT can be set 150 - * depending on the protocol 151 - * @rc_proto: see enum rc_proto 152 - * @keycode: the translated keycode. Set to 0 for transmit. 153 - * @scancode: the scancode received or to be sent 154 - */ 155 - struct lirc_scancode { 156 - __u64 timestamp; 157 - __u16 flags; 158 - __u16 rc_proto; 159 - __u32 keycode; 160 - __u64 scancode; 161 - }; 162 - 163 - /* Set if the toggle bit of rc-5 or rc-6 is enabled */ 164 - #define LIRC_SCANCODE_FLAG_TOGGLE 1 165 - /* Set if this is a nec or sanyo repeat */ 166 - #define LIRC_SCANCODE_FLAG_REPEAT 2 167 - 168 - /** 169 - * enum rc_proto - the Remote Controller protocol 170 - * 171 - * @RC_PROTO_UNKNOWN: Protocol not known 172 - * @RC_PROTO_OTHER: Protocol known but proprietary 173 - * @RC_PROTO_RC5: Philips RC5 protocol 174 - * @RC_PROTO_RC5X_20: Philips RC5x 20 bit protocol 175 - * @RC_PROTO_RC5_SZ: StreamZap variant of RC5 176 - * @RC_PROTO_JVC: JVC protocol 177 - * @RC_PROTO_SONY12: Sony 12 bit protocol 178 - * @RC_PROTO_SONY15: Sony 15 bit protocol 179 - * @RC_PROTO_SONY20: Sony 20 bit protocol 180 - * @RC_PROTO_NEC: NEC protocol 181 - * @RC_PROTO_NECX: Extended NEC protocol 182 - * @RC_PROTO_NEC32: NEC 32 bit protocol 183 - * @RC_PROTO_SANYO: Sanyo protocol 184 - * @RC_PROTO_MCIR2_KBD: RC6-ish MCE keyboard 185 - * @RC_PROTO_MCIR2_MSE: RC6-ish MCE mouse 186 - * @RC_PROTO_RC6_0: Philips RC6-0-16 protocol 187 - * @RC_PROTO_RC6_6A_20: Philips RC6-6A-20 protocol 188 - * @RC_PROTO_RC6_6A_24: Philips RC6-6A-24 protocol 189 - * @RC_PROTO_RC6_6A_32: Philips RC6-6A-32 protocol 190 - * @RC_PROTO_RC6_MCE: MCE (Philips RC6-6A-32 subtype) protocol 191 - * @RC_PROTO_SHARP: Sharp protocol 192 - * @RC_PROTO_XMP: XMP protocol 193 - * @RC_PROTO_CEC: CEC protocol 194 - * @RC_PROTO_IMON: iMon Pad protocol 195 - * @RC_PROTO_RCMM12: RC-MM protocol 12 bits 196 - * @RC_PROTO_RCMM24: RC-MM protocol 24 bits 197 - * @RC_PROTO_RCMM32: RC-MM protocol 
32 bits 198 - */ 199 - enum rc_proto { 200 - RC_PROTO_UNKNOWN = 0, 201 - RC_PROTO_OTHER = 1, 202 - RC_PROTO_RC5 = 2, 203 - RC_PROTO_RC5X_20 = 3, 204 - RC_PROTO_RC5_SZ = 4, 205 - RC_PROTO_JVC = 5, 206 - RC_PROTO_SONY12 = 6, 207 - RC_PROTO_SONY15 = 7, 208 - RC_PROTO_SONY20 = 8, 209 - RC_PROTO_NEC = 9, 210 - RC_PROTO_NECX = 10, 211 - RC_PROTO_NEC32 = 11, 212 - RC_PROTO_SANYO = 12, 213 - RC_PROTO_MCIR2_KBD = 13, 214 - RC_PROTO_MCIR2_MSE = 14, 215 - RC_PROTO_RC6_0 = 15, 216 - RC_PROTO_RC6_6A_20 = 16, 217 - RC_PROTO_RC6_6A_24 = 17, 218 - RC_PROTO_RC6_6A_32 = 18, 219 - RC_PROTO_RC6_MCE = 19, 220 - RC_PROTO_SHARP = 20, 221 - RC_PROTO_XMP = 21, 222 - RC_PROTO_CEC = 22, 223 - RC_PROTO_IMON = 23, 224 - RC_PROTO_RCMM12 = 24, 225 - RC_PROTO_RCMM24 = 25, 226 - RC_PROTO_RCMM32 = 26, 227 - }; 228 - 229 - #endif
+1 -1
tools/scripts/Makefile.include
··· 90 90 91 91 else ifneq ($(CROSS_COMPILE),) 92 92 CLANG_CROSS_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%)) 93 - GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)gcc)) 93 + GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)gcc 2>/dev/null)) 94 94 ifneq ($(GCC_TOOLCHAIN_DIR),) 95 95 CLANG_CROSS_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE)) 96 96 CLANG_CROSS_FLAGS += --sysroot=$(shell $(CROSS_COMPILE)gcc -print-sysroot)
+1
tools/testing/kunit/kunit_kernel.py
··· 6 6 # Author: Felix Guo <felixguoxiuping@gmail.com> 7 7 # Author: Brendan Higgins <brendanhiggins@google.com> 8 8 9 + import importlib.abc 9 10 import importlib.util 10 11 import logging 11 12 import subprocess
-1
tools/testing/selftests/bpf/test_lirc_mode2_user.c
··· 28 28 // 5. We can read keycode from same /dev/lirc device 29 29 30 30 #include <linux/bpf.h> 31 - #include <linux/lirc.h> 32 31 #include <linux/input.h> 33 32 #include <errno.h> 34 33 #include <stdio.h>
+1 -1
tools/testing/selftests/cpufreq/main.sh
··· 194 194 195 195 # Run requested functions 196 196 clear_dumps $OUTFILE 197 - do_test >> $OUTFILE.txt 197 + do_test | tee -a $OUTFILE.txt 198 198 dmesg_dumps $OUTFILE
+1 -1
tools/testing/selftests/exec/Makefile
··· 5 5 6 6 TEST_PROGS := binfmt_script non-regular 7 7 TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216 8 - TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir pipe 8 + TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir 9 9 # Makefile is a run-time dependency, since it's accessed by the execveat test 10 10 TEST_FILES := Makefile 11 11
+2 -2
tools/testing/selftests/futex/Makefile
··· 11 11 @for DIR in $(SUBDIRS); do \ 12 12 BUILD_TARGET=$(OUTPUT)/$$DIR; \ 13 13 mkdir $$BUILD_TARGET -p; \ 14 - make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 14 + $(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 15 15 if [ -e $$DIR/$(TEST_PROGS) ]; then \ 16 16 rsync -a $$DIR/$(TEST_PROGS) $$BUILD_TARGET/; \ 17 17 fi \ ··· 32 32 @for DIR in $(SUBDIRS); do \ 33 33 BUILD_TARGET=$(OUTPUT)/$$DIR; \ 34 34 mkdir $$BUILD_TARGET -p; \ 35 - make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 35 + $(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 36 36 done 37 37 endef
+3 -1
tools/testing/selftests/kselftest_harness.h
··· 877 877 } 878 878 879 879 t->timed_out = true; 880 - kill(t->pid, SIGKILL); 880 + // signal process group 881 + kill(-(t->pid), SIGKILL); 881 882 } 882 883 883 884 void __wait_for_test(struct __test_metadata *t) ··· 988 987 ksft_print_msg("ERROR SPAWNING TEST CHILD\n"); 989 988 t->passed = 0; 990 989 } else if (t->pid == 0) { 990 + setpgrp(); 991 991 t->fn(t, variant); 992 992 if (t->skip) 993 993 _exit(255);
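The kselftest_harness.h hunk makes the test child call `setpgrp()` and changes the timeout path to `kill(-pid, SIGKILL)`, so grandchildren forked by a test die with it. A self-contained demo of that negative-pid process-group kill (hypothetical, not the harness code):

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that becomes its own process-group leader and hangs,
 * then signal the whole group with a negative pid, as the harness
 * timeout handler now does. Returns the signal that killed the child. */
static int run_and_kill_group(void)
{
	int status;
	pid_t pid = fork();

	if (pid == 0) {
		setpgrp();		/* child becomes its own group leader */
		pause();		/* simulate a hung test */
		_exit(0);
	}

	/* wait until the child's new process group actually exists */
	while (getpgid(pid) != pid)
		usleep(1000);

	kill(-pid, SIGKILL);		/* negative pid: signal the whole group */
	waitpid(pid, &status, 0);
	return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```

With the old `kill(t->pid, SIGKILL)`, only the direct child died; anything it had forked kept running past the timeout.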
+14 -6
tools/testing/selftests/mincore/mincore_selftest.c
··· 207 207 208 208 errno = 0; 209 209 fd = open(".", O_TMPFILE | O_RDWR, 0600); 210 - ASSERT_NE(-1, fd) { 211 - TH_LOG("Can't create temporary file: %s", 212 - strerror(errno)); 210 + if (fd < 0) { 211 + ASSERT_EQ(errno, EOPNOTSUPP) { 212 + TH_LOG("Can't create temporary file: %s", 213 + strerror(errno)); 214 + } 215 + SKIP(goto out_free, "O_TMPFILE not supported by filesystem."); 213 216 } 214 217 errno = 0; 215 218 retval = fallocate(fd, 0, 0, FILE_SIZE); 216 - ASSERT_EQ(0, retval) { 217 - TH_LOG("Error allocating space for the temporary file: %s", 218 - strerror(errno)); 219 + if (retval) { 220 + ASSERT_EQ(errno, EOPNOTSUPP) { 221 + TH_LOG("Error allocating space for the temporary file: %s", 222 + strerror(errno)); 223 + } 224 + SKIP(goto out_close, "fallocate not supported by filesystem."); 219 225 } 220 226 221 227 /* ··· 277 271 } 278 272 279 273 munmap(addr, FILE_SIZE); 274 + out_close: 280 275 close(fd); 276 + out_free: 281 277 free(vec); 282 278 } 283 279
+71 -1
tools/testing/selftests/netfilter/nft_concat_range.sh
··· 27 27 net6_port_net6_port net_port_mac_proto_net" 28 28 29 29 # Reported bugs, also described by TYPE_ variables below 30 - BUGS="flush_remove_add" 30 + BUGS="flush_remove_add reload" 31 31 32 32 # List of possible paths to pktgen script from kernel tree for performance tests 33 33 PKTGEN_SCRIPT_PATHS=" ··· 352 352 # display display text for test report 353 353 TYPE_flush_remove_add=" 354 354 display Add two elements, flush, re-add 355 + " 356 + 357 + TYPE_reload=" 358 + display net,mac with reload 359 + type_spec ipv4_addr . ether_addr 360 + chain_spec ip daddr . ether saddr 361 + dst addr4 362 + src mac 363 + start 1 364 + count 1 365 + src_delta 2000 366 + tools sendip nc bash 367 + proto udp 368 + 369 + race_repeat 0 370 + 371 + perf_duration 0 355 372 " 356 373 357 374 # Set template for all tests, types and rules are filled in depending on test ··· 1487 1470 nft flush set t s 2>/dev/null || return 1 1488 1471 nft add element t s ${elem2} 2>/dev/null || return 1 1489 1472 done 1473 + nft flush ruleset 1474 + } 1475 + 1476 + # - add ranged element, check that packets match it 1477 + # - reload the set, check packets still match 1478 + test_bug_reload() { 1479 + setup veth send_"${proto}" set || return ${KSELFTEST_SKIP} 1480 + rstart=${start} 1481 + 1482 + range_size=1 1483 + for i in $(seq "${start}" $((start + count))); do 1484 + end=$((start + range_size)) 1485 + 1486 + # Avoid negative or zero-sized port ranges 1487 + if [ $((end / 65534)) -gt $((start / 65534)) ]; then 1488 + start=${end} 1489 + end=$((end + 1)) 1490 + fi 1491 + srcstart=$((start + src_delta)) 1492 + srcend=$((end + src_delta)) 1493 + 1494 + add "$(format)" || return 1 1495 + range_size=$((range_size + 1)) 1496 + start=$((end + range_size)) 1497 + done 1498 + 1499 + # check kernel does allocate pcpu sctrach map 1500 + # for reload with no elemet add/delete 1501 + ( echo flush set inet filter test ; 1502 + nft list set inet filter test ) | nft -f - 1503 + 1504 + start=${rstart} 1505 + 
range_size=1 1506 + 1507 + for i in $(seq "${start}" $((start + count))); do 1508 + end=$((start + range_size)) 1509 + 1510 + # Avoid negative or zero-sized port ranges 1511 + if [ $((end / 65534)) -gt $((start / 65534)) ]; then 1512 + start=${end} 1513 + end=$((end + 1)) 1514 + fi 1515 + srcstart=$((start + src_delta)) 1516 + srcend=$((end + src_delta)) 1517 + 1518 + for j in $(seq ${start} $((range_size / 2 + 1)) ${end}); do 1519 + send_match "${j}" $((j + src_delta)) || return 1 1520 + done 1521 + 1522 + range_size=$((range_size + 1)) 1523 + start=$((end + range_size)) 1524 + done 1525 + 1490 1526 nft flush ruleset 1491 1527 } 1492 1528
+152
tools/testing/selftests/netfilter/nft_nat.sh
··· 899 899 ip netns exec "$ns0" nft delete table $family nat 900 900 } 901 901 902 + test_stateless_nat_ip() 903 + { 904 + local lret=0 905 + 906 + ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null 907 + ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null 908 + 909 + ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1 910 + if [ $? -ne 0 ] ; then 911 + echo "ERROR: cannot ping $ns1 from $ns2 before loading stateless rules" 912 + return 1 913 + fi 914 + 915 + ip netns exec "$ns0" nft -f /dev/stdin <<EOF 916 + table ip stateless { 917 + map xlate_in { 918 + typeof meta iifname . ip saddr . ip daddr : ip daddr 919 + elements = { 920 + "veth1" . 10.0.2.99 . 10.0.1.99 : 10.0.2.2, 921 + } 922 + } 923 + map xlate_out { 924 + typeof meta iifname . ip saddr . ip daddr : ip daddr 925 + elements = { 926 + "veth0" . 10.0.1.99 . 10.0.2.2 : 10.0.2.99 927 + } 928 + } 929 + 930 + chain prerouting { 931 + type filter hook prerouting priority -400; policy accept; 932 + ip saddr set meta iifname . ip saddr . ip daddr map @xlate_in 933 + ip daddr set meta iifname . ip saddr . ip daddr map @xlate_out 934 + } 935 + } 936 + EOF 937 + if [ $? -ne 0 ]; then 938 + echo "SKIP: Could not add ip statless rules" 939 + return $ksft_skip 940 + fi 941 + 942 + reset_counters 943 + 944 + ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1 945 + if [ $? -ne 0 ] ; then 946 + echo "ERROR: cannot ping $ns1 from $ns2 with stateless rules" 947 + lret=1 948 + fi 949 + 950 + # ns1 should have seen packets from .2.2, due to stateless rewrite. 951 + expect="packets 1 bytes 84" 952 + cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect") 953 + if [ $? -ne 0 ]; then 954 + bad_counter "$ns1" ns0insl "$expect" "test_stateless 1" 955 + lret=1 956 + fi 957 + 958 + for dir in "in" "out" ; do 959 + cnt=$(ip netns exec "$ns2" nft list counter inet filter ns1${dir} | grep -q "$expect") 960 + if [ $? 
-ne 0 ]; then 961 + bad_counter "$ns2" ns1$dir "$expect" "test_stateless 2" 962 + lret=1 963 + fi 964 + done 965 + 966 + # ns1 should not have seen packets from ns2, due to masquerade 967 + expect="packets 0 bytes 0" 968 + for dir in "in" "out" ; do 969 + cnt=$(ip netns exec "$ns1" nft list counter inet filter ns2${dir} | grep -q "$expect") 970 + if [ $? -ne 0 ]; then 971 + bad_counter "$ns1" ns0$dir "$expect" "test_stateless 3" 972 + lret=1 973 + fi 974 + 975 + cnt=$(ip netns exec "$ns0" nft list counter inet filter ns1${dir} | grep -q "$expect") 976 + if [ $? -ne 0 ]; then 977 + bad_counter "$ns0" ns1$dir "$expect" "test_stateless 4" 978 + lret=1 979 + fi 980 + done 981 + 982 + reset_counters 983 + 984 + socat -h > /dev/null 2>&1 985 + if [ $? -ne 0 ];then 986 + echo "SKIP: Could not run stateless nat frag test without socat tool" 987 + if [ $lret -eq 0 ]; then 988 + return $ksft_skip 989 + fi 990 + 991 + ip netns exec "$ns0" nft delete table ip stateless 992 + return $lret 993 + fi 994 + 995 + local tmpfile=$(mktemp) 996 + dd if=/dev/urandom of=$tmpfile bs=4096 count=1 2>/dev/null 997 + 998 + local outfile=$(mktemp) 999 + ip netns exec "$ns1" timeout 3 socat -u UDP4-RECV:4233 OPEN:$outfile < /dev/null & 1000 + sc_r=$! 1001 + 1002 + sleep 1 1003 + # re-do with large ping -> ip fragmentation 1004 + ip netns exec "$ns2" timeout 3 socat - UDP4-SENDTO:"10.0.1.99:4233" < "$tmpfile" > /dev/null 1005 + if [ $? -ne 0 ] ; then 1006 + echo "ERROR: failed to test udp $ns1 to $ns2 with stateless ip nat" 1>&2 1007 + lret=1 1008 + fi 1009 + 1010 + wait 1011 + 1012 + cmp "$tmpfile" "$outfile" 1013 + if [ $? -ne 0 ]; then 1014 + ls -l "$tmpfile" "$outfile" 1015 + echo "ERROR: in and output file mismatch when checking udp with stateless nat" 1>&2 1016 + lret=1 1017 + fi 1018 + 1019 + rm -f "$tmpfile" "$outfile" 1020 + 1021 + # ns1 should have seen packets from 2.2, due to stateless rewrite. 
1022 + expect="packets 3 bytes 4164" 1023 + cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect") 1024 + if [ $? -ne 0 ]; then 1025 + bad_counter "$ns1" ns0insl "$expect" "test_stateless 5" 1026 + lret=1 1027 + fi 1028 + 1029 + ip netns exec "$ns0" nft delete table ip stateless 1030 + if [ $? -ne 0 ]; then 1031 + echo "ERROR: Could not delete table ip stateless" 1>&2 1032 + lret=1 1033 + fi 1034 + 1035 + test $lret -eq 0 && echo "PASS: IP statless for $ns2" 1036 + 1037 + return $lret 1038 + } 1039 + 902 1040 # ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99 903 1041 for i in 0 1 2; do 904 1042 ip netns exec ns$i-$sfx nft -f /dev/stdin <<EOF ··· 1103 965 EOF 1104 966 done 1105 967 968 + # special case for stateless nat check, counter needs to 969 + # be done before (input) ip defragmentation 970 + ip netns exec ns1-$sfx nft -f /dev/stdin <<EOF 971 + table inet filter { 972 + counter ns0insl {} 973 + 974 + chain pre { 975 + type filter hook prerouting priority -400; policy accept; 976 + ip saddr 10.0.2.2 counter name "ns0insl" 977 + } 978 + } 979 + EOF 980 + 1106 981 sleep 3 1107 982 # test basic connectivity 1108 983 for i in 1 2; do ··· 1170 1019 $test_inet_nat && test_redirect6 inet 1171 1020 1172 1021 test_port_shadowing 1022 + test_stateless_nat_ip 1173 1023 1174 1024 if [ $ret -ne 0 ];then 1175 1025 echo -n "FAIL: "
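The new stateless NAT test above downgrades to a SKIP (kselftest code 4) rather than a FAIL when the optional `socat` tool is absent. That pattern generalizes; here is a minimal sketch of it as a reusable helper. Note that `require_tool` is a hypothetical name for illustration, not a helper defined in nft_nat.sh.

```shell
#!/bin/sh
# Sketch: skip a kselftest gracefully when an optional tool is
# unavailable, mirroring the socat check in the stateless NAT test.
# "require_tool" is a hypothetical helper, not part of the original.

ksft_skip=4	# Kselftest framework requirement - SKIP code is 4.

require_tool()	# usage: require_tool <tool> <test description>
{
	if ! command -v "$1" > /dev/null 2>&1; then
		echo "SKIP: Could not run $2 without $1 tool" 1>&2
		return $ksft_skip
	fi
	return 0
}

# Usage sketch: bail out of just this subtest, not the whole run.
require_tool sh "stateless nat frag test" && echo "sh available"
```

Returning `$ksft_skip` from the subtest (instead of exiting) lets the caller keep running the remaining NAT tests, which is exactly what the diff does when `socat -h` fails.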
+6 -6
tools/testing/selftests/netfilter/nft_zones_many.sh
··· 9 9 # Kselftest framework requirement - SKIP code is 4. 10 10 ksft_skip=4 11 11 12 - zones=20000 12 + zones=2000 13 13 have_ct_tool=0 14 14 ret=0 15 15 ··· 75 75 76 76 while [ $i -lt $max_zones ]; do 77 77 local start=$(date +%s%3N) 78 - i=$((i + 10000)) 78 + i=$((i + 1000)) 79 79 j=$((j + 1)) 80 80 # nft rule in output places each packet in a different zone. 81 - dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345 81 + dd if=/dev/zero of=/dev/stdout bs=8k count=1000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345 82 82 if [ $? -ne 0 ] ;then 83 83 ret=1 84 84 break ··· 86 86 87 87 stop=$(date +%s%3N) 88 88 local duration=$((stop-start)) 89 - echo "PASS: added 10000 entries in $duration ms (now $i total, loop $j)" 89 + echo "PASS: added 1000 entries in $duration ms (now $i total, loop $j)" 90 90 done 91 91 92 92 if [ $have_ct_tool -eq 1 ]; then ··· 128 128 break 129 129 fi 130 130 131 - if [ $((i%10000)) -eq 0 ];then 131 + if [ $((i%1000)) -eq 0 ];then 132 132 stop=$(date +%s%3N) 133 133 134 134 local duration=$((stop-start)) 135 - echo "PASS: added 10000 entries in $duration ms (now $i total)" 135 + echo "PASS: added 1000 entries in $duration ms (now $i total)" 136 136 start=$stop 137 137 fi 138 138 done
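The zones test above reports how long each 1000-entry batch takes using `date +%s%3N`, which prints epoch seconds with three sub-second digits appended, i.e. a single millisecond-resolution integer. A standalone sketch of that timing idiom (requires GNU date for `%N`; `run_batch` here is a placeholder for the real `dd | socat` pipeline):

```shell
#!/bin/sh
# Millisecond timing idiom from nft_zones_many.sh: %s%3N yields epoch
# milliseconds as one integer, so subtraction gives the duration in ms.
# Requires GNU coreutils date; run_batch stands in for the real work.

run_batch()
{
	sleep 0.1	# placeholder for inserting 1000 conntrack entries
}

start=$(date +%s%3N)
run_batch
stop=$(date +%s%3N)
duration=$((stop - start))
echo "PASS: added batch in $duration ms"
```

Using one integer avoids floating-point arithmetic, which plain `$(( ))` shell arithmetic cannot do.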
+1 -1
tools/testing/selftests/openat2/Makefile
··· 5 5 6 6 include ../lib.mk 7 7 8 - $(TEST_GEN_PROGS): helpers.c 8 + $(TEST_GEN_PROGS): helpers.c helpers.h
+7 -5
tools/testing/selftests/openat2/helpers.h
··· 9 9 10 10 #define _GNU_SOURCE 11 11 #include <stdint.h> 12 + #include <stdbool.h> 12 13 #include <errno.h> 13 14 #include <linux/types.h> 14 15 #include "../kselftest.h" ··· 63 62 (similar to chroot(2)). */ 64 63 #endif /* RESOLVE_IN_ROOT */ 65 64 66 - #define E_func(func, ...) \ 67 - do { \ 68 - if (func(__VA_ARGS__) < 0) \ 69 - ksft_exit_fail_msg("%s:%d %s failed\n", \ 70 - __FILE__, __LINE__, #func);\ 65 + #define E_func(func, ...) \ 66 + do { \ 67 + errno = 0; \ 68 + if (func(__VA_ARGS__) < 0) \ 69 + ksft_exit_fail_msg("%s:%d %s failed - errno:%d\n", \ 70 + __FILE__, __LINE__, #func, errno); \ 71 71 } while (0) 72 72 73 73 #define E_asprintf(...) E_func(asprintf, __VA_ARGS__)
+11 -1
tools/testing/selftests/openat2/openat2_test.c
··· 259 259 unlink(path); 260 260 261 261 fd = sys_openat2(AT_FDCWD, path, &test->how); 262 + if (fd < 0 && fd == -EOPNOTSUPP) { 263 + /* 264 + * Skip the testcase if it failed because not supported 265 + * by FS. (e.g. a valid O_TMPFILE combination on NFS) 266 + */ 267 + ksft_test_result_skip("openat2 with %s fails with %d (%s)\n", 268 + test->name, fd, strerror(-fd)); 269 + goto next; 270 + } 271 + 262 272 if (test->err >= 0) 263 273 failed = (fd < 0); 264 274 else ··· 313 303 else 314 304 resultfn("openat2 with %s fails with %d (%s)\n", 315 305 test->name, test->err, strerror(-test->err)); 316 - 306 + next: 317 307 free(fdpath); 318 308 fflush(stdout); 319 309 }
+1 -1
tools/testing/selftests/rtc/settings
··· 1 - timeout=90 1 + timeout=180
+62 -73
tools/testing/selftests/vDSO/vdso_test_abi.c
··· 33 33 typedef long (*vdso_clock_getres_t)(clockid_t clk_id, struct timespec *ts); 34 34 typedef time_t (*vdso_time_t)(time_t *t); 35 35 36 - static int vdso_test_gettimeofday(void) 36 + #define VDSO_TEST_PASS_MSG() "\n%s(): PASS\n", __func__ 37 + #define VDSO_TEST_FAIL_MSG(x) "\n%s(): %s FAIL\n", __func__, x 38 + #define VDSO_TEST_SKIP_MSG(x) "\n%s(): SKIP: Could not find %s\n", __func__, x 39 + 40 + static void vdso_test_gettimeofday(void) 37 41 { 38 42 /* Find gettimeofday. */ 39 43 vdso_gettimeofday_t vdso_gettimeofday = 40 44 (vdso_gettimeofday_t)vdso_sym(version, name[0]); 41 45 42 46 if (!vdso_gettimeofday) { 43 - printf("Could not find %s\n", name[0]); 44 - return KSFT_SKIP; 47 + ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[0])); 48 + return; 45 49 } 46 50 47 51 struct timeval tv; 48 52 long ret = vdso_gettimeofday(&tv, 0); 49 53 50 54 if (ret == 0) { 51 - printf("The time is %lld.%06lld\n", 52 - (long long)tv.tv_sec, (long long)tv.tv_usec); 55 + ksft_print_msg("The time is %lld.%06lld\n", 56 + (long long)tv.tv_sec, (long long)tv.tv_usec); 57 + ksft_test_result_pass(VDSO_TEST_PASS_MSG()); 53 58 } else { 54 - printf("%s failed\n", name[0]); 55 - return KSFT_FAIL; 59 + ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[0])); 56 60 } 57 - 58 - return KSFT_PASS; 59 61 } 60 62 61 - static int vdso_test_clock_gettime(clockid_t clk_id) 63 + static void vdso_test_clock_gettime(clockid_t clk_id) 62 64 { 63 65 /* Find clock_gettime. 
*/ 64 66 vdso_clock_gettime_t vdso_clock_gettime = 65 67 (vdso_clock_gettime_t)vdso_sym(version, name[1]); 66 68 67 69 if (!vdso_clock_gettime) { 68 - printf("Could not find %s\n", name[1]); 69 - return KSFT_SKIP; 70 + ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[1])); 71 + return; 70 72 } 71 73 72 74 struct timespec ts; 73 75 long ret = vdso_clock_gettime(clk_id, &ts); 74 76 75 77 if (ret == 0) { 76 - printf("The time is %lld.%06lld\n", 77 - (long long)ts.tv_sec, (long long)ts.tv_nsec); 78 + ksft_print_msg("The time is %lld.%06lld\n", 79 + (long long)ts.tv_sec, (long long)ts.tv_nsec); 80 + ksft_test_result_pass(VDSO_TEST_PASS_MSG()); 78 81 } else { 79 - printf("%s failed\n", name[1]); 80 - return KSFT_FAIL; 82 + ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[1])); 81 83 } 82 - 83 - return KSFT_PASS; 84 84 } 85 85 86 - static int vdso_test_time(void) 86 + static void vdso_test_time(void) 87 87 { 88 88 /* Find time. */ 89 89 vdso_time_t vdso_time = 90 90 (vdso_time_t)vdso_sym(version, name[2]); 91 91 92 92 if (!vdso_time) { 93 - printf("Could not find %s\n", name[2]); 94 - return KSFT_SKIP; 93 + ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[2])); 94 + return; 95 95 } 96 96 97 97 long ret = vdso_time(NULL); 98 98 99 99 if (ret > 0) { 100 - printf("The time in hours since January 1, 1970 is %lld\n", 100 + ksft_print_msg("The time in hours since January 1, 1970 is %lld\n", 101 101 (long long)(ret / 3600)); 102 + ksft_test_result_pass(VDSO_TEST_PASS_MSG()); 102 103 } else { 103 - printf("%s failed\n", name[2]); 104 - return KSFT_FAIL; 104 + ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[2])); 105 105 } 106 - 107 - return KSFT_PASS; 108 106 } 109 107 110 - static int vdso_test_clock_getres(clockid_t clk_id) 108 + static void vdso_test_clock_getres(clockid_t clk_id) 111 109 { 110 + int clock_getres_fail = 0; 111 + 112 112 /* Find clock_getres. 
*/ 113 113 vdso_clock_getres_t vdso_clock_getres = 114 114 (vdso_clock_getres_t)vdso_sym(version, name[3]); 115 115 116 116 if (!vdso_clock_getres) { 117 - printf("Could not find %s\n", name[3]); 118 - return KSFT_SKIP; 117 + ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[3])); 118 + return; 119 119 } 120 120 121 121 struct timespec ts, sys_ts; 122 122 long ret = vdso_clock_getres(clk_id, &ts); 123 123 124 124 if (ret == 0) { 125 - printf("The resolution is %lld %lld\n", 126 - (long long)ts.tv_sec, (long long)ts.tv_nsec); 125 + ksft_print_msg("The vdso resolution is %lld %lld\n", 126 + (long long)ts.tv_sec, (long long)ts.tv_nsec); 127 127 } else { 128 - printf("%s failed\n", name[3]); 129 - return KSFT_FAIL; 128 + clock_getres_fail++; 130 129 } 131 130 132 131 ret = syscall(SYS_clock_getres, clk_id, &sys_ts); 133 132 134 - if ((sys_ts.tv_sec != ts.tv_sec) || (sys_ts.tv_nsec != ts.tv_nsec)) { 135 - printf("%s failed\n", name[3]); 136 - return KSFT_FAIL; 137 - } 133 + ksft_print_msg("The syscall resolution is %lld %lld\n", 134 + (long long)sys_ts.tv_sec, (long long)sys_ts.tv_nsec); 138 135 139 - return KSFT_PASS; 136 + if ((sys_ts.tv_sec != ts.tv_sec) || (sys_ts.tv_nsec != ts.tv_nsec)) 137 + clock_getres_fail++; 138 + 139 + if (clock_getres_fail > 0) { 140 + ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[3])); 141 + } else { 142 + ksft_test_result_pass(VDSO_TEST_PASS_MSG()); 143 + } 140 144 } 141 145 142 146 const char *vdso_clock_name[12] = { ··· 162 158 * This function calls vdso_test_clock_gettime and vdso_test_clock_getres 163 159 * with different values for clock_id. 
164 160 */ 165 - static inline int vdso_test_clock(clockid_t clock_id) 161 + static inline void vdso_test_clock(clockid_t clock_id) 166 162 { 167 - int ret0, ret1; 163 + ksft_print_msg("\nclock_id: %s\n", vdso_clock_name[clock_id]); 168 164 169 - ret0 = vdso_test_clock_gettime(clock_id); 170 - /* A skipped test is considered passed */ 171 - if (ret0 == KSFT_SKIP) 172 - ret0 = KSFT_PASS; 165 + vdso_test_clock_gettime(clock_id); 173 166 174 - ret1 = vdso_test_clock_getres(clock_id); 175 - /* A skipped test is considered passed */ 176 - if (ret1 == KSFT_SKIP) 177 - ret1 = KSFT_PASS; 178 - 179 - ret0 += ret1; 180 - 181 - printf("clock_id: %s", vdso_clock_name[clock_id]); 182 - 183 - if (ret0 > 0) 184 - printf(" [FAIL]\n"); 185 - else 186 - printf(" [PASS]\n"); 187 - 188 - return ret0; 167 + vdso_test_clock_getres(clock_id); 189 168 } 169 + 170 + #define VDSO_TEST_PLAN 16 190 171 191 172 int main(int argc, char **argv) 192 173 { 193 174 unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR); 194 - int ret; 175 + 176 + ksft_print_header(); 177 + ksft_set_plan(VDSO_TEST_PLAN); 195 178 196 179 if (!sysinfo_ehdr) { 197 180 printf("AT_SYSINFO_EHDR is not present!\n"); ··· 192 201 193 202 vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR)); 194 203 195 - ret = vdso_test_gettimeofday(); 204 + vdso_test_gettimeofday(); 196 205 197 206 #if _POSIX_TIMERS > 0 198 207 199 208 #ifdef CLOCK_REALTIME 200 - ret += vdso_test_clock(CLOCK_REALTIME); 209 + vdso_test_clock(CLOCK_REALTIME); 201 210 #endif 202 211 203 212 #ifdef CLOCK_BOOTTIME 204 - ret += vdso_test_clock(CLOCK_BOOTTIME); 213 + vdso_test_clock(CLOCK_BOOTTIME); 205 214 #endif 206 215 207 216 #ifdef CLOCK_TAI 208 - ret += vdso_test_clock(CLOCK_TAI); 217 + vdso_test_clock(CLOCK_TAI); 209 218 #endif 210 219 211 220 #ifdef CLOCK_REALTIME_COARSE 212 - ret += vdso_test_clock(CLOCK_REALTIME_COARSE); 221 + vdso_test_clock(CLOCK_REALTIME_COARSE); 213 222 #endif 214 223 215 224 #ifdef CLOCK_MONOTONIC 216 - ret += 
vdso_test_clock(CLOCK_MONOTONIC); 225 + vdso_test_clock(CLOCK_MONOTONIC); 217 226 #endif 218 227 219 228 #ifdef CLOCK_MONOTONIC_RAW 220 - ret += vdso_test_clock(CLOCK_MONOTONIC_RAW); 229 + vdso_test_clock(CLOCK_MONOTONIC_RAW); 221 230 #endif 222 231 223 232 #ifdef CLOCK_MONOTONIC_COARSE 224 - ret += vdso_test_clock(CLOCK_MONOTONIC_COARSE); 233 + vdso_test_clock(CLOCK_MONOTONIC_COARSE); 225 234 #endif 226 235 227 236 #endif 228 237 229 - ret += vdso_test_time(); 238 + vdso_test_time(); 230 239 231 - if (ret > 0) 232 - return KSFT_FAIL; 233 - 234 - return KSFT_PASS; 240 + ksft_print_cnts(); 241 + return ksft_get_fail_cnt() == 0 ? KSFT_PASS : KSFT_FAIL; 235 242 }
+7 -2
tools/testing/selftests/vm/userfaultfd.c
··· 1417 1417 static int userfaultfd_stress(void) 1418 1418 { 1419 1419 void *area; 1420 + char *tmp_area; 1420 1421 unsigned long nr; 1421 1422 struct uffdio_register uffdio_register; 1422 1423 struct uffd_stats uffd_stats[nr_cpus]; ··· 1528 1527 count_verify[nr], nr); 1529 1528 1530 1529 /* prepare next bounce */ 1531 - swap(area_src, area_dst); 1530 + tmp_area = area_src; 1531 + area_src = area_dst; 1532 + area_dst = tmp_area; 1532 1533 1533 - swap(area_src_alias, area_dst_alias); 1534 + tmp_area = area_src_alias; 1535 + area_src_alias = area_dst_alias; 1536 + area_dst_alias = tmp_area; 1534 1537 1535 1538 uffd_stats_report(uffd_stats, nr_cpus); 1536 1539 }
+1 -14
tools/testing/selftests/zram/zram.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 TCID="zram.sh" 4 4 5 - # Kselftest framework requirement - SKIP code is 4. 6 - ksft_skip=4 7 - 8 5 . ./zram_lib.sh 9 6 10 7 run_zram () { ··· 15 18 16 19 check_prereqs 17 20 18 - # check zram module exists 19 - MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko 20 - if [ -f $MODULE_PATH ]; then 21 - run_zram 22 - elif [ -b /dev/zram0 ]; then 23 - run_zram 24 - else 25 - echo "$TCID : No zram.ko module or /dev/zram0 device file not found" 26 - echo "$TCID : CONFIG_ZRAM is not set" 27 - exit $ksft_skip 28 - fi 21 + run_zram
+11 -26
tools/testing/selftests/zram/zram01.sh
··· 33 33 34 34 zram_fill_fs() 35 35 { 36 - local mem_free0=$(free -m | awk 'NR==2 {print $4}') 37 - 38 - for i in $(seq 0 $(($dev_num - 1))); do 36 + for i in $(seq $dev_start $dev_end); do 39 37 echo "fill zram$i..." 40 38 local b=0 41 39 while [ true ]; do ··· 43 45 b=$(($b + 1)) 44 46 done 45 47 echo "zram$i can be filled with '$b' KB" 48 + 49 + local mem_used_total=`awk '{print $3}' "/sys/block/zram$i/mm_stat"` 50 + local v=$((100 * 1024 * $b / $mem_used_total)) 51 + if [ "$v" -lt 100 ]; then 52 + echo "FAIL compression ratio: 0.$v:1" 53 + ERR_CODE=-1 54 + return 55 + fi 56 + 57 + echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK" 46 58 done 47 - 48 - local mem_free1=$(free -m | awk 'NR==2 {print $4}') 49 - local used_mem=$(($mem_free0 - $mem_free1)) 50 - 51 - local total_size=0 52 - for sm in $zram_sizes; do 53 - local s=$(echo $sm | sed 's/M//') 54 - total_size=$(($total_size + $s)) 55 - done 56 - 57 - echo "zram used ${used_mem}M, zram disk sizes ${total_size}M" 58 - 59 - local v=$((100 * $total_size / $used_mem)) 60 - 61 - if [ "$v" -lt 100 ]; then 62 - echo "FAIL compression ratio: 0.$v:1" 63 - ERR_CODE=-1 64 - zram_cleanup 65 - return 66 - fi 67 - 68 - echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK" 69 59 } 70 60 71 61 check_prereqs ··· 67 81 68 82 zram_fill_fs 69 83 zram_cleanup 70 - zram_unload 71 84 72 85 if [ $ERR_CODE -ne 0 ]; then 73 86 echo "$TCID : [FAIL]"
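The reworked ratio check in zram01.sh above no longer infers memory use from `free -m`; it reads `mem_used_total` (the third column of `/sys/block/zramN/mm_stat`, in bytes) and compares it against the kilobytes written. The arithmetic can be isolated as a pure function; `compression_ratio_x100` is a hypothetical name for illustration:

```shell
#!/bin/sh
# Sketch of the zram01.sh ratio arithmetic: given KB written into the
# device and mm_stat's mem_used_total (bytes), compute ratio * 100
# with integer math, so "< 100" means the data grew on compression.
# compression_ratio_x100 is a hypothetical helper for illustration.

compression_ratio_x100()	# usage: <kb_written> <mem_used_total_bytes>
{
	local kb_written=$1
	local mem_used_total=$2
	echo $((100 * 1024 * kb_written / mem_used_total))
}

# e.g. 400 KB written, 204800 bytes used -> 2.00:1, printed as 200
compression_ratio_x100 400 204800
```

Scaling by 100 before dividing keeps two decimal digits of the ratio in integer form, matching the script's `FAIL compression ratio: 0.$v:1` message when `$v` drops below 100.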
-1
tools/testing/selftests/zram/zram02.sh
··· 36 36 zram_makeswap 37 37 zram_swapoff 38 38 zram_cleanup 39 - zram_unload 40 39 41 40 if [ $ERR_CODE -ne 0 ]; then 42 41 echo "$TCID : [FAIL]"
+89 -47
tools/testing/selftests/zram/zram_lib.sh
··· 5 5 # Author: Alexey Kodanev <alexey.kodanev@oracle.com> 6 6 # Modified: Naresh Kamboju <naresh.kamboju@linaro.org> 7 7 8 - MODULE=0 9 8 dev_makeswap=-1 10 9 dev_mounted=-1 11 - 10 + dev_start=0 11 + dev_end=-1 12 + module_load=-1 13 + sys_control=-1 12 14 # Kselftest framework requirement - SKIP code is 4. 13 15 ksft_skip=4 16 + kernel_version=`uname -r | cut -d'.' -f1,2` 17 + kernel_major=${kernel_version%.*} 18 + kernel_minor=${kernel_version#*.} 14 19 15 20 trap INT 16 21 ··· 30 25 fi 31 26 } 32 27 28 + kernel_gte() 29 + { 30 + major=${1%.*} 31 + minor=${1#*.} 32 + 33 + if [ $kernel_major -gt $major ]; then 34 + return 0 35 + elif [[ $kernel_major -eq $major && $kernel_minor -ge $minor ]]; then 36 + return 0 37 + fi 38 + 39 + return 1 40 + } 41 + 33 42 zram_cleanup() 34 43 { 35 44 echo "zram cleanup" 36 45 local i= 37 - for i in $(seq 0 $dev_makeswap); do 46 + for i in $(seq $dev_start $dev_makeswap); do 38 47 swapoff /dev/zram$i 39 48 done 40 49 41 - for i in $(seq 0 $dev_mounted); do 50 + for i in $(seq $dev_start $dev_mounted); do 42 51 umount /dev/zram$i 43 52 done 44 53 45 - for i in $(seq 0 $(($dev_num - 1))); do 54 + for i in $(seq $dev_start $dev_end); do 46 55 echo 1 > /sys/block/zram${i}/reset 47 56 rm -rf zram$i 48 57 done 49 58 50 - } 59 + if [ $sys_control -eq 1 ]; then 60 + for i in $(seq $dev_start $dev_end); do 61 + echo $i > /sys/class/zram-control/hot_remove 62 + done 63 + fi 51 64 52 - zram_unload() 53 - { 54 - if [ $MODULE -ne 0 ] ; then 55 - echo "zram rmmod zram" 65 + if [ $module_load -eq 1 ]; then 56 66 rmmod zram > /dev/null 2>&1 57 67 fi 58 68 } 59 69 60 70 zram_load() 61 71 { 62 - # check zram module exists 63 - MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko 64 - if [ -f $MODULE_PATH ]; then 65 - MODULE=1 66 - echo "create '$dev_num' zram device(s)" 67 - modprobe zram num_devices=$dev_num 68 - if [ $? 
-ne 0 ]; then 69 - echo "failed to insert zram module" 70 - exit 1 71 - fi 72 + echo "create '$dev_num' zram device(s)" 72 73 73 - dev_num_created=$(ls /dev/zram* | wc -w) 74 + # zram module loaded, new kernel 75 + if [ -d "/sys/class/zram-control" ]; then 76 + echo "zram modules already loaded, kernel supports" \ 77 + "zram-control interface" 78 + dev_start=$(ls /dev/zram* | wc -w) 79 + dev_end=$(($dev_start + $dev_num - 1)) 80 + sys_control=1 74 81 75 - if [ "$dev_num_created" -ne "$dev_num" ]; then 76 - echo "unexpected num of devices: $dev_num_created" 77 - ERR_CODE=-1 78 - else 79 - echo "zram load module successful" 80 - fi 81 - elif [ -b /dev/zram0 ]; then 82 - echo "/dev/zram0 device file found: OK" 83 - else 84 - echo "ERROR: No zram.ko module or no /dev/zram0 device found" 85 - echo "$TCID : CONFIG_ZRAM is not set" 86 - exit 1 82 + for i in $(seq $dev_start $dev_end); do 83 + cat /sys/class/zram-control/hot_add > /dev/null 84 + done 85 + 86 + echo "all zram devices (/dev/zram$dev_start~$dev_end" \ 87 + "successfully created" 88 + return 0 87 89 fi 90 + 91 + # detect old kernel or built-in 92 + modprobe zram num_devices=$dev_num 93 + if [ ! -d "/sys/class/zram-control" ]; then 94 + if grep -q '^zram' /proc/modules; then 95 + rmmod zram > /dev/null 2>&1 96 + if [ $? -ne 0 ]; then 97 + echo "zram module is being used on old kernel" \ 98 + "without zram-control interface" 99 + exit $ksft_skip 100 + fi 101 + else 102 + echo "test needs CONFIG_ZRAM=m on old kernel without" \ 103 + "zram-control interface" 104 + exit $ksft_skip 105 + fi 106 + modprobe zram num_devices=$dev_num 107 + fi 108 + 109 + module_load=1 110 + dev_end=$(($dev_num - 1)) 111 + echo "all zram devices (/dev/zram0~$dev_end) successfully created" 88 112 } 89 113 90 114 zram_max_streams() 91 115 { 92 116 echo "set max_comp_streams to zram device(s)" 93 117 94 - local i=0 118 + kernel_gte 4.7 119 + if [ $? 
-eq 0 ]; then 120 + echo "The device attribute max_comp_streams was"\ 121 + "deprecated in 4.7" 122 + return 0 123 + fi 124 + 125 + local i=$dev_start 95 126 for max_s in $zram_max_streams; do 96 127 local sys_path="/sys/block/zram${i}/max_comp_streams" 97 128 echo $max_s > $sys_path || \ ··· 139 98 echo "FAIL can't set max_streams '$max_s', get $max_stream" 140 99 141 100 i=$(($i + 1)) 142 - echo "$sys_path = '$max_streams' ($i/$dev_num)" 101 + echo "$sys_path = '$max_streams'" 143 102 done 144 103 145 104 echo "zram max streams: OK" ··· 149 108 { 150 109 echo "test that we can set compression algorithm" 151 110 152 - local algs=$(cat /sys/block/zram0/comp_algorithm) 111 + local i=$dev_start 112 + local algs=$(cat /sys/block/zram${i}/comp_algorithm) 153 113 echo "supported algs: $algs" 154 - local i=0 114 + 155 115 for alg in $zram_algs; do 156 116 local sys_path="/sys/block/zram${i}/comp_algorithm" 157 117 echo "$alg" > $sys_path || \ 158 118 echo "FAIL can't set '$alg' to $sys_path" 159 119 i=$(($i + 1)) 160 - echo "$sys_path = '$alg' ($i/$dev_num)" 120 + echo "$sys_path = '$alg'" 161 121 done 162 122 163 123 echo "zram set compression algorithm: OK" ··· 167 125 zram_set_disksizes() 168 126 { 169 127 echo "set disk size to zram device(s)" 170 - local i=0 128 + local i=$dev_start 171 129 for ds in $zram_sizes; do 172 130 local sys_path="/sys/block/zram${i}/disksize" 173 131 echo "$ds" > $sys_path || \ 174 132 echo "FAIL can't set '$ds' to $sys_path" 175 133 176 134 i=$(($i + 1)) 177 - echo "$sys_path = '$ds' ($i/$dev_num)" 135 + echo "$sys_path = '$ds'" 178 136 done 179 137 180 138 echo "zram set disksizes: OK" ··· 184 142 { 185 143 echo "set memory limit to zram device(s)" 186 144 187 - local i=0 145 + local i=$dev_start 188 146 for ds in $zram_mem_limits; do 189 147 local sys_path="/sys/block/zram${i}/mem_limit" 190 148 echo "$ds" > $sys_path || \ 191 149 echo "FAIL can't set '$ds' to $sys_path" 192 150 193 151 i=$(($i + 1)) 194 - echo "$sys_path = '$ds' 
($i/$dev_num)" 152 + echo "$sys_path = '$ds'" 195 153 done 196 154 197 155 echo "zram set memory limit: OK" ··· 200 158 zram_makeswap() 201 159 { 202 160 echo "make swap with zram device(s)" 203 - local i=0 204 - for i in $(seq 0 $(($dev_num - 1))); do 161 + local i=$dev_start 162 + for i in $(seq $dev_start $dev_end); do 205 163 mkswap /dev/zram$i > err.log 2>&1 206 164 if [ $? -ne 0 ]; then 207 165 cat err.log ··· 224 182 zram_swapoff() 225 183 { 226 184 local i= 227 - for i in $(seq 0 $dev_makeswap); do 185 + for i in $(seq $dev_start $dev_end); do 228 186 swapoff /dev/zram$i > err.log 2>&1 229 187 if [ $? -ne 0 ]; then 230 188 cat err.log ··· 238 196 239 197 zram_makefs() 240 198 { 241 - local i=0 199 + local i=$dev_start 242 200 for fs in $zram_filesystems; do 243 201 # if requested fs not supported default it to ext2 244 202 which mkfs.$fs > /dev/null 2>&1 || fs=ext2 ··· 257 215 zram_mount() 258 216 { 259 217 local i=0 260 - for i in $(seq 0 $(($dev_num - 1))); do 218 + for i in $(seq $dev_start $dev_end); do 261 219 echo "mount /dev/zram$i" 262 220 mkdir zram$i 263 221 mount /dev/zram$i zram$i > /dev/null || \
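The `kernel_gte` helper added to zram_lib.sh above compares the running kernel's `major.minor` against a threshold (4.7, where `max_comp_streams` was deprecated). The same idiom, sketched standalone with the version passed in explicitly so it is testable without `uname`; `version_gte` is a hypothetical name, and the sketch uses only POSIX `[ ]` (the library's `[[ ]]` is a bashism inside a `/bin/sh` script):

```shell
#!/bin/sh
# Standalone sketch of the kernel_gte idiom: split "X.Y" with POSIX
# parameter expansion and compare numerically, so 4.10 >= 4.7 holds
# (a string compare would get that wrong). version_gte is hypothetical.

version_gte()	# usage: version_gte <have: X.Y> <want: X.Y>
{
	have_major=${1%.*}; have_minor=${1#*.}
	want_major=${2%.*}; want_minor=${2#*.}

	if [ "$have_major" -gt "$want_major" ]; then
		return 0
	elif [ "$have_major" -eq "$want_major" ] &&
	     [ "$have_minor" -ge "$want_minor" ]; then
		return 0
	fi
	return 1
}

version_gte 5.10 4.7 && echo "5.10 >= 4.7"
```

`${1%.*}` strips the shortest `.suffix` and `${1#*.}` the shortest `prefix.`, so `4.10` splits into `4` and `10`, and `-ge` then compares them as integers.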