Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

net/ipv4/udp.c
f796feabb9f5 ("udp: add local "peek offset enabled" flag")
56667da7399e ("net: implement lockless setsockopt(SO_PEEK_OFF)")

Adjacent changes:

net/unix/garbage.c
aa82ac51d633 ("af_unix: Drop oob_skb ref before purging queue in GC.")
11498715f266 ("af_unix: Remove io_uring code for GC.")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3245 -1528
+8 -8
Documentation/ABI/testing/sysfs-nvmem-cells
··· 4 4 Contact: Miquel Raynal <miquel.raynal@bootlin.com> 5 5 Description: 6 6 The "cells" folder contains one file per cell exposed by the 7 - NVMEM device. The name of the file is: <name>@<where>, with 8 - <name> being the cell name and <where> its location in the NVMEM 9 - device, in hexadecimal (without the '0x' prefix, to mimic device 10 - tree node names). The length of the file is the size of the cell 11 - (when known). The content of the file is the binary content of 12 - the cell (may sometimes be ASCII, likely without trailing 13 - character). 7 + NVMEM device. The name of the file is: "<name>@<byte>,<bit>", 8 + with <name> being the cell name and <where> its location in 9 + the NVMEM device, in hexadecimal bytes and bits (without the 10 + '0x' prefix, to mimic device tree node names). The length of 11 + the file is the size of the cell (when known). The content of 12 + the file is the binary content of the cell (may sometimes be 13 + ASCII, likely without trailing character). 14 14 Note: This file is only present if CONFIG_NVMEM_SYSFS 15 15 is enabled. 16 16 17 17 Example:: 18 18 19 - hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d 19 + hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d,0 20 20 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN| 21 21 0000000a
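The change above renames the sysfs cell files from `<name>@<where>` to `<name>@<byte>,<bit>`. A minimal C sketch of how userspace might split such a name back into its parts; `parse_cell_name` is a hypothetical helper, not anything shipped by the kernel:

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical parser for the new cell file name format
 * "<name>@<byte>,<bit>", where <byte> and <bit> are hexadecimal
 * without a "0x" prefix (mimicking device tree node names).
 * Returns 0 on success, -1 on malformed input.
 */
static int parse_cell_name(const char *fname, char *name, size_t name_sz,
			   unsigned int *byte, unsigned int *bit)
{
	const char *at = strrchr(fname, '@');

	if (!at || (size_t)(at - fname) >= name_sz)
		return -1;
	if (sscanf(at + 1, "%x,%x", byte, bit) != 2)
		return -1;
	memcpy(name, fname, at - fname);
	name[at - fname] = '\0';
	return 0;
}

/* Self-check against the hexdump example from the ABI document. */
static int demo(void)
{
	char name[32];
	unsigned int byte = ~0u, bit = ~0u;

	if (parse_cell_name("product-name@d,0", name, sizeof(name), &byte, &bit))
		return 0;
	return byte == 0xd && bit == 0 && strcmp(name, "product-name") == 0;
}
```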
+7
Documentation/arch/arm64/silicon-errata.rst
··· 243 243 +----------------+-----------------+-----------------+-----------------------------+ 244 244 | ASR | ASR8601 | #8601001 | N/A | 245 245 +----------------+-----------------+-----------------+-----------------------------+ 246 + +----------------+-----------------+-----------------+-----------------------------+ 247 + | Microsoft | Azure Cobalt 100| #2139208 | ARM64_ERRATUM_2139208 | 248 + +----------------+-----------------+-----------------+-----------------------------+ 249 + | Microsoft | Azure Cobalt 100| #2067961 | ARM64_ERRATUM_2067961 | 250 + +----------------+-----------------+-----------------+-----------------------------+ 251 + | Microsoft | Azure Cobalt 100| #2253138 | ARM64_ERRATUM_2253138 | 252 + +----------------+-----------------+-----------------+-----------------------------+
-1
Documentation/devicetree/bindings/sound/google,sc7280-herobrine.yaml
··· 7 7 title: Google SC7280-Herobrine ASoC sound card driver 8 8 9 9 maintainers: 10 - - Srinivasa Rao Mandadapu <srivasam@codeaurora.org> 11 10 - Judy Hsiao <judyhsiao@chromium.org> 12 11 13 12 description:
+3 -3
Documentation/kbuild/Kconfig.recursion-issue-01
··· 16 16 # that are possible for CORE. So for example if CORE_BELL_A_ADVANCED is 'y', 17 17 # CORE must be 'y' too. 18 18 # 19 - # * What influences CORE_BELL_A_ADVANCED ? 19 + # * What influences CORE_BELL_A_ADVANCED? 20 20 # 21 21 # As the name implies CORE_BELL_A_ADVANCED is an advanced feature of 22 22 # CORE_BELL_A so naturally it depends on CORE_BELL_A. So if CORE_BELL_A is 'y' 23 23 # we know CORE_BELL_A_ADVANCED can be 'y' too. 24 24 # 25 - # * What influences CORE_BELL_A ? 25 + # * What influences CORE_BELL_A? 26 26 # 27 27 # CORE_BELL_A depends on CORE, so CORE influences CORE_BELL_A. 28 28 # ··· 34 34 # the "recursive dependency detected" error. 35 35 # 36 36 # Reading the Documentation/kbuild/Kconfig.recursion-issue-01 file it may be 37 - # obvious that an easy to solution to this problem should just be the removal 37 + # obvious that an easy solution to this problem should just be the removal 38 38 # of the "select CORE" from CORE_BELL_A_ADVANCED as that is implicit already 39 39 # since CORE_BELL_A depends on CORE. Recursive dependency issues are not always 40 40 # so trivial to resolve, we provide another example below of practical
+121
Documentation/process/cve.rst
··· 1 + ==== 2 + CVEs 3 + ==== 4 + 5 + Common Vulnerabilities and Exposures (CVE®) numbers were developed as an 6 + unambiguous way to identify, define, and catalog publicly disclosed 7 + security vulnerabilities. Over time, their usefulness has declined with 8 + regard to the kernel project, and CVE numbers were very often assigned 9 + in inappropriate ways and for inappropriate reasons. Because of this, 10 + the kernel development community has tended to avoid them. However, the 11 + combination of continuing pressure to assign CVEs and other forms of 12 + security identifiers, and ongoing abuses by individuals and companies 13 + outside of the kernel community has made it clear that the kernel 14 + community should have control over those assignments. 15 + 16 + The Linux kernel developer team does have the ability to assign CVEs for 17 + potential Linux kernel security issues. This assignment is independent 18 + of the :doc:`normal Linux kernel security bug reporting 19 + process<../process/security-bugs>`. 20 + 21 + A list of all assigned CVEs for the Linux kernel can be found in the 22 + archives of the linux-cve mailing list, as seen on 23 + https://lore.kernel.org/linux-cve-announce/. To get notice of the 24 + assigned CVEs, please `subscribe 25 + <https://subspace.kernel.org/subscribing.html>`_ to that mailing list. 26 + 27 + Process 28 + ======= 29 + 30 + As part of the normal stable release process, kernel changes that are 31 + potentially security issues are identified by the developers responsible 32 + for CVE number assignments and have CVE numbers automatically assigned 33 + to them. These assignments are published on the linux-cve-announce 34 + mailing list as announcements on a frequent basis. 35 + 36 + Note, due to the layer at which the Linux kernel is in a system, almost 37 + any bug might be exploitable to compromise the security of the kernel, 38 + but the possibility of exploitation is often not evident when the bug is 39 + fixed. 
Because of this, the CVE assignment team is overly cautious and 40 + assigns CVE numbers to any bugfix that they identify. This 41 + explains the seemingly large number of CVEs that are issued by the Linux 42 + kernel team. 43 + 44 + If the CVE assignment team misses a specific fix that any user feels 45 + should have a CVE assigned to it, please email them at <cve@kernel.org> 46 + and the team there will work with you on it. Note that no potential 47 + security issues should be sent to this alias; it is ONLY for assignment 48 + of CVEs for fixes that are already in released kernel trees. If you 49 + feel you have found an unfixed security issue, please follow the 50 + :doc:`normal Linux kernel security bug reporting 51 + process<../process/security-bugs>`. 52 + 53 + No CVEs will be automatically assigned for unfixed security issues in 54 + the Linux kernel; assignment will only automatically happen after a fix 55 + is available and applied to a stable kernel tree, and it will be tracked 56 + that way by the git commit id of the original fix. If anyone wishes to 57 + have a CVE assigned before an issue is resolved with a commit, please 58 + contact the kernel CVE assignment team at <cve@kernel.org> to get an 59 + identifier assigned from their batch of reserved identifiers. 60 + 61 + No CVEs will be assigned for any issue found in a version of the kernel 62 + that is not currently being actively supported by the Stable/LTS kernel 63 + team. A list of the currently supported kernel branches can be found at 64 + https://kernel.org/releases.html 65 + 66 + Disputes of assigned CVEs 67 + ========================= 68 + 69 + The authority to dispute or modify an assigned CVE for a specific kernel 70 + change lies solely with the maintainers of the relevant subsystem 71 + affected. This principle ensures a high degree of accuracy and 72 + accountability in vulnerability reporting. 
Only those individuals with 73 + deep expertise and intimate knowledge of the subsystem can effectively 74 + assess the validity and scope of a reported vulnerability and determine 75 + its appropriate CVE designation. Any attempt to modify or dispute a CVE 76 + outside of this designated authority could lead to confusion, inaccurate 77 + reporting, and ultimately, compromised systems. 78 + 79 + Invalid CVEs 80 + ============ 81 + 82 + If a security issue is found in a Linux kernel that is only supported by 83 + a Linux distribution due to the changes that have been made by that 84 + distribution, or due to the distribution supporting a kernel version 85 + that is no longer one of the kernel.org supported releases, then a CVE 86 + cannot be assigned by the Linux kernel CVE team, and must be asked for 87 + from that Linux distribution itself. 88 + 89 + Any CVE that is assigned against the Linux kernel for an actively 90 + supported kernel version, by any group other than the kernel CVE 91 + assignment team should not be treated as a valid CVE. Please notify the 92 + kernel CVE assignment team at <cve@kernel.org> so that they can work to 93 + invalidate such entries through the CNA remediation process. 94 + 95 + Applicability of specific CVEs 96 + ============================== 97 + 98 + As the Linux kernel can be used in many different ways, with many 99 + different ways of accessing it by external users, or no access at all, 100 + the applicability of any specific CVE is up to the user of Linux to 101 + determine; it is not up to the CVE assignment team. Please do not 102 + contact us to attempt to determine the applicability of any specific 103 + CVE. 104 + 105 + Also, as the source tree is so large, and any one system only uses a 106 + small subset of the source tree, any users of Linux should be aware that 107 + large numbers of assigned CVEs are not relevant for their systems. 
108 + 109 + In short, we do not know your use case, and we do not know what portions 110 + of the kernel you use, so there is no way for us to determine if a 111 + specific CVE is relevant for your system. 112 + 113 + As always, it is best to take all released kernel changes, as they are 114 + tested together in a unified whole by many community members, and not as 115 + individual cherry-picked changes. Also note that for many bugs, the 116 + solution to the overall problem is not found in a single change, but by 117 + the sum of many fixes on top of each other. Ideally CVEs will be 118 + assigned to all fixes for all issues, but sometimes we will fail to 119 + notice fixes, therefore assume that some changes without a CVE assigned 120 + might be relevant to take. 121 +
+1
Documentation/process/index.rst
··· 81 81 82 82 handling-regressions 83 83 security-bugs 84 + cve 84 85 embargoed-hardware-issues 85 86 86 87 Maintainer information
+1 -1
Documentation/process/maintainer-netdev.rst
··· 431 431 Checks in patchwork are mostly simple wrappers around existing kernel 432 432 scripts, the sources are available at: 433 433 434 - https://github.com/kuba-moo/nipa/tree/master/tests 434 + https://github.com/linux-netdev/nipa/tree/master/tests 435 435 436 436 **Do not** post your patches just to run them through the checks. 437 437 You must ensure that your patches are ready by testing them locally
+2 -3
Documentation/process/security-bugs.rst
··· 99 99 The security team does not assign CVEs, nor do we require them for 100 100 reports or fixes, as this can needlessly complicate the process and may 101 101 delay the bug handling. If a reporter wishes to have a CVE identifier 102 - assigned, they should find one by themselves, for example by contacting 103 - MITRE directly. However under no circumstances will a patch inclusion 104 - be delayed to wait for a CVE identifier to arrive. 102 + assigned for a confirmed issue, they can contact the :doc:`kernel CVE 103 + assignment team<../process/cve>` to obtain one. 105 104 106 105 Non-disclosure agreements 107 106 -------------------------
+16
MAINTAINERS
··· 5613 5613 F: Documentation/devicetree/bindings/net/can/ctu,ctucanfd.yaml 5614 5614 F: drivers/net/can/ctucanfd/ 5615 5615 5616 + CVE ASSIGNMENT CONTACT 5617 + M: CVE Assignment Team <cve@kernel.org> 5618 + S: Maintained 5619 + F: Documentation/process/cve.rst 5620 + 5616 5621 CW1200 WLAN driver 5617 5622 S: Orphan 5618 5623 F: drivers/net/wireless/st/cw1200/ ··· 15262 15257 F: Documentation/networking/net_cachelines/ 15263 15258 F: Documentation/process/maintainer-netdev.rst 15264 15259 F: Documentation/userspace-api/netlink/ 15260 + F: include/linux/framer/framer-provider.h 15261 + F: include/linux/framer/framer.h 15265 15262 F: include/linux/in.h 15266 15263 F: include/linux/indirect_call_wrapper.h 15267 15264 F: include/linux/net.h ··· 16864 16857 16865 16858 PCI DRIVER FOR TI DRA7XX/J721E 16866 16859 M: Vignesh Raghavendra <vigneshr@ti.com> 16860 + R: Siddharth Vadapalli <s-vadapalli@ti.com> 16867 16861 L: linux-omap@vger.kernel.org 16868 16862 L: linux-pci@vger.kernel.org 16869 16863 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 22043 22035 F: Documentation/devicetree/bindings/media/i2c/ti,ds90* 22044 22036 F: drivers/media/i2c/ds90* 22045 22037 F: include/media/i2c/ds90* 22038 + 22039 + TI HDC302X HUMIDITY DRIVER 22040 + M: Javier Carrasco <javier.carrasco.cruz@gmail.com> 22041 + M: Li peiyu <579lpy@gmail.com> 22042 + L: linux-iio@vger.kernel.org 22043 + S: Maintained 22044 + F: Documentation/devicetree/bindings/iio/humidity/ti,hdc3020.yaml 22045 + F: drivers/iio/humidity/hdc3020.c 22046 22046 22047 22047 TI ICSSG ETHERNET DRIVER (ICSSG) 22048 22048 R: MD Danish Anwar <danishanwar@ti.com>
+7 -7
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 8 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION* ··· 294 294 single-build := 295 295 296 296 ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),) 297 - ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),) 297 + ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),) 298 298 need-config := 299 - endif 299 + endif 300 300 endif 301 301 302 302 ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),) 303 - ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),) 303 + ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),) 304 304 may-sync-config := 305 - endif 305 + endif 306 306 endif 307 307 308 308 need-compiler := $(may-sync-config) ··· 323 323 # We cannot build single targets and the others at the same time 324 324 ifneq ($(filter $(single-targets), $(MAKECMDGOALS)),) 325 325 single-build := 1 326 - ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),) 326 + ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),) 327 327 mixed-build := 1 328 - endif 328 + endif 329 329 endif 330 330 331 331 # For "make -j clean all", "make -j mrproper defconfig all", etc.
+1 -1
arch/arm64/include/asm/cpufeature.h
··· 83 83 * to full-0 denotes that this field has no override 84 84 * 85 85 * A @mask field set to full-0 with the corresponding @val field set 86 - * to full-1 denotes thath this field has an invalid override. 86 + * to full-1 denotes that this field has an invalid override. 87 87 */ 88 88 struct arm64_ftr_override { 89 89 u64 val;
+4
arch/arm64/include/asm/cputype.h
··· 61 61 #define ARM_CPU_IMP_HISI 0x48 62 62 #define ARM_CPU_IMP_APPLE 0x61 63 63 #define ARM_CPU_IMP_AMPERE 0xC0 64 + #define ARM_CPU_IMP_MICROSOFT 0x6D 64 65 65 66 #define ARM_CPU_PART_AEM_V8 0xD0F 66 67 #define ARM_CPU_PART_FOUNDATION 0xD00 ··· 136 135 137 136 #define AMPERE_CPU_PART_AMPERE1 0xAC3 138 137 138 + #define MICROSOFT_CPU_PART_AZURE_COBALT_100 0xD49 /* Based on r0p0 of ARM Neoverse N2 */ 139 + 139 140 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) 140 141 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) 141 142 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72) ··· 196 193 #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX) 197 194 #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX) 198 195 #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1) 196 + #define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100) 199 197 200 198 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */ 201 199 #define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX
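The new defines follow the MIDR_EL1 layout that `MIDR_CPU_MODEL()` encodes: implementer in bits [31:24], architecture (0xf, meaning "use the CPUID scheme") in bits [19:16], part number in bits [15:4]. Azure Cobalt 100 reuses the Neoverse N2 part number 0xD49 under Microsoft's own implementer code 0x6D, which is why the N2 errata lists above gain a second entry. A standalone sketch of the encoding:

```c
#include <stdint.h>

/* Field positions from the ARM MIDR_EL1 layout used by the kernel's
 * MIDR_CPU_MODEL(): implementer [31:24], architecture [19:16],
 * part number [15:4]. Variant and revision are masked out here. */
#define MIDR_IMPLEMENTOR_SHIFT	24
#define MIDR_ARCHITECTURE_SHIFT	16
#define MIDR_PARTNUM_SHIFT	4

static uint32_t midr_cpu_model(uint32_t imp, uint32_t partnum)
{
	return (imp << MIDR_IMPLEMENTOR_SHIFT) |
	       (0xfu << MIDR_ARCHITECTURE_SHIFT) |
	       (partnum << MIDR_PARTNUM_SHIFT);
}
```

So `MIDR_MICROSOFT_AZURE_COBALT_100` works out to 0x6D0FD490, distinct from the ARM-implementer N2 model even though the part number matches.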
+6 -6
arch/arm64/include/asm/fpsimd.h
··· 62 62 * When we defined the maximum SVE vector length we defined the ABI so 63 63 * that the maximum vector length included all the reserved for future 64 64 * expansion bits in ZCR rather than those just currently defined by 65 - * the architecture. While SME follows a similar pattern the fact that 66 - * it includes a square matrix means that any allocations that attempt 67 - * to cover the maximum potential vector length (such as happen with 68 - * the regset used for ptrace) end up being extremely large. Define 69 - * the much lower actual limit for use in such situations. 65 + * the architecture. Using this length to allocate worst size buffers 66 + * results in excessively large allocations, and this effect is even 67 + * more pronounced for SME due to ZA. Define more suitable VLs for 68 + * these situations. 70 69 */ 71 - #define SME_VQ_MAX 16 70 + #define ARCH_SVE_VQ_MAX ((ZCR_ELx_LEN_MASK >> ZCR_ELx_LEN_SHIFT) + 1) 71 + #define SME_VQ_MAX ((SMCR_ELx_LEN_MASK >> SMCR_ELx_LEN_SHIFT) + 1) 72 72 73 73 struct task_struct; 74 74
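The point of the new `ARCH_SVE_VQ_MAX` is sizing: the ABI constant `SVE_VQ_MAX` (512) covers reserved expansion space, while the architected limit derived from the ZCR LEN field is far smaller. A sketch of the arithmetic, assuming the generated sysreg definitions where LEN occupies bits [3:0] and the vector length in quadwords is LEN + 1:

```c
/* LEN is a 4-bit field at bits [3:0] in the generated sysreg
 * definitions; VQ (128-bit quadwords) = LEN + 1. */
#define ZCR_ELx_LEN_SHIFT	0
#define ZCR_ELx_LEN_MASK	0xfu
#define SVE_VQ_BYTES		16	/* one quadword = 16 bytes */
#define SVE_VQ_MAX		512	/* ABI maximum, includes reserved bits */

static unsigned int arch_sve_vq_max(void)
{
	return (ZCR_ELx_LEN_MASK >> ZCR_ELx_LEN_SHIFT) + 1;
}

/* Worst-case per-register buffer: architected vs. ABI maximum. */
static unsigned int arch_max_bytes(void)	{ return arch_sve_vq_max() * SVE_VQ_BYTES; }
static unsigned int abi_max_bytes(void)		{ return SVE_VQ_MAX * SVE_VQ_BYTES; }
```

With these numbers, allocations sized by the architected limit are 256 bytes per Z register instead of 8 KiB, which is why the ptrace regset below switches over.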
+8 -4
arch/arm64/include/asm/jump_label.h
··· 15 15 16 16 #define JUMP_LABEL_NOP_SIZE AARCH64_INSN_SIZE 17 17 18 + /* 19 + * Prefer the constraint "S" to support PIC with GCC. Clang before 19 does not 20 + * support "S" on a symbol with a constant offset, so we use "i" as a fallback. 21 + */ 18 22 static __always_inline bool arch_static_branch(struct static_key * const key, 19 23 const bool branch) 20 24 { ··· 27 23 " .pushsection __jump_table, \"aw\" \n\t" 28 24 " .align 3 \n\t" 29 25 " .long 1b - ., %l[l_yes] - . \n\t" 30 - " .quad %c0 - . \n\t" 26 + " .quad (%[key] - .) + %[bit0] \n\t" 31 27 " .popsection \n\t" 32 - : : "i"(&((char *)key)[branch]) : : l_yes); 28 + : : [key]"Si"(key), [bit0]"i"(branch) : : l_yes); 33 29 34 30 return false; 35 31 l_yes: ··· 44 40 " .pushsection __jump_table, \"aw\" \n\t" 45 41 " .align 3 \n\t" 46 42 " .long 1b - ., %l[l_yes] - . \n\t" 47 - " .quad %c0 - . \n\t" 43 + " .quad (%[key] - .) + %[bit0] \n\t" 48 44 " .popsection \n\t" 49 - : : "i"(&((char *)key)[branch]) : : l_yes); 45 + : : [key]"Si"(key), [bit0]"i"(branch) : : l_yes); 50 46 51 47 return false; 52 48 l_yes:
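Both the old `&((char *)key)[branch]` operand and the new `(%[key] - .) + %[bit0]` expression encode the same trick: the key pointer is at least word-aligned, so its low bits are known zero and the branch flag can ride in bit 0 of the stored address. A C sketch of that packing, with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Stash a 0/1 flag in the low bit of an aligned pointer, the way the
 * jump table entry encodes "key address + branch bit". */
static uintptr_t pack_key(const void *key, unsigned int branch)
{
	assert(((uintptr_t)key & 3) == 0);	/* alignment gives spare bits */
	return (uintptr_t)key + branch;		/* branch is 0 or 1 */
}

static const void *unpack_key(uintptr_t v)	{ return (const void *)(v & ~(uintptr_t)3); }
static unsigned int unpack_branch(uintptr_t v)	{ return v & 1u; }

static int demo(void)
{
	static _Alignas(8) int key;
	uintptr_t packed = pack_key(&key, 1);

	return unpack_key(packed) == (const void *)&key &&
	       unpack_branch(packed) == 1;
}
```

The diff's actual change is only in how the operand is spelled to the compiler: expressing it as symbol-minus-dot plus a constant lets GCC use the PIC-friendly "S" constraint, with "i" kept as a fallback for older Clang.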
+3
arch/arm64/kernel/cpu_errata.c
··· 374 374 static const struct midr_range trbe_overwrite_fill_mode_cpus[] = { 375 375 #ifdef CONFIG_ARM64_ERRATUM_2139208 376 376 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), 377 + MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100), 377 378 #endif 378 379 #ifdef CONFIG_ARM64_ERRATUM_2119858 379 380 MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), ··· 388 387 static const struct midr_range tsb_flush_fail_cpus[] = { 389 388 #ifdef CONFIG_ARM64_ERRATUM_2067961 390 389 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), 390 + MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100), 391 391 #endif 392 392 #ifdef CONFIG_ARM64_ERRATUM_2054223 393 393 MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), ··· 401 399 static struct midr_range trbe_write_out_of_range_cpus[] = { 402 400 #ifdef CONFIG_ARM64_ERRATUM_2253138 403 401 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), 402 + MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100), 404 403 #endif 405 404 #ifdef CONFIG_ARM64_ERRATUM_2224489 406 405 MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+1 -1
arch/arm64/kernel/fpsimd.c
··· 1635 1635 void fpsimd_signal_preserve_current_state(void) 1636 1636 { 1637 1637 fpsimd_preserve_current_state(); 1638 - if (test_thread_flag(TIF_SVE)) 1638 + if (current->thread.fp_type == FP_STATE_SVE) 1639 1639 sve_to_fpsimd(current); 1640 1640 } 1641 1641
+2 -1
arch/arm64/kernel/ptrace.c
··· 1500 1500 #ifdef CONFIG_ARM64_SVE 1501 1501 [REGSET_SVE] = { /* Scalable Vector Extension */ 1502 1502 .core_note_type = NT_ARM_SVE, 1503 - .n = DIV_ROUND_UP(SVE_PT_SIZE(SVE_VQ_MAX, SVE_PT_REGS_SVE), 1503 + .n = DIV_ROUND_UP(SVE_PT_SIZE(ARCH_SVE_VQ_MAX, 1504 + SVE_PT_REGS_SVE), 1504 1505 SVE_VQ_BYTES), 1505 1506 .size = SVE_VQ_BYTES, 1506 1507 .align = SVE_VQ_BYTES,
+2 -2
arch/arm64/kernel/signal.c
··· 242 242 vl = task_get_sme_vl(current); 243 243 vq = sve_vq_from_vl(vl); 244 244 flags |= SVE_SIG_FLAG_SM; 245 - } else if (test_thread_flag(TIF_SVE)) { 245 + } else if (current->thread.fp_type == FP_STATE_SVE) { 246 246 vq = sve_vq_from_vl(vl); 247 247 } 248 248 ··· 878 878 if (system_supports_sve() || system_supports_sme()) { 879 879 unsigned int vq = 0; 880 880 881 - if (add_all || test_thread_flag(TIF_SVE) || 881 + if (add_all || current->thread.fp_type == FP_STATE_SVE || 882 882 thread_sm_enabled(&current->thread)) { 883 883 int vl = max(sve_max_vl(), sme_max_vl()); 884 884
-1
arch/arm64/kvm/Kconfig
··· 3 3 # KVM configuration 4 4 # 5 5 6 - source "virt/lib/Kconfig" 7 6 source "virt/kvm/Kconfig" 8 7 9 8 menuconfig VIRTUALIZATION
-2
arch/arm64/kvm/hyp/pgtable.c
··· 1419 1419 level + 1); 1420 1420 if (ret) { 1421 1421 kvm_pgtable_stage2_free_unlinked(mm_ops, pgtable, level); 1422 - mm_ops->put_page(pgtable); 1423 1422 return ERR_PTR(ret); 1424 1423 } 1425 1424 ··· 1501 1502 1502 1503 if (!stage2_try_break_pte(ctx, mmu)) { 1503 1504 kvm_pgtable_stage2_free_unlinked(mm_ops, childp, level); 1504 - mm_ops->put_page(childp); 1505 1505 return -EAGAIN; 1506 1506 } 1507 1507
+17 -10
arch/arm64/kvm/pkvm.c
··· 101 101 hyp_mem_base); 102 102 } 103 103 104 + static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm) 105 + { 106 + if (host_kvm->arch.pkvm.handle) { 107 + WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_vm, 108 + host_kvm->arch.pkvm.handle)); 109 + } 110 + 111 + host_kvm->arch.pkvm.handle = 0; 112 + free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc); 113 + } 114 + 104 115 /* 105 116 * Allocates and donates memory for hypervisor VM structs at EL2. 106 117 * ··· 192 181 return 0; 193 182 194 183 destroy_vm: 195 - pkvm_destroy_hyp_vm(host_kvm); 184 + __pkvm_destroy_hyp_vm(host_kvm); 196 185 return ret; 197 186 free_vm: 198 187 free_pages_exact(hyp_vm, hyp_vm_sz); ··· 205 194 { 206 195 int ret = 0; 207 196 208 - mutex_lock(&host_kvm->lock); 197 + mutex_lock(&host_kvm->arch.config_lock); 209 198 if (!host_kvm->arch.pkvm.handle) 210 199 ret = __pkvm_create_hyp_vm(host_kvm); 211 - mutex_unlock(&host_kvm->lock); 200 + mutex_unlock(&host_kvm->arch.config_lock); 212 201 213 202 return ret; 214 203 } 215 204 216 205 void pkvm_destroy_hyp_vm(struct kvm *host_kvm) 217 206 { 218 - if (host_kvm->arch.pkvm.handle) { 219 - WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_vm, 220 - host_kvm->arch.pkvm.handle)); 221 - } 222 - 223 - host_kvm->arch.pkvm.handle = 0; 224 - free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc); 207 + mutex_lock(&host_kvm->arch.config_lock); 208 + __pkvm_destroy_hyp_vm(host_kvm); 209 + mutex_unlock(&host_kvm->arch.config_lock); 225 210 } 226 211 227 212 int pkvm_init_host_vm(struct kvm *host_kvm)
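The pkvm rework above is the common kernel locking idiom: a double-underscore helper that assumes the lock is held, plus a public wrapper that takes it. That lets the creation error path (already under `config_lock`) and the normal destroy path share one teardown body without self-deadlock. A pthreads sketch of the shape; the names are illustrative, not kernel API:

```c
#include <pthread.h>

struct vm {
	pthread_mutex_t config_lock;
	int handle;
};

/* Caller must hold vm->config_lock. */
static void __destroy_vm(struct vm *vm)
{
	vm->handle = 0;
	/* ... free per-VM resources ... */
}

/* Public entry point: takes the lock, then calls the locked helper. */
static void destroy_vm(struct vm *vm)
{
	pthread_mutex_lock(&vm->config_lock);
	__destroy_vm(vm);
	pthread_mutex_unlock(&vm->config_lock);
}

static int demo(void)
{
	struct vm vm;

	pthread_mutex_init(&vm.config_lock, NULL);
	vm.handle = 42;
	destroy_vm(&vm);
	return vm.handle;
}
```

An error path that already holds the lock calls `__destroy_vm()` directly, exactly as `__pkvm_create_hyp_vm()`'s `destroy_vm:` label now does.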
+5
arch/arm64/kvm/vgic/vgic-its.c
··· 468 468 } 469 469 470 470 irq = vgic_get_irq(vcpu->kvm, NULL, intids[i]); 471 + if (!irq) 472 + continue; 473 + 471 474 raw_spin_lock_irqsave(&irq->irq_lock, flags); 472 475 irq->pending_latch = pendmask & (1U << bit_nr); 473 476 vgic_queue_irq_unlock(vcpu->kvm, irq, flags); ··· 1435 1432 1436 1433 for (i = 0; i < irq_count; i++) { 1437 1434 irq = vgic_get_irq(kvm, NULL, intids[i]); 1435 + if (!irq) 1436 + continue; 1438 1437 1439 1438 update_affinity(irq, vcpu2); 1440 1439
+2 -2
arch/m68k/Makefile
··· 15 15 KBUILD_DEFCONFIG := multi_defconfig 16 16 17 17 ifdef cross_compiling 18 - ifeq ($(CROSS_COMPILE),) 18 + ifeq ($(CROSS_COMPILE),) 19 19 CROSS_COMPILE := $(call cc-cross-prefix, \ 20 20 m68k-linux-gnu- m68k-linux- m68k-unknown-linux-gnu-) 21 - endif 21 + endif 22 22 endif 23 23 24 24 #
+2 -2
arch/parisc/Makefile
··· 50 50 51 51 # Set default cross compiler for kernel build 52 52 ifdef cross_compiling 53 - ifeq ($(CROSS_COMPILE),) 53 + ifeq ($(CROSS_COMPILE),) 54 54 CC_SUFFIXES = linux linux-gnu unknown-linux-gnu suse-linux 55 55 CROSS_COMPILE := $(call cc-cross-prefix, \ 56 56 $(foreach a,$(CC_ARCHES), \ 57 57 $(foreach s,$(CC_SUFFIXES),$(a)-$(s)-))) 58 - endif 58 + endif 59 59 endif 60 60 61 61 ifdef CONFIG_DYNAMIC_FTRACE
+2 -8
arch/powerpc/include/asm/ftrace.h
··· 20 20 #ifndef __ASSEMBLY__ 21 21 extern void _mcount(void); 22 22 23 - static inline unsigned long ftrace_call_adjust(unsigned long addr) 24 - { 25 - if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) 26 - addr += MCOUNT_INSN_SIZE; 27 - 28 - return addr; 29 - } 30 - 31 23 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 32 24 unsigned long sp); 33 25 ··· 134 142 #ifdef CONFIG_FUNCTION_TRACER 135 143 extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; 136 144 void ftrace_free_init_tramp(void); 145 + unsigned long ftrace_call_adjust(unsigned long addr); 137 146 #else 138 147 static inline void ftrace_free_init_tramp(void) { } 148 + static inline unsigned long ftrace_call_adjust(unsigned long addr) { return addr; } 139 149 #endif 140 150 #endif /* !__ASSEMBLY__ */ 141 151
+1 -1
arch/powerpc/include/asm/papr-sysparm.h
··· 32 32 */ 33 33 struct papr_sysparm_buf { 34 34 __be16 len; 35 - char val[PAPR_SYSPARM_MAX_OUTPUT]; 35 + u8 val[PAPR_SYSPARM_MAX_OUTPUT]; 36 36 }; 37 37 38 38 struct papr_sysparm_buf *papr_sysparm_buf_alloc(void);
+2
arch/powerpc/include/asm/reg.h
··· 617 617 #endif 618 618 #define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */ 619 619 #define SPRN_HID2_GEKKO 0x398 /* Gekko HID2 Register */ 620 + #define SPRN_HID2_G2_LE 0x3F3 /* G2_LE HID2 Register */ 621 + #define HID2_G2_LE_HBE (1<<18) /* High BAT Enable (G2_LE) */ 620 622 #define SPRN_IABR 0x3F2 /* Instruction Address Breakpoint Register */ 621 623 #define SPRN_IABR2 0x3FA /* 83xx */ 622 624 #define SPRN_IBCR 0x135 /* 83xx Insn Breakpoint Control Reg */
+1
arch/powerpc/include/asm/sections.h
··· 14 14 15 15 extern char __head_end[]; 16 16 extern char __srwx_boundary[]; 17 + extern char __exittext_begin[], __exittext_end[]; 17 18 18 19 /* Patch sites */ 19 20 extern s32 patch__call_flush_branch_caches1;
+1 -1
arch/powerpc/include/asm/thread_info.h
··· 14 14 15 15 #ifdef __KERNEL__ 16 16 17 - #ifdef CONFIG_KASAN 17 + #if defined(CONFIG_KASAN) && CONFIG_THREAD_SHIFT < 15 18 18 #define MIN_THREAD_SHIFT (CONFIG_THREAD_SHIFT + 1) 19 19 #else 20 20 #define MIN_THREAD_SHIFT CONFIG_THREAD_SHIFT
+1 -1
arch/powerpc/include/uapi/asm/papr-sysparm.h
··· 14 14 struct papr_sysparm_io_block { 15 15 __u32 parameter; 16 16 __u16 length; 17 - char data[PAPR_SYSPARM_MAX_OUTPUT]; 17 + __u8 data[PAPR_SYSPARM_MAX_OUTPUT]; 18 18 }; 19 19 20 20 /**
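The `char` → `__u8` change matters because plain `char` has implementation-defined signedness (typically signed on x86, unsigned on arm/arm64 Linux ABIs), so raw bytes >= 0x80 held in a plain `char` can sign-extend when widened. A small demonstration of the hazard an explicit unsigned byte type avoids:

```c
#include <stdint.h>

/* uint8_t always widens to 128; plain char widens to -128 wherever
 * char is signed, so byte-buffer comparisons and shifts can differ
 * across architectures. */
static int byte_widens_cleanly(void)
{
	uint8_t u = 0x80;
	char c = (char)0x80;

	return (int)u == 128 &&
	       ((int)c == 128 || (int)c == -128);	/* either, per ABI */
}
```

For a uapi header shared across architectures, pinning the type to `__u8` makes the layout and value semantics identical everywhere.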
+19 -1
arch/powerpc/kernel/cpu_setup_6xx.S
··· 26 26 bl __init_fpu_registers 27 27 END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE) 28 28 bl setup_common_caches 29 + 30 + /* 31 + * This assumes that all cores using __setup_cpu_603 with 32 + * MMU_FTR_USE_HIGH_BATS are G2_LE compatible 33 + */ 34 + BEGIN_MMU_FTR_SECTION 35 + bl setup_g2_le_hid2 36 + END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) 37 + 29 38 mtlr r5 30 39 blr 31 40 _GLOBAL(__setup_cpu_604) ··· 123 114 isync 124 115 blr 125 116 SYM_FUNC_END(setup_604_hid0) 117 + 118 + /* Enable high BATs for G2_LE and derivatives like e300cX */ 119 + SYM_FUNC_START_LOCAL(setup_g2_le_hid2) 120 + mfspr r11,SPRN_HID2_G2_LE 121 + oris r11,r11,HID2_G2_LE_HBE@h 122 + mtspr SPRN_HID2_G2_LE,r11 123 + sync 124 + isync 125 + blr 126 + SYM_FUNC_END(setup_g2_le_hid2) 126 127 127 128 /* 7400 <= rev 2.7 and 7410 rev = 1.0 suffer from some 128 129 * erratas we work around here. ··· 514 495 mtcr r7 515 496 blr 516 497 _ASM_NOKPROBE_SYMBOL(__restore_cpu_setup) 517 -
+2 -1
arch/powerpc/kernel/cpu_specs_e500mc.h
··· 8 8 9 9 #ifdef CONFIG_PPC64 10 10 #define COMMON_USER_BOOKE (PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \ 11 - PPC_FEATURE_HAS_FPU | PPC_FEATURE_64) 11 + PPC_FEATURE_HAS_FPU | PPC_FEATURE_64 | \ 12 + PPC_FEATURE_BOOKE) 12 13 #else 13 14 #define COMMON_USER_BOOKE (PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \ 14 15 PPC_FEATURE_BOOKE)
+2 -2
arch/powerpc/kernel/interrupt_64.S
··· 52 52 mr r10,r1 53 53 ld r1,PACAKSAVE(r13) 54 54 std r10,0(r1) 55 - std r11,_NIP(r1) 55 + std r11,_LINK(r1) 56 + std r11,_NIP(r1) /* Saved LR is also the next instruction */ 56 57 std r12,_MSR(r1) 57 58 std r0,GPR0(r1) 58 59 std r10,GPR1(r1) ··· 71 70 std r9,GPR13(r1) 72 71 SAVE_NVGPRS(r1) 73 72 std r11,_XER(r1) 74 - std r11,_LINK(r1) 75 73 std r11,_CTR(r1) 76 74 77 75 li r11,\trapnr
+3 -1
arch/powerpc/kernel/iommu.c
··· 1289 1289 struct iommu_table_group *table_group; 1290 1290 1291 1291 /* At first attach the ownership is already set */ 1292 - if (!domain) 1292 + if (!domain) { 1293 + iommu_group_put(grp); 1293 1294 return 0; 1295 + } 1294 1296 1295 1297 table_group = iommu_group_get_iommudata(grp); 1296 1298 /*
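The iommu fix plugs a reference leak: the function had taken a group reference before this point, and the early `return 0` skipped the matching put. A minimal sketch of the get/put-balance pattern with a hypothetical refcount, not the real iommu API:

```c
struct group { int refcount; };

static void group_get(struct group *g) { g->refcount++; }
static void group_put(struct group *g) { g->refcount--; }

/* Every exit path must drop the reference this function took. */
static int attach(struct group *g, int domain_present)
{
	group_get(g);
	if (!domain_present) {
		group_put(g);	/* the fix: drop the ref on this path too */
		return 0;
	}
	/* ... real work using g ... */
	group_put(g);
	return 1;
}

/* The count must return to its starting value on either path. */
static int balanced(int domain_present)
{
	struct group g = { 1 };

	attach(&g, domain_present);
	return g.refcount == 1;
}
```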
+12
arch/powerpc/kernel/trace/ftrace.c
··· 27 27 #include <asm/ftrace.h> 28 28 #include <asm/syscall.h> 29 29 #include <asm/inst.h> 30 + #include <asm/sections.h> 30 31 31 32 #define NUM_FTRACE_TRAMPS 2 32 33 static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS]; 34 + 35 + unsigned long ftrace_call_adjust(unsigned long addr) 36 + { 37 + if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end) 38 + return 0; 39 + 40 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) 41 + addr += MCOUNT_INSN_SIZE; 42 + 43 + return addr; 44 + } 33 45 34 46 static ppc_inst_t ftrace_create_branch_inst(unsigned long ip, unsigned long addr, int link) 35 47 {
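The new out-of-line `ftrace_call_adjust()` does two things: it returns 0 (meaning "do not trace") for addresses inside the `__exittext` range, whose code is discarded, and otherwise skips past the mcount stub when patchable function entry is in use. A sketch of that logic with illustrative bounds:

```c
#include <stdint.h>

/* Illustrative linker-provided bounds; the kernel gets these from
 * __exittext_begin/__exittext_end in vmlinux.lds.S. */
static const uintptr_t exittext_begin = 0x1000, exittext_end = 0x2000;
#define MCOUNT_INSN_SIZE 4

static uintptr_t call_adjust(uintptr_t addr, int patchable_entry)
{
	if (addr >= exittext_begin && addr < exittext_end)
		return 0;	/* never patch code that will be freed */
	if (patchable_entry)
		addr += MCOUNT_INSN_SIZE;	/* skip the entry nop(s) */
	return addr;
}
```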
+5
arch/powerpc/kernel/trace/ftrace_64_pg.c
··· 37 37 #define NUM_FTRACE_TRAMPS 8 38 38 static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS]; 39 39 40 + unsigned long ftrace_call_adjust(unsigned long addr) 41 + { 42 + return addr; 43 + } 44 + 40 45 static ppc_inst_t 41 46 ftrace_call_replace(unsigned long ip, unsigned long addr, int link) 42 47 {
+2
arch/powerpc/kernel/vmlinux.lds.S
··· 281 281 * to deal with references from __bug_table 282 282 */ 283 283 .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { 284 + __exittext_begin = .; 284 285 EXIT_TEXT 286 + __exittext_end = .; 285 287 } 286 288 287 289 . = ALIGN(PAGE_SIZE);
+1
arch/powerpc/mm/kasan/init_32.c
··· 64 64 if (ret) 65 65 return ret; 66 66 67 + k_start = k_start & PAGE_MASK; 67 68 block = memblock_alloc(k_end - k_start, PAGE_SIZE); 68 69 if (!block) 69 70 return -ENOMEM;
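The one-line kasan fix rounds `k_start` down to a page boundary before computing the allocation size, so `k_end - k_start` covers the whole first page rather than starting mid-page. The masking arithmetic, assuming 4 KiB pages:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(PAGE_SIZE - 1))	/* clears the in-page offset bits */

static uint32_t page_align_down(uint32_t addr)
{
	return addr & PAGE_MASK;
}

/* Size of the block needed to back [k_start, k_end) from the start
 * of k_start's page, mirroring "k_start = k_start & PAGE_MASK". */
static uint32_t span(uint32_t k_start, uint32_t k_end)
{
	return k_end - page_align_down(k_start);
}
```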
+1 -1
arch/powerpc/platforms/85xx/mpc8536_ds.c
··· 27 27 28 28 #include "mpc85xx.h" 29 29 30 - void __init mpc8536_ds_pic_init(void) 30 + static void __init mpc8536_ds_pic_init(void) 31 31 { 32 32 struct mpic *mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN, 33 33 0, 256, " OpenPIC ");
+1 -1
arch/powerpc/platforms/85xx/mvme2500.c
··· 21 21 22 22 #include "mpc85xx.h" 23 23 24 - void __init mvme2500_pic_init(void) 24 + static void __init mvme2500_pic_init(void) 25 25 { 26 26 struct mpic *mpic = mpic_alloc(NULL, 0, 27 27 MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU,
+1 -1
arch/powerpc/platforms/85xx/p1010rdb.c
··· 24 24 25 25 #include "mpc85xx.h" 26 26 27 - void __init p1010_rdb_pic_init(void) 27 + static void __init p1010_rdb_pic_init(void) 28 28 { 29 29 struct mpic *mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN | 30 30 MPIC_SINGLE_DEST_CPU,
+3 -3
arch/powerpc/platforms/85xx/p1022_ds.c
··· 370 370 * 371 371 * @pixclock: the wavelength, in picoseconds, of the clock 372 372 */ 373 - void p1022ds_set_pixel_clock(unsigned int pixclock) 373 + static void p1022ds_set_pixel_clock(unsigned int pixclock) 374 374 { 375 375 struct device_node *guts_np = NULL; 376 376 struct ccsr_guts __iomem *guts; ··· 418 418 /** 419 419 * p1022ds_valid_monitor_port: set the monitor port for sysfs 420 420 */ 421 - enum fsl_diu_monitor_port 421 + static enum fsl_diu_monitor_port 422 422 p1022ds_valid_monitor_port(enum fsl_diu_monitor_port port) 423 423 { 424 424 switch (port) { ··· 432 432 433 433 #endif 434 434 435 - void __init p1022_ds_pic_init(void) 435 + static void __init p1022_ds_pic_init(void) 436 436 { 437 437 struct mpic *mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN | 438 438 MPIC_SINGLE_DEST_CPU,
+3 -3
arch/powerpc/platforms/85xx/p1022_rdk.c
··· 40 40 * 41 41 * @pixclock: the wavelength, in picoseconds, of the clock 42 42 */ 43 - void p1022rdk_set_pixel_clock(unsigned int pixclock) 43 + static void p1022rdk_set_pixel_clock(unsigned int pixclock) 44 44 { 45 45 struct device_node *guts_np = NULL; 46 46 struct ccsr_guts __iomem *guts; ··· 88 88 /** 89 89 * p1022rdk_valid_monitor_port: set the monitor port for sysfs 90 90 */ 91 - enum fsl_diu_monitor_port 91 + static enum fsl_diu_monitor_port 92 92 p1022rdk_valid_monitor_port(enum fsl_diu_monitor_port port) 93 93 { 94 94 return FSL_DIU_PORT_DVI; ··· 96 96 97 97 #endif 98 98 99 - void __init p1022_rdk_pic_init(void) 99 + static void __init p1022_rdk_pic_init(void) 100 100 { 101 101 struct mpic *mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN | 102 102 MPIC_SINGLE_DEST_CPU,
+2
arch/powerpc/platforms/85xx/socrates_fpga_pic.c
··· 8 8 #include <linux/of_irq.h> 9 9 #include <linux/io.h> 10 10 11 + #include "socrates_fpga_pic.h" 12 + 11 13 /* 12 14 * The FPGA supports 9 interrupt sources, which can be routed to 3 13 15 * interrupt request lines of the MPIC. The line to be used can be
+1 -1
arch/powerpc/platforms/85xx/xes_mpc85xx.c
··· 37 37 #define MPC85xx_L2CTL_L2I 0x40000000 /* L2 flash invalidate */ 38 38 #define MPC85xx_L2CTL_L2SIZ_MASK 0x30000000 /* L2 SRAM size (R/O) */ 39 39 40 - void __init xes_mpc85xx_pic_init(void) 40 + static void __init xes_mpc85xx_pic_init(void) 41 41 { 42 42 struct mpic *mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN, 43 43 0, 256, " OpenPIC ");
+6 -2
arch/powerpc/platforms/pseries/lpar.c
··· 662 662 { 663 663 struct lppaca *lppaca = &lppaca_of(cpu); 664 664 665 - return be64_to_cpu(READ_ONCE(lppaca->enqueue_dispatch_tb)) + 666 - be64_to_cpu(READ_ONCE(lppaca->ready_enqueue_tb)); 665 + /* 666 + * VPA steal time counters are reported at TB frequency. Hence do a 667 + * conversion to ns before returning 668 + */ 669 + return tb_to_ns(be64_to_cpu(READ_ONCE(lppaca->enqueue_dispatch_tb)) + 670 + be64_to_cpu(READ_ONCE(lppaca->ready_enqueue_tb))); 667 671 } 668 672 #endif 669 673
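The lpar.c fix above scales the VPA steal-time counters from timebase ticks to nanoseconds with tb_to_ns(). A rough user-space sketch of that conversion (hypothetical helper name; the real tb_to_ns() uses a precomputed mult/shift pair rather than a divide):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: timebase ticks -> nanoseconds. The kernel's
 * tb_to_ns() computes the same quantity with a mult/shift pair to
 * avoid a 64-bit division on every call. */
static uint64_t sketch_tb_to_ns(uint64_t tb, uint64_t tb_hz)
{
	/* May overflow for very large tick counts; fine for a sketch. */
	return tb * 1000000000ULL / tb_hz;
}
```

At the common 512 MHz POWER timebase, 512000000 ticks map to exactly one second.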
+3 -3
arch/powerpc/sysdev/udbg_memcons.c
··· 41 41 .input_end = &memcons_input[CONFIG_PPC_MEMCONS_INPUT_SIZE], 42 42 }; 43 43 44 - void memcons_putc(char c) 44 + static void memcons_putc(char c) 45 45 { 46 46 char *new_output_pos; 47 47 ··· 54 54 memcons.output_pos = new_output_pos; 55 55 } 56 56 57 - int memcons_getc_poll(void) 57 + static int memcons_getc_poll(void) 58 58 { 59 59 char c; 60 60 char *new_input_pos; ··· 77 77 return -1; 78 78 } 79 79 80 - int memcons_getc(void) 80 + static int memcons_getc(void) 81 81 { 82 82 int c; 83 83
+3 -3
arch/riscv/kernel/paravirt.c
··· 41 41 42 42 early_param("no-steal-acc", parse_no_stealacc); 43 43 44 - DEFINE_PER_CPU(struct sbi_sta_struct, steal_time) __aligned(64); 44 + static DEFINE_PER_CPU(struct sbi_sta_struct, steal_time) __aligned(64); 45 45 46 46 static bool __init has_pv_steal_clock(void) 47 47 { ··· 91 91 static u64 pv_time_steal_clock(int cpu) 92 92 { 93 93 struct sbi_sta_struct *st = per_cpu_ptr(&steal_time, cpu); 94 - u32 sequence; 95 - u64 steal; 94 + __le32 sequence; 95 + __le64 steal; 96 96 97 97 /* 98 98 * Check the sequence field before and after reading the steal
+12 -8
arch/riscv/kvm/vcpu_sbi_sta.c
··· 26 26 { 27 27 gpa_t shmem = vcpu->arch.sta.shmem; 28 28 u64 last_steal = vcpu->arch.sta.last_steal; 29 - u32 *sequence_ptr, sequence; 30 - u64 *steal_ptr, steal; 29 + __le32 __user *sequence_ptr; 30 + __le64 __user *steal_ptr; 31 + __le32 sequence_le; 32 + __le64 steal_le; 33 + u32 sequence; 34 + u64 steal; 31 35 unsigned long hva; 32 36 gfn_t gfn; 33 37 ··· 51 47 return; 52 48 } 53 49 54 - sequence_ptr = (u32 *)(hva + offset_in_page(shmem) + 50 + sequence_ptr = (__le32 __user *)(hva + offset_in_page(shmem) + 55 51 offsetof(struct sbi_sta_struct, sequence)); 56 - steal_ptr = (u64 *)(hva + offset_in_page(shmem) + 52 + steal_ptr = (__le64 __user *)(hva + offset_in_page(shmem) + 57 53 offsetof(struct sbi_sta_struct, steal)); 58 54 59 - if (WARN_ON(get_user(sequence, sequence_ptr))) 55 + if (WARN_ON(get_user(sequence_le, sequence_ptr))) 60 56 return; 61 57 62 - sequence = le32_to_cpu(sequence); 58 + sequence = le32_to_cpu(sequence_le); 63 59 sequence += 1; 64 60 65 61 if (WARN_ON(put_user(cpu_to_le32(sequence), sequence_ptr))) 66 62 return; 67 63 68 - if (!WARN_ON(get_user(steal, steal_ptr))) { 69 - steal = le64_to_cpu(steal); 64 + if (!WARN_ON(get_user(steal_le, steal_ptr))) { 65 + steal = le64_to_cpu(steal_le); 70 66 vcpu->arch.sta.last_steal = READ_ONCE(current->sched_info.run_delay); 71 67 steal += vcpu->arch.sta.last_steal - last_steal; 72 68 WARN_ON(put_user(cpu_to_le64(steal), steal_ptr));
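The vcpu_sbi_sta.c change above reads the shared steal-time record under a sequence check. A minimal sketch of that reader protocol (hypothetical names; the real code reads through get_user(), converts from little-endian, and the writer must publish updates with proper barriers):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the sequence-counter protocol used by the SBI STA shared
 * page: a writer bumps `sequence` to an odd value before touching
 * `steal` and back to even afterwards; a reader retries until it sees
 * the same even sequence on both sides of the read. */
struct sketch_sta {
	uint32_t sequence;
	uint64_t steal;
};

static uint64_t sketch_read_steal(const struct sketch_sta *st)
{
	uint32_t before, after;
	uint64_t steal;

	do {
		before = st->sequence;
		steal = st->steal;
		after = st->sequence;
	} while (before != after || (before & 1));

	return steal;
}
```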
+4 -4
arch/x86/Makefile
··· 112 112 # temporary until string.h is fixed 113 113 KBUILD_CFLAGS += -ffreestanding 114 114 115 - ifeq ($(CONFIG_STACKPROTECTOR),y) 116 - ifeq ($(CONFIG_SMP),y) 115 + ifeq ($(CONFIG_STACKPROTECTOR),y) 116 + ifeq ($(CONFIG_SMP),y) 117 117 KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard 118 - else 118 + else 119 119 KBUILD_CFLAGS += -mstack-protector-guard=global 120 - endif 121 120 endif 121 + endif 122 122 else 123 123 BITS := 64 124 124 UTS_MACHINE := x86_64
+10
arch/x86/include/asm/vsyscall.h
··· 4 4 5 5 #include <linux/seqlock.h> 6 6 #include <uapi/asm/vsyscall.h> 7 + #include <asm/page_types.h> 7 8 8 9 #ifdef CONFIG_X86_VSYSCALL_EMULATION 9 10 extern void map_vsyscall(void); ··· 24 23 return false; 25 24 } 26 25 #endif 26 + 27 + /* 28 + * The (legacy) vsyscall page is the long page in the kernel portion 29 + * of the address space that has user-accessible permissions. 30 + */ 31 + static inline bool is_vsyscall_vaddr(unsigned long vaddr) 32 + { 33 + return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR); 34 + } 27 35 28 36 #endif /* _ASM_X86_VSYSCALL_H */
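The new is_vsyscall_vaddr() helper simply compares the page-masked address against the fixed vsyscall base. A user-space sketch with stand-in constants (the real PAGE_MASK and VSYSCALL_ADDR come from the kernel headers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in constants for illustration only. */
#define SKETCH_PAGE_SIZE     4096ULL
#define SKETCH_PAGE_MASK     (~(SKETCH_PAGE_SIZE - 1))
#define SKETCH_VSYSCALL_ADDR 0xffffffffff600000ULL

/* Mirrors the new helper: any address inside the single vsyscall page
 * matches once the page-offset bits are masked off. */
static bool sketch_is_vsyscall_vaddr(uint64_t vaddr)
{
	return (vaddr & SKETCH_PAGE_MASK) == SKETCH_VSYSCALL_ADDR;
}
```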
+1 -1
arch/x86/kvm/vmx/pmu_intel.c
··· 71 71 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) 72 72 { 73 73 struct kvm_pmc *pmc; 74 - u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl; 74 + u64 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl; 75 75 int i; 76 76 77 77 pmu->fixed_ctr_ctrl = data;
+8 -12
arch/x86/kvm/x86.c
··· 1704 1704 struct kvm_msr_entry msr; 1705 1705 int r; 1706 1706 1707 + /* Unconditionally clear the output for simplicity */ 1708 + msr.data = 0; 1707 1709 msr.index = index; 1708 1710 r = kvm_get_msr_feature(&msr); 1709 1711 1710 - if (r == KVM_MSR_RET_INVALID) { 1711 - /* Unconditionally clear the output for simplicity */ 1712 - *data = 0; 1713 - if (kvm_msr_ignored_check(index, 0, false)) 1714 - r = 0; 1715 - } 1716 - 1717 - if (r) 1718 - return r; 1712 + if (r == KVM_MSR_RET_INVALID && kvm_msr_ignored_check(index, 0, false)) 1713 + r = 0; 1719 1714 1720 1715 *data = msr.data; 1721 1716 1722 - return 0; 1717 + return r; 1723 1718 } 1724 1719 1725 1720 static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer) ··· 2506 2511 } 2507 2512 2508 2513 #ifdef CONFIG_X86_64 2509 - static inline int gtod_is_based_on_tsc(int mode) 2514 + static inline bool gtod_is_based_on_tsc(int mode) 2510 2515 { 2511 2516 return mode == VDSO_CLOCKMODE_TSC || mode == VDSO_CLOCKMODE_HVCLOCK; 2512 2517 } ··· 5453 5458 if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) { 5454 5459 vcpu->arch.nmi_pending = 0; 5455 5460 atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending); 5456 - kvm_make_request(KVM_REQ_NMI, vcpu); 5461 + if (events->nmi.pending) 5462 + kvm_make_request(KVM_REQ_NMI, vcpu); 5457 5463 } 5458 5464 static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked); 5459 5465
-9
arch/x86/mm/fault.c
··· 798 798 show_opcodes(regs, loglvl); 799 799 } 800 800 801 - /* 802 - * The (legacy) vsyscall page is the long page in the kernel portion 803 - * of the address space that has user-accessible permissions. 804 - */ 805 - static bool is_vsyscall_vaddr(unsigned long vaddr) 806 - { 807 - return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR); 808 - } 809 - 810 801 static void 811 802 __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code, 812 803 unsigned long address, u32 pkey, int si_code)
+18 -5
arch/x86/mm/ident_map.c
··· 26 26 for (; addr < end; addr = next) { 27 27 pud_t *pud = pud_page + pud_index(addr); 28 28 pmd_t *pmd; 29 + bool use_gbpage; 29 30 30 31 next = (addr & PUD_MASK) + PUD_SIZE; 31 32 if (next > end) 32 33 next = end; 33 34 34 - if (info->direct_gbpages) { 35 + /* if this is already a gbpage, this portion is already mapped */ 36 + if (pud_large(*pud)) 37 + continue; 38 + 39 + /* Is using a gbpage allowed? */ 40 + use_gbpage = info->direct_gbpages; 41 + 42 + /* Don't use gbpage if it maps more than the requested region. */ 43 + /* at the beginning: */ 44 + use_gbpage &= ((addr & ~PUD_MASK) == 0); 45 + /* ... or at the end: */ 46 + use_gbpage &= ((next & ~PUD_MASK) == 0); 47 + 48 + /* Never overwrite existing mappings */ 49 + use_gbpage &= !pud_present(*pud); 50 + 51 + if (use_gbpage) { 35 52 pud_t pudval; 36 53 37 - if (pud_present(*pud)) 38 - continue; 39 - 40 - addr &= PUD_MASK; 41 54 pudval = __pud((addr - info->offset) | info->page_flag); 42 55 set_pud(pud, pudval); 43 56 continue;
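The rewritten ident_map loop only installs a gbpage when the requested range covers the whole 1 GiB slot. A sketch of just that eligibility test, with stand-in constants mirroring x86-64's PUD geometry (the kernel additionally checks pud_present() to avoid clobbering existing entries):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in geometry mirroring x86-64: a PUD entry covers 1 GiB. */
#define SKETCH_PUD_SIZE (1ULL << 30)
#define SKETCH_PUD_MASK (~(SKETCH_PUD_SIZE - 1))

/* Use a gbpage only when the caller allows it and [addr, next) covers
 * the whole 1 GiB slot, so the mapping never spills past the
 * requested region. */
static bool sketch_can_use_gbpage(uint64_t addr, uint64_t next, bool allowed)
{
	bool use = allowed;

	use &= (addr & ~SKETCH_PUD_MASK) == 0; /* aligned at the beginning */
	use &= (next & ~SKETCH_PUD_MASK) == 0; /* ... and at the end */
	return use;
}
```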
+10
arch/x86/mm/maccess.c
··· 3 3 #include <linux/uaccess.h> 4 4 #include <linux/kernel.h> 5 5 6 + #include <asm/vsyscall.h> 7 + 6 8 #ifdef CONFIG_X86_64 7 9 bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size) 8 10 { ··· 15 13 * normal userspace and the userspace guard page: 16 14 */ 17 15 if (vaddr < TASK_SIZE_MAX + PAGE_SIZE) 16 + return false; 17 + 18 + /* 19 + * Reading from the vsyscall page may cause an unhandled fault in 20 + * certain cases. Though it is at an address above TASK_SIZE_MAX, it is 21 + * usually considered as a user space address. 22 + */ 23 + if (is_vsyscall_vaddr(vaddr)) 18 24 return false; 19 25 20 26 /*
+32 -12
drivers/accel/ivpu/ivpu_hw_37xx.c
··· 510 510 return ret; 511 511 } 512 512 513 - static int ivpu_boot_pwr_domain_disable(struct ivpu_device *vdev) 514 - { 515 - ivpu_boot_dpu_active_drive(vdev, false); 516 - ivpu_boot_pwr_island_isolation_drive(vdev, true); 517 - ivpu_boot_pwr_island_trickle_drive(vdev, false); 518 - ivpu_boot_pwr_island_drive(vdev, false); 519 - 520 - return ivpu_boot_wait_for_pwr_island_status(vdev, 0x0); 521 - } 522 - 523 513 static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev) 524 514 { 525 515 u32 val = REGV_RD32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES); ··· 606 616 return 0; 607 617 } 618 + 619 + static int ivpu_hw_37xx_ip_reset(struct ivpu_device *vdev) 620 + { 621 + int ret; 622 + u32 val; 623 + 624 + if (IVPU_WA(punit_disabled)) 625 + return 0; 626 + 627 + ret = REGB_POLL_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); 628 + if (ret) { 629 + ivpu_err(vdev, "Timed out waiting for TRIGGER bit\n"); 630 + return ret; 631 + } 632 + 633 + val = REGB_RD32(VPU_37XX_BUTTRESS_VPU_IP_RESET); 634 + val = REG_SET_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, val); 635 + REGB_WR32(VPU_37XX_BUTTRESS_VPU_IP_RESET, val); 636 + 637 + ret = REGB_POLL_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); 638 + if (ret) 639 + ivpu_err(vdev, "Timed out waiting for RESET completion\n"); 640 + 641 + return ret; 642 + } 643 + 609 644 static int ivpu_hw_37xx_reset(struct ivpu_device *vdev) 610 645 { 611 646 int ret = 0; 612 647 613 - if (ivpu_hw_37xx_ip_reset(vdev)) { 614 - ivpu_err(vdev, "Failed to disable power domain\n"); 648 + if (ivpu_hw_37xx_ip_reset(vdev)) { 649 + ivpu_err(vdev, "Failed to reset NPU\n"); 615 650 ret = -EIO; 616 651 } 617 652 ··· 675 660 static int ivpu_hw_37xx_power_up(struct ivpu_device *vdev) 676 661 { 677 662 int ret; 663 + 664 + /* PLL requests may fail when powering down, so issue WP 0 here */ 665 + ret = ivpu_pll_disable(vdev); 666 + if (ret) 667 + ivpu_warn(vdev, "Failed to disable PLL: %d\n", ret); 678 668 679 669 ret = ivpu_hw_37xx_d0i3_disable(vdev); 680 670 if (ret)
+22 -17
drivers/accel/ivpu/ivpu_pm.c
··· 58 58 { 59 59 int ret; 60 60 61 + /* Save PCI state before powering down as it sometimes gets corrupted if NPU hangs */ 62 + pci_save_state(to_pci_dev(vdev->drm.dev)); 63 + 61 64 ret = ivpu_shutdown(vdev); 62 - if (ret) { 65 + if (ret) 63 66 ivpu_err(vdev, "Failed to shutdown VPU: %d\n", ret); 64 - return ret; 65 - } 67 + 68 + pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D3hot); 66 69 67 70 return ret; 68 71 } ··· 73 70 static int ivpu_resume(struct ivpu_device *vdev) 74 71 { 75 72 int ret; 73 + 74 + pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D0); 75 + pci_restore_state(to_pci_dev(vdev->drm.dev)); 76 76 77 77 retry: 78 78 ret = ivpu_hw_power_up(vdev); ··· 126 120 127 121 ivpu_fw_log_dump(vdev); 128 122 129 - retry: 130 - ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev)); 131 - if (ret == -EAGAIN && !drm_dev_is_unplugged(&vdev->drm)) { 132 - cond_resched(); 133 - goto retry; 134 - } 123 + atomic_inc(&vdev->pm->reset_counter); 124 + atomic_set(&vdev->pm->reset_pending, 1); 125 + down_write(&vdev->pm->reset_lock); 135 126 136 - if (ret && ret != -EAGAIN) 137 - ivpu_err(vdev, "Failed to reset VPU: %d\n", ret); 127 + ivpu_suspend(vdev); 128 + ivpu_pm_prepare_cold_boot(vdev); 129 + ivpu_jobs_abort_all(vdev); 130 + 131 + ret = ivpu_resume(vdev); 132 + if (ret) 133 + ivpu_err(vdev, "Failed to resume NPU: %d\n", ret); 134 + 135 + up_write(&vdev->pm->reset_lock); 136 + atomic_set(&vdev->pm->reset_pending, 0); 138 137 139 138 kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt); 140 139 pm_runtime_mark_last_busy(vdev->drm.dev); ··· 211 200 ivpu_suspend(vdev); 212 201 ivpu_pm_prepare_warm_boot(vdev); 213 202 214 - pci_save_state(to_pci_dev(dev)); 215 - pci_set_power_state(to_pci_dev(dev), PCI_D3hot); 216 - 217 203 ivpu_dbg(vdev, PM, "Suspend done.\n"); 218 204 219 205 return 0; ··· 223 215 int ret; 224 216 225 217 ivpu_dbg(vdev, PM, "Resume..\n"); 226 - 227 - pci_set_power_state(to_pci_dev(dev), PCI_D0); 228 - pci_restore_state(to_pci_dev(dev)); 
229 218 230 219 ret = ivpu_resume(vdev); 231 220 if (ret)
+6 -7
drivers/base/arch_topology.c
··· 431 431 struct cpufreq_policy *policy = data; 432 432 int cpu; 433 433 434 - if (!raw_capacity) 435 - return 0; 436 - 437 434 if (val != CPUFREQ_CREATE_POLICY) 438 435 return 0; 439 436 ··· 447 450 } 448 451 449 452 if (cpumask_empty(cpus_to_visit)) { 450 - topology_normalize_cpu_scale(); 451 - schedule_work(&update_topology_flags_work); 452 - free_raw_capacity(); 453 + if (raw_capacity) { 454 + topology_normalize_cpu_scale(); 455 + schedule_work(&update_topology_flags_work); 456 + free_raw_capacity(); 457 + } 453 458 pr_debug("cpu_capacity: parsing done\n"); 454 459 schedule_work(&parsing_done_work); 455 460 } ··· 471 472 * On ACPI-based systems skip registering cpufreq notifier as cpufreq 472 473 * information is not needed for cpu capacity initialization. 473 474 */ 474 - if (!acpi_disabled || !raw_capacity) 475 + if (!acpi_disabled) 475 476 return -EINVAL; 476 477 477 478 if (!alloc_cpumask_var(&cpus_to_visit, GFP_KERNEL))
+20 -6
drivers/base/core.c
··· 125 125 */ 126 126 static void __fwnode_link_cycle(struct fwnode_link *link) 127 127 { 128 - pr_debug("%pfwf: Relaxing link with %pfwf\n", 128 + pr_debug("%pfwf: cycle: depends on %pfwf\n", 129 129 link->consumer, link->supplier); 130 130 link->flags |= FWLINK_FLAG_CYCLE; 131 131 } ··· 284 284 return false; 285 285 } 286 286 287 + #define DL_MARKER_FLAGS (DL_FLAG_INFERRED | \ 288 + DL_FLAG_CYCLE | \ 289 + DL_FLAG_MANAGED) 287 290 static inline bool device_link_flag_is_sync_state_only(u32 flags) 288 291 { 289 - return (flags & ~(DL_FLAG_INFERRED | DL_FLAG_CYCLE)) == 290 - (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED); 292 + return (flags & ~DL_MARKER_FLAGS) == DL_FLAG_SYNC_STATE_ONLY; 291 293 } 292 294 293 295 /** ··· 1945 1943 1946 1944 /* Termination condition. */ 1947 1945 if (sup_dev == con) { 1946 + pr_debug("----- cycle: start -----\n"); 1948 1947 ret = true; 1949 1948 goto out; 1950 1949 } ··· 1977 1974 else 1978 1975 par_dev = fwnode_get_next_parent_dev(sup_handle); 1979 1976 1980 - if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode)) 1977 + if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode)) { 1978 + pr_debug("%pfwf: cycle: child of %pfwf\n", sup_handle, 1979 + par_dev->fwnode); 1981 1980 ret = true; 1981 + } 1982 1982 1983 1983 if (!sup_dev) 1984 1984 goto out; ··· 1997 1991 1998 1992 if (__fw_devlink_relax_cycles(con, 1999 1993 dev_link->supplier->fwnode)) { 1994 + pr_debug("%pfwf: cycle: depends on %pfwf\n", sup_handle, 1995 + dev_link->supplier->fwnode); 2000 1996 fw_devlink_relax_link(dev_link); 2001 1997 dev_link->flags |= DL_FLAG_CYCLE; 2002 1998 ret = true; ··· 2066 2058 2067 2059 /* 2068 2060 * SYNC_STATE_ONLY device links don't block probing and supports cycles. 2069 - * So cycle detection isn't necessary and shouldn't be done. 2061 + * So, one might expect that cycle detection isn't necessary for them. 
2062 + * However, if the device link was marked as SYNC_STATE_ONLY because 2063 + * it's part of a cycle, then we still need to do cycle detection. This 2064 + * is because the consumer and supplier might be part of multiple cycles 2065 + * and we need to detect all those cycles. 2070 2066 */ 2071 - if (!(flags & DL_FLAG_SYNC_STATE_ONLY)) { 2067 + if (!device_link_flag_is_sync_state_only(flags) || 2068 + flags & DL_FLAG_CYCLE) { 2072 2069 device_links_write_lock(); 2073 2070 if (__fw_devlink_relax_cycles(con, sup_handle)) { 2074 2071 __fwnode_link_cycle(link); 2075 2072 flags = fw_devlink_get_flags(link->flags); 2073 + pr_debug("----- cycle: end -----\n"); 2076 2074 dev_info(con, "Fixed dependency cycle(s) with %pfwf\n", 2077 2075 sup_handle); 2078 2076 }
+3 -2
drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
··· 104 104 } 105 105 106 106 static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher_ctx *ctx, 107 - struct virtio_crypto_ctrl_header *header, void *para, 107 + struct virtio_crypto_ctrl_header *header, 108 + struct virtio_crypto_akcipher_session_para *para, 108 109 const uint8_t *key, unsigned int keylen) 109 110 { 110 111 struct scatterlist outhdr_sg, key_sg, inhdr_sg, *sgs[3]; ··· 129 128 130 129 ctrl = &vc_ctrl_req->ctrl; 131 130 memcpy(&ctrl->header, header, sizeof(ctrl->header)); 132 - memcpy(&ctrl->u, para, sizeof(ctrl->u)); 131 + memcpy(&ctrl->u.akcipher_create_session.para, para, sizeof(*para)); 133 132 input = &vc_ctrl_req->input; 134 133 input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR); 135 134
+3
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 200 200 extern uint amdgpu_dc_visual_confirm; 201 201 extern uint amdgpu_dm_abm_level; 202 202 extern int amdgpu_backlight; 203 + extern int amdgpu_damage_clips; 203 204 extern struct amdgpu_mgpu_info mgpu_info; 204 205 extern int amdgpu_ras_enable; 205 206 extern uint amdgpu_ras_mask; ··· 1550 1549 #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND) 1551 1550 bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev); 1552 1551 bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev); 1552 + void amdgpu_choose_low_power_state(struct amdgpu_device *adev); 1553 1553 #else 1554 1554 static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; } 1555 1555 static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; } 1556 + static inline void amdgpu_choose_low_power_state(struct amdgpu_device *adev) { } 1556 1557 #endif 1557 1558 1558 1559 #if defined(CONFIG_DRM_AMD_DC)
+15
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1519 1519 #endif /* CONFIG_AMD_PMC */ 1520 1520 } 1521 1521 1522 + /** 1523 + * amdgpu_choose_low_power_state 1524 + * 1525 + * @adev: amdgpu_device pointer 1526 + * 1527 + * Choose the target low power state for the GPU 1528 + */ 1529 + void amdgpu_choose_low_power_state(struct amdgpu_device *adev) 1530 + { 1531 + if (amdgpu_acpi_is_s0ix_active(adev)) 1532 + adev->in_s0ix = true; 1533 + else if (amdgpu_acpi_is_s3_active(adev)) 1534 + adev->in_s3 = true; 1535 + } 1536 + 1522 1537 #endif /* CONFIG_SUSPEND */
+9 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 4514 4514 struct amdgpu_device *adev = drm_to_adev(dev); 4515 4515 int i, r; 4516 4516 4517 + amdgpu_choose_low_power_state(adev); 4518 + 4517 4519 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 4518 4520 return 0; 4519 4521 4520 4522 /* Evict the majority of BOs before starting suspend sequence */ 4521 4523 r = amdgpu_device_evict_resources(adev); 4522 4524 if (r) 4523 - return r; 4525 + goto unprepare; 4524 4526 4525 4527 for (i = 0; i < adev->num_ip_blocks; i++) { 4526 4528 if (!adev->ip_blocks[i].status.valid) ··· 4531 4529 continue; 4532 4530 r = adev->ip_blocks[i].version->funcs->prepare_suspend((void *)adev); 4533 4531 if (r) 4534 - return r; 4532 + goto unprepare; 4535 4533 } 4536 4534 4537 4535 return 0; 4536 + 4537 + unprepare: 4538 + adev->in_s0ix = adev->in_s3 = false; 4539 + 4540 + return r; 4538 4541 } 4539 4542 4540 4543 /** ··· 4576 4569 drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true); 4577 4570 4578 4571 cancel_delayed_work_sync(&adev->delayed_init_work); 4579 - flush_delayed_work(&adev->gfx.gfx_off_delay_work); 4580 4572 4581 4573 amdgpu_ras_suspend(adev); 4582 4574
+13
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 211 211 uint amdgpu_debug_mask; 212 212 int amdgpu_agp = -1; /* auto */ 213 213 int amdgpu_wbrf = -1; 214 + int amdgpu_damage_clips = -1; /* auto */ 214 215 215 216 static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work); 216 217 ··· 859 858 int amdgpu_backlight = -1; 860 859 MODULE_PARM_DESC(backlight, "Backlight control (0 = pwm, 1 = aux, -1 auto (default))"); 861 860 module_param_named(backlight, amdgpu_backlight, bint, 0444); 861 + 862 + /** 863 + * DOC: damageclips (int) 864 + * Enable or disable damage clips support. If damage clips support is disabled, 865 + * we will force full frame updates, irrespective of what user space sends to 866 + * us. 867 + * 868 + * Defaults to -1 (where it is enabled unless a PSR-SU display is detected). 869 + */ 870 + MODULE_PARM_DESC(damageclips, 871 + "Damage clips support (0 = disable, 1 = enable, -1 auto (default))"); 872 + module_param_named(damageclips, amdgpu_damage_clips, int, 0444); 862 873 863 874 /** 864 875 * DOC: tmz (int)
+8 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
··· 723 723 724 724 if (adev->gfx.gfx_off_req_count == 0 && 725 725 !adev->gfx.gfx_off_state) { 726 - schedule_delayed_work(&adev->gfx.gfx_off_delay_work, 726 + /* If going to s2idle, no need to wait */ 727 + if (adev->in_s0ix) { 728 + if (!amdgpu_dpm_set_powergating_by_smu(adev, 729 + AMD_IP_BLOCK_TYPE_GFX, true)) 730 + adev->gfx.gfx_off_state = true; 731 + } else { 732 + schedule_delayed_work(&adev->gfx.gfx_off_delay_work, 727 733 delay); 734 + } 728 735 } 729 736 } else { 730 737 if (adev->gfx.gfx_off_req_count == 0) {
+2 -2
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 50 50 /* SOC21 */ 51 51 static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn0[] = { 52 52 {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)}, 53 - {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)}, 53 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)}, 54 54 {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)}, 55 55 }; 56 56 57 57 static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn1[] = { 58 58 {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)}, 59 - {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)}, 59 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)}, 60 60 }; 61 61 62 62 static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn0 = {
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
··· 55 55 m = get_mqd(mqd); 56 56 57 57 if (has_wa_flag) { 58 - uint32_t wa_mask = minfo->update_flag == UPDATE_FLAG_DBG_WA_ENABLE ? 59 - 0xffff : 0xffffffff; 58 + uint32_t wa_mask = 59 + (minfo->update_flag & UPDATE_FLAG_DBG_WA_ENABLE) ? 0xffff : 0xffffffff; 60 60 61 61 m->compute_static_thread_mgmt_se0 = wa_mask; 62 62 m->compute_static_thread_mgmt_se1 = wa_mask;
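The change above matters because update_flag is a bit field: testing it with `==` stops matching as soon as a second flag is set, while `&` keeps the workaround active. A small illustration with hypothetical flag values (the real ones live in kfd_priv.h):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag values for illustration only. */
#define SKETCH_DBG_WA_ENABLE 0x1u
#define SKETCH_OTHER_FLAG    0x4u

/* Old form: equality on the whole flag word. */
static uint32_t sketch_wa_mask_by_equality(uint32_t flags)
{
	return (flags == SKETCH_DBG_WA_ENABLE) ? 0xffffu : 0xffffffffu;
}

/* Fixed form: test only the bit of interest. */
static uint32_t sketch_wa_mask_by_bit(uint32_t flags)
{
	return (flags & SKETCH_DBG_WA_ENABLE) ? 0xffffu : 0xffffffffu;
}
```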
+9
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
··· 303 303 update_cu_mask(mm, mqd, minfo, 0); 304 304 set_priority(m, q); 305 305 306 + if (minfo && KFD_GC_VERSION(mm->dev) >= IP_VERSION(9, 4, 2)) { 307 + if (minfo->update_flag & UPDATE_FLAG_IS_GWS) 308 + m->compute_resource_limits |= 309 + COMPUTE_RESOURCE_LIMITS__FORCE_SIMD_DIST_MASK; 310 + else 311 + m->compute_resource_limits &= 312 + ~COMPUTE_RESOURCE_LIMITS__FORCE_SIMD_DIST_MASK; 313 + } 314 + 306 315 q->is_active = QUEUE_IS_ACTIVE(*q); 307 316 } 308 317
+1
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 532 532 enum mqd_update_flag { 533 533 UPDATE_FLAG_DBG_WA_ENABLE = 1, 534 534 UPDATE_FLAG_DBG_WA_DISABLE = 2, 535 + UPDATE_FLAG_IS_GWS = 4, /* quirk for gfx9 IP */ 535 536 }; 536 537 537 538 struct mqd_update_info {
+3 -1
drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
··· 95 95 int pqm_set_gws(struct process_queue_manager *pqm, unsigned int qid, 96 96 void *gws) 97 97 { 98 + struct mqd_update_info minfo = {0}; 98 99 struct kfd_node *dev = NULL; 99 100 struct process_queue_node *pqn; 100 101 struct kfd_process_device *pdd; ··· 147 146 } 148 147 149 148 pdd->qpd.num_gws = gws ? dev->adev->gds.gws_size : 0; 149 + minfo.update_flag = gws ? UPDATE_FLAG_IS_GWS : 0; 150 150 151 151 return pqn->q->device->dqm->ops.update_queue(pqn->q->device->dqm, 152 - pqn->q, NULL); 152 + pqn->q, &minfo); 153 153 } 154 154 155 155 void kfd_process_dequeue_from_all_devices(struct kfd_process *p)
+4 -6
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 1638 1638 else 1639 1639 mode = UNKNOWN_MEMORY_PARTITION_MODE; 1640 1640 1641 - if (pcache->cache_level == 2) 1642 - pcache->cache_size = pcache_info[cache_type].cache_size * num_xcc; 1643 - else if (mode) 1644 - pcache->cache_size = pcache_info[cache_type].cache_size / mode; 1645 - else 1646 - pcache->cache_size = pcache_info[cache_type].cache_size; 1641 + pcache->cache_size = pcache_info[cache_type].cache_size; 1642 + /* Partition mode only affects L3 cache size */ 1643 + if (mode && pcache->cache_level == 3) 1644 + pcache->cache_size /= mode; 1647 1645 1648 1646 if (pcache_info[cache_type].flags & CRAT_CACHE_FLAGS_DATA_CACHE) 1649 1647 pcache->cache_type |= HSA_CACHE_TYPE_DATA;
+10 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1956 1956 &adev->dm.dmub_bo_gpu_addr, 1957 1957 &adev->dm.dmub_bo_cpu_addr); 1958 1958 1959 - if (adev->dm.hpd_rx_offload_wq) { 1959 + if (adev->dm.hpd_rx_offload_wq && adev->dm.dc) { 1960 1960 for (i = 0; i < adev->dm.dc->caps.max_links; i++) { 1961 1961 if (adev->dm.hpd_rx_offload_wq[i].wq) { 1962 1962 destroy_workqueue(adev->dm.hpd_rx_offload_wq[i].wq); ··· 5219 5219 struct drm_plane_state *new_plane_state, 5220 5220 struct drm_crtc_state *crtc_state, 5221 5221 struct dc_flip_addrs *flip_addrs, 5222 + bool is_psr_su, 5222 5223 bool *dirty_regions_changed) 5223 5224 { 5224 5225 struct dm_crtc_state *dm_crtc_state = to_dm_crtc_state(crtc_state); ··· 5243 5242 5244 5243 num_clips = drm_plane_get_damage_clips_count(new_plane_state); 5245 5244 clips = drm_plane_get_damage_clips(new_plane_state); 5245 + 5246 + if (num_clips && (!amdgpu_damage_clips || (amdgpu_damage_clips < 0 && 5247 + is_psr_su))) 5248 + goto ffu; 5246 5249 5247 5250 if (!dm_crtc_state->mpo_requested) { 5248 5251 if (!num_clips || num_clips > DC_MAX_DIRTY_RECTS) ··· 6199 6194 if (recalculate_timing) { 6200 6195 freesync_mode = get_highest_refresh_rate_mode(aconnector, false); 6201 6196 drm_mode_copy(&saved_mode, &mode); 6197 + saved_mode.picture_aspect_ratio = mode.picture_aspect_ratio; 6202 6198 drm_mode_copy(&mode, freesync_mode); 6199 + mode.picture_aspect_ratio = saved_mode.picture_aspect_ratio; 6203 6200 } else { 6204 6201 decide_crtc_timing_for_drm_display_mode( 6205 6202 &mode, preferred_mode, scale); ··· 8305 8298 fill_dc_dirty_rects(plane, old_plane_state, 8306 8299 new_plane_state, new_crtc_state, 8307 8300 &bundle->flip_addrs[planes_count], 8301 + acrtc_state->stream->link->psr_settings.psr_version == 8302 + DC_PSR_VERSION_SU_1, 8308 8303 &dirty_rects_changed); 8309 8304 8310 8305 /*
+1 -1
drivers/gpu/drm/amd/display/dc/basics/dce_calcs.c
··· 94 94 const uint32_t s_high = 7; 95 95 const uint32_t dmif_chunk_buff_margin = 1; 96 96 97 - uint32_t max_chunks_fbc_mode; 97 + uint32_t max_chunks_fbc_mode = 0; 98 98 int32_t num_cursor_lines; 99 99 100 100 int32_t i, j, k;
+10 -6
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
··· 1850 1850 /* Vega12 */ 1851 1851 smu_info_v3_2 = GET_IMAGE(struct atom_smu_info_v3_2, 1852 1852 DATA_TABLES(smu_info)); 1853 - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage); 1854 1853 if (!smu_info_v3_2) 1855 1854 return BP_RESULT_BADBIOSTABLE; 1855 + 1856 + DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage); 1856 1857 1857 1858 info->default_engine_clk = smu_info_v3_2->bootup_dcefclk_10khz * 10; 1858 1859 } else if (revision.minor == 3) { 1859 1860 /* Vega20 */ 1860 1861 smu_info_v3_3 = GET_IMAGE(struct atom_smu_info_v3_3, 1861 1862 DATA_TABLES(smu_info)); 1862 - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage); 1863 1863 if (!smu_info_v3_3) 1864 1864 return BP_RESULT_BADBIOSTABLE; 1865 + 1866 + DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage); 1865 1867 1866 1868 info->default_engine_clk = smu_info_v3_3->bootup_dcefclk_10khz * 10; 1867 1869 } ··· 2424 2422 info_v11 = GET_IMAGE(struct atom_integrated_system_info_v1_11, 2425 2423 DATA_TABLES(integratedsysteminfo)); 2426 2424 2427 - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage); 2428 2425 if (info_v11 == NULL) 2429 2426 return BP_RESULT_BADBIOSTABLE; 2427 + 2428 + DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage); 2430 2429 2431 2430 info->gpu_cap_info = 2432 2431 le32_to_cpu(info_v11->gpucapinfo); ··· 2640 2637 2641 2638 info_v2_1 = GET_IMAGE(struct atom_integrated_system_info_v2_1, 2642 2639 DATA_TABLES(integratedsysteminfo)); 2643 - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage); 2644 2640 2645 2641 if (info_v2_1 == NULL) 2646 2642 return BP_RESULT_BADBIOSTABLE; 2643 + 2644 + DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage); 2647 2645 2648 2646 info->gpu_cap_info = 2649 2647 le32_to_cpu(info_v2_1->gpucapinfo); ··· 2803 2799 info_v2_2 = GET_IMAGE(struct atom_integrated_system_info_v2_2, 2804 2800 DATA_TABLES(integratedsysteminfo)); 2805 2801 2806 - DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage); 2807 - 2808 2802 if (info_v2_2 == NULL) 2809 2803 return BP_RESULT_BADBIOSTABLE; 2804 + 2805 + DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage); 2810 2806 2811 2807 info->gpu_cap_info = 2812 2808 le32_to_cpu(info_v2_2->gpucapinfo);
+2
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
··· 546 546 int i; 547 547 548 548 for (i = 0; i < VG_NUM_SOC_VOLTAGE_LEVELS; i++) { 549 + if (i >= VG_NUM_DCFCLK_DPM_LEVELS) 550 + break; 549 551 if (clock_table->SocVoltage[i] == voltage) 550 552 return clock_table->DcfClocks[i]; 551 553 }
+11 -4
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
··· 655 655 struct clk_limit_table_entry def_max = bw_params->clk_table.entries[bw_params->clk_table.num_entries - 1]; 656 656 uint32_t max_fclk = 0, min_pstate = 0, max_dispclk = 0, max_dppclk = 0; 657 657 uint32_t max_pstate = 0, max_dram_speed_mts = 0, min_dram_speed_mts = 0; 658 + uint32_t num_memps, num_fclk, num_dcfclk; 658 659 int i; 659 660 660 661 /* Determine min/max p-state values. */ 661 - for (i = 0; i < clock_table->NumMemPstatesEnabled; i++) { 662 + num_memps = (clock_table->NumMemPstatesEnabled > NUM_MEM_PSTATE_LEVELS) ? NUM_MEM_PSTATE_LEVELS : 663 + clock_table->NumMemPstatesEnabled; 664 + for (i = 0; i < num_memps; i++) { 662 665 uint32_t dram_speed_mts = calc_dram_speed_mts(&clock_table->MemPstateTable[i]); 663 666 664 667 if (is_valid_clock_value(dram_speed_mts) && dram_speed_mts > max_dram_speed_mts) { ··· 673 670 min_dram_speed_mts = max_dram_speed_mts; 674 671 min_pstate = max_pstate; 675 672 676 - for (i = 0; i < clock_table->NumMemPstatesEnabled; i++) { 673 + for (i = 0; i < num_memps; i++) { 677 674 uint32_t dram_speed_mts = calc_dram_speed_mts(&clock_table->MemPstateTable[i]); 678 675 679 676 if (is_valid_clock_value(dram_speed_mts) && dram_speed_mts < min_dram_speed_mts) { ··· 702 699 /* Base the clock table on dcfclk, need at least one entry regardless of pmfw table */ 703 700 ASSERT(clock_table->NumDcfClkLevelsEnabled > 0); 704 701 705 - max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, clock_table->NumFclkLevelsEnabled); 702 + num_fclk = (clock_table->NumFclkLevelsEnabled > NUM_FCLK_DPM_LEVELS) ? NUM_FCLK_DPM_LEVELS : 703 + clock_table->NumFclkLevelsEnabled; 704 + max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, num_fclk); 706 705 707 - for (i = 0; i < clock_table->NumDcfClkLevelsEnabled; i++) { 706 + num_dcfclk = (clock_table->NumFclkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS : 707 + clock_table->NumDcfClkLevelsEnabled; 708 + for (i = 0; i < num_dcfclk; i++) { 708 709 int j; 709 710 710 711 /* First search defaults for the clocks we don't read using closest lower or equal default dcfclk */
+1 -4
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c
··· 56 56 57 57 static enum dc_lut_mode dpp30_get_gamcor_current(struct dpp *dpp_base) 58 58 { 59 - enum dc_lut_mode mode; 59 + enum dc_lut_mode mode = LUT_BYPASS; 60 60 uint32_t state_mode; 61 61 uint32_t lut_mode; 62 62 struct dcn3_dpp *dpp = TO_DCN30_DPP(dpp_base); 63 63 64 64 REG_GET(CM_GAMCOR_CONTROL, CM_GAMCOR_MODE_CURRENT, &state_mode); 65 - 66 - if (state_mode == 0) 67 - mode = LUT_BYPASS; 68 65 69 66 if (state_mode == 2) {//Programmable RAM LUT 70 67 REG_GET(CM_GAMCOR_CONTROL, CM_GAMCOR_SELECT_CURRENT, &lut_mode);
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
··· 2760 2760 struct _vcs_dpi_voltage_scaling_st entry = {0}; 2761 2761 struct clk_limit_table_entry max_clk_data = {0}; 2762 2762 2763 - unsigned int min_dcfclk_mhz = 399, min_fclk_mhz = 599; 2763 + unsigned int min_dcfclk_mhz = 199, min_fclk_mhz = 299; 2764 2764 2765 2765 static const unsigned int num_dcfclk_stas = 5; 2766 2766 unsigned int dcfclk_sta_targets[DC__VOLTAGE_STATES] = {199, 615, 906, 1324, 1564};
+2 -2
drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
··· 211 211 struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu; 212 212 uint32_t otg_inst; 213 213 214 - if (!abm && !tg && !panel_cntl) 214 + if (!abm || !tg || !panel_cntl) 215 215 return; 216 216 217 217 otg_inst = tg->inst; ··· 245 245 struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl; 246 246 uint32_t otg_inst; 247 247 248 - if (!abm && !tg && !panel_cntl) 248 + if (!abm || !tg || !panel_cntl) 249 249 return false; 250 250 251 251 otg_inst = tg->inst;
+1 -1
drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
··· 780 780 .disable_z10 = false, 781 781 .ignore_pg = true, 782 782 .psp_disabled_wa = true, 783 - .ips2_eval_delay_us = 1650, 783 + .ips2_eval_delay_us = 2000, 784 784 .ips2_entry_delay_us = 800, 785 785 .static_screen_wait_frames = 2, 786 786 };
+6
drivers/gpu/drm/drm_buddy.c
··· 539 539 } while (1); 540 540 541 541 list_splice_tail(&allocated, blocks); 542 + 543 + if (total_allocated < size) { 544 + err = -ENOSPC; 545 + goto err_free; 546 + } 547 + 542 548 return 0; 543 549 544 550 err_undo:
+1
drivers/gpu/drm/drm_crtc.c
··· 904 904 connector_set = NULL; 905 905 fb = NULL; 906 906 mode = NULL; 907 + num_connectors = 0; 907 908 908 909 DRM_MODESET_LOCK_ALL_END(dev, ctx, ret); 909 910
+1 -1
drivers/gpu/drm/drm_prime.c
··· 820 820 if (max_segment == 0) 821 821 max_segment = UINT_MAX; 822 822 err = sg_alloc_table_from_pages_segment(sg, pages, nr_pages, 0, 823 - nr_pages << PAGE_SHIFT, 823 + (unsigned long)nr_pages << PAGE_SHIFT, 824 824 max_segment, GFP_KERNEL); 825 825 if (err) { 826 826 kfree(sg);
+3
drivers/gpu/drm/i915/display/intel_dp.c
··· 2355 2355 limits->min_rate = intel_dp_common_rate(intel_dp, 0); 2356 2356 limits->max_rate = intel_dp_max_link_rate(intel_dp); 2357 2357 2358 + /* FIXME 128b/132b SST support missing */ 2359 + limits->max_rate = min(limits->max_rate, 810000); 2360 + 2358 2361 limits->min_lane_count = 1; 2359 2362 limits->max_lane_count = intel_dp_max_lane_count(intel_dp); 2360 2363
+2 -2
drivers/gpu/drm/i915/display/intel_vdsc_regs.h
··· 51 51 #define DSCC_PICTURE_PARAMETER_SET_0 _MMIO(0x6BA00) 52 52 #define _DSCA_PPS_0 0x6B200 53 53 #define _DSCC_PPS_0 0x6BA00 54 - #define DSCA_PPS(pps) _MMIO(_DSCA_PPS_0 + (pps) * 4) 55 - #define DSCC_PPS(pps) _MMIO(_DSCC_PPS_0 + (pps) * 4) 54 + #define DSCA_PPS(pps) _MMIO(_DSCA_PPS_0 + ((pps) < 12 ? (pps) : (pps) + 12) * 4) 55 + #define DSCC_PPS(pps) _MMIO(_DSCC_PPS_0 + ((pps) < 12 ? (pps) : (pps) + 12) * 4) 56 56 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PB 0x78270 57 57 #define _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB 0x78370 58 58 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PC 0x78470
+1 -1
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1287 1287 gpu->ubwc_config.highest_bank_bit = 15; 1288 1288 1289 1289 if (adreno_is_a610(gpu)) { 1290 - gpu->ubwc_config.highest_bank_bit = 14; 1290 + gpu->ubwc_config.highest_bank_bit = 13; 1291 1291 gpu->ubwc_config.min_acc_len = 1; 1292 1292 gpu->ubwc_config.ubwc_mode = 1; 1293 1293 }
+2 -2
drivers/gpu/drm/msm/msm_gem_prime.c
··· 26 26 { 27 27 void *vaddr; 28 28 29 - vaddr = msm_gem_get_vaddr(obj); 29 + vaddr = msm_gem_get_vaddr_locked(obj); 30 30 if (IS_ERR(vaddr)) 31 31 return PTR_ERR(vaddr); 32 32 iosys_map_set_vaddr(map, vaddr); ··· 36 36 37 37 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 38 38 { 39 - msm_gem_put_vaddr(obj); 39 + msm_gem_put_vaddr_locked(obj); 40 40 } 41 41 42 42 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
+5 -6
drivers/gpu/drm/msm/msm_gpu.c
··· 751 751 struct msm_ringbuffer *ring = submit->ring; 752 752 unsigned long flags; 753 753 754 + WARN_ON(!mutex_is_locked(&gpu->lock)); 755 + 754 756 pm_runtime_get_sync(&gpu->pdev->dev); 755 757 756 - mutex_lock(&gpu->lock); 757 - 758 758 msm_gpu_hw_init(gpu); 759 + 760 + submit->seqno = submit->hw_fence->seqno; 759 761 760 762 update_sw_cntrs(gpu); 761 763 ··· 783 781 gpu->funcs->submit(gpu, submit); 784 782 gpu->cur_ctx_seqno = submit->queue->ctx->seqno; 785 783 786 - hangcheck_timer_reset(gpu); 787 - 788 - mutex_unlock(&gpu->lock); 789 - 790 784 pm_runtime_put(&gpu->pdev->dev); 785 + hangcheck_timer_reset(gpu); 791 786 } 792 787 793 788 /*
+29 -3
drivers/gpu/drm/msm/msm_iommu.c
··· 21 21 struct msm_mmu base;
 22 22 struct msm_mmu *parent;
 23 23 struct io_pgtable_ops *pgtbl_ops;
 24 + const struct iommu_flush_ops *tlb;
 25 + struct device *iommu_dev;
 24 26 unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */
 25 27 phys_addr_t ttbr;
 26 28 u32 asid;
 ··· 203 201 
 204 202 static void msm_iommu_tlb_flush_all(void *cookie)
 205 203 {
 204 + struct msm_iommu_pagetable *pagetable = cookie;
 205 + struct adreno_smmu_priv *adreno_smmu;
 206 + 
 207 + if (!pm_runtime_get_if_in_use(pagetable->iommu_dev))
 208 + return;
 209 + 
 210 + adreno_smmu = dev_get_drvdata(pagetable->parent->dev);
 211 + 
 212 + pagetable->tlb->tlb_flush_all((void *)adreno_smmu->cookie);
 213 + 
 214 + pm_runtime_put_autosuspend(pagetable->iommu_dev);
 206 215 }
 207 216 
 208 217 static void msm_iommu_tlb_flush_walk(unsigned long iova, size_t size,
 209 218 size_t granule, void *cookie)
 210 219 {
 220 + struct msm_iommu_pagetable *pagetable = cookie;
 221 + struct adreno_smmu_priv *adreno_smmu;
 222 + 
 223 + if (!pm_runtime_get_if_in_use(pagetable->iommu_dev))
 224 + return;
 225 + 
 226 + adreno_smmu = dev_get_drvdata(pagetable->parent->dev);
 227 + 
 228 + pagetable->tlb->tlb_flush_walk(iova, size, granule, (void *)adreno_smmu->cookie);
 229 + 
 230 + pm_runtime_put_autosuspend(pagetable->iommu_dev);
 211 231 }
 212 232 
 213 233 static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
 ··· 237 213 {
 238 214 }
 239 215 
 240 - static const struct iommu_flush_ops null_tlb_ops = {
 216 + static const struct iommu_flush_ops tlb_ops = {
 241 217 .tlb_flush_all = msm_iommu_tlb_flush_all,
 242 218 .tlb_flush_walk = msm_iommu_tlb_flush_walk,
 243 219 .tlb_add_page = msm_iommu_tlb_add_page,
 ··· 278 254 
 279 255 /* The incoming cfg will have the TTBR1 quirk enabled */
 280 256 ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
 281 - ttbr0_cfg.tlb = &null_tlb_ops;
 257 + ttbr0_cfg.tlb = &tlb_ops;
 282 258 
 283 259 pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
 284 - &ttbr0_cfg, iommu->domain);
 260 + &ttbr0_cfg, pagetable);
 285 261 
 286 262 if (!pagetable->pgtbl_ops) {
 287 263 kfree(pagetable);
 ··· 303 279 
 304 280 /* Needed later for TLB flush */
 305 281 pagetable->parent = parent;
 282 + pagetable->tlb = ttbr1_cfg->tlb;
 283 + pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
 306 284 pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
 307 285 pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;
 308 286 
+5 -2
drivers/gpu/drm/msm/msm_ringbuffer.c
··· 21 21 22 22 msm_fence_init(submit->hw_fence, fctx); 23 23 24 - submit->seqno = submit->hw_fence->seqno; 25 - 26 24 mutex_lock(&priv->lru.lock); 27 25 28 26 for (i = 0; i < submit->nr_bos; i++) { ··· 33 35 34 36 mutex_unlock(&priv->lru.lock); 35 37 38 + /* TODO move submit path over to using a per-ring lock.. */ 39 + mutex_lock(&gpu->lock); 40 + 36 41 msm_gpu_submit(gpu, submit); 42 + 43 + mutex_unlock(&gpu->lock); 37 44 38 45 return dma_fence_get(submit->hw_fence); 39 46 }
+14 -6
drivers/gpu/drm/nouveau/nouveau_abi16.c
··· 128 128 struct nouveau_abi16_ntfy *ntfy, *temp; 129 129 130 130 /* Cancel all jobs from the entity's queue. */ 131 - drm_sched_entity_fini(&chan->sched.entity); 131 + if (chan->sched) 132 + drm_sched_entity_fini(&chan->sched->entity); 132 133 133 134 if (chan->chan) 134 135 nouveau_channel_idle(chan->chan); 135 136 136 - nouveau_sched_fini(&chan->sched); 137 + if (chan->sched) 138 + nouveau_sched_destroy(&chan->sched); 137 139 138 140 /* cleanup notifier state */ 139 141 list_for_each_entry_safe(ntfy, temp, &chan->notifiers, head) { ··· 339 337 if (ret) 340 338 goto done; 341 339 342 - ret = nouveau_sched_init(&chan->sched, drm, drm->sched_wq, 343 - chan->chan->dma.ib_max); 344 - if (ret) 345 - goto done; 340 + /* If we're not using the VM_BIND uAPI, we don't need a scheduler. 341 + * 342 + * The client lock is already acquired by nouveau_abi16_get(). 343 + */ 344 + if (nouveau_cli_uvmm(cli)) { 345 + ret = nouveau_sched_create(&chan->sched, drm, drm->sched_wq, 346 + chan->chan->dma.ib_max); 347 + if (ret) 348 + goto done; 349 + } 346 350 347 351 init->channel = chan->chan->chid; 348 352
+1 -1
drivers/gpu/drm/nouveau/nouveau_abi16.h
··· 26 26 struct nouveau_bo *ntfy; 27 27 struct nouveau_vma *ntfy_vma; 28 28 struct nvkm_mm heap; 29 - struct nouveau_sched sched; 29 + struct nouveau_sched *sched; 30 30 }; 31 31 32 32 struct nouveau_abi16 {
+4 -3
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 201 201 WARN_ON(!list_empty(&cli->worker)); 202 202 203 203 usif_client_fini(cli); 204 - nouveau_sched_fini(&cli->sched); 204 + if (cli->sched) 205 + nouveau_sched_destroy(&cli->sched); 205 206 if (uvmm) 206 207 nouveau_uvmm_fini(uvmm); 207 208 nouveau_vmm_fini(&cli->svm); ··· 312 311 cli->mem = &mems[ret]; 313 312 314 313 /* Don't pass in the (shared) sched_wq in order to let 315 - * nouveau_sched_init() create a dedicated one for VM_BIND jobs. 314 + * nouveau_sched_create() create a dedicated one for VM_BIND jobs. 316 315 * 317 316 * This is required to ensure that for VM_BIND jobs free_job() work and 318 317 * run_job() work can always run concurrently and hence, free_job() work ··· 321 320 * locks which indirectly or directly are held for allocations 322 321 * elsewhere. 323 322 */ 324 - ret = nouveau_sched_init(&cli->sched, drm, NULL, 1); 323 + ret = nouveau_sched_create(&cli->sched, drm, NULL, 1); 325 324 if (ret) 326 325 goto done; 327 326
+1 -1
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 98 98 bool disabled; 99 99 } uvmm; 100 100 101 - struct nouveau_sched sched; 101 + struct nouveau_sched *sched; 102 102 103 103 const struct nvif_mclass *mem; 104 104
+1 -1
drivers/gpu/drm/nouveau/nouveau_exec.c
··· 389 389 if (ret) 390 390 goto out; 391 391 392 - args.sched = &chan16->sched; 392 + args.sched = chan16->sched; 393 393 args.file_priv = file_priv; 394 394 args.chan = chan; 395 395
+36 -2
drivers/gpu/drm/nouveau/nouveau_sched.c
··· 398 398 .free_job = nouveau_sched_free_job, 399 399 }; 400 400 401 - int 401 + static int 402 402 nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm, 403 403 struct workqueue_struct *wq, u32 credit_limit) 404 404 { ··· 453 453 return ret; 454 454 } 455 455 456 - void 456 + int 457 + nouveau_sched_create(struct nouveau_sched **psched, struct nouveau_drm *drm, 458 + struct workqueue_struct *wq, u32 credit_limit) 459 + { 460 + struct nouveau_sched *sched; 461 + int ret; 462 + 463 + sched = kzalloc(sizeof(*sched), GFP_KERNEL); 464 + if (!sched) 465 + return -ENOMEM; 466 + 467 + ret = nouveau_sched_init(sched, drm, wq, credit_limit); 468 + if (ret) { 469 + kfree(sched); 470 + return ret; 471 + } 472 + 473 + *psched = sched; 474 + 475 + return 0; 476 + } 477 + 478 + 479 + static void 457 480 nouveau_sched_fini(struct nouveau_sched *sched) 458 481 { 459 482 struct drm_gpu_scheduler *drm_sched = &sched->base; ··· 493 470 */ 494 471 if (sched->wq) 495 472 destroy_workqueue(sched->wq); 473 + } 474 + 475 + void 476 + nouveau_sched_destroy(struct nouveau_sched **psched) 477 + { 478 + struct nouveau_sched *sched = *psched; 479 + 480 + nouveau_sched_fini(sched); 481 + kfree(sched); 482 + 483 + *psched = NULL; 496 484 }
+3 -3
drivers/gpu/drm/nouveau/nouveau_sched.h
··· 111 111 } job; 112 112 }; 113 113 114 - int nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm, 115 - struct workqueue_struct *wq, u32 credit_limit); 116 - void nouveau_sched_fini(struct nouveau_sched *sched); 114 + int nouveau_sched_create(struct nouveau_sched **psched, struct nouveau_drm *drm, 115 + struct workqueue_struct *wq, u32 credit_limit); 116 + void nouveau_sched_destroy(struct nouveau_sched **psched); 117 117 118 118 #endif
+1 -1
drivers/gpu/drm/nouveau/nouveau_svm.c
··· 1011 1011 if (ret) 1012 1012 return ret; 1013 1013 1014 - buffer->fault = kvcalloc(sizeof(*buffer->fault), buffer->entries, GFP_KERNEL); 1014 + buffer->fault = kvcalloc(buffer->entries, sizeof(*buffer->fault), GFP_KERNEL); 1015 1015 if (!buffer->fault) 1016 1016 return -ENOMEM; 1017 1017
+1 -1
drivers/gpu/drm/nouveau/nouveau_uvmm.c
··· 1740 1740 if (ret) 1741 1741 return ret; 1742 1742 1743 - args.sched = &cli->sched; 1743 + args.sched = cli->sched; 1744 1744 args.file_priv = file_priv; 1745 1745 1746 1746 ret = nouveau_uvmm_vm_bind(&args);
+3 -1
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 1985 1985 clock = vop2_set_intf_mux(vp, rkencoder->crtc_endpoint_id, polflags); 1986 1986 } 1987 1987 1988 - if (!clock) 1988 + if (!clock) { 1989 + vop2_unlock(vop2); 1989 1990 return; 1991 + } 1990 1992 1991 1993 if (vcstate->output_mode == ROCKCHIP_OUT_MODE_AAAA && 1992 1994 !(vp_data->feature & VOP2_VP_FEATURE_OUTPUT_10BIT))
+88
drivers/gpu/drm/tests/drm_buddy_test.c
··· 8 8 
 9 9 #include <linux/prime_numbers.h>
 10 10 #include <linux/sched/signal.h>
 11 + #include <linux/sizes.h>
 11 12 
 12 13 #include <drm/drm_buddy.h>
 13 14 
 ··· 17 16 static inline u64 get_size(int order, u64 chunk_size)
 18 17 {
 19 18 return (1 << order) * chunk_size;
 19 + }
 20 + 
 21 + static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 22 + {
 23 + const unsigned long ps = SZ_4K, mm_size = 16 * 3 * SZ_4K;
 24 + unsigned long i, n_pages, total;
 25 + struct drm_buddy_block *block;
 26 + struct drm_buddy mm;
 27 + LIST_HEAD(left);
 28 + LIST_HEAD(middle);
 29 + LIST_HEAD(right);
 30 + LIST_HEAD(allocated);
 31 + 
 32 + KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
 33 + 
 34 + /*
 35 + * Idea is to fragment the address space by alternating block
 36 + * allocations between three different lists; one for left, middle and
 37 + * right. We can then free a list to simulate fragmentation. In
 38 + * particular we want to exercise the DRM_BUDDY_CONTIGUOUS_ALLOCATION,
 39 + * including the try_harder path.
 40 + */
 41 + 
 42 + i = 0;
 43 + n_pages = mm_size / ps;
 44 + do {
 45 + struct list_head *list;
 46 + int slot = i % 3;
 47 + 
 48 + if (slot == 0)
 49 + list = &left;
 50 + else if (slot == 1)
 51 + list = &middle;
 52 + else
 53 + list = &right;
 54 + KUNIT_ASSERT_FALSE_MSG(test,
 55 + drm_buddy_alloc_blocks(&mm, 0, mm_size,
 56 + ps, ps, list, 0),
 57 + "buddy_alloc hit an error size=%d\n",
 58 + ps);
 59 + } while (++i < n_pages);
 60 + 
 61 + KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 62 + 3 * ps, ps, &allocated,
 63 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 64 + "buddy_alloc didn't error size=%d\n", 3 * ps);
 65 + 
 66 + drm_buddy_free_list(&mm, &middle);
 67 + KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 68 + 3 * ps, ps, &allocated,
 69 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 70 + "buddy_alloc didn't error size=%llu\n", 3 * ps);
 71 + KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 72 + 2 * ps, ps, &allocated,
 73 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 74 + "buddy_alloc didn't error size=%llu\n", 2 * ps);
 75 + 
 76 + drm_buddy_free_list(&mm, &right);
 77 + KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 78 + 3 * ps, ps, &allocated,
 79 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 80 + "buddy_alloc didn't error size=%llu\n", 3 * ps);
 81 + /*
 82 + * At this point we should have enough contiguous space for 2 blocks,
 83 + * however they are never buddies (since we freed middle and right) so
 84 + * will require the try_harder logic to find them.
 85 + */
 86 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 87 + 2 * ps, ps, &allocated,
 88 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 89 + "buddy_alloc hit an error size=%d\n", 2 * ps);
 90 + 
 91 + drm_buddy_free_list(&mm, &left);
 92 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
 93 + 3 * ps, ps, &allocated,
 94 + DRM_BUDDY_CONTIGUOUS_ALLOCATION),
 95 + "buddy_alloc hit an error size=%d\n", 3 * ps);
 96 + 
 97 + total = 0;
 98 + list_for_each_entry(block, &allocated, link)
 99 + total += drm_buddy_block_size(&mm, block);
 100 + 
 101 + KUNIT_ASSERT_EQ(test, total, ps * 2 + ps * 3);
 102 + 
 103 + drm_buddy_free_list(&mm, &allocated);
 104 + drm_buddy_fini(&mm);
 20 105 }
 21 106 
 22 107 static void drm_test_buddy_alloc_pathological(struct kunit *test)
 ··· 367 280 KUNIT_CASE(drm_test_buddy_alloc_optimistic),
 368 281 KUNIT_CASE(drm_test_buddy_alloc_pessimistic),
 369 282 KUNIT_CASE(drm_test_buddy_alloc_pathological),
 283 + KUNIT_CASE(drm_test_buddy_alloc_contiguous),
 370 284 {}
 371 285 
 372 286 
+1 -1
drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_object.h
··· 10 10 11 11 #include "xe_bo.h" 12 12 13 - #define i915_gem_object_is_shmem(obj) ((obj)->flags & XE_BO_CREATE_SYSTEM_BIT) 13 + #define i915_gem_object_is_shmem(obj) (0) /* We don't use shmem */ 14 14 15 15 static inline dma_addr_t i915_gem_object_get_dma_address(const struct xe_bo *bo, pgoff_t n) 16 16 {
+25 -14
drivers/gpu/drm/xe/xe_pt.c
··· 20 20 
 21 21 struct xe_pt_dir {
 22 22 struct xe_pt pt;
 23 - /** @dir: Directory structure for the xe_pt_walk functionality */
 24 - struct xe_ptw_dir dir;
 23 + /** @children: Array of page-table child nodes */
 24 + struct xe_ptw *children[XE_PDES];
 25 25 };
 26 26 
 27 27 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
 ··· 44 44 
 45 45 static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index)
 46 46 {
 47 - return container_of(pt_dir->dir.entries[index], struct xe_pt, base);
 47 + return container_of(pt_dir->children[index], struct xe_pt, base);
 48 48 }
 49 49 
 50 50 static u64 __xe_pt_empty_pte(struct xe_tile *tile, struct xe_vm *vm,
 ··· 63 63 
 64 64 return vm->pt_ops->pte_encode_addr(xe, 0, pat_index, level, IS_DGFX(xe), 0) |
 65 65 XE_PTE_NULL;
 66 + }
 67 + 
 68 + static void xe_pt_free(struct xe_pt *pt)
 69 + {
 70 + if (pt->level)
 71 + kfree(as_xe_pt_dir(pt));
 72 + else
 73 + kfree(pt);
 66 74 }
 67 75 
 68 76 /**
 ··· 93 85 {
 94 86 struct xe_pt *pt;
 95 87 struct xe_bo *bo;
 96 - size_t size;
 97 88 int err;
 98 89 
 99 - size = !level ? sizeof(struct xe_pt) : sizeof(struct xe_pt_dir) +
 100 - XE_PDES * sizeof(struct xe_ptw *);
 101 - pt = kzalloc(size, GFP_KERNEL);
 90 + if (level) {
 91 + struct xe_pt_dir *dir = kzalloc(sizeof(*dir), GFP_KERNEL);
 92 + 
 93 + pt = (dir) ? &dir->pt : NULL;
 94 + } else {
 95 + pt = kzalloc(sizeof(*pt), GFP_KERNEL);
 96 + }
 102 97 if (!pt)
 103 98 return ERR_PTR(-ENOMEM);
 104 99 
 100 + pt->level = level;
 105 101 bo = xe_bo_create_pin_map(vm->xe, tile, vm, SZ_4K,
 106 102 ttm_bo_type_kernel,
 107 103 XE_BO_CREATE_VRAM_IF_DGFX(tile) |
 ··· 118 106 goto err_kfree;
 119 107 }
 120 108 pt->bo = bo;
 121 - pt->level = level;
 122 - pt->base.dir = level ? &as_xe_pt_dir(pt)->dir : NULL;
 109 + pt->base.children = level ? as_xe_pt_dir(pt)->children : NULL;
 123 110 
 124 111 if (vm->xef)
 125 112 xe_drm_client_add_bo(vm->xef->client, pt->bo);
 ··· 127 116 return pt;
 128 117 
 129 118 err_kfree:
 130 - kfree(pt);
 119 + xe_pt_free(pt);
 131 120 return ERR_PTR(err);
 132 121 }
 133 122 
 ··· 204 193 deferred);
 205 194 }
 206 195 }
 207 - kfree(pt);
 196 + xe_pt_free(pt);
 208 197 }
 209 198 
 210 199 /**
 ··· 369 358 struct iosys_map *map = &parent->bo->vmap;
 370 359 
 371 360 if (unlikely(xe_child))
 372 - parent->base.dir->entries[offset] = &xe_child->base;
 361 + parent->base.children[offset] = &xe_child->base;
 373 362 
 374 363 xe_pt_write(xe_walk->vm->xe, map, offset, pte);
 375 364 parent->num_live++;
 ··· 864 853 xe_pt_destroy(xe_pt_entry(pt_dir, j_),
 865 854 xe_vma_vm(vma)->flags, deferred);
 866 855 
 867 - pt_dir->dir.entries[j_] = &newpte->base;
 856 + pt_dir->children[j_] = &newpte->base;
 868 857 }
 869 858 kfree(entries[i].pt_entries);
 870 859 }
 ··· 1518 1507 xe_pt_destroy(xe_pt_entry(pt_dir, i),
 1519 1508 xe_vma_vm(vma)->flags, deferred);
 1520 1509 
 1521 - pt_dir->dir.entries[i] = NULL;
 1510 + pt_dir->children[i] = NULL;
 1522 1511 }
 1523 1512 }
 1524 1513 }
+1 -1
drivers/gpu/drm/xe/xe_pt_walk.c
··· 74 74 u64 addr, u64 end, struct xe_pt_walk *walk) 75 75 { 76 76 pgoff_t offset = xe_pt_offset(addr, level, walk); 77 - struct xe_ptw **entries = parent->dir ? parent->dir->entries : NULL; 77 + struct xe_ptw **entries = parent->children ? parent->children : NULL; 78 78 const struct xe_pt_walk_ops *ops = walk->ops; 79 79 enum page_walk_action action; 80 80 struct xe_ptw *child;
+3 -16
drivers/gpu/drm/xe/xe_pt_walk.h
··· 8 8 #include <linux/pagewalk.h> 9 9 #include <linux/types.h> 10 10 11 - struct xe_ptw_dir; 12 - 13 11 /** 14 12 * struct xe_ptw - base class for driver pagetable subclassing. 15 - * @dir: Pointer to an array of children if any. 13 + * @children: Pointer to an array of children if any. 16 14 * 17 15 * Drivers could subclass this, and if it's a page-directory, typically 18 - * embed the xe_ptw_dir::entries array in the same allocation. 16 + * embed an array of xe_ptw pointers. 19 17 */ 20 18 struct xe_ptw { 21 - struct xe_ptw_dir *dir; 22 - }; 23 - 24 - /** 25 - * struct xe_ptw_dir - page directory structure 26 - * @entries: Array holding page directory children. 27 - * 28 - * It is the responsibility of the user to ensure @entries is 29 - * correctly sized. 30 - */ 31 - struct xe_ptw_dir { 32 - struct xe_ptw *entries[0]; 19 + struct xe_ptw **children; 33 20 }; 34 21 35 22 /**
+6 -1
drivers/gpu/drm/xe/xe_range_fence.c
··· 151 151 return xe_range_fence_tree_iter_next(rfence, start, last); 152 152 } 153 153 154 + static void xe_range_fence_free(struct xe_range_fence *rfence) 155 + { 156 + kfree(rfence); 157 + } 158 + 154 159 const struct xe_range_fence_ops xe_range_fence_kfree_ops = { 155 - .free = (void (*)(struct xe_range_fence *rfence)) kfree, 160 + .free = xe_range_fence_free, 156 161 };
+10 -3
drivers/gpu/drm/xe/xe_vm.c
··· 995 995 int err; 996 996 997 997 XE_WARN_ON(!vm); 998 - err = drm_exec_prepare_obj(exec, xe_vm_obj(vm), num_shared); 999 - if (!err && bo && !bo->vm) 1000 - err = drm_exec_prepare_obj(exec, &bo->ttm.base, num_shared); 998 + if (num_shared) 999 + err = drm_exec_prepare_obj(exec, xe_vm_obj(vm), num_shared); 1000 + else 1001 + err = drm_exec_lock_obj(exec, xe_vm_obj(vm)); 1002 + if (!err && bo && !bo->vm) { 1003 + if (num_shared) 1004 + err = drm_exec_prepare_obj(exec, &bo->ttm.base, num_shared); 1005 + else 1006 + err = drm_exec_lock_obj(exec, &bo->ttm.base); 1007 + } 1001 1008 1002 1009 return err; 1003 1010 }
+2 -4
drivers/i2c/busses/Makefile
··· 90 90 obj-$(CONFIG_I2C_OCORES) += i2c-ocores.o 91 91 obj-$(CONFIG_I2C_OMAP) += i2c-omap.o 92 92 obj-$(CONFIG_I2C_OWL) += i2c-owl.o 93 - i2c-pasemi-objs := i2c-pasemi-core.o i2c-pasemi-pci.o 94 - obj-$(CONFIG_I2C_PASEMI) += i2c-pasemi.o 95 - i2c-apple-objs := i2c-pasemi-core.o i2c-pasemi-platform.o 96 - obj-$(CONFIG_I2C_APPLE) += i2c-apple.o 93 + obj-$(CONFIG_I2C_PASEMI) += i2c-pasemi-core.o i2c-pasemi-pci.o 94 + obj-$(CONFIG_I2C_APPLE) += i2c-pasemi-core.o i2c-pasemi-platform.o 97 95 obj-$(CONFIG_I2C_PCA_PLATFORM) += i2c-pca-platform.o 98 96 obj-$(CONFIG_I2C_PNX) += i2c-pnx.o 99 97 obj-$(CONFIG_I2C_PXA) += i2c-pxa.o
+2 -2
drivers/i2c/busses/i2c-i801.c
··· 498 498 /* Set block buffer mode */ 499 499 outb_p(inb_p(SMBAUXCTL(priv)) | SMBAUXCTL_E32B, SMBAUXCTL(priv)); 500 500 501 - inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */ 502 - 503 501 if (read_write == I2C_SMBUS_WRITE) { 504 502 len = data->block[0]; 505 503 outb_p(len, SMBHSTDAT0(priv)); 504 + inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */ 506 505 for (i = 0; i < len; i++) 507 506 outb_p(data->block[i+1], SMBBLKDAT(priv)); 508 507 } ··· 519 520 } 520 521 521 522 data->block[0] = len; 523 + inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */ 522 524 for (i = 0; i < len; i++) 523 525 data->block[i + 1] = inb_p(SMBBLKDAT(priv)); 524 526 }
+6
drivers/i2c/busses/i2c-pasemi-core.c
··· 369 369 370 370 return 0; 371 371 } 372 + EXPORT_SYMBOL_GPL(pasemi_i2c_common_probe); 372 373 373 374 irqreturn_t pasemi_irq_handler(int irq, void *dev_id) 374 375 { ··· 379 378 complete(&smbus->irq_completion); 380 379 return IRQ_HANDLED; 381 380 } 381 + EXPORT_SYMBOL_GPL(pasemi_irq_handler); 382 + 383 + MODULE_LICENSE("GPL"); 384 + MODULE_AUTHOR("Olof Johansson <olof@lixom.net>"); 385 + MODULE_DESCRIPTION("PA Semi PWRficient SMBus driver");
+8 -8
drivers/i2c/busses/i2c-qcom-geni.c
··· 613 613 614 614 peripheral.addr = msgs[i].addr; 615 615 616 - if (msgs[i].flags & I2C_M_RD) { 617 - ret = geni_i2c_gpi(gi2c, &msgs[i], &config, 618 - &rx_addr, &rx_buf, I2C_READ, gi2c->rx_c); 619 - if (ret) 620 - goto err; 621 - } 622 - 623 616 ret = geni_i2c_gpi(gi2c, &msgs[i], &config, 624 617 &tx_addr, &tx_buf, I2C_WRITE, gi2c->tx_c); 625 618 if (ret) 626 619 goto err; 627 620 628 - if (msgs[i].flags & I2C_M_RD) 621 + if (msgs[i].flags & I2C_M_RD) { 622 + ret = geni_i2c_gpi(gi2c, &msgs[i], &config, 623 + &rx_addr, &rx_buf, I2C_READ, gi2c->rx_c); 624 + if (ret) 625 + goto err; 626 + 629 627 dma_async_issue_pending(gi2c->rx_c); 628 + } 629 + 630 630 dma_async_issue_pending(gi2c->tx_c); 631 631 632 632 timeout = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+2
drivers/iio/accel/Kconfig
··· 219 219 220 220 config BMA400_I2C 221 221 tristate 222 + select REGMAP_I2C 222 223 depends on BMA400 223 224 224 225 config BMA400_SPI 225 226 tristate 227 + select REGMAP_SPI 226 228 depends on BMA400 227 229 228 230 config BMC150_ACCEL
+8 -4
drivers/iio/adc/ad4130.c
··· 1821 1821 { 1822 1822 struct device *dev = &st->spi->dev; 1823 1823 struct device_node *of_node = dev_of_node(dev); 1824 - struct clk_init_data init; 1824 + struct clk_init_data init = {}; 1825 1825 const char *clk_name; 1826 1826 int ret; 1827 1827 ··· 1891 1891 return ret; 1892 1892 1893 1893 /* 1894 - * Configure all GPIOs for output. If configured, the interrupt function 1895 - * of P2 takes priority over the GPIO out function. 1894 + * Configure unused GPIOs for output. If configured, the interrupt 1895 + * function of P2 takes priority over the GPIO out function. 1896 1896 */ 1897 - val = AD4130_IO_CONTROL_GPIO_CTRL_MASK; 1897 + val = 0; 1898 + for (i = 0; i < AD4130_MAX_GPIOS; i++) 1899 + if (st->pins_fn[i + AD4130_AIN2_P1] == AD4130_PIN_FN_NONE) 1900 + val |= FIELD_PREP(AD4130_IO_CONTROL_GPIO_CTRL_MASK, BIT(i)); 1901 + 1898 1902 val |= FIELD_PREP(AD4130_IO_CONTROL_INT_PIN_SEL_MASK, st->int_pin_sel); 1899 1903 1900 1904 ret = regmap_write(st->regmap, AD4130_IO_CONTROL_REG, val);
+1 -1
drivers/iio/adc/ad7091r8.c
··· 195 195 st->reset_gpio = devm_gpiod_get_optional(st->dev, "reset", 196 196 GPIOD_OUT_HIGH); 197 197 if (IS_ERR(st->reset_gpio)) 198 - return dev_err_probe(st->dev, PTR_ERR(st->convst_gpio), 198 + return dev_err_probe(st->dev, PTR_ERR(st->reset_gpio), 199 199 "Error on requesting reset GPIO\n"); 200 200 201 201 if (st->reset_gpio) {
+12
drivers/iio/humidity/Kconfig
··· 48 48 To compile this driver as a module, choose M here: the module 49 49 will be called hdc2010. 50 50 51 + config HDC3020 52 + tristate "TI HDC3020 relative humidity and temperature sensor" 53 + depends on I2C 54 + select CRC8 55 + help 56 + Say yes here to build support for the Texas Instruments 57 + HDC3020, HDC3021 and HDC3022 relative humidity and temperature 58 + sensors. 59 + 60 + To compile this driver as a module, choose M here: the module 61 + will be called hdc3020. 62 + 51 63 config HID_SENSOR_HUMIDITY 52 64 tristate "HID Environmental humidity sensor" 53 65 depends on HID_SENSOR_HUB
+1
drivers/iio/humidity/Makefile
··· 7 7 obj-$(CONFIG_DHT11) += dht11.o 8 8 obj-$(CONFIG_HDC100X) += hdc100x.o 9 9 obj-$(CONFIG_HDC2010) += hdc2010.o 10 + obj-$(CONFIG_HDC3020) += hdc3020.o 10 11 obj-$(CONFIG_HID_SENSOR_HUMIDITY) += hid-sensor-humidity.o 11 12 12 13 hts221-y := hts221_core.o \
+1 -1
drivers/iio/humidity/hdc3020.c
··· 322 322 if (chan->type != IIO_TEMP) 323 323 return -EINVAL; 324 324 325 - *val = 16852; 325 + *val = -16852; 326 326 return IIO_VAL_INT; 327 327 328 328 default:
+1
drivers/iio/imu/bno055/Kconfig
··· 8 8 config BOSCH_BNO055_SERIAL 9 9 tristate "Bosch BNO055 attached via UART" 10 10 depends on SERIAL_DEV_BUS 11 + select REGMAP 11 12 select BOSCH_BNO055 12 13 help 13 14 Enable this to support Bosch BNO055 IMUs attached via UART.
+4 -1
drivers/iio/industrialio-core.c
··· 1584 1584 ret = iio_device_register_sysfs_group(indio_dev, 1585 1585 &iio_dev_opaque->chan_attr_group); 1586 1586 if (ret) 1587 - goto error_clear_attrs; 1587 + goto error_free_chan_attrs; 1588 1588 1589 1589 return 0; 1590 1590 1591 + error_free_chan_attrs: 1592 + kfree(iio_dev_opaque->chan_attr_group.attrs); 1593 + iio_dev_opaque->chan_attr_group.attrs = NULL; 1591 1594 error_clear_attrs: 1592 1595 iio_free_chan_devattr_list(&iio_dev_opaque->channel_attr_list); 1593 1596
+1
drivers/iio/light/hid-sensor-als.c
··· 226 226 case HID_USAGE_SENSOR_TIME_TIMESTAMP: 227 227 als_state->timestamp = hid_sensor_convert_timestamp(&als_state->common_attributes, 228 228 *(s64 *)raw_data); 229 + ret = 0; 229 230 break; 230 231 default: 231 232 break;
+8 -2
drivers/iio/magnetometer/rm3100-core.c
··· 530 530 struct rm3100_data *data; 531 531 unsigned int tmp; 532 532 int ret; 533 + int samp_rate_index; 533 534 534 535 indio_dev = devm_iio_device_alloc(dev, sizeof(*data)); 535 536 if (!indio_dev) ··· 587 586 ret = regmap_read(regmap, RM3100_REG_TMRC, &tmp); 588 587 if (ret < 0) 589 588 return ret; 589 + 590 + samp_rate_index = tmp - RM3100_TMRC_OFFSET; 591 + if (samp_rate_index < 0 || samp_rate_index >= RM3100_SAMP_NUM) { 592 + dev_err(dev, "The value read from RM3100_REG_TMRC is invalid!\n"); 593 + return -EINVAL; 594 + } 590 595 /* Initializing max wait time, which is double conversion time. */ 591 - data->conversion_time = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][2] 592 - * 2; 596 + data->conversion_time = rm3100_samp_rates[samp_rate_index][2] * 2; 593 597 594 598 /* Cycle count values may not be what we want. */ 595 599 if ((tmp - RM3100_TMRC_OFFSET) == 0)
+1
drivers/iio/pressure/bmp280-spi.c
··· 87 87 MODULE_DEVICE_TABLE(of, bmp280_of_spi_match); 88 88 89 89 static const struct spi_device_id bmp280_spi_id[] = { 90 + { "bmp085", (kernel_ulong_t)&bmp180_chip_info }, 90 91 { "bmp180", (kernel_ulong_t)&bmp180_chip_info }, 91 92 { "bmp181", (kernel_ulong_t)&bmp180_chip_info }, 92 93 { "bmp280", (kernel_ulong_t)&bmp280_chip_info },
+28 -15
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 401 401 struct bnxt_re_fence_data *fence = &pd->fence;
 402 402 struct ib_mr *ib_mr = &fence->mr->ib_mr;
 403 403 struct bnxt_qplib_swqe *wqe = &fence->bind_wqe;
 404 + struct bnxt_re_dev *rdev = pd->rdev;
 405 + 
 406 + if (bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
 407 + return;
 404 408 
 405 409 memset(wqe, 0, sizeof(*wqe));
 406 410 wqe->type = BNXT_QPLIB_SWQE_TYPE_BIND_MW;
 ··· 459 455 struct device *dev = &rdev->en_dev->pdev->dev;
 460 456 struct bnxt_re_mr *mr = fence->mr;
 461 457 
 458 + if (bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
 459 + return;
 460 + 
 462 461 if (fence->mw) {
 463 462 bnxt_re_dealloc_mw(fence->mw);
 464 463 fence->mw = NULL;
 ··· 492 485 dma_addr_t dma_addr = 0;
 493 486 struct ib_mw *mw;
 494 487 int rc;
 488 + 
 489 + if (bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
 490 + return 0;
 495 491 
 496 492 dma_addr = dma_map_single(dev, fence->va, BNXT_RE_FENCE_BYTES,
 497 493 DMA_BIDIRECTIONAL);
 ··· 1827 1817 switch (srq_attr_mask) {
 1828 1818 case IB_SRQ_MAX_WR:
 1829 1819 /* SRQ resize is not supported */
 1830 - break;
 1820 + return -EINVAL;
 1831 1821 case IB_SRQ_LIMIT:
 1832 1822 /* Change the SRQ threshold */
 1833 1823 if (srq_attr->srq_limit > srq->qplib_srq.max_wqe)
 ··· 1842 1832 /* On success, update the shadow */
 1843 1833 srq->srq_limit = srq_attr->srq_limit;
 1844 1834 /* No need to Build and send response back to udata */
 1845 - break;
 1835 + return 0;
 1846 1836 default:
 1847 1837 ibdev_err(&rdev->ibdev,
 1848 1838 "Unsupported srq_attr_mask 0x%x", srq_attr_mask);
 1849 1839 return -EINVAL;
 1850 1840 }
 1851 - return 0;
 1852 1841 }
 1853 1842 
 1854 1843 int bnxt_re_query_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr)
 ··· 2565 2556 wqe->type = BNXT_QPLIB_SWQE_TYPE_LOCAL_INV;
 2566 2557 wqe->local_inv.inv_l_key = wr->ex.invalidate_rkey;
 2567 2558 
 2568 - /* Need unconditional fence for local invalidate
 2569 - * opcode to work as expected.
 2570 - */
 2571 - wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
 2572 - 
 2573 2559 if (wr->send_flags & IB_SEND_SIGNALED)
 2574 2560 wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
 2575 2561 if (wr->send_flags & IB_SEND_SOLICITED)
 ··· 2586 2582 wqe->frmr.page_list_len = mr->npages;
 2587 2583 wqe->frmr.levels = qplib_frpl->hwq.level;
 2588 2584 wqe->type = BNXT_QPLIB_SWQE_TYPE_REG_MR;
 2589 - 
 2590 - /* Need unconditional fence for reg_mr
 2591 - * opcode to function as expected.
 2592 - */
 2593 - 
 2594 - wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
 2595 2585 
 2596 2586 if (wr->wr.send_flags & IB_SEND_SIGNALED)
 2597 2587 wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_SIGNAL_COMP;
 ··· 2717 2719 return rc;
 2718 2720 }
 2719 2721 
 2722 + static void bnxt_re_legacy_set_uc_fence(struct bnxt_qplib_swqe *wqe)
 2723 + {
 2724 + /* Need unconditional fence for non-wire memory opcode
 2725 + * to work as expected.
 2726 + */
 2727 + if (wqe->type == BNXT_QPLIB_SWQE_TYPE_LOCAL_INV ||
 2728 + wqe->type == BNXT_QPLIB_SWQE_TYPE_FAST_REG_MR ||
 2729 + wqe->type == BNXT_QPLIB_SWQE_TYPE_REG_MR ||
 2730 + wqe->type == BNXT_QPLIB_SWQE_TYPE_BIND_MW)
 2731 + wqe->flags |= BNXT_QPLIB_SWQE_FLAGS_UC_FENCE;
 2732 + }
 2733 + 
 2720 2734 int bnxt_re_post_send(struct ib_qp *ib_qp, const struct ib_send_wr *wr,
 2721 2735 const struct ib_send_wr **bad_wr)
 2722 2736 {
 ··· 2808 2798 rc = -EINVAL;
 2809 2799 goto bad;
 2810 2800 }
 2811 - if (!rc)
 2801 + if (!rc) {
 2802 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
 2803 + bnxt_re_legacy_set_uc_fence(&wqe);
 2812 2804 rc = bnxt_qplib_post_send(&qp->qplib_qp, &wqe);
 2805 + }
 2813 2806 bad:
 2814 2807 if (rc) {
 2815 2808 ibdev_err(&qp->rdev->ibdev,
-3
drivers/infiniband/hw/bnxt_re/main.c
··· 280 280 281 281 static void bnxt_re_vf_res_config(struct bnxt_re_dev *rdev) 282 282 { 283 - 284 - if (test_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags)) 285 - return; 286 283 rdev->num_vfs = pci_sriov_get_totalvfs(rdev->en_dev->pdev); 287 284 if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx)) { 288 285 bnxt_re_set_resource_limits(rdev);
+2 -1
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 744 744 bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req), 745 745 sizeof(resp), 0); 746 746 rc = bnxt_qplib_rcfw_send_message(rcfw, &msg); 747 - srq->threshold = le16_to_cpu(sb->srq_limit); 747 + if (!rc) 748 + srq->threshold = le16_to_cpu(sb->srq_limit); 748 749 dma_free_coherent(&rcfw->pdev->dev, sbuf.size, 749 750 sbuf.sb, sbuf.dma_addr); 750 751
+5 -1
drivers/infiniband/hw/hfi1/pio.c
··· 2086 2086 "Unable to allocate credit return DMA range for NUMA %d\n", 2087 2087 i); 2088 2088 ret = -ENOMEM; 2089 - goto done; 2089 + goto free_cr_base; 2090 2090 } 2091 2091 } 2092 2092 set_dev_node(&dd->pcidev->dev, dd->node); ··· 2094 2094 ret = 0; 2095 2095 done: 2096 2096 return ret; 2097 + 2098 + free_cr_base: 2099 + free_credit_return(dd); 2100 + goto done; 2097 2101 } 2098 2102 2099 2103 void free_credit_return(struct hfi1_devdata *dd)
+1 -1
drivers/infiniband/hw/hfi1/sdma.c
··· 3158 3158 { 3159 3159 int rval = 0; 3160 3160 3161 - if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) { 3161 + if ((unlikely(tx->num_desc == tx->desc_limit))) { 3162 3162 rval = _extend_sdma_tx_descs(dd, tx); 3163 3163 if (rval) { 3164 3164 __sdma_txclean(dd, tx);
+1
drivers/infiniband/hw/irdma/defs.h
··· 346 346 #define IRDMA_AE_LLP_TOO_MANY_KEEPALIVE_RETRIES 0x050b 347 347 #define IRDMA_AE_LLP_DOUBT_REACHABILITY 0x050c 348 348 #define IRDMA_AE_LLP_CONNECTION_ESTABLISHED 0x050e 349 + #define IRDMA_AE_LLP_TOO_MANY_RNRS 0x050f 349 350 #define IRDMA_AE_RESOURCE_EXHAUSTION 0x0520 350 351 #define IRDMA_AE_RESET_SENT 0x0601 351 352 #define IRDMA_AE_TERMINATE_SENT 0x0602
+8
drivers/infiniband/hw/irdma/hw.c
··· 387 387 case IRDMA_AE_LLP_TOO_MANY_RETRIES: 388 388 case IRDMA_AE_LCE_QP_CATASTROPHIC: 389 389 case IRDMA_AE_LCE_FUNCTION_CATASTROPHIC: 390 + case IRDMA_AE_LLP_TOO_MANY_RNRS: 390 391 case IRDMA_AE_LCE_CQ_CATASTROPHIC: 391 392 case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: 392 393 default: ··· 571 570 dev->irq_ops->irdma_dis_irq(dev, msix_vec->idx); 572 571 irq_update_affinity_hint(msix_vec->irq, NULL); 573 572 free_irq(msix_vec->irq, dev_id); 573 + if (rf == dev_id) { 574 + tasklet_kill(&rf->dpc_tasklet); 575 + } else { 576 + struct irdma_ceq *iwceq = (struct irdma_ceq *)dev_id; 577 + 578 + tasklet_kill(&iwceq->dpc_tasklet); 579 + } 574 580 } 575 581 576 582 /**
+5 -4
drivers/infiniband/hw/irdma/verbs.c
··· 839 839 840 840 if (init_attr->cap.max_inline_data > uk_attrs->max_hw_inline || 841 841 init_attr->cap.max_send_sge > uk_attrs->max_hw_wq_frags || 842 - init_attr->cap.max_recv_sge > uk_attrs->max_hw_wq_frags) 842 + init_attr->cap.max_recv_sge > uk_attrs->max_hw_wq_frags || 843 + init_attr->cap.max_send_wr > uk_attrs->max_hw_wq_quanta || 844 + init_attr->cap.max_recv_wr > uk_attrs->max_hw_rq_quanta) 843 845 return -EINVAL; 844 846 845 847 if (rdma_protocol_roce(&iwdev->ibdev, 1)) { ··· 2186 2184 info.cq_base_pa = iwcq->kmem.pa; 2187 2185 } 2188 2186 2189 - if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) 2190 - info.shadow_read_threshold = min(info.cq_uk_init_info.cq_size / 2, 2191 - (u32)IRDMA_MAX_CQ_READ_THRESH); 2187 + info.shadow_read_threshold = min(info.cq_uk_init_info.cq_size / 2, 2188 + (u32)IRDMA_MAX_CQ_READ_THRESH); 2192 2189 2193 2190 if (irdma_sc_cq_init(cq, &info)) { 2194 2191 ibdev_dbg(&iwdev->ibdev, "VERBS: init cq fail\n");
+6
drivers/infiniband/hw/mlx5/cong.c
··· 458 458 dbg_cc_params->root = debugfs_create_dir("cc_params", mlx5_debugfs_get_dev_root(mdev)); 459 459 460 460 for (i = 0; i < MLX5_IB_DBG_CC_MAX; i++) { 461 + if ((i == MLX5_IB_DBG_CC_GENERAL_RTT_RESP_DSCP_VALID || 462 + i == MLX5_IB_DBG_CC_GENERAL_RTT_RESP_DSCP)) 463 + if (!MLX5_CAP_GEN(mdev, roce) || 464 + !MLX5_CAP_ROCE(mdev, roce_cc_general)) 465 + continue; 466 + 461 467 dbg_cc_params->params[i].offset = i; 462 468 dbg_cc_params->params[i].dev = dev; 463 469 dbg_cc_params->params[i].port_num = port_num;
+1 -1
drivers/infiniband/hw/mlx5/devx.c
··· 2949 2949 MLX5_IB_METHOD_DEVX_OBJ_MODIFY, 2950 2950 UVERBS_ATTR_IDR(MLX5_IB_ATTR_DEVX_OBJ_MODIFY_HANDLE, 2951 2951 UVERBS_IDR_ANY_OBJECT, 2952 - UVERBS_ACCESS_WRITE, 2952 + UVERBS_ACCESS_READ, 2953 2953 UA_MANDATORY), 2954 2954 UVERBS_ATTR_PTR_IN( 2955 2955 MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN,
+1 -1
drivers/infiniband/hw/mlx5/wr.c
··· 78 78 */ 79 79 copysz = min_t(u64, *cur_edge - (void *)eseg->inline_hdr.start, 80 80 left); 81 - memcpy(eseg->inline_hdr.start, pdata, copysz); 81 + memcpy(eseg->inline_hdr.data, pdata, copysz); 82 82 stride = ALIGN(sizeof(struct mlx5_wqe_eth_seg) - 83 83 sizeof(eseg->inline_hdr.start) + copysz, 16); 84 84 *size += stride / 16;
+10 -1
drivers/infiniband/hw/qedr/verbs.c
··· 1879 1879 /* RQ - read access only (0) */ 1880 1880 rc = qedr_init_user_queue(udata, dev, &qp->urq, ureq.rq_addr, 1881 1881 ureq.rq_len, true, 0, alloc_and_init); 1882 - if (rc) 1882 + if (rc) { 1883 + ib_umem_release(qp->usq.umem); 1884 + qp->usq.umem = NULL; 1885 + if (rdma_protocol_roce(&dev->ibdev, 1)) { 1886 + qedr_free_pbl(dev, &qp->usq.pbl_info, 1887 + qp->usq.pbl_tbl); 1888 + } else { 1889 + kfree(qp->usq.pbl_tbl); 1890 + } 1883 1891 return rc; 1892 + } 1884 1893 } 1885 1894 1886 1895 memset(&in_params, 0, sizeof(in_params));
+11 -6
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 79 79 MODULE_PARM_DESC(srpt_srq_size, 80 80 "Shared receive queue (SRQ) size."); 81 81 82 + static int srpt_set_u64_x(const char *buffer, const struct kernel_param *kp) 83 + { 84 + return kstrtou64(buffer, 16, (u64 *)kp->arg); 85 + } 82 86 static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp) 83 87 { 84 88 return sprintf(buffer, "0x%016llx\n", *(u64 *)kp->arg); 85 89 } 86 - module_param_call(srpt_service_guid, NULL, srpt_get_u64_x, &srpt_service_guid, 87 - 0444); 90 + module_param_call(srpt_service_guid, srpt_set_u64_x, srpt_get_u64_x, 91 + &srpt_service_guid, 0444); 88 92 MODULE_PARM_DESC(srpt_service_guid, 89 93 "Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA."); 90 94 ··· 214 210 /** 215 211 * srpt_qp_event - QP event callback function 216 212 * @event: Description of the event that occurred. 217 - * @ch: SRPT RDMA channel. 213 + * @ptr: SRPT RDMA channel. 218 214 */ 219 - static void srpt_qp_event(struct ib_event *event, struct srpt_rdma_ch *ch) 215 + static void srpt_qp_event(struct ib_event *event, void *ptr) 220 216 { 217 + struct srpt_rdma_ch *ch = ptr; 218 + 221 219 pr_debug("QP event %d on ch=%p sess_name=%s-%d state=%s\n", 222 220 event->event, ch, ch->sess_name, ch->qp->qp_num, 223 221 get_ch_state_name(ch->state)); ··· 1813 1807 ch->cq_size = ch->rq_size + sq_size; 1814 1808 1815 1809 qp_init->qp_context = (void *)ch; 1816 - qp_init->event_handler 1817 - = (void(*)(struct ib_event *, void*))srpt_qp_event; 1810 + qp_init->event_handler = srpt_qp_event; 1818 1811 qp_init->send_cq = ch->cq; 1819 1812 qp_init->recv_cq = ch->cq; 1820 1813 qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;
+1
drivers/interconnect/qcom/sc8180x.c
··· 1372 1372 1373 1373 static struct qcom_icc_bcm bcm_co0 = { 1374 1374 .name = "CO0", 1375 + .keepalive = true, 1375 1376 .num_nodes = 1, 1376 1377 .nodes = { &slv_qns_cdsp_mem_noc } 1377 1378 };
+1
drivers/interconnect/qcom/sm8550.c
··· 2223 2223 .driver = { 2224 2224 .name = "qnoc-sm8550", 2225 2225 .of_match_table = qnoc_of_match, 2226 + .sync_state = icc_sync_state, 2226 2227 }, 2227 2228 }; 2228 2229
+1 -1
drivers/interconnect/qcom/sm8650.c
··· 1160 1160 1161 1161 static struct qcom_icc_bcm bcm_acv = { 1162 1162 .name = "ACV", 1163 - .enable_mask = BIT(3), 1163 + .enable_mask = BIT(0), 1164 1164 .num_nodes = 1, 1165 1165 .nodes = { &ebi }, 1166 1166 };
+1
drivers/interconnect/qcom/x1e80100.c
··· 1586 1586 1587 1587 static struct qcom_icc_bcm bcm_acv = { 1588 1588 .name = "ACV", 1589 + .enable_mask = BIT(3), 1589 1590 .num_nodes = 1, 1590 1591 .nodes = { &ebi }, 1591 1592 };
+4 -1
drivers/irqchip/irq-brcmstb-l2.c
··· 2 2 /* 3 3 * Generic Broadcom Set Top Box Level 2 Interrupt controller driver 4 4 * 5 - * Copyright (C) 2014-2017 Broadcom 5 + * Copyright (C) 2014-2024 Broadcom 6 6 */ 7 7 8 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 112 112 generic_handle_domain_irq(b->domain, irq); 113 113 } while (status); 114 114 out: 115 + /* Don't ack parent before all device writes are done */ 116 + wmb(); 117 + 115 118 chained_irq_exit(chip, desc); 116 119 } 117 120
+40 -22
drivers/irqchip/irq-gic-v3-its.c
··· 207 207 return (gic_rdists->has_rvpeid || vm->vlpi_count[its->list_nr]); 208 208 } 209 209 210 + static bool rdists_support_shareable(void) 211 + { 212 + return !(gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE); 213 + } 214 + 210 215 static u16 get_its_list(struct its_vm *vm) 211 216 { 212 217 struct its_node *its; ··· 2715 2710 break; 2716 2711 } 2717 2712 val |= FIELD_PREP(GICR_VPROPBASER_4_1_ADDR, addr >> 12); 2718 - val |= FIELD_PREP(GICR_VPROPBASER_SHAREABILITY_MASK, 2719 - FIELD_GET(GITS_BASER_SHAREABILITY_MASK, baser)); 2720 - val |= FIELD_PREP(GICR_VPROPBASER_INNER_CACHEABILITY_MASK, 2721 - FIELD_GET(GITS_BASER_INNER_CACHEABILITY_MASK, baser)); 2713 + if (rdists_support_shareable()) { 2714 + val |= FIELD_PREP(GICR_VPROPBASER_SHAREABILITY_MASK, 2715 + FIELD_GET(GITS_BASER_SHAREABILITY_MASK, baser)); 2716 + val |= FIELD_PREP(GICR_VPROPBASER_INNER_CACHEABILITY_MASK, 2717 + FIELD_GET(GITS_BASER_INNER_CACHEABILITY_MASK, baser)); 2718 + } 2722 2719 val |= FIELD_PREP(GICR_VPROPBASER_4_1_SIZE, GITS_BASER_NR_PAGES(baser) - 1); 2723 2720 2724 2721 return val; ··· 2943 2936 WARN_ON(!IS_ALIGNED(pa, psz)); 2944 2937 2945 2938 val |= FIELD_PREP(GICR_VPROPBASER_4_1_ADDR, pa >> 12); 2946 - val |= GICR_VPROPBASER_RaWb; 2947 - val |= GICR_VPROPBASER_InnerShareable; 2939 + if (rdists_support_shareable()) { 2940 + val |= GICR_VPROPBASER_RaWb; 2941 + val |= GICR_VPROPBASER_InnerShareable; 2942 + } 2948 2943 val |= GICR_VPROPBASER_4_1_Z; 2949 2944 val |= GICR_VPROPBASER_4_1_VALID; 2950 2945 ··· 3135 3126 gicr_write_propbaser(val, rbase + GICR_PROPBASER); 3136 3127 tmp = gicr_read_propbaser(rbase + GICR_PROPBASER); 3137 3128 3138 - if (gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE) 3129 + if (!rdists_support_shareable()) 3139 3130 tmp &= ~GICR_PROPBASER_SHAREABILITY_MASK; 3140 3131 3141 3132 if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) { ··· 3162 3153 gicr_write_pendbaser(val, rbase + GICR_PENDBASER); 3163 3154 tmp = gicr_read_pendbaser(rbase + GICR_PENDBASER); 3164 3155 3165 - if (gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE) 3156 + if (!rdists_support_shareable()) 3166 3157 tmp &= ~GICR_PENDBASER_SHAREABILITY_MASK; 3167 3158 3168 3159 if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) { ··· 3826 3817 bool force) 3827 3818 { 3828 3819 struct its_vpe *vpe = irq_data_get_irq_chip_data(d); 3829 - int from, cpu = cpumask_first(mask_val); 3820 + struct cpumask common, *table_mask; 3830 3821 unsigned long flags; 3822 + int from, cpu; 3831 3823 3832 3824 /* 3833 3825 * Changing affinity is mega expensive, so let's be as lazy as ··· 3844 3834 * taken on any vLPI handling path that evaluates vpe->col_idx. 3845 3835 */ 3846 3836 from = vpe_to_cpuid_lock(vpe, &flags); 3837 + table_mask = gic_data_rdist_cpu(from)->vpe_table_mask; 3838 + 3839 + /* 3840 + * If we are offered another CPU in the same GICv4.1 ITS 3841 + * affinity, pick this one. Otherwise, any CPU will do. 3842 + */ 3843 + if (table_mask && cpumask_and(&common, mask_val, table_mask)) 3844 + cpu = cpumask_test_cpu(from, &common) ? from : cpumask_first(&common); 3845 + else 3846 + cpu = cpumask_first(mask_val); 3847 + 3847 3848 if (from == cpu) 3848 3849 goto out; 3849 3850 3850 3851 vpe->col_idx = cpu; 3851 - 3852 - /* 3853 - * GICv4.1 allows us to skip VMOVP if moving to a cpu whose RD 3854 - * is sharing its VPE table with the current one. 3855 - */ 3856 - if (gic_data_rdist_cpu(cpu)->vpe_table_mask && 3857 - cpumask_test_cpu(from, gic_data_rdist_cpu(cpu)->vpe_table_mask)) 3858 - goto out; 3859 3852 3860 3853 its_send_vmovp(vpe); 3861 3854 its_vpe_db_proxy_move(vpe, from, cpu); ··· 3893 3880 val = virt_to_phys(page_address(vpe->its_vm->vprop_page)) & 3894 3881 GENMASK_ULL(51, 12); 3895 3882 val |= (LPI_NRBITS - 1) & GICR_VPROPBASER_IDBITS_MASK; 3896 - val |= GICR_VPROPBASER_RaWb; 3897 - val |= GICR_VPROPBASER_InnerShareable; 3883 + if (rdists_support_shareable()) { 3884 + val |= GICR_VPROPBASER_RaWb; 3885 + val |= GICR_VPROPBASER_InnerShareable; 3886 + } 3898 3887 gicr_write_vpropbaser(val, vlpi_base + GICR_VPROPBASER); 3899 3888 3900 3889 val = virt_to_phys(page_address(vpe->vpt_page)) & 3901 3890 GENMASK_ULL(51, 16); 3902 - val |= GICR_VPENDBASER_RaWaWb; 3903 - val |= GICR_VPENDBASER_InnerShareable; 3891 + if (rdists_support_shareable()) { 3892 + val |= GICR_VPENDBASER_RaWaWb; 3893 + val |= GICR_VPENDBASER_InnerShareable; 3894 + } 3904 3895 /* 3905 3896 * There is no good way of finding out if the pending table is 3906 3897 * empty as we can race against the doorbell interrupt very ··· 5095 5078 u32 ctlr; 5096 5079 int err; 5097 5080 5081 + its_enable_quirks(its); 5082 + 5098 5083 if (is_v4(its)) { 5099 5084 if (!(its->typer & GITS_TYPER_VMOVP)) { 5100 5085 err = its_compute_its_list_map(its); ··· 5448 5429 if (!its) 5449 5430 return -ENOMEM; 5450 5431 5451 - its_enable_quirks(its); 5452 5432 err = its_probe_one(its); 5453 5433 if (err) { 5454 5434 its_node_destroy(its);
+1 -1
drivers/irqchip/irq-loongson-eiointc.c
··· 241 241 int ret; 242 242 unsigned int i, type; 243 243 unsigned long hwirq = 0; 244 - struct eiointc *priv = domain->host_data; 244 + struct eiointc_priv *priv = domain->host_data; 245 245 246 246 ret = irq_domain_translate_onecell(domain, arg, &hwirq, &type); 247 247 if (ret)
+2 -2
drivers/irqchip/irq-qcom-mpm.c
··· 389 389 /* Don't use devm_ioremap_resource, as we're accessing a shared region. */ 390 390 priv->base = devm_ioremap(dev, res.start, resource_size(&res)); 391 391 of_node_put(msgram_np); 392 - if (IS_ERR(priv->base)) 393 - return PTR_ERR(priv->base); 392 + if (!priv->base) 393 + return -ENOMEM; 394 394 } else { 395 395 /* Otherwise, fall back to simple MMIO. */ 396 396 priv->base = devm_platform_ioremap_resource(pdev, 0);
+3
drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
··· 725 725 unsigned int i; 726 726 u32 status; 727 727 728 + if (!rkisp1->irqs_enabled) 729 + return IRQ_NONE; 730 + 728 731 status = rkisp1_read(rkisp1, RKISP1_CIF_MI_MIS); 729 732 if (!status) 730 733 return IRQ_NONE;
+2
drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
··· 450 450 * @debug: debug params to be exposed on debugfs 451 451 * @info: version-specific ISP information 452 452 * @irqs: IRQ line numbers 453 + * @irqs_enabled: the hardware is enabled and can cause interrupts 453 454 */ 454 455 struct rkisp1_device { 455 456 void __iomem *base_addr; ··· 472 471 struct rkisp1_debug debug; 473 472 const struct rkisp1_info *info; 474 473 int irqs[RKISP1_NUM_IRQS]; 474 + bool irqs_enabled; 475 475 }; 476 476 477 477 /*
+3
drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
··· 196 196 struct rkisp1_device *rkisp1 = dev_get_drvdata(dev); 197 197 u32 val, status; 198 198 199 + if (!rkisp1->irqs_enabled) 200 + return IRQ_NONE; 201 + 199 202 status = rkisp1_read(rkisp1, RKISP1_CIF_MIPI_MIS); 200 203 if (!status) 201 204 return IRQ_NONE;
+23 -1
drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
··· 305 305 { 306 306 struct rkisp1_device *rkisp1 = dev_get_drvdata(dev); 307 307 308 + rkisp1->irqs_enabled = false; 309 + /* Make sure the IRQ handler will see the above */ 310 + mb(); 311 + 312 + /* 313 + * Wait until any running IRQ handler has returned. The IRQ handler 314 + * may get called even after this (as it's a shared interrupt line) 315 + * but the 'irqs_enabled' flag will make the handler return immediately. 316 + */ 317 + for (unsigned int il = 0; il < ARRAY_SIZE(rkisp1->irqs); ++il) { 318 + if (rkisp1->irqs[il] == -1) 319 + continue; 320 + 321 + /* Skip if the irq line is the same as previous */ 322 + if (il == 0 || rkisp1->irqs[il - 1] != rkisp1->irqs[il]) 323 + synchronize_irq(rkisp1->irqs[il]); 324 + } 325 + 308 326 clk_bulk_disable_unprepare(rkisp1->clk_size, rkisp1->clks); 309 327 return pinctrl_pm_select_sleep_state(dev); 310 328 } ··· 338 320 ret = clk_bulk_prepare_enable(rkisp1->clk_size, rkisp1->clks); 339 321 if (ret) 340 322 return ret; 323 + 324 + rkisp1->irqs_enabled = true; 325 + /* Make sure the IRQ handler will see the above */ 326 + mb(); 341 327 342 328 return 0; 343 329 } ··· 581 559 rkisp1->irqs[il] = irq; 582 560 } 583 561 584 - ret = devm_request_irq(dev, irq, info->isrs[i].isr, 0, 562 + ret = devm_request_irq(dev, irq, info->isrs[i].isr, IRQF_SHARED, 585 563 dev_driver_string(dev), dev); 586 564 if (ret) { 587 565 dev_err(dev, "request irq failed: %d\n", ret);
+3
drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
··· 976 976 struct rkisp1_device *rkisp1 = dev_get_drvdata(dev); 977 977 u32 status, isp_err; 978 978 979 + if (!rkisp1->irqs_enabled) 980 + return IRQ_NONE; 981 + 979 982 status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_MIS); 980 983 if (!status) 981 984 return IRQ_NONE;
+1
drivers/media/rc/Kconfig
··· 319 319 tristate "PWM IR transmitter" 320 320 depends on LIRC 321 321 depends on PWM 322 + depends on HIGH_RES_TIMERS 322 323 depends on OF 323 324 help 324 325 Say Y if you want to use a PWM based IR transmitter. This is
+3 -3
drivers/media/rc/bpf-lirc.c
··· 253 253 if (attr->attach_flags) 254 254 return -EINVAL; 255 255 256 - rcdev = rc_dev_get_from_fd(attr->target_fd); 256 + rcdev = rc_dev_get_from_fd(attr->target_fd, true); 257 257 if (IS_ERR(rcdev)) 258 258 return PTR_ERR(rcdev); 259 259 ··· 278 278 if (IS_ERR(prog)) 279 279 return PTR_ERR(prog); 280 280 281 - rcdev = rc_dev_get_from_fd(attr->target_fd); 281 + rcdev = rc_dev_get_from_fd(attr->target_fd, true); 282 282 if (IS_ERR(rcdev)) { 283 283 bpf_prog_put(prog); 284 284 return PTR_ERR(rcdev); ··· 303 303 if (attr->query.query_flags) 304 304 return -EINVAL; 305 305 306 - rcdev = rc_dev_get_from_fd(attr->query.target_fd); 306 + rcdev = rc_dev_get_from_fd(attr->query.target_fd, false); 307 307 if (IS_ERR(rcdev)) 308 308 return PTR_ERR(rcdev); 309 309
+2
drivers/media/rc/ir_toy.c
··· 332 332 sizeof(COMMAND_SMODE_EXIT), STATE_COMMAND_NO_RESP); 333 333 if (err) { 334 334 dev_err(irtoy->dev, "exit sample mode: %d\n", err); 335 + kfree(buf); 335 336 return err; 336 337 } 337 338 ··· 340 339 sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND); 341 340 if (err) { 342 341 dev_err(irtoy->dev, "enter sample mode: %d\n", err); 342 + kfree(buf); 343 343 return err; 344 344 } 345 345
+4 -1
drivers/media/rc/lirc_dev.c
··· 814 814 unregister_chrdev_region(lirc_base_dev, RC_DEV_MAX); 815 815 } 816 816 817 - struct rc_dev *rc_dev_get_from_fd(int fd) 817 + struct rc_dev *rc_dev_get_from_fd(int fd, bool write) 818 818 { 819 819 struct fd f = fdget(fd); 820 820 struct lirc_fh *fh; ··· 827 827 fdput(f); 828 828 return ERR_PTR(-EINVAL); 829 829 } 830 + 831 + if (write && !(f.file->f_mode & FMODE_WRITE)) 832 + return ERR_PTR(-EPERM); 830 833 831 834 fh = f.file->private_data; 832 835 dev = fh->rc;
+1 -1
drivers/media/rc/rc-core-priv.h
··· 325 325 void lirc_scancode_event(struct rc_dev *dev, struct lirc_scancode *lsc); 326 326 int lirc_register(struct rc_dev *dev); 327 327 void lirc_unregister(struct rc_dev *dev); 328 - struct rc_dev *rc_dev_get_from_fd(int fd); 328 + struct rc_dev *rc_dev_get_from_fd(int fd, bool write); 329 329 #else 330 330 static inline int lirc_dev_init(void) { return 0; } 331 331 static inline void lirc_dev_exit(void) {}
+1
drivers/net/ethernet/adi/Kconfig
··· 7 7 bool "Analog Devices devices" 8 8 default y 9 9 depends on SPI 10 + select PHYLIB 10 11 help 11 12 If you have a network (Ethernet) card belonging to this class, say Y. 12 13
+3 -3
drivers/net/ethernet/broadcom/asp2/bcmasp.c
··· 535 535 int j = 0, i; 536 536 537 537 for (i = 0; i < NUM_NET_FILTERS; i++) { 538 - if (j == *rule_cnt) 539 - return -EMSGSIZE; 540 - 541 538 if (!priv->net_filters[i].claimed || 542 539 priv->net_filters[i].port != intf->port) 543 540 continue; ··· 543 546 priv->net_filters[i].wake_filter && 544 547 priv->net_filters[i - 1].wake_filter) 545 548 continue; 549 + 550 + if (j == *rule_cnt) 551 + return -EMSGSIZE; 546 552 547 553 rule_locs[j++] = priv->net_filters[i].fs.location; 548 554 }
+3
drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
··· 1051 1051 netdev_err(dev, "could not attach to PHY\n"); 1052 1052 goto err_phy_disable; 1053 1053 } 1054 + 1055 + /* Indicate that the MAC is responsible for PHY PM */ 1056 + phydev->mac_managed_pm = true; 1054 1057 } else if (!intf->wolopts) { 1055 1058 ret = phy_resume(dev->phydev); 1056 1059 if (ret)
+2 -1
drivers/net/ethernet/cisco/enic/vnic_vic.c
··· 49 49 50 50 tlv->type = htons(type); 51 51 tlv->length = htons(length); 52 - memcpy(tlv->value, value, length); 52 + unsafe_memcpy(tlv->value, value, length, 53 + /* Flexible array of flexible arrays */); 53 54 54 55 vp->num_tlvs = htonl(ntohl(vp->num_tlvs) + 1); 55 56 vp->length = htonl(ntohl(vp->length) +
+4
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
··· 415 415 return; 416 416 } 417 417 418 + /* AF modifies given action iff PF/VF has requested for it */ 419 + if ((entry->action & 0xFULL) != NIX_RX_ACTION_DEFAULT) 420 + return; 421 + 418 422 /* copy VF default entry action to the VF mcam entry */ 419 423 rx_action = npc_get_default_entry_action(rvu, mcam, blkaddr, 420 424 target_func);
+1
drivers/net/ethernet/microchip/sparx5/sparx5_main.c
··· 757 757 platform_set_drvdata(pdev, sparx5); 758 758 sparx5->pdev = pdev; 759 759 sparx5->dev = &pdev->dev; 760 + spin_lock_init(&sparx5->tx_lock); 760 761 761 762 /* Do switch core reset if available */ 762 763 reset = devm_reset_control_get_optional_shared(&pdev->dev, "switch");
+1
drivers/net/ethernet/microchip/sparx5/sparx5_main.h
··· 280 280 int xtr_irq; 281 281 /* Frame DMA */ 282 282 int fdma_irq; 283 + spinlock_t tx_lock; /* lock for frame transmission */ 283 284 struct sparx5_rx rx; 284 285 struct sparx5_tx tx; 285 286 /* PTP */
+2
drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
··· 244 244 } 245 245 246 246 skb_tx_timestamp(skb); 247 + spin_lock(&sparx5->tx_lock); 247 248 if (sparx5->fdma_irq > 0) 248 249 ret = sparx5_fdma_xmit(sparx5, ifh, skb); 249 250 else 250 251 ret = sparx5_inject(sparx5, ifh, skb, dev); 252 + spin_unlock(&sparx5->tx_lock); 251 253 252 254 if (ret == -EBUSY) 253 255 goto busy;
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
··· 223 223 ionic_unmap_bars(ionic); 224 224 pci_release_regions(ionic->pdev); 225 225 226 - if (atomic_read(&ionic->pdev->enable_cnt) > 0) 226 + if (pci_is_enabled(ionic->pdev)) 227 227 pci_disable_device(ionic->pdev); 228 228 } 229 229
+1 -1
drivers/net/ethernet/stmicro/stmmac/hwif.c
··· 224 224 .regs = { 225 225 .ptp_off = PTP_GMAC4_OFFSET, 226 226 .mmc_off = MMC_GMAC4_OFFSET, 227 - .est_off = EST_XGMAC_OFFSET, 227 + .est_off = EST_GMAC4_OFFSET, 228 228 }, 229 229 .desc = &dwmac4_desc_ops, 230 230 .dma = &dwmac410_dma_ops,
-20
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 6115 6115 struct net_device *dev = (struct net_device *)dev_id; 6116 6116 struct stmmac_priv *priv = netdev_priv(dev); 6117 6117 6118 - if (unlikely(!dev)) { 6119 - netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); 6120 - return IRQ_NONE; 6121 - } 6122 - 6123 6118 /* Check if adapter is up */ 6124 6119 if (test_bit(STMMAC_DOWN, &priv->state)) 6125 6120 return IRQ_HANDLED; ··· 6129 6134 { 6130 6135 struct net_device *dev = (struct net_device *)dev_id; 6131 6136 struct stmmac_priv *priv = netdev_priv(dev); 6132 - 6133 - if (unlikely(!dev)) { 6134 - netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); 6135 - return IRQ_NONE; 6136 - } 6137 6137 6138 6138 /* Check if adapter is up */ 6139 6139 if (test_bit(STMMAC_DOWN, &priv->state)) ··· 6150 6160 6151 6161 dma_conf = container_of(tx_q, struct stmmac_dma_conf, tx_queue[chan]); 6152 6162 priv = container_of(dma_conf, struct stmmac_priv, dma_conf); 6153 - 6154 - if (unlikely(!data)) { 6155 - netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); 6156 - return IRQ_NONE; 6157 - } 6158 6163 6159 6164 /* Check if adapter is up */ 6160 6165 if (test_bit(STMMAC_DOWN, &priv->state)) ··· 6176 6191 6177 6192 dma_conf = container_of(rx_q, struct stmmac_dma_conf, rx_queue[chan]); 6178 6193 priv = container_of(dma_conf, struct stmmac_priv, dma_conf); 6179 - 6180 - if (unlikely(!data)) { 6181 - netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); 6182 - return IRQ_NONE; 6183 - } 6184 6194 6185 6195 /* Check if adapter is up */ 6186 6196 if (test_bit(STMMAC_DOWN, &priv->state))
+5 -5
drivers/net/gtp.c
··· 1907 1907 if (err < 0) 1908 1908 goto error_out; 1909 1909 1910 - err = genl_register_family(&gtp_genl_family); 1910 + err = register_pernet_subsys(&gtp_net_ops); 1911 1911 if (err < 0) 1912 1912 goto unreg_rtnl_link; 1913 1913 1914 - err = register_pernet_subsys(&gtp_net_ops); 1914 + err = genl_register_family(&gtp_genl_family); 1915 1915 if (err < 0) 1916 - goto unreg_genl_family; 1916 + goto unreg_pernet_subsys; 1917 1917 1918 1918 pr_info("GTP module loaded (pdp ctx size %zd bytes)\n", 1919 1919 sizeof(struct pdp_ctx)); 1920 1920 return 0; 1921 1921 1922 - unreg_genl_family: 1923 - genl_unregister_family(&gtp_genl_family); 1922 + unreg_pernet_subsys: 1923 + unregister_pernet_subsys(&gtp_net_ops); 1924 1924 unreg_rtnl_link: 1925 1925 rtnl_link_unregister(&gtp_link_ops); 1926 1926 error_out:
+1 -1
drivers/net/ipa/ipa_interrupt.c
··· 212 212 u32 unit_count; 213 213 u32 unit; 214 214 215 - unit_count = roundup(ipa->endpoint_count, 32); 215 + unit_count = DIV_ROUND_UP(ipa->endpoint_count, 32); 216 216 for (unit = 0; unit < unit_count; unit++) { 217 217 const struct reg *reg; 218 218 u32 val;
+3 -1
drivers/net/phy/realtek.c
··· 413 413 ERR_PTR(ret)); 414 414 return ret; 415 415 } 416 + 417 + return genphy_soft_reset(phydev); 416 418 } 417 419 418 - return genphy_soft_reset(phydev); 420 + return 0; 419 421 } 420 422 421 423 static int rtl821x_suspend(struct phy_device *phydev)
+4
drivers/nvme/host/core.c
··· 1153 1153 effects &= ~NVME_CMD_EFFECTS_CSE_MASK; 1154 1154 } else { 1155 1155 effects = le32_to_cpu(ctrl->effects->acs[opcode]); 1156 + 1157 + /* Ignore execution restrictions if any relaxation bits are set */ 1158 + if (effects & NVME_CMD_EFFECTS_CSER_MASK) 1159 + effects &= ~NVME_CMD_EFFECTS_CSE_MASK; 1156 1160 } 1157 1161 1158 1162 return effects;
+1
drivers/nvme/host/fabrics.c
··· 534 534 if (ret) { 535 535 nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32), 536 536 &cmd, data); 537 + goto out_free_data; 537 538 } 538 539 result = le32_to_cpu(res.u32); 539 540 if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
+2 -2
drivers/nvme/target/fabrics-cmd.c
··· 209 209 struct nvmf_connect_command *c = &req->cmd->connect; 210 210 struct nvmf_connect_data *d; 211 211 struct nvmet_ctrl *ctrl = NULL; 212 - u16 status = 0; 212 + u16 status; 213 213 int ret; 214 214 215 215 if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data))) ··· 290 290 struct nvmf_connect_data *d; 291 291 struct nvmet_ctrl *ctrl; 292 292 u16 qid = le16_to_cpu(c->qid); 293 - u16 status = 0; 293 + u16 status; 294 294 295 295 if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data))) 296 296 return;
+3 -2
drivers/nvmem/core.c
··· 460 460 list_for_each_entry(entry, &nvmem->cells, node) { 461 461 sysfs_bin_attr_init(&attrs[i]); 462 462 attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL, 463 - "%s@%x", entry->name, 464 - entry->offset); 463 + "%s@%x,%x", entry->name, 464 + entry->offset, 465 + entry->bit_offset); 465 466 attrs[i].attr.mode = 0444; 466 467 attrs[i].size = entry->bytes; 467 468 attrs[i].read = &nvmem_cell_attr_read;
+22 -15
drivers/pci/pci.c
··· 2522 2522 if (pdev->pme_poll) { 2523 2523 struct pci_dev *bridge = pdev->bus->self; 2524 2524 struct device *dev = &pdev->dev; 2525 - int pm_status; 2525 + struct device *bdev = bridge ? &bridge->dev : NULL; 2526 + int bref = 0; 2526 2527 2527 2528 /* 2528 - * If bridge is in low power state, the 2529 - * configuration space of subordinate devices 2530 - * may be not accessible 2529 + * If we have a bridge, it should be in an active/D0 2530 + * state or the configuration space of subordinate 2531 + * devices may not be accessible or stable over the 2532 + * course of the call. 2531 2533 */ 2532 - if (bridge && bridge->current_state != PCI_D0) 2533 - continue; 2534 + if (bdev) { 2535 + bref = pm_runtime_get_if_active(bdev, true); 2536 + if (!bref) 2537 + continue; 2538 + 2539 + if (bridge->current_state != PCI_D0) 2540 + goto put_bridge; 2541 + } 2534 2542 2535 2543 /* 2536 - * If the device is in a low power state it 2537 - * should not be polled either. 2544 + * The device itself should be suspended but config 2545 + * space must be accessible, therefore it cannot be in 2546 + * D3cold. 2538 2547 */ 2539 - pm_status = pm_runtime_get_if_active(dev, true); 2540 - if (!pm_status) 2541 - continue; 2542 - 2543 - if (pdev->current_state != PCI_D3cold) 2548 + if (pm_runtime_suspended(dev) && 2549 + pdev->current_state != PCI_D3cold) 2544 2550 pci_pme_wakeup(pdev, NULL); 2545 2551 2546 - if (pm_status > 0) 2547 - pm_runtime_put(dev); 2552 + put_bridge: 2553 + if (bref > 0) 2554 + pm_runtime_put(bdev); 2548 2555 } else { 2549 2556 list_del(&pme_dev->list); 2550 2557 kfree(pme_dev);
+11
drivers/perf/arm-cmn.c
··· 2305 2305 dev_dbg(cmn->dev, "ignoring external node %llx\n", reg); 2306 2306 continue; 2307 2307 } 2308 + /* 2309 + * AmpereOneX erratum AC04_MESH_1 makes some XPs report a bogus 2310 + * child count larger than the number of valid child pointers. 2311 + * A child offset of 0 can only occur on CMN-600; otherwise it 2312 + * would imply the root node being its own grandchild, which 2313 + * we can safely dismiss in general. 2314 + */ 2315 + if (reg == 0 && cmn->part != PART_CMN600) { 2316 + dev_dbg(cmn->dev, "bogus child pointer?\n"); 2317 + continue; 2318 + } 2308 2319 2309 2320 arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn); 2310 2321
+1 -1
drivers/perf/cxl_pmu.c
··· 419 419 CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmp, CXL_PMU_GID_S2M_NDR, BIT(0)), 420 420 CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmps, CXL_PMU_GID_S2M_NDR, BIT(1)), 421 421 CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmpe, CXL_PMU_GID_S2M_NDR, BIT(2)), 422 - CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_biconflictack, CXL_PMU_GID_S2M_NDR, BIT(3)), 422 + CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_biconflictack, CXL_PMU_GID_S2M_NDR, BIT(4)), 423 423 /* CXL rev 3.0 Table 3-46 S2M DRS opcodes */ 424 424 CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdata, CXL_PMU_GID_S2M_DRS, BIT(0)), 425 425 CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdatanxm, CXL_PMU_GID_S2M_DRS, BIT(1)),
+8 -12
drivers/scsi/fcoe/fcoe_ctlr.c
··· 319 319 { 320 320 struct fcoe_fcf *sel; 321 321 struct fcoe_fcf *fcf; 322 - unsigned long flags; 323 322 324 323 mutex_lock(&fip->ctlr_mutex); 325 - spin_lock_irqsave(&fip->ctlr_lock, flags); 324 + spin_lock_bh(&fip->ctlr_lock); 326 325 327 326 kfree_skb(fip->flogi_req); 328 327 fip->flogi_req = NULL; 329 328 list_for_each_entry(fcf, &fip->fcfs, list) 330 329 fcf->flogi_sent = 0; 331 330 332 - spin_unlock_irqrestore(&fip->ctlr_lock, flags); 331 + spin_unlock_bh(&fip->ctlr_lock); 333 332 sel = fip->sel_fcf; 334 333 335 334 if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr)) ··· 699 700 { 700 701 struct fc_frame *fp; 701 702 struct fc_frame_header *fh; 702 - unsigned long flags; 703 703 u16 old_xid; 704 704 u8 op; 705 705 u8 mac[ETH_ALEN]; ··· 732 734 op = FIP_DT_FLOGI; 733 735 if (fip->mode == FIP_MODE_VN2VN) 734 736 break; 735 - spin_lock_irqsave(&fip->ctlr_lock, flags); 737 + spin_lock_bh(&fip->ctlr_lock); 736 738 kfree_skb(fip->flogi_req); 737 739 fip->flogi_req = skb; 738 740 fip->flogi_req_send = 1; 739 - spin_unlock_irqrestore(&fip->ctlr_lock, flags); 741 + spin_unlock_bh(&fip->ctlr_lock); 740 742 schedule_work(&fip->timer_work); 741 743 return -EINPROGRESS; 742 744 case ELS_FDISC: ··· 1705 1707 static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip) 1706 1708 { 1707 1709 struct fcoe_fcf *fcf; 1708 - unsigned long flags; 1709 1710 int error; 1710 1711 1711 1712 mutex_lock(&fip->ctlr_mutex); 1712 - spin_lock_irqsave(&fip->ctlr_lock, flags); 1713 + spin_lock_bh(&fip->ctlr_lock); 1713 1714 LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n"); 1714 1715 fcf = fcoe_ctlr_select(fip); 1715 1716 if (!fcf || fcf->flogi_sent) { ··· 1719 1722 fcoe_ctlr_solicit(fip, NULL); 1720 1723 error = fcoe_ctlr_flogi_send_locked(fip); 1721 1724 } 1722 - spin_unlock_irqrestore(&fip->ctlr_lock, flags); 1725 + spin_unlock_bh(&fip->ctlr_lock); 1723 1726 mutex_unlock(&fip->ctlr_mutex); 1724 1727 return error; 1725 1728 } ··· 1736 1739 static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip) 1737 1740 { 1738 1741 struct fcoe_fcf *fcf; 1739 - unsigned long flags; 1740 1742 1741 - spin_lock_irqsave(&fip->ctlr_lock, flags); 1743 + spin_lock_bh(&fip->ctlr_lock); 1742 1744 fcf = fip->sel_fcf; 1743 1745 if (!fcf || !fip->flogi_req_send) 1744 1746 goto unlock; ··· 1764 1768 } else /* XXX */ 1765 1769 LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n"); 1766 1770 unlock: 1767 - spin_unlock_irqrestore(&fip->ctlr_lock, flags); 1771 + spin_unlock_bh(&fip->ctlr_lock); 1768 1772 } 1769 1773 1770 1774 /**
+2 -1
drivers/scsi/fnic/fnic.h
··· 305 305 unsigned int copy_wq_base; 306 306 struct work_struct link_work; 307 307 struct work_struct frame_work; 308 + struct work_struct flush_work; 308 309 struct sk_buff_head frame_queue; 309 310 struct sk_buff_head tx_queue; 310 311 ··· 364 363 int fnic_rq_cmpl_handler(struct fnic *fnic, int); 365 364 int fnic_alloc_rq_frame(struct vnic_rq *rq); 366 365 void fnic_free_rq_buf(struct vnic_rq *rq, struct vnic_rq_buf *buf); 367 - void fnic_flush_tx(struct fnic *); 366 + void fnic_flush_tx(struct work_struct *work); 368 367 void fnic_eth_send(struct fcoe_ctlr *, struct sk_buff *skb); 369 368 void fnic_set_port_id(struct fc_lport *, u32, struct fc_frame *); 370 369 void fnic_update_mac(struct fc_lport *, u8 *new);
+3 -2
drivers/scsi/fnic/fnic_fcs.c
··· 1182 1182 1183 1183 /** 1184 1184 * fnic_flush_tx() - send queued frames. 1185 - * @fnic: fnic device 1185 + * @work: pointer to work element 1186 1186 * 1187 1187 * Send frames that were waiting to go out in FC or Ethernet mode. 1188 1188 * Whenever changing modes we purge queued frames, so these frames should ··· 1190 1190 * 1191 1191 * Called without fnic_lock held. 1192 1192 */ 1193 - void fnic_flush_tx(struct fnic *fnic) 1193 + void fnic_flush_tx(struct work_struct *work) 1194 1194 { 1195 + struct fnic *fnic = container_of(work, struct fnic, flush_work); 1195 1196 struct sk_buff *skb; 1196 1197 struct fc_frame *fp; 1197 1198
+1
drivers/scsi/fnic/fnic_main.c
··· 830 830 spin_lock_init(&fnic->vlans_lock); 831 831 INIT_WORK(&fnic->fip_frame_work, fnic_handle_fip_frame); 832 832 INIT_WORK(&fnic->event_work, fnic_handle_event); 833 + INIT_WORK(&fnic->flush_work, fnic_flush_tx); 833 834 skb_queue_head_init(&fnic->fip_frame_queue); 834 835 INIT_LIST_HEAD(&fnic->evlist); 835 836 INIT_LIST_HEAD(&fnic->vlans);
+2 -2
drivers/scsi/fnic/fnic_scsi.c
··· 680 680 681 681 spin_unlock_irqrestore(&fnic->fnic_lock, flags); 682 682 683 - fnic_flush_tx(fnic); 683 + queue_work(fnic_event_queue, &fnic->flush_work); 684 684 685 685 reset_cmpl_handler_end: 686 686 fnic_clear_state_flags(fnic, FNIC_FLAGS_FWRESET); ··· 736 736 } 737 737 spin_unlock_irqrestore(&fnic->fnic_lock, flags); 738 738 739 - fnic_flush_tx(fnic); 739 + queue_work(fnic_event_queue, &fnic->flush_work); 740 740 queue_work(fnic_event_queue, &fnic->frame_work); 741 741 } else { 742 742 spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+1 -1
drivers/staging/iio/impedance-analyzer/ad5933.c
··· 608 608 struct ad5933_state, work.work); 609 609 struct iio_dev *indio_dev = i2c_get_clientdata(st->client); 610 610 __be16 buf[2]; 611 - int val[2]; 611 + u16 val[2]; 612 612 unsigned char status; 613 613 int ret; 614 614
+34 -24
drivers/staging/media/atomisp/pci/atomisp_cmd.c
··· 3723 3723 3724 3724 static int atomisp_set_crop(struct atomisp_device *isp, 3725 3725 const struct v4l2_mbus_framefmt *format, 3726 + struct v4l2_subdev_state *sd_state, 3726 3727 int which) 3727 3728 { 3728 3729 struct atomisp_input_subdev *input = &isp->inputs[isp->asd.input_curr]; 3729 - struct v4l2_subdev_state pad_state = { 3730 - .pads = &input->pad_cfg, 3731 - }; 3732 3730 struct v4l2_subdev_selection sel = { 3733 3731 .which = which, 3734 3732 .target = V4L2_SEL_TGT_CROP, ··· 3752 3754 sel.r.left = ((input->native_rect.width - sel.r.width) / 2) & ~1; 3753 3755 sel.r.top = ((input->native_rect.height - sel.r.height) / 2) & ~1; 3754 3756 3755 - ret = v4l2_subdev_call(input->camera, pad, set_selection, &pad_state, &sel); 3757 + ret = v4l2_subdev_call(input->camera, pad, set_selection, sd_state, &sel); 3756 3758 if (ret) 3757 3759 dev_err(isp->dev, "Error setting crop to %ux%u @%ux%u: %d\n", 3758 3760 sel.r.width, sel.r.height, sel.r.left, sel.r.top, ret); ··· 3768 3770 const struct atomisp_format_bridge *fmt, *snr_fmt; 3769 3771 struct atomisp_sub_device *asd = &isp->asd; 3770 3772 struct atomisp_input_subdev *input = &isp->inputs[asd->input_curr]; 3771 - struct v4l2_subdev_state pad_state = { 3772 - .pads = &input->pad_cfg, 3773 - }; 3774 3773 struct v4l2_subdev_format format = { 3775 3774 .which = V4L2_SUBDEV_FORMAT_TRY, 3776 3775 }; ··· 3804 3809 dev_dbg(isp->dev, "try_mbus_fmt: asking for %ux%u\n", 3805 3810 format.format.width, format.format.height); 3806 3811 3807 - ret = atomisp_set_crop(isp, &format.format, V4L2_SUBDEV_FORMAT_TRY); 3808 - if (ret) 3809 - return ret; 3812 + v4l2_subdev_lock_state(input->try_sd_state); 3810 3813 3811 - ret = v4l2_subdev_call(input->camera, pad, set_fmt, &pad_state, &format); 3814 + ret = atomisp_set_crop(isp, &format.format, input->try_sd_state, 3815 + V4L2_SUBDEV_FORMAT_TRY); 3816 + if (ret == 0) 3817 + ret = v4l2_subdev_call(input->camera, pad, set_fmt, 3818 + input->try_sd_state, &format); 3819 + 3820 + 
v4l2_subdev_unlock_state(input->try_sd_state); 3821 + 3812 3822 if (ret) 3813 3823 return ret; 3814 3824 ··· 4238 4238 struct atomisp_device *isp = asd->isp; 4239 4239 struct atomisp_input_subdev *input = &isp->inputs[asd->input_curr]; 4240 4240 const struct atomisp_format_bridge *format; 4241 - struct v4l2_subdev_state pad_state = { 4242 - .pads = &input->pad_cfg, 4243 - }; 4241 + struct v4l2_subdev_state *act_sd_state; 4244 4242 struct v4l2_subdev_format vformat = { 4245 4243 .which = V4L2_SUBDEV_FORMAT_TRY, 4246 4244 }; ··· 4266 4268 4267 4269 /* Disable dvs if resolution can't be supported by sensor */ 4268 4270 if (asd->params.video_dis_en && asd->run_mode->val == ATOMISP_RUN_MODE_VIDEO) { 4269 - ret = atomisp_set_crop(isp, &vformat.format, V4L2_SUBDEV_FORMAT_TRY); 4270 - if (ret) 4271 - return ret; 4271 + v4l2_subdev_lock_state(input->try_sd_state); 4272 4272 4273 - vformat.which = V4L2_SUBDEV_FORMAT_TRY; 4274 - ret = v4l2_subdev_call(input->camera, pad, set_fmt, &pad_state, &vformat); 4273 + ret = atomisp_set_crop(isp, &vformat.format, input->try_sd_state, 4274 + V4L2_SUBDEV_FORMAT_TRY); 4275 + if (ret == 0) { 4276 + vformat.which = V4L2_SUBDEV_FORMAT_TRY; 4277 + ret = v4l2_subdev_call(input->camera, pad, set_fmt, 4278 + input->try_sd_state, &vformat); 4279 + } 4280 + 4281 + v4l2_subdev_unlock_state(input->try_sd_state); 4282 + 4275 4283 if (ret) 4276 4284 return ret; 4277 4285 ··· 4295 4291 } 4296 4292 } 4297 4293 4298 - ret = atomisp_set_crop(isp, &vformat.format, V4L2_SUBDEV_FORMAT_ACTIVE); 4299 - if (ret) 4300 - return ret; 4294 + act_sd_state = v4l2_subdev_lock_and_get_active_state(input->camera); 4301 4295 4302 - vformat.which = V4L2_SUBDEV_FORMAT_ACTIVE; 4303 - ret = v4l2_subdev_call(input->camera, pad, set_fmt, NULL, &vformat); 4296 + ret = atomisp_set_crop(isp, &vformat.format, act_sd_state, 4297 + V4L2_SUBDEV_FORMAT_ACTIVE); 4298 + if (ret == 0) { 4299 + vformat.which = V4L2_SUBDEV_FORMAT_ACTIVE; 4300 + ret = v4l2_subdev_call(input->camera, pad, 
set_fmt, act_sd_state, &vformat); 4301 + } 4302 + 4303 + if (act_sd_state) 4304 + v4l2_subdev_unlock_state(act_sd_state); 4305 + 4304 4306 if (ret) 4305 4307 return ret; 4306 4308
+2 -2
drivers/staging/media/atomisp/pci/atomisp_internal.h
··· 132 132 /* Sensor rects for sensors which support crop */ 133 133 struct v4l2_rect native_rect; 134 134 struct v4l2_rect active_rect; 135 - /* Sensor pad_cfg for which == V4L2_SUBDEV_FORMAT_TRY calls */ 136 - struct v4l2_subdev_pad_config pad_cfg; 135 + /* Sensor state for which == V4L2_SUBDEV_FORMAT_TRY calls */ 136 + struct v4l2_subdev_state *try_sd_state; 137 137 138 138 struct v4l2_subdev *motor; 139 139
+31 -21
drivers/staging/media/atomisp/pci/atomisp_ioctl.c
··· 781 781 .which = V4L2_SUBDEV_FORMAT_ACTIVE, 782 782 .code = input->code, 783 783 }; 784 + struct v4l2_subdev_state *act_sd_state; 784 785 int ret; 786 + 787 + if (!input->camera) 788 + return -EINVAL; 785 789 786 790 if (input->crop_support) 787 791 return atomisp_enum_framesizes_crop(isp, fsize); 788 792 789 - ret = v4l2_subdev_call(input->camera, pad, enum_frame_size, NULL, &fse); 793 + act_sd_state = v4l2_subdev_lock_and_get_active_state(input->camera); 794 + ret = v4l2_subdev_call(input->camera, pad, enum_frame_size, 795 + act_sd_state, &fse); 796 + if (act_sd_state) 797 + v4l2_subdev_unlock_state(act_sd_state); 790 798 if (ret) 791 799 return ret; 792 800 ··· 811 803 struct video_device *vdev = video_devdata(file); 812 804 struct atomisp_device *isp = video_get_drvdata(vdev); 813 805 struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 806 + struct atomisp_input_subdev *input = &isp->inputs[asd->input_curr]; 814 807 struct v4l2_subdev_frame_interval_enum fie = { 815 - .code = atomisp_in_fmt_conv[0].code, 808 + .code = atomisp_in_fmt_conv[0].code, 816 809 .index = fival->index, 817 810 .width = fival->width, 818 811 .height = fival->height, 819 812 .which = V4L2_SUBDEV_FORMAT_ACTIVE, 820 813 }; 814 + struct v4l2_subdev_state *act_sd_state; 821 815 int ret; 822 816 823 - ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 824 - pad, enum_frame_interval, NULL, 825 - &fie); 817 + if (!input->camera) 818 + return -EINVAL; 819 + 820 + act_sd_state = v4l2_subdev_lock_and_get_active_state(input->camera); 821 + ret = v4l2_subdev_call(input->camera, pad, enum_frame_interval, 822 + act_sd_state, &fie); 823 + if (act_sd_state) 824 + v4l2_subdev_unlock_state(act_sd_state); 826 825 if (ret) 827 826 return ret; 828 827 ··· 845 830 struct video_device *vdev = video_devdata(file); 846 831 struct atomisp_device *isp = video_get_drvdata(vdev); 847 832 struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 833 + struct atomisp_input_subdev *input = &isp->inputs[asd->input_curr]; 848 834 struct v4l2_subdev_mbus_code_enum code = { 849 835 .which = V4L2_SUBDEV_FORMAT_ACTIVE, 850 836 }; 851 837 const struct atomisp_format_bridge *format; 852 - struct v4l2_subdev *camera; 838 + struct v4l2_subdev_state *act_sd_state; 853 839 unsigned int i, fi = 0; 854 - int rval; 840 + int ret; 855 841 856 - camera = isp->inputs[asd->input_curr].camera; 857 - if(!camera) { 858 - dev_err(isp->dev, "%s(): camera is NULL, device is %s\n", 859 - __func__, vdev->name); 842 + if (!input->camera) 860 843 return -EINVAL; 861 - } 862 844 863 - rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code); 864 - if (rval == -ENOIOCTLCMD) { 865 - dev_warn(isp->dev, 866 - "enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n", 867 - camera->name); 868 - } 869 - 870 - if (rval) 871 - return rval; 845 + act_sd_state = v4l2_subdev_lock_and_get_active_state(input->camera); 846 + ret = v4l2_subdev_call(input->camera, pad, enum_mbus_code, 847 + act_sd_state, &code); 848 + if (act_sd_state) 849 + v4l2_subdev_unlock_state(act_sd_state); 850 + if (ret) 851 + return ret; 872 852 873 853 for (i = 0; i < ARRAY_SIZE(atomisp_output_fmts); i++) { 874 854 format = &atomisp_output_fmts[i];
+44 -15
drivers/staging/media/atomisp/pci/atomisp_v4l2.c
··· 862 862 v4l2_device_unregister(&isp->v4l2_dev); 863 863 media_device_unregister(&isp->media_dev); 864 864 media_device_cleanup(&isp->media_dev); 865 + 866 + for (i = 0; i < isp->input_cnt; i++) 867 + __v4l2_subdev_state_free(isp->inputs[i].try_sd_state); 865 868 } 866 869 867 870 static int atomisp_register_entities(struct atomisp_device *isp) ··· 936 933 937 934 static void atomisp_init_sensor(struct atomisp_input_subdev *input) 938 935 { 936 + static struct lock_class_key try_sd_state_key; 939 937 struct v4l2_subdev_mbus_code_enum mbus_code_enum = { }; 940 938 struct v4l2_subdev_frame_size_enum fse = { }; 941 - struct v4l2_subdev_state sd_state = { 942 - .pads = &input->pad_cfg, 943 - }; 944 939 struct v4l2_subdev_selection sel = { }; 940 + struct v4l2_subdev_state *try_sd_state, *act_sd_state; 945 941 int i, err; 946 942 943 + /* 944 + * FIXME: Drivers are not supposed to use __v4l2_subdev_state_alloc() 945 + * but atomisp needs this for try_fmt on its /dev/video# node since 946 + * it emulates a normal v4l2 device there, passing through try_fmt / 947 + * set_fmt to the sensor. 
948 + */ 949 + try_sd_state = __v4l2_subdev_state_alloc(input->camera, 950 + "atomisp:try_sd_state->lock", &try_sd_state_key); 951 + if (IS_ERR(try_sd_state)) 952 + return; 953 + 954 + input->try_sd_state = try_sd_state; 955 + 956 + act_sd_state = v4l2_subdev_lock_and_get_active_state(input->camera); 957 + 947 958 mbus_code_enum.which = V4L2_SUBDEV_FORMAT_ACTIVE; 948 - err = v4l2_subdev_call(input->camera, pad, enum_mbus_code, NULL, &mbus_code_enum); 959 + err = v4l2_subdev_call(input->camera, pad, enum_mbus_code, 960 + act_sd_state, &mbus_code_enum); 949 961 if (!err) 950 962 input->code = mbus_code_enum.code; 951 963 952 964 sel.which = V4L2_SUBDEV_FORMAT_ACTIVE; 953 965 sel.target = V4L2_SEL_TGT_NATIVE_SIZE; 954 - err = v4l2_subdev_call(input->camera, pad, get_selection, NULL, &sel); 966 + err = v4l2_subdev_call(input->camera, pad, get_selection, 967 + act_sd_state, &sel); 955 968 if (err) 956 - return; 969 + goto unlock_act_sd_state; 957 970 958 971 input->native_rect = sel.r; 959 972 960 973 sel.which = V4L2_SUBDEV_FORMAT_ACTIVE; 961 974 sel.target = V4L2_SEL_TGT_CROP_DEFAULT; 962 - err = v4l2_subdev_call(input->camera, pad, get_selection, NULL, &sel); 975 + err = v4l2_subdev_call(input->camera, pad, get_selection, 976 + act_sd_state, &sel); 963 977 if (err) 964 - return; 978 + goto unlock_act_sd_state; 965 979 966 980 input->active_rect = sel.r; 967 981 ··· 993 973 fse.code = input->code; 994 974 fse.which = V4L2_SUBDEV_FORMAT_ACTIVE; 995 975 996 - err = v4l2_subdev_call(input->camera, pad, enum_frame_size, NULL, &fse); 976 + err = v4l2_subdev_call(input->camera, pad, enum_frame_size, 977 + act_sd_state, &fse); 997 978 if (err) 998 979 break; 999 980 ··· 1010 989 * for padding, set the crop rect to cover the entire sensor instead 1011 990 * of only the default active area. 1012 991 * 1013 - * Do this for both try and active formats since the try_crop rect in 1014 - * pad_cfg may influence (clamp) future try_fmt calls with which == try. 
992 + * Do this for both try and active formats since the crop rect in 993 + * try_sd_state may influence (clamp size) in calls with which == try. 1015 994 */ 1016 995 sel.which = V4L2_SUBDEV_FORMAT_TRY; 1017 996 sel.target = V4L2_SEL_TGT_CROP; 1018 997 sel.r = input->native_rect; 1019 - err = v4l2_subdev_call(input->camera, pad, set_selection, &sd_state, &sel); 998 + v4l2_subdev_lock_state(input->try_sd_state); 999 + err = v4l2_subdev_call(input->camera, pad, set_selection, 1000 + input->try_sd_state, &sel); 1001 + v4l2_subdev_unlock_state(input->try_sd_state); 1020 1002 if (err) 1021 - return; 1003 + goto unlock_act_sd_state; 1022 1004 1023 1005 sel.which = V4L2_SUBDEV_FORMAT_ACTIVE; 1024 1006 sel.target = V4L2_SEL_TGT_CROP; 1025 1007 sel.r = input->native_rect; 1026 - err = v4l2_subdev_call(input->camera, pad, set_selection, NULL, &sel); 1008 + err = v4l2_subdev_call(input->camera, pad, set_selection, 1009 + act_sd_state, &sel); 1027 1010 if (err) 1028 - return; 1011 + goto unlock_act_sd_state; 1029 1012 1030 1013 dev_info(input->camera->dev, "Supports crop native %dx%d active %dx%d binning %d\n", 1031 1014 input->native_rect.width, input->native_rect.height, ··· 1037 1012 input->binning_support); 1038 1013 1039 1014 input->crop_support = true; 1015 + 1016 + unlock_act_sd_state: 1017 + if (act_sd_state) 1018 + v4l2_subdev_unlock_state(act_sd_state); 1040 1019 } 1041 1020 1042 1021 int atomisp_register_device_nodes(struct atomisp_device *isp)
+32 -16
drivers/target/target_core_configfs.c
··· 759 759 return count; 760 760 } 761 761 762 + static int target_try_configure_unmap(struct se_device *dev, 763 + const char *config_opt) 764 + { 765 + if (!dev->transport->configure_unmap) { 766 + pr_err("Generic Block Discard not supported\n"); 767 + return -ENOSYS; 768 + } 769 + 770 + if (!target_dev_configured(dev)) { 771 + pr_err("Generic Block Discard setup for %s requires device to be configured\n", 772 + config_opt); 773 + return -ENODEV; 774 + } 775 + 776 + if (!dev->transport->configure_unmap(dev)) { 777 + pr_err("Generic Block Discard setup for %s failed\n", 778 + config_opt); 779 + return -ENOSYS; 780 + } 781 + 782 + return 0; 783 + } 784 + 762 785 static ssize_t emulate_tpu_store(struct config_item *item, 763 786 const char *page, size_t count) 764 787 { ··· 799 776 * Discard supported is detected iblock_create_virtdevice(). 800 777 */ 801 778 if (flag && !da->max_unmap_block_desc_count) { 802 - if (!dev->transport->configure_unmap || 803 - !dev->transport->configure_unmap(dev)) { 804 - pr_err("Generic Block Discard not supported\n"); 805 - return -ENOSYS; 806 - } 779 + ret = target_try_configure_unmap(dev, "emulate_tpu"); 780 + if (ret) 781 + return ret; 807 782 } 808 783 809 784 da->emulate_tpu = flag; ··· 827 806 * Discard supported is detected iblock_create_virtdevice(). 828 807 */ 829 808 if (flag && !da->max_unmap_block_desc_count) { 830 - if (!dev->transport->configure_unmap || 831 - !dev->transport->configure_unmap(dev)) { 832 - pr_err("Generic Block Discard not supported\n"); 833 - return -ENOSYS; 834 - } 809 + ret = target_try_configure_unmap(dev, "emulate_tpws"); 810 + if (ret) 811 + return ret; 835 812 } 836 813 837 814 da->emulate_tpws = flag; ··· 1041 1022 * Discard supported is detected iblock_configure_device(). 
1042 1023 */ 1043 1024 if (flag && !da->max_unmap_block_desc_count) { 1044 - if (!dev->transport->configure_unmap || 1045 - !dev->transport->configure_unmap(dev)) { 1046 - pr_err("dev[%p]: Thin Provisioning LBPRZ will not be set because max_unmap_block_desc_count is zero\n", 1047 - da->da_dev); 1048 - return -ENOSYS; 1049 - } 1025 + ret = target_try_configure_unmap(dev, "unmap_zeroes_data"); 1026 + if (ret) 1027 + return ret; 1050 1028 } 1051 1029 da->unmap_zeroes_data = flag; 1052 1030 pr_debug("dev[%p]: SE Device Thin Provisioning LBPRZ bit: %d\n",
+1 -1
drivers/thunderbolt/tb_regs.h
··· 203 203 #define ROUTER_CS_5_WOP BIT(1) 204 204 #define ROUTER_CS_5_WOU BIT(2) 205 205 #define ROUTER_CS_5_WOD BIT(3) 206 - #define ROUTER_CS_5_C3S BIT(23) 206 + #define ROUTER_CS_5_CNS BIT(23) 207 207 #define ROUTER_CS_5_PTO BIT(24) 208 208 #define ROUTER_CS_5_UTO BIT(25) 209 209 #define ROUTER_CS_5_HCO BIT(26)
+1 -1
drivers/thunderbolt/usb4.c
··· 290 290 } 291 291 292 292 /* TBT3 supported by the CM */ 293 - val |= ROUTER_CS_5_C3S; 293 + val &= ~ROUTER_CS_5_CNS; 294 294 295 295 return tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1); 296 296 }
+1 -1
drivers/tty/serial/8250/8250_pci1xxxx.c
··· 302 302 * to read, the data is received one byte at a time. 303 303 */ 304 304 while (valid_burst_count--) { 305 - if (*buff_index >= (RX_BUF_SIZE - UART_BURST_SIZE)) 305 + if (*buff_index > (RX_BUF_SIZE - UART_BURST_SIZE)) 306 306 break; 307 307 burst_buf = (u32 *)&rx_buff[*buff_index]; 308 308 *burst_buf = readl(port->membase + UART_RX_BURST_FIFO);
+4 -1
drivers/tty/serial/mxs-auart.c
··· 605 605 return; 606 606 } 607 607 608 - pending = uart_port_tx(&s->port, ch, 608 + pending = uart_port_tx_flags(&s->port, ch, UART_TX_NOSTOP, 609 609 !(mxs_read(s, REG_STAT) & AUART_STAT_TXFF), 610 610 mxs_write(ch, s, REG_DATA)); 611 611 if (pending) 612 612 mxs_set(AUART_INTR_TXIEN, s, REG_INTR); 613 613 else 614 614 mxs_clr(AUART_INTR_TXIEN, s, REG_INTR); 615 + 616 + if (uart_tx_stopped(&s->port)) 617 + mxs_auart_stop_tx(&s->port); 615 618 } 616 619 617 620 static void mxs_auart_rx_char(struct mxs_auart_port *s)
-1
drivers/usb/dwc3/core.h
··· 376 376 /* Global HWPARAMS4 Register */ 377 377 #define DWC3_GHWPARAMS4_HIBER_SCRATCHBUFS(n) (((n) & (0x0f << 13)) >> 13) 378 378 #define DWC3_MAX_HIBER_SCRATCHBUFS 15 379 - #define DWC3_EXT_BUFF_CONTROL BIT(21) 380 379 381 380 /* Global HWPARAMS6 Register */ 382 381 #define DWC3_GHWPARAMS6_BCSUPPORT BIT(14)
-6
drivers/usb/dwc3/gadget.c
··· 673 673 params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(bInterval_m1); 674 674 } 675 675 676 - if (dep->endpoint.fifo_mode) { 677 - if (!(dwc->hwparams.hwparams4 & DWC3_EXT_BUFF_CONTROL)) 678 - return -EINVAL; 679 - params.param1 |= DWC3_DEPCFG_EBC_HWO_NOWB | DWC3_DEPCFG_USE_EBC; 680 - } 681 - 682 676 return dwc3_send_gadget_ep_cmd(dep, DWC3_DEPCMD_SETEPCONFIG, &params); 683 677 } 684 678
-2
drivers/usb/dwc3/gadget.h
··· 26 26 #define DWC3_DEPCFG_XFER_NOT_READY_EN BIT(10) 27 27 #define DWC3_DEPCFG_FIFO_ERROR_EN BIT(11) 28 28 #define DWC3_DEPCFG_STREAM_EVENT_EN BIT(13) 29 - #define DWC3_DEPCFG_EBC_HWO_NOWB BIT(14) 30 - #define DWC3_DEPCFG_USE_EBC BIT(15) 31 29 #define DWC3_DEPCFG_BINTERVAL_M1(n) (((n) & 0xff) << 16) 32 30 #define DWC3_DEPCFG_STREAM_CAPABLE BIT(24) 33 31 #define DWC3_DEPCFG_EP_NUMBER(n) (((n) & 0x1f) << 25)
+12
fs/bcachefs/bcachefs.h
··· 1249 1249 return stdio; 1250 1250 } 1251 1251 1252 + static inline unsigned metadata_replicas_required(struct bch_fs *c) 1253 + { 1254 + return min(c->opts.metadata_replicas, 1255 + c->opts.metadata_replicas_required); 1256 + } 1257 + 1258 + static inline unsigned data_replicas_required(struct bch_fs *c) 1259 + { 1260 + return min(c->opts.data_replicas, 1261 + c->opts.data_replicas_required); 1262 + } 1263 + 1252 1264 #define BKEY_PADDED_ONSTACK(key, pad) \ 1253 1265 struct { struct bkey_i key; __u64 key ## _pad[pad]; } 1254 1266
+2 -1
fs/bcachefs/btree_update_interior.c
··· 280 280 writepoint_ptr(&c->btree_write_point), 281 281 &devs_have, 282 282 res->nr_replicas, 283 - c->opts.metadata_replicas_required, 283 + min(res->nr_replicas, 284 + c->opts.metadata_replicas_required), 284 285 watermark, 0, cl, &wp); 285 286 if (unlikely(ret)) 286 287 return ERR_PTR(ret);
+5 -4
fs/bcachefs/fs.c
··· 435 435 bch2_subvol_is_ro(c, inode->ei_subvol) ?: 436 436 __bch2_link(c, inode, dir, dentry); 437 437 if (unlikely(ret)) 438 - return ret; 438 + return bch2_err_class(ret); 439 439 440 440 ihold(&inode->v); 441 441 d_instantiate(dentry, &inode->v); ··· 487 487 struct bch_inode_info *dir= to_bch_ei(vdir); 488 488 struct bch_fs *c = dir->v.i_sb->s_fs_info; 489 489 490 - return bch2_subvol_is_ro(c, dir->ei_subvol) ?: 490 + int ret = bch2_subvol_is_ro(c, dir->ei_subvol) ?: 491 491 __bch2_unlink(vdir, dentry, false); 492 + return bch2_err_class(ret); 492 493 } 493 494 494 495 static int bch2_symlink(struct mnt_idmap *idmap, ··· 524 523 return 0; 525 524 err: 526 525 iput(&inode->v); 527 - return ret; 526 + return bch2_err_class(ret); 528 527 } 529 528 530 529 static int bch2_mkdir(struct mnt_idmap *idmap, ··· 642 641 src_inode, 643 642 dst_inode); 644 643 645 - return ret; 644 + return bch2_err_class(ret); 646 645 } 647 646 648 647 static void bch2_setattr_copy(struct mnt_idmap *idmap,
+1
fs/bcachefs/io_write.c
··· 1564 1564 BUG_ON(!op->write_point.v); 1565 1565 BUG_ON(bkey_eq(op->pos, POS_MAX)); 1566 1566 1567 + op->nr_replicas_required = min_t(unsigned, op->nr_replicas_required, op->nr_replicas); 1567 1568 op->start_time = local_clock(); 1568 1569 bch2_keylist_init(&op->insert_keys, op->inline_keys); 1569 1570 wbio_init(bio)->put_bio = false;
+3 -1
fs/bcachefs/journal_io.c
··· 1478 1478 c->opts.foreground_target; 1479 1479 unsigned i, replicas = 0, replicas_want = 1480 1480 READ_ONCE(c->opts.metadata_replicas); 1481 + unsigned replicas_need = min_t(unsigned, replicas_want, 1482 + READ_ONCE(c->opts.metadata_replicas_required)); 1481 1483 1482 1484 rcu_read_lock(); 1483 1485 retry: ··· 1528 1526 1529 1527 BUG_ON(bkey_val_u64s(&w->key.k) > BCH_REPLICAS_MAX); 1530 1528 1531 - return replicas >= c->opts.metadata_replicas_required ? 0 : -EROFS; 1529 + return replicas >= replicas_need ? 0 : -EROFS; 1532 1530 } 1533 1531 1534 1532 static void journal_buf_realloc(struct journal *j, struct journal_buf *buf)
+1 -1
fs/bcachefs/journal_reclaim.c
··· 205 205 206 206 j->can_discard = can_discard; 207 207 208 - if (nr_online < c->opts.metadata_replicas_required) { 208 + if (nr_online < metadata_replicas_required(c)) { 209 209 ret = JOURNAL_ERR_insufficient_devices; 210 210 goto out; 211 211 }
+1
fs/bcachefs/printbuf.c
··· 56 56 57 57 va_copy(args2, args); 58 58 len = vsnprintf(out->buf + out->pos, printbuf_remaining(out), fmt, args2); 59 + va_end(args2); 59 60 } while (len + 1 >= printbuf_remaining(out) && 60 61 !bch2_printbuf_make_room(out, len + 1)); 61 62
+6 -5
fs/bcachefs/recovery.c
··· 577 577 578 578 static bool check_version_upgrade(struct bch_fs *c) 579 579 { 580 - unsigned latest_compatible = bch2_latest_compatible_version(c->sb.version); 581 580 unsigned latest_version = bcachefs_metadata_version_current; 581 + unsigned latest_compatible = min(latest_version, 582 + bch2_latest_compatible_version(c->sb.version)); 582 583 unsigned old_version = c->sb.version_upgrade_complete ?: c->sb.version; 583 584 unsigned new_version = 0; 584 585 ··· 598 597 new_version = latest_version; 599 598 break; 600 599 case BCH_VERSION_UPGRADE_none: 601 - new_version = old_version; 600 + new_version = min(old_version, latest_version); 602 601 break; 603 602 } 604 603 } ··· 775 774 goto err; 776 775 } 777 776 778 - if (!(c->opts.nochanges && c->opts.norecovery)) { 777 + if (!c->opts.nochanges) { 779 778 mutex_lock(&c->sb_lock); 780 779 bool write_sb = false; 781 780 ··· 805 804 if (bch2_check_version_downgrade(c)) { 806 805 struct printbuf buf = PRINTBUF; 807 806 808 - prt_str(&buf, "Version downgrade required:\n"); 807 + prt_str(&buf, "Version downgrade required:"); 809 808 810 809 __le64 passes = ext->recovery_passes_required[0]; 811 810 bch2_sb_set_downgrade(c, ··· 813 812 BCH_VERSION_MINOR(c->sb.version)); 814 813 passes = ext->recovery_passes_required[0] & ~passes; 815 814 if (passes) { 816 - prt_str(&buf, " running recovery passes: "); 815 + prt_str(&buf, "\n running recovery passes: "); 817 816 prt_bitflags(&buf, bch2_recovery_passes, 818 817 bch2_recovery_passes_from_stable(le64_to_cpu(passes))); 819 818 }
+1 -1
fs/bcachefs/sb-members.c
··· 421 421 m = bch2_members_v2_get_mut(c->disk_sb.sb, ca->dev_idx); 422 422 for (unsigned i = 0; i < ARRAY_SIZE(m->errors_at_reset); i++) 423 423 m->errors_at_reset[i] = cpu_to_le64(atomic64_read(&ca->errors[i])); 424 - m->errors_reset_time = ktime_get_real_seconds(); 424 + m->errors_reset_time = cpu_to_le64(ktime_get_real_seconds()); 425 425 426 426 bch2_write_super(c); 427 427 mutex_unlock(&c->sb_lock);
+1 -1
fs/bcachefs/super-io.c
··· 717 717 718 718 if (IS_ERR(sb->bdev_handle)) { 719 719 ret = PTR_ERR(sb->bdev_handle); 720 - goto out; 720 + goto err; 721 721 } 722 722 sb->bdev = sb->bdev_handle->bdev; 723 723
+2 -2
fs/bcachefs/super.c
··· 1428 1428 1429 1429 required = max(!(flags & BCH_FORCE_IF_METADATA_DEGRADED) 1430 1430 ? c->opts.metadata_replicas 1431 - : c->opts.metadata_replicas_required, 1431 + : metadata_replicas_required(c), 1432 1432 !(flags & BCH_FORCE_IF_DATA_DEGRADED) 1433 1433 ? c->opts.data_replicas 1434 - : c->opts.data_replicas_required); 1434 + : data_replicas_required(c)); 1435 1435 1436 1436 return nr_rw >= required; 1437 1437 case BCH_MEMBER_STATE_failed:
+1 -1
fs/btrfs/defrag.c
··· 1046 1046 goto add; 1047 1047 1048 1048 /* Skip too large extent */ 1049 - if (range_len >= extent_thresh) 1049 + if (em->len >= extent_thresh) 1050 1050 goto next; 1051 1051 1052 1052 /*
+45 -17
fs/btrfs/extent_io.c
··· 2689 2689 * it beyond i_size. 2690 2690 */ 2691 2691 while (cur_offset < end && cur_offset < i_size) { 2692 + struct extent_state *cached_state = NULL; 2692 2693 u64 delalloc_start; 2693 2694 u64 delalloc_end; 2694 2695 u64 prealloc_start; 2696 + u64 lockstart; 2697 + u64 lockend; 2695 2698 u64 prealloc_len = 0; 2696 2699 bool delalloc; 2697 2700 2701 + lockstart = round_down(cur_offset, inode->root->fs_info->sectorsize); 2702 + lockend = round_up(end, inode->root->fs_info->sectorsize); 2703 + 2704 + /* 2705 + * We are only locking for the delalloc range because that's the 2706 + * only thing that can change here. With fiemap we have a lock 2707 + * on the inode, so no buffered or direct writes can happen. 2708 + * 2709 + * However mmaps and normal page writeback will cause this to 2710 + * change arbitrarily. We have to lock the extent lock here to 2711 + * make sure that nobody messes with the tree while we're doing 2712 + * btrfs_find_delalloc_in_range. 2713 + */ 2714 + lock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 2698 2715 delalloc = btrfs_find_delalloc_in_range(inode, cur_offset, end, 2699 2716 delalloc_cached_state, 2700 2717 &delalloc_start, 2701 2718 &delalloc_end); 2719 + unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 2702 2720 if (!delalloc) 2703 2721 break; 2704 2722 ··· 2884 2866 u64 start, u64 len) 2885 2867 { 2886 2868 const u64 ino = btrfs_ino(inode); 2887 - struct extent_state *cached_state = NULL; 2888 2869 struct extent_state *delalloc_cached_state = NULL; 2889 2870 struct btrfs_path *path; 2890 2871 struct fiemap_cache cache = { 0 }; 2891 2872 struct btrfs_backref_share_check_ctx *backref_ctx; 2892 2873 u64 last_extent_end; 2893 2874 u64 prev_extent_end; 2894 - u64 lockstart; 2895 - u64 lockend; 2875 + u64 range_start; 2876 + u64 range_end; 2877 + const u64 sectorsize = inode->root->fs_info->sectorsize; 2896 2878 bool stopped = false; 2897 2879 int ret; 2898 2880 ··· 2903 2885 goto out; 2904 2886 } 2905 
2887 2906 - lockstart = round_down(start, inode->root->fs_info->sectorsize); 2907 - lockend = round_up(start + len, inode->root->fs_info->sectorsize); 2908 - prev_extent_end = lockstart; 2888 + range_start = round_down(start, sectorsize); 2889 + range_end = round_up(start + len, sectorsize); 2890 + prev_extent_end = range_start; 2909 2891 2910 2892 btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED); 2911 - lock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 2912 2893 2913 2894 ret = fiemap_find_last_extent_offset(inode, path, &last_extent_end); 2914 2895 if (ret < 0) ··· 2915 2898 btrfs_release_path(path); 2916 2899 2917 2900 path->reada = READA_FORWARD; 2918 - ret = fiemap_search_slot(inode, path, lockstart); 2901 + ret = fiemap_search_slot(inode, path, range_start); 2919 2902 if (ret < 0) { 2920 2903 goto out_unlock; 2921 2904 } else if (ret > 0) { ··· 2927 2910 goto check_eof_delalloc; 2928 2911 } 2929 2912 2930 - while (prev_extent_end < lockend) { 2913 + while (prev_extent_end < range_end) { 2931 2914 struct extent_buffer *leaf = path->nodes[0]; 2932 2915 struct btrfs_file_extent_item *ei; 2933 2916 struct btrfs_key key; ··· 2950 2933 * The first iteration can leave us at an extent item that ends 2951 2934 * before our range's start. Move to the next item. 2952 2935 */ 2953 - if (extent_end <= lockstart) 2936 + if (extent_end <= range_start) 2954 2937 goto next_item; 2955 2938 2956 2939 backref_ctx->curr_leaf_bytenr = leaf->start; 2957 2940 2958 2941 /* We have in implicit hole (NO_HOLES feature enabled). 
*/ 2959 2942 if (prev_extent_end < key.offset) { 2960 - const u64 range_end = min(key.offset, lockend) - 1; 2943 + const u64 hole_end = min(key.offset, range_end) - 1; 2961 2944 2962 2945 ret = fiemap_process_hole(inode, fieinfo, &cache, 2963 2946 &delalloc_cached_state, 2964 2947 backref_ctx, 0, 0, 0, 2965 - prev_extent_end, range_end); 2948 + prev_extent_end, hole_end); 2966 2949 if (ret < 0) { 2967 2950 goto out_unlock; 2968 2951 } else if (ret > 0) { ··· 2972 2955 } 2973 2956 2974 2957 /* We've reached the end of the fiemap range, stop. */ 2975 - if (key.offset >= lockend) { 2958 + if (key.offset >= range_end) { 2976 2959 stopped = true; 2977 2960 break; 2978 2961 } ··· 3066 3049 btrfs_free_path(path); 3067 3050 path = NULL; 3068 3051 3069 - if (!stopped && prev_extent_end < lockend) { 3052 + if (!stopped && prev_extent_end < range_end) { 3070 3053 ret = fiemap_process_hole(inode, fieinfo, &cache, 3071 3054 &delalloc_cached_state, backref_ctx, 3072 - 0, 0, 0, prev_extent_end, lockend - 1); 3055 + 0, 0, 0, prev_extent_end, range_end - 1); 3073 3056 if (ret < 0) 3074 3057 goto out_unlock; 3075 - prev_extent_end = lockend; 3058 + prev_extent_end = range_end; 3076 3059 } 3077 3060 3078 3061 if (cache.cached && cache.offset + cache.len >= last_extent_end) { 3079 3062 const u64 i_size = i_size_read(&inode->vfs_inode); 3080 3063 3081 3064 if (prev_extent_end < i_size) { 3065 + struct extent_state *cached_state = NULL; 3082 3066 u64 delalloc_start; 3083 3067 u64 delalloc_end; 3068 + u64 lockstart; 3069 + u64 lockend; 3084 3070 bool delalloc; 3085 3071 3072 + lockstart = round_down(prev_extent_end, sectorsize); 3073 + lockend = round_up(i_size, sectorsize); 3074 + 3075 + /* 3076 + * See the comment in fiemap_process_hole as to why 3077 + * we're doing the locking here. 
3078 + */ 3079 + lock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 3086 3080 delalloc = btrfs_find_delalloc_in_range(inode, 3087 3081 prev_extent_end, 3088 3082 i_size - 1, 3089 3083 &delalloc_cached_state, 3090 3084 &delalloc_start, 3091 3085 &delalloc_end); 3086 + unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 3092 3087 if (!delalloc) 3093 3088 cache.flags |= FIEMAP_EXTENT_LAST; 3094 3089 } else { ··· 3111 3082 ret = emit_last_fiemap_cache(fieinfo, &cache); 3112 3083 3113 3084 out_unlock: 3114 - unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 3115 3085 btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 3116 3086 out: 3117 3087 free_extent_state(delalloc_cached_state);
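The btrfs fiemap changes above repeatedly round a byte range out to sector boundaries (`round_down(cur_offset, sectorsize)` / `round_up(end, sectorsize)`) before taking the extent lock. With a power-of-two sector size this is the usual mask arithmetic; a minimal standalone sketch (function names are illustrative, not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

#define SECTORSIZE 4096ULL /* typical btrfs sectorsize; an assumption here */

/* Power-of-two rounding as used to compute lockstart/lockend. */
static uint64_t round_down_u64(uint64_t n, uint64_t m)
{
	return n & ~(m - 1);
}

static uint64_t round_up_u64(uint64_t n, uint64_t m)
{
	return (n + m - 1) & ~(m - 1);
}
```

Widening the locked range to whole sectors this way guarantees the delalloc search in `btrfs_find_delalloc_in_range()` is covered even when `cur_offset` and `end` are not sector-aligned.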
+40 -25
fs/ceph/caps.c
··· 2156 2156 ceph_cap_string(cap->implemented), 2157 2157 ceph_cap_string(revoking)); 2158 2158 2159 + /* completed revocation? going down and there are no caps? */ 2160 + if (revoking) { 2161 + if ((revoking & cap_used) == 0) { 2162 + doutc(cl, "completed revocation of %s\n", 2163 + ceph_cap_string(cap->implemented & ~cap->issued)); 2164 + goto ack; 2165 + } 2166 + 2167 + /* 2168 + * If the "i_wrbuffer_ref" was increased by mmap or generic 2169 + * cache write just before the ceph_check_caps() is called, 2170 + * the Fb capability revoking will fail this time. Then we 2171 + * must wait for the BDI's delayed work to flush the dirty 2172 + * pages and to release the "i_wrbuffer_ref", which will cost 2173 + * at most 5 seconds. That means the MDS needs to wait at 2174 + * most 5 seconds to finished the Fb capability's revocation. 2175 + * 2176 + * Let's queue a writeback for it. 2177 + */ 2178 + if (S_ISREG(inode->i_mode) && ci->i_wrbuffer_ref && 2179 + (revoking & CEPH_CAP_FILE_BUFFER)) 2180 + queue_writeback = true; 2181 + } 2182 + 2159 2183 if (cap == ci->i_auth_cap && 2160 2184 (cap->issued & CEPH_CAP_FILE_WR)) { 2161 2185 /* request larger max_size from MDS? */ ··· 2205 2181 doutc(cl, "flushing snap caps\n"); 2206 2182 goto ack; 2207 2183 } 2208 - } 2209 - 2210 - /* completed revocation? going down and there are no caps? */ 2211 - if (revoking) { 2212 - if ((revoking & cap_used) == 0) { 2213 - doutc(cl, "completed revocation of %s\n", 2214 - ceph_cap_string(cap->implemented & ~cap->issued)); 2215 - goto ack; 2216 - } 2217 - 2218 - /* 2219 - * If the "i_wrbuffer_ref" was increased by mmap or generic 2220 - * cache write just before the ceph_check_caps() is called, 2221 - * the Fb capability revoking will fail this time. Then we 2222 - * must wait for the BDI's delayed work to flush the dirty 2223 - * pages and to release the "i_wrbuffer_ref", which will cost 2224 - * at most 5 seconds. 
That means the MDS needs to wait at 2225 - * most 5 seconds to finished the Fb capability's revocation. 2226 - * 2227 - * Let's queue a writeback for it. 2228 - */ 2229 - if (S_ISREG(inode->i_mode) && ci->i_wrbuffer_ref && 2230 - (revoking & CEPH_CAP_FILE_BUFFER)) 2231 - queue_writeback = true; 2232 2184 } 2233 2185 2234 2186 /* want more caps from mds? */ ··· 4772 4772 if (__ceph_caps_dirty(ci)) { 4773 4773 struct ceph_mds_client *mdsc = 4774 4774 ceph_inode_to_fs_client(inode)->mdsc; 4775 - __cap_delay_requeue_front(mdsc, ci); 4775 + 4776 + doutc(mdsc->fsc->client, "%p %llx.%llx\n", inode, 4777 + ceph_vinop(inode)); 4778 + spin_lock(&mdsc->cap_unlink_delay_lock); 4779 + ci->i_ceph_flags |= CEPH_I_FLUSH; 4780 + if (!list_empty(&ci->i_cap_delay_list)) 4781 + list_del_init(&ci->i_cap_delay_list); 4782 + list_add_tail(&ci->i_cap_delay_list, 4783 + &mdsc->cap_unlink_delay_list); 4784 + spin_unlock(&mdsc->cap_unlink_delay_lock); 4785 + 4786 + /* 4787 + * Fire the work immediately, because the MDS maybe 4788 + * waiting for caps release. 4789 + */ 4790 + ceph_queue_cap_unlink_work(mdsc); 4776 4791 } 4777 4792 } 4778 4793 spin_unlock(&ci->i_ceph_lock);
+48
fs/ceph/mds_client.c
··· 2484 2484 } 2485 2485 } 2486 2486 2487 + void ceph_queue_cap_unlink_work(struct ceph_mds_client *mdsc) 2488 + { 2489 + struct ceph_client *cl = mdsc->fsc->client; 2490 + if (mdsc->stopping) 2491 + return; 2492 + 2493 + if (queue_work(mdsc->fsc->cap_wq, &mdsc->cap_unlink_work)) { 2494 + doutc(cl, "caps unlink work queued\n"); 2495 + } else { 2496 + doutc(cl, "failed to queue caps unlink work\n"); 2497 + } 2498 + } 2499 + 2500 + static void ceph_cap_unlink_work(struct work_struct *work) 2501 + { 2502 + struct ceph_mds_client *mdsc = 2503 + container_of(work, struct ceph_mds_client, cap_unlink_work); 2504 + struct ceph_client *cl = mdsc->fsc->client; 2505 + 2506 + doutc(cl, "begin\n"); 2507 + spin_lock(&mdsc->cap_unlink_delay_lock); 2508 + while (!list_empty(&mdsc->cap_unlink_delay_list)) { 2509 + struct ceph_inode_info *ci; 2510 + struct inode *inode; 2511 + 2512 + ci = list_first_entry(&mdsc->cap_unlink_delay_list, 2513 + struct ceph_inode_info, 2514 + i_cap_delay_list); 2515 + list_del_init(&ci->i_cap_delay_list); 2516 + 2517 + inode = igrab(&ci->netfs.inode); 2518 + if (inode) { 2519 + spin_unlock(&mdsc->cap_unlink_delay_lock); 2520 + doutc(cl, "on %p %llx.%llx\n", inode, 2521 + ceph_vinop(inode)); 2522 + ceph_check_caps(ci, CHECK_CAPS_FLUSH); 2523 + iput(inode); 2524 + spin_lock(&mdsc->cap_unlink_delay_lock); 2525 + } 2526 + } 2527 + spin_unlock(&mdsc->cap_unlink_delay_lock); 2528 + doutc(cl, "done\n"); 2529 + } 2530 + 2487 2531 /* 2488 2532 * requests 2489 2533 */ ··· 5403 5359 INIT_LIST_HEAD(&mdsc->cap_delay_list); 5404 5360 INIT_LIST_HEAD(&mdsc->cap_wait_list); 5405 5361 spin_lock_init(&mdsc->cap_delay_lock); 5362 + INIT_LIST_HEAD(&mdsc->cap_unlink_delay_list); 5363 + spin_lock_init(&mdsc->cap_unlink_delay_lock); 5406 5364 INIT_LIST_HEAD(&mdsc->snap_flush_list); 5407 5365 spin_lock_init(&mdsc->snap_flush_lock); 5408 5366 mdsc->last_cap_flush_tid = 1; ··· 5413 5367 spin_lock_init(&mdsc->cap_dirty_lock); 5414 5368 init_waitqueue_head(&mdsc->cap_flushing_wq); 
5415 5369 INIT_WORK(&mdsc->cap_reclaim_work, ceph_cap_reclaim_work); 5370 + INIT_WORK(&mdsc->cap_unlink_work, ceph_cap_unlink_work); 5416 5371 err = ceph_metric_init(&mdsc->metric); 5417 5372 if (err) 5418 5373 goto err_mdsmap; ··· 5687 5640 ceph_cleanup_global_and_empty_realms(mdsc); 5688 5641 5689 5642 cancel_work_sync(&mdsc->cap_reclaim_work); 5643 + cancel_work_sync(&mdsc->cap_unlink_work); 5690 5644 cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */ 5691 5645 5692 5646 doutc(cl, "done\n");
+5
fs/ceph/mds_client.h
··· 462 462 unsigned long last_renew_caps; /* last time we renewed our caps */ 463 463 struct list_head cap_delay_list; /* caps with delayed release */ 464 464 spinlock_t cap_delay_lock; /* protects cap_delay_list */ 465 + struct list_head cap_unlink_delay_list; /* caps with delayed release for unlink */ 466 + spinlock_t cap_unlink_delay_lock; /* protects cap_unlink_delay_list */ 465 467 struct list_head snap_flush_list; /* cap_snaps ready to flush */ 466 468 spinlock_t snap_flush_lock; 467 469 ··· 476 474 477 475 struct work_struct cap_reclaim_work; 478 476 atomic_t cap_reclaim_pending; 477 + 478 + struct work_struct cap_unlink_work; 479 479 480 480 /* 481 481 * Cap reservations ··· 578 574 struct ceph_mds_session *session); 579 575 extern void ceph_queue_cap_reclaim_work(struct ceph_mds_client *mdsc); 580 576 extern void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr); 577 + extern void ceph_queue_cap_unlink_work(struct ceph_mds_client *mdsc); 581 578 extern int ceph_iterate_session_caps(struct ceph_mds_session *session, 582 579 int (*cb)(struct inode *, int mds, void *), 583 580 void *arg);
+1
fs/smb/client/cached_dir.c
··· 242 242 .desired_access = FILE_READ_DATA | FILE_READ_ATTRIBUTES, 243 243 .disposition = FILE_OPEN, 244 244 .fid = pfid, 245 + .replay = !!(retries), 245 246 }; 246 247 247 248 rc = SMB2_open_init(tcon, server,
+1
fs/smb/client/cifsglob.h
··· 1378 1378 struct cifs_fid *fid; 1379 1379 umode_t mode; 1380 1380 bool reconnect:1; 1381 + bool replay:1; /* indicates that this open is for a replay */ 1381 1382 }; 1382 1383 1383 1384 struct cifs_fid {
+12 -2
fs/smb/client/connect.c
··· 3444 3444 * the user on mount 3445 3445 */ 3446 3446 if ((cifs_sb->ctx->wsize == 0) || 3447 - (cifs_sb->ctx->wsize > server->ops->negotiate_wsize(tcon, ctx))) 3448 - cifs_sb->ctx->wsize = server->ops->negotiate_wsize(tcon, ctx); 3447 + (cifs_sb->ctx->wsize > server->ops->negotiate_wsize(tcon, ctx))) { 3448 + cifs_sb->ctx->wsize = 3449 + round_down(server->ops->negotiate_wsize(tcon, ctx), PAGE_SIZE); 3450 + /* 3451 + * in the very unlikely event that the server sent a max write size under PAGE_SIZE, 3452 + * (which would get rounded down to 0) then reset wsize to absolute minimum eg 4096 3453 + */ 3454 + if (cifs_sb->ctx->wsize == 0) { 3455 + cifs_sb->ctx->wsize = PAGE_SIZE; 3456 + cifs_dbg(VFS, "wsize too small, reset to minimum ie PAGE_SIZE, usually 4096\n"); 3457 + } 3458 + } 3449 3459 if ((cifs_sb->ctx->rsize == 0) || 3450 3460 (cifs_sb->ctx->rsize > server->ops->negotiate_rsize(tcon, ctx))) 3451 3461 cifs_sb->ctx->rsize = server->ops->negotiate_rsize(tcon, ctx);
+11
fs/smb/client/fs_context.c
··· 1111 1111 case Opt_wsize: 1112 1112 ctx->wsize = result.uint_32; 1113 1113 ctx->got_wsize = true; 1114 + if (ctx->wsize % PAGE_SIZE != 0) { 1115 + ctx->wsize = round_down(ctx->wsize, PAGE_SIZE); 1116 + if (ctx->wsize == 0) { 1117 + ctx->wsize = PAGE_SIZE; 1118 + cifs_dbg(VFS, "wsize too small, reset to minimum %ld\n", PAGE_SIZE); 1119 + } else { 1120 + cifs_dbg(VFS, 1121 + "wsize rounded down to %d to multiple of PAGE_SIZE %ld\n", 1122 + ctx->wsize, PAGE_SIZE); 1123 + } 1124 + } 1114 1125 break; 1115 1126 case Opt_acregmax: 1116 1127 ctx->acregmax = HZ * result.uint_32;
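The `wsize` sanitization added above (round down to a `PAGE_SIZE` multiple, falling back to `PAGE_SIZE` itself when rounding would produce zero) can be sketched in plain C. `PAGE_SIZE` and the rounding helper are modeled with ordinary definitions here rather than the kernel macros:

```c
#include <assert.h>

#define PAGE_SIZE 4096UL /* 4 KiB, the common case; an assumption here */

/* round_down to a power-of-two multiple, as the kernel macro does */
static unsigned long round_down_pow2(unsigned long n, unsigned long mult)
{
	return n & ~(mult - 1);
}

/* Mirror of the wsize sanitization: round down to a PAGE_SIZE
 * multiple, but never let the result drop to zero. */
static unsigned long sanitize_wsize(unsigned long wsize)
{
	wsize = round_down_pow2(wsize, PAGE_SIZE);
	if (wsize == 0)
		wsize = PAGE_SIZE; /* absolute minimum */
	return wsize;
}
```

The zero-check matters precisely for the "very unlikely" case the connect.c comment describes: a server-advertised write size below one page would otherwise round down to zero.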
+16
fs/smb/client/namespace.c
··· 168 168 return s; 169 169 } 170 170 171 + static void fs_context_set_ids(struct smb3_fs_context *ctx) 172 + { 173 + kuid_t uid = current_fsuid(); 174 + kgid_t gid = current_fsgid(); 175 + 176 + if (ctx->multiuser) { 177 + if (!ctx->uid_specified) 178 + ctx->linux_uid = uid; 179 + if (!ctx->gid_specified) 180 + ctx->linux_gid = gid; 181 + } 182 + if (!ctx->cruid_specified) 183 + ctx->cred_uid = uid; 184 + } 185 + 171 186 /* 172 187 * Create a vfsmount that we can automount 173 188 */ ··· 220 205 tmp.leaf_fullpath = NULL; 221 206 tmp.UNC = tmp.prepath = NULL; 222 207 tmp.dfs_root_ses = NULL; 208 + fs_context_set_ids(&tmp); 223 209 224 210 rc = smb3_fs_context_dup(ctx, &tmp); 225 211 if (rc) {
+11 -3
fs/smb/client/smb2ops.c
··· 619 619 goto out; 620 620 } 621 621 622 - while (bytes_left >= sizeof(*p)) { 622 + while (bytes_left >= (ssize_t)sizeof(*p)) { 623 623 memset(&tmp_iface, 0, sizeof(tmp_iface)); 624 624 tmp_iface.speed = le64_to_cpu(p->LinkSpeed); 625 625 tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0; ··· 1204 1204 .disposition = FILE_OPEN, 1205 1205 .create_options = cifs_create_options(cifs_sb, 0), 1206 1206 .fid = &fid, 1207 + .replay = !!(retries), 1207 1208 }; 1208 1209 1209 1210 rc = SMB2_open_init(tcon, server, ··· 1570 1569 .disposition = FILE_OPEN, 1571 1570 .create_options = cifs_create_options(cifs_sb, create_options), 1572 1571 .fid = &fid, 1572 + .replay = !!(retries), 1573 1573 }; 1574 1574 1575 1575 if (qi.flags & PASSTHRU_FSCTL) { ··· 2297 2295 .disposition = FILE_OPEN, 2298 2296 .create_options = cifs_create_options(cifs_sb, 0), 2299 2297 .fid = fid, 2298 + .replay = !!(retries), 2300 2299 }; 2301 2300 2302 2301 rc = SMB2_open_init(tcon, server, ··· 2684 2681 .disposition = FILE_OPEN, 2685 2682 .create_options = cifs_create_options(cifs_sb, 0), 2686 2683 .fid = &fid, 2684 + .replay = !!(retries), 2687 2685 }; 2688 2686 2689 2687 rc = SMB2_open_init(tcon, server, ··· 5217 5213 struct inode *new; 5218 5214 struct kvec iov; 5219 5215 __le16 *path; 5220 - char *sym; 5216 + char *sym, sep = CIFS_DIR_SEP(cifs_sb); 5221 5217 u16 len, plen; 5222 5218 int rc = 0; 5223 5219 ··· 5231 5227 .symlink_target = sym, 5232 5228 }; 5233 5229 5234 - path = cifs_convert_path_to_utf16(symname, cifs_sb); 5230 + convert_delimiter(sym, sep); 5231 + path = cifs_convert_path_to_utf16(sym, cifs_sb); 5235 5232 if (!path) { 5236 5233 rc = -ENOMEM; 5237 5234 goto out; ··· 5255 5250 buf->PrintNameLength = cpu_to_le16(plen); 5256 5251 memcpy(buf->PathBuffer, path, plen); 5257 5252 buf->Flags = cpu_to_le32(*symname != '/' ? 
SYMLINK_FLAG_RELATIVE : 0); 5253 + if (*sym != sep) 5254 + buf->Flags = cpu_to_le32(SYMLINK_FLAG_RELATIVE); 5258 5255 5256 + convert_delimiter(sym, '/'); 5259 5257 iov.iov_base = buf; 5260 5258 iov.iov_len = len; 5261 5259 new = smb2_get_reparse_inode(&data, inode->i_sb, xid,
+8 -2
fs/smb/client/smb2pdu.c
··· 2404 2404 */ 2405 2405 buf->dcontext.Timeout = cpu_to_le32(oparms->tcon->handle_timeout); 2406 2406 buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT); 2407 - generate_random_uuid(buf->dcontext.CreateGuid); 2408 - memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16); 2407 + 2408 + /* for replay, we should not overwrite the existing create guid */ 2409 + if (!oparms->replay) { 2410 + generate_random_uuid(buf->dcontext.CreateGuid); 2411 + memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16); 2412 + } else 2413 + memcpy(buf->dcontext.CreateGuid, pfid->create_guid, 16); 2409 2414 2410 2415 /* SMB2_CREATE_DURABLE_HANDLE_REQUEST is "DH2Q" */ 2411 2416 buf->Name[0] = 'D'; ··· 3147 3142 /* reinitialize for possible replay */ 3148 3143 flags = 0; 3149 3144 server = cifs_pick_channel(ses); 3145 + oparms->replay = !!(retries); 3150 3146 3151 3147 cifs_dbg(FYI, "create/open\n"); 3152 3148 if (!ses || !server)
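The replay rule above — generate a create GUID only on the first attempt, and on replay reuse the GUID saved with the fid so the server can correlate the retried open with the original — can be sketched as follows. The helper name and random source are illustrative, not the kernel's:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the CreateGuid handling: on a fresh
 * open, generate and stash a new GUID; on a replayed open, copy the
 * stashed GUID back into the request unchanged. */
static void fill_create_guid(uint8_t request_guid[16],
			     uint8_t saved_guid[16], bool replay)
{
	if (!replay) {
		for (int i = 0; i < 16; i++)
			saved_guid[i] = (uint8_t)rand();
	}
	memcpy(request_guid, saved_guid, 16);
}
```

Keeping the GUID stable across retries is what lets the server treat the replayed durable-handle open as the same operation rather than a conflicting new one.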
+27 -15
fs/zonefs/file.c
··· 348 348 struct zonefs_inode_info *zi = ZONEFS_I(inode); 349 349 350 350 if (error) { 351 - zonefs_io_error(inode, true); 351 + /* 352 + * For Sync IOs, error recovery is called from 353 + * zonefs_file_dio_write(). 354 + */ 355 + if (!is_sync_kiocb(iocb)) 356 + zonefs_io_error(inode, true); 352 357 return error; 353 358 } 354 359 ··· 496 491 ret = -EINVAL; 497 492 goto inode_unlock; 498 493 } 494 + /* 495 + * Advance the zone write pointer offset. This assumes that the 496 + * IO will succeed, which is OK to do because we do not allow 497 + * partial writes (IOMAP_DIO_PARTIAL is not set) and if the IO 498 + * fails, the error path will correct the write pointer offset. 499 + */ 500 + z->z_wpoffset += count; 501 + zonefs_inode_account_active(inode); 499 502 mutex_unlock(&zi->i_truncate_mutex); 500 503 } 501 504 ··· 517 504 if (ret == -ENOTBLK) 518 505 ret = -EBUSY; 519 506 520 - if (zonefs_zone_is_seq(z) && 521 - (ret > 0 || ret == -EIOCBQUEUED)) { 522 - if (ret > 0) 523 - count = ret; 524 - 525 - /* 526 - * Update the zone write pointer offset assuming the write 527 - * operation succeeded. If it did not, the error recovery path 528 - * will correct it. Also do active seq file accounting. 529 - */ 530 - mutex_lock(&zi->i_truncate_mutex); 531 - z->z_wpoffset += count; 532 - zonefs_inode_account_active(inode); 533 - mutex_unlock(&zi->i_truncate_mutex); 507 + /* 508 + * For a failed IO or partial completion, trigger error recovery 509 + * to update the zone write pointer offset to a correct value. 510 + * For asynchronous IOs, zonefs_file_write_dio_end_io() may already 511 + * have executed error recovery if the IO already completed when we 512 + * reach here. However, we cannot know that and execute error recovery 513 + * again (that will not change anything). 
514 + */ 515 + if (zonefs_zone_is_seq(z)) { 516 + if (ret > 0 && ret != count) 517 + ret = -EIO; 518 + if (ret < 0 && ret != -EIOCBQUEUED) 519 + zonefs_io_error(inode, true); 534 520 } 535 521 536 522 inode_unlock:
+38 -28
fs/zonefs/super.c
··· 246 246 z->z_mode = inode->i_mode; 247 247 } 248 248 249 - struct zonefs_ioerr_data { 250 - struct inode *inode; 251 - bool write; 252 - }; 253 - 254 249 static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx, 255 250 void *data) 256 251 { 257 - struct zonefs_ioerr_data *err = data; 258 - struct inode *inode = err->inode; 252 + struct blk_zone *z = data; 253 + 254 + *z = *zone; 255 + return 0; 256 + } 257 + 258 + static void zonefs_handle_io_error(struct inode *inode, struct blk_zone *zone, 259 + bool write) 260 + { 259 261 struct zonefs_zone *z = zonefs_inode_zone(inode); 260 262 struct super_block *sb = inode->i_sb; 261 263 struct zonefs_sb_info *sbi = ZONEFS_SB(sb); ··· 272 270 data_size = zonefs_check_zone_condition(sb, z, zone); 273 271 isize = i_size_read(inode); 274 272 if (!(z->z_flags & (ZONEFS_ZONE_READONLY | ZONEFS_ZONE_OFFLINE)) && 275 - !err->write && isize == data_size) 276 - return 0; 273 + !write && isize == data_size) 274 + return; 277 275 278 276 /* 279 277 * At this point, we detected either a bad zone or an inconsistency ··· 294 292 * In all cases, warn about inode size inconsistency and handle the 295 293 * IO error according to the zone condition and to the mount options. 
296 294 */ 297 - if (zonefs_zone_is_seq(z) && isize != data_size) 295 + if (isize != data_size) 298 296 zonefs_warn(sb, 299 297 "inode %lu: invalid size %lld (should be %lld)\n", 300 298 inode->i_ino, isize, data_size); ··· 354 352 zonefs_i_size_write(inode, data_size); 355 353 z->z_wpoffset = data_size; 356 354 zonefs_inode_account_active(inode); 357 - 358 - return 0; 359 355 } 360 356 361 357 /* ··· 367 367 { 368 368 struct zonefs_zone *z = zonefs_inode_zone(inode); 369 369 struct super_block *sb = inode->i_sb; 370 - struct zonefs_sb_info *sbi = ZONEFS_SB(sb); 371 370 unsigned int noio_flag; 372 - unsigned int nr_zones = 1; 373 - struct zonefs_ioerr_data err = { 374 - .inode = inode, 375 - .write = write, 376 - }; 371 + struct blk_zone zone; 377 372 int ret; 378 373 379 374 /* 380 - * The only files that have more than one zone are conventional zone 381 - * files with aggregated conventional zones, for which the inode zone 382 - * size is always larger than the device zone size. 375 + * Conventional zone have no write pointer and cannot become read-only 376 + * or offline. So simply fake a report for a single or aggregated zone 377 + * and let zonefs_handle_io_error() correct the zone inode information 378 + * according to the mount options. 383 379 */ 384 - if (z->z_size > bdev_zone_sectors(sb->s_bdev)) 385 - nr_zones = z->z_size >> 386 - (sbi->s_zone_sectors_shift + SECTOR_SHIFT); 380 + if (!zonefs_zone_is_seq(z)) { 381 + zone.start = z->z_sector; 382 + zone.len = z->z_size >> SECTOR_SHIFT; 383 + zone.wp = zone.start + zone.len; 384 + zone.type = BLK_ZONE_TYPE_CONVENTIONAL; 385 + zone.cond = BLK_ZONE_COND_NOT_WP; 386 + zone.capacity = zone.len; 387 + goto handle_io_error; 388 + } 387 389 388 390 /* 389 391 * Memory allocations in blkdev_report_zones() can trigger a memory ··· 396 394 * the GFP_NOIO context avoids both problems. 
397 395 */ 398 396 noio_flag = memalloc_noio_save(); 399 - ret = blkdev_report_zones(sb->s_bdev, z->z_sector, nr_zones, 400 - zonefs_io_error_cb, &err); 401 - if (ret != nr_zones) 397 + ret = blkdev_report_zones(sb->s_bdev, z->z_sector, 1, 398 + zonefs_io_error_cb, &zone); 399 + memalloc_noio_restore(noio_flag); 400 + 401 + if (ret != 1) { 402 402 zonefs_err(sb, "Get inode %lu zone information failed %d\n", 403 403 inode->i_ino, ret); 404 - memalloc_noio_restore(noio_flag); 404 + zonefs_warn(sb, "remounting filesystem read-only\n"); 405 + sb->s_flags |= SB_RDONLY; 406 + return; 407 + } 408 + 409 + handle_io_error: 410 + zonefs_handle_io_error(inode, &zone, write); 405 411 } 406 412 407 413 static struct kmem_cache *zonefs_inode_cachep;
+18
include/linux/gpio/driver.h
··· 819 819 return ERR_PTR(-ENODEV); 820 820 } 821 821 822 + static inline struct gpio_device *gpiod_to_gpio_device(struct gpio_desc *desc) 823 + { 824 + WARN_ON(1); 825 + return ERR_PTR(-ENODEV); 826 + } 827 + 828 + static inline int gpio_device_get_base(struct gpio_device *gdev) 829 + { 830 + WARN_ON(1); 831 + return -ENODEV; 832 + } 833 + 834 + static inline const char *gpio_device_get_label(struct gpio_device *gdev) 835 + { 836 + WARN_ON(1); 837 + return NULL; 838 + } 839 + 822 840 static inline int gpiochip_lock_as_irq(struct gpio_chip *gc, 823 841 unsigned int offset) 824 842 {
+3 -1
include/linux/iio/adc/ad_sigma_delta.h
··· 8 8 #ifndef __AD_SIGMA_DELTA_H__ 9 9 #define __AD_SIGMA_DELTA_H__ 10 10 11 + #include <linux/iio/iio.h> 12 + 11 13 enum ad_sigma_delta_mode { 12 14 AD_SD_MODE_CONTINUOUS = 0, 13 15 AD_SD_MODE_SINGLE = 1, ··· 101 99 * 'rx_buf' is up to 32 bits per sample + 64 bit timestamp, 102 100 * rounded to 16 bytes to take into account padding. 103 101 */ 104 - uint8_t tx_buf[4] ____cacheline_aligned; 102 + uint8_t tx_buf[4] __aligned(IIO_DMA_MINALIGN); 105 103 uint8_t rx_buf[16] __aligned(8); 106 104 }; 107 105
+2 -2
include/linux/iio/common/st_sensors.h
··· 258 258 bool hw_irq_trigger; 259 259 s64 hw_timestamp; 260 260 261 - char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned; 262 - 263 261 struct mutex odr_lock; 262 + 263 + char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] __aligned(IIO_DMA_MINALIGN); 264 264 }; 265 265 266 266 #ifdef CONFIG_IIO_BUFFER
+2 -1
include/linux/iio/imu/adis.h
··· 11 11 12 12 #include <linux/spi/spi.h> 13 13 #include <linux/interrupt.h> 14 + #include <linux/iio/iio.h> 14 15 #include <linux/iio/types.h> 15 16 16 17 #define ADIS_WRITE_REG(reg) ((0x80 | (reg))) ··· 132 131 unsigned long irq_flag; 133 132 void *buffer; 134 133 135 - u8 tx[10] ____cacheline_aligned; 134 + u8 tx[10] __aligned(IIO_DMA_MINALIGN); 136 135 u8 rx[4]; 137 136 }; 138 137
+1 -1
include/linux/mlx5/mlx5_ifc.h
··· 1103 1103 u8 sw_r_roce_src_udp_port[0x1]; 1104 1104 u8 fl_rc_qp_when_roce_disabled[0x1]; 1105 1105 u8 fl_rc_qp_when_roce_enabled[0x1]; 1106 - u8 reserved_at_7[0x1]; 1106 + u8 roce_cc_general[0x1]; 1107 1107 u8 qp_ooo_transmit_default[0x1]; 1108 1108 u8 reserved_at_9[0x15]; 1109 1109 u8 qp_ts_format[0x2];
+4 -1
include/linux/mlx5/qp.h
··· 269 269 union { 270 270 struct { 271 271 __be16 sz; 272 - u8 start[2]; 272 + union { 273 + u8 start[2]; 274 + DECLARE_FLEX_ARRAY(u8, data); 275 + }; 273 276 } inline_hdr; 274 277 struct { 275 278 __be16 type;
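The mlx5 change above overlays a flexible array (`DECLARE_FLEX_ARRAY(u8, data)`) with the fixed two-byte `start` field inside a union. The macro exists because a bare flexible array member is not legal directly inside a union; it wraps the array in an unnamed struct. A rough model of the resulting shape (this mirrors the general idea, not the exact kernel expansion, and relies on GNU C extensions as the kernel does):

```c
#include <stddef.h>
#include <stdint.h>

/* Approximate shape produced by DECLARE_FLEX_ARRAY(u8, data): the
 * empty leading member makes the containing struct non-empty, so the
 * flexible array is its legal last member, and the whole thing can
 * then live inside a union. */
struct inline_hdr {
	uint16_t sz;
	union {
		uint8_t start[2];
		struct {
			struct { } __empty_data;
			uint8_t data[];
		};
	};
};
```

The payoff is that `data` and `start` alias the same bytes, so code can address the inline header payload as a proper flexible array instead of indexing past `start[2]`, which bounds-checking tooling flags.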
+1
include/linux/nvme.h
··· 646 646 NVME_CMD_EFFECTS_NCC = 1 << 2, 647 647 NVME_CMD_EFFECTS_NIC = 1 << 3, 648 648 NVME_CMD_EFFECTS_CCC = 1 << 4, 649 + NVME_CMD_EFFECTS_CSER_MASK = GENMASK(15, 14), 649 650 NVME_CMD_EFFECTS_CSE_MASK = GENMASK(18, 16), 650 651 NVME_CMD_EFFECTS_UUID_SEL = 1 << 19, 651 652 NVME_CMD_EFFECTS_SCOPE_MASK = GENMASK(31, 20),
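The new `NVME_CMD_EFFECTS_CSER_MASK` covers bits 15:14 of the commands-supported-and-effects entry, alongside the existing CSE mask at bits 18:16. A minimal model of `GENMASK()` and a field extraction, for illustration (the `field_get` helper below is hypothetical; the kernel's real macro lives in `include/linux/bits.h`):

```c
#include <stdint.h>

/* Minimal 32-bit model of the kernel's GENMASK(h, l): set bits l..h. */
#define GENMASK(h, l) (((~0U) >> (31 - (h))) & ((~0U) << (l)))

#define NVME_CMD_EFFECTS_CSER_MASK GENMASK(15, 14)
#define NVME_CMD_EFFECTS_CSE_MASK  GENMASK(18, 16)

/* Hypothetical helper: pull a masked field down to bit 0. */
static unsigned int field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) >> __builtin_ctz(mask);
}
```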
+10 -7
include/linux/seq_buf.h
··· 2 2 #ifndef _LINUX_SEQ_BUF_H 3 3 #define _LINUX_SEQ_BUF_H 4 4 5 - #include <linux/fs.h> 5 + #include <linux/bug.h> 6 + #include <linux/minmax.h> 7 + #include <linux/seq_file.h> 8 + #include <linux/types.h> 6 9 7 10 /* 8 11 * Trace sequences are used to allow a function to call several other functions ··· 13 10 */ 14 11 15 12 /** 16 - * seq_buf - seq buffer structure 13 + * struct seq_buf - seq buffer structure 17 14 * @buffer: pointer to the buffer 18 15 * @size: size of the buffer 19 16 * @len: the amount of data inside the buffer ··· 80 77 } 81 78 82 79 /** 83 - * seq_buf_str - get %NUL-terminated C string from seq_buf 80 + * seq_buf_str - get NUL-terminated C string from seq_buf 84 81 * @s: the seq_buf handle 85 82 * 86 - * This makes sure that the buffer in @s is nul terminated and 83 + * This makes sure that the buffer in @s is NUL-terminated and 87 84 * safe to read as a string. 88 85 * 89 86 * Note, if this is called when the buffer has overflowed, then ··· 93 90 * After this function is called, s->buffer is safe to use 94 91 * in string operations. 95 92 * 96 - * Returns @s->buf after making sure it is terminated. 93 + * Returns: @s->buf after making sure it is terminated. 97 94 */ 98 95 static inline const char *seq_buf_str(struct seq_buf *s) 99 96 { ··· 113 110 * @s: the seq_buf handle 114 111 * @bufp: the beginning of the buffer is stored here 115 112 * 116 - * Return the number of bytes available in the buffer, or zero if 113 + * Returns: the number of bytes available in the buffer, or zero if 117 114 * there's no space. 118 115 */ 119 116 static inline size_t seq_buf_get_buf(struct seq_buf *s, char **bufp) ··· 135 132 * @num: the number of bytes to commit 136 133 * 137 134 * Commit @num bytes of data written to a buffer previously acquired 138 - * by seq_buf_get. To signal an error condition, or that the data 135 + * by seq_buf_get_buf(). 
To signal an error condition, or that the data 139 136 * didn't fit in the available space, pass a negative @num value. 140 137 */ 141 138 static inline void seq_buf_commit(struct seq_buf *s, int num)
+27 -5
include/linux/serial_core.h
··· 748 748 749 749 void uart_write_wakeup(struct uart_port *port); 750 750 751 - #define __uart_port_tx(uport, ch, tx_ready, put_char, tx_done, for_test, \ 752 - for_post) \ 751 + /** 752 + * enum UART_TX_FLAGS -- flags for uart_port_tx_flags() 753 + * 754 + * @UART_TX_NOSTOP: don't call port->ops->stop_tx() on empty buffer 755 + */ 756 + enum UART_TX_FLAGS { 757 + UART_TX_NOSTOP = BIT(0), 758 + }; 759 + 760 + #define __uart_port_tx(uport, ch, flags, tx_ready, put_char, tx_done, \ 761 + for_test, for_post) \ 753 762 ({ \ 754 763 struct uart_port *__port = (uport); \ 755 764 struct circ_buf *xmit = &__port->state->xmit; \ ··· 786 777 if (pending < WAKEUP_CHARS) { \ 787 778 uart_write_wakeup(__port); \ 788 779 \ 789 - if (pending == 0) \ 780 + if (!((flags) & UART_TX_NOSTOP) && pending == 0) \ 790 781 __port->ops->stop_tx(__port); \ 791 782 } \ 792 783 \ ··· 821 812 */ 822 813 #define uart_port_tx_limited(port, ch, count, tx_ready, put_char, tx_done) ({ \ 823 814 unsigned int __count = (count); \ 824 - __uart_port_tx(port, ch, tx_ready, put_char, tx_done, __count, \ 815 + __uart_port_tx(port, ch, 0, tx_ready, put_char, tx_done, __count, \ 825 816 __count--); \ 826 817 }) 827 818 ··· 835 826 * See uart_port_tx_limited() for more details. 836 827 */ 837 828 #define uart_port_tx(port, ch, tx_ready, put_char) \ 838 - __uart_port_tx(port, ch, tx_ready, put_char, ({}), true, ({})) 829 + __uart_port_tx(port, ch, 0, tx_ready, put_char, ({}), true, ({})) 839 830 831 + 832 + /** 833 + * uart_port_tx_flags -- transmit helper for uart_port with flags 834 + * @port: uart port 835 + * @ch: variable to store a character to be written to the HW 836 + * @flags: %UART_TX_NOSTOP or similar 837 + * @tx_ready: can HW accept more data function 838 + * @put_char: function to write a character 839 + * 840 + * See uart_port_tx_limited() for more details. 
841 + */ 842 + #define uart_port_tx_flags(port, ch, flags, tx_ready, put_char) \ 843 + __uart_port_tx(port, ch, flags, tx_ready, put_char, ({}), true, ({})) 840 844 /* 841 845 * Baud rate helpers. 842 846 */
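`UART_TX_NOSTOP` suppresses the `stop_tx()` call when the transmit buffer drains. The test added to the macro body, `!((flags) & UART_TX_NOSTOP) && pending == 0`, is ordinary bit-flag logic; a standalone sketch (the predicate name below is hypothetical):

```c
#include <stdbool.h>

enum uart_tx_flags {
	UART_TX_NOSTOP = 1 << 0, /* don't call ->stop_tx() on empty buffer */
};

/* Hypothetical predicate mirroring the macro's stop condition:
 * stop the transmitter only when the buffer is empty AND the caller
 * did not ask to keep it running. */
static bool should_stop_tx(unsigned int flags, unsigned int pending)
{
	return !(flags & UART_TX_NOSTOP) && pending == 0;
}
```

Drivers whose hardware keeps draining a FIFO after the software buffer empties are the intended users of the flag: they pass `UART_TX_NOSTOP` and stop the transmitter themselves once the FIFO is actually idle.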
-1
include/linux/usb/gadget.h
··· 236 236 unsigned max_streams:16; 237 237 unsigned mult:2; 238 238 unsigned maxburst:5; 239 - unsigned fifo_mode:1; 240 239 u8 address; 241 240 const struct usb_endpoint_descriptor *desc; 242 241 const struct usb_ss_ep_comp_descriptor *comp_desc;
+1 -1
include/net/netfilter/nf_flow_table.h
··· 276 276 } 277 277 278 278 void flow_offload_route_init(struct flow_offload *flow, 279 - const struct nf_flow_route *route); 279 + struct nf_flow_route *route); 280 280 281 281 int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow); 282 282 void flow_offload_refresh(struct nf_flowtable *flow_table,
+3
include/net/switchdev.h
··· 308 308 int switchdev_port_attr_set(struct net_device *dev, 309 309 const struct switchdev_attr *attr, 310 310 struct netlink_ext_ack *extack); 311 + bool switchdev_port_obj_act_is_deferred(struct net_device *dev, 312 + enum switchdev_notifier_type nt, 313 + const struct switchdev_obj *obj); 311 314 int switchdev_port_obj_add(struct net_device *dev, 312 315 const struct switchdev_obj *obj, 313 316 struct netlink_ext_ack *extack);
+1 -1
include/net/tcp.h
··· 2551 2551 /* cleanup ulp */ 2552 2552 void (*release)(struct sock *sk); 2553 2553 /* diagnostic */ 2554 - int (*get_info)(const struct sock *sk, struct sk_buff *skb); 2554 + int (*get_info)(struct sock *sk, struct sk_buff *skb); 2555 2555 size_t (*get_info_size)(const struct sock *sk); 2556 2556 /* clone ulp */ 2557 2557 void (*clone)(const struct request_sock *req, struct sock *newsk,
+1
include/sound/tas2781.h
··· 142 142 143 143 void tas2781_reset(struct tasdevice_priv *tas_dev); 144 144 int tascodec_init(struct tasdevice_priv *tas_priv, void *codec, 145 + struct module *module, 145 146 void (*cont)(const struct firmware *fw, void *context)); 146 147 struct tasdevice_priv *tasdevice_kzalloc(struct i2c_client *i2c); 147 148 int tasdevice_init(struct tasdevice_priv *tas_priv);
+2 -2
include/uapi/linux/iio/types.h
··· 91 91 IIO_MOD_CO2, 92 92 IIO_MOD_VOC, 93 93 IIO_MOD_LIGHT_UV, 94 - IIO_MOD_LIGHT_UVA, 95 - IIO_MOD_LIGHT_UVB, 96 94 IIO_MOD_LIGHT_DUV, 97 95 IIO_MOD_PM1, 98 96 IIO_MOD_PM2P5, ··· 105 107 IIO_MOD_PITCH, 106 108 IIO_MOD_YAW, 107 109 IIO_MOD_ROLL, 110 + IIO_MOD_LIGHT_UVA, 111 + IIO_MOD_LIGHT_UVB, 108 112 }; 109 113 110 114 enum iio_event_type {
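Moving `IIO_MOD_LIGHT_UVA`/`IIO_MOD_LIGHT_UVB` to the end of the enum restores the numbering of a uAPI enum: entries inserted in the middle shift the numeric value of every later modifier, breaking userspace binaries that compiled the old values in. The append-only rule can be seen in a toy enum (values are illustrative, not the real `iio_modifier` numbers):

```c
/* uAPI enums are append-only: new entries go at the end so existing
 * numeric values, which userspace may already depend on, never shift. */
enum widget_mod {
	WIDGET_MOD_A,    /* 0, part of the original ABI */
	WIDGET_MOD_B,    /* 1 */
	WIDGET_MOD_C,    /* 2 */
	/* new entries appended; values 0..2 stay stable */
	WIDGET_MOD_NEW1, /* 3 */
	WIDGET_MOD_NEW2, /* 4 */
};
```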
+3 -2
io_uring/net.c
··· 1372 1372 * has already been done 1373 1373 */ 1374 1374 if (issue_flags & IO_URING_F_MULTISHOT) 1375 - ret = IOU_ISSUE_SKIP_COMPLETE; 1375 + return IOU_ISSUE_SKIP_COMPLETE; 1376 1376 return ret; 1377 1377 } 1378 1378 if (ret == -ERESTARTSYS) ··· 1397 1397 ret, IORING_CQE_F_MORE)) 1398 1398 goto retry; 1399 1399 1400 - return -ECANCELED; 1400 + io_req_set_res(req, ret, 0); 1401 + return IOU_STOP_MULTISHOT; 1401 1402 } 1402 1403 1403 1404 int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+4 -1
kernel/bpf/helpers.c
··· 1101 1101 struct bpf_prog *prog; 1102 1102 void __rcu *callback_fn; 1103 1103 void *value; 1104 + struct rcu_head rcu; 1104 1105 }; 1105 1106 1106 1107 /* the actual struct hidden inside uapi struct bpf_timer */ ··· 1333 1332 1334 1333 if (in_nmi()) 1335 1334 return -EOPNOTSUPP; 1335 + rcu_read_lock(); 1336 1336 __bpf_spin_lock_irqsave(&timer->lock); 1337 1337 t = timer->timer; 1338 1338 if (!t) { ··· 1355 1353 * if it was running. 1356 1354 */ 1357 1355 ret = ret ?: hrtimer_cancel(&t->timer); 1356 + rcu_read_unlock(); 1358 1357 return ret; 1359 1358 } 1360 1359 ··· 1410 1407 */ 1411 1408 if (this_cpu_read(hrtimer_running) != t) 1412 1409 hrtimer_cancel(&t->timer); 1413 - kfree(t); 1410 + kfree_rcu(t, rcu); 1414 1411 } 1415 1412 1416 1413 BPF_CALL_2(bpf_kptr_xchg, void *, map_value, void *, ptr)
+2
kernel/bpf/task_iter.c
··· 978 978 BUILD_BUG_ON(__alignof__(struct bpf_iter_task_kern) != 979 979 __alignof__(struct bpf_iter_task)); 980 980 981 + kit->pos = NULL; 982 + 981 983 switch (flags) { 982 984 case BPF_TASK_ITER_ALL_THREADS: 983 985 case BPF_TASK_ITER_ALL_PROCS:
+2
kernel/bpf/verifier.c
··· 5263 5263 #ifdef CONFIG_CGROUPS 5264 5264 BTF_ID(struct, cgroup) 5265 5265 #endif 5266 + #ifdef CONFIG_BPF_JIT 5266 5267 BTF_ID(struct, bpf_cpumask) 5268 + #endif 5267 5269 BTF_ID(struct, task_struct) 5268 5270 BTF_SET_END(rcu_protected_types) 5269 5271
+6
kernel/sched/membarrier.c
··· 162 162 | MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK \ 163 163 | MEMBARRIER_CMD_GET_REGISTRATIONS) 164 164 165 + static DEFINE_MUTEX(membarrier_ipi_mutex); 166 + #define SERIALIZE_IPI() guard(mutex)(&membarrier_ipi_mutex) 167 + 165 168 static void ipi_mb(void *info) 166 169 { 167 170 smp_mb(); /* IPIs should be serializing but paranoid. */ ··· 262 259 if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL)) 263 260 return -ENOMEM; 264 261 262 + SERIALIZE_IPI(); 265 263 cpus_read_lock(); 266 264 rcu_read_lock(); 267 265 for_each_online_cpu(cpu) { ··· 351 347 if (cpu_id < 0 && !zalloc_cpumask_var(&tmpmask, GFP_KERNEL)) 352 348 return -ENOMEM; 353 349 350 + SERIALIZE_IPI(); 354 351 cpus_read_lock(); 355 352 356 353 if (cpu_id >= 0) { ··· 465 460 * between threads which are users of @mm has its membarrier state 466 461 * updated. 467 462 */ 463 + SERIALIZE_IPI(); 468 464 cpus_read_lock(); 469 465 rcu_read_lock(); 470 466 for_each_online_cpu(cpu) {
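The membarrier hunk above serializes the IPI-sending paths with `guard(mutex)`, the kernel's scope-based lock helper: it expands to a variable with a cleanup handler, so the mutex is dropped automatically at every exit from the function. A minimal userspace sketch of the same mechanism, using the compiler's `cleanup` attribute with a plain flag standing in for the mutex (all names here are made up, not kernel API):

```c
/* Userspace sketch of a scope-based guard, as used by SERIALIZE_IPI().
 * A simple flag stands in for the mutex; the cleanup handler runs
 * automatically when the guard variable goes out of scope. */
static int ipi_lock_held;

static void ipi_unlock(int **held)
{
	**held = 0;	/* fires at every scope exit, like guard(mutex) */
}

#define SERIALIZE_IPI() \
	int *__guard __attribute__((cleanup(ipi_unlock))) = \
		(ipi_lock_held = 1, &ipi_lock_held)

static int ipi_count;

static int send_ipis(int n)
{
	SERIALIZE_IPI();	/* "lock" held until the closing brace */
	for (int i = 0; i < n; i++)
		ipi_count++;
	return ipi_count;	/* ipi_unlock() runs after this return */
}
```

The payoff is the same as in the hunk: none of the three `cpus_read_lock()` sections needs an explicit unlock on any of its return paths.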
+1 -1
kernel/trace/ftrace.c
··· 5331 5331 * not support ftrace_regs_caller but direct_call, use SAVE_ARGS so that it 5332 5332 * jumps from ftrace_caller for multiple ftrace_ops. 5333 5333 */ 5334 - #ifndef HAVE_DYNAMIC_FTRACE_WITH_REGS 5334 + #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS 5335 5335 #define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_ARGS) 5336 5336 #else 5337 5337 #define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS)
+4
kernel/trace/ring_buffer.c
··· 5877 5877 if (psize <= BUF_PAGE_HDR_SIZE) 5878 5878 return -EINVAL; 5879 5879 5880 + /* Size of a subbuf cannot be greater than the write counter */ 5881 + if (psize > RB_WRITE_MASK + 1) 5882 + return -EINVAL; 5883 + 5880 5884 old_order = buffer->subbuf_order; 5881 5885 old_size = buffer->subbuf_size; 5882 5886
+4 -1
kernel/trace/trace.c
··· 39 39 #include <linux/ctype.h> 40 40 #include <linux/init.h> 41 41 #include <linux/panic_notifier.h> 42 + #include <linux/kmemleak.h> 42 43 #include <linux/poll.h> 43 44 #include <linux/nmi.h> 44 45 #include <linux/fs.h> ··· 1533 1532 bool tracer_tracing_is_on(struct trace_array *tr) 1534 1533 { 1535 1534 if (tr->array_buffer.buffer) 1536 - return ring_buffer_record_is_on(tr->array_buffer.buffer); 1535 + return ring_buffer_record_is_set_on(tr->array_buffer.buffer); 1537 1536 return !tr->buffer_disabled; 1538 1537 } 1539 1538 ··· 2340 2339 int order = get_order(sizeof(*s) + s->cmdline_num * TASK_COMM_LEN); 2341 2340 2342 2341 kfree(s->map_cmdline_to_pid); 2342 + kmemleak_free(s); 2343 2343 free_pages((unsigned long)s, order); 2344 2344 } 2345 2345 ··· 2360 2358 return NULL; 2361 2359 2362 2360 s = page_address(page); 2361 + kmemleak_alloc(s, size, 1, GFP_KERNEL); 2363 2362 memset(s, 0, sizeof(*s)); 2364 2363 2365 2364 /* Round up to actual allocation */
+2 -2
kernel/trace/trace_btf.c
··· 91 91 for_each_member(i, type, member) { 92 92 if (!member->name_off) { 93 93 /* Anonymous union/struct: push it for later use */ 94 - type = btf_type_skip_modifiers(btf, member->type, &tid); 95 - if (type && top < BTF_ANON_STACK_MAX) { 94 + if (btf_type_skip_modifiers(btf, member->type, &tid) && 95 + top < BTF_ANON_STACK_MAX) { 96 96 anon_stack[top].tid = tid; 97 97 anon_stack[top++].offset = 98 98 cur_offset + member->offset;
+2 -1
kernel/trace/trace_events_synth.c
··· 441 441 if (is_dynamic) { 442 442 union trace_synth_field *data = &entry->fields[*n_u64]; 443 443 444 + len = fetch_store_strlen((unsigned long)str_val); 444 445 data->as_dynamic.offset = struct_size(entry, fields, event->n_u64) + data_size; 445 - data->as_dynamic.len = fetch_store_strlen((unsigned long)str_val); 446 + data->as_dynamic.len = len; 446 447 447 448 ret = fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry); 448 449
+2 -6
kernel/workqueue.c
··· 5786 5786 list_for_each_entry(wq, &workqueues, list) { 5787 5787 if (!(wq->flags & WQ_UNBOUND)) 5788 5788 continue; 5789 - 5790 5789 /* creating multiple pwqs breaks ordering guarantee */ 5791 - if (!list_empty(&wq->pwqs)) { 5792 - if (wq->flags & __WQ_ORDERED_EXPLICIT) 5793 - continue; 5794 - wq->flags &= ~__WQ_ORDERED; 5795 - } 5790 + if (wq->flags & __WQ_ORDERED) 5791 + continue; 5796 5792 5797 5793 ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs, unbound_cpumask); 5798 5794 if (IS_ERR(ctx)) {
+14 -10
lib/kobject.c
··· 74 74 if (error) 75 75 return error; 76 76 77 - error = sysfs_create_groups(kobj, ktype->default_groups); 78 - if (error) { 79 - sysfs_remove_dir(kobj); 80 - return error; 77 + if (ktype) { 78 + error = sysfs_create_groups(kobj, ktype->default_groups); 79 + if (error) { 80 + sysfs_remove_dir(kobj); 81 + return error; 82 + } 81 83 } 82 84 83 85 /* ··· 591 589 sd = kobj->sd; 592 590 ktype = get_ktype(kobj); 593 591 594 - sysfs_remove_groups(kobj, ktype->default_groups); 592 + if (ktype) 593 + sysfs_remove_groups(kobj, ktype->default_groups); 595 594 596 595 /* send "remove" if the caller did not do it but sent "add" */ 597 596 if (kobj->state_add_uevent_sent && !kobj->state_remove_uevent_sent) { ··· 669 666 pr_debug("'%s' (%p): %s, parent %p\n", 670 667 kobject_name(kobj), kobj, __func__, kobj->parent); 671 668 669 + if (t && !t->release) 670 + pr_debug("'%s' (%p): does not have a release() function, it is broken and must be fixed. See Documentation/core-api/kobject.rst.\n", 671 + kobject_name(kobj), kobj); 672 + 672 673 /* remove from sysfs if the caller did not do it */ 673 674 if (kobj->state_in_sysfs) { 674 675 pr_debug("'%s' (%p): auto cleanup kobject_del\n", ··· 683 676 parent = NULL; 684 677 } 685 678 686 - if (t->release) { 679 + if (t && t->release) { 687 680 pr_debug("'%s' (%p): calling ktype release\n", 688 681 kobject_name(kobj), kobj); 689 682 t->release(kobj); 690 - } else { 691 - pr_debug("'%s' (%p): does not have a release() function, it is broken and must be fixed. See Documentation/core-api/kobject.rst.\n", 692 - kobject_name(kobj), kobj); 693 683 } 694 684 695 685 /* free name if we allocated it */ ··· 1060 1056 { 1061 1057 const struct kobj_ns_type_operations *ops = NULL; 1062 1058 1063 - if (parent && parent->ktype->child_ns_type) 1059 + if (parent && parent->ktype && parent->ktype->child_ns_type) 1064 1060 ops = parent->ktype->child_ns_type(parent); 1065 1061 1066 1062 return ops;
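The lib/kobject.c hunk hardens several paths against a NULL `ktype` before touching its members. The same defensive pattern in a small illustrative sketch (not the kobject API itself): a type descriptor and its callback are both optional, so every use is guarded rather than dereferenced unconditionally.

```c
#include <stddef.h>

/* Illustrative only: an object whose type descriptor, like a kobject's
 * ktype, may legitimately be NULL. */
struct obj_type {
	void (*release)(void *obj);
};

struct obj {
	const struct obj_type *ktype;
	int released;
};

static void mark_released(void *o)
{
	((struct obj *)o)->released = 1;
}

static const struct obj_type obj_ktype = { .release = mark_released };

static int obj_cleanup(struct obj *o)
{
	const struct obj_type *t = o->ktype;

	/* Mirror of the fix: check the descriptor, then the callback. */
	if (t && t->release) {
		t->release(o);
		return 0;
	}
	return -1;	/* no release(): caller may log a warning */
}

static int demo(void)
{
	struct obj a = { .ktype = &obj_ktype };
	struct obj b = { .ktype = NULL };

	if (obj_cleanup(&a) != 0 || !a.released)
		return 1;
	if (obj_cleanup(&b) != -1 || b.released)
		return 2;	/* NULL ktype handled, not crashed on */
	return 0;
}
```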
+30 -19
lib/seq_buf.c
··· 13 13 * seq_buf_init() more than once to reset the seq_buf to start 14 14 * from scratch. 15 15 */ 16 - #include <linux/uaccess.h> 17 - #include <linux/seq_file.h> 16 + 17 + #include <linux/bug.h> 18 + #include <linux/err.h> 19 + #include <linux/export.h> 20 + #include <linux/hex.h> 21 + #include <linux/minmax.h> 22 + #include <linux/printk.h> 18 23 #include <linux/seq_buf.h> 24 + #include <linux/seq_file.h> 25 + #include <linux/sprintf.h> 26 + #include <linux/string.h> 27 + #include <linux/types.h> 28 + #include <linux/uaccess.h> 19 29 20 30 /** 21 31 * seq_buf_can_fit - can the new data fit in the current buffer? 22 32 * @s: the seq_buf descriptor 23 33 * @len: The length to see if it can fit in the current buffer 24 34 * 25 - * Returns true if there's enough unused space in the seq_buf buffer 35 + * Returns: true if there's enough unused space in the seq_buf buffer 26 36 * to fit the amount of new data according to @len. 27 37 */ 28 38 static bool seq_buf_can_fit(struct seq_buf *s, size_t len) ··· 45 35 * @m: the seq_file descriptor that is the destination 46 36 * @s: the seq_buf descriptor that is the source. 47 37 * 48 - * Returns zero on success, non zero otherwise 38 + * Returns: zero on success, non-zero otherwise. 49 39 */ 50 40 int seq_buf_print_seq(struct seq_file *m, struct seq_buf *s) 51 41 { ··· 60 50 * @fmt: printf format string 61 51 * @args: va_list of arguments from a printf() type function 62 52 * 63 - * Writes a vnprintf() format into the sequencce buffer. 53 + * Writes a vnprintf() format into the sequence buffer. 64 54 * 65 - * Returns zero on success, -1 on overflow. 55 + * Returns: zero on success, -1 on overflow. 66 56 */ 67 57 int seq_buf_vprintf(struct seq_buf *s, const char *fmt, va_list args) 68 58 { ··· 88 78 * 89 79 * Writes a printf() format into the sequence buffer. 90 80 * 91 - * Returns zero on success, -1 on overflow. 81 + * Returns: zero on success, -1 on overflow. 92 82 */ 93 83 int seq_buf_printf(struct seq_buf *s, const char *fmt, ...) 94 84 { ··· 104 94 EXPORT_SYMBOL_GPL(seq_buf_printf); 105 95 106 96 /** 107 - * seq_buf_do_printk - printk seq_buf line by line 97 + * seq_buf_do_printk - printk() seq_buf line by line 108 98 * @s: seq_buf descriptor 109 99 * @lvl: printk level 110 100 * 111 101 * printk()-s a multi-line sequential buffer line by line. The function 112 - * makes sure that the buffer in @s is nul terminated and safe to read 102 + * makes sure that the buffer in @s is NUL-terminated and safe to read 113 103 * as a string. 114 104 */ 115 105 void seq_buf_do_printk(struct seq_buf *s, const char *lvl) ··· 149 139 * This function will take the format and the binary array and finish 150 140 * the conversion into the ASCII string within the buffer. 151 141 * 152 - * Returns zero on success, -1 on overflow. 142 + * Returns: zero on success, -1 on overflow. 153 143 */ 154 144 int seq_buf_bprintf(struct seq_buf *s, const char *fmt, const u32 *binary) 155 145 { ··· 177 167 * 178 168 * Copy a simple string into the sequence buffer. 179 169 * 180 - * Returns zero on success, -1 on overflow 170 + * Returns: zero on success, -1 on overflow. 181 171 */ 182 172 int seq_buf_puts(struct seq_buf *s, const char *str) 183 173 { ··· 206 196 * 207 197 * Copy a single character into the sequence buffer. 208 198 * 209 - * Returns zero on success, -1 on overflow 199 + * Returns: zero on success, -1 on overflow. 210 200 */ 211 201 int seq_buf_putc(struct seq_buf *s, unsigned char c) 212 202 { ··· 222 212 EXPORT_SYMBOL_GPL(seq_buf_putc); 223 213 224 214 /** 225 - * seq_buf_putmem - write raw data into the sequenc buffer 215 + * seq_buf_putmem - write raw data into the sequence buffer 226 216 * @s: seq_buf descriptor 227 217 * @mem: The raw memory to copy into the buffer 228 218 * @len: The length of the raw memory to copy (in bytes) ··· 231 221 * buffer and a strcpy() would not work. Using this function allows 232 222 * for such cases. 233 223 * 234 - * Returns zero on success, -1 on overflow 224 + * Returns: zero on success, -1 on overflow. 235 225 */ 236 226 int seq_buf_putmem(struct seq_buf *s, const void *mem, unsigned int len) 237 227 { ··· 259 249 * raw memory into the buffer it writes its ASCII representation of it 260 250 * in hex characters. 261 251 * 262 - * Returns zero on success, -1 on overflow 252 + * Returns: zero on success, -1 on overflow. 263 253 */ 264 254 int seq_buf_putmem_hex(struct seq_buf *s, const void *mem, 265 255 unsigned int len) ··· 307 297 * 308 298 * Write a path name into the sequence buffer. 309 299 * 310 - * Returns the number of written bytes on success, -1 on overflow 300 + * Returns: the number of written bytes on success, -1 on overflow. 311 301 */ 312 302 int seq_buf_path(struct seq_buf *s, const struct path *path, const char *esc) 313 303 { ··· 342 332 * or until it reaches the end of the content in the buffer (@s->len), 343 333 * whichever comes first. 344 334 * 335 + * Returns: 345 336 * On success, it returns a positive number of the number of bytes 346 337 * it copied. 347 338 * ··· 393 382 * linebuf size is maximal length for one line. 394 383 * 32 * 3 - maximum bytes per line, each printed into 2 chars + 1 for 395 384 * separating space 396 - * 2 - spaces separating hex dump and ascii representation 397 - * 32 - ascii representation 385 + * 2 - spaces separating hex dump and ASCII representation 386 + * 32 - ASCII representation 398 387 * 1 - terminating '\0' 399 388 * 400 - * Returns zero on success, -1 on overflow 389 + * Returns: zero on success, -1 on overflow. 401 390 */ 402 391 int seq_buf_hex_dump(struct seq_buf *s, const char *prefix_str, int prefix_type, 403 392 int rowsize, int groupsize,
+57 -29
net/bridge/br_switchdev.c
··· 595 595 } 596 596 597 597 static int br_switchdev_mdb_queue_one(struct list_head *mdb_list, 598 + struct net_device *dev, 599 + unsigned long action, 598 600 enum switchdev_obj_id id, 599 601 const struct net_bridge_mdb_entry *mp, 600 602 struct net_device *orig_dev) 601 603 { 602 - struct switchdev_obj_port_mdb *mdb; 604 + struct switchdev_obj_port_mdb mdb = { 605 + .obj = { 606 + .id = id, 607 + .orig_dev = orig_dev, 608 + }, 609 + }; 610 + struct switchdev_obj_port_mdb *pmdb; 603 611 604 - mdb = kzalloc(sizeof(*mdb), GFP_ATOMIC); 605 - if (!mdb) 612 + br_switchdev_mdb_populate(&mdb, mp); 613 + 614 + if (action == SWITCHDEV_PORT_OBJ_ADD && 615 + switchdev_port_obj_act_is_deferred(dev, action, &mdb.obj)) { 616 + /* This event is already in the deferred queue of 617 + * events, so this replay must be elided, lest the 618 + * driver receives duplicate events for it. This can 619 + * only happen when replaying additions, since 620 + * modifications are always immediately visible in 621 + * br->mdb_list, whereas actual event delivery may be 622 + * delayed. 623 + */ 624 + return 0; 625 + } 626 + 627 + pmdb = kmemdup(&mdb, sizeof(mdb), GFP_ATOMIC); 628 + if (!pmdb) 606 629 return -ENOMEM; 607 630 608 - mdb->obj.id = id; 609 - mdb->obj.orig_dev = orig_dev; 610 - br_switchdev_mdb_populate(mdb, mp); 611 - list_add_tail(&mdb->obj.list, mdb_list); 612 - 631 + list_add_tail(&pmdb->obj.list, mdb_list); 613 632 return 0; 614 633 } 615 634 ··· 696 677 if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) 697 678 return 0; 698 679 699 - /* We cannot walk over br->mdb_list protected just by the rtnl_mutex, 700 - * because the write-side protection is br->multicast_lock. But we 701 - * need to emulate the [ blocking ] calling context of a regular 702 - * switchdev event, so since both br->multicast_lock and RCU read side 703 - * critical sections are atomic, we have no choice but to pick the RCU 704 - * read side lock, queue up all our events, leave the critical section 705 - * and notify switchdev from blocking context. 706 - */ 707 - rcu_read_lock(); 680 + if (adding) 681 + action = SWITCHDEV_PORT_OBJ_ADD; 682 + else 683 + action = SWITCHDEV_PORT_OBJ_DEL; 708 684 709 - hlist_for_each_entry_rcu(mp, &br->mdb_list, mdb_node) { 685 + /* br_switchdev_mdb_queue_one() will take care to not queue a 686 + * replay of an event that is already pending in the switchdev 687 + * deferred queue. In order to safely determine that, there 688 + * must be no new deferred MDB notifications enqueued for the 689 + * duration of the MDB scan. Therefore, grab the write-side 690 + * lock to avoid racing with any concurrent IGMP/MLD snooping. 691 + */ 692 + spin_lock_bh(&br->multicast_lock); 693 + 694 + hlist_for_each_entry(mp, &br->mdb_list, mdb_node) { 710 695 struct net_bridge_port_group __rcu * const *pp; 711 696 const struct net_bridge_port_group *p; 712 697 713 698 if (mp->host_joined) { 714 - err = br_switchdev_mdb_queue_one(&mdb_list, 699 + err = br_switchdev_mdb_queue_one(&mdb_list, dev, action, 715 700 SWITCHDEV_OBJ_ID_HOST_MDB, 716 701 mp, br_dev); 717 702 if (err) { 718 - rcu_read_unlock(); 703 + spin_unlock_bh(&br->multicast_lock); 719 704 goto out_free_mdb; 720 705 } 721 706 } 722 707 723 - for (pp = &mp->ports; (p = rcu_dereference(*pp)) != NULL; 708 + for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; 724 709 pp = &p->next) { 725 710 if (p->key.port->dev != dev) 726 711 continue; 727 712 728 - err = br_switchdev_mdb_queue_one(&mdb_list, 713 + err = br_switchdev_mdb_queue_one(&mdb_list, dev, action, 729 714 SWITCHDEV_OBJ_ID_PORT_MDB, 730 715 mp, dev); 731 716 if (err) { 732 717 spin_unlock_bh(&br->multicast_lock); 733 718 goto out_free_mdb; 734 719 } 735 720 } 736 721 } 737 722 738 - rcu_read_unlock(); 739 - 740 - if (adding) 741 - action = SWITCHDEV_PORT_OBJ_ADD; 742 - else 743 - action = SWITCHDEV_PORT_OBJ_DEL; 723 + spin_unlock_bh(&br->multicast_lock); 744 724 745 725 list_for_each_entry(obj, &mdb_list, list) { 746 726 err = br_switchdev_mdb_replay_one(nb, dev, ··· 804 786 br_switchdev_mdb_replay(br_dev, dev, ctx, false, blocking_nb, NULL); 805 787 806 788 br_switchdev_vlan_replay(br_dev, ctx, false, blocking_nb, NULL); 789 + 790 + /* Make sure that the device leaving this bridge has seen all 791 + * relevant events before it is disassociated. In the normal 792 + * case, when the device is directly attached to the bridge, 793 + * this is covered by del_nbp(). If the association was indirect 794 + * however, e.g. via a team or bond, and the device is leaving 795 + * that intermediate device, then the bridge port remains in 796 + * place. 797 + */ 798 + switchdev_deferred_process(); 807 799 } 808 800 809 801 /* Let the bridge know that this port is offloaded, so that it can assign a
+5 -2
net/core/skmsg.c
··· 1226 1226 1227 1227 rcu_read_lock(); 1228 1228 psock = sk_psock(sk); 1229 - if (psock) 1230 - psock->saved_data_ready(sk); 1229 + if (psock) { 1230 + read_lock_bh(&sk->sk_callback_lock); 1231 + sk_psock_data_ready(sk, psock); 1232 + read_unlock_bh(&sk->sk_callback_lock); 1233 + } 1231 1234 rcu_read_unlock(); 1232 1235 } 1233 1236 }
+11 -12
net/core/sock.c
··· 1188 1188 */ 1189 1189 WRITE_ONCE(sk->sk_txrehash, (u8)val); 1190 1190 return 0; 1191 + case SO_PEEK_OFF: 1192 + { 1193 + int (*set_peek_off)(struct sock *sk, int val); 1194 + 1195 + set_peek_off = READ_ONCE(sock->ops)->set_peek_off; 1196 + if (set_peek_off) 1197 + ret = set_peek_off(sk, val); 1198 + else 1199 + ret = -EOPNOTSUPP; 1200 + return ret; 1201 + } 1191 1202 } 1192 1203 1193 1204 sockopt_lock_sock(sk); ··· 1440 1429 case SO_WIFI_STATUS: 1441 1430 sock_valbool_flag(sk, SOCK_WIFI_STATUS, valbool); 1442 1431 break; 1443 - 1444 - case SO_PEEK_OFF: 1445 - { 1446 - int (*set_peek_off)(struct sock *sk, int val); 1447 - 1448 - set_peek_off = READ_ONCE(sock->ops)->set_peek_off; 1449 - if (set_peek_off) 1450 - ret = set_peek_off(sk, val); 1451 - else 1452 - ret = -EOPNOTSUPP; 1453 - break; 1454 - } 1455 1432 1456 1433 case SO_NOFCS: 1457 1434 sock_valbool_flag(sk, SOCK_NOFCS, valbool);
+9 -3
net/devlink/core.c
··· 529 529 { 530 530 int err; 531 531 532 - err = genl_register_family(&devlink_nl_family); 533 - if (err) 534 - goto out; 535 532 err = register_pernet_subsys(&devlink_pernet_ops); 536 533 if (err) 537 534 goto out; 535 + err = genl_register_family(&devlink_nl_family); 536 + if (err) 537 + goto out_unreg_pernet_subsys; 538 538 err = register_netdevice_notifier(&devlink_port_netdevice_nb); 539 + if (!err) 540 + return 0; 539 541 542 + genl_unregister_family(&devlink_nl_family); 543 + 544 + out_unreg_pernet_subsys: 545 + unregister_pernet_subsys(&devlink_pernet_ops); 540 546 out: 541 547 WARN_ON(err); 542 548 return err;
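The devlink fix reorders registration so that failure of a later step unwinds every earlier step, in reverse order. A tiny runnable sketch of that goto-unwind idiom, with stand-in steps (not devlink's real calls) that log setup as lowercase letters and teardown as uppercase:

```c
#include <stdbool.h>
#include <string.h>

/* Each completed setup step must be undone in reverse order when a
 * later step fails; the trace buffer records the order of operations. */
static int log_pos;
static char log_buf[8];

static int step(char c, bool fail)
{
	if (fail)
		return -1;
	log_buf[log_pos++] = c;		/* record successful setup */
	return 0;
}

static void unstep(char c)
{
	log_buf[log_pos++] = c - 'a' + 'A';	/* record teardown */
}

static int init(bool fail_third)
{
	int err;

	err = step('a', false);
	if (err)
		goto out;
	err = step('b', false);
	if (err)
		goto undo_a;
	err = step('c', fail_third);
	if (!err)
		return 0;

	unstep('b');	/* unwind strictly in reverse order */
undo_a:
	unstep('a');
out:
	return err;
}

/* Run init() from a clean slate and compare the recorded trace. */
static bool trace_is(bool fail_third, const char *want)
{
	log_pos = 0;
	memset(log_buf, 0, sizeof(log_buf));
	init(fail_third);
	return strcmp(log_buf, want) == 0;
}
```

On the failure path the trace reads "abBA": exactly the mirrored teardown the devlink hunk restores by unregistering the notifier, family, and pernet ops in the opposite order of their registration.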
+1 -1
net/devlink/port.c
··· 583 583 584 584 xa_for_each_start(&devlink->ports, port_index, devlink_port, state->idx) { 585 585 err = devlink_nl_port_fill(msg, devlink_port, 586 - DEVLINK_CMD_NEW, 586 + DEVLINK_CMD_PORT_NEW, 587 587 NETLINK_CB(cb->skb).portid, 588 588 cb->nlh->nlmsg_seq, flags, 589 589 cb->extack);
+2 -1
net/ipv4/arp.c
··· 1125 1125 if (neigh) { 1126 1126 if (!(READ_ONCE(neigh->nud_state) & NUD_NOARP)) { 1127 1127 read_lock_bh(&neigh->lock); 1128 - memcpy(r->arp_ha.sa_data, neigh->ha, dev->addr_len); 1128 + memcpy(r->arp_ha.sa_data, neigh->ha, 1129 + min(dev->addr_len, sizeof(r->arp_ha.sa_data_min))); 1129 1130 r->arp_flags = arp_state_to_flags(neigh); 1130 1131 read_unlock_bh(&neigh->lock); 1131 1132 r->arp_ha.sa_family = dev->type;
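The arp.c change clamps the copy length with `min()` instead of trusting `dev->addr_len` to fit the destination field. The same bounded-copy pattern in a self-contained sketch (struct and sizes are illustrative, not the kernel's):

```c
#include <string.h>

/* Clamp the source's claimed length to the destination field so a long
 * hardware address cannot overflow it. */
#define HW_ADDR_MAX 8

struct addr_out {
	unsigned char sa_data[HW_ADDR_MAX];
};

static size_t copy_hw_addr(struct addr_out *out,
			   const unsigned char *ha, size_t addr_len)
{
	size_t n = addr_len < sizeof(out->sa_data)
		   ? addr_len : sizeof(out->sa_data);

	memcpy(out->sa_data, ha, n);	/* n <= sizeof(sa_data): no overflow */
	return n;
}

static int demo(void)
{
	const unsigned char src[12] = "ABCDEFGHIJK";
	struct addr_out out = { { 0 } };

	if (copy_hw_addr(&out, src, sizeof(src)) != HW_ADDR_MAX)
		return 1;	/* 12-byte source clamped to 8 */
	if (out.sa_data[7] != 'H')
		return 2;
	if (copy_hw_addr(&out, src, 3) != 3)
		return 3;	/* short source copied in full */
	return 0;
}
```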
+17 -4
net/ipv4/devinet.c
··· 1825 1825 return err; 1826 1826 } 1827 1827 1828 + /* Combine dev_addr_genid and dev_base_seq to detect changes. 1829 + */ 1830 + static u32 inet_base_seq(const struct net *net) 1831 + { 1832 + u32 res = atomic_read(&net->ipv4.dev_addr_genid) + 1833 + net->dev_base_seq; 1834 + 1835 + /* Must not return 0 (see nl_dump_check_consistent()). 1836 + * Chose a value far away from 0. 1837 + */ 1838 + if (!res) 1839 + res = 0x80000000; 1840 + return res; 1841 + } 1842 + 1828 1843 static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb) 1829 1844 { 1830 1845 const struct nlmsghdr *nlh = cb->nlh; ··· 1891 1876 idx = 0; 1892 1877 head = &tgt_net->dev_index_head[h]; 1893 1878 rcu_read_lock(); 1894 - cb->seq = atomic_read(&tgt_net->ipv4.dev_addr_genid) ^ 1895 - tgt_net->dev_base_seq; 1879 + cb->seq = inet_base_seq(tgt_net); 1896 1880 hlist_for_each_entry_rcu(dev, head, index_hlist) { 1897 1881 if (idx < s_idx) 1898 1882 goto cont; ··· 2292 2278 idx = 0; 2293 2279 head = &net->dev_index_head[h]; 2294 2280 rcu_read_lock(); 2295 - cb->seq = atomic_read(&net->ipv4.dev_addr_genid) ^ 2296 - net->dev_base_seq; 2281 + cb->seq = inet_base_seq(net); 2297 2282 hlist_for_each_entry_rcu(dev, head, index_hlist) { 2298 2283 if (idx < s_idx) 2299 2284 goto cont;
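`inet_base_seq()` above folds two change counters into one generation number and remaps a result of 0, because a zero `cb->seq` would disable the consistency check in `nl_dump_check_consistent()`. The arithmetic in a userspace sketch:

```c
#include <stdint.h>

/* Fold two change counters into a single non-zero generation number,
 * mirroring inet_base_seq(): 0 means "no consistency check" to the
 * netlink dump code, so it is remapped to a fixed non-zero value. */
static uint32_t base_seq(uint32_t addr_genid, uint32_t dev_base_seq)
{
	uint32_t res = addr_genid + dev_base_seq;

	if (!res)
		res = 0x80000000u;	/* any fixed non-zero value works */
	return res;
}
```

Note the wraparound case: two counters that sum to 0 mod 2^32 still yield a usable non-zero sequence.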
+24 -1
net/ipv4/inet_hashtables.c
··· 1130 1130 return 0; 1131 1131 1132 1132 error: 1133 + if (sk_hashed(sk)) { 1134 + spinlock_t *lock = inet_ehash_lockp(hinfo, sk->sk_hash); 1135 + 1136 + sock_prot_inuse_add(net, sk->sk_prot, -1); 1137 + 1138 + spin_lock(lock); 1139 + sk_nulls_del_node_init_rcu(sk); 1140 + spin_unlock(lock); 1141 + 1142 + sk->sk_hash = 0; 1143 + inet_sk(sk)->inet_sport = 0; 1144 + inet_sk(sk)->inet_num = 0; 1145 + 1146 + if (tw) 1147 + inet_twsk_bind_unhash(tw, hinfo); 1148 + } 1149 + 1133 1150 spin_unlock(&head2->lock); 1134 1151 if (tb_created) 1135 1152 inet_bind_bucket_destroy(hinfo->bind_bucket_cachep, tb); 1136 - spin_unlock_bh(&head->lock); 1153 + spin_unlock(&head->lock); 1154 + 1155 + if (tw) 1156 + inet_twsk_deschedule_put(tw); 1157 + 1158 + local_bh_enable(); 1159 + 1137 1160 return -ENOMEM; 1138 1161 } 1139 1162
+1 -5
net/ipv4/udp.c
··· 1589 1589 1590 1590 void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len) 1591 1591 { 1592 - if (unlikely(READ_ONCE(udp_sk(sk)->peeking_with_offset))) { 1593 - bool slow = lock_sock_fast(sk); 1594 - 1592 + if (unlikely(READ_ONCE(udp_sk(sk)->peeking_with_offset))) 1595 1593 sk_peek_offset_bwd(sk, len); 1596 - unlock_sock_fast(sk, slow); 1597 - } 1598 1594 1599 1595 if (!skb_unref(skb)) 1600 1596 return;
+18 -3
net/ipv6/addrconf.c
··· 710 710 return err; 711 711 } 712 712 713 + /* Combine dev_addr_genid and dev_base_seq to detect changes. 714 + */ 715 + static u32 inet6_base_seq(const struct net *net) 716 + { 717 + u32 res = atomic_read(&net->ipv6.dev_addr_genid) + 718 + net->dev_base_seq; 719 + 720 + /* Must not return 0 (see nl_dump_check_consistent()). 721 + * Chose a value far away from 0. 722 + */ 723 + if (!res) 724 + res = 0x80000000; 725 + return res; 726 + } 727 + 728 + 713 729 static int inet6_netconf_dump_devconf(struct sk_buff *skb, 714 730 struct netlink_callback *cb) 715 731 { ··· 759 743 idx = 0; 760 744 head = &net->dev_index_head[h]; 761 745 rcu_read_lock(); 762 - cb->seq = atomic_read(&net->ipv6.dev_addr_genid) ^ 763 - net->dev_base_seq; 746 + cb->seq = inet6_base_seq(net); 764 747 hlist_for_each_entry_rcu(dev, head, index_hlist) { 765 748 if (idx < s_idx) 766 749 goto cont; ··· 5434 5419 } 5435 5420 5436 5421 rcu_read_lock(); 5437 - cb->seq = atomic_read(&tgt_net->ipv6.dev_addr_genid) ^ tgt_net->dev_base_seq; 5422 + cb->seq = inet6_base_seq(tgt_net); 5438 5423 for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { 5439 5424 idx = 0; 5440 5425 head = &tgt_net->dev_index_head[h];
+10
net/ipv6/exthdrs.c
··· 177 177 case IPV6_TLV_IOAM: 178 178 if (!ipv6_hop_ioam(skb, off)) 179 179 return false; 180 + 181 + nh = skb_network_header(skb); 180 182 break; 181 183 case IPV6_TLV_JUMBO: 182 184 if (!ipv6_hop_jumbo(skb, off)) ··· 944 942 945 943 if (!skb_valid_dst(skb)) 946 944 ip6_route_input(skb); 945 + 946 + /* About to mangle packet header */ 947 + if (skb_ensure_writable(skb, optoff + 2 + hdr->opt_len)) 948 + goto drop; 949 + 950 + /* Trace pointer may have changed */ 951 + trace = (struct ioam6_trace_hdr *)(skb_network_header(skb) 952 + + optoff + sizeof(*hdr)); 947 953 948 954 ioam6_fill_trace_data(skb, ns, trace, true); 949 955 break;
+11 -9
net/ipv6/seg6.c
··· 512 512 { 513 513 int err; 514 514 515 - err = genl_register_family(&seg6_genl_family); 515 + err = register_pernet_subsys(&ip6_segments_ops); 516 516 if (err) 517 517 goto out; 518 518 519 - err = register_pernet_subsys(&ip6_segments_ops); 519 + err = genl_register_family(&seg6_genl_family); 520 520 if (err) 521 - goto out_unregister_genl; 521 + goto out_unregister_pernet; 522 522 523 523 #ifdef CONFIG_IPV6_SEG6_LWTUNNEL 524 524 err = seg6_iptunnel_init(); 525 525 if (err) 526 - goto out_unregister_pernet; 526 + goto out_unregister_genl; 527 527 528 528 err = seg6_local_init(); 529 - if (err) 530 - goto out_unregister_pernet; 529 + if (err) { 530 + seg6_iptunnel_exit(); 531 + goto out_unregister_genl; 532 + } 531 533 #endif 532 534 533 535 #ifdef CONFIG_IPV6_SEG6_HMAC ··· 550 548 #endif 551 549 #endif 552 550 #ifdef CONFIG_IPV6_SEG6_LWTUNNEL 553 - out_unregister_pernet: 554 - unregister_pernet_subsys(&ip6_segments_ops); 555 - #endif 556 551 out_unregister_genl: 557 552 genl_unregister_family(&seg6_genl_family); 553 + #endif 554 + out_unregister_pernet: 555 + unregister_pernet_subsys(&ip6_segments_ops); 558 556 goto out; 559 557 } 560 558
+2 -2
net/iucv/iucv.c
··· 156 156 static LIST_HEAD(iucv_handler_list); 157 157 158 158 /* 159 - * iucv_path_table: an array of iucv_path structures. 159 + * iucv_path_table: array of pointers to iucv_path structures. 160 160 */ 161 161 static struct iucv_path **iucv_path_table; 162 162 static unsigned long iucv_max_pathid; ··· 545 545 546 546 cpus_read_lock(); 547 547 rc = -ENOMEM; 548 - alloc_size = iucv_max_pathid * sizeof(struct iucv_path); 548 + alloc_size = iucv_max_pathid * sizeof(*iucv_path_table); 549 549 iucv_path_table = kzalloc(alloc_size, GFP_KERNEL); 550 550 if (!iucv_path_table) 551 551 goto out;
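The iucv fix sizes each slot of a table of *pointers* as `sizeof(*iucv_path_table)` rather than `sizeof(struct iucv_path)`. The `sizeof(*ptr)` idiom in isolation (the struct and names are illustrative):

```c
#include <stdlib.h>

/* A table of pointers: each slot is pointer-sized, and deriving the
 * element size from the variable itself makes that impossible to get
 * wrong, even if the element type changes later. */
struct path_entry {
	char pad[64];
};

static struct path_entry **alloc_path_table(size_t max_id)
{
	struct path_entry **table;

	/* sizeof(*table) == sizeof(struct path_entry *), not
	 * sizeof(struct path_entry): the bug was allocating the
	 * latter per slot. */
	table = calloc(max_id, sizeof(*table));
	return table;
}
```

In the original bug the table was merely over-allocated; the same habit under-allocates the moment an element grows larger than a pointer, which is why deriving the size from the variable is the safer default.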
+1 -1
net/l2tp/l2tp_ip6.c
··· 627 627 628 628 back_from_confirm: 629 629 lock_sock(sk); 630 - ulen = len + skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0; 630 + ulen = len + (skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0); 631 631 err = ip6_append_data(sk, ip_generic_getfrag, msg, 632 632 ulen, transhdrlen, &ipc6, 633 633 &fl6, (struct rt6_info *)dst,
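The l2tp_ip6 one-liner is a classic precedence bug: `?:` binds less tightly than `+`, so `len + skb_queue_empty(...) ? transhdrlen : 0` parses as `(len + empty) ? transhdrlen : 0`, discarding `len` entirely. Reduced to a self-contained demonstration with arbitrary values:

```c
/* `a + cond ? x : y` parses as `(a + cond) ? x : y`, because the
 * conditional operator has lower precedence than addition. */
static int buggy_len(int len, int queue_empty, int hdrlen)
{
	return len + queue_empty ? hdrlen : 0;	/* (len+queue_empty) ? ... */
}

static int fixed_len(int len, int queue_empty, int hdrlen)
{
	return len + (queue_empty ? hdrlen : 0);	/* the intended sum */
}
```

With `len = 100`, the buggy form returns `hdrlen` or 0, never anything near 100, which is exactly why the fix is just a pair of parentheses.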
+1 -1
net/mctp/route.c
··· 721 721 spin_unlock_irqrestore(&mns->keys_lock, flags); 722 722 723 723 if (!tagbits) { 724 - kfree(key); 724 + mctp_key_unref(key); 725 725 return ERR_PTR(-EBUSY); 726 726 } 727 727
+6 -2
net/mptcp/diag.c
··· 13 13 #include <uapi/linux/mptcp.h> 14 14 #include "protocol.h" 15 15 16 - static int subflow_get_info(const struct sock *sk, struct sk_buff *skb) 16 + static int subflow_get_info(struct sock *sk, struct sk_buff *skb) 17 17 { 18 18 struct mptcp_subflow_context *sf; 19 19 struct nlattr *start; 20 20 u32 flags = 0; 21 + bool slow; 21 22 int err; 22 23 23 24 start = nla_nest_start_noflag(skb, INET_ULP_INFO_MPTCP); 24 25 if (!start) 25 26 return -EMSGSIZE; 26 27 28 + slow = lock_sock_fast(sk); 27 29 rcu_read_lock(); 28 30 sf = rcu_dereference(inet_csk(sk)->icsk_ulp_data); 29 31 if (!sf) { ··· 65 63 sf->map_data_len) || 66 64 nla_put_u32(skb, MPTCP_SUBFLOW_ATTR_FLAGS, flags) || 67 65 nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_REM, sf->remote_id) || 68 - nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_LOC, sf->local_id)) { 66 + nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_LOC, subflow_get_local_id(sf))) { 69 67 err = -EMSGSIZE; 70 68 goto nla_failure; 71 69 } 72 70 73 71 rcu_read_unlock(); 72 + unlock_sock_fast(sk, slow); 74 73 nla_nest_end(skb, start); 75 74 return 0; 76 75 77 76 nla_failure: 78 77 rcu_read_unlock(); 78 + unlock_sock_fast(sk, slow); 79 79 nla_nest_cancel(skb, start); 80 80 return err; 81 81 }
+43 -26
net/mptcp/pm_netlink.c
··· 396 396 } 397 397 } 398 398 399 - static bool lookup_address_in_vec(const struct mptcp_addr_info *addrs, unsigned int nr, 400 - const struct mptcp_addr_info *addr) 401 - { 402 - int i; 403 - 404 - for (i = 0; i < nr; i++) { 405 - if (addrs[i].id == addr->id) 406 - return true; 407 - } 408 - 409 - return false; 410 - } 411 - 412 399 /* Fill all the remote addresses into the array addrs[], 413 400 * and return the array size. 414 401 */ ··· 427 440 msk->pm.subflows++; 428 441 addrs[i++] = remote; 429 442 } else { 443 + DECLARE_BITMAP(unavail_id, MPTCP_PM_MAX_ADDR_ID + 1); 444 + 445 + /* Forbid creation of new subflows matching existing 446 + * ones, possibly already created by incoming ADD_ADDR 447 + */ 448 + bitmap_zero(unavail_id, MPTCP_PM_MAX_ADDR_ID + 1); 449 + mptcp_for_each_subflow(msk, subflow) 450 + if (READ_ONCE(subflow->local_id) == local->id) 451 + __set_bit(subflow->remote_id, unavail_id); 452 + 430 453 mptcp_for_each_subflow(msk, subflow) { 431 454 ssk = mptcp_subflow_tcp_sock(subflow); 432 455 remote_address((struct sock_common *)ssk, &addrs[i]); 433 - addrs[i].id = subflow->remote_id; 456 + addrs[i].id = READ_ONCE(subflow->remote_id); 434 457 if (deny_id0 && !addrs[i].id) 458 + continue; 459 + 460 + if (test_bit(addrs[i].id, unavail_id)) 435 461 continue; 436 462 437 463 if (!mptcp_pm_addr_families_match(sk, local, &addrs[i])) 438 464 continue; 439 465 440 - if (!lookup_address_in_vec(addrs, i, &addrs[i]) && 441 - msk->pm.subflows < subflows_max) { 466 + if (msk->pm.subflows < subflows_max) { 467 + /* forbid creating multiple address towards 468 + * this id 469 + */ 470 + __set_bit(addrs[i].id, unavail_id); 442 471 msk->pm.subflows++; 443 472 i++; 444 473 } ··· 802 799 803 800 mptcp_for_each_subflow_safe(msk, subflow, tmp) { 804 801 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 802 + u8 remote_id = READ_ONCE(subflow->remote_id); 805 803 int how = RCV_SHUTDOWN | SEND_SHUTDOWN; 806 - u8 id = subflow->local_id; 804 + u8 id = 
subflow_get_local_id(subflow); 807 805 808 - if (rm_type == MPTCP_MIB_RMADDR && subflow->remote_id != rm_id) 806 + if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id) 809 807 continue; 810 808 if (rm_type == MPTCP_MIB_RMSUBFLOW && !mptcp_local_id_match(msk, id, rm_id)) 811 809 continue; 812 810 813 811 pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u", 814 812 rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow", 815 - i, rm_id, subflow->local_id, subflow->remote_id, 816 - msk->mpc_endpoint_id); 813 + i, rm_id, id, remote_id, msk->mpc_endpoint_id); 817 814 spin_unlock_bh(&msk->pm.lock); 818 815 mptcp_subflow_shutdown(sk, ssk, how); 819 816 ··· 904 901 } 905 902 906 903 static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet, 907 - struct mptcp_pm_addr_entry *entry) 904 + struct mptcp_pm_addr_entry *entry, 905 + bool needs_id) 908 906 { 909 907 struct mptcp_pm_addr_entry *cur, *del_entry = NULL; 910 908 unsigned int addr_max; ··· 953 949 } 954 950 } 955 951 956 - if (!entry->addr.id) { 952 + if (!entry->addr.id && needs_id) { 957 953 find_next: 958 954 entry->addr.id = find_next_zero_bit(pernet->id_bitmap, 959 955 MPTCP_PM_MAX_ADDR_ID + 1, ··· 964 960 } 965 961 } 966 962 967 - if (!entry->addr.id) 963 + if (!entry->addr.id && needs_id) 968 964 goto out; 969 965 970 966 __set_bit(entry->addr.id, pernet->id_bitmap); ··· 1096 1092 entry->ifindex = 0; 1097 1093 entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT; 1098 1094 entry->lsk = NULL; 1099 - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry); 1095 + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true); 1100 1096 if (ret < 0) 1101 1097 kfree(entry); 1102 1098 ··· 1289 1285 return 0; 1290 1286 } 1291 1287 1288 + static bool mptcp_pm_has_addr_attr_id(const struct nlattr *attr, 1289 + struct genl_info *info) 1290 + { 1291 + struct nlattr *tb[MPTCP_PM_ADDR_ATTR_MAX + 1]; 1292 + 1293 + if (!nla_parse_nested_deprecated(tb, MPTCP_PM_ADDR_ATTR_MAX, attr, 1294 + 
mptcp_pm_address_nl_policy, info->extack) && 1295 + tb[MPTCP_PM_ADDR_ATTR_ID]) 1296 + return true; 1297 + return false; 1298 + } 1299 + 1292 1300 int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info) 1293 1301 { 1294 1302 struct nlattr *attr = info->attrs[MPTCP_PM_ENDPOINT_ADDR]; ··· 1342 1326 goto out_free; 1343 1327 } 1344 1328 } 1345 - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry); 1329 + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, 1330 + !mptcp_pm_has_addr_attr_id(attr, info)); 1346 1331 if (ret < 0) { 1347 1332 GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret); 1348 1333 goto out_free; ··· 1997 1980 if (WARN_ON_ONCE(!sf)) 1998 1981 return -EINVAL; 1999 1982 2000 - if (nla_put_u8(skb, MPTCP_ATTR_LOC_ID, sf->local_id)) 1983 + if (nla_put_u8(skb, MPTCP_ATTR_LOC_ID, subflow_get_local_id(sf))) 2001 1984 return -EMSGSIZE; 2002 1985 2003 1986 if (nla_put_u8(skb, MPTCP_ATTR_REM_ID, sf->remote_id))
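The pm_netlink.c hunk above drops the linear `lookup_address_in_vec()` scan in favor of a `DECLARE_BITMAP` of unavailable remote address IDs, pre-seeded from the subflows already bound to this local ID, so fullmesh never creates a duplicate (local id, remote id) pair. A minimal Python sketch of that dedupe logic, with a `set` standing in for the bitmap (the function name and tuple layout are illustrative, not the kernel API):

```python
MPTCP_PM_MAX_ADDR_ID = 255

def fill_remote_addresses(local_id, existing_subflows, candidates, subflows_max):
    """Mimic the bitmap dedupe: skip remote ids already used by subflows
    bound to this local id, and never pick the same id twice."""
    unavail = set()  # stands in for DECLARE_BITMAP(unavail_id, ...)
    for sf_local, sf_remote in existing_subflows:
        if sf_local == local_id:
            unavail.add(sf_remote)     # already created, e.g. by incoming ADD_ADDR

    out = []
    for remote_id in candidates:
        if remote_id in unavail:
            continue
        if len(out) < subflows_max:
            unavail.add(remote_id)     # forbid a second subflow toward this id
            out.append(remote_id)
    return out
```

Pre-seeding from existing subflows is the behavioral change: the old vector lookup only caught duplicates among the candidates being collected in this pass, not subflows created earlier.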
+8 -7
net/mptcp/pm_userspace.c
··· 26 26 } 27 27 28 28 static int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk, 29 - struct mptcp_pm_addr_entry *entry) 29 + struct mptcp_pm_addr_entry *entry, 30 + bool needs_id) 30 31 { 31 32 DECLARE_BITMAP(id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1); 32 33 struct mptcp_pm_addr_entry *match = NULL; ··· 42 41 spin_lock_bh(&msk->pm.lock); 43 42 list_for_each_entry(e, &msk->pm.userspace_pm_local_addr_list, list) { 44 43 addr_match = mptcp_addresses_equal(&e->addr, &entry->addr, true); 45 - if (addr_match && entry->addr.id == 0) 44 + if (addr_match && entry->addr.id == 0 && needs_id) 46 45 entry->addr.id = e->addr.id; 47 46 id_match = (e->addr.id == entry->addr.id); 48 47 if (addr_match && id_match) { ··· 65 64 } 66 65 67 66 *e = *entry; 68 - if (!e->addr.id) 67 + if (!e->addr.id && needs_id) 69 68 e->addr.id = find_next_zero_bit(id_bitmap, 70 69 MPTCP_PM_MAX_ADDR_ID + 1, 71 70 1); ··· 154 153 if (new_entry.addr.port == msk_sport) 155 154 new_entry.addr.port = 0; 156 155 157 - return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry); 156 + return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry, true); 158 157 } 159 158 160 159 int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info) ··· 199 198 goto announce_err; 200 199 } 201 200 202 - err = mptcp_userspace_pm_append_new_local_addr(msk, &addr_val); 201 + err = mptcp_userspace_pm_append_new_local_addr(msk, &addr_val, false); 203 202 if (err < 0) { 204 203 GENL_SET_ERR_MSG(info, "did not match address and id"); 205 204 goto announce_err; ··· 234 233 235 234 lock_sock(sk); 236 235 mptcp_for_each_subflow(msk, subflow) { 237 - if (subflow->local_id == 0) { 236 + if (READ_ONCE(subflow->local_id) == 0) { 238 237 has_id_0 = true; 239 238 break; 240 239 } ··· 379 378 } 380 379 381 380 local.addr = addr_l; 382 - err = mptcp_userspace_pm_append_new_local_addr(msk, &local); 381 + err = mptcp_userspace_pm_append_new_local_addr(msk, &local, false); 383 382 if (err < 0) { 384 383 
GENL_SET_ERR_MSG(info, "did not match address and id"); 385 384 goto create_err;
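The pm_userspace.c hunk threads a `needs_id` flag into `mptcp_userspace_pm_append_new_local_addr()` so an ID is only auto-assigned when the caller did not supply one. A simplified Python sketch of the `find_next_zero_bit`-style allocation (the function name and dict layout are illustrative, and the kernel's id/address-conflict error path is omitted):

```python
def append_local_addr(addr_list, addr, addr_id, needs_id):
    """addr_list maps id -> addr. Return the id used; auto-pick the first
    free id >= 1 only when needs_id is set (caller gave no id)."""
    for eid, eaddr in addr_list.items():
        if eaddr == addr and addr_id == 0 and needs_id:
            addr_id = eid              # reuse the id of a matching address
        if eaddr == addr and eid == addr_id:
            return eid                 # already present with the same id
    if addr_id == 0 and needs_id:
        addr_id = 1
        while addr_id in addr_list:    # find_next_zero_bit analogue
            addr_id += 1
    addr_list[addr_id] = addr
    return addr_id
```

With `needs_id` false (the new announce/subflow paths), id 0 stays id 0 rather than being silently rewritten to the next free bit.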
+1 -1
net/mptcp/protocol.c
··· 85 85 subflow->subflow_id = msk->subflow_id++; 86 86 87 87 /* This is the first subflow, always with id 0 */ 88 - subflow->local_id_valid = 1; 88 + WRITE_ONCE(subflow->local_id, 0); 89 89 mptcp_sock_graft(msk->first, sk->sk_socket); 90 90 iput(SOCK_INODE(ssock)); 91 91
+12 -3
net/mptcp/protocol.h
··· 493 493 remote_key_valid : 1, /* received the peer key from */ 494 494 disposable : 1, /* ctx can be free at ulp release time */ 495 495 stale : 1, /* unable to snd/rcv data, do not use for xmit */ 496 - local_id_valid : 1, /* local_id is correctly initialized */ 497 496 valid_csum_seen : 1, /* at least one csum validated */ 498 497 is_mptfo : 1, /* subflow is doing TFO */ 499 - __unused : 9; 498 + __unused : 10; 500 499 bool data_avail; 501 500 bool scheduled; 502 501 u32 remote_nonce; ··· 506 507 u8 hmac[MPTCPOPT_HMAC_LEN]; /* MPJ subflow only */ 507 508 u64 iasn; /* initial ack sequence number, MPC subflows only */ 508 509 }; 509 - u8 local_id; 510 + s16 local_id; /* if negative not initialized yet */ 510 511 u8 remote_id; 511 512 u8 reset_seen:1; 512 513 u8 reset_transient:1; ··· 557 558 { 558 559 memset(&subflow->reset, 0, sizeof(subflow->reset)); 559 560 subflow->request_mptcp = 1; 561 + WRITE_ONCE(subflow->local_id, -1); 560 562 } 561 563 562 564 static inline u64 ··· 1023 1023 int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc); 1024 1024 int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc); 1025 1025 int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc); 1026 + 1027 + static inline u8 subflow_get_local_id(const struct mptcp_subflow_context *subflow) 1028 + { 1029 + int local_id = READ_ONCE(subflow->local_id); 1030 + 1031 + if (local_id < 0) 1032 + return 0; 1033 + return local_id; 1034 + } 1026 1035 1027 1036 void __init mptcp_pm_nl_init(void); 1028 1037 void mptcp_pm_nl_work(struct mptcp_sock *msk);
+8 -7
net/mptcp/subflow.c
··· 536 536 subflow->backup = mp_opt.backup; 537 537 subflow->thmac = mp_opt.thmac; 538 538 subflow->remote_nonce = mp_opt.nonce; 539 - subflow->remote_id = mp_opt.join_id; 539 + WRITE_ONCE(subflow->remote_id, mp_opt.join_id); 540 540 pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d", 541 541 subflow, subflow->thmac, subflow->remote_nonce, 542 542 subflow->backup); ··· 578 578 579 579 static void subflow_set_local_id(struct mptcp_subflow_context *subflow, int local_id) 580 580 { 581 - subflow->local_id = local_id; 582 - subflow->local_id_valid = 1; 581 + WARN_ON_ONCE(local_id < 0 || local_id > 255); 582 + WRITE_ONCE(subflow->local_id, local_id); 583 583 } 584 584 585 585 static int subflow_chk_local_id(struct sock *sk) ··· 588 588 struct mptcp_sock *msk = mptcp_sk(subflow->conn); 589 589 int err; 590 590 591 - if (likely(subflow->local_id_valid)) 591 + if (likely(subflow->local_id >= 0)) 592 592 return 0; 593 593 594 594 err = mptcp_pm_get_local_id(msk, (struct sock_common *)sk); ··· 1569 1569 pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk, 1570 1570 remote_token, local_id, remote_id); 1571 1571 subflow->remote_token = remote_token; 1572 - subflow->remote_id = remote_id; 1572 + WRITE_ONCE(subflow->remote_id, remote_id); 1573 1573 subflow->request_join = 1; 1574 1574 subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP); 1575 1575 subflow->subflow_id = msk->subflow_id++; ··· 1733 1733 pr_debug("subflow=%p", ctx); 1734 1734 1735 1735 ctx->tcp_sock = sk; 1736 + WRITE_ONCE(ctx->local_id, -1); 1736 1737 1737 1738 return ctx; 1738 1739 } ··· 1969 1968 new_ctx->idsn = subflow_req->idsn; 1970 1969 1971 1970 /* this is the first subflow, id is always 0 */ 1972 - new_ctx->local_id_valid = 1; 1971 + subflow_set_local_id(new_ctx, 0); 1973 1972 } else if (subflow_req->mp_join) { 1974 1973 new_ctx->ssn_offset = subflow_req->ssn_offset; 1975 1974 new_ctx->mp_join = 1; 1976 1975 new_ctx->fully_established = 1; 1977 1976 new_ctx->remote_key_valid 
= 1; 1978 1977 new_ctx->backup = subflow_req->backup; 1979 - new_ctx->remote_id = subflow_req->remote_id; 1978 + WRITE_ONCE(new_ctx->remote_id, subflow_req->remote_id); 1980 1979 new_ctx->token = subflow_req->token; 1981 1980 new_ctx->thmac = subflow_req->thmac; 1982 1981
+14 -3
net/netfilter/nf_flow_table_core.c
··· 87 87 return 0; 88 88 } 89 89 90 + static struct dst_entry *nft_route_dst_fetch(struct nf_flow_route *route, 91 + enum flow_offload_tuple_dir dir) 92 + { 93 + struct dst_entry *dst = route->tuple[dir].dst; 94 + 95 + route->tuple[dir].dst = NULL; 96 + 97 + return dst; 98 + } 99 + 90 100 static int flow_offload_fill_route(struct flow_offload *flow, 91 - const struct nf_flow_route *route, 101 + struct nf_flow_route *route, 92 102 enum flow_offload_tuple_dir dir) 93 103 { 94 104 struct flow_offload_tuple *flow_tuple = &flow->tuplehash[dir].tuple; 95 - struct dst_entry *dst = route->tuple[dir].dst; 105 + struct dst_entry *dst = nft_route_dst_fetch(route, dir); 96 106 int i, j = 0; 97 107 98 108 switch (flow_tuple->l3proto) { ··· 132 122 ETH_ALEN); 133 123 flow_tuple->out.ifidx = route->tuple[dir].out.ifindex; 134 124 flow_tuple->out.hw_ifidx = route->tuple[dir].out.hw_ifindex; 125 + dst_release(dst); 135 126 break; 136 127 case FLOW_OFFLOAD_XMIT_XFRM: 137 128 case FLOW_OFFLOAD_XMIT_NEIGH: ··· 157 146 } 158 147 159 148 void flow_offload_route_init(struct flow_offload *flow, 160 - const struct nf_flow_route *route) 149 + struct nf_flow_route *route) 161 150 { 162 151 flow_offload_fill_route(flow, route, FLOW_OFFLOAD_DIR_ORIGINAL); 163 152 flow_offload_fill_route(flow, route, FLOW_OFFLOAD_DIR_REPLY);
+42 -39
net/netfilter/nf_tables_api.c
··· 684 684 return err; 685 685 } 686 686 687 - static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type, 688 - struct nft_flowtable *flowtable) 687 + static struct nft_trans * 688 + nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type, 689 + struct nft_flowtable *flowtable) 689 690 { 690 691 struct nft_trans *trans; 691 692 692 693 trans = nft_trans_alloc(ctx, msg_type, 693 694 sizeof(struct nft_trans_flowtable)); 694 695 if (trans == NULL) 695 - return -ENOMEM; 696 + return ERR_PTR(-ENOMEM); 696 697 697 698 if (msg_type == NFT_MSG_NEWFLOWTABLE) 698 699 nft_activate_next(ctx->net, flowtable); ··· 702 701 nft_trans_flowtable(trans) = flowtable; 703 702 nft_trans_commit_list_add_tail(ctx->net, trans); 704 703 705 - return 0; 704 + return trans; 706 705 } 707 706 708 707 static int nft_delflowtable(struct nft_ctx *ctx, 709 708 struct nft_flowtable *flowtable) 710 709 { 711 - int err; 710 + struct nft_trans *trans; 712 711 713 - err = nft_trans_flowtable_add(ctx, NFT_MSG_DELFLOWTABLE, flowtable); 714 - if (err < 0) 715 - return err; 712 + trans = nft_trans_flowtable_add(ctx, NFT_MSG_DELFLOWTABLE, flowtable); 713 + if (IS_ERR(trans)) 714 + return PTR_ERR(trans); 716 715 717 716 nft_deactivate_next(ctx->net, flowtable); 718 717 nft_use_dec(&ctx->table->use); 719 718 720 - return err; 719 + return 0; 721 720 } 722 721 723 722 static void __nft_reg_track_clobber(struct nft_regs_track *track, u8 dreg) ··· 1264 1263 return 0; 1265 1264 1266 1265 err_register_hooks: 1266 + ctx->table->flags |= NFT_TABLE_F_DORMANT; 1267 1267 nft_trans_destroy(trans); 1268 1268 return ret; 1269 1269 } ··· 2094 2092 struct nft_hook *hook; 2095 2093 int err; 2096 2094 2097 - hook = kmalloc(sizeof(struct nft_hook), GFP_KERNEL_ACCOUNT); 2095 + hook = kzalloc(sizeof(struct nft_hook), GFP_KERNEL_ACCOUNT); 2098 2096 if (!hook) { 2099 2097 err = -ENOMEM; 2100 2098 goto err_hook_alloc; ··· 2517 2515 RCU_INIT_POINTER(chain->blob_gen_0, blob); 2518 2516 
RCU_INIT_POINTER(chain->blob_gen_1, blob); 2519 2517 2520 - err = nf_tables_register_hook(net, table, chain); 2521 - if (err < 0) 2522 - goto err_destroy_chain; 2523 - 2524 2518 if (!nft_use_inc(&table->use)) { 2525 2519 err = -EMFILE; 2526 - goto err_use; 2520 + goto err_destroy_chain; 2527 2521 } 2528 2522 2529 2523 trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN); 2530 2524 if (IS_ERR(trans)) { 2531 2525 err = PTR_ERR(trans); 2532 - goto err_unregister_hook; 2526 + goto err_trans; 2533 2527 } 2534 2528 2535 2529 nft_trans_chain_policy(trans) = NFT_CHAIN_POLICY_UNSET; ··· 2533 2535 nft_trans_chain_policy(trans) = policy; 2534 2536 2535 2537 err = nft_chain_add(table, chain); 2536 - if (err < 0) { 2537 - nft_trans_destroy(trans); 2538 - goto err_unregister_hook; 2539 - } 2538 + if (err < 0) 2539 + goto err_chain_add; 2540 + 2541 + /* This must be LAST to ensure no packets are walking over this chain. */ 2542 + err = nf_tables_register_hook(net, table, chain); 2543 + if (err < 0) 2544 + goto err_register_hook; 2540 2545 2541 2546 return 0; 2542 2547 2543 - err_unregister_hook: 2548 + err_register_hook: 2549 + nft_chain_del(chain); 2550 + err_chain_add: 2551 + nft_trans_destroy(trans); 2552 + err_trans: 2544 2553 nft_use_dec_restore(&table->use); 2545 - err_use: 2546 - nf_tables_unregister_hook(net, table, chain); 2547 2554 err_destroy_chain: 2548 2555 nf_tables_chain_destroy(ctx); 2549 2556 ··· 8465 8462 u8 family = info->nfmsg->nfgen_family; 8466 8463 const struct nf_flowtable_type *type; 8467 8464 struct nft_flowtable *flowtable; 8468 - struct nft_hook *hook, *next; 8469 8465 struct net *net = info->net; 8470 8466 struct nft_table *table; 8467 + struct nft_trans *trans; 8471 8468 struct nft_ctx ctx; 8472 8469 int err; 8473 8470 ··· 8547 8544 err = nft_flowtable_parse_hook(&ctx, nla, &flowtable_hook, flowtable, 8548 8545 extack, true); 8549 8546 if (err < 0) 8550 - goto err4; 8547 + goto err_flowtable_parse_hooks; 8551 8548 8552 8549 
list_splice(&flowtable_hook.list, &flowtable->hook_list); 8553 8550 flowtable->data.priority = flowtable_hook.priority; 8554 8551 flowtable->hooknum = flowtable_hook.num; 8555 8552 8553 + trans = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable); 8554 + if (IS_ERR(trans)) { 8555 + err = PTR_ERR(trans); 8556 + goto err_flowtable_trans; 8557 + } 8558 + 8559 + /* This must be LAST to ensure no packets are walking over this flowtable. */ 8556 8560 err = nft_register_flowtable_net_hooks(ctx.net, table, 8557 8561 &flowtable->hook_list, 8558 8562 flowtable); 8559 - if (err < 0) { 8560 - nft_hooks_destroy(&flowtable->hook_list); 8561 - goto err4; 8562 - } 8563 - 8564 - err = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable); 8565 8563 if (err < 0) 8566 - goto err5; 8564 + goto err_flowtable_hooks; 8567 8565 8568 8566 list_add_tail_rcu(&flowtable->list, &table->flowtables); 8569 8567 8570 8568 return 0; 8571 - err5: 8572 - list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 8573 - nft_unregister_flowtable_hook(net, flowtable, hook); 8574 - list_del_rcu(&hook->list); 8575 - kfree_rcu(hook, rcu); 8576 - } 8577 - err4: 8569 + 8570 + err_flowtable_hooks: 8571 + nft_trans_destroy(trans); 8572 + err_flowtable_trans: 8573 + nft_hooks_destroy(&flowtable->hook_list); 8574 + err_flowtable_parse_hooks: 8578 8575 flowtable->data.type->free(&flowtable->data); 8579 8576 err3: 8580 8577 module_put(type->owner);
+2 -2
net/phonet/datagram.c
··· 34 34 35 35 switch (cmd) { 36 36 case SIOCINQ: 37 - lock_sock(sk); 37 + spin_lock_bh(&sk->sk_receive_queue.lock); 38 38 skb = skb_peek(&sk->sk_receive_queue); 39 39 *karg = skb ? skb->len : 0; 40 - release_sock(sk); 40 + spin_unlock_bh(&sk->sk_receive_queue.lock); 41 41 return 0; 42 42 43 43 case SIOCPNADDRESOURCE:
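The datagram.c fix answers SIOCINQ by peeking the head skb under `sk_receive_queue.lock` (a bh spinlock) instead of `lock_sock()`, which is heavier than needed just to read one queued length. A rough user-space analogue with a plain mutex guarding the queue (class and method names are illustrative):

```python
import threading
from collections import deque

class DgramSock:
    def __init__(self):
        self.rxq = deque()
        self.rxq_lock = threading.Lock()  # plays the role of sk_receive_queue.lock

    def enqueue(self, payload):
        with self.rxq_lock:
            self.rxq.append(payload)

    def bytes_inq(self):
        """SIOCINQ: length of the first queued datagram, 0 if none."""
        with self.rxq_lock:  # only the queue lock, not the whole socket
            return len(self.rxq[0]) if self.rxq else 0
```

The design point mirrors the kernel change: the queue's own lock is all the peek needs, so the ioctl no longer serializes against everything else holding the socket lock.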
+32 -9
net/phonet/pep.c
··· 917 917 return 0; 918 918 } 919 919 920 + static unsigned int pep_first_packet_length(struct sock *sk) 921 + { 922 + struct pep_sock *pn = pep_sk(sk); 923 + struct sk_buff_head *q; 924 + struct sk_buff *skb; 925 + unsigned int len = 0; 926 + bool found = false; 927 + 928 + if (sock_flag(sk, SOCK_URGINLINE)) { 929 + q = &pn->ctrlreq_queue; 930 + spin_lock_bh(&q->lock); 931 + skb = skb_peek(q); 932 + if (skb) { 933 + len = skb->len; 934 + found = true; 935 + } 936 + spin_unlock_bh(&q->lock); 937 + } 938 + 939 + if (likely(!found)) { 940 + q = &sk->sk_receive_queue; 941 + spin_lock_bh(&q->lock); 942 + skb = skb_peek(q); 943 + if (skb) 944 + len = skb->len; 945 + spin_unlock_bh(&q->lock); 946 + } 947 + 948 + return len; 949 + } 950 + 920 951 static int pep_ioctl(struct sock *sk, int cmd, int *karg) 921 952 { 922 953 struct pep_sock *pn = pep_sk(sk); ··· 960 929 break; 961 930 } 962 931 963 - lock_sock(sk); 964 - if (sock_flag(sk, SOCK_URGINLINE) && 965 - !skb_queue_empty(&pn->ctrlreq_queue)) 966 - *karg = skb_peek(&pn->ctrlreq_queue)->len; 967 - else if (!skb_queue_empty(&sk->sk_receive_queue)) 968 - *karg = skb_peek(&sk->sk_receive_queue)->len; 969 - else 970 - *karg = 0; 971 - release_sock(sk); 932 + *karg = pep_first_packet_length(sk); 972 933 ret = 0; 973 934 break; 974 935
+15 -21
net/sched/act_mirred.c
··· 232 232 return err; 233 233 } 234 234 235 - static bool is_mirred_nested(void) 236 - { 237 - return unlikely(__this_cpu_read(mirred_nest_level) > 1); 238 - } 239 - 240 - static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb) 235 + static int 236 + tcf_mirred_forward(bool at_ingress, bool want_ingress, struct sk_buff *skb) 241 237 { 242 238 int err; 243 239 244 240 if (!want_ingress) 245 241 err = tcf_dev_queue_xmit(skb, dev_queue_xmit); 246 - else if (is_mirred_nested()) 242 + else if (!at_ingress) 247 243 err = netif_rx(skb); 248 244 else 249 245 err = netif_receive_skb(skb); ··· 266 270 if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) { 267 271 net_notice_ratelimited("tc mirred to Houston: device %s is down\n", 268 272 dev->name); 269 - err = -ENODEV; 270 - goto out; 273 + goto err_cant_do; 271 274 } 272 275 273 276 /* we could easily avoid the clone only if called by ingress and clsact; ··· 278 283 tcf_mirred_can_reinsert(retval); 279 284 if (!dont_clone) { 280 285 skb_to_send = skb_clone(skb, GFP_ATOMIC); 281 - if (!skb_to_send) { 282 - err = -ENOMEM; 283 - goto out; 284 - } 286 + if (!skb_to_send) 287 + goto err_cant_do; 285 288 } 286 289 287 290 want_ingress = tcf_mirred_act_wants_ingress(m_eaction); ··· 312 319 313 320 skb_set_redirected(skb_to_send, skb_to_send->tc_at_ingress); 314 321 315 - err = tcf_mirred_forward(want_ingress, skb_to_send); 322 + err = tcf_mirred_forward(at_ingress, want_ingress, skb_to_send); 316 323 } else { 317 - err = tcf_mirred_forward(want_ingress, skb_to_send); 324 + err = tcf_mirred_forward(at_ingress, want_ingress, skb_to_send); 318 325 } 319 - 320 - if (err) { 321 - out: 326 + if (err) 322 327 tcf_action_inc_overlimit_qstats(&m->common); 323 - if (is_redirect) 324 - retval = TC_ACT_SHOT; 325 - } 326 328 329 + return retval; 330 + 331 + err_cant_do: 332 + if (is_redirect) 333 + retval = TC_ACT_SHOT; 334 + tcf_action_inc_overlimit_qstats(&m->common); 327 335 return retval; 328 336 } 329 337
+4 -1
net/sched/cls_flower.c
··· 2460 2460 } 2461 2461 2462 2462 errout_idr: 2463 - if (!fold) 2463 + if (!fold) { 2464 + spin_lock(&tp->lock); 2464 2465 idr_remove(&head->handle_idr, fnew->handle); 2466 + spin_unlock(&tp->lock); 2467 + } 2465 2468 __fl_put(fnew); 2466 2469 errout_tb: 2467 2470 kfree(tb);
+73
net/switchdev/switchdev.c
··· 19 19 #include <linux/rtnetlink.h> 20 20 #include <net/switchdev.h> 21 21 22 + static bool switchdev_obj_eq(const struct switchdev_obj *a, 23 + const struct switchdev_obj *b) 24 + { 25 + const struct switchdev_obj_port_vlan *va, *vb; 26 + const struct switchdev_obj_port_mdb *ma, *mb; 27 + 28 + if (a->id != b->id || a->orig_dev != b->orig_dev) 29 + return false; 30 + 31 + switch (a->id) { 32 + case SWITCHDEV_OBJ_ID_PORT_VLAN: 33 + va = SWITCHDEV_OBJ_PORT_VLAN(a); 34 + vb = SWITCHDEV_OBJ_PORT_VLAN(b); 35 + return va->flags == vb->flags && 36 + va->vid == vb->vid && 37 + va->changed == vb->changed; 38 + case SWITCHDEV_OBJ_ID_PORT_MDB: 39 + case SWITCHDEV_OBJ_ID_HOST_MDB: 40 + ma = SWITCHDEV_OBJ_PORT_MDB(a); 41 + mb = SWITCHDEV_OBJ_PORT_MDB(b); 42 + return ma->vid == mb->vid && 43 + ether_addr_equal(ma->addr, mb->addr); 44 + default: 45 + break; 46 + } 47 + 48 + BUG(); 49 + } 50 + 22 51 static LIST_HEAD(deferred); 23 52 static DEFINE_SPINLOCK(deferred_lock); 24 53 ··· 335 306 return switchdev_port_obj_del_now(dev, obj); 336 307 } 337 308 EXPORT_SYMBOL_GPL(switchdev_port_obj_del); 309 + 310 + /** 311 + * switchdev_port_obj_act_is_deferred - Is object action pending? 312 + * 313 + * @dev: port device 314 + * @nt: type of action; add or delete 315 + * @obj: object to test 316 + * 317 + * Returns true if a deferred item is pending, which is 318 + * equivalent to the action @nt on an object @obj. 319 + * 320 + * rtnl_lock must be held. 
321 + */ 322 + bool switchdev_port_obj_act_is_deferred(struct net_device *dev, 323 + enum switchdev_notifier_type nt, 324 + const struct switchdev_obj *obj) 325 + { 326 + struct switchdev_deferred_item *dfitem; 327 + bool found = false; 328 + 329 + ASSERT_RTNL(); 330 + 331 + spin_lock_bh(&deferred_lock); 332 + 333 + list_for_each_entry(dfitem, &deferred, list) { 334 + if (dfitem->dev != dev) 335 + continue; 336 + 337 + if ((dfitem->func == switchdev_port_obj_add_deferred && 338 + nt == SWITCHDEV_PORT_OBJ_ADD) || 339 + (dfitem->func == switchdev_port_obj_del_deferred && 340 + nt == SWITCHDEV_PORT_OBJ_DEL)) { 341 + if (switchdev_obj_eq((const void *)dfitem->data, obj)) { 342 + found = true; 343 + break; 344 + } 345 + } 346 + } 347 + 348 + spin_unlock_bh(&deferred_lock); 349 + 350 + return found; 351 + } 352 + EXPORT_SYMBOL_GPL(switchdev_port_obj_act_is_deferred); 338 353 339 354 static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain); 340 355 static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain);
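switchdev.c gains `switchdev_port_obj_act_is_deferred()`, which walks the deferred list under `deferred_lock` and matches on device, action type (add vs del), and object equality. A compact Python sketch of that scan, with the item reduced to a tuple and object equality left to `==` (all names illustrative):

```python
import threading

deferred = []                 # items: (dev, action, obj)
deferred_lock = threading.Lock()

def obj_act_is_deferred(dev, action, obj):
    """True if an equivalent (dev, action, obj) item is still pending."""
    with deferred_lock:
        for d_dev, d_act, d_obj in deferred:
            if d_dev == dev and d_act == action and d_obj == obj:
                return True
    return False
```

In the kernel version the action match goes through the stored callback pointer (`switchdev_port_obj_add_deferred` vs `..._del_deferred`) and object equality is the type-aware `switchdev_obj_eq()` shown in the hunk.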
+1 -1
net/tls/tls_main.c
··· 1003 1003 return 0; 1004 1004 } 1005 1005 1006 - static int tls_get_info(const struct sock *sk, struct sk_buff *skb) 1006 + static int tls_get_info(struct sock *sk, struct sk_buff *skb) 1007 1007 { 1008 1008 u16 version, cipher_type; 1009 1009 struct tls_context *ctx;
+16 -8
net/tls/tls_sw.c
··· 1772 1772 u8 *control, 1773 1773 size_t skip, 1774 1774 size_t len, 1775 - bool is_peek) 1775 + bool is_peek, 1776 + bool *more) 1776 1777 { 1777 1778 struct sk_buff *skb = skb_peek(&ctx->rx_list); 1778 1779 struct tls_msg *tlm; ··· 1786 1785 1787 1786 err = tls_record_content_type(msg, tlm, control); 1788 1787 if (err <= 0) 1789 - goto out; 1788 + goto more; 1790 1789 1791 1790 if (skip < rxm->full_len) 1792 1791 break; ··· 1804 1803 1805 1804 err = tls_record_content_type(msg, tlm, control); 1806 1805 if (err <= 0) 1807 - goto out; 1806 + goto more; 1808 1807 1809 1808 err = skb_copy_datagram_msg(skb, rxm->offset + skip, 1810 1809 msg, chunk); 1811 1810 if (err < 0) 1812 - goto out; 1811 + goto more; 1813 1812 1814 1813 len = len - chunk; 1815 1814 copied = copied + chunk; ··· 1845 1844 1846 1845 out: 1847 1846 return copied ? : err; 1847 + more: 1848 + if (more) 1849 + *more = true; 1850 + goto out; 1848 1851 } 1849 1852 1850 1853 static bool ··· 1952 1947 int target, err; 1953 1948 bool is_kvec = iov_iter_is_kvec(&msg->msg_iter); 1954 1949 bool is_peek = flags & MSG_PEEK; 1950 + bool rx_more = false; 1955 1951 bool released = true; 1956 1952 bool bpf_strp_enabled; 1957 1953 bool zc_capable; ··· 1972 1966 goto end; 1973 1967 1974 1968 /* Process pending decrypted records. 
It must be non-zero-copy */ 1975 - err = process_rx_list(ctx, msg, &control, 0, len, is_peek); 1969 + err = process_rx_list(ctx, msg, &control, 0, len, is_peek, &rx_more); 1976 1970 if (err < 0) 1977 1971 goto end; 1978 1972 1979 1973 copied = err; 1980 - if (len <= copied) 1974 + if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more) 1981 1975 goto end; 1982 1976 1983 1977 target = sock_rcvlowat(sk, flags & MSG_WAITALL, len); ··· 2070 2064 decrypted += chunk; 2071 2065 len -= chunk; 2072 2066 __skb_queue_tail(&ctx->rx_list, skb); 2067 + if (unlikely(control != TLS_RECORD_TYPE_DATA)) 2068 + break; 2073 2069 continue; 2074 2070 } 2075 2071 ··· 2136 2128 /* Drain records from the rx_list & copy if required */ 2137 2129 if (is_peek || is_kvec) 2138 2130 err = process_rx_list(ctx, msg, &control, copied, 2139 - decrypted, is_peek); 2131 + decrypted, is_peek, NULL); 2140 2132 else 2141 2133 err = process_rx_list(ctx, msg, &control, 0, 2142 - async_copy_bytes, is_peek); 2134 + async_copy_bytes, is_peek, NULL); 2143 2135 } 2144 2136 2145 2137 copied += decrypted;
+3 -16
net/unix/af_unix.c
··· 780 780 static int unix_seqpacket_recvmsg(struct socket *, struct msghdr *, size_t, 781 781 int); 782 782 783 - static int unix_set_peek_off(struct sock *sk, int val) 784 - { 785 - struct unix_sock *u = unix_sk(sk); 786 - 787 - if (mutex_lock_interruptible(&u->iolock)) 788 - return -EINTR; 789 - 790 - WRITE_ONCE(sk->sk_peek_off, val); 791 - mutex_unlock(&u->iolock); 792 - 793 - return 0; 794 - } 795 - 796 783 #ifdef CONFIG_PROC_FS 797 784 static int unix_count_nr_fds(struct sock *sk) 798 785 { ··· 847 860 .read_skb = unix_stream_read_skb, 848 861 .mmap = sock_no_mmap, 849 862 .splice_read = unix_stream_splice_read, 850 - .set_peek_off = unix_set_peek_off, 863 + .set_peek_off = sk_set_peek_off, 851 864 .show_fdinfo = unix_show_fdinfo, 852 865 }; 853 866 ··· 871 884 .read_skb = unix_read_skb, 872 885 .recvmsg = unix_dgram_recvmsg, 873 886 .mmap = sock_no_mmap, 874 - .set_peek_off = unix_set_peek_off, 887 + .set_peek_off = sk_set_peek_off, 875 888 .show_fdinfo = unix_show_fdinfo, 876 889 }; 877 890 ··· 894 907 .sendmsg = unix_seqpacket_sendmsg, 895 908 .recvmsg = unix_seqpacket_recvmsg, 896 909 .mmap = sock_no_mmap, 897 - .set_peek_off = unix_set_peek_off, 910 + .set_peek_off = sk_set_peek_off, 898 911 .show_fdinfo = unix_show_fdinfo, 899 912 }; 900 913
+9 -13
net/unix/garbage.c
··· 322 322 * which are creating the cycle(s). 323 323 */ 324 324 skb_queue_head_init(&hitlist); 325 - list_for_each_entry(u, &gc_candidates, link) 325 + list_for_each_entry(u, &gc_candidates, link) { 326 326 scan_children(&u->sk, inc_inflight, &hitlist); 327 + 328 + #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 329 + if (u->oob_skb) { 330 + kfree_skb(u->oob_skb); 331 + u->oob_skb = NULL; 332 + } 333 + #endif 334 + } 327 335 328 336 /* not_cycle_list contains those sockets which do not make up a 329 337 * cycle. Restore these to the inflight list. ··· 346 338 347 339 /* Here we are. Hitlist is filled. Die. */ 348 340 __skb_queue_purge(&hitlist); 349 - 350 - #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 351 - while (!list_empty(&gc_candidates)) { 352 - u = list_entry(gc_candidates.next, struct unix_sock, link); 353 - if (u->oob_skb) { 354 - struct sk_buff *skb = u->oob_skb; 355 - 356 - u->oob_skb = NULL; 357 - kfree_skb(skb); 358 - } 359 - } 360 - #endif 361 341 362 342 spin_lock(&unix_gc_lock); 363 343
+2 -1
net/xdp/xsk.c
··· 722 722 memcpy(vaddr, buffer, len); 723 723 kunmap_local(vaddr); 724 724 725 - skb_add_rx_frag(skb, nr_frags, page, 0, len, 0); 725 + skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE); 726 + refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc); 726 727 } 727 728 728 729 if (first_frag && desc->options & XDP_TX_METADATA) {
+1 -1
scripts/bpf_doc.py
··· 513 513 instructions to the kernel when the programs are loaded. The format for that 514 514 string is identical to the one in use for kernel modules (Dual licenses, such 515 515 as "Dual BSD/GPL", may be used). Some helper functions are only accessible to 516 - programs that are compatible with the GNU Privacy License (GPL). 516 + programs that are compatible with the GNU General Public License (GNU GPL). 517 517 518 518 In order to use such helpers, the eBPF program must be loaded with the correct 519 519 license string passed (via **attr**) to the **bpf**\\ () system call, and this
+1 -1
scripts/clang-tools/gen_compile_commands.py
··· 170 170 # escape the pound sign '#', either as '\#' or '$(pound)' (depending on the 171 171 # kernel version). The compile_commands.json file is not interepreted 172 172 # by Make, so this code replaces the escaped version with '#'. 173 - prefix = command_prefix.replace('\#', '#').replace('$(pound)', '#') 173 + prefix = command_prefix.replace(r'\#', '#').replace('$(pound)', '#') 174 174 175 175 # Return the canonical path, eliminating any symbolic links encountered in the path. 176 176 abs_path = os.path.realpath(os.path.join(root_directory, file_path))
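The gen_compile_commands.py change turns `'\#'` into `r'\#'`: `\#` is not a recognized Python escape sequence, so newer CPython warns about it (a SyntaxWarning slated to become an error), while the raw string spells the two-character sequence backslash-plus-`#` unambiguously. The runtime behavior is identical:

```python
# '\#' and r'\#' denote the same two characters; only the raw form
# avoids the invalid-escape SyntaxWarning on newer Python.
command_prefix = 'gcc \\# $(pound) -c'  # doubled backslash builds the literal safely
prefix = command_prefix.replace(r'\#', '#').replace('$(pound)', '#')
```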
+2 -11
scripts/mksysmap
··· 48 48 / __kvm_nvhe_\\$/d 49 49 / __kvm_nvhe_\.L/d 50 50 51 - # arm64 lld 52 - / __AArch64ADRPThunk_/d 53 - 54 - # arm lld 55 - / __ARMV5PILongThunk_/d 56 - / __ARMV7PILongThunk_/d 57 - / __ThumbV7PILongThunk_/d 58 - 59 - # mips lld 60 - / __LA25Thunk_/d 61 - / __microLA25Thunk_/d 51 + # lld arm/aarch64/mips thunks 52 + / __[[:alnum:]]*Thunk_/d 62 53 63 54 # CFI type identifiers 64 55 / __kcfi_typeid_/d
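The mksysmap hunk folds five per-architecture lld thunk patterns into one sed expression, ` __[[:alnum:]]*Thunk_`. A quick Python check (using `[0-9A-Za-z]` as the `[[:alnum:]]` equivalent) that every previously listed thunk prefix is still matched, and that unrelated symbols are not:

```python
import re

# POSIX [[:alnum:]] in sed ~ [0-9A-Za-z] in Python re
thunk = re.compile(r' __[0-9A-Za-z]*Thunk_')

old_patterns = [
    ' __AArch64ADRPThunk_sym',   # arm64 lld
    ' __ARMV5PILongThunk_sym',   # arm lld
    ' __ARMV7PILongThunk_sym',
    ' __ThumbV7PILongThunk_sym',
    ' __LA25Thunk_sym',          # mips lld
    ' __microLA25Thunk_sym',
]
all_match = all(thunk.search(s) for s in old_patterns)
```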
+6 -1
scripts/mod/sumversion.c
··· 326 326 327 327 /* Sum all files in the same dir or subdirs. */ 328 328 while ((line = get_line(&pos))) { 329 - char* p = line; 329 + char* p; 330 + 331 + /* trim the leading spaces away */ 332 + while (isspace(*line)) 333 + line++; 334 + p = line; 330 335 331 336 if (strncmp(line, "source_", sizeof("source_")-1) == 0) { 332 337 p = strrchr(line, ' ');
+5 -2
security/security.c
··· 29 29 #include <linux/backing-dev.h> 30 30 #include <linux/string.h> 31 31 #include <linux/msg.h> 32 + #include <linux/overflow.h> 32 33 #include <net/flow.h> 33 34 34 35 /* How many LSMs were built into the kernel? */ ··· 4016 4015 struct security_hook_list *hp; 4017 4016 struct lsm_ctx *lctx; 4018 4017 int rc = LSM_RET_DEFAULT(setselfattr); 4018 + u64 required_len; 4019 4019 4020 4020 if (flags) 4021 4021 return -EINVAL; ··· 4029 4027 if (IS_ERR(lctx)) 4030 4028 return PTR_ERR(lctx); 4031 4029 4032 - if (size < lctx->len || size < lctx->ctx_len + sizeof(*lctx) || 4033 - lctx->len < lctx->ctx_len + sizeof(*lctx)) { 4030 + if (size < lctx->len || 4031 + check_add_overflow(sizeof(*lctx), lctx->ctx_len, &required_len) || 4032 + lctx->len < required_len) { 4034 4033 rc = -EINVAL; 4035 4034 goto free_out; 4036 4035 }
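The security.c hunk replaces the open-coded `lctx->ctx_len + sizeof(*lctx)` comparison, which can wrap for an attacker-supplied `ctx_len` near U64_MAX, with `check_add_overflow()` into an explicit u64. A Python model of the wrap and of the fixed check, with 64-bit arithmetic made explicit (the header size is a stand-in, not `sizeof(struct lsm_ctx)`):

```python
U64_MAX = 2**64 - 1
LCTX_HDR = 32  # illustrative stand-in for sizeof(struct lsm_ctx)

def len_ok_naive(lctx_len, ctx_len):
    # old check: the sum silently wraps modulo 2**64
    return lctx_len >= (ctx_len + LCTX_HDR) & U64_MAX

def len_ok_checked(lctx_len, ctx_len):
    # check_add_overflow() analogue: reject when the sum exceeds u64
    required = ctx_len + LCTX_HDR
    if required > U64_MAX:
        return False
    return lctx_len >= required
```

The naive form accepts a tiny buffer when `ctx_len` wraps the addition; the checked form refuses before comparing.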
+2 -2
sound/pci/hda/Kconfig
··· 156 156 depends on I2C 157 157 depends on ACPI || COMPILE_TEST 158 158 depends on SND_SOC 159 - select CS_DSP 159 + select FW_CS_DSP 160 160 select SND_HDA_GENERIC 161 161 select SND_SOC_CS35L56_SHARED 162 162 select SND_HDA_SCODEC_CS35L56 ··· 171 171 depends on SPI_MASTER 172 172 depends on ACPI || COMPILE_TEST 173 173 depends on SND_SOC 174 - select CS_DSP 174 + select FW_CS_DSP 175 175 select SND_HDA_GENERIC 176 176 select SND_SOC_CS35L56_SHARED 177 177 select SND_HDA_SCODEC_CS35L56
+4
sound/pci/hda/cs35l41_hda_property.c
··· 91 91 { "10431D1F", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 }, 92 92 { "10431DA2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 }, 93 93 { "10431E02", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 }, 94 + { "10431E12", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 94 95 { "10431EE2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 }, 95 96 { "10431F12", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 }, 96 97 { "10431F1F", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 }, 97 98 { "10431F62", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 }, 99 + { "17AA386F", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 }, 98 100 { "17AA38B4", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 99 101 { "17AA38B5", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 100 102 { "17AA38B6", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, ··· 429 427 { "CSC3551", "10431D1F", generic_dsd_config }, 430 428 { "CSC3551", "10431DA2", generic_dsd_config }, 431 429 { "CSC3551", "10431E02", generic_dsd_config }, 430 + { "CSC3551", "10431E12", generic_dsd_config }, 432 431 { "CSC3551", "10431EE2", generic_dsd_config }, 433 432 { "CSC3551", "10431F12", generic_dsd_config }, 434 433 { "CSC3551", "10431F1F", generic_dsd_config }, 435 434 { "CSC3551", "10431F62", generic_dsd_config }, 435 + { "CSC3551", "17AA386F", generic_dsd_config }, 436 436 { "CSC3551", "17AA38B4", generic_dsd_config }, 437 437 { "CSC3551", "17AA38B5", generic_dsd_config }, 438 438 { "CSC3551", "17AA38B6", generic_dsd_config },
+18
sound/pci/hda/patch_conexant.c
··· 344 344 CXT_FIXUP_HP_ZBOOK_MUTE_LED, 345 345 CXT_FIXUP_HEADSET_MIC, 346 346 CXT_FIXUP_HP_MIC_NO_PRESENCE, 347 + CXT_PINCFG_SWS_JS201D, 347 348 }; 348 349 349 350 /* for hda_fixup_thinkpad_acpi() */ ··· 842 841 {} 843 842 }; 844 843 844 + /* SuoWoSi/South-holding JS201D with sn6140 */ 845 + static const struct hda_pintbl cxt_pincfg_sws_js201d[] = { 846 + { 0x16, 0x03211040 }, /* hp out */ 847 + { 0x17, 0x91170110 }, /* SPK/Class_D */ 848 + { 0x18, 0x95a70130 }, /* Internal mic */ 849 + { 0x19, 0x03a11020 }, /* Headset Mic */ 850 + { 0x1a, 0x40f001f0 }, /* Not used */ 851 + { 0x21, 0x40f001f0 }, /* Not used */ 852 + {} 853 + }; 854 + 845 855 static const struct hda_fixup cxt_fixups[] = { 846 856 [CXT_PINCFG_LENOVO_X200] = { 847 857 .type = HDA_FIXUP_PINS, ··· 1008 996 .chained = true, 1009 997 .chain_id = CXT_FIXUP_HEADSET_MIC, 1010 998 }, 999 + [CXT_PINCFG_SWS_JS201D] = { 1000 + .type = HDA_FIXUP_PINS, 1001 + .v.pins = cxt_pincfg_sws_js201d, 1002 + }, 1011 1003 }; 1012 1004 1013 1005 static const struct snd_pci_quirk cxt5045_fixups[] = { ··· 1085 1069 SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE), 1086 1070 SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE), 1087 1071 SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN), 1072 + SND_PCI_QUIRK(0x14f1, 0x0265, "SWS JS201D", CXT_PINCFG_SWS_JS201D), 1088 1073 SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO), 1089 1074 SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410), 1090 1075 SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410), ··· 1126 1109 { .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" }, 1127 1110 { .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" }, 1128 1111 { .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" }, 1112 + { .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" }, 1129 1113 {} 1130 1114 }; 1131 1115
+8 -4
sound/pci/hda/patch_realtek.c
··· 9737 9737 SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS), 9738 9738 SND_PCI_QUIRK(0x1028, 0x0beb, "Dell XPS 15 9530 (2023)", ALC289_FIXUP_DELL_CS35L41_SPI_2), 9739 9739 SND_PCI_QUIRK(0x1028, 0x0c03, "Dell Precision 5340", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), 9740 + SND_PCI_QUIRK(0x1028, 0x0c0b, "Dell Oasis 14 RPL-P", ALC289_FIXUP_RTK_AMP_DUAL_SPK), 9740 9741 SND_PCI_QUIRK(0x1028, 0x0c0d, "Dell Oasis", ALC289_FIXUP_RTK_AMP_DUAL_SPK), 9742 + SND_PCI_QUIRK(0x1028, 0x0c0e, "Dell Oasis 16", ALC289_FIXUP_RTK_AMP_DUAL_SPK), 9741 9743 SND_PCI_QUIRK(0x1028, 0x0c19, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS), 9742 9744 SND_PCI_QUIRK(0x1028, 0x0c1a, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS), 9743 9745 SND_PCI_QUIRK(0x1028, 0x0c1b, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS), ··· 9930 9928 SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9931 9929 SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9932 9930 SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9931 + SND_PCI_QUIRK(0x103c, 0x8b0f, "HP Elite mt645 G7 Mobile Thin Client U81", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9933 9932 SND_PCI_QUIRK(0x103c, 0x8b2f, "HP 255 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 9934 9933 SND_PCI_QUIRK(0x103c, 0x8b42, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9935 9934 SND_PCI_QUIRK(0x103c, 0x8b43, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 9938 9935 SND_PCI_QUIRK(0x103c, 0x8b45, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9939 9936 SND_PCI_QUIRK(0x103c, 0x8b46, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9940 9937 SND_PCI_QUIRK(0x103c, 0x8b47, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9938 + SND_PCI_QUIRK(0x103c, 0x8b59, "HP Elite mt645 G7 Mobile Thin Client U89", 
ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9941 9939 SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9942 9940 SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9943 9941 SND_PCI_QUIRK(0x103c, 0x8b63, "HP Elite Dragonfly 13.5 inch G4", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), ··· 10007 10003 SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK), 10008 10004 SND_PCI_QUIRK(0x1043, 0x1663, "ASUS GU603ZI/ZJ/ZQ/ZU/ZV", ALC285_FIXUP_ASUS_HEADSET_MIC), 10009 10005 SND_PCI_QUIRK(0x1043, 0x1683, "ASUS UM3402YAR", ALC287_FIXUP_CS35L41_I2C_2), 10006 + SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS UX3402VA", ALC245_FIXUP_CS35L41_SPI_2), 10010 10007 SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), 10011 10008 SND_PCI_QUIRK(0x1043, 0x16d3, "ASUS UX5304VA", ALC245_FIXUP_CS35L41_SPI_2), 10012 10009 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), ··· 10051 10046 SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), 10052 10047 SND_PCI_QUIRK(0x1043, 0x1da2, "ASUS UP6502ZA/ZD", ALC245_FIXUP_CS35L41_SPI_2), 10053 10048 SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), 10054 - SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS UX3402VA", ALC245_FIXUP_CS35L41_SPI_2), 10055 - SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2), 10056 10049 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 10057 - SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2), 10050 + SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), 10058 10051 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 10059 10052 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS), 10060 10053 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 10061 - SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), 
10054 + SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2), 10062 10055 SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401), 10063 10056 SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401), 10064 10057 SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2), ··· 10263 10260 SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 10264 10261 SND_PCI_QUIRK(0x17aa, 0x3855, "Legion 7 16ITHG6", ALC287_FIXUP_LEGION_16ITHG6), 10265 10262 SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 10263 + SND_PCI_QUIRK(0x17aa, 0x386f, "Legion 7i 16IAX7", ALC287_FIXUP_CS35L41_I2C_2), 10266 10264 SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C), 10267 10265 SND_PCI_QUIRK(0x17aa, 0x387d, "Yoga S780-16 pro Quad AAC", ALC287_FIXUP_TAS2781_I2C), 10268 10266 SND_PCI_QUIRK(0x17aa, 0x387e, "Yoga S780-16 pro Quad YC", ALC287_FIXUP_TAS2781_I2C),
+1 -1
sound/pci/hda/tas2781_hda_i2c.c
··· 710 710 711 711 strscpy(comps->name, dev_name(dev), sizeof(comps->name)); 712 712 713 - ret = tascodec_init(tas_hda->priv, codec, tasdev_fw_ready); 713 + ret = tascodec_init(tas_hda->priv, codec, THIS_MODULE, tasdev_fw_ready); 714 714 if (!ret) 715 715 comps->playback_hook = tas2781_hda_playback_hook; 716 716
+14
sound/soc/amd/yc/acp6x-mach.c
··· 238 238 .driver_data = &acp6x_card, 239 239 .matches = { 240 240 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 241 + DMI_MATCH(DMI_PRODUCT_NAME, "82UU"), 242 + } 243 + }, 244 + { 245 + .driver_data = &acp6x_card, 246 + .matches = { 247 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 241 248 DMI_MATCH(DMI_PRODUCT_NAME, "82V2"), 242 249 } 243 250 }, ··· 253 246 .matches = { 254 247 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 255 248 DMI_MATCH(DMI_PRODUCT_NAME, "82YM"), 249 + } 250 + }, 251 + { 252 + .driver_data = &acp6x_card, 253 + .matches = { 254 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 255 + DMI_MATCH(DMI_PRODUCT_NAME, "83AS"), 256 256 } 257 257 }, 258 258 {
-1
sound/soc/codecs/cs35l56-shared.c
··· 51 51 { CS35L56_SWIRE_DP3_CH2_INPUT, 0x00000019 }, 52 52 { CS35L56_SWIRE_DP3_CH3_INPUT, 0x00000029 }, 53 53 { CS35L56_SWIRE_DP3_CH4_INPUT, 0x00000028 }, 54 - { CS35L56_IRQ1_CFG, 0x00000000 }, 55 54 { CS35L56_IRQ1_MASK_1, 0x83ffffff }, 56 55 { CS35L56_IRQ1_MASK_2, 0xffff7fff }, 57 56 { CS35L56_IRQ1_MASK_4, 0xe0ffffff },
+168 -84
sound/soc/codecs/cs35l56.c
··· 5 5 // Copyright (C) 2023 Cirrus Logic, Inc. and 6 6 // Cirrus Logic International Semiconductor Ltd. 7 7 8 + #include <linux/acpi.h> 8 9 #include <linux/completion.h> 9 10 #include <linux/debugfs.h> 10 11 #include <linux/delay.h> ··· 16 15 #include <linux/module.h> 17 16 #include <linux/pm.h> 18 17 #include <linux/pm_runtime.h> 18 + #include <linux/property.h> 19 19 #include <linux/regmap.h> 20 20 #include <linux/regulator/consumer.h> 21 21 #include <linux/slab.h> ··· 70 68 "ASP1 TX1 Source", "ASP1 TX2 Source", "ASP1 TX3 Source", "ASP1 TX4 Source" 71 69 }; 72 70 71 + static int cs35l56_sync_asp1_mixer_widgets_with_firmware(struct cs35l56_private *cs35l56) 72 + { 73 + struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(cs35l56->component); 74 + const char *prefix = cs35l56->component->name_prefix; 75 + char full_name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN]; 76 + const char *name; 77 + struct snd_kcontrol *kcontrol; 78 + struct soc_enum *e; 79 + unsigned int val[4]; 80 + int i, item, ret; 81 + 82 + if (cs35l56->asp1_mixer_widgets_initialized) 83 + return 0; 84 + 85 + /* 86 + * Resume so we can read the registers from silicon if the regmap 87 + * cache has not yet been populated. 
88 + */ 89 + ret = pm_runtime_resume_and_get(cs35l56->base.dev); 90 + if (ret < 0) 91 + return ret; 92 + 93 + /* Wait for firmware download and reboot */ 94 + cs35l56_wait_dsp_ready(cs35l56); 95 + 96 + ret = regmap_bulk_read(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT, 97 + val, ARRAY_SIZE(val)); 98 + 99 + pm_runtime_mark_last_busy(cs35l56->base.dev); 100 + pm_runtime_put_autosuspend(cs35l56->base.dev); 101 + 102 + if (ret) { 103 + dev_err(cs35l56->base.dev, "Failed to read ASP1 mixer regs: %d\n", ret); 104 + return ret; 105 + } 106 + 107 + for (i = 0; i < ARRAY_SIZE(cs35l56_asp1_mux_control_names); ++i) { 108 + name = cs35l56_asp1_mux_control_names[i]; 109 + 110 + if (prefix) { 111 + snprintf(full_name, sizeof(full_name), "%s %s", prefix, name); 112 + name = full_name; 113 + } 114 + 115 + kcontrol = snd_soc_card_get_kcontrol(dapm->card, name); 116 + if (!kcontrol) { 117 + dev_warn(cs35l56->base.dev, "Could not find control %s\n", name); 118 + continue; 119 + } 120 + 121 + e = (struct soc_enum *)kcontrol->private_value; 122 + item = snd_soc_enum_val_to_item(e, val[i] & CS35L56_ASP_TXn_SRC_MASK); 123 + snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL); 124 + } 125 + 126 + cs35l56->asp1_mixer_widgets_initialized = true; 127 + 128 + return 0; 129 + } 130 + 73 131 static int cs35l56_dspwait_asp1tx_get(struct snd_kcontrol *kcontrol, 74 132 struct snd_ctl_elem_value *ucontrol) 75 133 { ··· 140 78 unsigned int addr, val; 141 79 int ret; 142 80 143 - /* Wait for mux to be initialized */ 144 - cs35l56_wait_dsp_ready(cs35l56); 145 - flush_work(&cs35l56->mux_init_work); 81 + ret = cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56); 82 + if (ret) 83 + return ret; 146 84 147 85 addr = cs35l56_asp1_mixer_regs[index]; 148 86 ret = regmap_read(cs35l56->base.regmap, addr, &val); ··· 168 106 bool changed; 169 107 int ret; 170 108 171 - /* Wait for mux to be initialized */ 172 - cs35l56_wait_dsp_ready(cs35l56); 173 - flush_work(&cs35l56->mux_init_work); 109 + ret = 
cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56); 110 + if (ret) 111 + return ret; 174 112 175 113 addr = cs35l56_asp1_mixer_regs[index]; 176 114 val = snd_soc_enum_item_to_val(e, item); 177 115 178 116 ret = regmap_update_bits_check(cs35l56->base.regmap, addr, 179 117 CS35L56_ASP_TXn_SRC_MASK, val, &changed); 180 - if (!ret) 118 + if (ret) 181 119 return ret; 182 120 183 121 if (changed) 184 122 snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL); 185 123 186 124 return changed; 187 - } 188 - 189 - static void cs35l56_mark_asp1_mixer_widgets_dirty(struct cs35l56_private *cs35l56) 190 - { 191 - struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(cs35l56->component); 192 - const char *prefix = cs35l56->component->name_prefix; 193 - char full_name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN]; 194 - const char *name; 195 - struct snd_kcontrol *kcontrol; 196 - struct soc_enum *e; 197 - unsigned int val[4]; 198 - int i, item, ret; 199 - 200 - /* 201 - * Resume so we can read the registers from silicon if the regmap 202 - * cache has not yet been populated. 
203 - */ 204 - ret = pm_runtime_resume_and_get(cs35l56->base.dev); 205 - if (ret < 0) 206 - return; 207 - 208 - ret = regmap_bulk_read(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT, 209 - val, ARRAY_SIZE(val)); 210 - 211 - pm_runtime_mark_last_busy(cs35l56->base.dev); 212 - pm_runtime_put_autosuspend(cs35l56->base.dev); 213 - 214 - if (ret) { 215 - dev_err(cs35l56->base.dev, "Failed to read ASP1 mixer regs: %d\n", ret); 216 - return; 217 - } 218 - 219 - snd_soc_card_mutex_lock(dapm->card); 220 - WARN_ON(!dapm->card->instantiated); 221 - 222 - for (i = 0; i < ARRAY_SIZE(cs35l56_asp1_mux_control_names); ++i) { 223 - name = cs35l56_asp1_mux_control_names[i]; 224 - 225 - if (prefix) { 226 - snprintf(full_name, sizeof(full_name), "%s %s", prefix, name); 227 - name = full_name; 228 - } 229 - 230 - kcontrol = snd_soc_card_get_kcontrol(dapm->card, name); 231 - if (!kcontrol) { 232 - dev_warn(cs35l56->base.dev, "Could not find control %s\n", name); 233 - continue; 234 - } 235 - 236 - e = (struct soc_enum *)kcontrol->private_value; 237 - item = snd_soc_enum_val_to_item(e, val[i] & CS35L56_ASP_TXn_SRC_MASK); 238 - snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL); 239 - } 240 - 241 - snd_soc_card_mutex_unlock(dapm->card); 242 - } 243 - 244 - static void cs35l56_mux_init_work(struct work_struct *work) 245 - { 246 - struct cs35l56_private *cs35l56 = container_of(work, 247 - struct cs35l56_private, 248 - mux_init_work); 249 - 250 - cs35l56_mark_asp1_mixer_widgets_dirty(cs35l56); 251 125 } 252 126 253 127 static DECLARE_TLV_DB_SCALE(vol_tlv, -10000, 25, 0); ··· 934 936 else 935 937 cs35l56_patch(cs35l56, firmware_missing); 936 938 937 - 938 - /* 939 - * Set starting value of ASP1 mux widgets. Updating a mux takes 940 - * the DAPM mutex. Post this to a separate job so that DAPM 941 - * power-up can wait for dsp_work to complete without deadlocking 942 - * on the DAPM mutex. 
943 - */ 944 - queue_work(cs35l56->dsp_wq, &cs35l56->mux_init_work); 945 939 err: 946 940 pm_runtime_mark_last_busy(cs35l56->base.dev); 947 941 pm_runtime_put_autosuspend(cs35l56->base.dev); ··· 979 989 debugfs_create_bool("can_hibernate", 0444, debugfs_root, &cs35l56->base.can_hibernate); 980 990 debugfs_create_bool("fw_patched", 0444, debugfs_root, &cs35l56->base.fw_patched); 981 991 992 + /* 993 + * The widgets for the ASP1TX mixer can't be initialized 994 + * until the firmware has been downloaded and rebooted. 995 + */ 996 + regcache_drop_region(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT, CS35L56_ASP1TX4_INPUT); 997 + cs35l56->asp1_mixer_widgets_initialized = false; 998 + 982 999 queue_work(cs35l56->dsp_wq, &cs35l56->dsp_work); 983 1000 984 1001 return 0; ··· 996 999 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component); 997 1000 998 1001 cancel_work_sync(&cs35l56->dsp_work); 999 - cancel_work_sync(&cs35l56->mux_init_work); 1000 1002 1001 1003 if (cs35l56->dsp.cs_dsp.booted) 1002 1004 wm_adsp_power_down(&cs35l56->dsp); ··· 1066 1070 1067 1071 dev_dbg(dev, "system_suspend\n"); 1068 1072 1069 - if (cs35l56->component) { 1073 + if (cs35l56->component) 1070 1074 flush_work(&cs35l56->dsp_work); 1071 - cancel_work_sync(&cs35l56->mux_init_work); 1072 - } 1073 1075 1074 1076 /* 1075 1077 * The interrupt line is normally shared, but after we start suspending ··· 1218 1224 return -ENOMEM; 1219 1225 1220 1226 INIT_WORK(&cs35l56->dsp_work, cs35l56_dsp_work); 1221 - INIT_WORK(&cs35l56->mux_init_work, cs35l56_mux_init_work); 1222 1227 1223 1228 dsp = &cs35l56->dsp; 1224 1229 cs35l56_init_cs_dsp(&cs35l56->base, &dsp->cs_dsp); ··· 1260 1267 dev_dbg(dev, "Firmware UID: %s\n", cs35l56->dsp.system_name); 1261 1268 1262 1269 return 0; 1270 + } 1271 + 1272 + /* 1273 + * Some SoundWire laptops have a spk-id-gpios property but it points to 1274 + * the wrong ACPI Device node so can't be used to get the GPIO. 
Try to 1275 + * find the SDCA node containing the GpioIo resource and add a GPIO 1276 + * mapping to it. 1277 + */ 1278 + static const struct acpi_gpio_params cs35l56_af01_first_gpio = { 0, 0, false }; 1279 + static const struct acpi_gpio_mapping cs35l56_af01_spkid_gpios_mapping[] = { 1280 + { "spk-id-gpios", &cs35l56_af01_first_gpio, 1 }, 1281 + { } 1282 + }; 1283 + 1284 + static void cs35l56_acpi_dev_release_driver_gpios(void *adev) 1285 + { 1286 + acpi_dev_remove_driver_gpios(adev); 1287 + } 1288 + 1289 + static int cs35l56_try_get_broken_sdca_spkid_gpio(struct cs35l56_private *cs35l56) 1290 + { 1291 + struct fwnode_handle *af01_fwnode; 1292 + const union acpi_object *obj; 1293 + struct gpio_desc *desc; 1294 + int ret; 1295 + 1296 + /* Find the SDCA node containing the GpioIo */ 1297 + af01_fwnode = device_get_named_child_node(cs35l56->base.dev, "AF01"); 1298 + if (!af01_fwnode) { 1299 + dev_dbg(cs35l56->base.dev, "No AF01 node\n"); 1300 + return -ENOENT; 1301 + } 1302 + 1303 + ret = acpi_dev_get_property(ACPI_COMPANION(cs35l56->base.dev), 1304 + "spk-id-gpios", ACPI_TYPE_PACKAGE, &obj); 1305 + if (ret) { 1306 + dev_dbg(cs35l56->base.dev, "Could not get spk-id-gpios package: %d\n", ret); 1307 + return -ENOENT; 1308 + } 1309 + 1310 + /* The broken properties we can handle are a 4-element package (one GPIO) */ 1311 + if (obj->package.count != 4) { 1312 + dev_warn(cs35l56->base.dev, "Unexpected spk-id element count %d\n", 1313 + obj->package.count); 1314 + return -ENOENT; 1315 + } 1316 + 1317 + /* Add a GPIO mapping if it doesn't already have one */ 1318 + if (!fwnode_property_present(af01_fwnode, "spk-id-gpios")) { 1319 + struct acpi_device *adev = to_acpi_device_node(af01_fwnode); 1320 + 1321 + /* 1322 + * Can't use devm_acpi_dev_add_driver_gpios() because the 1323 + * mapping isn't being added to the node pointed to by 1324 + * ACPI_COMPANION(). 
1325 + */ 1326 + ret = acpi_dev_add_driver_gpios(adev, cs35l56_af01_spkid_gpios_mapping); 1327 + if (ret) { 1328 + return dev_err_probe(cs35l56->base.dev, ret, 1329 + "Failed to add gpio mapping to AF01\n"); 1330 + } 1331 + 1332 + ret = devm_add_action_or_reset(cs35l56->base.dev, 1333 + cs35l56_acpi_dev_release_driver_gpios, 1334 + adev); 1335 + if (ret) 1336 + return ret; 1337 + 1338 + dev_dbg(cs35l56->base.dev, "Added spk-id-gpios mapping to AF01\n"); 1339 + } 1340 + 1341 + desc = fwnode_gpiod_get_index(af01_fwnode, "spk-id", 0, GPIOD_IN, NULL); 1342 + if (IS_ERR(desc)) { 1343 + ret = PTR_ERR(desc); 1344 + return dev_err_probe(cs35l56->base.dev, ret, "Get GPIO from AF01 failed\n"); 1345 + } 1346 + 1347 + ret = gpiod_get_value_cansleep(desc); 1348 + gpiod_put(desc); 1349 + 1350 + if (ret < 0) { 1351 + dev_err_probe(cs35l56->base.dev, ret, "Error reading spk-id GPIO\n"); 1352 + return ret; 1353 + } 1354 + 1355 + dev_info(cs35l56->base.dev, "Got spk-id from AF01\n"); 1356 + 1357 + return ret; 1263 1358 } 1264 1359 1265 1360 int cs35l56_common_probe(struct cs35l56_private *cs35l56) ··· 1394 1313 } 1395 1314 1396 1315 ret = cs35l56_get_speaker_id(&cs35l56->base); 1316 + if (ACPI_COMPANION(cs35l56->base.dev) && cs35l56->sdw_peripheral && (ret == -ENOENT)) 1317 + ret = cs35l56_try_get_broken_sdca_spkid_gpio(cs35l56); 1318 + 1397 1319 if ((ret < 0) && (ret != -ENOENT)) 1398 1320 goto err; 1399 1321
+1 -1
sound/soc/codecs/cs35l56.h
··· 34 34 struct wm_adsp dsp; /* must be first member */ 35 35 struct cs35l56_base base; 36 36 struct work_struct dsp_work; 37 - struct work_struct mux_init_work; 38 37 struct workqueue_struct *dsp_wq; 39 38 struct snd_soc_component *component; 40 39 struct regulator_bulk_data supplies[CS35L56_NUM_BULK_SUPPLIES]; ··· 51 52 u8 asp_slot_count; 52 53 bool tdm_mode; 53 54 bool sysclk_set; 55 + bool asp1_mixer_widgets_initialized; 54 56 u8 old_sdw_clock_scale; 55 57 }; 56 58
+45 -3
sound/soc/codecs/cs42l43.c
··· 2257 2257 pm_runtime_use_autosuspend(priv->dev); 2258 2258 pm_runtime_set_active(priv->dev); 2259 2259 pm_runtime_get_noresume(priv->dev); 2260 - devm_pm_runtime_enable(priv->dev); 2260 + 2261 + ret = devm_pm_runtime_enable(priv->dev); 2262 + if (ret) 2263 + goto err_pm; 2261 2264 2262 2265 for (i = 0; i < ARRAY_SIZE(cs42l43_irqs); i++) { 2263 2266 ret = cs42l43_request_irq(priv, dom, cs42l43_irqs[i].name, ··· 2336 2333 return 0; 2337 2334 } 2338 2335 2339 - static DEFINE_RUNTIME_DEV_PM_OPS(cs42l43_codec_pm_ops, NULL, 2340 - cs42l43_codec_runtime_resume, NULL); 2336 + static int cs42l43_codec_suspend(struct device *dev) 2337 + { 2338 + struct cs42l43 *cs42l43 = dev_get_drvdata(dev); 2339 + 2340 + disable_irq(cs42l43->irq); 2341 + 2342 + return 0; 2343 + } 2344 + 2345 + static int cs42l43_codec_suspend_noirq(struct device *dev) 2346 + { 2347 + struct cs42l43 *cs42l43 = dev_get_drvdata(dev); 2348 + 2349 + enable_irq(cs42l43->irq); 2350 + 2351 + return 0; 2352 + } 2353 + 2354 + static int cs42l43_codec_resume(struct device *dev) 2355 + { 2356 + struct cs42l43 *cs42l43 = dev_get_drvdata(dev); 2357 + 2358 + enable_irq(cs42l43->irq); 2359 + 2360 + return 0; 2361 + } 2362 + 2363 + static int cs42l43_codec_resume_noirq(struct device *dev) 2364 + { 2365 + struct cs42l43 *cs42l43 = dev_get_drvdata(dev); 2366 + 2367 + disable_irq(cs42l43->irq); 2368 + 2369 + return 0; 2370 + } 2371 + 2372 + static const struct dev_pm_ops cs42l43_codec_pm_ops = { 2373 + SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend, cs42l43_codec_resume) 2374 + NOIRQ_SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend_noirq, cs42l43_codec_resume_noirq) 2375 + RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL) 2376 + }; 2341 2377 2342 2378 static const struct platform_device_id cs42l43_codec_id_table[] = { 2343 2379 { "cs42l43-codec", },
+26
sound/soc/codecs/rt5645.c
··· 3317 3317 report, SND_JACK_HEADPHONE); 3318 3318 snd_soc_jack_report(rt5645->mic_jack, 3319 3319 report, SND_JACK_MICROPHONE); 3320 + mutex_unlock(&rt5645->jd_mutex); 3320 3321 return; 3321 3322 case 4: 3322 3323 val = snd_soc_component_read(rt5645->component, RT5645_A_JD_CTRL1) & 0x0020; ··· 3693 3692 .mono_speaker = true, 3694 3693 }; 3695 3694 3695 + static const struct rt5645_platform_data jd_mode3_inv_data = { 3696 + .jd_mode = 3, 3697 + .inv_jd1_1 = true, 3698 + }; 3699 + 3696 3700 static const struct rt5645_platform_data jd_mode3_platform_data = { 3697 3701 .jd_mode = 3, 3698 3702 }; ··· 3843 3837 DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 3844 3838 DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"), 3845 3839 DMI_EXACT_MATCH(DMI_BOARD_VERSION, "Default string"), 3840 + /* 3841 + * Above strings are too generic, LattePanda BIOS versions for 3842 + * all 4 hw revisions are: 3843 + * DF-BI-7-S70CR100-* 3844 + * DF-BI-7-S70CR110-* 3845 + * DF-BI-7-S70CR200-* 3846 + * LP-BS-7-S70CR700-* 3847 + * Do a partial match for S70CR to avoid false positive matches. 3848 + */ 3849 + DMI_MATCH(DMI_BIOS_VERSION, "S70CR"), 3846 3850 }, 3847 3851 .driver_data = (void *)&lattepanda_board_platform_data, 3848 3852 }, ··· 3886 3870 DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SW5-017"), 3887 3871 }, 3888 3872 .driver_data = (void *)&intel_braswell_platform_data, 3873 + }, 3874 + { 3875 + .ident = "Meegopad T08", 3876 + .matches = { 3877 + DMI_MATCH(DMI_SYS_VENDOR, "Default string"), 3878 + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), 3879 + DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"), 3880 + DMI_MATCH(DMI_BOARD_VERSION, "V1.1"), 3881 + }, 3882 + .driver_data = (void *)&jd_mode3_inv_data, 3889 3883 }, 3890 3884 { } 3891 3885 };
+2 -1
sound/soc/codecs/tas2781-comlib.c
··· 267 267 EXPORT_SYMBOL_GPL(tas2781_reset); 268 268 269 269 int tascodec_init(struct tasdevice_priv *tas_priv, void *codec, 270 + struct module *module, 270 271 void (*cont)(const struct firmware *fw, void *context)) 271 272 { 272 273 int ret = 0; ··· 281 280 tas_priv->dev_name, tas_priv->ndev); 282 281 crc8_populate_msb(tas_priv->crc8_lkp_tbl, TASDEVICE_CRC8_POLYNOMIAL); 283 282 tas_priv->codec = codec; 284 - ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT, 283 + ret = request_firmware_nowait(module, FW_ACTION_UEVENT, 285 284 tas_priv->rca_binaryname, tas_priv->dev, GFP_KERNEL, tas_priv, 286 285 cont); 287 286 if (ret)
+1 -1
sound/soc/codecs/tas2781-i2c.c
··· 566 566 { 567 567 struct tasdevice_priv *tas_priv = snd_soc_component_get_drvdata(codec); 568 568 569 - return tascodec_init(tas_priv, codec, tasdevice_fw_ready); 569 + return tascodec_init(tas_priv, codec, THIS_MODULE, tasdevice_fw_ready); 570 570 } 571 571 572 572 static void tasdevice_deinit(void *context)
+3
sound/soc/intel/avs/core.c
··· 477 477 return 0; 478 478 479 479 err_i915_init: 480 + pci_free_irq(pci, 0, adev); 481 + pci_free_irq(pci, 0, bus); 482 + pci_free_irq_vectors(pci); 480 483 pci_clear_master(pci); 481 484 pci_set_drvdata(pci, NULL); 482 485 err_acquire_irq:
+1 -1
sound/soc/intel/avs/topology.c
··· 857 857 } 858 858 859 859 /* If topology sets value don't overwrite it */ 860 - if (cfg->copier.vindex.i2s.instance) 860 + if (cfg->copier.vindex.val) 861 861 return; 862 862 863 863 mach = dev_get_platdata(comp->card->dev);
+2 -1
sound/soc/intel/boards/bytcht_cx2072x.c
··· 241 241 242 242 /* fix index of codec dai */ 243 243 for (i = 0; i < ARRAY_SIZE(byt_cht_cx2072x_dais); i++) { 244 - if (!strcmp(byt_cht_cx2072x_dais[i].codecs->name, 244 + if (byt_cht_cx2072x_dais[i].codecs->name && 245 + !strcmp(byt_cht_cx2072x_dais[i].codecs->name, 245 246 "i2c-14F10720:00")) { 246 247 dai_index = i; 247 248 break;
+2 -1
sound/soc/intel/boards/bytcht_da7213.c
··· 245 245 246 246 /* fix index of codec dai */ 247 247 for (i = 0; i < ARRAY_SIZE(dailink); i++) { 248 - if (!strcmp(dailink[i].codecs->name, "i2c-DLGS7213:00")) { 248 + if (dailink[i].codecs->name && 249 + !strcmp(dailink[i].codecs->name, "i2c-DLGS7213:00")) { 249 250 dai_index = i; 250 251 break; 251 252 }
+2 -1
sound/soc/intel/boards/bytcht_es8316.c
··· 546 546 547 547 /* fix index of codec dai */ 548 548 for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) { 549 - if (!strcmp(byt_cht_es8316_dais[i].codecs->name, 549 + if (byt_cht_es8316_dais[i].codecs->name && 550 + !strcmp(byt_cht_es8316_dais[i].codecs->name, 550 551 "i2c-ESSX8316:00")) { 551 552 dai_index = i; 552 553 break;
+2 -1
sound/soc/intel/boards/bytcr_rt5640.c
··· 1652 1652 1653 1653 /* fix index of codec dai */ 1654 1654 for (i = 0; i < ARRAY_SIZE(byt_rt5640_dais); i++) { 1655 - if (!strcmp(byt_rt5640_dais[i].codecs->name, 1655 + if (byt_rt5640_dais[i].codecs->name && 1656 + !strcmp(byt_rt5640_dais[i].codecs->name, 1656 1657 "i2c-10EC5640:00")) { 1657 1658 dai_index = i; 1658 1659 break;
+2 -1
sound/soc/intel/boards/bytcr_rt5651.c
··· 910 910 911 911 /* fix index of codec dai */ 912 912 for (i = 0; i < ARRAY_SIZE(byt_rt5651_dais); i++) { 913 - if (!strcmp(byt_rt5651_dais[i].codecs->name, 913 + if (byt_rt5651_dais[i].codecs->name && 914 + !strcmp(byt_rt5651_dais[i].codecs->name, 914 915 "i2c-10EC5651:00")) { 915 916 dai_index = i; 916 917 break;
+2 -1
sound/soc/intel/boards/bytcr_wm5102.c
··· 605 605 606 606 /* find index of codec dai */ 607 607 for (i = 0; i < ARRAY_SIZE(byt_wm5102_dais); i++) { 608 - if (!strcmp(byt_wm5102_dais[i].codecs->name, 608 + if (byt_wm5102_dais[i].codecs->name && 609 + !strcmp(byt_wm5102_dais[i].codecs->name, 609 610 "wm5102-codec")) { 610 611 dai_index = i; 611 612 break;
+3 -4
sound/soc/intel/boards/cht_bsw_rt5645.c
··· 40 40 struct cht_mc_private { 41 41 struct snd_soc_jack jack; 42 42 struct cht_acpi_card *acpi_card; 43 - char codec_name[SND_ACPI_I2C_ID_LEN]; 44 43 struct clk *mclk; 45 44 }; 46 45 ··· 566 567 } 567 568 568 569 card->dev = &pdev->dev; 569 - sprintf(drv->codec_name, "i2c-%s:00", drv->acpi_card->codec_id); 570 570 571 571 /* set correct codec name */ 572 572 for (i = 0; i < ARRAY_SIZE(cht_dailink); i++) 573 - if (!strcmp(card->dai_link[i].codecs->name, 573 + if (cht_dailink[i].codecs->name && 574 + !strcmp(cht_dailink[i].codecs->name, 574 575 "i2c-10EC5645:00")) { 575 - card->dai_link[i].codecs->name = drv->codec_name; 576 576 dai_index = i; 577 + break; 577 578 } 578 579 579 580 /* fixup codec name based on HID */
+2 -1
sound/soc/intel/boards/cht_bsw_rt5672.c
··· 466 466 467 467 /* find index of codec dai */ 468 468 for (i = 0; i < ARRAY_SIZE(cht_dailink); i++) { 469 - if (!strcmp(cht_dailink[i].codecs->name, RT5672_I2C_DEFAULT)) { 469 + if (cht_dailink[i].codecs->name && 470 + !strcmp(cht_dailink[i].codecs->name, RT5672_I2C_DEFAULT)) { 470 471 dai_index = i; 471 472 break; 472 473 }
+4 -4
sound/soc/qcom/qdsp6/q6apm-dai.c
··· 123 123 .fifo_size = 0, 124 124 }; 125 125 126 - static void event_handler(uint32_t opcode, uint32_t token, uint32_t *payload, void *priv) 126 + static void event_handler(uint32_t opcode, uint32_t token, void *payload, void *priv) 127 127 { 128 128 struct q6apm_dai_rtd *prtd = priv; 129 129 struct snd_pcm_substream *substream = prtd->substream; ··· 157 157 } 158 158 159 159 static void event_handler_compr(uint32_t opcode, uint32_t token, 160 - uint32_t *payload, void *priv) 160 + void *payload, void *priv) 161 161 { 162 162 struct q6apm_dai_rtd *prtd = priv; 163 163 struct snd_compr_stream *substream = prtd->cstream; ··· 352 352 353 353 spin_lock_init(&prtd->lock); 354 354 prtd->substream = substream; 355 - prtd->graph = q6apm_graph_open(dev, (q6apm_cb)event_handler, prtd, graph_id); 355 + prtd->graph = q6apm_graph_open(dev, event_handler, prtd, graph_id); 356 356 if (IS_ERR(prtd->graph)) { 357 357 dev_err(dev, "%s: Could not allocate memory\n", __func__); 358 358 ret = PTR_ERR(prtd->graph); ··· 496 496 return -ENOMEM; 497 497 498 498 prtd->cstream = stream; 499 - prtd->graph = q6apm_graph_open(dev, (q6apm_cb)event_handler_compr, prtd, graph_id); 499 + prtd->graph = q6apm_graph_open(dev, event_handler_compr, prtd, graph_id); 500 500 if (IS_ERR(prtd->graph)) { 501 501 ret = PTR_ERR(prtd->graph); 502 502 kfree(prtd);
+2
sound/soc/sof/amd/acp-ipc.c
··· 188 188 189 189 dsp_ack = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_SCRATCH_REG_0 + dsp_ack_write); 190 190 if (dsp_ack) { 191 + spin_lock_irq(&sdev->ipc_lock); 191 192 /* handle immediate reply from DSP core */ 192 193 acp_dsp_ipc_get_reply(sdev); 193 194 snd_sof_ipc_reply(sdev, 0); 194 195 /* set the done bit */ 195 196 acp_dsp_ipc_dsp_done(sdev); 197 + spin_unlock_irq(&sdev->ipc_lock); 196 198 ipc_irq = true; 197 199 } 198 200
+8 -9
sound/soc/sof/amd/acp.c
··· 355 355 unsigned int count = ACP_HW_SEM_RETRY_COUNT; 356 356 357 357 spin_lock_irq(&sdev->ipc_lock); 358 - while (snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset)) { 359 - /* Wait until acquired HW Semaphore lock or timeout */ 360 - count--; 361 - if (!count) { 362 - dev_err(sdev->dev, "%s: Failed to acquire HW lock\n", __func__); 363 - spin_unlock_irq(&sdev->ipc_lock); 364 - return IRQ_NONE; 365 - } 358 + /* Wait until acquired HW Semaphore lock or timeout */ 359 + while (snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset) && --count) 360 + ; 361 + spin_unlock_irq(&sdev->ipc_lock); 362 + 363 + if (!count) { 364 + dev_err(sdev->dev, "%s: Failed to acquire HW lock\n", __func__); 365 + return IRQ_NONE; 366 366 } 367 367 368 368 sof_ops(sdev)->irq_thread(irq, sdev); 369 369 /* Unlock or Release HW Semaphore */ 370 370 snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset, 0x0); 371 371 372 - spin_unlock_irq(&sdev->ipc_lock); 373 372 return IRQ_HANDLED; 374 373 }; 375 374
+1 -1
sound/soc/sof/intel/pci-lnl.c
··· 36 36 [SOF_IPC_TYPE_4] = "intel/sof-ipc4/lnl", 37 37 }, 38 38 .default_tplg_path = { 39 - [SOF_IPC_TYPE_4] = "intel/sof-ace-tplg", 39 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 40 40 }, 41 41 .default_fw_filename = { 42 42 [SOF_IPC_TYPE_4] = "sof-lnl.ri",
+32 -32
sound/soc/sof/intel/pci-tgl.c
··· 33 33 .dspless_mode_supported = true, /* Only supported for HDaudio */ 34 34 .default_fw_path = { 35 35 [SOF_IPC_TYPE_3] = "intel/sof", 36 - [SOF_IPC_TYPE_4] = "intel/avs/tgl", 36 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/tgl", 37 37 }, 38 38 .default_lib_path = { 39 - [SOF_IPC_TYPE_4] = "intel/avs-lib/tgl", 39 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/tgl", 40 40 }, 41 41 .default_tplg_path = { 42 42 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 43 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 43 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 44 44 }, 45 45 .default_fw_filename = { 46 46 [SOF_IPC_TYPE_3] = "sof-tgl.ri", 47 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 47 + [SOF_IPC_TYPE_4] = "sof-tgl.ri", 48 48 }, 49 49 .nocodec_tplg_filename = "sof-tgl-nocodec.tplg", 50 50 .ops = &sof_tgl_ops, ··· 66 66 .dspless_mode_supported = true, /* Only supported for HDaudio */ 67 67 .default_fw_path = { 68 68 [SOF_IPC_TYPE_3] = "intel/sof", 69 - [SOF_IPC_TYPE_4] = "intel/avs/tgl-h", 69 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/tgl-h", 70 70 }, 71 71 .default_lib_path = { 72 - [SOF_IPC_TYPE_4] = "intel/avs-lib/tgl-h", 72 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/tgl-h", 73 73 }, 74 74 .default_tplg_path = { 75 75 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 76 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 76 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 77 77 }, 78 78 .default_fw_filename = { 79 79 [SOF_IPC_TYPE_3] = "sof-tgl-h.ri", 80 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 80 + [SOF_IPC_TYPE_4] = "sof-tgl-h.ri", 81 81 }, 82 82 .nocodec_tplg_filename = "sof-tgl-nocodec.tplg", 83 83 .ops = &sof_tgl_ops, ··· 98 98 .dspless_mode_supported = true, /* Only supported for HDaudio */ 99 99 .default_fw_path = { 100 100 [SOF_IPC_TYPE_3] = "intel/sof", 101 - [SOF_IPC_TYPE_4] = "intel/avs/ehl", 101 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/ehl", 102 102 }, 103 103 .default_lib_path = { 104 - [SOF_IPC_TYPE_4] = "intel/avs-lib/ehl", 104 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/ehl", 105 105 }, 106 106 .default_tplg_path = { 107 107 
[SOF_IPC_TYPE_3] = "intel/sof-tplg", 108 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 108 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 109 109 }, 110 110 .default_fw_filename = { 111 111 [SOF_IPC_TYPE_3] = "sof-ehl.ri", 112 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 112 + [SOF_IPC_TYPE_4] = "sof-ehl.ri", 113 113 }, 114 114 .nocodec_tplg_filename = "sof-ehl-nocodec.tplg", 115 115 .ops = &sof_tgl_ops, ··· 131 131 .dspless_mode_supported = true, /* Only supported for HDaudio */ 132 132 .default_fw_path = { 133 133 [SOF_IPC_TYPE_3] = "intel/sof", 134 - [SOF_IPC_TYPE_4] = "intel/avs/adl-s", 134 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/adl-s", 135 135 }, 136 136 .default_lib_path = { 137 - [SOF_IPC_TYPE_4] = "intel/avs-lib/adl-s", 137 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/adl-s", 138 138 }, 139 139 .default_tplg_path = { 140 140 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 141 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 141 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 142 142 }, 143 143 .default_fw_filename = { 144 144 [SOF_IPC_TYPE_3] = "sof-adl-s.ri", 145 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 145 + [SOF_IPC_TYPE_4] = "sof-adl-s.ri", 146 146 }, 147 147 .nocodec_tplg_filename = "sof-adl-nocodec.tplg", 148 148 .ops = &sof_tgl_ops, ··· 164 164 .dspless_mode_supported = true, /* Only supported for HDaudio */ 165 165 .default_fw_path = { 166 166 [SOF_IPC_TYPE_3] = "intel/sof", 167 - [SOF_IPC_TYPE_4] = "intel/avs/adl", 167 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/adl", 168 168 }, 169 169 .default_lib_path = { 170 - [SOF_IPC_TYPE_4] = "intel/avs-lib/adl", 170 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/adl", 171 171 }, 172 172 .default_tplg_path = { 173 173 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 174 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 174 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 175 175 }, 176 176 .default_fw_filename = { 177 177 [SOF_IPC_TYPE_3] = "sof-adl.ri", 178 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 178 + [SOF_IPC_TYPE_4] = "sof-adl.ri", 179 179 }, 180 180 .nocodec_tplg_filename = 
"sof-adl-nocodec.tplg", 181 181 .ops = &sof_tgl_ops, ··· 197 197 .dspless_mode_supported = true, /* Only supported for HDaudio */ 198 198 .default_fw_path = { 199 199 [SOF_IPC_TYPE_3] = "intel/sof", 200 - [SOF_IPC_TYPE_4] = "intel/avs/adl-n", 200 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/adl-n", 201 201 }, 202 202 .default_lib_path = { 203 - [SOF_IPC_TYPE_4] = "intel/avs-lib/adl-n", 203 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/adl-n", 204 204 }, 205 205 .default_tplg_path = { 206 206 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 207 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 207 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 208 208 }, 209 209 .default_fw_filename = { 210 210 [SOF_IPC_TYPE_3] = "sof-adl-n.ri", 211 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 211 + [SOF_IPC_TYPE_4] = "sof-adl-n.ri", 212 212 }, 213 213 .nocodec_tplg_filename = "sof-adl-nocodec.tplg", 214 214 .ops = &sof_tgl_ops, ··· 230 230 .dspless_mode_supported = true, /* Only supported for HDaudio */ 231 231 .default_fw_path = { 232 232 [SOF_IPC_TYPE_3] = "intel/sof", 233 - [SOF_IPC_TYPE_4] = "intel/avs/rpl-s", 233 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/rpl-s", 234 234 }, 235 235 .default_lib_path = { 236 - [SOF_IPC_TYPE_4] = "intel/avs-lib/rpl-s", 236 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/rpl-s", 237 237 }, 238 238 .default_tplg_path = { 239 239 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 240 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 240 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 241 241 }, 242 242 .default_fw_filename = { 243 243 [SOF_IPC_TYPE_3] = "sof-rpl-s.ri", 244 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 244 + [SOF_IPC_TYPE_4] = "sof-rpl-s.ri", 245 245 }, 246 246 .nocodec_tplg_filename = "sof-rpl-nocodec.tplg", 247 247 .ops = &sof_tgl_ops, ··· 263 263 .dspless_mode_supported = true, /* Only supported for HDaudio */ 264 264 .default_fw_path = { 265 265 [SOF_IPC_TYPE_3] = "intel/sof", 266 - [SOF_IPC_TYPE_4] = "intel/avs/rpl", 266 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4/rpl", 267 267 }, 268 268 .default_lib_path = { 269 - 
[SOF_IPC_TYPE_4] = "intel/avs-lib/rpl", 269 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/rpl", 270 270 }, 271 271 .default_tplg_path = { 272 272 [SOF_IPC_TYPE_3] = "intel/sof-tplg", 273 - [SOF_IPC_TYPE_4] = "intel/avs-tplg", 273 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 274 274 }, 275 275 .default_fw_filename = { 276 276 [SOF_IPC_TYPE_3] = "sof-rpl.ri", 277 - [SOF_IPC_TYPE_4] = "dsp_basefw.bin", 277 + [SOF_IPC_TYPE_4] = "sof-rpl.ri", 278 278 }, 279 279 .nocodec_tplg_filename = "sof-rpl-nocodec.tplg", 280 280 .ops = &sof_tgl_ops,
+48 -21
sound/soc/sof/ipc3-topology.c
··· 2360 2360 return 0; 2361 2361 } 2362 2362 2363 + static int sof_ipc3_free_widgets_in_list(struct snd_sof_dev *sdev, bool include_scheduler, 2364 + bool *dyn_widgets, bool verify) 2365 + { 2366 + struct sof_ipc_fw_version *v = &sdev->fw_ready.version; 2367 + struct snd_sof_widget *swidget; 2368 + int ret; 2369 + 2370 + list_for_each_entry(swidget, &sdev->widget_list, list) { 2371 + if (swidget->dynamic_pipeline_widget) { 2372 + *dyn_widgets = true; 2373 + continue; 2374 + } 2375 + 2376 + /* Do not free widgets for static pipelines with FW older than SOF2.2 */ 2377 + if (!verify && !swidget->dynamic_pipeline_widget && 2378 + SOF_FW_VER(v->major, v->minor, v->micro) < SOF_FW_VER(2, 2, 0)) { 2379 + mutex_lock(&swidget->setup_mutex); 2380 + swidget->use_count = 0; 2381 + mutex_unlock(&swidget->setup_mutex); 2382 + if (swidget->spipe) 2383 + swidget->spipe->complete = 0; 2384 + continue; 2385 + } 2386 + 2387 + if (include_scheduler && swidget->id != snd_soc_dapm_scheduler) 2388 + continue; 2389 + 2390 + if (!include_scheduler && swidget->id == snd_soc_dapm_scheduler) 2391 + continue; 2392 + 2393 + ret = sof_widget_free(sdev, swidget); 2394 + if (ret < 0) 2395 + return ret; 2396 + } 2397 + 2398 + return 0; 2399 + } 2400 + 2363 2401 /* 2364 2402 * For older firmware, this function doesn't free widgets for static pipelines during suspend. 2365 2403 * It only resets use_count for all widgets. ··· 2414 2376 * This function is called during suspend and for one-time topology verification during 2415 2377 * first boot. In both cases, there is no need to protect swidget->use_count and 2416 2378 * sroute->setup because during suspend all running streams are suspended and during 2417 - * topology loading the sound card unavailable to open PCMs. 2379 + * topology loading the sound card unavailable to open PCMs. 
Do not free the scheduler 2380 + * widgets yet so that the secondary cores do not get powered down before all the widgets 2381 + * associated with the scheduler are freed. 2418 2382 */ 2419 - list_for_each_entry(swidget, &sdev->widget_list, list) { 2420 - if (swidget->dynamic_pipeline_widget) { 2421 - dyn_widgets = true; 2422 - continue; 2423 - } 2383 + ret = sof_ipc3_free_widgets_in_list(sdev, false, &dyn_widgets, verify); 2384 + if (ret < 0) 2385 + return ret; 2424 2386 2425 - /* Do not free widgets for static pipelines with FW older than SOF2.2 */ 2426 - if (!verify && !swidget->dynamic_pipeline_widget && 2427 - SOF_FW_VER(v->major, v->minor, v->micro) < SOF_FW_VER(2, 2, 0)) { 2428 - mutex_lock(&swidget->setup_mutex); 2429 - swidget->use_count = 0; 2430 - mutex_unlock(&swidget->setup_mutex); 2431 - if (swidget->spipe) 2432 - swidget->spipe->complete = 0; 2433 - continue; 2434 - } 2435 - 2436 - ret = sof_widget_free(sdev, swidget); 2437 - if (ret < 0) 2438 - return ret; 2439 - } 2387 + /* free all the scheduler widgets now */ 2388 + ret = sof_ipc3_free_widgets_in_list(sdev, true, &dyn_widgets, verify); 2389 + if (ret < 0) 2390 + return ret; 2440 2391 2441 2392 /* 2442 2393 * Tear down all pipelines associated with PCMs that did not get suspended
+1 -1
sound/soc/sof/ipc3.c
··· 1067 1067 return; 1068 1068 } 1069 1069 1070 - if (hdr.size < sizeof(hdr)) { 1070 + if (hdr.size < sizeof(hdr) || hdr.size > SOF_IPC_MSG_MAX_SIZE) { 1071 1071 dev_err(sdev->dev, "The received message size is invalid\n"); 1072 1072 return; 1073 1073 }
+12 -1
sound/soc/sof/ipc4-pcm.c
··· 413 413 ret = sof_ipc4_set_multi_pipeline_state(sdev, state, trigger_list); 414 414 if (ret < 0) { 415 415 dev_err(sdev->dev, "failed to set final state %d for all pipelines\n", state); 416 - goto free; 416 + /* 417 + * workaround: if the firmware is crashed while setting the 418 + * pipelines to reset state we must ignore the error code and 419 + * reset it to 0. 420 + * Since the firmware is crashed we will not send IPC messages 421 + * and we are going to see errors printed, but the state of the 422 + * widgets will be correct for the next boot. 423 + */ 424 + if (sdev->fw_state != SOF_FW_CRASHED || state != SOF_IPC4_PIPE_RESET) 425 + goto free; 426 + 427 + ret = 0; 417 428 } 418 429 419 430 /* update RUNNING/RESET state for all pipelines that were just triggered */
+27 -50
sound/usb/midi.c
··· 1742 1742 } 1743 1743 } 1744 1744 1745 - static struct usb_midi_in_jack_descriptor *find_usb_in_jack_descriptor( 1746 - struct usb_host_interface *hostif, uint8_t jack_id) 1745 + /* return iJack for the corresponding jackID */ 1746 + static int find_usb_ijack(struct usb_host_interface *hostif, uint8_t jack_id) 1747 1747 { 1748 1748 unsigned char *extra = hostif->extra; 1749 1749 int extralen = hostif->extralen; 1750 + struct usb_descriptor_header *h; 1751 + struct usb_midi_out_jack_descriptor *outjd; 1752 + struct usb_midi_in_jack_descriptor *injd; 1753 + size_t sz; 1750 1754 1751 1755 while (extralen > 4) { 1752 - struct usb_midi_in_jack_descriptor *injd = 1753 - (struct usb_midi_in_jack_descriptor *)extra; 1756 + h = (struct usb_descriptor_header *)extra; 1757 + if (h->bDescriptorType != USB_DT_CS_INTERFACE) 1758 + goto next; 1754 1759 1755 - if (injd->bLength >= sizeof(*injd) && 1756 - injd->bDescriptorType == USB_DT_CS_INTERFACE && 1757 - injd->bDescriptorSubtype == UAC_MIDI_IN_JACK && 1758 - injd->bJackID == jack_id) 1759 - return injd; 1760 - if (!extra[0]) 1761 - break; 1762 - extralen -= extra[0]; 1763 - extra += extra[0]; 1764 - } 1765 - return NULL; 1766 - } 1767 - 1768 - static struct usb_midi_out_jack_descriptor *find_usb_out_jack_descriptor( 1769 - struct usb_host_interface *hostif, uint8_t jack_id) 1770 - { 1771 - unsigned char *extra = hostif->extra; 1772 - int extralen = hostif->extralen; 1773 - 1774 - while (extralen > 4) { 1775 - struct usb_midi_out_jack_descriptor *outjd = 1776 - (struct usb_midi_out_jack_descriptor *)extra; 1777 - 1778 - if (outjd->bLength >= sizeof(*outjd) && 1779 - outjd->bDescriptorType == USB_DT_CS_INTERFACE && 1760 + outjd = (struct usb_midi_out_jack_descriptor *)h; 1761 + if (h->bLength >= sizeof(*outjd) && 1780 1762 outjd->bDescriptorSubtype == UAC_MIDI_OUT_JACK && 1781 - outjd->bJackID == jack_id) 1782 - return outjd; 1763 + outjd->bJackID == jack_id) { 1764 + sz = USB_DT_MIDI_OUT_SIZE(outjd->bNrInputPins); 1765 + if 
(outjd->bLength < sz) 1766 + goto next; 1767 + return *(extra + sz - 1); 1768 + } 1769 + 1770 + injd = (struct usb_midi_in_jack_descriptor *)h; 1771 + if (injd->bLength >= sizeof(*injd) && 1772 + injd->bDescriptorSubtype == UAC_MIDI_IN_JACK && 1773 + injd->bJackID == jack_id) 1774 + return injd->iJack; 1775 + 1776 + next: 1783 1777 if (!extra[0]) 1784 1778 break; 1785 1779 extralen -= extra[0]; 1786 1780 extra += extra[0]; 1787 1781 } 1788 - return NULL; 1782 + return 0; 1789 1783 } 1790 1784 1791 1785 static void snd_usbmidi_init_substream(struct snd_usb_midi *umidi, ··· 1790 1796 const char *name_format; 1791 1797 struct usb_interface *intf; 1792 1798 struct usb_host_interface *hostif; 1793 - struct usb_midi_in_jack_descriptor *injd; 1794 - struct usb_midi_out_jack_descriptor *outjd; 1795 1799 uint8_t jack_name_buf[32]; 1796 1800 uint8_t *default_jack_name = "MIDI"; 1797 1801 uint8_t *jack_name = default_jack_name; 1798 1802 uint8_t iJack; 1799 - size_t sz; 1800 1803 int res; 1801 1804 1802 1805 struct snd_rawmidi_substream *substream = ··· 1807 1816 intf = umidi->iface; 1808 1817 if (intf && jack_id >= 0) { 1809 1818 hostif = intf->cur_altsetting; 1810 - iJack = 0; 1811 - if (stream != SNDRV_RAWMIDI_STREAM_OUTPUT) { 1812 - /* in jacks connect to outs */ 1813 - outjd = find_usb_out_jack_descriptor(hostif, jack_id); 1814 - if (outjd) { 1815 - sz = USB_DT_MIDI_OUT_SIZE(outjd->bNrInputPins); 1816 - if (outjd->bLength >= sz) 1817 - iJack = *(((uint8_t *) outjd) + sz - sizeof(uint8_t)); 1818 - } 1819 - } else { 1820 - /* and out jacks connect to ins */ 1821 - injd = find_usb_in_jack_descriptor(hostif, jack_id); 1822 - if (injd) 1823 - iJack = injd->iJack; 1824 - } 1819 + iJack = find_usb_ijack(hostif, jack_id); 1825 1820 if (iJack != 0) { 1826 1821 res = usb_string(umidi->dev, iJack, jack_name_buf, 1827 1822 ARRAY_SIZE(jack_name_buf));
+15 -4
tools/net/ynl/lib/ynl.c
··· 466 466 467 467 int ynl_recv_ack(struct ynl_sock *ys, int ret) 468 468 { 469 + struct ynl_parse_arg yarg = { .ys = ys, }; 470 + 469 471 if (!ret) { 470 472 yerr(ys, YNL_ERROR_EXPECT_ACK, 471 473 "Expecting an ACK but nothing received"); ··· 480 478 return ret; 481 479 } 482 480 return mnl_cb_run(ys->rx_buf, ret, ys->seq, ys->portid, 483 - ynl_cb_null, ys); 481 + ynl_cb_null, &yarg); 484 482 } 485 483 486 484 int ynl_cb_null(const struct nlmsghdr *nlh, void *data) ··· 588 586 return err; 589 587 } 590 588 591 - return ynl_recv_ack(ys, err); 589 + err = ynl_recv_ack(ys, err); 590 + if (err < 0) { 591 + free(ys->mcast_groups); 592 + return err; 593 + } 594 + 595 + return 0; 592 596 } 593 597 594 598 struct ynl_sock * ··· 749 741 750 742 static int ynl_ntf_trampoline(const struct nlmsghdr *nlh, void *data) 751 743 { 752 - return ynl_ntf_parse((struct ynl_sock *)data, nlh); 744 + struct ynl_parse_arg *yarg = data; 745 + 746 + return ynl_ntf_parse(yarg->ys, nlh); 753 747 } 754 748 755 749 int ynl_ntf_check(struct ynl_sock *ys) 756 750 { 751 + struct ynl_parse_arg yarg = { .ys = ys, }; 757 752 ssize_t len; 758 753 int err; 759 754 ··· 778 767 return len; 779 768 780 769 err = mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid, 781 - ynl_ntf_trampoline, ys, 770 + ynl_ntf_trampoline, &yarg, 782 771 ynl_cb_array, NLMSG_MIN_TYPE); 783 772 if (err < 0) 784 773 return err;
+1
tools/testing/selftests/bpf/prog_tests/iters.c
··· 193 193 ASSERT_EQ(skel->bss->procs_cnt, 1, "procs_cnt"); 194 194 ASSERT_EQ(skel->bss->threads_cnt, thread_num + 1, "threads_cnt"); 195 195 ASSERT_EQ(skel->bss->proc_threads_cnt, thread_num + 1, "proc_threads_cnt"); 196 + ASSERT_EQ(skel->bss->invalid_cnt, 0, "invalid_cnt"); 196 197 pthread_mutex_unlock(&do_nothing_mutex); 197 198 for (int i = 0; i < thread_num; i++) 198 199 ASSERT_OK(pthread_join(thread_ids[i], &ret), "pthread_join");
+57
tools/testing/selftests/bpf/prog_tests/read_vsyscall.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (C) 2024. Huawei Technologies Co., Ltd */ 3 + #include "test_progs.h" 4 + #include "read_vsyscall.skel.h" 5 + 6 + #if defined(__x86_64__) 7 + /* For VSYSCALL_ADDR */ 8 + #include <asm/vsyscall.h> 9 + #else 10 + /* To prevent build failure on non-x86 arch */ 11 + #define VSYSCALL_ADDR 0UL 12 + #endif 13 + 14 + struct read_ret_desc { 15 + const char *name; 16 + int ret; 17 + } all_read[] = { 18 + { .name = "probe_read_kernel", .ret = -ERANGE }, 19 + { .name = "probe_read_kernel_str", .ret = -ERANGE }, 20 + { .name = "probe_read", .ret = -ERANGE }, 21 + { .name = "probe_read_str", .ret = -ERANGE }, 22 + { .name = "probe_read_user", .ret = -EFAULT }, 23 + { .name = "probe_read_user_str", .ret = -EFAULT }, 24 + { .name = "copy_from_user", .ret = -EFAULT }, 25 + { .name = "copy_from_user_task", .ret = -EFAULT }, 26 + }; 27 + 28 + void test_read_vsyscall(void) 29 + { 30 + struct read_vsyscall *skel; 31 + unsigned int i; 32 + int err; 33 + 34 + #if !defined(__x86_64__) 35 + test__skip(); 36 + return; 37 + #endif 38 + skel = read_vsyscall__open_and_load(); 39 + if (!ASSERT_OK_PTR(skel, "read_vsyscall open_load")) 40 + return; 41 + 42 + skel->bss->target_pid = getpid(); 43 + err = read_vsyscall__attach(skel); 44 + if (!ASSERT_EQ(err, 0, "read_vsyscall attach")) 45 + goto out; 46 + 47 + /* userspace may don't have vsyscall page due to LEGACY_VSYSCALL_NONE, 48 + * but it doesn't affect the returned error codes. 49 + */ 50 + skel->bss->user_ptr = (void *)VSYSCALL_ADDR; 51 + usleep(1); 52 + 53 + for (i = 0; i < ARRAY_SIZE(all_read); i++) 54 + ASSERT_EQ(skel->bss->read_ret[i], all_read[i].ret, all_read[i].name); 55 + out: 56 + read_vsyscall__destroy(skel); 57 + }
+34 -1
tools/testing/selftests/bpf/prog_tests/timer.c
··· 4 4 #include "timer.skel.h" 5 5 #include "timer_failure.skel.h" 6 6 7 + #define NUM_THR 8 8 + 9 + static void *spin_lock_thread(void *arg) 10 + { 11 + int i, err, prog_fd = *(int *)arg; 12 + LIBBPF_OPTS(bpf_test_run_opts, topts); 13 + 14 + for (i = 0; i < 10000; i++) { 15 + err = bpf_prog_test_run_opts(prog_fd, &topts); 16 + if (!ASSERT_OK(err, "test_run_opts err") || 17 + !ASSERT_OK(topts.retval, "test_run_opts retval")) 18 + break; 19 + } 20 + 21 + pthread_exit(arg); 22 + } 23 + 7 24 static int timer(struct timer *timer_skel) 8 25 { 9 - int err, prog_fd; 26 + int i, err, prog_fd; 10 27 LIBBPF_OPTS(bpf_test_run_opts, topts); 28 + pthread_t thread_id[NUM_THR]; 29 + void *ret; 11 30 12 31 err = timer__attach(timer_skel); 13 32 if (!ASSERT_OK(err, "timer_attach")) ··· 61 42 62 43 /* check that code paths completed */ 63 44 ASSERT_EQ(timer_skel->bss->ok, 1 | 2 | 4, "ok"); 45 + 46 + prog_fd = bpf_program__fd(timer_skel->progs.race); 47 + for (i = 0; i < NUM_THR; i++) { 48 + err = pthread_create(&thread_id[i], NULL, 49 + &spin_lock_thread, &prog_fd); 50 + if (!ASSERT_OK(err, "pthread_create")) 51 + break; 52 + } 53 + 54 + while (i) { 55 + err = pthread_join(thread_id[--i], &ret); 56 + if (ASSERT_OK(err, "pthread_join")) 57 + ASSERT_EQ(ret, (void *)&prog_fd, "pthread_join"); 58 + } 64 59 65 60 return 0; 66 61 }
+11 -1
tools/testing/selftests/bpf/progs/iters_task.c
··· 10 10 char _license[] SEC("license") = "GPL"; 11 11 12 12 pid_t target_pid; 13 - int procs_cnt, threads_cnt, proc_threads_cnt; 13 + int procs_cnt, threads_cnt, proc_threads_cnt, invalid_cnt; 14 14 15 15 void bpf_rcu_read_lock(void) __ksym; 16 16 void bpf_rcu_read_unlock(void) __ksym; ··· 26 26 procs_cnt = threads_cnt = proc_threads_cnt = 0; 27 27 28 28 bpf_rcu_read_lock(); 29 + bpf_for_each(task, pos, NULL, ~0U) { 30 + /* Below instructions shouldn't be executed for invalid flags */ 31 + invalid_cnt++; 32 + } 33 + 34 + bpf_for_each(task, pos, NULL, BPF_TASK_ITER_PROC_THREADS) { 35 + /* Below instructions shouldn't be executed for invalid task__nullable */ 36 + invalid_cnt++; 37 + } 38 + 29 39 bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS) 30 40 if (pos->pid == target_pid) 31 41 procs_cnt++;
+45
tools/testing/selftests/bpf/progs/read_vsyscall.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (C) 2024. Huawei Technologies Co., Ltd */ 3 + #include <linux/types.h> 4 + #include <bpf/bpf_helpers.h> 5 + 6 + #include "bpf_misc.h" 7 + 8 + int target_pid = 0; 9 + void *user_ptr = 0; 10 + int read_ret[8]; 11 + 12 + char _license[] SEC("license") = "GPL"; 13 + 14 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 15 + int do_probe_read(void *ctx) 16 + { 17 + char buf[8]; 18 + 19 + if ((bpf_get_current_pid_tgid() >> 32) != target_pid) 20 + return 0; 21 + 22 + read_ret[0] = bpf_probe_read_kernel(buf, sizeof(buf), user_ptr); 23 + read_ret[1] = bpf_probe_read_kernel_str(buf, sizeof(buf), user_ptr); 24 + read_ret[2] = bpf_probe_read(buf, sizeof(buf), user_ptr); 25 + read_ret[3] = bpf_probe_read_str(buf, sizeof(buf), user_ptr); 26 + read_ret[4] = bpf_probe_read_user(buf, sizeof(buf), user_ptr); 27 + read_ret[5] = bpf_probe_read_user_str(buf, sizeof(buf), user_ptr); 28 + 29 + return 0; 30 + } 31 + 32 + SEC("fentry.s/" SYS_PREFIX "sys_nanosleep") 33 + int do_copy_from_user(void *ctx) 34 + { 35 + char buf[8]; 36 + 37 + if ((bpf_get_current_pid_tgid() >> 32) != target_pid) 38 + return 0; 39 + 40 + read_ret[6] = bpf_copy_from_user(buf, sizeof(buf), user_ptr); 41 + read_ret[7] = bpf_copy_from_user_task(buf, sizeof(buf), user_ptr, 42 + bpf_get_current_task_btf(), 0); 43 + 44 + return 0; 45 + }
+33 -1
tools/testing/selftests/bpf/progs/timer.c
··· 51 51 __uint(max_entries, 1); 52 52 __type(key, int); 53 53 __type(value, struct elem); 54 - } abs_timer SEC(".maps"), soft_timer_pinned SEC(".maps"), abs_timer_pinned SEC(".maps"); 54 + } abs_timer SEC(".maps"), soft_timer_pinned SEC(".maps"), abs_timer_pinned SEC(".maps"), 55 + race_array SEC(".maps"); 55 56 56 57 __u64 bss_data; 57 58 __u64 abs_data; ··· 388 387 { 389 388 bpf_printk("test5"); 390 389 test_pinned_timer(false); 390 + 391 + return 0; 392 + } 393 + 394 + static int race_timer_callback(void *race_array, int *race_key, struct bpf_timer *timer) 395 + { 396 + bpf_timer_start(timer, 1000000, 0); 397 + return 0; 398 + } 399 + 400 + SEC("syscall") 401 + int race(void *ctx) 402 + { 403 + struct bpf_timer *timer; 404 + int err, race_key = 0; 405 + struct elem init; 406 + 407 + __builtin_memset(&init, 0, sizeof(struct elem)); 408 + bpf_map_update_elem(&race_array, &race_key, &init, BPF_ANY); 409 + 410 + timer = bpf_map_lookup_elem(&race_array, &race_key); 411 + if (!timer) 412 + return 1; 413 + 414 + err = bpf_timer_init(timer, &race_array, CLOCK_MONOTONIC); 415 + if (err && err != -EBUSY) 416 + return 1; 417 + 418 + bpf_timer_set_callback(timer, race_timer_callback); 419 + bpf_timer_start(timer, 0, 0); 420 + bpf_timer_cancel(timer); 391 421 392 422 return 0; 393 423 }
+2
tools/testing/selftests/drivers/net/bonding/bond_options.sh
··· 70 70 71 71 # create bond 72 72 bond_reset "${param}" 73 + # set active_slave to primary eth1 specifically 74 + ip -n ${s_ns} link set bond0 type bond active_slave eth1 73 75 74 76 # check bonding member prio value 75 77 ip -n ${s_ns} link set eth0 type bond_slave prio 0
+6 -6
tools/testing/selftests/kvm/aarch64/arch_timer.c
··· 248 248 REPORT_GUEST_ASSERT(uc); 249 249 break; 250 250 default: 251 - TEST_FAIL("Unexpected guest exit\n"); 251 + TEST_FAIL("Unexpected guest exit"); 252 252 } 253 253 254 254 return NULL; ··· 287 287 288 288 /* Allow the error where the vCPU thread is already finished */ 289 289 TEST_ASSERT(ret == 0 || ret == ESRCH, 290 - "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n", 290 + "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d", 291 291 vcpu_idx, new_pcpu, ret); 292 292 293 293 return ret; ··· 326 326 327 327 pthread_mutex_init(&vcpu_done_map_lock, NULL); 328 328 vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus); 329 - TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n"); 329 + TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap"); 330 330 331 331 for (i = 0; i < (unsigned long)test_args.nr_vcpus; i++) { 332 332 ret = pthread_create(&pt_vcpu_run[i], NULL, test_vcpu_run, 333 333 (void *)(unsigned long)i); 334 - TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i); 334 + TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread", i); 335 335 } 336 336 337 337 /* Spawn a thread to control the vCPU migrations */ ··· 340 340 341 341 ret = pthread_create(&pt_vcpu_migration, NULL, 342 342 test_vcpu_migration, NULL); 343 - TEST_ASSERT(!ret, "Failed to create the migration pthread\n"); 343 + TEST_ASSERT(!ret, "Failed to create the migration pthread"); 344 344 } 345 345 346 346 ··· 384 384 if (kvm_has_cap(KVM_CAP_COUNTER_OFFSET)) 385 385 vm_ioctl(vm, KVM_ARM_SET_COUNTER_OFFSET, &test_args.offset); 386 386 else 387 - TEST_FAIL("no support for global offset\n"); 387 + TEST_FAIL("no support for global offset"); 388 388 } 389 389 390 390 for (i = 0; i < nr_vcpus; i++)
+8 -8
tools/testing/selftests/kvm/aarch64/hypercalls.c
··· 175 175 /* First 'read' should be an upper limit of the features supported */ 176 176 vcpu_get_reg(vcpu, reg_info->reg, &val); 177 177 TEST_ASSERT(val == FW_REG_ULIMIT_VAL(reg_info->max_feat_bit), 178 - "Expected all the features to be set for reg: 0x%lx; expected: 0x%lx; read: 0x%lx\n", 178 + "Expected all the features to be set for reg: 0x%lx; expected: 0x%lx; read: 0x%lx", 179 179 reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit), val); 180 180 181 181 /* Test a 'write' by disabling all the features of the register map */ 182 182 ret = __vcpu_set_reg(vcpu, reg_info->reg, 0); 183 183 TEST_ASSERT(ret == 0, 184 - "Failed to clear all the features of reg: 0x%lx; ret: %d\n", 184 + "Failed to clear all the features of reg: 0x%lx; ret: %d", 185 185 reg_info->reg, errno); 186 186 187 187 vcpu_get_reg(vcpu, reg_info->reg, &val); 188 188 TEST_ASSERT(val == 0, 189 - "Expected all the features to be cleared for reg: 0x%lx\n", reg_info->reg); 189 + "Expected all the features to be cleared for reg: 0x%lx", reg_info->reg); 190 190 191 191 /* 192 192 * Test enabling a feature that's not supported. 
··· 195 195 if (reg_info->max_feat_bit < 63) { 196 196 ret = __vcpu_set_reg(vcpu, reg_info->reg, BIT(reg_info->max_feat_bit + 1)); 197 197 TEST_ASSERT(ret != 0 && errno == EINVAL, 198 - "Unexpected behavior or return value (%d) while setting an unsupported feature for reg: 0x%lx\n", 198 + "Unexpected behavior or return value (%d) while setting an unsupported feature for reg: 0x%lx", 199 199 errno, reg_info->reg); 200 200 } 201 201 } ··· 216 216 */ 217 217 vcpu_get_reg(vcpu, reg_info->reg, &val); 218 218 TEST_ASSERT(val == 0, 219 - "Expected all the features to be cleared for reg: 0x%lx\n", 219 + "Expected all the features to be cleared for reg: 0x%lx", 220 220 reg_info->reg); 221 221 222 222 /* ··· 226 226 */ 227 227 ret = __vcpu_set_reg(vcpu, reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit)); 228 228 TEST_ASSERT(ret != 0 && errno == EBUSY, 229 - "Unexpected behavior or return value (%d) while setting a feature while VM is running for reg: 0x%lx\n", 229 + "Unexpected behavior or return value (%d) while setting a feature while VM is running for reg: 0x%lx", 230 230 errno, reg_info->reg); 231 231 } 232 232 } ··· 265 265 case TEST_STAGE_HVC_IFACE_FALSE_INFO: 266 266 break; 267 267 default: 268 - TEST_FAIL("Unknown test stage: %d\n", prev_stage); 268 + TEST_FAIL("Unknown test stage: %d", prev_stage); 269 269 } 270 270 } 271 271 ··· 294 294 REPORT_GUEST_ASSERT(uc); 295 295 break; 296 296 default: 297 - TEST_FAIL("Unexpected guest exit\n"); 297 + TEST_FAIL("Unexpected guest exit"); 298 298 } 299 299 } 300 300
+3 -3
tools/testing/selftests/kvm/aarch64/page_fault_test.c
··· 414 414 if (fd != -1) { 415 415 ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 416 416 0, paging_size); 417 - TEST_ASSERT(ret == 0, "fallocate failed\n"); 417 + TEST_ASSERT(ret == 0, "fallocate failed"); 418 418 } else { 419 419 ret = madvise(hva, paging_size, MADV_DONTNEED); 420 - TEST_ASSERT(ret == 0, "madvise failed\n"); 420 + TEST_ASSERT(ret == 0, "madvise failed"); 421 421 } 422 422 423 423 return true; ··· 501 501 502 502 void fail_vcpu_run_no_handler(int ret) 503 503 { 504 - TEST_FAIL("Unexpected vcpu run failure\n"); 504 + TEST_FAIL("Unexpected vcpu run failure"); 505 505 } 506 506 507 507 void fail_vcpu_run_mmio_no_syndrome_handler(int ret)
+1 -1
tools/testing/selftests/kvm/aarch64/smccc_filter.c
··· 178 178 struct ucall uc; 179 179 180 180 if (get_ucall(vcpu, &uc) != UCALL_SYNC) 181 - TEST_FAIL("Unexpected ucall: %lu\n", uc.cmd); 181 + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); 182 182 183 183 TEST_ASSERT(uc.args[1] == SMCCC_RET_NOT_SUPPORTED, 184 184 "Unexpected SMCCC return code: %lu", uc.args[1]);
+6 -6
tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
··· 517 517 518 518 if (expect_fail) 519 519 TEST_ASSERT(pmcr_orig == pmcr, 520 - "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n", 520 + "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx", 521 521 pmcr, pmcr_n); 522 522 else 523 523 TEST_ASSERT(pmcr_n == get_pmcr_n(pmcr), 524 - "Failed to update PMCR.N to %lu (received: %lu)\n", 524 + "Failed to update PMCR.N to %lu (received: %lu)", 525 525 pmcr_n, get_pmcr_n(pmcr)); 526 526 } 527 527 ··· 594 594 */ 595 595 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val); 596 596 TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 597 - "Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 597 + "Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx", 598 598 KVM_ARM64_SYS_REG(set_reg_id), reg_val); 599 599 600 600 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val); 601 601 TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 602 - "Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 602 + "Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx", 603 603 KVM_ARM64_SYS_REG(clr_reg_id), reg_val); 604 604 605 605 /* ··· 611 611 612 612 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val); 613 613 TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 614 - "Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 614 + "Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx", 615 615 KVM_ARM64_SYS_REG(set_reg_id), reg_val); 616 616 617 617 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val); 618 618 TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 619 - "Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 619 + "Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx", 620 620 KVM_ARM64_SYS_REG(clr_reg_id), reg_val); 621 621 } 622 622
+2 -2
tools/testing/selftests/kvm/demand_paging_test.c
··· 45 45 46 46 /* Let the guest access its memory */ 47 47 ret = _vcpu_run(vcpu); 48 - TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); 48 + TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); 49 49 if (get_ucall(vcpu, NULL) != UCALL_SYNC) { 50 50 TEST_ASSERT(false, 51 - "Invalid guest sync status: exit_reason=%s\n", 51 + "Invalid guest sync status: exit_reason=%s", 52 52 exit_reason_str(run->exit_reason)); 53 53 } 54 54
+2 -2
tools/testing/selftests/kvm/dirty_log_perf_test.c
··· 88 88 ret = _vcpu_run(vcpu); 89 89 ts_diff = timespec_elapsed(start); 90 90 91 - TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); 91 + TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); 92 92 TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC, 93 - "Invalid guest sync status: exit_reason=%s\n", 93 + "Invalid guest sync status: exit_reason=%s", 94 94 exit_reason_str(run->exit_reason)); 95 95 96 96 pr_debug("Got sync event from vCPU %d\n", vcpu_idx);
+29 -25
tools/testing/selftests/kvm/dirty_log_test.c
··· 262 262 "vcpu run failed: errno=%d", err); 263 263 264 264 TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC, 265 - "Invalid guest sync status: exit_reason=%s\n", 265 + "Invalid guest sync status: exit_reason=%s", 266 266 exit_reason_str(run->exit_reason)); 267 267 268 268 vcpu_handle_sync_stop(); ··· 376 376 377 377 cleared = kvm_vm_reset_dirty_ring(vcpu->vm); 378 378 379 - /* Cleared pages should be the same as collected */ 379 + /* 380 + * Cleared pages should be the same as collected, as KVM is supposed to 381 + * clear only the entries that have been harvested. 382 + */ 380 383 TEST_ASSERT(cleared == count, "Reset dirty pages (%u) mismatch " 381 384 "with collected (%u)", cleared, count); 382 385 ··· 413 410 pr_info("vcpu continues now.\n"); 414 411 } else { 415 412 TEST_ASSERT(false, "Invalid guest sync status: " 416 - "exit_reason=%s\n", 413 + "exit_reason=%s", 417 414 exit_reason_str(run->exit_reason)); 418 415 } 419 - } 420 - 421 - static void dirty_ring_before_vcpu_join(void) 422 - { 423 - /* Kick another round of vcpu just to make sure it will quit */ 424 - sem_post(&sem_vcpu_cont); 425 416 } 426 417 427 418 struct log_mode { ··· 430 433 uint32_t *ring_buf_idx); 431 434 /* Hook to call when after each vcpu run */ 432 435 void (*after_vcpu_run)(struct kvm_vcpu *vcpu, int ret, int err); 433 - void (*before_vcpu_join) (void); 434 436 } log_modes[LOG_MODE_NUM] = { 435 437 { 436 438 .name = "dirty-log", ··· 448 452 .supported = dirty_ring_supported, 449 453 .create_vm_done = dirty_ring_create_vm_done, 450 454 .collect_dirty_pages = dirty_ring_collect_dirty_pages, 451 - .before_vcpu_join = dirty_ring_before_vcpu_join, 452 455 .after_vcpu_run = dirty_ring_after_vcpu_run, 453 456 }, 454 457 }; ··· 506 511 507 512 if (mode->after_vcpu_run) 508 513 mode->after_vcpu_run(vcpu, ret, err); 509 - } 510 - 511 - static void log_mode_before_vcpu_join(void) 512 - { 513 - struct log_mode *mode = &log_modes[host_log_mode]; 514 - 515 - if (mode->before_vcpu_join) 516 - mode->before_vcpu_join(); 517 514 } 518 515 519 516 static void generate_random_array(uint64_t *guest_array, uint64_t size) ··· 706 719 struct kvm_vm *vm; 707 720 unsigned long *bmap; 708 721 uint32_t ring_buf_idx = 0; 722 + int sem_val; 709 723 710 724 if (!log_mode_supported()) { 711 725 print_skip("Log mode '%s' not supported", ··· 776 788 /* Start the iterations */ 777 789 iteration = 1; 778 790 sync_global_to_guest(vm, iteration); 779 - host_quit = false; 791 + WRITE_ONCE(host_quit, false); 780 792 host_dirty_count = 0; 781 793 host_clear_count = 0; 782 794 host_track_next_count = 0; 783 795 WRITE_ONCE(dirty_ring_vcpu_ring_full, false); 796 + 797 + /* 798 + * Ensure the previous iteration didn't leave a dangling semaphore, i.e. 799 + * that the main task and vCPU worker were synchronized and completed 800 + * verification of all iterations. 801 + */ 802 + sem_getvalue(&sem_vcpu_stop, &sem_val); 803 + TEST_ASSERT_EQ(sem_val, 0); 804 + sem_getvalue(&sem_vcpu_cont, &sem_val); 805 + TEST_ASSERT_EQ(sem_val, 0); 784 806 785 807 pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu); 786 808 ··· 817 819 assert(host_log_mode == LOG_MODE_DIRTY_RING || 818 820 atomic_read(&vcpu_sync_stop_requested) == false); 819 821 vm_dirty_log_verify(mode, bmap); 820 - sem_post(&sem_vcpu_cont); 821 822 822 - iteration++; 823 + /* 824 + * Set host_quit before sem_vcpu_cont in the final iteration to 825 + * ensure that the vCPU worker doesn't resume the guest. As 826 + * above, the dirty ring test may stop and wait even when not 827 + * explicitly request to do so, i.e. would hang waiting for a 828 + * "continue" if it's allowed to resume the guest. 829 + */ 830 + if (++iteration == p->iterations) 831 + WRITE_ONCE(host_quit, true); 832 + 833 + sem_post(&sem_vcpu_cont); 823 834 sync_global_to_guest(vm, iteration); 824 835 } 825 836 826 - /* Tell the vcpu thread to quit */ 827 - host_quit = true; 828 - log_mode_before_vcpu_join(); 829 837 pthread_join(vcpu_thread, NULL); 830 838 831 839 pr_info("Total bits checked: dirty (%"PRIu64"), clear (%"PRIu64"), "
+1 -1
tools/testing/selftests/kvm/get-reg-list.c
··· 152 152 continue; 153 153 154 154 __TEST_REQUIRE(kvm_has_cap(s->capability), 155 - "%s: %s not available, skipping tests\n", 155 + "%s: %s not available, skipping tests", 156 156 config_name(c), s->name); 157 157 } 158 158 }
+4 -4
tools/testing/selftests/kvm/guest_print_test.c
··· 98 98 int offset = len_str - len_substr; 99 99 100 100 TEST_ASSERT(len_substr <= len_str, 101 - "Expected '%s' to be a substring of '%s'\n", 101 + "Expected '%s' to be a substring of '%s'", 102 102 assert_msg, expected_assert_msg); 103 103 104 104 TEST_ASSERT(strcmp(&assert_msg[offset], expected_assert_msg) == 0, ··· 116 116 vcpu_run(vcpu); 117 117 118 118 TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON, 119 - "Unexpected exit reason: %u (%s),\n", 119 + "Unexpected exit reason: %u (%s),", 120 120 run->exit_reason, exit_reason_str(run->exit_reason)); 121 121 122 122 switch (get_ucall(vcpu, &uc)) { ··· 161 161 vcpu_run(vcpu); 162 162 163 163 TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON, 164 - "Unexpected exit reason: %u (%s),\n", 164 + "Unexpected exit reason: %u (%s),", 165 165 run->exit_reason, exit_reason_str(run->exit_reason)); 166 166 167 167 TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_ABORT, 168 - "Unexpected ucall command: %lu, Expected: %u (UCALL_ABORT)\n", 168 + "Unexpected ucall command: %lu, Expected: %u (UCALL_ABORT)", 169 169 uc.cmd, UCALL_ABORT); 170 170 171 171 kvm_vm_free(vm);
+3 -3
tools/testing/selftests/kvm/hardware_disable_test.c
··· 41 41 42 42 vcpu_run(vcpu); 43 43 44 - TEST_ASSERT(false, "%s: exited with reason %d: %s\n", 44 + TEST_ASSERT(false, "%s: exited with reason %d: %s", 45 45 __func__, run->exit_reason, 46 46 exit_reason_str(run->exit_reason)); 47 47 pthread_exit(NULL); ··· 55 55 fd = open("/dev/null", O_RDWR); 56 56 close(fd); 57 57 } 58 - TEST_ASSERT(false, "%s: exited\n", __func__); 58 + TEST_ASSERT(false, "%s: exited", __func__); 59 59 pthread_exit(NULL); 60 60 } 61 61 ··· 118 118 for (i = 0; i < VCPU_NUM; ++i) 119 119 check_join(threads[i], &b); 120 120 /* Should not be reached */ 121 - TEST_ASSERT(false, "%s: [%d] child escaped the ninja\n", __func__, run); 121 + TEST_ASSERT(false, "%s: [%d] child escaped the ninja", __func__, run); 122 122 } 123 123 124 124 void wait_for_child_setup(pid_t pid)
+2
tools/testing/selftests/kvm/include/test_util.h
··· 195 195 196 196 char *strdup_printf(const char *fmt, ...) __attribute__((format(printf, 1, 2), nonnull(1))); 197 197 198 + char *sys_get_cur_clocksource(void); 199 + 198 200 #endif /* SELFTEST_KVM_TEST_UTIL_H */
+2
tools/testing/selftests/kvm/include/x86_64/processor.h
··· 1271 1271 #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT) 1272 1272 #define PFERR_IMPLICIT_ACCESS BIT_ULL(PFERR_IMPLICIT_ACCESS_BIT) 1273 1273 1274 + bool sys_clocksource_is_based_on_tsc(void); 1275 + 1274 1276 #endif /* SELFTEST_KVM_PROCESSOR_H */
+1 -1
tools/testing/selftests/kvm/kvm_create_max_vcpus.c
··· 65 65 66 66 int r = setrlimit(RLIMIT_NOFILE, &rl); 67 67 __TEST_REQUIRE(r >= 0, 68 - "RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n", 68 + "RLIMIT_NOFILE hard limit is too low (%d, wanted %d)", 69 69 old_rlim_max, nr_fds_wanted); 70 70 } else { 71 71 TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
+2 -2
tools/testing/selftests/kvm/kvm_page_table_test.c
··· 204 204 ret = _vcpu_run(vcpu); 205 205 ts_diff = timespec_elapsed(start); 206 206 207 - TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); 207 + TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); 208 208 TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC, 209 - "Invalid guest sync status: exit_reason=%s\n", 209 + "Invalid guest sync status: exit_reason=%s", 210 210 exit_reason_str(vcpu->run->exit_reason)); 211 211 212 212 pr_debug("Got sync event from vCPU %d\n", vcpu->id);
+1 -1
tools/testing/selftests/kvm/lib/aarch64/processor.c
··· 398 398 int i; 399 399 400 400 TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n" 401 - " num: %u\n", num); 401 + " num: %u", num); 402 402 403 403 va_start(ap, num); 404 404
+2 -2
tools/testing/selftests/kvm/lib/aarch64/vgic.c
··· 38 38 struct list_head *iter; 39 39 unsigned int nr_gic_pages, nr_vcpus_created = 0; 40 40 41 - TEST_ASSERT(nr_vcpus, "Number of vCPUs cannot be empty\n"); 41 + TEST_ASSERT(nr_vcpus, "Number of vCPUs cannot be empty"); 42 42 43 43 /* 44 44 * Make sure that the caller is infact calling this ··· 47 47 list_for_each(iter, &vm->vcpus) 48 48 nr_vcpus_created++; 49 49 TEST_ASSERT(nr_vcpus == nr_vcpus_created, 50 - "Number of vCPUs requested (%u) doesn't match with the ones created for the VM (%u)\n", 50 + "Number of vCPUs requested (%u) doesn't match with the ones created for the VM (%u)", 51 51 nr_vcpus, nr_vcpus_created); 52 52 53 53 /* Distributor setup */
+1 -1
tools/testing/selftests/kvm/lib/elf.c
··· 184 184 "Seek to program segment offset failed,\n" 185 185 " program header idx: %u errno: %i\n" 186 186 " offset_rv: 0x%jx\n" 187 - " expected: 0x%jx\n", 187 + " expected: 0x%jx", 188 188 n1, errno, (intmax_t) offset_rv, 189 189 (intmax_t) phdr.p_offset); 190 190 test_read(fd, addr_gva2hva(vm, phdr.p_vaddr),
+10 -9
tools/testing/selftests/kvm/lib/kvm_util.c
··· 27 27 int fd; 28 28 29 29 fd = open(path, flags); 30 - __TEST_REQUIRE(fd >= 0, "%s not available (errno: %d)", path, errno); 30 + __TEST_REQUIRE(fd >= 0 || errno != ENOENT, "Cannot open %s: %s", path, strerror(errno)); 31 + TEST_ASSERT(fd >= 0, "Failed to open '%s'", path); 31 32 32 33 return fd; 33 34 } ··· 321 320 uint64_t nr_pages; 322 321 323 322 TEST_ASSERT(nr_runnable_vcpus, 324 - "Use vm_create_barebones() for VMs that _never_ have vCPUs\n"); 323 + "Use vm_create_barebones() for VMs that _never_ have vCPUs"); 325 324 326 325 TEST_ASSERT(nr_runnable_vcpus <= kvm_check_cap(KVM_CAP_MAX_VCPUS), 327 326 "nr_vcpus = %d too large for host, max-vcpus = %d", ··· 492 491 CPU_ZERO(&mask); 493 492 CPU_SET(pcpu, &mask); 494 493 r = sched_setaffinity(0, sizeof(mask), &mask); 495 - TEST_ASSERT(!r, "sched_setaffinity() failed for pCPU '%u'.\n", pcpu); 494 + TEST_ASSERT(!r, "sched_setaffinity() failed for pCPU '%u'.", pcpu); 496 495 } 497 496 498 497 static uint32_t parse_pcpu(const char *cpu_str, const cpu_set_t *allowed_mask) ··· 500 499 uint32_t pcpu = atoi_non_negative("CPU number", cpu_str); 501 500 502 501 TEST_ASSERT(CPU_ISSET(pcpu, allowed_mask), 503 - "Not allowed to run on pCPU '%d', check cgroups?\n", pcpu); 502 + "Not allowed to run on pCPU '%d', check cgroups?", pcpu); 504 503 return pcpu; 505 504 } ··· 530 529 int i, r; 531 530 532 531 cpu_list = strdup(pcpus_string); 533 - TEST_ASSERT(cpu_list, "strdup() allocation failed.\n"); 532 + TEST_ASSERT(cpu_list, "strdup() allocation failed."); 534 533 535 534 r = sched_getaffinity(0, sizeof(allowed_mask), &allowed_mask); 536 535 TEST_ASSERT(!r, "sched_getaffinity() failed"); ··· 539 538 540 539 /* 1. Get all pcpus for vcpus. */ 541 540 for (i = 0; i < nr_vcpus; i++) { 542 - TEST_ASSERT(cpu, "pCPU not provided for vCPU '%d'\n", i); 541 + TEST_ASSERT(cpu, "pCPU not provided for vCPU '%d'", i); 543 542 vcpu_to_pcpu[i] = parse_pcpu(cpu, &allowed_mask); 544 543 cpu = strtok(NULL, delim); 545 544 } ··· 1058 1057 TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n" 1059 1058 " rc: %i errno: %i\n" 1060 1059 " slot: %u flags: 0x%x\n" 1061 - " guest_phys_addr: 0x%lx size: 0x%lx guest_memfd: %d\n", 1060 + " guest_phys_addr: 0x%lx size: 0x%lx guest_memfd: %d", 1062 1061 ret, errno, slot, flags, 1063 1062 guest_paddr, (uint64_t) region->region.memory_size, 1064 1063 region->region.guest_memfd); ··· 1223 1222 len = min_t(uint64_t, end - gpa, region->region.memory_size - offset); 1224 1223 1225 1224 ret = fallocate(region->region.guest_memfd, mode, fd_offset, len); 1226 - TEST_ASSERT(!ret, "fallocate() failed to %s at %lx (len = %lu), fd = %d, mode = %x, offset = %lx\n", 1225 + TEST_ASSERT(!ret, "fallocate() failed to %s at %lx (len = %lu), fd = %d, mode = %x, offset = %lx", 1227 1226 punch_hole ? "punch hole" : "allocate", gpa, len, 1228 1227 region->region.guest_memfd, mode, fd_offset); 1229 1228 } ··· 1266 1265 struct kvm_vcpu *vcpu; 1267 1266 1268 1267 /* Confirm a vcpu with the specified id doesn't already exist. */ 1269 - TEST_ASSERT(!vcpu_exists(vm, vcpu_id), "vCPU%d already exists\n", vcpu_id); 1268 + TEST_ASSERT(!vcpu_exists(vm, vcpu_id), "vCPU%d already exists", vcpu_id); 1270 1269 /* Allocate and initialize new vcpu structure. */ 1272 1271 vcpu = calloc(1, sizeof(*vcpu));
+1 -1
tools/testing/selftests/kvm/lib/memstress.c
··· 192 192 TEST_ASSERT(guest_num_pages < region_end_gfn, 193 193 "Requested more guest memory than address space allows.\n" 194 194 " guest pages: %" PRIx64 " max gfn: %" PRIx64 195 - " nr_vcpus: %d wss: %" PRIx64 "]\n", 195 + " nr_vcpus: %d wss: %" PRIx64 "]", 196 196 guest_num_pages, region_end_gfn - 1, nr_vcpus, vcpu_memory_bytes); 197 197 198 198 args->gpa = (region_end_gfn - guest_num_pages - 1) * args->guest_page_size;
+1 -1
tools/testing/selftests/kvm/lib/riscv/processor.c
··· 327 327 int i; 328 328 329 329 TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n" 330 - " num: %u\n", num); 330 + " num: %u", num); 331 331 332 332 va_start(ap, num); 333 333
+1 -1
tools/testing/selftests/kvm/lib/s390x/processor.c
··· 198 198 int i; 199 199 200 200 TEST_ASSERT(num >= 1 && num <= 5, "Unsupported number of args,\n" 201 - " num: %u\n", 201 + " num: %u", 202 202 num); 203 203 204 204 va_start(ap, num);
+25
tools/testing/selftests/kvm/lib/test_util.c
··· 392 392 393 393 return str; 394 394 } 395 + 396 + #define CLOCKSOURCE_PATH "/sys/devices/system/clocksource/clocksource0/current_clocksource" 397 + 398 + char *sys_get_cur_clocksource(void) 399 + { 400 + char *clk_name; 401 + struct stat st; 402 + FILE *fp; 403 + 404 + fp = fopen(CLOCKSOURCE_PATH, "r"); 405 + TEST_ASSERT(fp, "failed to open clocksource file, errno: %d", errno); 406 + 407 + TEST_ASSERT(!fstat(fileno(fp), &st), "failed to stat clocksource file, errno: %d", 408 + errno); 409 + 410 + clk_name = malloc(st.st_size); 411 + TEST_ASSERT(clk_name, "failed to allocate buffer to read file"); 412 + 413 + TEST_ASSERT(fgets(clk_name, st.st_size, fp), "failed to read clocksource file: %d", 414 + ferror(fp)); 415 + 416 + fclose(fp); 417 + 418 + return clk_name; 419 + }
+1 -1
tools/testing/selftests/kvm/lib/userfaultfd_util.c
··· 69 69 if (pollfd[1].revents & POLLIN) { 70 70 r = read(pollfd[1].fd, &tmp_chr, 1); 71 71 TEST_ASSERT(r == 1, 72 - "Error reading pipefd in UFFD thread\n"); 72 + "Error reading pipefd in UFFD thread"); 73 73 break; 74 74 } 75 75
+16 -5
tools/testing/selftests/kvm/lib/x86_64/processor.c
··· 170 170 * this level. 171 171 */ 172 172 TEST_ASSERT(current_level != target_level, 173 - "Cannot create hugepage at level: %u, vaddr: 0x%lx\n", 173 + "Cannot create hugepage at level: %u, vaddr: 0x%lx", 174 174 current_level, vaddr); 175 175 TEST_ASSERT(!(*pte & PTE_LARGE_MASK), 176 - "Cannot create page table at level: %u, vaddr: 0x%lx\n", 176 + "Cannot create page table at level: %u, vaddr: 0x%lx", 177 177 current_level, vaddr); 178 178 } 179 179 return pte; ··· 220 220 /* Fill in page table entry. */ 221 221 pte = virt_get_pte(vm, pde, vaddr, PG_LEVEL_4K); 222 222 TEST_ASSERT(!(*pte & PTE_PRESENT_MASK), 223 - "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr); 223 + "PTE already present for 4k page at vaddr: 0x%lx", vaddr); 224 224 *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK); 225 225 } 226 226 ··· 253 253 if (*pte & PTE_LARGE_MASK) { 254 254 TEST_ASSERT(*level == PG_LEVEL_NONE || 255 255 *level == current_level, 256 - "Unexpected hugepage at level %d\n", current_level); 256 + "Unexpected hugepage at level %d", current_level); 257 257 *level = current_level; 258 258 } 259 259 ··· 825 825 struct kvm_regs regs; 826 826 827 827 TEST_ASSERT(num >= 1 && num <= 6, "Unsupported number of args,\n" 828 - " num: %u\n", 828 + " num: %u", 829 829 num); 830 830 831 831 va_start(ap, num); ··· 1298 1298 { 1299 1299 host_cpu_is_intel = this_cpu_is_intel(); 1300 1300 host_cpu_is_amd = this_cpu_is_amd(); 1301 + } 1302 + 1303 + bool sys_clocksource_is_based_on_tsc(void) 1304 + { 1305 + char *clk_name = sys_get_cur_clocksource(); 1306 + bool ret = !strcmp(clk_name, "tsc\n") || 1307 + !strcmp(clk_name, "hyperv_clocksource_tsc_page\n"); 1308 + 1309 + free(clk_name); 1310 + 1311 + return ret; 1301 1312 }
+3 -3
tools/testing/selftests/kvm/lib/x86_64/vmx.c
··· 54 54 /* KVM should return supported EVMCS version range */ 55 55 TEST_ASSERT(((evmcs_ver >> 8) >= (evmcs_ver & 0xff)) && 56 56 (evmcs_ver & 0xff) > 0, 57 - "Incorrect EVMCS version range: %x:%x\n", 57 + "Incorrect EVMCS version range: %x:%x", 58 58 evmcs_ver & 0xff, evmcs_ver >> 8); 59 59 60 60 return evmcs_ver; ··· 387 387 * this level. 388 388 */ 389 389 TEST_ASSERT(current_level != target_level, 390 - "Cannot create hugepage at level: %u, nested_paddr: 0x%lx\n", 390 + "Cannot create hugepage at level: %u, nested_paddr: 0x%lx", 391 391 current_level, nested_paddr); 392 392 TEST_ASSERT(!pte->page_size, 393 - "Cannot create page table at level: %u, nested_paddr: 0x%lx\n", 393 + "Cannot create page table at level: %u, nested_paddr: 0x%lx", 394 394 current_level, nested_paddr); 395 395 } 396 396 }
+1 -1
tools/testing/selftests/kvm/memslot_modification_stress_test.c
··· 45 45 /* Let the guest access its memory until a stop signal is received */ 46 46 while (!READ_ONCE(memstress_args.stop_vcpus)) { 47 47 ret = _vcpu_run(vcpu); 48 - TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); 48 + TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); 49 49 50 50 if (get_ucall(vcpu, NULL) == UCALL_SYNC) 51 51 continue;
+3 -3
tools/testing/selftests/kvm/memslot_perf_test.c
··· 175 175 struct timespec ts; 176 176 177 177 TEST_ASSERT(!clock_gettime(CLOCK_REALTIME, &ts), 178 - "clock_gettime() failed: %d\n", errno); 178 + "clock_gettime() failed: %d", errno); 179 179 180 180 ts.tv_sec += 2; 181 181 TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts), 182 - "sem_timedwait() failed: %d\n", errno); 182 + "sem_timedwait() failed: %d", errno); 183 183 } 184 184 185 185 static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages) ··· 336 336 337 337 gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr, slot); 338 338 TEST_ASSERT(gpa == guest_addr, 339 - "vm_phy_pages_alloc() failed\n"); 339 + "vm_phy_pages_alloc() failed"); 340 340 341 341 data->hva_slots[slot - 1] = addr_gpa2hva(data->vm, guest_addr); 342 342 memset(data->hva_slots[slot - 1], 0, npages * guest_page_size);
+1 -1
tools/testing/selftests/kvm/riscv/get-reg-list.c
··· 177 177 178 178 /* Double check whether the desired extension was enabled */ 179 179 __TEST_REQUIRE(vcpu_has_ext(vcpu, feature), 180 - "%s not available, skipping tests\n", s->name); 180 + "%s not available, skipping tests", s->name); 181 181 } 182 182 } 183 183
+2 -2
tools/testing/selftests/kvm/rseq_test.c
··· 245 245 } while (snapshot != atomic_read(&seq_cnt)); 246 246 247 247 TEST_ASSERT(rseq_cpu == cpu, 248 - "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu); 248 + "rseq CPU = %d, sched CPU = %d", rseq_cpu, cpu); 249 249 } 250 250 251 251 /* ··· 256 256 * migrations given the 1us+ delay in the migration task. 257 257 */ 258 258 TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2), 259 - "Only performed %d KVM_RUNs, task stalled too much?\n", i); 259 + "Only performed %d KVM_RUNs, task stalled too much?", i); 260 260 261 261 pthread_join(migration_thread, NULL); 262 262
+2 -2
tools/testing/selftests/kvm/s390x/resets.c
··· 78 78 * (notably, the emergency call interrupt we have injected) should 79 79 * be cleared by the resets, so this should be 0. 80 80 */ 81 - TEST_ASSERT(irqs >= 0, "Could not fetch IRQs: errno %d\n", errno); 81 + TEST_ASSERT(irqs >= 0, "Could not fetch IRQs: errno %d", errno); 82 82 TEST_ASSERT(!irqs, "IRQ pending"); 83 83 } 84 84 ··· 199 199 irq->type = KVM_S390_INT_EMERGENCY; 200 200 irq->u.emerg.code = vcpu->id; 201 201 irqs = __vcpu_ioctl(vcpu, KVM_S390_SET_IRQ_STATE, &irq_state); 202 - TEST_ASSERT(irqs >= 0, "Error injecting EMERGENCY IRQ errno %d\n", errno); 202 + TEST_ASSERT(irqs >= 0, "Error injecting EMERGENCY IRQ errno %d", errno); 203 203 } 204 204 205 205 static struct kvm_vm *create_vm(struct kvm_vcpu **vcpu)
+10 -10
tools/testing/selftests/kvm/s390x/sync_regs_test.c
··· 39 39 #define REG_COMPARE(reg) \ 40 40 TEST_ASSERT(left->reg == right->reg, \ 41 41 "Register " #reg \ 42 - " values did not match: 0x%llx, 0x%llx\n", \ 42 + " values did not match: 0x%llx, 0x%llx", \ 43 43 left->reg, right->reg) 44 44 45 45 #define REG_COMPARE32(reg) \ 46 46 TEST_ASSERT(left->reg == right->reg, \ 47 47 "Register " #reg \ 48 - " values did not match: 0x%x, 0x%x\n", \ 48 + " values did not match: 0x%x, 0x%x", \ 49 49 left->reg, right->reg) 50 50 51 51 ··· 82 82 run->kvm_valid_regs = INVALID_SYNC_FIELD; 83 83 rv = _vcpu_run(vcpu); 84 84 TEST_ASSERT(rv < 0 && errno == EINVAL, 85 - "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d\n", 85 + "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d", 86 86 rv); 87 87 run->kvm_valid_regs = 0; 88 88 89 89 run->kvm_valid_regs = INVALID_SYNC_FIELD | TEST_SYNC_FIELDS; 90 90 rv = _vcpu_run(vcpu); 91 91 TEST_ASSERT(rv < 0 && errno == EINVAL, 92 - "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d\n", 92 + "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d", 93 93 rv); 94 94 run->kvm_valid_regs = 0; 95 95 } ··· 103 103 run->kvm_dirty_regs = INVALID_SYNC_FIELD; 104 104 rv = _vcpu_run(vcpu); 105 105 TEST_ASSERT(rv < 0 && errno == EINVAL, 106 - "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d\n", 106 + "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d", 107 107 rv); 108 108 run->kvm_dirty_regs = 0; 109 109 110 110 run->kvm_dirty_regs = INVALID_SYNC_FIELD | TEST_SYNC_FIELDS; 111 111 rv = _vcpu_run(vcpu); 112 112 TEST_ASSERT(rv < 0 && errno == EINVAL, 113 - "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d\n", 113 + "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d", 114 114 rv); 115 115 run->kvm_dirty_regs = 0; 116 116 } ··· 125 125 /* Request and verify all valid register sets. */ 126 126 run->kvm_valid_regs = TEST_SYNC_FIELDS; 127 127 rv = _vcpu_run(vcpu); 128 - TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv); 128 + TEST_ASSERT(rv == 0, "vcpu_run failed: %d", rv); 129 129 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); 130 130 TEST_ASSERT(run->s390_sieic.icptcode == 4 && 131 131 (run->s390_sieic.ipa >> 8) == 0x83 && 132 132 (run->s390_sieic.ipb >> 16) == 0x501, 133 - "Unexpected interception code: ic=%u, ipa=0x%x, ipb=0x%x\n", 133 + "Unexpected interception code: ic=%u, ipa=0x%x, ipb=0x%x", 134 134 run->s390_sieic.icptcode, run->s390_sieic.ipa, 135 135 run->s390_sieic.ipb); ··· 161 161 } 162 162 163 163 rv = _vcpu_run(vcpu); 164 - TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv); 164 + TEST_ASSERT(rv == 0, "vcpu_run failed: %d", rv); 165 165 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); 166 166 TEST_ASSERT(run->s.regs.gprs[11] == 0xBAD1DEA + 1, 167 167 "r11 sync regs value incorrect 0x%llx.", ··· 193 193 run->s.regs.gprs[11] = 0xDEADBEEF; 194 194 run->s.regs.diag318 = 0x4B1D; 195 195 rv = _vcpu_run(vcpu); 196 - TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv); 196 + TEST_ASSERT(rv == 0, "vcpu_run failed: %d", rv); 197 197 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); 198 198 TEST_ASSERT(run->s.regs.gprs[11] != 0xDEADBEEF, 199 199 "r11 sync regs value incorrect 0x%llx.",
+3 -3
tools/testing/selftests/kvm/set_memory_region_test.c
··· 98 98 struct timespec ts; 99 99 100 100 TEST_ASSERT(!clock_gettime(CLOCK_REALTIME, &ts), 101 - "clock_gettime() failed: %d\n", errno); 101 + "clock_gettime() failed: %d", errno); 102 102 103 103 ts.tv_sec += 2; 104 104 TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts), 105 - "sem_timedwait() failed: %d\n", errno); 105 + "sem_timedwait() failed: %d", errno); 106 106 107 107 /* Wait for the vCPU thread to reenter the guest. */ 108 108 usleep(100000); ··· 302 302 if (run->exit_reason == KVM_EXIT_INTERNAL_ERROR) 303 303 TEST_ASSERT(regs.rip >= final_rip_start && 304 304 regs.rip < final_rip_end, 305 - "Bad rip, expected 0x%lx - 0x%lx, got 0x%llx\n", 305 + "Bad rip, expected 0x%lx - 0x%lx, got 0x%llx", 306 306 final_rip_start, final_rip_end, regs.rip); 307 307 308 308 kvm_vm_free(vm);
+1 -1
tools/testing/selftests/kvm/system_counter_offset_test.c
··· 108 108 handle_abort(&uc); 109 109 return; 110 110 default: 111 - TEST_ASSERT(0, "unhandled ucall %ld\n", 111 + TEST_ASSERT(0, "unhandled ucall %ld", 112 112 get_ucall(vcpu, &uc)); 113 113 } 114 114 }
+3 -3
tools/testing/selftests/kvm/x86_64/amx_test.c
··· 221 221 vm_vaddr_t amx_cfg, tiledata, xstate; 222 222 struct ucall uc; 223 223 u32 amx_offset; 224 - int stage, ret; 224 + int ret; 225 225 226 226 /* 227 227 * Note, all off-by-default features must be enabled before anything ··· 263 263 memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE)); 264 264 vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate); 265 265 266 - for (stage = 1; ; stage++) { 266 + for (;;) { 267 267 vcpu_run(vcpu); 268 268 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); 269 269 ··· 296 296 void *tiles_data = (void *)addr_gva2hva(vm, tiledata); 297 297 /* Only check TMM0 register, 1 tile */ 298 298 ret = memcmp(amx_start, tiles_data, TILE_SIZE); 299 - TEST_ASSERT(ret == 0, "memcmp failed, ret=%d\n", ret); 299 + TEST_ASSERT(ret == 0, "memcmp failed, ret=%d", ret); 300 300 kvm_x86_state_cleanup(state); 301 301 break; 302 302 case 9:
+2 -2
tools/testing/selftests/kvm/x86_64/cpuid_test.c
··· 84 84 85 85 TEST_ASSERT(e1->function == e2->function && 86 86 e1->index == e2->index && e1->flags == e2->flags, 87 - "CPUID entries[%d] mismtach: 0x%x.%d.%x vs. 0x%x.%d.%x\n", 87 + "CPUID entries[%d] mismtach: 0x%x.%d.%x vs. 0x%x.%d.%x", 88 88 i, e1->function, e1->index, e1->flags, 89 89 e2->function, e2->index, e2->flags); 90 90 ··· 170 170 171 171 vcpu_ioctl(vcpu, KVM_GET_CPUID2, cpuid); 172 172 TEST_ASSERT(cpuid->nent == vcpu->cpuid->nent, 173 - "KVM didn't update nent on success, wanted %u, got %u\n", 173 + "KVM didn't update nent on success, wanted %u, got %u", 174 174 vcpu->cpuid->nent, cpuid->nent); 175 175 176 176 for (i = 0; i < vcpu->cpuid->nent; i++) {
+12 -9
tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
··· 92 92 uint64_t host_num_pages; 93 93 uint64_t pages_per_slot; 94 94 int i; 95 - uint64_t total_4k_pages; 96 95 struct kvm_page_stats stats_populated; 97 96 struct kvm_page_stats stats_dirty_logging_enabled; 98 97 struct kvm_page_stats stats_dirty_pass[ITERATIONS]; ··· 106 107 guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages); 107 108 host_num_pages = vm_num_host_pages(mode, guest_num_pages); 108 109 pages_per_slot = host_num_pages / SLOTS; 110 + TEST_ASSERT_EQ(host_num_pages, pages_per_slot * SLOTS); 111 + TEST_ASSERT(!(host_num_pages % 512), 112 + "Number of pages, '%lu' not a multiple of 2MiB", host_num_pages); 109 113 110 114 bitmaps = memstress_alloc_bitmaps(SLOTS, pages_per_slot); 111 115 ··· 167 165 memstress_free_bitmaps(bitmaps, SLOTS); 168 166 memstress_destroy_vm(vm); 169 167 170 - /* Make assertions about the page counts. */ 171 - total_4k_pages = stats_populated.pages_4k; 172 - total_4k_pages += stats_populated.pages_2m * 512; 173 - total_4k_pages += stats_populated.pages_1g * 512 * 512; 168 + TEST_ASSERT_EQ((stats_populated.pages_2m * 512 + 169 + stats_populated.pages_1g * 512 * 512), host_num_pages); 174 170 175 171 /* 176 172 * Check that all huge pages were split. Since large pages can only ··· 180 180 */ 181 181 if (dirty_log_manual_caps) { 182 182 TEST_ASSERT_EQ(stats_clear_pass[0].hugepages, 0); 183 - TEST_ASSERT_EQ(stats_clear_pass[0].pages_4k, total_4k_pages); 183 + TEST_ASSERT(stats_clear_pass[0].pages_4k >= host_num_pages, 184 + "Expected at least '%lu' 4KiB pages, found only '%lu'", 185 + host_num_pages, stats_clear_pass[0].pages_4k); 184 186 TEST_ASSERT_EQ(stats_dirty_logging_enabled.hugepages, stats_populated.hugepages); 185 187 } else { 186 188 TEST_ASSERT_EQ(stats_dirty_logging_enabled.hugepages, 0); 187 - TEST_ASSERT_EQ(stats_dirty_logging_enabled.pages_4k, total_4k_pages); 189 + TEST_ASSERT(stats_dirty_logging_enabled.pages_4k >= host_num_pages, 190 + "Expected at least '%lu' 4KiB pages, found only '%lu'", 191 + host_num_pages, stats_dirty_logging_enabled.pages_4k); 188 192 } 189 193 190 194 /* 191 195 * Once dirty logging is disabled and the vCPUs have touched all their 192 - * memory again, the page counts should be the same as they were 196 + * memory again, the hugepage counts should be the same as they were 193 197 * right after initial population of memory. 194 198 */ 195 - TEST_ASSERT_EQ(stats_populated.pages_4k, stats_repopulated.pages_4k); 196 199 TEST_ASSERT_EQ(stats_populated.pages_2m, stats_repopulated.pages_2m); 197 200 TEST_ASSERT_EQ(stats_populated.pages_1g, stats_repopulated.pages_1g); 198 201 }
+1 -1
tools/testing/selftests/kvm/x86_64/flds_emulation.h
··· 41 41 42 42 insn_bytes = run->emulation_failure.insn_bytes; 43 43 TEST_ASSERT(insn_bytes[0] == 0xd9 && insn_bytes[1] == 0, 44 - "Expected 'flds [eax]', opcode '0xd9 0x00', got opcode 0x%02x 0x%02x\n", 44 + "Expected 'flds [eax]', opcode '0xd9 0x00', got opcode 0x%02x 0x%02x", 45 45 insn_bytes[0], insn_bytes[1]); 46 46 47 47 vcpu_regs_get(vcpu, &regs);
+3 -2
tools/testing/selftests/kvm/x86_64/hyperv_clock.c
··· 212 212 int stage; 213 213 214 214 TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_TIME)); 215 + TEST_REQUIRE(sys_clocksource_is_based_on_tsc()); 215 216 216 217 vm = vm_create_with_one_vcpu(&vcpu, guest_main); 217 218 ··· 221 220 tsc_page_gva = vm_vaddr_alloc_page(vm); 222 221 memset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize()); 223 222 TEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0, 224 - "TSC page has to be page aligned\n"); 223 + "TSC page has to be page aligned"); 225 224 vcpu_args_set(vcpu, 2, tsc_page_gva, addr_gva2gpa(vm, tsc_page_gva)); 226 225 227 226 host_check_tsc_msr_rdtsc(vcpu); ··· 238 237 break; 239 238 case UCALL_DONE: 240 239 /* Keep in sync with guest_main() */ 241 - TEST_ASSERT(stage == 11, "Testing ended prematurely, stage %d\n", 240 + TEST_ASSERT(stage == 11, "Testing ended prematurely, stage %d", 242 241 stage); 243 242 goto out; 244 243 default:
+5 -4
tools/testing/selftests/kvm/x86_64/hyperv_features.c
··· 454 454 case 44: 455 455 /* MSR is not available when CPUID feature bit is unset */ 456 456 if (!has_invtsc) 457 - continue; 457 + goto next_stage; 458 458 msr->idx = HV_X64_MSR_TSC_INVARIANT_CONTROL; 459 459 msr->write = false; 460 460 msr->fault_expected = true; ··· 462 462 case 45: 463 463 /* MSR is vailable when CPUID feature bit is set */ 464 464 if (!has_invtsc) 465 - continue; 465 + goto next_stage; 466 466 vcpu_set_cpuid_feature(vcpu, HV_ACCESS_TSC_INVARIANT); 467 467 msr->idx = HV_X64_MSR_TSC_INVARIANT_CONTROL; 468 468 msr->write = false; ··· 471 471 case 46: 472 472 /* Writing bits other than 0 is forbidden */ 473 473 if (!has_invtsc) 474 - continue; 474 + goto next_stage; 475 475 msr->idx = HV_X64_MSR_TSC_INVARIANT_CONTROL; 476 476 msr->write = true; 477 477 msr->write_val = 0xdeadbeef; ··· 480 480 case 47: 481 481 /* Setting bit 0 enables the feature */ 482 482 if (!has_invtsc) 483 - continue; 483 + goto next_stage; 484 484 msr->idx = HV_X64_MSR_TSC_INVARIANT_CONTROL; 485 485 msr->write = true; 486 486 msr->write_val = 1; ··· 513 513 return; 514 514 } 515 515 516 + next_stage: 516 517 stage++; 517 518 kvm_vm_free(vm); 518 519 }
+1 -1
tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
··· 289 289 switch (get_ucall(vcpu[0], &uc)) { 290 290 case UCALL_SYNC: 291 291 TEST_ASSERT(uc.args[1] == stage, 292 - "Unexpected stage: %ld (%d expected)\n", 292 + "Unexpected stage: %ld (%d expected)", 293 293 uc.args[1], stage); 294 294 break; 295 295 case UCALL_DONE:
+1 -1
tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
··· 658 658 switch (get_ucall(vcpu[0], &uc)) { 659 659 case UCALL_SYNC: 660 660 TEST_ASSERT(uc.args[1] == stage, 661 - "Unexpected stage: %ld (%d expected)\n", 661 + "Unexpected stage: %ld (%d expected)", 662 662 uc.args[1], stage); 663 663 break; 664 664 case UCALL_ABORT:
+3 -39
tools/testing/selftests/kvm/x86_64/kvm_clock_test.c
··· 92 92 break; 93 93 } while (errno == EINTR); 94 94 95 - TEST_ASSERT(!r, "clock_gettime() failed: %d\n", r); 95 + TEST_ASSERT(!r, "clock_gettime() failed: %d", r); 96 96 97 97 data.realtime = ts.tv_sec * NSEC_PER_SEC; 98 98 data.realtime += ts.tv_nsec; ··· 127 127 handle_abort(&uc); 128 128 return; 129 129 default: 130 - TEST_ASSERT(0, "unhandled ucall: %ld\n", uc.cmd); 130 + TEST_ASSERT(0, "unhandled ucall: %ld", uc.cmd); 131 131 } 132 132 } 133 - } 134 - 135 - #define CLOCKSOURCE_PATH "/sys/devices/system/clocksource/clocksource0/current_clocksource" 136 - 137 - static void check_clocksource(void) 138 - { 139 - char *clk_name; 140 - struct stat st; 141 - FILE *fp; 142 - 143 - fp = fopen(CLOCKSOURCE_PATH, "r"); 144 - if (!fp) { 145 - pr_info("failed to open clocksource file: %d; assuming TSC.\n", 146 - errno); 147 - return; 148 - } 149 - 150 - if (fstat(fileno(fp), &st)) { 151 - pr_info("failed to stat clocksource file: %d; assuming TSC.\n", 152 - errno); 153 - goto out; 154 - } 155 - 156 - clk_name = malloc(st.st_size); 157 - TEST_ASSERT(clk_name, "failed to allocate buffer to read file\n"); 158 - 159 - if (!fgets(clk_name, st.st_size, fp)) { 160 - pr_info("failed to read clocksource file: %d; assuming TSC.\n", 161 - ferror(fp)); 162 - goto out; 163 - } 164 - 165 - TEST_ASSERT(!strncmp(clk_name, "tsc\n", st.st_size), 166 - "clocksource not supported: %s", clk_name); 167 - out: 168 - fclose(fp); 169 133 } 170 134 171 135 int main(void) ··· 143 179 flags = kvm_check_cap(KVM_CAP_ADJUST_CLOCK); 144 180 TEST_REQUIRE(flags & KVM_CLOCK_REALTIME); 145 181 146 - check_clocksource(); 182 + TEST_REQUIRE(sys_clocksource_is_based_on_tsc()); 147 183 148 184 vm = vm_create_with_one_vcpu(&vcpu, guest_main); 149 185
+3 -3
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
··· 257 257 TEST_REQUIRE(kvm_has_cap(KVM_CAP_VM_DISABLE_NX_HUGE_PAGES)); 258 258 259 259 __TEST_REQUIRE(token == MAGIC_TOKEN, 260 - "This test must be run with the magic token %d.\n" 261 - "This is done by nx_huge_pages_test.sh, which\n" 262 - "also handles environment setup for the test.", MAGIC_TOKEN); 260 + "This test must be run with the magic token via '-t %d'.\n" 261 + "Running via nx_huge_pages_test.sh, which also handles " 262 + "environment setup, is strongly recommended.", MAGIC_TOKEN); 263 263 264 264 run_test(reclaim_period_ms, false, reboot_permissions); 265 265 run_test(reclaim_period_ms, true, reboot_permissions);
+1 -1
tools/testing/selftests/kvm/x86_64/platform_info_test.c
··· 44 44 45 45 get_ucall(vcpu, &uc); 46 46 TEST_ASSERT(uc.cmd == UCALL_SYNC, 47 - "Received ucall other than UCALL_SYNC: %lu\n", uc.cmd); 47 + "Received ucall other than UCALL_SYNC: %lu", uc.cmd); 48 48 TEST_ASSERT((uc.args[1] & MSR_PLATFORM_INFO_MAX_TURBO_RATIO) == 49 49 MSR_PLATFORM_INFO_MAX_TURBO_RATIO, 50 50 "Expected MSR_PLATFORM_INFO to have max turbo ratio mask: %i.",
+1 -1
tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
··· 866 866 * userspace doesn't set any pmu filter. 867 867 */ 868 868 count = run_vcpu_to_sync(vcpu); 869 - TEST_ASSERT(count, "Unexpected count value: %ld\n", count); 869 + TEST_ASSERT(count, "Unexpected count value: %ld", count); 870 870 871 871 for (i = 0; i < BIT(nr_fixed_counters); i++) { 872 872 bitmap = BIT(i);
+14 -14
tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c
··· 91 91 int ret; 92 92 93 93 ret = __sev_migrate_from(dst, src); 94 - TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno); 94 + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d", ret, errno); 95 95 96 96 static void test_sev_migrate_from(bool es) ··· 113 113 /* Migrate the guest back to the original VM. */ 114 114 ret = __sev_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]); 115 115 TEST_ASSERT(ret == -1 && errno == EIO, 116 - "VM that was migrated from should be dead. ret %d, errno: %d\n", ret, 116 + "VM that was migrated from should be dead. ret %d, errno: %d", ret, 117 117 errno); 118 118 119 119 kvm_vm_free(src_vm); ··· 172 172 vm_no_sev = aux_vm_create(true); 173 173 ret = __sev_migrate_from(vm_no_vcpu, vm_no_sev); 174 174 TEST_ASSERT(ret == -1 && errno == EINVAL, 175 - "Migrations require SEV enabled. ret %d, errno: %d\n", ret, 175 + "Migrations require SEV enabled. ret %d, errno: %d", ret, 176 176 errno); 177 177 178 178 if (!have_sev_es) ··· 187 187 ret = __sev_migrate_from(sev_vm, sev_es_vm); 188 188 TEST_ASSERT( 189 189 ret == -1 && errno == EINVAL, 190 - "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n", 190 + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d", 191 191 ret, errno); 192 192 193 193 ret = __sev_migrate_from(sev_es_vm, sev_vm); 194 194 TEST_ASSERT( 195 195 ret == -1 && errno == EINVAL, 196 - "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n", 196 + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d", 197 197 ret, errno); 198 198 199 199 ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm); 200 200 TEST_ASSERT( 201 201 ret == -1 && errno == EINVAL, 202 - "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n", 202 + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d", 203 203 ret, errno); 204 204 205 205 ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa); 206 206 TEST_ASSERT( 207 207 ret == -1 && errno == EINVAL, 208 - "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n", 208 + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d", 209 209 ret, errno); 210 210 211 211 kvm_vm_free(sev_vm); ··· 227 227 int ret; 228 228 229 229 ret = __sev_mirror_create(dst, src); 230 - TEST_ASSERT(!ret, "Copying context failed, ret: %d, errno: %d\n", ret, errno); 230 + TEST_ASSERT(!ret, "Copying context failed, ret: %d, errno: %d", ret, errno); 231 231 232 232 static void verify_mirror_allowed_cmds(int vm_fd) ··· 259 259 ret = __sev_ioctl(vm_fd, cmd_id, NULL, &fw_error); 260 260 TEST_ASSERT( 261 261 ret == -1 && errno == EINVAL, 262 - "Should not be able call command: %d. ret: %d, errno: %d\n", 262 + "Should not be able call command: %d. ret: %d, errno: %d", 263 263 cmd_id, ret, errno); 264 264 265 265 ··· 301 301 ret = __sev_mirror_create(sev_vm, sev_vm); 302 302 TEST_ASSERT( 303 303 ret == -1 && errno == EINVAL, 304 - "Should not be able copy context to self. ret: %d, errno: %d\n", 304 + "Should not be able copy context to self. ret: %d, errno: %d", 305 305 ret, errno); 306 306 307 307 ret = __sev_mirror_create(vm_no_vcpu, vm_with_vcpu); 308 308 TEST_ASSERT(ret == -1 && errno == EINVAL, 309 - "Copy context requires SEV enabled. ret %d, errno: %d\n", ret, 309 + "Copy context requires SEV enabled. ret %d, errno: %d", ret, 310 310 errno); 311 311 312 312 ret = __sev_mirror_create(vm_with_vcpu, sev_vm); 313 313 TEST_ASSERT( 314 314 ret == -1 && errno == EINVAL, 315 - "SEV copy context requires no vCPUS on the destination. ret: %d, errno: %d\n", 315 + "SEV copy context requires no vCPUS on the destination. ret: %d, errno: %d", 316 316 ret, errno); 317 317 318 318 if (!have_sev_es) ··· 322 322 ret = __sev_mirror_create(sev_vm, sev_es_vm); 323 323 TEST_ASSERT( 324 324 ret == -1 && errno == EINVAL, 325 - "Should not be able copy context to SEV enabled VM. ret: %d, errno: %d\n", 325 + "Should not be able copy context to SEV enabled VM. ret: %d, errno: %d", 326 326 ret, errno); 327 327 328 328 ret = __sev_mirror_create(sev_es_vm, sev_vm); 329 329 TEST_ASSERT( 330 330 ret == -1 && errno == EINVAL, 331 - "Should not be able copy context to SEV-ES enabled VM. ret: %d, errno: %d\n", 331 + "Should not be able copy context to SEV-ES enabled VM. ret: %d, errno: %d", 332 332 ret, errno); 333 333 334 334 kvm_vm_free(sev_es_vm);
+2 -2
tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
··· 74 74 MEM_REGION_SIZE / PAGE_SIZE, 0); 75 75 gpa = vm_phy_pages_alloc(vm, MEM_REGION_SIZE / PAGE_SIZE, 76 76 MEM_REGION_GPA, MEM_REGION_SLOT); 77 - TEST_ASSERT(gpa == MEM_REGION_GPA, "Failed vm_phy_pages_alloc\n"); 77 + TEST_ASSERT(gpa == MEM_REGION_GPA, "Failed vm_phy_pages_alloc"); 78 78 virt_map(vm, MEM_REGION_GVA, MEM_REGION_GPA, 1); 79 79 hva = addr_gpa2hva(vm, MEM_REGION_GPA); 80 80 memset(hva, 0, PAGE_SIZE); ··· 102 102 case UCALL_DONE: 103 103 break; 104 104 default: 105 - TEST_FAIL("Unrecognized ucall: %lu\n", uc.cmd); 105 + TEST_FAIL("Unrecognized ucall: %lu", uc.cmd); 106 106 } 107 107 108 108 kvm_vm_free(vm);
+5 -5
tools/testing/selftests/kvm/x86_64/sync_regs_test.c
··· 46 46 #define REG_COMPARE(reg) \ 47 47 TEST_ASSERT(left->reg == right->reg, \ 48 48 "Register " #reg \ 49 - " values did not match: 0x%llx, 0x%llx\n", \ 49 + " values did not match: 0x%llx, 0x%llx", \ 50 50 left->reg, right->reg) 51 51 REG_COMPARE(rax); 52 52 REG_COMPARE(rbx); ··· 230 230 run->kvm_valid_regs = INVALID_SYNC_FIELD; 231 231 rv = _vcpu_run(vcpu); 232 232 TEST_ASSERT(rv < 0 && errno == EINVAL, 233 - "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d\n", 233 + "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d", 234 234 rv); 235 235 run->kvm_valid_regs = 0; 236 236 237 237 run->kvm_valid_regs = INVALID_SYNC_FIELD | TEST_SYNC_FIELDS; 238 238 rv = _vcpu_run(vcpu); 239 239 TEST_ASSERT(rv < 0 && errno == EINVAL, 240 - "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d\n", 240 + "Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d", 241 241 rv); 242 242 run->kvm_valid_regs = 0; 243 243 ··· 245 245 run->kvm_dirty_regs = INVALID_SYNC_FIELD; 246 246 rv = _vcpu_run(vcpu); 247 247 TEST_ASSERT(rv < 0 && errno == EINVAL, 248 - "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d\n", 248 + "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d", 249 249 rv); 250 250 run->kvm_dirty_regs = 0; 251 251 252 252 run->kvm_dirty_regs = INVALID_SYNC_FIELD | TEST_SYNC_FIELDS; 253 253 rv = _vcpu_run(vcpu); 254 254 TEST_ASSERT(rv < 0 && errno == EINVAL, 255 - "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d\n", 255 + "Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d", 256 256 rv); 257 257 run->kvm_dirty_regs = 0; 258 258
+4 -4
tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
··· 143 143 144 144 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); 145 145 TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC, 146 - "Expect UCALL_SYNC\n"); 146 + "Expect UCALL_SYNC"); 147 147 TEST_ASSERT(uc.args[1] == SYNC_GP, "#GP is expected."); 148 148 printf("vCPU received GP in guest.\n"); 149 149 } ··· 188 188 189 189 TEST_ASSERT_KVM_EXIT_REASON(params->vcpu, KVM_EXIT_IO); 190 190 TEST_ASSERT(get_ucall(params->vcpu, &uc) == UCALL_SYNC, 191 - "Expect UCALL_SYNC\n"); 191 + "Expect UCALL_SYNC"); 192 192 TEST_ASSERT(uc.args[1] == SYNC_FIRST_UCNA, "Injecting first UCNA."); 193 193 194 194 printf("Injecting first UCNA at %#x.\n", FIRST_UCNA_ADDR); ··· 198 198 199 199 TEST_ASSERT_KVM_EXIT_REASON(params->vcpu, KVM_EXIT_IO); 200 200 TEST_ASSERT(get_ucall(params->vcpu, &uc) == UCALL_SYNC, 201 - "Expect UCALL_SYNC\n"); 201 + "Expect UCALL_SYNC"); 202 202 TEST_ASSERT(uc.args[1] == SYNC_SECOND_UCNA, "Injecting second UCNA."); 203 203 204 204 printf("Injecting second UCNA at %#x.\n", SECOND_UCNA_ADDR); ··· 208 208 209 209 TEST_ASSERT_KVM_EXIT_REASON(params->vcpu, KVM_EXIT_IO); 210 210 if (get_ucall(params->vcpu, &uc) == UCALL_ABORT) { 211 - TEST_ASSERT(false, "vCPU assertion failure: %s.\n", 211 + TEST_ASSERT(false, "vCPU assertion failure: %s.", 212 212 (const char *)uc.args[0]); 213 213 } 214 214
+1 -1
tools/testing/selftests/kvm/x86_64/userspace_io_test.c
··· 71 71 break; 72 72 73 73 TEST_ASSERT(run->io.port == 0x80, 74 - "Expected I/O at port 0x80, got port 0x%x\n", run->io.port); 74 + "Expected I/O at port 0x80, got port 0x%x", run->io.port); 75 75 76 76 /* 77 77 * Modify the rep string count in RCX: 2 => 1 and 3 => 8192.
+1 -1
tools/testing/selftests/kvm/x86_64/vmx_apic_access_test.c
··· 99 99 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR); 100 100 TEST_ASSERT(run->internal.suberror == 101 101 KVM_INTERNAL_ERROR_EMULATION, 102 - "Got internal suberror other than KVM_INTERNAL_ERROR_EMULATION: %u\n", 102 + "Got internal suberror other than KVM_INTERNAL_ERROR_EMULATION: %u", 103 103 run->internal.suberror); 104 104 break; 105 105 }
+8 -8
tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c
··· 128 128 */ 129 129 kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap); 130 130 if (uc.args[1]) { 131 - TEST_ASSERT(test_bit(0, bmap), "Page 0 incorrectly reported clean\n"); 132 - TEST_ASSERT(host_test_mem[0] == 1, "Page 0 not written by guest\n"); 131 + TEST_ASSERT(test_bit(0, bmap), "Page 0 incorrectly reported clean"); 132 + TEST_ASSERT(host_test_mem[0] == 1, "Page 0 not written by guest"); 133 133 } else { 134 - TEST_ASSERT(!test_bit(0, bmap), "Page 0 incorrectly reported dirty\n"); 135 - TEST_ASSERT(host_test_mem[0] == 0xaaaaaaaaaaaaaaaaULL, "Page 0 written by guest\n"); 134 + TEST_ASSERT(!test_bit(0, bmap), "Page 0 incorrectly reported dirty"); 135 + TEST_ASSERT(host_test_mem[0] == 0xaaaaaaaaaaaaaaaaULL, "Page 0 written by guest"); 136 136 } 137 137 138 - TEST_ASSERT(!test_bit(1, bmap), "Page 1 incorrectly reported dirty\n"); 139 - TEST_ASSERT(host_test_mem[4096 / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 1 written by guest\n"); 140 - TEST_ASSERT(!test_bit(2, bmap), "Page 2 incorrectly reported dirty\n"); 141 - TEST_ASSERT(host_test_mem[8192 / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 2 written by guest\n"); 138 + TEST_ASSERT(!test_bit(1, bmap), "Page 1 incorrectly reported dirty"); 139 + TEST_ASSERT(host_test_mem[4096 / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 1 written by guest"); 140 + TEST_ASSERT(!test_bit(2, bmap), "Page 2 incorrectly reported dirty"); 141 + TEST_ASSERT(host_test_mem[8192 / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 2 written by guest"); 142 142 break; 143 143 case UCALL_DONE: 144 144 done = true;
+1 -1
tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
··· 28 28 29 29 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR); 30 30 TEST_ASSERT(run->emulation_failure.suberror == KVM_INTERNAL_ERROR_EMULATION, 31 - "Expected emulation failure, got %d\n", 31 + "Expected emulation failure, got %d", 32 32 run->emulation_failure.suberror); 33 33 } 34 34
+1 -18
tools/testing/selftests/kvm/x86_64/vmx_nested_tsc_scaling_test.c
··· 116 116 GUEST_DONE(); 117 117 } 118 118 119 - static bool system_has_stable_tsc(void) 120 - { 121 - bool tsc_is_stable; 122 - FILE *fp; 123 - char buf[4]; 124 - 125 - fp = fopen("/sys/devices/system/clocksource/clocksource0/current_clocksource", "r"); 126 - if (fp == NULL) 127 - return false; 128 - 129 - tsc_is_stable = fgets(buf, sizeof(buf), fp) && 130 - !strncmp(buf, "tsc", sizeof(buf)); 131 - 132 - fclose(fp); 133 - return tsc_is_stable; 134 - } 135 - 136 119 int main(int argc, char *argv[]) 137 120 { 138 121 struct kvm_vcpu *vcpu; ··· 131 148 132 149 TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX)); 133 150 TEST_REQUIRE(kvm_has_cap(KVM_CAP_TSC_CONTROL)); 134 - TEST_REQUIRE(system_has_stable_tsc()); 151 + TEST_REQUIRE(sys_clocksource_is_based_on_tsc()); 135 152 136 153 /* 137 154 * We set L1's scale factor to be a random number from 2 to 10.
+4 -4
tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
··· 216 216 "Halting vCPU halted %lu times, woke %lu times, received %lu IPIs.\n" 217 217 "Halter TPR=%#x PPR=%#x LVR=%#x\n" 218 218 "Migrations attempted: %lu\n" 219 - "Migrations completed: %lu\n", 219 + "Migrations completed: %lu", 220 220 vcpu->id, (const char *)uc.args[0], 221 221 params->data->ipis_sent, params->data->hlt_count, 222 222 params->data->wake_count, ··· 288 288 } 289 289 290 290 TEST_ASSERT(nodes > 1, 291 - "Did not find at least 2 numa nodes. Can't do migration\n"); 291 + "Did not find at least 2 numa nodes. Can't do migration"); 292 292 293 293 fprintf(stderr, "Migrating amongst %d nodes found\n", nodes); 294 294 ··· 347 347 wake_count != data->wake_count, 348 348 "IPI, HLT and wake count have not increased " 349 349 "in the last %lu seconds. " 350 - "HLTer is likely hung.\n", interval_secs); 350 + "HLTer is likely hung.", interval_secs); 351 351 352 352 ipis_sent = data->ipis_sent; 353 353 hlt_count = data->hlt_count; ··· 381 381 "-m adds calls to migrate_pages while vCPUs are running." 382 382 " Default is no migrations.\n" 383 383 "-d <delay microseconds> - delay between migrate_pages() calls." 384 - " Default is %d microseconds.\n", 384 + " Default is %d microseconds.", 385 385 DEFAULT_RUN_SECS, DEFAULT_DELAY_USECS); 386 386 } 387 387 }
+1 -1
tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
··· 116 116 vcpu_run(vcpu); 117 117 118 118 TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, 119 - "Unexpected exit reason: %u (%s),\n", 119 + "Unexpected exit reason: %u (%s),", 120 120 run->exit_reason, 121 121 exit_reason_str(run->exit_reason)); 122 122
+1 -1
tools/testing/selftests/kvm/x86_64/xss_msr_test.c
··· 29 29 30 30 xss_val = vcpu_get_msr(vcpu, MSR_IA32_XSS); 31 31 TEST_ASSERT(xss_val == 0, 32 - "MSR_IA32_XSS should be initialized to zero\n"); 32 + "MSR_IA32_XSS should be initialized to zero"); 33 33 34 34 vcpu_set_msr(vcpu, MSR_IA32_XSS, xss_val); 35 35
-3
tools/testing/selftests/net/forwarding/tc_actions.sh
··· 235 235 check_err $? "didn't mirred redirect ICMP" 236 236 tc_check_packets "dev $h1 ingress" 102 10 237 237 check_err $? "didn't drop mirred ICMP" 238 - local overlimits=$(tc_rule_stats_get ${h1} 101 egress .overlimits) 239 - test ${overlimits} = 10 240 - check_err $? "wrong overlimits, expected 10 got ${overlimits}" 241 238 242 239 tc filter del dev $h1 egress protocol ip pref 100 handle 100 flower 243 240 tc filter del dev $h1 egress protocol ip pref 101 handle 101 flower
+17 -19
tools/testing/selftests/net/ioam6.sh
··· 367 367 local desc=$2 368 368 local node_src=$3 369 369 local node_dst=$4 370 - local ip6_src=$5 371 - local ip6_dst=$6 372 - local if_dst=$7 373 - local trace_type=$8 374 - local ioam_ns=$9 370 + local ip6_dst=$5 371 + local trace_type=$6 372 + local ioam_ns=$7 373 + local type=$8 375 374 376 - ip netns exec $node_dst ./ioam6_parser $if_dst $name $ip6_src $ip6_dst \ 377 - $trace_type $ioam_ns & 375 + ip netns exec $node_dst ./ioam6_parser $name $trace_type $ioam_ns $type & 378 376 local spid=$! 379 377 sleep 0.1 380 378 ··· 487 489 trace prealloc type 0x800000 ns 0 size 4 dev veth0 488 490 489 491 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 490 - db01::2 db01::1 veth0 0x800000 0 492 + db01::1 0x800000 0 $1 491 493 492 494 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 493 495 } ··· 507 509 trace prealloc type 0xc00000 ns 123 size 4 dev veth0 508 510 509 511 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 510 - db01::2 db01::1 veth0 0xc00000 123 512 + db01::1 0xc00000 123 $1 511 513 512 514 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 513 515 } ··· 541 543 if [ $cmd_res != 0 ] 542 544 then 543 545 npassed=$((npassed+1)) 544 - log_test_passed "$descr" 546 + log_test_passed "$descr ($1 mode)" 545 547 else 546 548 nfailed=$((nfailed+1)) 547 - log_test_failed "$descr" 549 + log_test_failed "$descr ($1 mode)" 548 550 fi 549 551 else 550 552 run_test "out_bit$i" "$descr ($1 mode)" $ioam_node_alpha \ 551 - $ioam_node_beta db01::2 db01::1 veth0 ${bit2type[$i]} 123 553 + $ioam_node_beta db01::1 ${bit2type[$i]} 123 $1 552 554 fi 553 555 done 554 556 ··· 572 574 trace prealloc type 0xfff002 ns 123 size 100 dev veth0 573 575 574 576 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 575 - db01::2 db01::1 veth0 0xfff002 123 577 + db01::1 0xfff002 123 $1 576 578 577 579 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 578 580 } ··· 602 604 trace prealloc type 0x800000 ns 0 size 4 dev veth0 603 605 604 606 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 605 - db01::2 db01::1 veth0 0x800000 0 607 + db01::1 0x800000 0 $1 606 608 607 609 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 608 610 } ··· 622 624 trace prealloc type 0xc00000 ns 123 size 4 dev veth0 623 625 624 626 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 625 - db01::2 db01::1 veth0 0xc00000 123 627 + db01::1 0xc00000 123 $1 626 628 627 629 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 628 630 } ··· 649 651 dev veth0 650 652 651 653 run_test "in_bit$i" "${desc/<n>/$i} ($1 mode)" $ioam_node_alpha \ 652 - $ioam_node_beta db01::2 db01::1 veth0 ${bit2type[$i]} 123 654 + $ioam_node_beta db01::1 ${bit2type[$i]} 123 $1 653 655 done 654 656 655 657 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down ··· 677 679 trace prealloc type 0xc00000 ns 123 size 4 dev veth0 678 680 679 681 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 680 - db01::2 db01::1 veth0 0xc00000 123 682 + db01::1 0xc00000 123 $1 681 683 682 684 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 683 685 ··· 701 703 trace prealloc type 0xfff002 ns 123 size 80 dev veth0 702 704 703 705 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_beta \ 704 - db01::2 db01::1 veth0 0xfff002 123 706 + db01::1 0xfff002 123 $1 705 707 706 708 [ "$1" = "encap" ] && ip -netns $ioam_node_beta link set ip6tnl0 down 707 709 } ··· 729 731 trace prealloc type 0xfff002 ns 123 size 244 via db01::1 dev veth0 730 732 731 733 run_test ${FUNCNAME[0]} "${desc} ($1 mode)" $ioam_node_alpha $ioam_node_gamma \ 732 - db01::2 db02::2 veth0 0xfff002 123 734 + db02::2 0xfff002 123 $1 733 735 734 736 [ "$1" = "encap" ] && ip -netns $ioam_node_gamma link set ip6tnl0 down 735 737 }
+48 -47
tools/testing/selftests/net/ioam6_parser.c
··· 8 8 #include <errno.h> 9 9 #include <limits.h> 10 10 #include <linux/const.h> 11 - #include <linux/if_ether.h> 12 11 #include <linux/ioam6.h> 13 12 #include <linux/ipv6.h> 14 13 #include <stdlib.h> ··· 511 512 return -1; 512 513 } 513 514 514 - static int ipv6_addr_equal(const struct in6_addr *a1, const struct in6_addr *a2) 515 - { 516 - return ((a1->s6_addr32[0] ^ a2->s6_addr32[0]) | 517 - (a1->s6_addr32[1] ^ a2->s6_addr32[1]) | 518 - (a1->s6_addr32[2] ^ a2->s6_addr32[2]) | 519 - (a1->s6_addr32[3] ^ a2->s6_addr32[3])) == 0; 520 - } 521 - 522 515 static int get_u32(__u32 *val, const char *arg, int base) 523 516 { 524 517 unsigned long res; ··· 594 603 595 604 int main(int argc, char **argv) 596 605 { 597 - int fd, size, hoplen, tid, ret = 1; 598 - struct in6_addr src, dst; 606 + int fd, size, hoplen, tid, ret = 1, on = 1; 599 607 struct ioam6_hdr *opt; 600 - struct ipv6hdr *ip6h; 601 - __u8 buffer[400], *p; 602 - __u16 ioam_ns; 608 + struct cmsghdr *cmsg; 609 + struct msghdr msg; 610 + struct iovec iov; 611 + __u8 buffer[512]; 603 612 __u32 tr_type; 613 + __u16 ioam_ns; 614 + __u8 *ptr; 604 615 605 - if (argc != 7) 616 + if (argc != 5) 606 617 goto out; 607 618 608 - tid = str2id(argv[2]); 619 + tid = str2id(argv[1]); 609 620 if (tid < 0 || !func[tid]) 610 621 goto out; 611 622 612 - if (inet_pton(AF_INET6, argv[3], &src) != 1 || 613 - inet_pton(AF_INET6, argv[4], &dst) != 1) 623 + if (get_u32(&tr_type, argv[2], 16) || 624 + get_u16(&ioam_ns, argv[3], 0)) 614 625 goto out; 615 626 616 - if (get_u32(&tr_type, argv[5], 16) || 617 - get_u16(&ioam_ns, argv[6], 0)) 627 + fd = socket(PF_INET6, SOCK_RAW, 628 + !strcmp(argv[4], "encap") ? IPPROTO_IPV6 : IPPROTO_ICMPV6); 629 + if (fd < 0) 618 630 goto out; 619 631 620 - fd = socket(AF_PACKET, SOCK_DGRAM, __cpu_to_be16(ETH_P_IPV6)); 621 - if (!fd) 622 - goto out; 632 + setsockopt(fd, IPPROTO_IPV6, IPV6_RECVHOPOPTS, &on, sizeof(on)); 623 633 624 - if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, 625 - argv[1], strlen(argv[1]))) 634 + iov.iov_len = 1; 635 + iov.iov_base = malloc(CMSG_SPACE(sizeof(buffer))); 636 + if (!iov.iov_base) 626 637 goto close; 627 - 628 638 recv: 629 - size = recv(fd, buffer, sizeof(buffer), 0); 639 + memset(&msg, 0, sizeof(msg)); 640 + msg.msg_iov = &iov; 641 + msg.msg_iovlen = 1; 642 + msg.msg_control = buffer; 643 + msg.msg_controllen = CMSG_SPACE(sizeof(buffer)); 644 + 645 + size = recvmsg(fd, &msg, 0); 630 646 if (size <= 0) 631 647 goto close; 632 648 633 - ip6h = (struct ipv6hdr *)buffer; 649 + for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) { 650 + if (cmsg->cmsg_level != IPPROTO_IPV6 || 651 + cmsg->cmsg_type != IPV6_HOPOPTS || 652 + cmsg->cmsg_len < sizeof(struct ipv6_hopopt_hdr)) 653 + continue; 634 654 635 - if (!ipv6_addr_equal(&ip6h->saddr, &src) || 636 - !ipv6_addr_equal(&ip6h->daddr, &dst)) 655 + ptr = (__u8 *)CMSG_DATA(cmsg); 637 - goto recv; 638 656 639 - if (ip6h->nexthdr != IPPROTO_HOPOPTS) 640 - goto close; 657 + hoplen = (ptr[1] + 1) << 3; 658 + ptr += sizeof(struct ipv6_hopopt_hdr); 641 659 642 - p = buffer + sizeof(*ip6h); 643 - hoplen = (p[1] + 1) << 3; 644 - p += sizeof(struct ipv6_hopopt_hdr); 660 + while (hoplen > 0) { 661 + opt = (struct ioam6_hdr *)ptr; 645 662 646 - while (hoplen > 0) { 647 - opt = (struct ioam6_hdr *)p; 663 + if (opt->opt_type == IPV6_TLV_IOAM && 664 + opt->type == IOAM6_TYPE_PREALLOC) { 665 + ptr += sizeof(*opt); 666 + ret = func[tid](tid, 667 + (struct ioam6_trace_hdr *)ptr, 668 + tr_type, ioam_ns); 669 + goto close; 670 + } 648 671 649 - if (opt->opt_type == IPV6_TLV_IOAM && 650 - opt->type == IOAM6_TYPE_PREALLOC) { 651 - p += sizeof(*opt); 652 - ret = func[tid](tid, (struct ioam6_trace_hdr *)p, 653 - tr_type, ioam_ns); 654 - break; 672 + ptr += opt->opt_len + 2; 673 + hoplen -= opt->opt_len + 2; 655 674 } 656 - 657 - p += opt->opt_len + 2; 658 - hoplen -= opt->opt_len + 2; 659 675 } 676 + 677 + goto recv; 660 678 close: 679 + free(iov.iov_base); 661 680 close(fd); 662 681 out: 663 682 return ret;
+25 -16
tools/testing/selftests/net/mptcp/diag.sh
··· 62 62 nr=$(eval $command) 63 63 64 64 printf "%-50s" "$msg" 65 - if [ $nr != $expected ]; then 66 - if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then 65 + if [ "$nr" != "$expected" ]; then 66 + if [ "$nr" = "$skip" ] && ! mptcp_lib_expect_all_features; then 67 67 echo "[ skip ] Feature probably not supported" 68 68 mptcp_lib_result_skip "${msg}" 69 69 else ··· 166 166 chk_msk_inuse() 167 167 { 168 168 local expected=$1 169 - local msg="$2" 169 + local msg="....chk ${2:-${expected}} msk in use" 170 170 local listen_nr 171 + 172 + if [ "${expected}" -eq 0 ]; then 173 + msg+=" after flush" 174 + fi 171 175 172 176 listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN) 173 177 expected=$((expected + listen_nr)) ··· 183 179 sleep 0.1 184 180 done 185 181 186 - __chk_nr get_msk_inuse $expected "$msg" 0 182 + __chk_nr get_msk_inuse $expected "${msg}" 0 187 183 } 188 184 189 185 # $1: cestab nr 190 186 chk_msk_cestab() 191 187 { 192 - local cestab=$1 188 + local expected=$1 189 + local msg="....chk ${2:-${expected}} cestab" 190 + 191 + if [ "${expected}" -eq 0 ]; then 192 + msg+=" after flush" 193 + fi 193 194 194 195 __chk_nr "mptcp_lib_get_counter ${ns} MPTcpExtMPCurrEstab" \ 195 - "${cestab}" "....chk ${cestab} cestab" "" 196 + "${expected}" "${msg}" "" 196 197 } 197 198 198 199 wait_connected() ··· 236 227 chk_msk_nr 2 "after MPC handshake " 237 228 chk_msk_remote_key_nr 2 "....chk remote_key" 238 229 chk_msk_fallback_nr 0 "....chk no fallback" 239 - chk_msk_inuse 2 "....chk 2 msk in use" 230 + chk_msk_inuse 2 240 231 chk_msk_cestab 2 241 232 flush_pids 242 233 243 - chk_msk_inuse 0 "....chk 0 msk in use after flush" 234 + chk_msk_inuse 0 "2->0" 244 - chk_msk_cestab 0 235 + chk_msk_cestab 0 "2->0" 245 236 246 237 echo "a" | \ 247 238 timeout ${timeout_test} \ ··· 256 247 127.0.0.1 >/dev/null & 257 248 wait_connected $ns 10001 258 249 chk_msk_fallback_nr 1 "check fallback" 259 - chk_msk_inuse 1 "....chk 1 msk in use" 250 + chk_msk_inuse 1 260 251 chk_msk_cestab 1 261 252 flush_pids 262 253 263 - chk_msk_inuse 0 "....chk 0 msk in use after flush" 254 + chk_msk_inuse 0 "1->0" 264 - chk_msk_cestab 0 255 + chk_msk_cestab 0 "1->0" 265 256 266 257 NR_CLIENTS=100 267 258 for I in `seq 1 $NR_CLIENTS`; do ··· 282 273 done 283 274 284 275 wait_msk_nr $((NR_CLIENTS*2)) "many msk socket present" 285 - chk_msk_inuse $((NR_CLIENTS*2)) "....chk many msk in use" 276 + chk_msk_inuse $((NR_CLIENTS*2)) "many" 286 - chk_msk_cestab $((NR_CLIENTS*2)) 277 + chk_msk_cestab $((NR_CLIENTS*2)) "many" 287 278 flush_pids 288 279 289 - chk_msk_inuse 0 "....chk 0 msk in use after flush" 280 + chk_msk_inuse 0 "many->0" 290 - chk_msk_cestab 0 281 + chk_msk_cestab 0 "many->0" 291 282 292 283 mptcp_lib_result_print_all_tap 293 284 exit $ret
+7 -1
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 183 183 subflow 10.0.1.1" " (nobackup)" 184 184 185 185 # fullmesh support has been added later 186 - ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh 186 + ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh 2>/dev/null 187 187 if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" || 188 188 mptcp_lib_expect_all_features; then 189 189 check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ ··· 194 194 ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh 195 195 check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 196 196 subflow,backup,fullmesh 10.0.1.1" " (backup,fullmesh)" 197 + else 198 + for st in fullmesh nofullmesh backup,fullmesh; do 199 + st=" (${st})" 200 + printf "%-50s%s\n" "${st}" "[SKIP]" 201 + mptcp_lib_result_skip "${st}" 202 + done 197 203 fi 198 204 199 205 mptcp_lib_result_print_all_tap
+2 -1
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 250 250 [ $bail -eq 0 ] || exit $ret 251 251 fi 252 252 253 - printf "%-60s" "$msg - reverse direction" 253 + msg+=" - reverse direction" 254 + printf "%-60s" "${msg}" 254 255 do_transfer $large $small $time 255 256 lret=$? 256 257 mptcp_lib_result_code "${lret}" "${msg}"
+2 -2
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 75 75 { 76 76 test_name="${1}" 77 77 78 - _printf "%-63s" "${test_name}" 78 + _printf "%-68s" "${test_name}" 79 79 } 80 80 81 81 print_results() ··· 542 542 local remid 543 543 local info 544 544 545 - info="${e_saddr} (${e_from}) => ${e_daddr} (${e_to})" 545 + info="${e_saddr} (${e_from}) => ${e_daddr}:${e_dport} (${e_to})" 546 546 547 547 if [ "$e_type" = "$SUB_ESTABLISHED" ] 548 548 then
+45
tools/testing/selftests/net/tls.c
··· 1485 1485 EXPECT_EQ(memcmp(buf, test_str, send_len), 0); 1486 1486 } 1487 1487 1488 + TEST_F(tls, control_msg_nomerge) 1489 + { 1490 + char *rec1 = "1111"; 1491 + char *rec2 = "2222"; 1492 + int send_len = 5; 1493 + char buf[15]; 1494 + 1495 + if (self->notls) 1496 + SKIP(return, "no TLS support"); 1497 + 1498 + EXPECT_EQ(tls_send_cmsg(self->fd, 100, rec1, send_len, 0), send_len); 1499 + EXPECT_EQ(tls_send_cmsg(self->fd, 100, rec2, send_len, 0), send_len); 1500 + 1501 + EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, 100, buf, sizeof(buf), MSG_PEEK), send_len); 1502 + EXPECT_EQ(memcmp(buf, rec1, send_len), 0); 1503 + 1504 + EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, 100, buf, sizeof(buf), MSG_PEEK), send_len); 1505 + EXPECT_EQ(memcmp(buf, rec1, send_len), 0); 1506 + 1507 + EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, 100, buf, sizeof(buf), 0), send_len); 1508 + EXPECT_EQ(memcmp(buf, rec1, send_len), 0); 1509 + 1510 + EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, 100, buf, sizeof(buf), 0), send_len); 1511 + EXPECT_EQ(memcmp(buf, rec2, send_len), 0); 1512 + } 1513 + 1514 + TEST_F(tls, data_control_data) 1515 + { 1516 + char *rec1 = "1111"; 1517 + char *rec2 = "2222"; 1518 + char *rec3 = "3333"; 1519 + int send_len = 5; 1520 + char buf[15]; 1521 + 1522 + if (self->notls) 1523 + SKIP(return, "no TLS support"); 1524 + 1525 + EXPECT_EQ(send(self->fd, rec1, send_len, 0), send_len); 1526 + EXPECT_EQ(tls_send_cmsg(self->fd, 100, rec2, send_len, 0), send_len); 1527 + EXPECT_EQ(send(self->fd, rec3, send_len, 0), send_len); 1528 + 1529 + EXPECT_EQ(recv(self->cfd, buf, sizeof(buf), MSG_PEEK), send_len); 1530 + EXPECT_EQ(recv(self->cfd, buf, sizeof(buf), MSG_PEEK), send_len); 1531 + } 1532 + 1488 1533 TEST_F(tls, shutdown) 1489 1534 { 1490 1535 char const *test_str = "test_read";
+2 -2
tools/testing/selftests/powerpc/papr_vpd/papr_vpd.c
··· 263 263 off_t size; 264 264 int fd; 265 265 266 - SKIP_IF_MSG(get_system_loc_code(&lc), 267 - "Cannot determine system location code"); 268 266 SKIP_IF_MSG(devfd < 0 && errno == ENOENT, 269 267 DEVPATH " not present"); 268 + SKIP_IF_MSG(get_system_loc_code(&lc), 269 + "Cannot determine system location code"); 270 270 271 271 FAIL_IF(devfd < 0); 272 272