···
 What:		/sys/bus/hid/drivers/hid-appletb-kbd/<dev>/mode
-Date:		September, 2023
-KernelVersion:	6.5
+Date:		March, 2025
+KernelVersion:	6.15
 Contact:	linux-input@vger.kernel.org
 Description:
 		The set of keys displayed on the Touch Bar.
···
+.. SPDX-License-Identifier: GPL-2.0
+
+Indirect Target Selection (ITS)
+===============================
+
+ITS is a vulnerability in some Intel CPUs that support Enhanced IBRS and were
+released before Alder Lake. ITS may allow an attacker to control the prediction
+of indirect branches and RETs located in the lower half of a cacheline.
+
+ITS is assigned CVE-2024-28956 with a CVSS score of 4.7 (Medium).
+
+Scope of Impact
+---------------
+- **eIBRS Guest/Host Isolation**: Indirect branches in the KVM/kernel may still
+  be predicted with an unintended target corresponding to a branch in the
+  guest.
+
+- **Intra-Mode BTI**: In-kernel training, such as through cBPF or other native
+  gadgets.
+
+- **Indirect Branch Prediction Barrier (IBPB)**: After an IBPB, indirect
+  branches may still be predicted with targets corresponding to direct branches
+  executed prior to the IBPB. This is fixed by the IPU 2025.1 microcode, which
+  should be available via distro updates. Alternatively, microcode can be
+  obtained from Intel's github repository [#f1]_.
+
+Affected CPUs
+-------------
+Below is the list of ITS-affected CPUs [#f2]_ [#f3]_:
+
+   ========================  ============  ====================  ===============
+   Common name               Family_Model  eIBRS                 Intra-mode BTI
+                                           Guest/Host Isolation
+   ========================  ============  ====================  ===============
+   SKYLAKE_X (step >= 6)     06_55H        Affected              Affected
+   ICELAKE_X                 06_6AH        Not affected          Affected
+   ICELAKE_D                 06_6CH        Not affected          Affected
+   ICELAKE_L                 06_7EH        Not affected          Affected
+   TIGERLAKE_L               06_8CH        Not affected          Affected
+   TIGERLAKE                 06_8DH        Not affected          Affected
+   KABYLAKE_L (step >= 12)   06_8EH        Affected              Affected
+   KABYLAKE (step >= 13)     06_9EH        Affected              Affected
+   COMETLAKE                 06_A5H        Affected              Affected
+   COMETLAKE_L               06_A6H        Affected              Affected
+   ROCKETLAKE                06_A7H        Not affected          Affected
+   ========================  ============  ====================  ===============
+
+- All affected CPUs enumerate the Enhanced IBRS feature.
+- IBPB isolation is affected on all ITS-affected CPUs and needs a microcode
+  update for mitigation.
+- None of the affected CPUs enumerate BHI_CTRL, which was introduced in Golden
+  Cove (Alder Lake and Sapphire Rapids). This can help guests to determine the
+  host's affected status.
+- Intel Atom CPUs are not affected by ITS.
+
+Mitigation
+----------
+As only the indirect branches and RETs that have the last byte of their
+instruction in the lower half of the cacheline are vulnerable to ITS, the basic
+idea behind the mitigation is to not allow indirect branches in the lower half.
+
+This is achieved by relying on the existing retpoline support in the kernel and
+in compilers. ITS-vulnerable retpoline sites are runtime patched to point to
+newly added ITS-safe thunks. These safe thunks consist of an indirect branch in
+the second half of the cacheline. Not all retpoline sites are patched to
+thunks; if a retpoline site is evaluated to be ITS-safe, it is replaced with an
+inline indirect branch.
+
+Dynamic thunks
+~~~~~~~~~~~~~~
+From a dynamically allocated pool of safe thunks, each vulnerable site is
+replaced with a new thunk, such that they get a unique address. This could
+improve the branch prediction accuracy. It is also a defense-in-depth measure
+against aliasing.
+
+Note, for simplicity, indirect branches in eBPF programs are always replaced
+with a jump to a static thunk in __x86_indirect_its_thunk_array. If required,
+this can be changed to use dynamic thunks in the future.
+
+All vulnerable RETs are replaced with a static thunk; they do not use dynamic
+thunks. This is because RETs mostly get their prediction from the RSB, which
+does not depend on the source address. RETs that underflow the RSB may benefit
+from dynamic thunks. But RETs significantly outnumber indirect branches, and
+any benefit from a unique source address could be outweighed by the increased
+icache footprint and iTLB pressure.
+
+Retpoline
+~~~~~~~~~
+The retpoline sequence also mitigates ITS-unsafe indirect branches. For this
+reason, when retpoline is enabled, the ITS mitigation only relocates the RETs
+to safe thunks, unless the user requested the RSB-stuffing mitigation.
+
+RSB Stuffing
+~~~~~~~~~~~~
+RSB-stuffing via Call Depth Tracking is a mitigation for Retbleed RSB-underflow
+attacks, and it also mitigates RETs that are vulnerable to ITS.
+
+Mitigation in guests
+^^^^^^^^^^^^^^^^^^^^
+All guests deploy the ITS mitigation by default, irrespective of eIBRS
+enumeration and Family/Model of the guest. This is because the eIBRS feature
+could be hidden from a guest. One exception to this is when a guest enumerates
+BHI_DIS_S, which indicates that the guest is running on an unaffected host.
+
+To prevent guests from unnecessarily deploying the mitigation on unaffected
+platforms, Intel has defined ITS_NO bit(62) in MSR IA32_ARCH_CAPABILITIES. When
+a guest sees this bit set, it should not enumerate the ITS bug. Note, this bit
+is not set by any hardware, but is **intended for VMMs to synthesize** it for
+guests as per the host's affected status.
+
+Mitigation options
+^^^^^^^^^^^^^^^^^^
+The ITS mitigation can be controlled using the "indirect_target_selection"
+kernel parameter. The available options are:
+
+   ======== ===================================================================
+   on       (default) Deploy the "Aligned branch/return thunks" mitigation.
+            If the spectre_v2 mitigation enables retpoline, aligned-thunks are
+            only deployed for the affected RET instructions. Retpoline
+            mitigates indirect branches.
+
+   off      Disable ITS mitigation.
+
+   vmexit   Equivalent to "=on" if the CPU is affected by the guest/host
+            isolation part of ITS. Otherwise, the mitigation is not deployed.
+            This option is useful when host userspace is not in the threat
+            model, and only attacks from guest to host are considered.
+
+   stuff    Deploy the RSB-fill mitigation when retpoline is also deployed.
+            Otherwise, deploy the default mitigation. When the retpoline
+            mitigation is enabled, RSB-stuffing via Call-Depth-Tracking also
+            mitigates ITS.
+
+   force    Force the ITS bug and deploy the default mitigation.
+   ======== ===================================================================
+
+Sysfs reporting
+---------------
+
+The sysfs file showing ITS mitigation status is:
+
+  /sys/devices/system/cpu/vulnerabilities/indirect_target_selection
+
+Note, microcode mitigation status is not reported in this file.
+
+The possible values in this file are:
+
+.. list-table::
+
+   * - Not affected
+     - The processor is not vulnerable.
+   * - Vulnerable
+     - System is vulnerable and no mitigation has been applied.
+   * - Vulnerable, KVM: Not affected
+     - System is vulnerable to intra-mode BTI, but not affected by eIBRS
+       guest/host isolation.
+   * - Mitigation: Aligned branch/return thunks
+     - The mitigation is enabled; affected indirect branches and RETs are
+       relocated to safe thunks.
+   * - Mitigation: Retpolines, Stuffing RSB
+     - The mitigation is enabled using retpoline and RSB stuffing.
+
+References
+----------
+.. [#f1] Microcode repository - https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files
+
+.. [#f2] Affected Processors list - https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
+
+.. [#f3] Affected Processors list (machine readable) - https://github.com/intel/Intel-affected-processor-list
Documentation/admin-guide/kernel-parameters.txt (+18)

···
 			different crypto accelerators. This option can be used
 			to achieve best performance for particular HW.
 
+	indirect_target_selection= [X86,Intel] Mitigation control for Indirect
+			Target Selection (ITS) bug in Intel CPUs. Updated
+			microcode is also required for a fix in IBPB.
+
+			on:     Enable mitigation (default).
+			off:    Disable mitigation.
+			force:  Force the ITS bug and deploy default
+				mitigation.
+			vmexit: Only deploy mitigation if CPU is affected by
+				guest/host isolation part of ITS.
+			stuff:  Deploy RSB-fill mitigation when retpoline is
+				also deployed. Otherwise, deploy the default
+				mitigation.
+
+			For details see:
+			Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
+
 	init=		[KNL]
 			Format: <full_path>
 			Run specified binary instead of /sbin/init as init
···
 			expose users to several CPU vulnerabilities.
 			Equivalent to: if nokaslr then kpti=0 [ARM64]
 				       gather_data_sampling=off [X86]
+				       indirect_target_selection=off [X86]
 				       kvm.nx_huge_pages=off [X86]
 				       l1tf=off [X86]
 				       mds=off [X86]
···
 `KBUILD_BUILD_USER and KBUILD_BUILD_HOST`_ variables. If you are
 building from a git commit, you could use its committer address.
 
+Absolute filenames
+------------------
+
+When the kernel is built out-of-tree, debug information may include
+absolute filenames for the source files. This must be overridden by
+including the ``-fdebug-prefix-map`` option in the `KCFLAGS`_ variable.
+
+Depending on the compiler used, the ``__FILE__`` macro may also expand
+to an absolute filename in an out-of-tree build. Kbuild automatically
+uses the ``-fmacro-prefix-map`` option to prevent this, if it is
+supported.
+
+The Reproducible Builds web site has more information about these
+`prefix-map options`_.
+
 Generated files in source packages
 ----------------------------------
···
 
 .. _KBUILD_BUILD_TIMESTAMP: kbuild.html#kbuild-build-timestamp
 .. _KBUILD_BUILD_USER and KBUILD_BUILD_HOST: kbuild.html#kbuild-build-user-kbuild-build-host
+.. _KCFLAGS: kbuild.html#kcflags
+.. _prefix-map options: https://reproducible-builds.org/docs/build-path/
 .. _Reproducible Builds project: https://reproducible-builds.org/
 .. _SOURCE_DATE_EPOCH: https://reproducible-builds.org/docs/source-date-epoch/
···
 3.2.4 Other caveats for MAC drivers
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Stacked PHCs, especially DSA (but not only) - since that doesn't require any
-modification to MAC drivers, so it is more difficult to ensure correctness of
-all possible code paths - is that they uncover bugs which were impossible to
-trigger before the existence of stacked PTP clocks. One example has to do with
-this line of code, already presented earlier::
+The use of stacked PHCs may uncover MAC driver bugs which were impossible to
+trigger without them. One example has to do with this line of code, already
+presented earlier::
 
 	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
MAINTAINERS (+22 -4)

···
 F:	include/linux/gpio/regmap.h
 K:	(devm_)?gpio_regmap_(un)?register
 
+GPIO SLOPPY LOGIC ANALYZER
+M:	Wolfram Sang <wsa+renesas@sang-engineering.com>
+S:	Supported
+F:	Documentation/dev-tools/gpio-sloppy-logic-analyzer.rst
+F:	drivers/gpio/gpio-sloppy-logic-analyzer.c
+F:	tools/gpio/gpio-sloppy-logic-analyzer.sh
+
 GPIO SUBSYSTEM
 M:	Linus Walleij <linus.walleij@linaro.org>
 M:	Bartosz Golaszewski <brgl@bgdev.pl>
···
 F:	include/linux/execmem.h
 F:	mm/execmem.c
 
+MEMORY MANAGEMENT - GUP (GET USER PAGES)
+M:	Andrew Morton <akpm@linux-foundation.org>
+M:	David Hildenbrand <david@redhat.com>
+R:	Jason Gunthorpe <jgg@nvidia.com>
+R:	John Hubbard <jhubbard@nvidia.com>
+R:	Peter Xu <peterx@redhat.com>
+L:	linux-mm@kvack.org
+S:	Maintained
+W:	http://www.linux-mm.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
+F:	mm/gup.c
+
 MEMORY MANAGEMENT - NUMA MEMBLOCKS AND NUMA EMULATION
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Mike Rapoport <rppt@kernel.org>
···
 PARAVIRT_OPS INTERFACE
 M:	Juergen Gross <jgross@suse.com>
 R:	Ajay Kaher <ajay.kaher@broadcom.com>
-R:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+R:	Alexey Makhalov <alexey.makhalov@broadcom.com>
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	virtualization@lists.linux.dev
 L:	x86@kernel.org
···
 
 SPEAR PLATFORM/CLOCK/PINCTRL SUPPORT
 M:	Viresh Kumar <vireshk@kernel.org>
-M:	Shiraz Hashim <shiraz.linux.kernel@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	soc@lists.linux.dev
 S:	Maintained
···
 
 VMWARE HYPERVISOR INTERFACE
 M:	Ajay Kaher <ajay.kaher@broadcom.com>
-M:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+M:	Alexey Makhalov <alexey.makhalov@broadcom.com>
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	virtualization@lists.linux.dev
 L:	x86@kernel.org
···
 VMWARE VIRTUAL PTP CLOCK DRIVER
 M:	Nick Shi <nick.shi@broadcom.com>
 R:	Ajay Kaher <ajay.kaher@broadcom.com>
-R:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+R:	Alexey Makhalov <alexey.makhalov@broadcom.com>
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	netdev@vger.kernel.org
 S:	Supported
Makefile (+2 -3)

···
 VERSION = 6
 PATCHLEVEL = 15
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
···
 
 # change __FILE__ to the relative path to the source directory
 ifdef building_out_of_srctree
-KBUILD_CPPFLAGS += $(call cc-option,-ffile-prefix-map=$(srcroot)/=)
-KBUILD_RUSTFLAGS += --remap-path-prefix=$(srcroot)/=
+KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srcroot)/=)
 endif
 
 # include additional Makefiles when needed
arch/arm/boot/dts/amlogic/meson8.dtsi (+3 -3)

···
 	pwm_ef: pwm@86c0 {
 		compatible = "amlogic,meson8-pwm-v2";
 		clocks = <&xtal>,
-			 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+			 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 			 <&clkc CLKID_FCLK_DIV4>,
 			 <&clkc CLKID_FCLK_DIV3>;
 		reg = <0x86c0 0x10>;
···
 &pwm_ab {
 	compatible = "amlogic,meson8-pwm-v2";
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+		 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
···
 &pwm_cd {
 	compatible = "amlogic,meson8-pwm-v2";
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+		 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
arch/arm/boot/dts/amlogic/meson8b.dtsi (+3 -3)

···
 		compatible = "amlogic,meson8b-pwm-v2", "amlogic,meson8-pwm-v2";
 		reg = <0x86c0 0x10>;
 		clocks = <&xtal>,
-			 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+			 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 			 <&clkc CLKID_FCLK_DIV4>,
 			 <&clkc CLKID_FCLK_DIV3>;
 		#pwm-cells = <3>;
···
 &pwm_ab {
 	compatible = "amlogic,meson8b-pwm-v2", "amlogic,meson8-pwm-v2";
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+		 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
···
 &pwm_cd {
 	compatible = "amlogic,meson8b-pwm-v2", "amlogic,meson8-pwm-v2";
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "Video PLL" */
+		 <0>, /* unknown/untested, the datasheet calls it "Video PLL" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
···
 
 &pwm_ab {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
···
 
 &pwm_cd {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
 
 &pwm_ef {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi (+3 -3)

···
 
 &pwm_ab {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
···
 
 &pwm_cd {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
 
 &pwm_ef {
 	clocks = <&xtal>,
-		 <>, /* unknown/untested, the datasheet calls it "vid_pll" */
+		 <0>, /* unknown/untested, the datasheet calls it "vid_pll" */
 		 <&clkc CLKID_FCLK_DIV4>,
 		 <&clkc CLKID_FCLK_DIV3>;
 };
arch/arm64/boot/dts/apple/t8103-j293.dts (+10)

···
 	};
 };
 
+/*
+ * The driver depends on boot loader initialized state which resets when this
+ * power-domain is powered off. This happens on suspend or when the driver is
+ * missing during boot. Mark the domain as always on until the driver can
+ * handle this.
+ */
+&ps_dispdfr_be {
+	apple,always-on;
+};
+
 &display_dfr {
 	status = "okay";
 };
arch/arm64/boot/dts/apple/t8112-j493.dts (+10)

···
 	};
 };
 
+/*
+ * The driver depends on boot loader initialized state which resets when this
+ * power-domain is powered off. This happens on suspend or when the driver is
+ * missing during boot. Mark the domain as always on until the driver can
+ * handle this.
+ */
+&ps_dispdfr_be {
+	apple,always-on;
+};
+
 &display_dfr {
 	status = "okay";
 };
···
 
 /* Query offset/name of register from its name/offset */
 extern int regs_query_register_offset(const char *name);
-#define MAX_REG_OFFSET (offsetof(struct pt_regs, __last))
+#define MAX_REG_OFFSET (offsetof(struct pt_regs, __last) - sizeof(unsigned long))
 
 /**
  * regs_get_register() - get register value from its offset
···
 	  of speculative execution in a similar way to the Meltdown and Spectre
 	  security vulnerabilities.
 
+config MITIGATION_ITS
+	bool "Enable Indirect Target Selection mitigation"
+	depends on CPU_SUP_INTEL && X86_64
+	depends on MITIGATION_RETPOLINE && MITIGATION_RETHUNK
+	select EXECMEM
+	default y
+	help
+	  Enable Indirect Target Selection (ITS) mitigation. ITS is a bug in
+	  BPU on some Intel CPUs that may allow Spectre V2 style attacks. If
+	  disabled, mitigation cannot be enabled via cmdline.
+	  See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
+
 endif
 
 config ARCH_HAS_ADD_PAGES
arch/x86/coco/sev/core.c (+165 -90)

···
 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
 }
 
+static int vmgexit_ap_control(u64 event, struct sev_es_save_area *vmsa, u32 apic_id)
+{
+	bool create = event != SVM_VMGEXIT_AP_DESTROY;
+	struct ghcb_state state;
+	unsigned long flags;
+	struct ghcb *ghcb;
+	int ret = 0;
+
+	local_irq_save(flags);
+
+	ghcb = __sev_get_ghcb(&state);
+
+	vc_ghcb_invalidate(ghcb);
+
+	if (create)
+		ghcb_set_rax(ghcb, vmsa->sev_features);
+
+	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
+	ghcb_set_sw_exit_info_1(ghcb,
+				((u64)apic_id << 32)	|
+				((u64)snp_vmpl << 16)	|
+				event);
+	ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));
+
+	sev_es_wr_ghcb_msr(__pa(ghcb));
+	VMGEXIT();
+
+	if (!ghcb_sw_exit_info_1_is_valid(ghcb) ||
+	    lower_32_bits(ghcb->save.sw_exit_info_1)) {
+		pr_err("SNP AP %s error\n", (create ? "CREATE" : "DESTROY"));
+		ret = -EINVAL;
+	}
+
+	__sev_put_ghcb(&state);
+
+	local_irq_restore(flags);
+
+	return ret;
+}
+
+static int snp_set_vmsa(void *va, void *caa, int apic_id, bool make_vmsa)
+{
+	int ret;
+
+	if (snp_vmpl) {
+		struct svsm_call call = {};
+		unsigned long flags;
+
+		local_irq_save(flags);
+
+		call.caa = this_cpu_read(svsm_caa);
+		call.rcx = __pa(va);
+
+		if (make_vmsa) {
+			/* Protocol 0, Call ID 2 */
+			call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
+			call.rdx = __pa(caa);
+			call.r8  = apic_id;
+		} else {
+			/* Protocol 0, Call ID 3 */
+			call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
+		}
+
+		ret = svsm_perform_call_protocol(&call);
+
+		local_irq_restore(flags);
+	} else {
+		/*
+		 * If the kernel runs at VMPL0, it can change the VMSA
+		 * bit for a page using the RMPADJUST instruction.
+		 * However, for the instruction to succeed it must
+		 * target the permissions of a lesser privileged (higher
+		 * numbered) VMPL level, so use VMPL1.
+		 */
+		u64 attrs = 1;
+
+		if (make_vmsa)
+			attrs |= RMPADJUST_VMSA_PAGE_BIT;
+
+		ret = rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
+	}
+
+	return ret;
+}
+
+static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
+{
+	int err;
+
+	err = snp_set_vmsa(vmsa, NULL, apic_id, false);
+	if (err)
+		pr_err("clear VMSA page failed (%u), leaking page\n", err);
+	else
+		free_page((unsigned long)vmsa);
+}
+
 static void set_pte_enc(pte_t *kpte, int level, void *va)
 {
 	struct pte_enc_desc d = {
···
 		data = per_cpu(runtime_data, cpu);
 		ghcb = (unsigned long)&data->ghcb_page;
 
-		if (addr <= ghcb && ghcb <= addr + size) {
+		/* Handle the case of a huge page containing the GHCB page */
+		if (addr <= ghcb && ghcb < addr + size) {
 			skipped_addr = true;
 			break;
 		}
···
 		pr_warn("Failed to stop shared<->private conversions\n");
 }
 
+/*
+ * Shutdown all APs except the one handling kexec/kdump and clearing
+ * the VMSA tag on AP's VMSA pages as they are not being used as
+ * VMSA page anymore.
+ */
+static void shutdown_all_aps(void)
+{
+	struct sev_es_save_area *vmsa;
+	int apic_id, this_cpu, cpu;
+
+	this_cpu = get_cpu();
+
+	/*
+	 * APs are already in HLT loop when enc_kexec_finish() callback
+	 * is invoked.
+	 */
+	for_each_present_cpu(cpu) {
+		vmsa = per_cpu(sev_vmsa, cpu);
+
+		/*
+		 * The BSP or offlined APs do not have guest allocated VMSA
+		 * and there is no need to clear the VMSA tag for this page.
+		 */
+		if (!vmsa)
+			continue;
+
+		/*
+		 * Cannot clear the VMSA tag for the currently running vCPU.
+		 */
+		if (this_cpu == cpu) {
+			unsigned long pa;
+			struct page *p;
+
+			pa = __pa(vmsa);
+			/*
+			 * Mark the VMSA page of the running vCPU as offline
+			 * so that is excluded and not touched by makedumpfile
+			 * while generating vmcore during kdump.
+			 */
+			p = pfn_to_online_page(pa >> PAGE_SHIFT);
+			if (p)
+				__SetPageOffline(p);
+			continue;
+		}
+
+		apic_id = cpuid_to_apicid[cpu];
+
+		/*
+		 * Issue AP destroy to ensure AP gets kicked out of guest mode
+		 * to allow using RMPADJUST to remove the VMSA tag on it's
+		 * VMSA page.
+		 */
+		vmgexit_ap_control(SVM_VMGEXIT_AP_DESTROY, vmsa, apic_id);
+		snp_cleanup_vmsa(vmsa, apic_id);
+	}
+
+	put_cpu();
+}
+
 void snp_kexec_finish(void)
 {
 	struct sev_es_runtime_data *data;
+	unsigned long size, addr;
 	unsigned int level, cpu;
-	unsigned long size;
 	struct ghcb *ghcb;
 	pte_t *pte;
 
···
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
+
+	shutdown_all_aps();
 
 	unshare_all_memory();
 
···
 		ghcb = &data->ghcb_page;
 		pte = lookup_address((unsigned long)ghcb, &level);
 		size = page_level_size(level);
-		set_pte_enc(pte, level, (void *)ghcb);
-		snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
+		/* Handle the case of a huge page containing the GHCB page */
+		addr = (unsigned long)ghcb & page_level_mask(level);
+		set_pte_enc(pte, level, (void *)addr);
+		snp_set_memory_private(addr, (size / PAGE_SIZE));
 	}
-}
-
-static int snp_set_vmsa(void *va, void *caa, int apic_id, bool make_vmsa)
-{
-	int ret;
-
-	if (snp_vmpl) {
-		struct svsm_call call = {};
-		unsigned long flags;
-
-		local_irq_save(flags);
-
-		call.caa = this_cpu_read(svsm_caa);
-		call.rcx = __pa(va);
-
-		if (make_vmsa) {
-			/* Protocol 0, Call ID 2 */
-			call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
-			call.rdx = __pa(caa);
-			call.r8  = apic_id;
-		} else {
-			/* Protocol 0, Call ID 3 */
-			call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
-		}
-
-		ret = svsm_perform_call_protocol(&call);
-
-		local_irq_restore(flags);
-	} else {
-		/*
-		 * If the kernel runs at VMPL0, it can change the VMSA
-		 * bit for a page using the RMPADJUST instruction.
-		 * However, for the instruction to succeed it must
-		 * target the permissions of a lesser privileged (higher
-		 * numbered) VMPL level, so use VMPL1.
-		 */
-		u64 attrs = 1;
-
-		if (make_vmsa)
-			attrs |= RMPADJUST_VMSA_PAGE_BIT;
-
-		ret = rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
-	}
-
-	return ret;
 }
 
 #define __ATTR_BASE		(SVM_SELECTOR_P_MASK | SVM_SELECTOR_S_MASK)
···
 	return page_address(p + 1);
 }
 
-static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
-{
-	int err;
-
-	err = snp_set_vmsa(vmsa, NULL, apic_id, false);
-	if (err)
-		pr_err("clear VMSA page failed (%u), leaking page\n", err);
-	else
-		free_page((unsigned long)vmsa);
-}
-
 static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
 {
 	struct sev_es_save_area *cur_vmsa, *vmsa;
-	struct ghcb_state state;
 	struct svsm_ca *caa;
-	unsigned long flags;
-	struct ghcb *ghcb;
 	u8 sipi_vector;
 	int cpu, ret;
 	u64 cr4;
···
 	}
 
 	/* Issue VMGEXIT AP Creation NAE event */
-	local_irq_save(flags);
-
-	ghcb = __sev_get_ghcb(&state);
-
-	vc_ghcb_invalidate(ghcb);
-	ghcb_set_rax(ghcb, vmsa->sev_features);
-	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
-	ghcb_set_sw_exit_info_1(ghcb,
-				((u64)apic_id << 32)	|
-				((u64)snp_vmpl << 16)	|
-				SVM_VMGEXIT_AP_CREATE);
-	ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));
-
-	sev_es_wr_ghcb_msr(__pa(ghcb));
-	VMGEXIT();
-
-	if (!ghcb_sw_exit_info_1_is_valid(ghcb) ||
-	    lower_32_bits(ghcb->save.sw_exit_info_1)) {
-		pr_err("SNP AP Creation error\n");
-		ret = -EINVAL;
-	}
-
-	__sev_put_ghcb(&state);
-
-	local_irq_restore(flags);
-
-	/* Perform cleanup if there was an error */
+	ret = vmgexit_ap_control(SVM_VMGEXIT_AP_CREATE, vmsa, apic_id);
 	if (ret) {
 		snp_cleanup_vmsa(vmsa, apic_id);
 		vmsa = NULL;
arch/x86/entry/entry_64.S (+17 -3)

···
  * ORC to unwind properly.
  *
  * The alignment is for performance and not for safety, and may be safely
- * refactored in the future if needed.
+ * refactored in the future if needed. The .skips are for safety, to ensure
+ * that all RETs are in the second half of a cacheline to mitigate Indirect
+ * Target Selection, rather than taking the slowpath via its_return_thunk.
  */
 SYM_FUNC_START(clear_bhb_loop)
 	ANNOTATE_NOENDBR
···
 	call	1f
 	jmp	5f
 	.align 64, 0xcc
+	/*
+	 * Shift instructions so that the RET is in the upper half of the
+	 * cacheline and don't take the slowpath to its_return_thunk.
+	 */
+	.skip 32 - (.Lret1 - 1f), 0xcc
 	ANNOTATE_INTRA_FUNCTION_CALL
 1:	call	2f
-	RET
+.Lret1:	RET
 	.align 64, 0xcc
+	/*
+	 * As above shift instructions for RET at .Lret2 as well.
+	 *
+	 * This should be ideally be: .skip 32 - (.Lret2 - 2f), 0xcc
+	 * but some Clang versions (e.g. 18) don't like this.
+	 */
+	.skip 32 - 18, 0xcc
 2:	movl	$5, %eax
 3:	jmp	4f
 	nop
···
 	jnz	3b
 	sub	$1, %ecx
 	jnz	1b
-	RET
+.Lret2:	RET
 5:	lfence
 	pop	%rbp
 	RET
arch/x86/events/intel/ds.c (+5 -4)

···
 			       setup_pebs_fixed_sample_data);
 }
 
-static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int size)
+static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
 {
+	u64 pebs_enabled = cpuc->pebs_enabled & mask;
 	struct perf_event *event;
 	int bit;
 
···
 	 * It needs to call intel_pmu_save_and_restart_reload() to
 	 * update the event->count for this case.
 	 */
-	for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled, size) {
+	for_each_set_bit(bit, (unsigned long *)&pebs_enabled, X86_PMC_IDX_MAX) {
 		event = cpuc->events[bit];
 		if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
 			intel_pmu_save_and_restart_reload(event, 0);
···
 	}
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, size);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
 
···
 		(hybrid(cpuc->pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED);
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
 
···
 #define X86_FEATURE_CENTAUR_MCR		( 3*32+ 3) /* "centaur_mcr" Centaur MCRs (= MTRRs) */
 #define X86_FEATURE_K8			( 3*32+ 4) /* Opteron, Athlon64 */
 #define X86_FEATURE_ZEN5		( 3*32+ 5) /* CPU based on Zen5 microarchitecture */
-/* Free					( 3*32+ 6) */
+#define X86_FEATURE_ZEN6		( 3*32+ 6) /* CPU based on Zen6 microarchitecture */
 /* Free					( 3*32+ 7) */
 #define X86_FEATURE_CONSTANT_TSC	( 3*32+ 8) /* "constant_tsc" TSC ticks at a constant rate */
 #define X86_FEATURE_UP			( 3*32+ 9) /* "up" SMP kernel running on UP */
···
 #define X86_FEATURE_AMD_HETEROGENEOUS_CORES (21*32 + 6) /* Heterogeneous Core Topology */
 #define X86_FEATURE_AMD_WORKLOAD_CLASS	(21*32 + 7) /* Workload Classification */
 #define X86_FEATURE_PREFER_YMM		(21*32 + 8) /* Avoid ZMM registers due to downclocking */
+#define X86_FEATURE_INDIRECT_THUNK_ITS	(21*32 + 9) /* Use thunk for indirect branches in lower half of cacheline */
 
 /*
  * BUG word(s)
···
 #define X86_BUG_BHI			X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
 #define X86_BUG_IBPB_NO_RET		X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
 #define X86_BUG_SPECTRE_V2_USER		X86_BUG(1*32 + 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
+#define X86_BUG_ITS			X86_BUG(1*32 + 6) /* "its" CPU is affected by Indirect Target Selection */
+#define X86_BUG_ITS_NATIVE_ONLY		X86_BUG(1*32 + 7) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
 #endif /* _ASM_X86_CPUFEATURES_H */
+8
arch/x86/include/asm/msr-index.h
···
 					 * VERW clears CPU Register
 					 * File.
 					 */
+#define ARCH_CAP_ITS_NO			BIT_ULL(62)	/*
+							 * Not susceptible to
+							 * Indirect Target Selection.
+							 * This bit is not set by
+							 * HW, but is synthesized by
+							 * VMMs for guests to know
+							 * their affected status.
+							 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
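The new capability is bit 62 of `MSR_IA32_ARCH_CAPABILITIES`, which only fits a 64-bit constant; a plain `1 << 62` would overflow a 32-bit `int`, which is exactly what `BIT_ULL()` avoids. A self-contained sketch (the macro body mirrors the kernel's, redefined here so the snippet stands alone):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for the kernel's BIT_ULL(): force a 64-bit shift. */
#define BIT_ULL(nr) (1ULL << (nr))

#define ARCH_CAP_ITS_NO BIT_ULL(62)

/* Illustrative check a guest might perform on the capability word. */
static int guest_its_unaffected(uint64_t arch_caps)
{
        return (arch_caps & ARCH_CAP_ITS_NO) != 0;
}
```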
···
 		break;
 
 	case RET:
-		if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+		if (cpu_wants_rethunk_at(insn))
 			code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
 		else
 			code = &retinsn;
···
 	case JCC:
 		if (!func) {
 			func = __static_call_return;
-			if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
+			if (cpu_wants_rethunk())
 				func = x86_return_thunk;
 		}
+10
arch/x86/kernel/vmlinux.lds.S
···
 	   "SRSO function pair won't alias");
 #endif
 
+#if defined(CONFIG_MITIGATION_ITS) && !defined(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)
+. = ASSERT(__x86_indirect_its_thunk_rax & 0x20, "__x86_indirect_thunk_rax not in second half of cacheline");
+. = ASSERT(((__x86_indirect_its_thunk_rcx - __x86_indirect_its_thunk_rax) % 64) == 0, "Indirect thunks are not cacheline apart");
+. = ASSERT(__x86_indirect_its_thunk_array == __x86_indirect_its_thunk_rax, "Gap in ITS thunk array");
+#endif
+
+#if defined(CONFIG_MITIGATION_ITS) && !defined(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)
+. = ASSERT(its_return_thunk & 0x20, "its_return_thunk not in second half of cacheline");
+#endif
+
 #endif /* CONFIG_X86_64 */
 
 /*
+3-1
arch/x86/kvm/x86.c
···
 	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
 	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
 	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO | \
-	 ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR | ARCH_CAP_BHI_NO)
+	 ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR | ARCH_CAP_BHI_NO | ARCH_CAP_ITS_NO)
 
 static u64 kvm_get_arch_capabilities(void)
 {
···
 		data |= ARCH_CAP_MDS_NO;
 	if (!boot_cpu_has_bug(X86_BUG_RFDS))
 		data |= ARCH_CAP_RFDS_NO;
+	if (!boot_cpu_has_bug(X86_BUG_ITS))
+		data |= ARCH_CAP_ITS_NO;
 
 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
 		/*
+48
arch/x86/lib/retpoline.S
···
 
 #endif /* CONFIG_MITIGATION_CALL_DEPTH_TRACKING */
 
+#ifdef CONFIG_MITIGATION_ITS
+
+.macro ITS_THUNK reg
+
+/*
+ * If CFI paranoid is used then the ITS thunk starts with opcodes (0xea; jne 1b)
+ * that complete the fineibt_paranoid caller sequence.
+ */
+1:	.byte 0xea
+SYM_INNER_LABEL(__x86_indirect_paranoid_thunk_\reg, SYM_L_GLOBAL)
+	UNWIND_HINT_UNDEFINED
+	ANNOTATE_NOENDBR
+	jne 1b
+SYM_INNER_LABEL(__x86_indirect_its_thunk_\reg, SYM_L_GLOBAL)
+	UNWIND_HINT_UNDEFINED
+	ANNOTATE_NOENDBR
+	ANNOTATE_RETPOLINE_SAFE
+	jmp *%\reg
+	int3
+	.align 32, 0xcc		/* fill to the end of the line */
+	.skip 32 - (__x86_indirect_its_thunk_\reg - 1b), 0xcc /* skip to the next upper half */
+.endm
+
+/* ITS mitigation requires thunks be aligned to upper half of cacheline */
+.align 64, 0xcc
+.skip 29, 0xcc
+
+#define GEN(reg) ITS_THUNK reg
+#include <asm/GEN-for-each-reg.h>
+#undef GEN
+
+	.align 64, 0xcc
+SYM_FUNC_ALIAS(__x86_indirect_its_thunk_array, __x86_indirect_its_thunk_rax)
+SYM_CODE_END(__x86_indirect_its_thunk_array)
+
+.align 64, 0xcc
+.skip 32, 0xcc
+SYM_CODE_START(its_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_CODE_END(its_return_thunk)
+EXPORT_SYMBOL(its_return_thunk)
+
+#endif /* CONFIG_MITIGATION_ITS */
+
 /*
  * This function name is magical and is used by -mfunction-return=thunk-extern
  * for the compiler to generate JMPs to it.
+4-1
arch/x86/mm/init_32.c
···
 #include <linux/initrd.h>
 #include <linux/cpumask.h>
 #include <linux/gfp.h>
+#include <linux/execmem.h>
 
 #include <asm/asm.h>
 #include <asm/bios_ebda.h>
···
 	"only %luMB highmem pages available, ignoring highmem size of %luMB!\n"
 
 #define MSG_HIGHMEM_TRIMMED \
-	"Warning: only 4GB will be used. Support for for CONFIG_HIGHMEM64G was removed!\n"
+	"Warning: only 4GB will be used. Support for CONFIG_HIGHMEM64G was removed!\n"
 /*
  * We have more RAM than fits into lowmem - we try to put it into
  * highmem, also taking the highmem=x boot parameter into account:
···
 	set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
 	pr_info("Write protecting kernel text and read-only data: %luk\n",
 		size >> 10);
+
+	execmem_cache_make_ro();
 
 	kernel_set_to_readonly = 1;
 
···
  *
  * These are the usage functions:
  *
- * tpm2_start_auth_session() which allocates the opaque auth structure
- *	and gets a session from the TPM. This must be called before
- *	any of the following functions. The session is protected by a
- *	session_key which is derived from a random salt value
- *	encrypted to the NULL seed.
  * tpm2_end_auth_session() kills the session and frees the resources.
  *	Under normal operation this function is done by
  *	tpm_buf_check_hmac_response(), so this is only to be used on
···
 }
 
 /**
- * tpm2_start_auth_session() - create a HMAC authentication session with the TPM
- * @chip: the TPM chip structure to create the session with
+ * tpm2_start_auth_session() - Create an HMAC authentication session
+ * @chip: A TPM chip
  *
- * This function loads the NULL seed from its saved context and starts
- * an authentication session on the null seed, fills in the
- * @chip->auth structure to contain all the session details necessary
- * for performing the HMAC, encrypt and decrypt operations and
- * returns. The NULL seed is flushed before this function returns.
+ * Loads the ephemeral key (null seed), and starts an HMAC authenticated
+ * session. The null seed is flushed before the return.
  *
- * Return: zero on success or actual error encountered.
+ * Returns zero on success, or a POSIX error code.
  */
 int tpm2_start_auth_session(struct tpm_chip *chip)
 {
···
 	/* hash algorithm for session */
 	tpm_buf_append_u16(&buf, TPM_ALG_SHA256);
 
-	rc = tpm_transmit_cmd(chip, &buf, 0, "start auth session");
+	rc = tpm_ret_to_err(tpm_transmit_cmd(chip, &buf, 0, "StartAuthSession"));
 	tpm2_flush_context(chip, null_key);
 
 	if (rc == TPM2_RC_SUCCESS)
···
 		count++;
 
 	dma_resv_list_set(fobj, i, fence, usage);
-	/* pointer update must be visible before we extend the num_fences */
-	smp_store_mb(fobj->num_fences, count);
+	/* fence update must be visible before we extend the num_fences */
+	smp_wmb();
+	fobj->num_fences = count;
 }
 EXPORT_SYMBOL(dma_resv_add_fence);
 
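The hunk above replaces `smp_store_mb()` (a store plus a full barrier) with `smp_wmb()` followed by a plain store, which is the cheaper and correct ordering for a publish: make the fence slot visible before extending `num_fences`. In portable C11, the same publish/consume pattern is a release store paired with an acquire load. A userspace analog (not the kernel primitives; names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

#define MAX_FENCES 8

static int fences[MAX_FENCES];
static atomic_int num_fences;

/* Publisher: fill the slot first, then extend the visible count with
 * release ordering (the analog of smp_wmb() before the count store). */
static void add_fence(int fence)
{
        int count = atomic_load_explicit(&num_fences, memory_order_relaxed);

        fences[count] = fence;                       /* slot update first */
        atomic_store_explicit(&num_fences, count + 1,
                              memory_order_release); /* then publish */
}

/* Reader: an acquire load guarantees slots below 'count' are visible. */
static int read_last_fence(void)
{
        int count = atomic_load_explicit(&num_fences, memory_order_acquire);

        return count ? fences[count - 1] : -1;
}
```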
+10-9
drivers/dma/amd/ptdma/ptdma-dmaengine.c
···
 	struct pt_dma_chan *chan;
 	unsigned long flags;
 
+	if (!desc)
+		return;
+
 	dma_chan = desc->vd.tx.chan;
 	chan = to_pt_chan(dma_chan);
···
 		desc->status = DMA_ERROR;
 
 	spin_lock_irqsave(&chan->vc.lock, flags);
-	if (desc) {
-		if (desc->status != DMA_COMPLETE) {
-			if (desc->status != DMA_ERROR)
-				desc->status = DMA_COMPLETE;
+	if (desc->status != DMA_COMPLETE) {
+		if (desc->status != DMA_ERROR)
+			desc->status = DMA_COMPLETE;
 
-			dma_cookie_complete(tx_desc);
-			dma_descriptor_unmap(tx_desc);
-		} else {
-			tx_desc = NULL;
-		}
+		dma_cookie_complete(tx_desc);
+		dma_descriptor_unmap(tx_desc);
+	} else {
+		tx_desc = NULL;
 	}
 	spin_unlock_irqrestore(&chan->vc.lock, flags);
 
···
 	u32 residue_diff;
 	ktime_t time_diff;
 	unsigned long delay;
+	unsigned long flags;
 
 	while (1) {
+		spin_lock_irqsave(&uc->vc.lock, flags);
+
 		if (uc->desc) {
 			/* Get previous residue and time stamp */
 			residue_diff = uc->tx_drain.residue;
···
 				break;
 			}
 
+			spin_unlock_irqrestore(&uc->vc.lock, flags);
+
 			usleep_range(ktime_to_us(delay),
 				     ktime_to_us(delay) + 10);
 			continue;
···
 
 		break;
 	}
+
+	spin_unlock_irqrestore(&uc->vc.lock, flags);
 }
 
 static irqreturn_t udma_ring_irq_handler(int irq, void *data)
···
 					    struct of_dma *ofdma)
 {
 	struct udma_dev *ud = ofdma->of_dma_data;
-	dma_cap_mask_t mask = ud->ddev.cap_mask;
 	struct udma_filter_param filter_param;
 	struct dma_chan *chan;
···
 		}
 	}
 
-	chan = __dma_request_channel(&mask, udma_dma_filter_fn, &filter_param,
+	chan = __dma_request_channel(&ud->ddev.cap_mask, udma_dma_filter_fn, &filter_param,
 				     ofdma->of_node);
 	if (!chan) {
 		dev_err(ud->dev, "get channel fail in %s.\n", __func__);
+6
drivers/gpio/gpio-pca953x.c
···
 
 	guard(mutex)(&chip->i2c_lock);
 
+	if (chip->client->irq > 0)
+		enable_irq(chip->client->irq);
 	regcache_cache_only(chip->regmap, false);
 	regcache_mark_dirty(chip->regmap);
 	ret = pca953x_regcache_sync(chip);
···
 static void pca953x_save_context(struct pca953x_chip *chip)
 {
 	guard(mutex)(&chip->i2c_lock);
+
+	/* Disable IRQ to prevent early triggering while regmap "cache only" is on */
+	if (chip->client->irq > 0)
+		disable_irq(chip->client->irq);
 	regcache_cache_only(chip->regmap, true);
 }
 
+10-2
drivers/gpio/gpio-virtuser.c
···
 	char buf[32], *trimmed;
 	int ret, dir, val = 0;
 
-	ret = simple_write_to_buffer(buf, sizeof(buf), ppos, user_buf, count);
+	if (count >= sizeof(buf))
+		return -EINVAL;
+
+	ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, user_buf, count);
 	if (ret < 0)
 		return ret;
+
+	buf[ret] = '\0';
 
 	trimmed = strim(buf);
 
···
 	char buf[GPIO_VIRTUSER_NAME_BUF_LEN + 2];
 	int ret;
 
+	if (count >= sizeof(buf))
+		return -EINVAL;
+
 	ret = simple_write_to_buffer(buf, GPIO_VIRTUSER_NAME_BUF_LEN, ppos,
 				    user_buf, count);
 	if (ret < 0)
 		return ret;
 
-	buf[strlen(buf) - 1] = '\0';
+	buf[ret] = '\0';
 
 	ret = gpiod_set_consumer_name(data->ad.desc, buf);
 	if (ret)
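Both hunks fix the same class of bug: `simple_write_to_buffer()` does not NUL-terminate, and the old code either assumed termination or ran `strlen()` over potentially uninitialized stack memory. The safe shape is to reject oversized input, bound the copy to leave room for the terminator, and terminate at the returned length. A userspace sketch of that pattern (the helper is hypothetical, not a kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy up to 'count' bytes of input into buf (capacity bufsz), always
 * leaving a NUL terminator. Returns bytes stored, or -1 if the input
 * cannot fit -- mirroring the new count >= sizeof(buf) check. */
static int copy_user_string(char *buf, size_t bufsz,
                            const char *user_buf, size_t count)
{
        if (count >= bufsz)
                return -1;              /* reject instead of truncating */

        memcpy(buf, user_buf, count);   /* bounded copy */
        buf[count] = '\0';              /* terminate at the returned length */
        return (int)count;
}
```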
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
···
 	struct drm_exec exec;
 	int r;
 
-	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
+	drm_exec_init(&exec, 0, 0);
 	drm_exec_until_all_locked(&exec) {
 		r = amdgpu_vm_lock_pd(vm, &exec, 0);
 		if (likely(!r))
+12
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
···
 	adev->gmc.vram_type = vram_type;
 	adev->gmc.vram_vendor = vram_vendor;
 
+	/* The mall_size is already calculated as mall_size_per_umc * num_umc.
+	 * However, for gfx1151, which features a 2-to-1 UMC mapping,
+	 * the result must be multiplied by 2 to determine the actual mall size.
+	 */
+	switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+	case IP_VERSION(11, 5, 1):
+		adev->gmc.mall_size *= 2;
+		break;
+	default:
+		break;
+	}
+
 	switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
 	case IP_VERSION(11, 0, 0):
 	case IP_VERSION(11, 0, 1):
+8
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
···
 			ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT |
 			VCN_RB1_DB_CTRL__EN_MASK);
 
+	/* Keeping one read-back to ensure all register writes are done, otherwise
+	 * it may introduce race conditions */
+	RREG32_SOC15(VCN, inst_idx, regVCN_RB1_DB_CTRL);
+
 	return 0;
 }
···
 		tmp |= VCN_RB_ENABLE__RB1_EN_MASK;
 		WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
 		fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
+
+		/* Keeping one read-back to ensure all register writes are done, otherwise
+		 * it may introduce race conditions */
+		RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
 
 	return 0;
 }
+4-1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 static inline bool is_dc_timing_adjust_needed(struct dm_crtc_state *old_state,
 					      struct dm_crtc_state *new_state)
 {
+	if (new_state->stream->adjust.timing_adjust_pending)
+		return true;
 	if (new_state->freesync_config.state ==  VRR_STATE_ACTIVE_FIXED)
 		return true;
 	else if (amdgpu_dm_crtc_vrr_active(old_state) != amdgpu_dm_crtc_vrr_active(new_state))
···
 	/* The reply is stored in the top nibble of the command. */
 	payload->reply[0] = (adev->dm.dmub_notify->aux_reply.command >> 4) & 0xF;
 
-	if (!payload->write && p_notify->aux_reply.length)
+	/* write req may receive a byte indicating partially written number as well */
+	if (p_notify->aux_reply.length)
 		memcpy(payload->data, p_notify->aux_reply.data,
 		       p_notify->aux_reply.length);
 
···
 			dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
 	}
 
-	if (pipe_ctx->update_flags.raw ||
-	    (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
-	    pipe_ctx->stream->update_flags.raw)
+	if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw ||
+	    pipe_ctx->plane_state->update_flags.raw ||
+	    pipe_ctx->stream->update_flags.raw))
 		dc->hwss.update_dchubp_dpp(dc, pipe_ctx, context);
 
 	if (pipe_ctx->plane_state && (pipe_ctx->update_flags.bits.enable ||
+11-2
drivers/gpu/drm/amd/display/dc/link/link_dpms.c
···
 void link_set_all_streams_dpms_off_for_link(struct dc_link *link)
 {
 	struct pipe_ctx *pipes[MAX_PIPES];
+	struct dc_stream_state *streams[MAX_PIPES];
 	struct dc_state *state = link->dc->current_state;
 	uint8_t count;
 	int i;
···
 
 	link_get_master_pipes_with_dpms_on(link, state, &count, pipes);
 
+	/* The subsequent call to dc_commit_updates_for_stream for a full update
+	 * will release the current state and swap to a new state. Releasing the
+	 * current state results in the stream pointers in the pipe_ctx structs
+	 * being zero'd. Hence, cache all streams prior to dc_commit_updates_for_stream.
+	 */
+	for (i = 0; i < count; i++)
+		streams[i] = pipes[i]->stream;
+
 	for (i = 0; i < count; i++) {
-		stream_update.stream = pipes[i]->stream;
+		stream_update.stream = streams[i];
 		dc_commit_updates_for_stream(link->ctx->dc, NULL, 0,
-					     pipes[i]->stream, &stream_update,
+					     streams[i], &stream_update,
 					     state);
 	}
 
+32-5
drivers/gpu/drm/drm_gpusvm.c
···
 	lockdep_assert_held(&gpusvm->notifier_lock);
 
 	if (range->flags.has_dma_mapping) {
+		struct drm_gpusvm_range_flags flags = {
+			.__flags = range->flags.__flags,
+		};
+
 		for (i = 0, j = 0; i < npages; j++) {
 			struct drm_pagemap_device_addr *addr = &range->dma_addr[j];
···
 					       dev, *addr);
 			i += 1 << addr->order;
 		}
-		range->flags.has_devmem_pages = false;
-		range->flags.has_dma_mapping = false;
+
+		/* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
+		flags.has_devmem_pages = false;
+		flags.has_dma_mapping = false;
+		WRITE_ONCE(range->flags.__flags, flags.__flags);
+
 		range->dpagemap = NULL;
 	}
 }
···
 	int err = 0;
 	struct dev_pagemap *pagemap;
 	struct drm_pagemap *dpagemap;
+	struct drm_gpusvm_range_flags flags;
 
 retry:
 	hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
···
 	 */
 	drm_gpusvm_notifier_lock(gpusvm);
 
-	if (range->flags.unmapped) {
+	flags.__flags = range->flags.__flags;
+	if (flags.unmapped) {
 		drm_gpusvm_notifier_unlock(gpusvm);
 		err = -EFAULT;
 		goto err_free;
···
 				goto err_unmap;
 			}
 
+			if (ctx->devmem_only) {
+				err = -EFAULT;
+				goto err_unmap;
+			}
+
 			addr = dma_map_page(gpusvm->drm->dev,
 					    page, 0,
 					    PAGE_SIZE << order,
···
 		}
 		i += 1 << order;
 		num_dma_mapped = i;
-		range->flags.has_dma_mapping = true;
+		flags.has_dma_mapping = true;
 	}
 
 	if (zdd) {
-		range->flags.has_devmem_pages = true;
+		flags.has_devmem_pages = true;
 		range->dpagemap = dpagemap;
 	}
+
+	/* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
+	WRITE_ONCE(range->flags.__flags, flags.__flags);
 
 	drm_gpusvm_notifier_unlock(gpusvm);
 	kvfree(pfns);
···
 		goto err_finalize;
 
 	/* Upon success bind devmem allocation to range and zdd */
+	devmem_allocation->timeslice_expiration = get_jiffies_64() +
+		msecs_to_jiffies(ctx->timeslice_ms);
 	zdd->devmem_allocation = devmem_allocation;	/* Owns ref */
 
 err_finalize:
···
 	unsigned long start, end;
 	void *buf;
 	int i, err = 0;
+
+	if (page) {
+		zdd = page->zone_device_data;
+		if (time_before64(get_jiffies_64(),
+				  zdd->devmem_allocation->timeslice_expiration))
+			return 0;
+	}
 
 	start = ALIGN_DOWN(fault_addr, size);
 	end = ALIGN(fault_addr + 1, size);
+2-2
drivers/gpu/drm/meson/meson_encoder_hdmi.c
···
 	unsigned long long venc_freq;
 	unsigned long long hdmi_freq;
 
-	vclk_freq = mode->clock * 1000;
+	vclk_freq = mode->clock * 1000ULL;
 
 	/* For 420, pixel clock is half unlike venc clock */
 	if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
···
 	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
 	struct meson_drm *priv = encoder_hdmi->priv;
 	bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
-	unsigned long long clock = mode->clock * 1000;
+	unsigned long long clock = mode->clock * 1000ULL;
 	unsigned long long phy_freq;
 	unsigned long long vclk_freq;
 	unsigned long long venc_freq;
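The `1000ULL` suffix matters because `mode->clock` is a 32-bit value in kHz: with a plain `1000`, the multiply happens in 32-bit arithmetic and wraps or overflows *before* the result is widened into the `unsigned long long`. A sketch of the difference, using an unsigned synthetic value (so the wrap is well-defined for demonstration; the helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a mode clock in kHz to Hz with a 64-bit multiply, as the fix does:
 * the ULL operand promotes the whole expression to 64 bits. */
static uint64_t clock_khz_to_hz(uint32_t clock_khz)
{
        return clock_khz * 1000ULL;
}

/* The buggy shape: the multiply is done in 32 bits and wraps, and only
 * then is the truncated result assigned to the wider variable. */
static uint64_t clock_khz_to_hz_buggy(uint32_t clock_khz)
{
        uint32_t hz32 = clock_khz * 1000U;      /* wraps above ~4.29 GHz */
        return hz32;
}
```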
···
 #include "xe_hw_fence.h"
 #include "xe_map.h"
 #include "xe_memirq.h"
+#include "xe_mmio.h"
 #include "xe_sriov.h"
 #include "xe_trace_lrc.h"
 #include "xe_vm.h"
···
 #define LRC_START_SEQNO_PPHWSP_OFFSET (LRC_SEQNO_PPHWSP_OFFSET + 8)
 #define LRC_CTX_JOB_TIMESTAMP_OFFSET (LRC_START_SEQNO_PPHWSP_OFFSET + 8)
 #define LRC_PARALLEL_PPHWSP_OFFSET 2048
+#define LRC_ENGINE_ID_PPHWSP_OFFSET 2096
 #define LRC_PPHWSP_SIZE SZ_4K
 
 u32 xe_lrc_regs_offset(struct xe_lrc *lrc)
···
 
 static u32 __xe_lrc_ctx_job_timestamp_offset(struct xe_lrc *lrc)
 {
-	/* The start seqno is stored in the driver-defined portion of PPHWSP */
+	/* This is stored in the driver-defined portion of PPHWSP */
 	return xe_lrc_pphwsp_offset(lrc) + LRC_CTX_JOB_TIMESTAMP_OFFSET;
 }
 
···
 	return xe_lrc_pphwsp_offset(lrc) + LRC_PARALLEL_PPHWSP_OFFSET;
 }
 
+static inline u32 __xe_lrc_engine_id_offset(struct xe_lrc *lrc)
+{
+	return xe_lrc_pphwsp_offset(lrc) + LRC_ENGINE_ID_PPHWSP_OFFSET;
+}
+
 static u32 __xe_lrc_ctx_timestamp_offset(struct xe_lrc *lrc)
 {
 	return __xe_lrc_regs_offset(lrc) + CTX_TIMESTAMP * sizeof(u32);
+}
+
+static u32 __xe_lrc_ctx_timestamp_udw_offset(struct xe_lrc *lrc)
+{
+	return __xe_lrc_regs_offset(lrc) + CTX_TIMESTAMP_UDW * sizeof(u32);
 }
 
 static inline u32 __xe_lrc_indirect_ring_offset(struct xe_lrc *lrc)
···
 DECL_MAP_ADDR_HELPERS(start_seqno)
 DECL_MAP_ADDR_HELPERS(ctx_job_timestamp)
 DECL_MAP_ADDR_HELPERS(ctx_timestamp)
+DECL_MAP_ADDR_HELPERS(ctx_timestamp_udw)
 DECL_MAP_ADDR_HELPERS(parallel)
 DECL_MAP_ADDR_HELPERS(indirect_ring)
+DECL_MAP_ADDR_HELPERS(engine_id)
 
 #undef DECL_MAP_ADDR_HELPERS
 
···
 }
 
 /**
+ * xe_lrc_ctx_timestamp_udw_ggtt_addr() - Get ctx timestamp udw GGTT address
+ * @lrc: Pointer to the lrc.
+ *
+ * Returns: ctx timestamp udw GGTT address
+ */
+u32 xe_lrc_ctx_timestamp_udw_ggtt_addr(struct xe_lrc *lrc)
+{
+	return __xe_lrc_ctx_timestamp_udw_ggtt_addr(lrc);
+}
+
+/**
  * xe_lrc_ctx_timestamp() - Read ctx timestamp value
  * @lrc: Pointer to the lrc.
  *
  * Returns: ctx timestamp value
  */
-u32 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
+u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
 {
 	struct xe_device *xe = lrc_to_xe(lrc);
 	struct iosys_map map;
+	u32 ldw, udw = 0;
 
 	map = __xe_lrc_ctx_timestamp_map(lrc);
-	return xe_map_read32(xe, &map);
+	ldw = xe_map_read32(xe, &map);
+
+	if (xe->info.has_64bit_timestamp) {
+		map = __xe_lrc_ctx_timestamp_udw_map(lrc);
+		udw = xe_map_read32(xe, &map);
+	}
+
+	return (u64)udw << 32 | ldw;
 }
 
 /**
···
 
 static void xe_lrc_set_ppgtt(struct xe_lrc *lrc, struct xe_vm *vm)
 {
-	u64 desc = xe_vm_pdp4_descriptor(vm, lrc->tile);
+	u64 desc = xe_vm_pdp4_descriptor(vm, gt_to_tile(lrc->gt));
 
 	xe_lrc_write_ctx_reg(lrc, CTX_PDP0_UDW, upper_32_bits(desc));
 	xe_lrc_write_ctx_reg(lrc, CTX_PDP0_LDW, lower_32_bits(desc));
···
 	xe_bo_unpin(lrc->bo);
 	xe_bo_unlock(lrc->bo);
 	xe_bo_put(lrc->bo);
+	xe_bo_unpin_map_no_vm(lrc->bb_per_ctx_bo);
+}
+
+/*
+ * xe_lrc_setup_utilization() - Setup wa bb to assist in calculating active
+ * context run ticks.
+ * @lrc: Pointer to the lrc.
+ *
+ * Context Timestamp (CTX_TIMESTAMP) in the LRC accumulates the run ticks of the
+ * context, but only gets updated when the context switches out. In order to
+ * check how long a context has been active before it switches out, two things
+ * are required:
+ *
+ * (1) Determine if the context is running:
+ * To do so, we program the WA BB to set an initial value for CTX_TIMESTAMP in
+ * the LRC. The value chosen is 1 since 0 is the initial value when the LRC is
+ * initialized. During a query, we just check for this value to determine if the
+ * context is active. If the context switched out, it would overwrite this
+ * location with the actual CTX_TIMESTAMP MMIO value. Note that WA BB runs as
+ * the last part of context restore, so reusing this LRC location will not
+ * clobber anything.
+ *
+ * (2) Calculate the time that the context has been active for:
+ * The CTX_TIMESTAMP ticks only when the context is active. If a context is
+ * active, we just use the CTX_TIMESTAMP MMIO as the new value of utilization.
+ * While doing so, we need to read the CTX_TIMESTAMP MMIO for the specific
+ * engine instance. Since we do not know which instance the context is running
+ * on until it is scheduled, we also read the ENGINE_ID MMIO in the WA BB and
+ * store it in the PPHWSP.
+ */
+#define CONTEXT_ACTIVE 1ULL
+static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
+{
+	u32 *cmd;
+
+	cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
+
+	*cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
+	*cmd++ = ENGINE_ID(0).addr;
+	*cmd++ = __xe_lrc_engine_id_ggtt_addr(lrc);
+	*cmd++ = 0;
+
+	*cmd++ = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
+	*cmd++ = __xe_lrc_ctx_timestamp_ggtt_addr(lrc);
+	*cmd++ = 0;
+	*cmd++ = lower_32_bits(CONTEXT_ACTIVE);
+
+	if (lrc_to_xe(lrc)->info.has_64bit_timestamp) {
+		*cmd++ = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
+		*cmd++ = __xe_lrc_ctx_timestamp_udw_ggtt_addr(lrc);
+		*cmd++ = 0;
+		*cmd++ = upper_32_bits(CONTEXT_ACTIVE);
+	}
+
+	*cmd++ = MI_BATCH_BUFFER_END;
+
+	xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
+			     xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
 }
 
 #define PVC_CTX_ASID		(0x2e + 1)
···
 	void *init_data = NULL;
 	u32 arb_enable;
 	u32 lrc_size;
+	u32 bo_flags;
 	int err;
 
 	kref_init(&lrc->refcount);
+	lrc->gt = gt;
 	lrc->flags = 0;
 	lrc_size = ring_size + xe_gt_lrc_size(gt, hwe->class);
 	if (xe_gt_has_indirect_ring_state(gt))
 		lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
+
+	bo_flags = XE_BO_FLAG_VRAM_IF_DGFX(tile) | XE_BO_FLAG_GGTT |
+		   XE_BO_FLAG_GGTT_INVALIDATE;
 
 	/*
 	 * FIXME: Perma-pinning LRC as we don't yet support moving GGTT address
···
 	 */
 	lrc->bo = xe_bo_create_pin_map(xe, tile, vm, lrc_size,
 				       ttm_bo_type_kernel,
-				       XE_BO_FLAG_VRAM_IF_DGFX(tile) |
-				       XE_BO_FLAG_GGTT |
-				       XE_BO_FLAG_GGTT_INVALIDATE);
+				       bo_flags);
 	if (IS_ERR(lrc->bo))
 		return PTR_ERR(lrc->bo);
 
+	lrc->bb_per_ctx_bo = xe_bo_create_pin_map(xe, tile, NULL, SZ_4K,
+						  ttm_bo_type_kernel,
+						  bo_flags);
+	if (IS_ERR(lrc->bb_per_ctx_bo)) {
+		err = PTR_ERR(lrc->bb_per_ctx_bo);
+		goto err_lrc_finish;
+	}
+
 	lrc->size = lrc_size;
-	lrc->tile = gt_to_tile(hwe->gt);
 	lrc->ring.size = ring_size;
 	lrc->ring.tail = 0;
-	lrc->ctx_timestamp = 0;
 
 	xe_hw_fence_ctx_init(&lrc->fence_ctx, hwe->gt,
 			     hwe->fence_irq, hwe->name);
···
 		xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
 		_MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
 
+	lrc->ctx_timestamp = 0;
 	xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
+	if (lrc_to_xe(lrc)->info.has_64bit_timestamp)
+		xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP_UDW, 0);
 
 	if (xe->info.has_asid && vm)
 		xe_lrc_write_ctx_reg(lrc, PVC_CTX_ASID, vm->usm.asid);
···
 
 	map = __xe_lrc_start_seqno_map(lrc);
 	xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1);
+
+	xe_lrc_setup_utilization(lrc);
 
 	return 0;
···
 struct iosys_map xe_lrc_parallel_map(struct xe_lrc *lrc)
 {
 	return __xe_lrc_parallel_map(lrc);
+}
+
+/**
+ * xe_lrc_engine_id() - Read engine id value
+ * @lrc: Pointer to the lrc.
+ *
+ * Returns: context id value
+ */
+static u32 xe_lrc_engine_id(struct xe_lrc *lrc)
+{
+	struct xe_device *xe = lrc_to_xe(lrc);
+	struct iosys_map map;
+
+	map = __xe_lrc_engine_id_map(lrc);
+	return xe_map_read32(xe, &map);
 }
 
 static int instr_dw(u32 cmd_header)
···
 	snapshot->lrc_offset = xe_lrc_pphwsp_offset(lrc);
 	snapshot->lrc_size = lrc->bo->size - snapshot->lrc_offset;
 	snapshot->lrc_snapshot = NULL;
-	snapshot->ctx_timestamp = xe_lrc_ctx_timestamp(lrc);
+	snapshot->ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(lrc));
 	snapshot->ctx_job_timestamp = xe_lrc_ctx_job_timestamp(lrc);
 	return snapshot;
 }
···
 	kfree(snapshot);
 }
 
+static int get_ctx_timestamp(struct xe_lrc *lrc, u32 engine_id, u64 *reg_ctx_ts)
+{
+	u16 class = REG_FIELD_GET(ENGINE_CLASS_ID, engine_id);
+	u16 instance = REG_FIELD_GET(ENGINE_INSTANCE_ID, engine_id);
+	struct xe_hw_engine *hwe;
+	u64 val;
+
+	hwe = xe_gt_hw_engine(lrc->gt, class, instance, false);
+	if (xe_gt_WARN_ONCE(lrc->gt, !hwe || xe_hw_engine_is_reserved(hwe),
+			    "Unexpected engine class:instance %d:%d for context utilization\n",
+			    class, instance))
+		return -1;
+
+	if (lrc_to_xe(lrc)->info.has_64bit_timestamp)
+		val = xe_mmio_read64_2x32(&hwe->gt->mmio,
+					  RING_CTX_TIMESTAMP(hwe->mmio_base));
+	else
+		val = xe_mmio_read32(&hwe->gt->mmio,
+				     RING_CTX_TIMESTAMP(hwe->mmio_base));
+
+	*reg_ctx_ts = val;
+
+	return 0;
+}
+
 /**
  * xe_lrc_update_timestamp() - Update ctx timestamp
  * @lrc: Pointer to the lrc.
  * @old_ts: Old timestamp value
  *
  * Populate @old_ts current saved ctx timestamp, read new ctx timestamp and
- * update saved value.
+ * update saved value. With support for active contexts, the calculation may be
+ * slightly racy, so follow a read-again logic to ensure that the context is
+ * still active before returning the right timestamp.
  *
  * Returns: New ctx timestamp value
  */
-u32 xe_lrc_update_timestamp(struct xe_lrc *lrc, u32 *old_ts)
+u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
 {
+	u64 lrc_ts, reg_ts;
+	u32 engine_id;
+
 	*old_ts = lrc->ctx_timestamp;
 
-	lrc->ctx_timestamp = xe_lrc_ctx_timestamp(lrc);
+	lrc_ts = xe_lrc_ctx_timestamp(lrc);
+	/* CTX_TIMESTAMP mmio read is invalid on VF, so return the LRC value */
+	if (IS_SRIOV_VF(lrc_to_xe(lrc))) {
+		lrc->ctx_timestamp = lrc_ts;
+		goto done;
+	}
 
+	if (lrc_ts == CONTEXT_ACTIVE) {
+		engine_id = xe_lrc_engine_id(lrc);
+		if (!get_ctx_timestamp(lrc, engine_id, &reg_ts))
+			lrc->ctx_timestamp = reg_ts;
+
+		/* read lrc again to ensure context is still active */
+		lrc_ts = xe_lrc_ctx_timestamp(lrc);
+	}
+
+	/*
+	 * If context switched out, just use the lrc_ts. Note that this needs to
+	 * be a separate if condition.
+	 */
+	if (lrc_ts != CONTEXT_ACTIVE)
+		lrc->ctx_timestamp = lrc_ts;
+
+done:
 	trace_xe_lrc_update_timestamp(lrc, *old_ts);
 
 	return lrc->ctx_timestamp;
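The read-again logic in `xe_lrc_update_timestamp()` above condenses to a small decision flow: read the saved LRC value; if it equals the `CONTEXT_ACTIVE` sentinel, sample the live counter, then re-read the LRC to confirm the context did not switch out in between. A simplified sketch of just that flow (plain C, no MMIO; the helper and its three snapshot parameters are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define CONTEXT_ACTIVE 1ULL

/* Pick the timestamp to report: the saved LRC value if the context has
 * switched out, otherwise the live counter -- but only if a second read
 * of the LRC still shows the sentinel (the "read-again" race check). */
static uint64_t pick_timestamp(uint64_t lrc_ts_first, uint64_t live_counter,
                               uint64_t lrc_ts_again)
{
        uint64_t ts;

        if (lrc_ts_first != CONTEXT_ACTIVE)
                return lrc_ts_first;            /* context already idle */

        ts = live_counter;                      /* context looks active */
        if (lrc_ts_again != CONTEXT_ACTIVE)
                ts = lrc_ts_again;              /* switched out mid-read */
        return ts;
}
```

The second check must be a separate condition, as the kernel comment notes: a context can switch out between the first read and the counter sample, and the freshly written LRC value then supersedes the stale counter.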
···
 	/** @size: size of lrc including any indirect ring state page */
 	u32 size;
 
-	/** @tile: tile which this LRC belongs to */
-	struct xe_tile *tile;
+	/** @gt: gt which this LRC belongs to */
+	struct xe_gt *gt;
 
 	/** @flags: LRC flags */
 #define XE_LRC_FLAG_INDIRECT_RING_STATE		0x1
···
 	struct xe_hw_fence_ctx fence_ctx;
 
 	/** @ctx_timestamp: readout value of CTX_TIMESTAMP on last update */
-	u32 ctx_timestamp;
+	u64 ctx_timestamp;
+
+	/** @bb_per_ctx_bo: buffer object for per context batch wa buffer */
+	struct xe_bo *bb_per_ctx_bo;
 };
 
 struct xe_lrc_snapshot;
-3
drivers/gpu/drm/xe/xe_module.c
···
 module_param_named(svm_notifier_size, xe_modparam.svm_notifier_size, uint, 0600);
 MODULE_PARM_DESC(svm_notifier_size, "Set the svm notifier size(in MiB), must be power of 2");
 
-module_param_named(always_migrate_to_vram, xe_modparam.always_migrate_to_vram, bool, 0444);
-MODULE_PARM_DESC(always_migrate_to_vram, "Always migrate to VRAM on GPU fault");
-
 module_param_named_unsafe(force_execlist, xe_modparam.force_execlist, bool, 0444);
 MODULE_PARM_DESC(force_execlist, "Force Execlist submission");
···
 	case ALS_IDX:
 		privdata->dev_en.is_als_present = false;
 		break;
+	case SRA_IDX:
+		privdata->dev_en.is_sra_present = false;
+		break;
 	}
 
 	if (cl_data->sensor_sts[i] == SENSOR_ENABLED) {
···
 	for (i = 0; i < cl_data->num_hid_devices; i++) {
 		cl_data->sensor_sts[i] = SENSOR_DISABLED;
 
-		if (cl_data->num_hid_devices == 1 && cl_data->sensor_idx[0] == SRA_IDX)
-			break;
-
 		if (cl_data->sensor_idx[i] == SRA_IDX) {
 			info.sensor_idx = cl_data->sensor_idx[i];
 			writel(0, privdata->mmio + amd_get_p2c_val(privdata, 0));
···
 				(privdata, cl_data->sensor_idx[i], ENABLE_SENSOR);
 
 			cl_data->sensor_sts[i] = (status == 0) ? SENSOR_ENABLED : SENSOR_DISABLED;
-			if (cl_data->sensor_sts[i] == SENSOR_ENABLED)
+			if (cl_data->sensor_sts[i] == SENSOR_ENABLED) {
+				cl_data->is_any_sensor_enabled = true;
 				privdata->dev_en.is_sra_present = true;
+			}
 			continue;
 		}
···
 cleanup:
 	amd_sfh_hid_client_deinit(privdata);
 	for (i = 0; i < cl_data->num_hid_devices; i++) {
+		if (cl_data->sensor_idx[i] == SRA_IDX)
+			continue;
 		devm_kfree(dev, cl_data->feature_report[i]);
 		devm_kfree(dev, in_data->input_report[i]);
 		devm_kfree(dev, cl_data->report_descr[i]);
+9
drivers/hid/bpf/hid_bpf_dispatch.c
···
 	struct hid_bpf_ops *e;
 	int ret;
 
+	if (unlikely(hdev->bpf.destroyed))
+		return ERR_PTR(-ENODEV);
+
 	if (type >= HID_REPORT_TYPES)
 		return ERR_PTR(-EINVAL);
···
 	struct hid_bpf_ops *e;
 	int ret, idx;
 
+	if (unlikely(hdev->bpf.destroyed))
+		return -ENODEV;
+
 	if (rtype >= HID_REPORT_TYPES)
 		return -EINVAL;
···
 	};
 	struct hid_bpf_ops *e;
 	int ret, idx;
+
+	if (unlikely(hdev->bpf.destroyed))
+		return -ENODEV;
 
 	idx = srcu_read_lock(&hdev->bpf.srcu);
 	list_for_each_entry_srcu(e, &hdev->bpf.prog_list, list,
+1
drivers/hid/bpf/progs/XPPen__ACK05.bpf.c
···
 	ReportCount(5) // padding
 	Input(Const)
 	// Byte 4 in report - just exists so we get to be a tablet pad
+	UsagePage_Digitizers
 	Usage_Dig_BarrelSwitch // BTN_STYLUS
 	ReportCount(1)
 	ReportSize(1)
···
 {
 	while (!kfifo_is_empty(fifo)) {
 		int size = kfifo_peek_len(fifo);
-		u8 *buf = kzalloc(size, GFP_KERNEL);
+		u8 *buf;
 		unsigned int count;
 		int err;
+
+		buf = kzalloc(size, GFP_KERNEL);
+		if (!buf) {
+			kfifo_skip(fifo);
+			continue;
+		}
 
 		count = kfifo_out(fifo, buf, size);
 		if (count != size) {
···
 			// to flush seems reasonable enough, however.
 			hid_warn(hdev, "%s: removed fifo entry with unexpected size\n",
 				 __func__);
+			kfree(buf);
 			continue;
 		}
 		err = hid_report_raw_event(hdev, HID_INPUT_REPORT, buf, size, false);
···
 	unsigned int connect_mask = HID_CONNECT_HIDRAW;
 
 	features->pktlen = wacom_compute_pktlen(hdev);
+	if (!features->pktlen)
+		return -ENODEV;
 
 	if (!devres_open_group(&hdev->dev, wacom, GFP_KERNEL))
 		return -ENOMEM;
+3-62
drivers/hv/channel.c
···
 EXPORT_SYMBOL(vmbus_sendpacket);
 
 /*
- * vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
- * packets using a GPADL Direct packet type. This interface allows you
- * to control notifying the host. This will be useful for sending
- * batched data. Also the sender can control the send flags
- * explicitly.
- */
-int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
-				struct hv_page_buffer pagebuffers[],
-				u32 pagecount, void *buffer, u32 bufferlen,
-				u64 requestid)
-{
-	int i;
-	struct vmbus_channel_packet_page_buffer desc;
-	u32 descsize;
-	u32 packetlen;
-	u32 packetlen_aligned;
-	struct kvec bufferlist[3];
-	u64 aligned_data = 0;
-
-	if (pagecount > MAX_PAGE_BUFFER_COUNT)
-		return -EINVAL;
-
-	/*
-	 * Adjust the size down since vmbus_channel_packet_page_buffer is the
-	 * largest size we support
-	 */
-	descsize = sizeof(struct vmbus_channel_packet_page_buffer) -
-			((MAX_PAGE_BUFFER_COUNT - pagecount) *
-			sizeof(struct hv_page_buffer));
-	packetlen = descsize + bufferlen;
-	packetlen_aligned = ALIGN(packetlen, sizeof(u64));
-
-	/* Setup the descriptor */
-	desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
-	desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
-	desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */
-	desc.length8 = (u16)(packetlen_aligned >> 3);
-	desc.transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
-	desc.reserved = 0;
-	desc.rangecount = pagecount;
-
-	for (i = 0; i < pagecount; i++) {
-		desc.range[i].len = pagebuffers[i].len;
-		desc.range[i].offset = pagebuffers[i].offset;
-		desc.range[i].pfn = pagebuffers[i].pfn;
-	}
-
-	bufferlist[0].iov_base = &desc;
-	bufferlist[0].iov_len = descsize;
-	bufferlist[1].iov_base = buffer;
-	bufferlist[1].iov_len = bufferlen;
-	bufferlist[2].iov_base = &aligned_data;
-	bufferlist[2].iov_len = (packetlen_aligned - packetlen);
-
-	return hv_ringbuffer_write(channel, bufferlist, 3, requestid, NULL);
-}
-EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
-
-/*
- * vmbus_sendpacket_multipagebuffer - Send a multi-page buffer packet
+ * vmbus_sendpacket_mpb_desc - Send one or more multi-page buffer packets
 * using a GPADL Direct packet type.
- * The buffer includes the vmbus descriptor.
+ * The desc argument must include space for the VMBus descriptor. The
+ * rangecount field must already be set.
 */
 int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
			       struct vmbus_packet_mpb_array *desc,
···
 	desc->length8 = (u16)(packetlen_aligned >> 3);
 	desc->transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
 	desc->reserved = 0;
-	desc->rangecount = 1;
 
 	bufferlist[0].iov_base = desc;
 	bufferlist[0].iov_len = desc_size;
+3-1
drivers/i2c/busses/i2c-designware-pcidrv.c
···
 
 	if ((dev->flags & MODEL_MASK) == MODEL_AMD_NAVI_GPU) {
 		dev->slave = i2c_new_ccgx_ucsi(&dev->adapter, dev->irq, &dgpu_node);
-		if (IS_ERR(dev->slave))
+		if (IS_ERR(dev->slave)) {
+			i2c_del_adapter(&dev->adapter);
 			return dev_err_probe(device, PTR_ERR(dev->slave),
					     "register UCSI failed\n");
+		}
 	}
 
 	pm_runtime_set_autosuspend_delay(device, 1000);
+4-2
drivers/infiniband/core/device.c
···
 
 	down_read(&devices_rwsem);
 
+	/* Mark for userspace that device is ready */
+	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
+
 	ret = rdma_nl_notify_event(device, 0, RDMA_REGISTER_EVENT);
 	if (ret)
 		goto out;
···
 		return ret;
 	}
 	dev_set_uevent_suppress(&device->dev, false);
-	/* Mark for userspace that device is ready */
-	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
 
 	ib_device_notify_register(device);
+
 	ib_device_put(device);
 
 	return 0;
+3-1
drivers/infiniband/hw/irdma/main.c
···
 			break;
 
 	if (i < IRDMA_MIN_MSIX) {
-		for (; i > 0; i--)
+		while (--i >= 0)
 			ice_free_rdma_qvector(pf, &rf->msix_entries[i]);
 
 		kfree(rf->msix_entries);
···
 	irdma_ib_unregister_device(iwdev);
 	ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, false);
 	irdma_deinit_interrupts(iwdev->rf, pf);
+
+	kfree(iwdev->rf);
 
 	pr_debug("INIT: Gen2 PF[%d] device remove success\n", PCI_FUNC(pf->pdev->devfn));
 }
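The loop change above fixes a classic unwind off-by-one: after allocation of vector `i` fails, `for (; i > 0; i--)` frees indices `i` down to `1`, touching the never-allocated entry `i` and leaking entry `0`, while `while (--i >= 0)` frees exactly `i-1` down to `0`. A small stand-alone C model (the array and counters are illustrative, not the driver's structures) demonstrating the difference:

```c
#include <stdbool.h>

#define NUM_VECTORS 8

/* Model state: allocated[k] is true for every vector still holding a
 * resource. Allocation of vector `failed_at` failed, so only vectors
 * 0..failed_at-1 were ever allocated.
 */
static bool allocated[NUM_VECTORS];

/* Buggy unwind: frees indices [failed_at .. 1], leaking index 0 */
static void cleanup_old(int i)
{
	for (; i > 0; i--)
		allocated[i] = false;
}

/* Fixed unwind: frees indices [failed_at-1 .. 0] */
static void cleanup_new(int i)
{
	while (--i >= 0)
		allocated[i] = false;
}

/* Run one scenario and count entries left allocated (i.e. leaked) */
static int leaked_after(void (*cleanup)(int), int failed_at)
{
	int leaks = 0;
	int k;

	for (k = 0; k < NUM_VECTORS; k++)
		allocated[k] = (k < failed_at);
	cleanup(failed_at);
	for (k = 0; k < NUM_VECTORS; k++)
		leaks += allocated[k];
	return leaks;
}
```

With `failed_at = 3`, the old form leaves entry 0 allocated (one leak) while the new form leaves none.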
···
 	}
 }
 
+static void b53_set_eap_mode(struct b53_device *dev, int port, int mode)
+{
+	u64 eap_conf;
+
+	if (is5325(dev) || is5365(dev) || dev->chip_id == BCM5389_DEVICE_ID)
+		return;
+
+	b53_read64(dev, B53_EAP_PAGE, B53_PORT_EAP_CONF(port), &eap_conf);
+
+	if (is63xx(dev)) {
+		eap_conf &= ~EAP_MODE_MASK_63XX;
+		eap_conf |= (u64)mode << EAP_MODE_SHIFT_63XX;
+	} else {
+		eap_conf &= ~EAP_MODE_MASK;
+		eap_conf |= (u64)mode << EAP_MODE_SHIFT;
+	}
+
+	b53_write64(dev, B53_EAP_PAGE, B53_PORT_EAP_CONF(port), eap_conf);
+}
+
 static void b53_set_forwarding(struct b53_device *dev, int enable)
 {
 	u8 mgmt;
···
 	b53_port_set_ucast_flood(dev, port, true);
 	b53_port_set_mcast_flood(dev, port, true);
 	b53_port_set_learning(dev, port, false);
+
+	/* Force all traffic to go to the CPU port to prevent the ASIC from
+	 * trying to forward to bridged ports on matching FDB entries, then
+	 * dropping frames because it isn't allowed to forward there.
+	 */
+	if (dsa_is_user_port(ds, port))
+		b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
 
 	return 0;
 }
···
 			pvlan |= BIT(i);
 	}
 
+	/* Disable redirection of unknown SA to the CPU port */
+	b53_set_eap_mode(dev, port, EAP_MODE_BASIC);
+
 	/* Configure the local port VLAN control membership to include
 	 * remote ports and update the local port bitmask
 	 */
···
 		if (port != i)
 			pvlan &= ~BIT(i);
 	}
+
+	/* Enable redirection of unknown SA to the CPU port */
+	b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
 
 	b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
 	dev->ports[port].vlan_ctl_mask = pvlan;
···
 				  unsigned int mode,
 				  phy_interface_t interface);
 
+/**
+ * ksz_phylink_mac_disable_tx_lpi() - Callback to signal LPI support (Dummy)
+ * @config: phylink config structure
+ *
+ * This function is a dummy handler. See ksz_phylink_mac_enable_tx_lpi() for
+ * a detailed explanation of EEE/LPI handling in KSZ switches.
+ */
+static void ksz_phylink_mac_disable_tx_lpi(struct phylink_config *config)
+{
+}
+
+/**
+ * ksz_phylink_mac_enable_tx_lpi() - Callback to signal LPI support (Dummy)
+ * @config: phylink config structure
+ * @timer: timer value before entering LPI (unused)
+ * @tx_clock_stop: whether to stop the TX clock in LPI mode (unused)
+ *
+ * This function signals to phylink that the driver architecture supports
+ * LPI management, enabling phylink to control EEE advertisement during
+ * negotiation according to IEEE Std 802.3 (Clause 78).
+ *
+ * Hardware Management of EEE/LPI State:
+ * For KSZ switch ports with integrated PHYs (e.g., KSZ9893R ports 1-2),
+ * observation and testing suggest that the actual EEE / Low Power Idle (LPI)
+ * state transitions are managed autonomously by the hardware based on
+ * the auto-negotiation results. (Note: While the datasheet describes EEE
+ * operation based on negotiation, it doesn't explicitly detail the internal
+ * MAC/PHY interaction, so autonomous hardware management of the MAC state
+ * for LPI is inferred from observed behavior).
+ * This hardware control, consistent with the switch's ability to operate
+ * autonomously via strapping, means MAC-level software intervention is not
+ * required or exposed for managing the LPI state once EEE is negotiated.
+ * (Ref: KSZ9893R Data Sheet DS00002420D, primarily Section 4.7.5 explaining
+ * EEE, also Sections 4.1.7 on Auto-Negotiation and 3.2.1 on Configuration
+ * Straps).
+ *
+ * Additionally, ports configured as MAC interfaces (e.g., KSZ9893R port 3)
+ * lack documented MAC-level LPI control.
+ *
+ * Therefore, this callback performs no action and serves primarily to inform
+ * phylink of LPI awareness and to document the inferred hardware behavior.
+ *
+ * Returns: 0 (Always success)
+ */
+static int ksz_phylink_mac_enable_tx_lpi(struct phylink_config *config,
+					 u32 timer, bool tx_clock_stop)
+{
+	return 0;
+}
+
 static const struct phylink_mac_ops ksz88x3_phylink_mac_ops = {
 	.mac_config	= ksz88x3_phylink_mac_config,
 	.mac_link_down	= ksz_phylink_mac_link_down,
 	.mac_link_up	= ksz8_phylink_mac_link_up,
+	.mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
+	.mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
 };
 
 static const struct phylink_mac_ops ksz8_phylink_mac_ops = {
 	.mac_config	= ksz_phylink_mac_config,
 	.mac_link_down	= ksz_phylink_mac_link_down,
 	.mac_link_up	= ksz8_phylink_mac_link_up,
+	.mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
+	.mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
 };
 
 static const struct ksz_dev_ops ksz88xx_dev_ops = {
···
 	.mac_config	= ksz_phylink_mac_config,
 	.mac_link_down	= ksz_phylink_mac_link_down,
 	.mac_link_up	= ksz9477_phylink_mac_link_up,
+	.mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
+	.mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
 };
 
 static const struct ksz_dev_ops ksz9477_dev_ops = {
···
 	.mac_config	= ksz_phylink_mac_config,
 	.mac_link_down	= ksz_phylink_mac_link_down,
 	.mac_link_up	= ksz9477_phylink_mac_link_up,
+	.mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
+	.mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
 };
 
 static const struct ksz_dev_ops lan937x_dev_ops = {
···
 
 	if (dev->dev_ops->get_caps)
 		dev->dev_ops->get_caps(dev, port, config);
+
+	if (ds->ops->support_eee && ds->ops->support_eee(ds, port)) {
+		memcpy(config->lpi_interfaces, config->supported_interfaces,
+		       sizeof(config->lpi_interfaces));
+
+		config->lpi_capabilities = MAC_100FD;
+		if (dev->info->gbit_capable[port])
+			config->lpi_capabilities |= MAC_1000FD;
+
+		/* EEE is fully operational */
+		config->eee_enabled_default = true;
+	}
 }
 
 void ksz_r_mib_stats64(struct ksz_device *dev, int port)
···
 		if (!port)
 			return MICREL_KSZ8_P1_ERRATA;
 		break;
-	case KSZ8567_CHIP_ID:
-		/* KSZ8567R Errata DS80000752C Module 4 */
-	case KSZ8765_CHIP_ID:
-	case KSZ8794_CHIP_ID:
-	case KSZ8795_CHIP_ID:
-		/* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
-	case KSZ9477_CHIP_ID:
-		/* KSZ9477S Errata DS80000754A Module 4 */
-	case KSZ9567_CHIP_ID:
-		/* KSZ9567S Errata DS80000756A Module 4 */
-	case KSZ9896_CHIP_ID:
-		/* KSZ9896C Errata DS80000757A Module 3 */
-	case KSZ9897_CHIP_ID:
-	case LAN9646_CHIP_ID:
-		/* KSZ9897R Errata DS80000758C Module 4 */
-		/* Energy Efficient Ethernet (EEE) feature select must be manually disabled
-		 * The EEE feature is enabled by default, but it is not fully
-		 * operational. It must be manually disabled through register
-		 * controls. If not disabled, the PHY ports can auto-negotiate
-		 * to enable EEE, and this feature can cause link drops when
-		 * linked to another device supporting EEE.
-		 *
-		 * The same item appears in the errata for all switches above.
-		 */
-		return MICREL_NO_EEE;
 	}
 
 	return 0;
···
 		return -EOPNOTSUPP;
 }
 
+/**
+ * ksz_support_eee - Determine Energy Efficient Ethernet (EEE) support for a
+ *		     port
+ * @ds: Pointer to the DSA switch structure
+ * @port: Port number to check
+ *
+ * This function also documents devices where EEE was initially advertised but
+ * later withdrawn due to reliability issues, as described in official errata
+ * documents. These devices are explicitly listed to record known limitations,
+ * even if there is no technical necessity for runtime checks.
+ *
+ * Returns: true if the internal PHY on the given port supports fully
+ * operational EEE, false otherwise.
+ */
 static bool ksz_support_eee(struct dsa_switch *ds, int port)
 {
 	struct ksz_device *dev = ds->priv;
···
 
 	switch (dev->chip_id) {
 	case KSZ8563_CHIP_ID:
-	case KSZ8567_CHIP_ID:
-	case KSZ9477_CHIP_ID:
 	case KSZ9563_CHIP_ID:
-	case KSZ9567_CHIP_ID:
 	case KSZ9893_CHIP_ID:
+		return true;
+	case KSZ8567_CHIP_ID:
+		/* KSZ8567R Errata DS80000752C Module 4 */
+	case KSZ8765_CHIP_ID:
+	case KSZ8794_CHIP_ID:
+	case KSZ8795_CHIP_ID:
+		/* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
+	case KSZ9477_CHIP_ID:
+		/* KSZ9477S Errata DS80000754A Module 4 */
+	case KSZ9567_CHIP_ID:
+		/* KSZ9567S Errata DS80000756A Module 4 */
 	case KSZ9896_CHIP_ID:
+		/* KSZ9896C Errata DS80000757A Module 3 */
 	case KSZ9897_CHIP_ID:
 	case LAN9646_CHIP_ID:
-		return true;
+		/* KSZ9897R Errata DS80000758C Module 4 */
+		/* Energy Efficient Ethernet (EEE) feature select must be
+		 * manually disabled
+		 * The EEE feature is enabled by default, but it is not fully
+		 * operational. It must be manually disabled through register
+		 * controls. If not disabled, the PHY ports can auto-negotiate
+		 * to enable EEE, and this feature can cause link drops when
+		 * linked to another device supporting EEE.
+		 *
+		 * The same item appears in the errata for all switches above.
+		 */
+		break;
 	}
 
 	return false;
+1-5
drivers/net/dsa/sja1105/sja1105_main.c
···
 	switch (state) {
 	case BR_STATE_DISABLED:
 	case BR_STATE_BLOCKING:
+	case BR_STATE_LISTENING:
 		/* From UM10944 description of DRPDTAG (why put this there?):
 		 * "Management traffic flows to the port regardless of the state
 		 * of the INGRESS flag". So BPDUs are still be allowed to pass.
 		 * At the moment no difference between DISABLED and BLOCKING.
 		 */
 		mac[port].ingress   = false;
-		mac[port].egress    = false;
-		mac[port].dyn_learn = false;
-		break;
-	case BR_STATE_LISTENING:
-		mac[port].ingress   = true;
 		mac[port].egress    = false;
 		mac[port].dyn_learn = false;
 		break;
···
 	struct otx2_nic *pfvf = netdev_priv(netdev);
 	struct cgx_pause_frm_cfg *req, *rsp;
 
-	if (is_otx2_lbkvf(pfvf->pdev))
+	if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
 		return;
 
 	mutex_lock(&pfvf->mbox.lock);
···
 	if (pause->autoneg)
 		return -EOPNOTSUPP;
 
-	if (is_otx2_lbkvf(pfvf->pdev))
+	if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
 		return -EOPNOTSUPP;
 
 	if (pause->rx_pause)
···
 {
 	struct otx2_nic *pfvf = netdev_priv(netdev);
 
-	/* LBK link is internal and always UP */
-	if (is_otx2_lbkvf(pfvf->pdev))
+	/* LBK and SDP links are internal and always UP */
+	if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
 		return 1;
 	return pfvf->linfo.link_up;
 }
···
 {
 	struct otx2_nic *pfvf = netdev_priv(netdev);
 
-	if (is_otx2_lbkvf(pfvf->pdev)) {
+	if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev)) {
 		cmd->base.duplex = DUPLEX_FULL;
 		cmd->base.speed = SPEED_100000;
 	} else {
···
 	if (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
 		netdev_warn(netdev, "Disabling HW_VLAN CTAG FILTERING, not supported in switchdev mode\n");
 
+	features &= ~NETIF_F_HW_MACSEC;
+	if (netdev->features & NETIF_F_HW_MACSEC)
+		netdev_warn(netdev, "Disabling HW MACsec offload, not supported in switchdev mode\n");
+
 	return features;
 }
···
 	}
 	local_buffer = eeprom_ptrs;
 
-	for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++)
+	for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++) {
+		if (wx->mac.type == wx_mac_aml) {
+			if (i >= TXGBE_EEPROM_I2C_SRART_PTR &&
+			    i < TXGBE_EEPROM_I2C_END_PTR)
+				local_buffer[i] = 0xffff;
+		}
 		if (i != wx->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM)
 			*checksum += local_buffer[i];
+	}
 
 	kvfree(eeprom_ptrs);
···
 	u8 cp_partial; /* partial copy into send buffer */
 
 	u8 rmsg_size; /* RNDIS header and PPI size */
-	u8 rmsg_pgcnt; /* page count of RNDIS header and PPI */
 	u8 page_buf_cnt;
 
 	u16 q_idx;
···
 #define NETVSC_MIN_OUT_MSG_SIZE (sizeof(struct vmpacket_descriptor) + \
				 sizeof(struct nvsp_message))
 #define NETVSC_MIN_IN_MSG_SIZE sizeof(struct vmpacket_descriptor)
+
+/* Maximum # of contiguous data ranges that can make up a trasmitted packet.
+ * Typically it's the max SKB fragments plus 2 for the rndis packet and the
+ * linear portion of the SKB. But if MAX_SKB_FRAGS is large, the value may
+ * need to be limited to MAX_PAGE_BUFFER_COUNT, which is the max # of entries
+ * in a GPA direct packet sent to netvsp over VMBus.
+ */
+#if MAX_SKB_FRAGS + 2 < MAX_PAGE_BUFFER_COUNT
+#define MAX_DATA_RANGES (MAX_SKB_FRAGS + 2)
+#else
+#define MAX_DATA_RANGES MAX_PAGE_BUFFER_COUNT
+#endif
 
 /* Estimated requestor size:
 * out_ring_size/min_out_msg_size + in_ring_size/min_in_msg_size
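The `MAX_DATA_RANGES` macro above is a compile-time clamp: the fragment count plus two (RNDIS header and SKB linear data), limited by the VMBus GPA-direct entry ceiling. A runtime mirror of the same arithmetic, with illustrative parameter names standing in for the kernel macros:

```c
/* Hypothetical runtime equivalent of the MAX_DATA_RANGES preprocessor
 * clamp: ranges = min(max_skb_frags + 2, max_page_buffer_count). The
 * parameter names are illustrative, not the kernel's macros.
 */
static unsigned int max_data_ranges(unsigned int max_skb_frags,
				    unsigned int max_page_buffer_count)
{
	/* +2: one range for the rndis header, one for the linear data */
	unsigned int ranges = max_skb_frags + 2;

	return ranges < max_page_buffer_count ? ranges
					      : max_page_buffer_count;
}
```

For example, with 17 fragments and a 32-entry ceiling the result is 19; with 45 fragments (a large `MAX_SKB_FRAGS` build) the ceiling of 32 wins.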
+48-9
drivers/net/hyperv/netvsc.c
···
		   + pend_size;
 	int i;
 	u32 padding = 0;
-	u32 page_count = packet->cp_partial ? packet->rmsg_pgcnt :
-		packet->page_buf_cnt;
+	u32 page_count = packet->cp_partial ? 1 : packet->page_buf_cnt;
 	u32 remain;
 
 	/* Add padding */
···
 	return 0;
 }
 
+/* Build an "array" of mpb entries describing the data to be transferred
+ * over VMBus. After the desc header fields, each "array" entry is variable
+ * size, and each entry starts after the end of the previous entry. The
+ * "offset" and "len" fields for each entry imply the size of the entry.
+ *
+ * The pfns are in HV_HYP_PAGE_SIZE, because all communication with Hyper-V
+ * uses that granularity, even if the system page size of the guest is larger.
+ * Each entry in the input "pb" array must describe a contiguous range of
+ * guest physical memory so that the pfns are sequential if the range crosses
+ * a page boundary. The offset field must be < HV_HYP_PAGE_SIZE.
+ */
+static inline void netvsc_build_mpb_array(struct hv_page_buffer *pb,
+					  u32 page_buffer_count,
+					  struct vmbus_packet_mpb_array *desc,
+					  u32 *desc_size)
+{
+	struct hv_mpb_array *mpb_entry = &desc->range;
+	int i, j;
+
+	for (i = 0; i < page_buffer_count; i++) {
+		u32 offset = pb[i].offset;
+		u32 len = pb[i].len;
+
+		mpb_entry->offset = offset;
+		mpb_entry->len = len;
+
+		for (j = 0; j < HVPFN_UP(offset + len); j++)
+			mpb_entry->pfn_array[j] = pb[i].pfn + j;
+
+		mpb_entry = (struct hv_mpb_array *)&mpb_entry->pfn_array[j];
+	}
+
+	desc->rangecount = page_buffer_count;
+	*desc_size = (char *)mpb_entry - (char *)desc;
+}
+
 static inline int netvsc_send_pkt(
	struct hv_device *device,
	struct hv_netvsc_packet *packet,
···
 
 	packet->dma_range = NULL;
 	if (packet->page_buf_cnt) {
+		struct vmbus_channel_packet_page_buffer desc;
+		u32 desc_size;
+
 		if (packet->cp_partial)
-			pb += packet->rmsg_pgcnt;
+			pb++;
 
 		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
 		if (ret) {
···
 			goto exit;
 		}
 
-		ret = vmbus_sendpacket_pagebuffer(out_channel,
-						  pb, packet->page_buf_cnt,
-						  &nvmsg, sizeof(nvmsg),
-						  req_id);
-
+		netvsc_build_mpb_array(pb, packet->page_buf_cnt,
+				       (struct vmbus_packet_mpb_array *)&desc,
+				       &desc_size);
+		ret = vmbus_sendpacket_mpb_desc(out_channel,
+						(struct vmbus_packet_mpb_array *)&desc,
+						desc_size, &nvmsg, sizeof(nvmsg), req_id);
 		if (ret)
 			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 	} else {
···
 	packet->send_buf_index = section_index;
 
 	if (packet->cp_partial) {
-		packet->page_buf_cnt -= packet->rmsg_pgcnt;
+		packet->page_buf_cnt--;
 		packet->total_data_buflen = msd_len + packet->rmsg_size;
 	} else {
 		packet->page_buf_cnt = 0;
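The comment on `netvsc_build_mpb_array` explains that each mpb entry is variable size: fixed `offset`/`len` fields followed by one PFN per Hyper-V page spanned by `[offset, offset + len)`. A stand-alone C sketch of that size calculation, using simplified stand-in types (the real layout lives in `vmbus_packet_mpb_array`/`hv_mpb_array`):

```c
#include <stdint.h>
#include <stddef.h>

#define HYP_PAGE_SIZE 4096u /* Hyper-V communication granularity */
#define HVPFN_UP(x)   (((x) + HYP_PAGE_SIZE - 1) / HYP_PAGE_SIZE)

/* Simplified stand-in for one input range (offset must be < page size) */
struct range {
	uint32_t offset;
	uint32_t len;
};

/* Total bytes occupied by the variable-size mpb entries: per range, two
 * u32 header fields plus one u64 pfn per spanned Hyper-V page. This models
 * only the entry area, not the descriptor header preceding it.
 */
static size_t mpb_entries_size(const struct range *r, int count)
{
	size_t size = 0;
	int i;

	for (i = 0; i < count; i++)
		size += 2 * sizeof(uint32_t) +
			HVPFN_UP(r[i].offset + r[i].len) * sizeof(uint64_t);
	return size;
}

/* Example input: a range crossing one page boundary, then a tiny range */
static const struct range sample[2] = { { 100, 4000 }, { 0, 10 } };
```

The first sample range ends at byte 4100, so it spans two Hyper-V pages (24 bytes of entry); the second spans one (16 bytes). This is why the descriptor size can only be computed by walking the ranges, as the kernel helper does.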
+14-48
drivers/net/hyperv/netvsc_drv.c
···
 	return txq;
 }
 
-static u32 fill_pg_buf(unsigned long hvpfn, u32 offset, u32 len,
-		       struct hv_page_buffer *pb)
-{
-	int j = 0;
-
-	hvpfn += offset >> HV_HYP_PAGE_SHIFT;
-	offset = offset & ~HV_HYP_PAGE_MASK;
-
-	while (len > 0) {
-		unsigned long bytes;
-
-		bytes = HV_HYP_PAGE_SIZE - offset;
-		if (bytes > len)
-			bytes = len;
-		pb[j].pfn = hvpfn;
-		pb[j].offset = offset;
-		pb[j].len = bytes;
-
-		offset += bytes;
-		len -= bytes;
-
-		if (offset == HV_HYP_PAGE_SIZE && len) {
-			hvpfn++;
-			offset = 0;
-			j++;
-		}
-	}
-
-	return j + 1;
-}
-
 static u32 init_page_array(void *hdr, u32 len, struct sk_buff *skb,
			   struct hv_netvsc_packet *packet,
			   struct hv_page_buffer *pb)
 {
-	u32 slots_used = 0;
-	char *data = skb->data;
 	int frags = skb_shinfo(skb)->nr_frags;
 	int i;
 
···
 	 * 2. skb linear data
 	 * 3. skb fragment data
 	 */
-	slots_used += fill_pg_buf(virt_to_hvpfn(hdr),
-				  offset_in_hvpage(hdr),
-				  len,
-				  &pb[slots_used]);
 
+	pb[0].offset = offset_in_hvpage(hdr);
+	pb[0].len = len;
+	pb[0].pfn = virt_to_hvpfn(hdr);
 	packet->rmsg_size = len;
-	packet->rmsg_pgcnt = slots_used;
 
-	slots_used += fill_pg_buf(virt_to_hvpfn(data),
-				  offset_in_hvpage(data),
-				  skb_headlen(skb),
-				  &pb[slots_used]);
+	pb[1].offset = offset_in_hvpage(skb->data);
+	pb[1].len = skb_headlen(skb);
+	pb[1].pfn = virt_to_hvpfn(skb->data);
 
 	for (i = 0; i < frags; i++) {
 		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
+		struct hv_page_buffer *cur_pb = &pb[i + 2];
+		u64 pfn = page_to_hvpfn(skb_frag_page(frag));
+		u32 offset = skb_frag_off(frag);
 
-		slots_used += fill_pg_buf(page_to_hvpfn(skb_frag_page(frag)),
-					  skb_frag_off(frag),
-					  skb_frag_size(frag),
-					  &pb[slots_used]);
+		cur_pb->offset = offset_in_hvpage(offset);
+		cur_pb->len = skb_frag_size(frag);
+		cur_pb->pfn = pfn + (offset >> HV_HYP_PAGE_SHIFT);
 	}
-	return slots_used;
+	return frags + 2;
 }
 
 static int count_skb_frag_slots(struct sk_buff *skb)
···
 	struct net_device *vf_netdev;
 	u32 rndis_msg_size;
 	u32 hash;
-	struct hv_page_buffer pb[MAX_PAGE_BUFFER_COUNT];
+	struct hv_page_buffer pb[MAX_DATA_RANGES];
 
 	/* If VF is present and up then redirect packets to it.
 	 * Skip the VF if it is marked down or has no carrier.
···
 		return err;
 	}
 
-	/* According to KSZ9477 Errata DS80000754C (Module 4) all EEE modes
-	 * in this switch shall be regarded as broken.
-	 */
-	if (phydev->dev_flags & MICREL_NO_EEE)
-		phy_disable_eee(phydev);
-
 	return kszphy_config_init(phydev);
 }
···
 	.handle_interrupt = kszphy_handle_interrupt,
 	.suspend	= genphy_suspend,
 	.resume		= ksz9477_resume,
-	.get_features	= ksz9477_get_features,
 } };
 
 module_phy_driver(ksphy_driver);
+1
drivers/net/wireless/mediatek/mt76/dma.c
···
 	int i;
 
 	mt76_worker_disable(&dev->tx_worker);
+	napi_disable(&dev->tx_napi);
 	netif_napi_del(&dev->tx_napi);
 
 	for (i = 0; i < ARRAY_SIZE(dev->phys); i++) {
···
 * as it only leads to a small amount of wasted memory for the lifetime of
 * the I/O.
 */
-static int nvme_pci_npages_prp(void)
+static __always_inline int nvme_pci_npages_prp(void)
 {
 	unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;
 	unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);
···
 	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
 
 	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
+	spin_lock(&nvmeq->cq_poll_lock);
 	nvme_poll_cq(nvmeq, NULL);
+	spin_unlock(&nvmeq->cq_poll_lock);
 	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
 }
···
 	{ PCI_DEVICE(0x1e49, 0x0021),   /* ZHITAI TiPro5000 NVMe SSD */
		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
 	{ PCI_DEVICE(0x1e49, 0x0041),   /* ZHITAI TiPro7000 NVMe SSD */
+		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+	{ PCI_DEVICE(0x025e, 0xf1ac),   /* SOLIDIGM P44 pro SSDPFKKW020X7 */
		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
 	{ PCI_DEVICE(0xc0a9, 0x540a),   /* Crucial P2 */
		.driver_data = NVME_QUIRK_BOGUS_NID, },
+23-16
drivers/nvme/target/pci-epf.c
···
 #define NVMET_PCI_EPF_CQ_RETRY_INTERVAL	msecs_to_jiffies(1)
 
 enum nvmet_pci_epf_queue_flags {
-	NVMET_PCI_EPF_Q_IS_SQ = 0,	/* The queue is a submission queue */
-	NVMET_PCI_EPF_Q_LIVE,		/* The queue is live */
+	NVMET_PCI_EPF_Q_LIVE = 0,	/* The queue is live */
 	NVMET_PCI_EPF_Q_IRQ_ENABLED,	/* IRQ is enabled for this queue */
 };
···
 	struct nvmet_pci_epf_irq_vector *iv = cq->iv;
 	bool ret;
 
-	if (!test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
-		return false;
-
 	/* IRQ coalescing for the admin queue is not allowed. */
 	if (!cq->qid)
 		return true;
···
 	struct pci_epf *epf = nvme_epf->epf;
 	int ret = 0;
 
-	if (!test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags))
+	if (!test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags) ||
+	    !test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
 		return;
 
 	mutex_lock(&ctrl->irq_lock);
···
 	switch (nvme_epf->irq_type) {
 	case PCI_IRQ_MSIX:
 	case PCI_IRQ_MSI:
+		/*
+		 * If we fail to raise an MSI or MSI-X interrupt, it is likely
+		 * because the host is using legacy INTX IRQs (e.g. BIOS,
+		 * grub), but we can fallback to the INTX type only if the
+		 * endpoint controller supports this type.
+		 */
 		ret = pci_epc_raise_irq(epf->epc, epf->func_no, epf->vfunc_no,
					nvme_epf->irq_type, cq->vector + 1);
-		if (!ret)
+		if (!ret || !nvme_epf->epc_features->intx_capable)
			break;
-		/*
-		 * If we got an error, it is likely because the host is using
-		 * legacy IRQs (e.g. BIOS, grub).
-		 */
		fallthrough;
	case PCI_IRQ_INTX:
		ret = pci_epc_raise_irq(epf->epc, epf->func_no, epf->vfunc_no,
···
 	}
 
 	if (ret)
-		dev_err(ctrl->dev, "Failed to raise IRQ (err=%d)\n", ret);
+		dev_err_ratelimited(ctrl->dev,
+				    "CQ[%u]: Failed to raise IRQ (err=%d)\n",
+				    cq->qid, ret);
 
 unlock:
	mutex_unlock(&ctrl->irq_lock);
···
 
 	set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags);
 
-	dev_dbg(ctrl->dev, "CQ[%u]: %u entries of %zu B, IRQ vector %u\n",
-		cqid, qsize, cq->qes, cq->vector);
+	if (test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
+		dev_dbg(ctrl->dev,
+			"CQ[%u]: %u entries of %zu B, IRQ vector %u\n",
+			cqid, qsize, cq->qes, cq->vector);
+	else
+		dev_dbg(ctrl->dev,
+			"CQ[%u]: %u entries of %zu B, IRQ disabled\n",
+			cqid, qsize, cq->qes);
 
 	return NVME_SC_SUCCESS;
···
 
 	cancel_delayed_work_sync(&cq->work);
 	nvmet_pci_epf_drain_queue(cq);
-	nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector);
+	if (test_and_clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
+		nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector);
 	nvmet_pci_epf_mem_unmap(ctrl->nvme_epf, &cq->pci_map);
 
 	return NVME_SC_SUCCESS;
···
 
 	if (sq) {
 		queue = &ctrl->sq[qid];
-		set_bit(NVMET_PCI_EPF_Q_IS_SQ, &queue->flags);
 	} else {
 		queue = &ctrl->cq[qid];
 		INIT_DELAYED_WORK(&queue->work, nvmet_pci_epf_cq_work);
+15-7
drivers/phy/phy-can-transceiver.c
···
 };
 MODULE_DEVICE_TABLE(of, can_transceiver_phy_ids);

+/* Temporary wrapper until the multiplexer subsystem supports optional muxes */
+static inline struct mux_state *
+devm_mux_state_get_optional(struct device *dev, const char *mux_name)
+{
+	if (!of_property_present(dev->of_node, "mux-states"))
+		return NULL;
+
+	return devm_mux_state_get(dev, mux_name);
+}
+
 static int can_transceiver_phy_probe(struct platform_device *pdev)
 {
	struct phy_provider *phy_provider;
···
	match = of_match_node(can_transceiver_phy_ids, pdev->dev.of_node);
	drvdata = match->data;

-	mux_state = devm_mux_state_get(dev, NULL);
-	if (IS_ERR(mux_state)) {
-		if (PTR_ERR(mux_state) == -EPROBE_DEFER)
-			return PTR_ERR(mux_state);
-	} else {
-		can_transceiver_phy->mux_state = mux_state;
-	}
+	mux_state = devm_mux_state_get_optional(dev, NULL);
+	if (IS_ERR(mux_state))
+		return PTR_ERR(mux_state);
+
+	can_transceiver_phy->mux_state = mux_state;

	phy = devm_phy_create(dev, dev->of_node,
			      &can_transceiver_phy_ops);
+2-1
drivers/phy/qualcomm/phy-qcom-qmp-ufs.c
···
		qmp_ufs_init_all(qmp, &cfg->tbls_hs_overlay[i]);
	}

-	qmp_ufs_init_all(qmp, &cfg->tbls_hs_b);
+	if (qmp->mode == PHY_MODE_UFS_HS_B)
+		qmp_ufs_init_all(qmp, &cfg->tbls_hs_b);
 }

 static int qmp_ufs_com_init(struct qmp_ufs *qmp)
+74-59
drivers/phy/renesas/phy-rcar-gen3-usb2.c
···
  * Copyright (C) 2014 Cogent Embedded, Inc.
  */

+#include <linux/cleanup.h>
 #include <linux/extcon-provider.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
···
	struct rcar_gen3_chan *ch;
	u32 int_enable_bits;
	bool initialized;
-	bool otg_initialized;
	bool powered;
 };
···
	struct regulator *vbus;
	struct reset_control *rstc;
	struct work_struct work;
-	struct mutex lock;	/* protects rphys[...].powered */
+	spinlock_t lock;	/* protects access to hardware and driver data structure. */
	enum usb_dr_mode dr_mode;
-	int irq;
	u32 obint_enable_bits;
	bool extcon_host;
	bool is_otg_channel;
···
	return false;
 }

-static bool rcar_gen3_needs_init_otg(struct rcar_gen3_chan *ch)
+static bool rcar_gen3_is_any_otg_rphy_initialized(struct rcar_gen3_chan *ch)
 {
-	int i;
-
-	for (i = 0; i < NUM_OF_PHYS; i++) {
-		if (ch->rphys[i].otg_initialized)
-			return false;
+	for (enum rcar_gen3_phy_index i = PHY_INDEX_BOTH_HC; i <= PHY_INDEX_EHCI;
+	     i++) {
+		if (ch->rphys[i].initialized)
+			return true;
	}

-	return true;
+	return false;
 }

 static bool rcar_gen3_are_all_rphys_power_off(struct rcar_gen3_chan *ch)
···
	bool is_b_device;
	enum phy_mode cur_mode, new_mode;

-	if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch))
+	guard(spinlock_irqsave)(&ch->lock);
+
+	if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch))
		return -EIO;

	if (sysfs_streq(buf, "host"))
···
 {
	struct rcar_gen3_chan *ch = dev_get_drvdata(dev);

-	if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch))
+	if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch))
		return -EIO;

	return sprintf(buf, "%s\n", rcar_gen3_is_host(ch) ? "host" :
···
 {
	void __iomem *usb2_base = ch->base;
	u32 val;
+
+	if (!ch->is_otg_channel || rcar_gen3_is_any_otg_rphy_initialized(ch))
+		return;

	/* Should not use functions of read-modify-write a register */
	val = readl(usb2_base + USB2_LINECTRL1);
···
		val = readl(usb2_base + USB2_ADPCTRL);
		writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL);
	}
-	msleep(20);
+	mdelay(20);

	writel(0xffffffff, usb2_base + USB2_OBINTSTA);
	writel(ch->obint_enable_bits, usb2_base + USB2_OBINTEN);
···
 {
	struct rcar_gen3_chan *ch = _ch;
	void __iomem *usb2_base = ch->base;
-	u32 status = readl(usb2_base + USB2_OBINTSTA);
+	struct device *dev = ch->dev;
	irqreturn_t ret = IRQ_NONE;
+	u32 status;

-	if (status & ch->obint_enable_bits) {
-		dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
-		writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA);
-		rcar_gen3_device_recognition(ch);
-		ret = IRQ_HANDLED;
+	pm_runtime_get_noresume(dev);
+
+	if (pm_runtime_suspended(dev))
+		goto rpm_put;
+
+	scoped_guard(spinlock, &ch->lock) {
+		status = readl(usb2_base + USB2_OBINTSTA);
+		if (status & ch->obint_enable_bits) {
+			dev_vdbg(dev, "%s: %08x\n", __func__, status);
+			writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA);
+			rcar_gen3_device_recognition(ch);
+			ret = IRQ_HANDLED;
+		}
	}

+rpm_put:
+	pm_runtime_put_noidle(dev);
	return ret;
 }
···
	struct rcar_gen3_chan *channel = rphy->ch;
	void __iomem *usb2_base = channel->base;
	u32 val;
-	int ret;

-	if (!rcar_gen3_is_any_rphy_initialized(channel) && channel->irq >= 0) {
-		INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
-		ret = request_irq(channel->irq, rcar_gen3_phy_usb2_irq,
-				  IRQF_SHARED, dev_name(channel->dev), channel);
-		if (ret < 0) {
-			dev_err(channel->dev, "No irq handler (%d)\n", channel->irq);
-			return ret;
-		}
-	}
+	guard(spinlock_irqsave)(&channel->lock);

	/* Initialize USB2 part */
	val = readl(usb2_base + USB2_INT_ENABLE);
	val |= USB2_INT_ENABLE_UCOM_INTEN | rphy->int_enable_bits;
	writel(val, usb2_base + USB2_INT_ENABLE);
-	writel(USB2_SPD_RSM_TIMSET_INIT, usb2_base + USB2_SPD_RSM_TIMSET);
-	writel(USB2_OC_TIMSET_INIT, usb2_base + USB2_OC_TIMSET);

-	/* Initialize otg part */
-	if (channel->is_otg_channel) {
-		if (rcar_gen3_needs_init_otg(channel))
-			rcar_gen3_init_otg(channel);
-		rphy->otg_initialized = true;
+	if (!rcar_gen3_is_any_rphy_initialized(channel)) {
+		writel(USB2_SPD_RSM_TIMSET_INIT, usb2_base + USB2_SPD_RSM_TIMSET);
+		writel(USB2_OC_TIMSET_INIT, usb2_base + USB2_OC_TIMSET);
	}
+
+	/* Initialize otg part (only if we initialize a PHY with IRQs). */
+	if (rphy->int_enable_bits)
+		rcar_gen3_init_otg(channel);

	rphy->initialized = true;
···
	void __iomem *usb2_base = channel->base;
	u32 val;

-	rphy->initialized = false;
+	guard(spinlock_irqsave)(&channel->lock);

-	if (channel->is_otg_channel)
-		rphy->otg_initialized = false;
+	rphy->initialized = false;

	val = readl(usb2_base + USB2_INT_ENABLE);
	val &= ~rphy->int_enable_bits;
	if (!rcar_gen3_is_any_rphy_initialized(channel))
		val &= ~USB2_INT_ENABLE_UCOM_INTEN;
	writel(val, usb2_base + USB2_INT_ENABLE);
-
-	if (channel->irq >= 0 && !rcar_gen3_is_any_rphy_initialized(channel))
-		free_irq(channel->irq, channel);

	return 0;
 }
···
	u32 val;
	int ret = 0;

-	mutex_lock(&channel->lock);
-	if (!rcar_gen3_are_all_rphys_power_off(channel))
-		goto out;
-
	if (channel->vbus) {
		ret = regulator_enable(channel->vbus);
		if (ret)
-			goto out;
+			return ret;
	}
+
+	guard(spinlock_irqsave)(&channel->lock);
+
+	if (!rcar_gen3_are_all_rphys_power_off(channel))
+		goto out;

	val = readl(usb2_base + USB2_USBCTR);
	val |= USB2_USBCTR_PLL_RST;
···
 out:
	/* The powered flag should be set for any other phys anyway */
	rphy->powered = true;
-	mutex_unlock(&channel->lock);

	return 0;
 }
···
	struct rcar_gen3_chan *channel = rphy->ch;
	int ret = 0;

-	mutex_lock(&channel->lock);
-	rphy->powered = false;
+	scoped_guard(spinlock_irqsave, &channel->lock) {
+		rphy->powered = false;

-	if (!rcar_gen3_are_all_rphys_power_off(channel))
-		goto out;
+		if (rcar_gen3_are_all_rphys_power_off(channel)) {
+			u32 val = readl(channel->base + USB2_USBCTR);
+
+			val |= USB2_USBCTR_PLL_RST;
+			writel(val, channel->base + USB2_USBCTR);
+		}
+	}

	if (channel->vbus)
		ret = regulator_disable(channel->vbus);
-
-out:
-	mutex_unlock(&channel->lock);

	return ret;
 }
···
	struct device *dev = &pdev->dev;
	struct rcar_gen3_chan *channel;
	struct phy_provider *provider;
-	int ret = 0, i;
+	int ret = 0, i, irq;

	if (!dev->of_node) {
		dev_err(dev, "This driver needs device tree\n");
···
		return PTR_ERR(channel->base);

	channel->obint_enable_bits = USB2_OBINT_BITS;
-	/* get irq number here and request_irq for OTG in phy_init */
-	channel->irq = platform_get_irq_optional(pdev, 0);
	channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
	if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
		channel->is_otg_channel = true;
···
	if (phy_data->no_adp_ctrl)
		channel->obint_enable_bits = USB2_OBINT_IDCHG_EN;

-	mutex_init(&channel->lock);
+	spin_lock_init(&channel->lock);
	for (i = 0; i < NUM_OF_PHYS; i++) {
		channel->rphys[i].phy = devm_phy_create(dev, NULL,
							phy_data->phy_usb2_ops);
···
			goto error;
		}
		channel->vbus = NULL;
+	}
+
+	irq = platform_get_irq_optional(pdev, 0);
+	if (irq < 0 && irq != -ENXIO) {
+		ret = irq;
+		goto error;
+	} else if (irq > 0) {
+		INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
+		ret = devm_request_irq(dev, irq, rcar_gen3_phy_usb2_irq,
+				       IRQF_SHARED, dev_name(dev), channel);
+		if (ret < 0) {
+			dev_err(dev, "Failed to request irq (%d)\n", irq);
+			goto error;
+		}
	}

	provider = devm_of_phy_provider_register(dev, rcar_gen3_phy_usb2_xlate);
+1-1
drivers/phy/rockchip/phy-rockchip-samsung-dcphy.c
···
		return ret;
	}

-	clk_prepare_enable(samsung->ref_clk);
+	ret = clk_prepare_enable(samsung->ref_clk);
	if (ret) {
		dev_err(samsung->dev, "Failed to enable reference clock, %d\n", ret);
		clk_disable_unprepare(samsung->pclk);
···

 static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
 {
-	struct of_regulator_match matches[MAX20086_MAX_REGULATORS] = { };
+	struct of_regulator_match *matches;
	struct device_node *node;
	unsigned int i;
	int ret;
···
		dev_err(chip->dev, "regulators node not found\n");
		return -ENODEV;
	}
+
+	matches = devm_kcalloc(chip->dev, chip->info->num_outputs,
+			       sizeof(*matches), GFP_KERNEL);
+	if (!matches)
+		return -ENOMEM;

	for (i = 0; i < chip->info->num_outputs; ++i)
		matches[i].name = max20086_output_names[i];
+5-1
drivers/scsi/sd_zbc.c
···
			  unsigned int nr_zones, size_t *buflen)
 {
	struct request_queue *q = sdkp->disk->queue;
+	unsigned int max_segments;
	size_t bufsize;
	void *buf;
···
	 * Furthermore, since the report zone command cannot be split, make
	 * sure that the allocated buffer can always be mapped by limiting the
	 * number of pages allocated to the HBA max segments limit.
+	 * Since max segments can be larger than the max inline bio vectors,
+	 * further limit the allocated buffer to BIO_MAX_INLINE_VECS.
	 */
	nr_zones = min(nr_zones, sdkp->zone_info.nr_zones);
	bufsize = roundup((nr_zones + 1) * 64, SECTOR_SIZE);
	bufsize = min_t(size_t, bufsize,
			queue_max_hw_sectors(q) << SECTOR_SHIFT);
-	bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
+	max_segments = min(BIO_MAX_INLINE_VECS, queue_max_segments(q));
+	bufsize = min_t(size_t, bufsize, max_segments << PAGE_SHIFT);

	while (bufsize >= SECTOR_SIZE) {
		buf = kvzalloc(bufsize, GFP_KERNEL | __GFP_NORETRY);
···
	set_bit(SDW_GROUP13_DEV_NUM, bus->assigned);
	set_bit(SDW_MASTER_DEV_NUM, bus->assigned);

+	ret = sdw_irq_create(bus, fwnode);
+	if (ret)
+		return ret;
+
	/*
	 * SDW is an enumerable bus, but devices can be powered off. So,
	 * they won't be able to report as present.
···

	if (ret < 0) {
		dev_err(bus->dev, "Finding slaves failed:%d\n", ret);
+		sdw_irq_delete(bus);
		return ret;
	}
···
	bus->params.curr_dr_freq = bus->params.max_dr_freq;
	bus->params.curr_bank = SDW_BANK0;
	bus->params.next_bank = SDW_BANK1;
-
-	ret = sdw_irq_create(bus, fwnode);
-	if (ret)
-		return ret;

	return 0;
 }
···
	else
		reg |= SUN4I_CTL_DHB;

+	/* Now that the settings are correct, enable the interface */
+	reg |= SUN4I_CTL_ENABLE;
+
	sun4i_spi_write(sspi, SUN4I_CTL_REG, reg);

	/* Ensure that we have a parent clock fast enough */
···
	}

	sun4i_spi_write(sspi, SUN4I_CTL_REG,
-			SUN4I_CTL_ENABLE | SUN4I_CTL_MASTER | SUN4I_CTL_TP);
+			SUN4I_CTL_MASTER | SUN4I_CTL_TP);

	return 0;
···
	struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
	struct elf_phdr *elf_property_phdata = NULL;
	unsigned long elf_brk;
+	bool brk_moved = false;
	int retval, i;
	unsigned long elf_entry;
	unsigned long e_entry;
···
	/* Calculate any requested alignment. */
	alignment = maximum_alignment(elf_phdata, elf_ex->e_phnum);

-	/*
-	 * There are effectively two types of ET_DYN
-	 * binaries: programs (i.e. PIE: ET_DYN with PT_INTERP)
-	 * and loaders (ET_DYN without PT_INTERP, since they
-	 * _are_ the ELF interpreter). The loaders must
-	 * be loaded away from programs since the program
-	 * may otherwise collide with the loader (especially
-	 * for ET_EXEC which does not have a randomized
-	 * position). For example to handle invocations of
+	/**
+	 * DOC: PIE handling
+	 *
+	 * There are effectively two types of ET_DYN ELF
+	 * binaries: programs (i.e. PIE: ET_DYN with
+	 * PT_INTERP) and loaders (i.e. static PIE: ET_DYN
+	 * without PT_INTERP, usually the ELF interpreter
+	 * itself). Loaders must be loaded away from programs
+	 * since the program may otherwise collide with the
+	 * loader (especially for ET_EXEC which does not have
+	 * a randomized position).
+	 *
+	 * For example, to handle invocations of
	 * "./ld.so someprog" to test out a new version of
	 * the loader, the subsequent program that the
	 * loader loads must avoid the loader itself, so
···
	 * ELF_ET_DYN_BASE and loaders are loaded into the
	 * independently randomized mmap region (0 load_bias
	 * without MAP_FIXED nor MAP_FIXED_NOREPLACE).
+	 *
+	 * See below for "brk" handling details, which is
+	 * also affected by program vs loader and ASLR.
	 */
	if (interpreter) {
		/* On ET_DYN with PT_INTERP, we do the ASLR. */
···
	start_data += load_bias;
	end_data += load_bias;

-	current->mm->start_brk = current->mm->brk = ELF_PAGEALIGN(elf_brk);
-
	if (interpreter) {
		elf_entry = load_elf_interp(interp_elf_ex,
					    interpreter,
···
	mm->end_data = end_data;
	mm->start_stack = bprm->p;

-	if ((current->flags & PF_RANDOMIZE) && (snapshot_randomize_va_space > 1)) {
+	/**
+	 * DOC: "brk" handling
+	 *
+	 * For architectures with ELF randomization, when executing a
+	 * loader directly (i.e. static PIE: ET_DYN without PT_INTERP),
+	 * move the brk area out of the mmap region and into the unused
+	 * ELF_ET_DYN_BASE region. Since "brk" grows up it may collide
+	 * early with the stack growing down or other regions being put
+	 * into the mmap region by the kernel (e.g. vdso).
+	 *
+	 * In the CONFIG_COMPAT_BRK case, though, everything is turned
+	 * off because we're not allowed to move the brk at all.
+	 */
+	if (!IS_ENABLED(CONFIG_COMPAT_BRK) &&
+	    IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
+	    elf_ex->e_type == ET_DYN && !interpreter) {
+		elf_brk = ELF_ET_DYN_BASE;
+		/* This counts as moving the brk, so let brk(2) know. */
+		brk_moved = true;
+	}
+	mm->start_brk = mm->brk = ELF_PAGEALIGN(elf_brk);
+
+	if ((current->flags & PF_RANDOMIZE) && snapshot_randomize_va_space > 1) {
		/*
-		 * For architectures with ELF randomization, when executing
-		 * a loader directly (i.e. no interpreter listed in ELF
-		 * headers), move the brk area out of the mmap region
-		 * (since it grows up, and may collide early with the stack
-		 * growing down), and into the unused ELF_ET_DYN_BASE region.
+		 * If we didn't move the brk to ELF_ET_DYN_BASE (above),
+		 * leave a gap between .bss and brk.
		 */
-		if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
-		    elf_ex->e_type == ET_DYN && !interpreter) {
-			mm->brk = mm->start_brk = ELF_ET_DYN_BASE;
-		} else {
-			/* Otherwise leave a gap between .bss and brk. */
+		if (!brk_moved)
			mm->brk = mm->start_brk = mm->brk + PAGE_SIZE;
-		}

		mm->brk = mm->start_brk = arch_randomize_brk(mm);
+		brk_moved = true;
+	}
+
 #ifdef compat_brk_randomized
+	if (brk_moved)
		current->brk_randomized = 1;
 #endif
-	}

	if (current->personality & MMAP_PAGE_ZERO) {
		/* Why this, you ask??? Well SVr4 maps page 0 as read-only,
+15-2
fs/btrfs/discard.c
···
			  struct btrfs_block_group *block_group)
 {
	lockdep_assert_held(&discard_ctl->lock);
-	if (!btrfs_run_discard_work(discard_ctl))
-		return;

	if (list_empty(&block_group->discard_list) ||
	    block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED) {
···
			struct btrfs_block_group *block_group)
 {
	if (!btrfs_is_block_group_data_only(block_group))
+		return;
+
+	if (!btrfs_run_discard_work(discard_ctl))
		return;

	spin_lock(&discard_ctl->lock);
···
	    block_group->used != 0) {
		if (btrfs_is_block_group_data_only(block_group)) {
			__add_to_discard_list(discard_ctl, block_group);
+			/*
+			 * The block group must have been moved to other
+			 * discard list even if discard was disabled in
+			 * the meantime or a transaction abort happened,
+			 * otherwise we can end up in an infinite loop,
+			 * always jumping into the 'again' label and
+			 * keep getting this block group over and over
+			 * in case there are no other block groups in
+			 * the discard lists.
+			 */
+			ASSERT(block_group->discard_index !=
+			       BTRFS_DISCARD_INDEX_UNUSED);
		} else {
			list_del_init(&block_group->discard_list);
			btrfs_put_block_group(block_group);
···
	struct extent_state *cached = NULL;
	struct extent_map *em;
	int ret = 0;
+	bool free_pages = false;
	u64 start = async_extent->start;
	u64 end = async_extent->start + async_extent->ram_size - 1;
···
	}

	if (async_extent->compress_type == BTRFS_COMPRESS_NONE) {
+		ASSERT(!async_extent->folios);
+		ASSERT(async_extent->nr_folios == 0);
		submit_uncompressed_range(inode, async_extent, locked_folio);
+		free_pages = true;
		goto done;
	}
···
	 * fall back to uncompressed.
	 */
	submit_uncompressed_range(inode, async_extent, locked_folio);
+	free_pages = true;
	goto done;
···
 done:
	if (async_chunk->blkcg_css)
		kthread_associate_blkcg(NULL);
+	if (free_pages)
+		free_async_extent_pages(async_extent);
	kfree(async_extent);
	return;
+4
fs/btrfs/super.c
···
		break;
	case Opt_commit_interval:
		ctx->commit_interval = result.uint_32;
+		if (ctx->commit_interval > BTRFS_WARNING_COMMIT_INTERVAL) {
+			btrfs_warn(NULL, "excessive commit interval %u, use with care",
+				   ctx->commit_interval);
+		}
		if (ctx->commit_interval == 0)
			ctx->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
		break;
+1-3
fs/buffer.c
···
	/* FIXME: do we need to set this in both places? */
	if (bh->b_folio && bh->b_folio->mapping)
		mapping_set_error(bh->b_folio->mapping, -EIO);
-	if (bh->b_assoc_map) {
+	if (bh->b_assoc_map)
		mapping_set_error(bh->b_assoc_map, -EIO);
-		errseq_set(&bh->b_assoc_map->host->i_sb->s_wb_err, -EIO);
-	}
 }
 EXPORT_SYMBOL(mark_buffer_write_io_error);
···
	struct nfs4_pnfs_ds_addr *da;
	struct nfs4_ff_layout_ds *new_ds = NULL;
	struct nfs4_ff_ds_version *ds_versions = NULL;
+	struct net *net = server->nfs_client->cl_net;
	u32 mp_count;
	u32 version_count;
	__be32 *p;
···
	for (i = 0; i < mp_count; i++) {
		/* multipath ds */
-		da = nfs4_decode_mp_ds_addr(server->nfs_client->cl_net,
-					    &stream, gfp_flags);
+		da = nfs4_decode_mp_ds_addr(net, &stream, gfp_flags);
		if (da)
			list_add_tail(&da->da_node, &dsaddrs);
	}
···
	new_ds->ds_versions = ds_versions;
	new_ds->ds_versions_cnt = version_count;

-	new_ds->ds = nfs4_pnfs_ds_add(&dsaddrs, gfp_flags);
+	new_ds->ds = nfs4_pnfs_ds_add(net, &dsaddrs, gfp_flags);
	if (!new_ds->ds)
		goto out_err_drain_dsaddrs;
+1-1
fs/nfs/localio.c
···
	new = __nfs_local_open_fh(clp, cred, fh, nfl, mode);
	if (IS_ERR(new))
		return NULL;
+	rcu_read_lock();
	/* try to swap in the pointer */
	spin_lock(&clp->cl_uuid.lock);
	nf = rcu_dereference_protected(*pnf, 1);
···
			rcu_assign_pointer(*pnf, nf);
		}
		spin_unlock(&clp->cl_uuid.lock);
-		rcu_read_lock();
	}
	nf = nfs_local_file_get(nf);
	rcu_read_unlock();
+5-1
fs/nfs/netns.h
···
	unsigned short nfs_callback_tcpport;
	unsigned short nfs_callback_tcpport6;
	int cb_users[NFS4_MAX_MINOR_VERSION + 1];
-#endif
+#endif /* CONFIG_NFS_V4 */
+#if IS_ENABLED(CONFIG_NFS_V4_1)
+	struct list_head nfs4_data_server_cache;
+	spinlock_t nfs4_data_server_lock;
+#endif /* CONFIG_NFS_V4_1 */
	struct nfs_netns_client *nfs_client;
	spinlock_t nfs_client_lock;
	ktime_t boot_time;
+1-1
fs/nfs/nfs3acl.c
···

	switch (status) {
	case 0:
-		status = nfs_refresh_inode(inode, res.fattr);
+		nfs_refresh_inode(inode, res.fattr);
		break;
	case -EPFNOSUPPORT:
	case -EPROTONOSUPPORT:
···
 #include "nfs4session.h"
 #include "internal.h"
 #include "pnfs.h"
+#include "netns.h"

 #define NFSDBG_FACILITY		NFSDBG_PNFS
···
 /*
  * Data server cache
  *
- * Data servers can be mapped to different device ids.
- * nfs4_pnfs_ds reference counting
+ * Data servers can be mapped to different device ids, but should
+ * never be shared between net namespaces.
+ *
+ * nfs4_pnfs_ds reference counting:
  *   - set to 1 on allocation
  *   - incremented when a device id maps a data server already in the cache.
  *   - decremented when deviceid is removed from the cache.
  */
-static DEFINE_SPINLOCK(nfs4_ds_cache_lock);
-static LIST_HEAD(nfs4_data_server_cache);

 /* Debug routines */
 static void
···
  * Lookup DS by addresses. nfs4_ds_cache_lock is held
  */
 static struct nfs4_pnfs_ds *
-_data_server_lookup_locked(const struct list_head *dsaddrs)
+_data_server_lookup_locked(const struct nfs_net *nn, const struct list_head *dsaddrs)
 {
	struct nfs4_pnfs_ds *ds;

-	list_for_each_entry(ds, &nfs4_data_server_cache, ds_node)
+	list_for_each_entry(ds, &nn->nfs4_data_server_cache, ds_node)
		if (_same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs))
			return ds;
	return NULL;
···

 void nfs4_pnfs_ds_put(struct nfs4_pnfs_ds *ds)
 {
-	if (refcount_dec_and_lock(&ds->ds_count,
-				&nfs4_ds_cache_lock)) {
+	struct nfs_net *nn = net_generic(ds->ds_net, nfs_net_id);
+
+	if (refcount_dec_and_lock(&ds->ds_count, &nn->nfs4_data_server_lock)) {
		list_del_init(&ds->ds_node);
-		spin_unlock(&nfs4_ds_cache_lock);
+		spin_unlock(&nn->nfs4_data_server_lock);
		destroy_ds(ds);
	}
 }
···
  * uncached and return cached struct nfs4_pnfs_ds.
  */
 struct nfs4_pnfs_ds *
-nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags)
+nfs4_pnfs_ds_add(const struct net *net, struct list_head *dsaddrs, gfp_t gfp_flags)
 {
+	struct nfs_net *nn = net_generic(net, nfs_net_id);
	struct nfs4_pnfs_ds *tmp_ds, *ds = NULL;
	char *remotestr;
···
	/* this is only used for debugging, so it's ok if its NULL */
	remotestr = nfs4_pnfs_remotestr(dsaddrs, gfp_flags);

-	spin_lock(&nfs4_ds_cache_lock);
-	tmp_ds = _data_server_lookup_locked(dsaddrs);
+	spin_lock(&nn->nfs4_data_server_lock);
+	tmp_ds = _data_server_lookup_locked(nn, dsaddrs);
	if (tmp_ds == NULL) {
		INIT_LIST_HEAD(&ds->ds_addrs);
		list_splice_init(dsaddrs, &ds->ds_addrs);
		ds->ds_remotestr = remotestr;
		refcount_set(&ds->ds_count, 1);
		INIT_LIST_HEAD(&ds->ds_node);
+		ds->ds_net = net;
		ds->ds_clp = NULL;
-		list_add(&ds->ds_node, &nfs4_data_server_cache);
+		list_add(&ds->ds_node, &nn->nfs4_data_server_cache);
		dprintk("%s add new data server %s\n", __func__,
			ds->ds_remotestr);
	} else {
···
			refcount_read(&tmp_ds->ds_count));
		ds = tmp_ds;
	}
-	spin_unlock(&nfs4_ds_cache_lock);
+	spin_unlock(&nn->nfs4_data_server_lock);
 out:
	return ds;
 }
···
	/* Eventually save off posix specific response info and timestamps */

 err_free_rsp_buf:
-	free_rsp_buf(resp_buftype, rsp);
+	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
	kfree(pc_buf);
 err_free_req:
	cifs_small_buf_release(req);
+1-1
fs/udf/truncate.c
···
	}
	/* This inode entry is in-memory only and thus we don't have to mark
	 * the inode dirty */
-	if (ret == 0)
+	if (ret >= 0)
		iinfo->i_lenExtents = inode->i_size;
	brelse(epos.bh);
 }
+24
fs/xattr.c
···
	return !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN);
 }

+static bool xattr_is_maclabel(const char *name)
+{
+	const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
+
+	return !strncmp(name, XATTR_SECURITY_PREFIX,
+			XATTR_SECURITY_PREFIX_LEN) &&
+		security_ismaclabel(suffix);
+}
+
 /**
  * simple_xattr_list - list all xattr objects
  * @inode: inode from which to get the xattrs
···
	if (err)
		return err;

+	err = security_inode_listsecurity(inode, buffer, remaining_size);
+	if (err < 0)
+		return err;
+
+	if (buffer) {
+		if (remaining_size < err)
+			return -ERANGE;
+		buffer += err;
+	}
+	remaining_size -= err;
+
	read_lock(&xattrs->lock);
	for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
		xattr = rb_entry(rbp, struct simple_xattr, rb_node);

		/* skip "trusted." attributes for unprivileged callers */
		if (!trusted && xattr_is_trusted(xattr->name))
+			continue;
+
+		/* skip MAC labels; these are provided by LSM above */
+		if (xattr_is_maclabel(xattr->name))
			continue;

		err = xattr_list_one(&buffer, &remaining_size, xattr->name);
+27-1
fs/xfs/xfs_super.c
···
	return 0;

 free_freecounters:
-	while (--i > 0)
+	while (--i >= 0)
		percpu_counter_destroy(&mp->m_free[i].count);
	percpu_counter_destroy(&mp->m_delalloc_rtextents);
 free_delalloc:
···
	if (error)
		return error;

+	/* attr2 -> noattr2 */
+	if (xfs_has_noattr2(new_mp)) {
+		if (xfs_has_crc(mp)) {
+			xfs_warn(mp,
+				"attr2 is always enabled for a V5 filesystem - can't be changed.");
+			return -EINVAL;
+		}
+		mp->m_features &= ~XFS_FEAT_ATTR2;
+		mp->m_features |= XFS_FEAT_NOATTR2;
+	} else if (xfs_has_attr2(new_mp)) {
+		/* noattr2 -> attr2 */
+		mp->m_features &= ~XFS_FEAT_NOATTR2;
+		mp->m_features |= XFS_FEAT_ATTR2;
+	}
+
	/* inode32 -> inode64 */
	if (xfs_has_small_inums(mp) && !xfs_has_small_inums(new_mp)) {
		mp->m_features &= ~XFS_FEAT_SMALL_INUMS;
···
		mp->m_features |= XFS_FEAT_SMALL_INUMS;
		mp->m_maxagi = xfs_set_inode_alloc(mp, mp->m_sb.sb_agcount);
	}
+
+	/*
+	 * Now that mp has been modified according to the remount options, we
+	 * do a final option validation with xfs_finish_flags() just like it is
+	 * done during mount. We cannot use xfs_finish_flags() on new_mp as it
+	 * contains only the user given options.
+	 */
+	error = xfs_finish_flags(mp);
+	if (error)
+		return error;

	/* ro -> rw */
	if (xfs_is_readonly(mp) && !(flags & SB_RDONLY)) {
+18-16
fs/xfs/xfs_trans_ail.c
···
 }

 /*
- * Delete the given item from the AIL. Return a pointer to the item.
+ * Delete the given item from the AIL.
  */
 static void
 xfs_ail_delete(
···
 }

 /*
- * xfs_trans_ail_update - bulk AIL insertion operation.
+ * xfs_trans_ail_update_bulk - bulk AIL insertion operation.
  *
- * @xfs_trans_ail_update takes an array of log items that all need to be
+ * @xfs_trans_ail_update_bulk takes an array of log items that all need to be
  * positioned at the same LSN in the AIL. If an item is not in the AIL, it will
- * be added. Otherwise, it will be repositioned by removing it and re-adding
- * it to the AIL. If we move the first item in the AIL, update the log tail to
- * match the new minimum LSN in the AIL.
+ * be added. Otherwise, it will be repositioned by removing it and re-adding
+ * it to the AIL.
  *
- * This function takes the AIL lock once to execute the update operations on
- * all the items in the array, and as such should not be called with the AIL
- * lock held. As a result, once we have the AIL lock, we need to check each log
- * item LSN to confirm it needs to be moved forward in the AIL.
+ * If we move the first item in the AIL, update the log tail to match the new
+ * minimum LSN in the AIL.
  *
- * To optimise the insert operation, we delete all the items from the AIL in
- * the first pass, moving them into a temporary list, then splice the temporary
- * list into the correct position in the AIL. This avoids needing to do an
- * insert operation on every item.
+ * This function should be called with the AIL lock held.
  *
- * This function must be called with the AIL lock held. The lock is dropped
- * before returning.
+ * To optimise the insert operation, we add all items to a temporary list, then
+ * splice this list into the correct position in the AIL.
+ *
+ * Items that are already in the AIL are first deleted from their current
+ * location before being added to the temporary list.
+ *
+ * This avoids needing to do an insert operation on every item.
+ *
+ * The AIL lock is dropped by xfs_ail_update_finish() before returning to
+ * the caller.
  */
 void
 xfs_trans_ail_update_bulk(
···
  * @ops: Pointer to the operations structure for GPU SVM device memory
  * @dpagemap: The struct drm_pagemap of the pages this allocation belongs to.
  * @size: Size of device memory allocation
+ * @timeslice_expiration: Timeslice expiration in jiffies
  */
 struct drm_gpusvm_devmem {
 	struct device *dev;
···
 	const struct drm_gpusvm_devmem_ops *ops;
 	struct drm_pagemap *dpagemap;
 	size_t size;
+	u64 timeslice_expiration;
 };

 /**
···
 };

 /**
+ * struct drm_gpusvm_range_flags - Structure representing a GPU SVM range flags
+ *
+ * @migrate_devmem: Flag indicating whether the range can be migrated to device memory
+ * @unmapped: Flag indicating if the range has been unmapped
+ * @partial_unmap: Flag indicating if the range has been partially unmapped
+ * @has_devmem_pages: Flag indicating if the range has devmem pages
+ * @has_dma_mapping: Flag indicating if the range has a DMA mapping
+ * @__flags: Flags for range in u16 form (used for READ_ONCE)
+ */
+struct drm_gpusvm_range_flags {
+	union {
+		struct {
+			/* All flags below must be set upon creation */
+			u16 migrate_devmem : 1;
+			/* All flags below must be set / cleared under notifier lock */
+			u16 unmapped : 1;
+			u16 partial_unmap : 1;
+			u16 has_devmem_pages : 1;
+			u16 has_dma_mapping : 1;
+		};
+		u16 __flags;
+	};
+};
+
+/**
  * struct drm_gpusvm_range - Structure representing a GPU SVM range
  *
  * @gpusvm: Pointer to the GPU SVM structure
···
  * @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
  * Note this is assuming only one drm_pagemap per range is allowed.
  * @flags: Flags for range
- * @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
- * @flags.unmapped: Flag indicating if the range has been unmapped
- * @flags.partial_unmap: Flag indicating if the range has been partially unmapped
- * @flags.has_devmem_pages: Flag indicating if the range has devmem pages
- * @flags.has_dma_mapping: Flag indicating if the range has a DMA mapping
  *
  * This structure represents a GPU SVM range used for tracking memory ranges
  * mapped in a DRM device.
···
 	unsigned long notifier_seq;
 	struct drm_pagemap_device_addr *dma_addr;
 	struct drm_pagemap *dpagemap;
-	struct {
-		/* All flags below must be set upon creation */
-		u16 migrate_devmem : 1;
-		/* All flags below must be set / cleared under notifier lock */
-		u16 unmapped : 1;
-		u16 partial_unmap : 1;
-		u16 has_devmem_pages : 1;
-		u16 has_dma_mapping : 1;
-	} flags;
+	struct drm_gpusvm_range_flags flags;
 };

 /**
···
  * @check_pages_threshold: Check CPU pages for present if chunk is less than or
  *                         equal to threshold. If not present, reduce chunk
  *                         size.
+ * @timeslice_ms: The timeslice MS which in minimum time a piece of memory
+ *                remains with either exclusive GPU or CPU access.
  * @in_notifier: entering from a MMU notifier
  * @read_only: operating on read-only memory
  * @devmem_possible: possible to use device memory
+ * @devmem_only: use only device memory
  *
  * Context that is DRM GPUSVM is operating in (i.e. user arguments).
  */
 struct drm_gpusvm_ctx {
 	unsigned long check_pages_threshold;
+	unsigned long timeslice_ms;
 	unsigned int in_notifier :1;
 	unsigned int read_only :1;
 	unsigned int devmem_possible :1;
+	unsigned int devmem_only :1;
 };

 int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
···
 }
 #endif

-/*
- * Caller holds a reference to the file already, we don't need to do
- * anything else to get an extra reference.
- */
-__cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 {
-	struct io_ring_ctx *ctx = file->private_data;
 	struct io_overflow_cqe *ocqe;
 	struct io_rings *r = ctx->rings;
 	struct rusage sq_usage;
···
 	unsigned int sq_entries, cq_entries;
 	int sq_pid = -1, sq_cpu = -1;
 	u64 sq_total_time = 0, sq_work_time = 0;
-	bool has_lock;
 	unsigned int i;

 	if (ctx->flags & IORING_SETUP_CQE32)
···
 		seq_printf(m, "\n");
 	}

-	/*
-	 * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
-	 * since fdinfo case grabs it in the opposite direction of normal use
-	 * cases. If we fail to get the lock, we just don't iterate any
-	 * structures that could be going away outside the io_uring mutex.
-	 */
-	has_lock = mutex_trylock(&ctx->uring_lock);
-
-	if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
+	if (ctx->flags & IORING_SETUP_SQPOLL) {
 		struct io_sq_data *sq = ctx->sq_data;

 		/*
···
 	seq_printf(m, "SqTotalTime:\t%llu\n", sq_total_time);
 	seq_printf(m, "SqWorkTime:\t%llu\n", sq_work_time);
 	seq_printf(m, "UserFiles:\t%u\n", ctx->file_table.data.nr);
-	for (i = 0; has_lock && i < ctx->file_table.data.nr; i++) {
+	for (i = 0; i < ctx->file_table.data.nr; i++) {
 		struct file *f = NULL;

 		if (ctx->file_table.data.nodes[i])
···
 		}
 	}
 	seq_printf(m, "UserBufs:\t%u\n", ctx->buf_table.nr);
-	for (i = 0; has_lock && i < ctx->buf_table.nr; i++) {
+	for (i = 0; i < ctx->buf_table.nr; i++) {
 		struct io_mapped_ubuf *buf = NULL;

 		if (ctx->buf_table.nodes[i])
···
 		else
 			seq_printf(m, "%5u: <none>\n", i);
 	}
-	if (has_lock && !xa_empty(&ctx->personalities)) {
+	if (!xa_empty(&ctx->personalities)) {
 		unsigned long index;
 		const struct cred *cred;
···
 	}

 	seq_puts(m, "PollList:\n");
-	for (i = 0; has_lock && i < (1U << ctx->cancel_table.hash_bits); i++) {
+	for (i = 0; i < (1U << ctx->cancel_table.hash_bits); i++) {
 		struct io_hash_bucket *hb = &ctx->cancel_table.hbs[i];
 		struct io_kiocb *req;
···
 		seq_printf(m, "  op=%d, task_works=%d\n", req->opcode,
 			   task_work_pending(req->tctx->task));
 	}
-
-	if (has_lock)
-		mutex_unlock(&ctx->uring_lock);

 	seq_puts(m, "CqOverflowList:\n");
 	spin_lock(&ctx->completion_lock);
···
 	}
 	spin_unlock(&ctx->completion_lock);
 	napi_show_fdinfo(ctx, m);
+}
+
+/*
+ * Caller holds a reference to the file already, we don't need to do
+ * anything else to get an extra reference.
+ */
+__cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+{
+	struct io_ring_ctx *ctx = file->private_data;
+
+	/*
+	 * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
+	 * since fdinfo case grabs it in the opposite direction of normal use
+	 * cases.
+	 */
+	if (mutex_trylock(&ctx->uring_lock)) {
+		__io_uring_show_fdinfo(ctx, m);
+		mutex_unlock(&ctx->uring_lock);
+	}
 }
 #endif
+1-1
io_uring/memmap.c
···
 	void *ptr;

 	if (io_check_coalesce_buffer(mr->pages, mr->nr_pages, &ifd)) {
-		if (ifd.nr_folios == 1) {
+		if (ifd.nr_folios == 1 && !PageHighMem(mr->pages[0])) {
 			mr->ptr = page_address(mr->pages[0]);
 			return 0;
 		}
+5
io_uring/uring_cmd.c
···
 			return -EOPNOTSUPP;
 		issue_flags |= IO_URING_F_IOPOLL;
 		req->iopoll_completed = 0;
+		if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
+			/* make sure every req only blocks once */
+			req->flags &= ~REQ_F_IOPOLL_STATE;
+			req->iopoll_start = ktime_get_ns();
+		}
 	}

 	ret = file->f_op->uring_cmd(ioucmd, issue_flags);
+4-2
kernel/cgroup/cpuset.c
···
 	if (top_cs) {
 		/*
-		 * Percpu kthreads in top_cpuset are ignored
+		 * PF_NO_SETAFFINITY tasks are ignored.
+		 * All per cpu kthreads should have PF_NO_SETAFFINITY
+		 * flag set, see kthread_set_per_cpu().
 		 */
-		if (kthread_is_per_cpu(task))
+		if (task->flags & PF_NO_SETAFFINITY)
 			continue;
 		cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
 	} else {
+5-4
kernel/fork.c
···
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);

-	/* track_pfn_copy() will later take care of copying internal state. */
-	if (unlikely(new->vm_flags & VM_PFNMAP))
-		untrack_pfn_clear(new);
-
 	return new;
 }
···
 		tmp = vm_area_dup(mpnt);
 		if (!tmp)
 			goto fail_nomem;
+
+		/* track_pfn_copy() will later take care of copying internal state. */
+		if (unlikely(tmp->vm_flags & VM_PFNMAP))
+			untrack_pfn_clear(tmp);
+
 		retval = vma_dup_policy(mpnt, tmp);
 		if (retval)
 			goto fail_nomem_policy;
+5
kernel/module/Kconfig
···
 	depends on !DEBUG_INFO_REDUCED && !DEBUG_INFO_SPLIT
 	# Requires ELF object files.
 	depends on !LTO
+	# To avoid conflicts with the discarded __gendwarfksyms_ptr symbols on
+	# X86, requires pahole before commit 47dcb534e253 ("btf_encoder: Stop
+	# indexing symbols for VARs") or after commit 9810758003ce ("btf_encoder:
+	# Verify 0 address DWARF variables are in ELF section").
+	depends on !X86 || !DEBUG_INFO_BTF || PAHOLE_VERSION < 128 || PAHOLE_VERSION > 129
 	help
 	  Calculate symbol versions from DWARF debugging information using
 	  gendwarfksyms. Requires DEBUG_INFO to be enabled.
+123-68
kernel/sched/ext.c
···
 	current->scx.kf_mask &= ~mask;
 }

-#define SCX_CALL_OP(mask, op, args...) \
+/*
+ * Track the rq currently locked.
+ *
+ * This allows kfuncs to safely operate on rq from any scx ops callback,
+ * knowing which rq is already locked.
+ */
+static DEFINE_PER_CPU(struct rq *, locked_rq);
+
+static inline void update_locked_rq(struct rq *rq)
+{
+	/*
+	 * Check whether @rq is actually locked. This can help expose bugs
+	 * or incorrect assumptions about the context in which a kfunc or
+	 * callback is executed.
+	 */
+	if (rq)
+		lockdep_assert_rq_held(rq);
+	__this_cpu_write(locked_rq, rq);
+}
+
+/*
+ * Return the rq currently locked from an scx callback, or NULL if no rq is
+ * locked.
+ */
+static inline struct rq *scx_locked_rq(void)
+{
+	return __this_cpu_read(locked_rq);
+}
+
+#define SCX_CALL_OP(mask, op, rq, args...) \
 do { \
+	update_locked_rq(rq); \
 	if (mask) { \
 		scx_kf_allow(mask); \
 		scx_ops.op(args); \
···
 	} else { \
 		scx_ops.op(args); \
 	} \
+	update_locked_rq(NULL); \
 } while (0)

-#define SCX_CALL_OP_RET(mask, op, args...) \
+#define SCX_CALL_OP_RET(mask, op, rq, args...) \
 ({ \
 	__typeof__(scx_ops.op(args)) __ret; \
+	\
+	update_locked_rq(rq); \
 	if (mask) { \
 		scx_kf_allow(mask); \
 		__ret = scx_ops.op(args); \
···
 	} else { \
 		__ret = scx_ops.op(args); \
 	} \
+	update_locked_rq(NULL); \
 	__ret; \
 })
···
  * scx_kf_allowed_on_arg_tasks() to test whether the invocation is allowed on
  * the specific task.
  */
-#define SCX_CALL_OP_TASK(mask, op, task, args...) \
+#define SCX_CALL_OP_TASK(mask, op, rq, task, args...) \
 do { \
 	BUILD_BUG_ON((mask) & ~__SCX_KF_TERMINAL); \
 	current->scx.kf_tasks[0] = task; \
-	SCX_CALL_OP(mask, op, task, ##args); \
+	SCX_CALL_OP(mask, op, rq, task, ##args); \
 	current->scx.kf_tasks[0] = NULL; \
 } while (0)

-#define SCX_CALL_OP_TASK_RET(mask, op, task, args...) \
+#define SCX_CALL_OP_TASK_RET(mask, op, rq, task, args...) \
 ({ \
 	__typeof__(scx_ops.op(task, ##args)) __ret; \
 	BUILD_BUG_ON((mask) & ~__SCX_KF_TERMINAL); \
 	current->scx.kf_tasks[0] = task; \
-	__ret = SCX_CALL_OP_RET(mask, op, task, ##args); \
+	__ret = SCX_CALL_OP_RET(mask, op, rq, task, ##args); \
 	current->scx.kf_tasks[0] = NULL; \
 	__ret; \
 })

-#define SCX_CALL_OP_2TASKS_RET(mask, op, task0, task1, args...) \
+#define SCX_CALL_OP_2TASKS_RET(mask, op, rq, task0, task1, args...) \
 ({ \
 	__typeof__(scx_ops.op(task0, task1, ##args)) __ret; \
 	BUILD_BUG_ON((mask) & ~__SCX_KF_TERMINAL); \
 	current->scx.kf_tasks[0] = task0; \
 	current->scx.kf_tasks[1] = task1; \
-	__ret = SCX_CALL_OP_RET(mask, op, task0, task1, ##args); \
+	__ret = SCX_CALL_OP_RET(mask, op, rq, task0, task1, ##args); \
 	current->scx.kf_tasks[0] = NULL; \
 	current->scx.kf_tasks[1] = NULL; \
 	__ret; \
 })
···
 	WARN_ON_ONCE(*ddsp_taskp);
 	*ddsp_taskp = p;

-	SCX_CALL_OP_TASK(SCX_KF_ENQUEUE, enqueue, p, enq_flags);
+	SCX_CALL_OP_TASK(SCX_KF_ENQUEUE, enqueue, rq, p, enq_flags);

 	*ddsp_taskp = NULL;
 	if (p->scx.ddsp_dsq_id != SCX_DSQ_INVALID)
···
 	add_nr_running(rq, 1);

 	if (SCX_HAS_OP(runnable) && !task_on_rq_migrating(p))
-		SCX_CALL_OP_TASK(SCX_KF_REST, runnable, p, enq_flags);
+		SCX_CALL_OP_TASK(SCX_KF_REST, runnable, rq, p, enq_flags);

 	if (enq_flags & SCX_ENQ_WAKEUP)
 		touch_core_sched(rq, p);
···
 	__scx_add_event(SCX_EV_SELECT_CPU_FALLBACK, 1);
 }

-static void ops_dequeue(struct task_struct *p, u64 deq_flags)
+static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
 {
 	unsigned long opss;
···
 		BUG();
 	case SCX_OPSS_QUEUED:
 		if (SCX_HAS_OP(dequeue))
-			SCX_CALL_OP_TASK(SCX_KF_REST, dequeue, p, deq_flags);
+			SCX_CALL_OP_TASK(SCX_KF_REST, dequeue, rq, p, deq_flags);

 		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
 					    SCX_OPSS_NONE))
···
 		return true;
 	}

-	ops_dequeue(p, deq_flags);
+	ops_dequeue(rq, p, deq_flags);

 	/*
 	 * A currently running task which is going off @rq first gets dequeued
···
 	 */
 	if (SCX_HAS_OP(stopping) && task_current(rq, p)) {
 		update_curr_scx(rq);
-		SCX_CALL_OP_TASK(SCX_KF_REST, stopping, p, false);
+		SCX_CALL_OP_TASK(SCX_KF_REST, stopping, rq, p, false);
 	}

 	if (SCX_HAS_OP(quiescent) && !task_on_rq_migrating(p))
-		SCX_CALL_OP_TASK(SCX_KF_REST, quiescent, p, deq_flags);
+		SCX_CALL_OP_TASK(SCX_KF_REST, quiescent, rq, p, deq_flags);

 	if (deq_flags & SCX_DEQ_SLEEP)
 		p->scx.flags |= SCX_TASK_DEQD_FOR_SLEEP;
···
 	struct task_struct *p = rq->curr;

 	if (SCX_HAS_OP(yield))
-		SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, yield, p, NULL);
+		SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, yield, rq, p, NULL);
 	else
 		p->scx.slice = 0;
 }
···
 	struct task_struct *from = rq->curr;

 	if (SCX_HAS_OP(yield))
-		return SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, yield, from, to);
+		return SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, yield, rq, from, to);
 	else
 		return false;
 }
···
 	 * emitted in switch_class().
 	 */
 	if (SCX_HAS_OP(cpu_acquire))
-		SCX_CALL_OP(SCX_KF_REST, cpu_acquire, cpu_of(rq), NULL);
+		SCX_CALL_OP(SCX_KF_REST, cpu_acquire, rq, cpu_of(rq), NULL);
 	rq->scx.cpu_released = false;
 }
···
 	do {
 		dspc->nr_tasks = 0;

-		SCX_CALL_OP(SCX_KF_DISPATCH, dispatch, cpu_of(rq),
+		SCX_CALL_OP(SCX_KF_DISPATCH, dispatch, rq, cpu_of(rq),
 			    prev_on_scx ? prev : NULL);

 		flush_dispatch_buf(rq);
···
 		 * Core-sched might decide to execute @p before it is
 		 * dispatched. Call ops_dequeue() to notify the BPF scheduler.
 		 */
-		ops_dequeue(p, SCX_DEQ_CORE_SCHED_EXEC);
+		ops_dequeue(rq, p, SCX_DEQ_CORE_SCHED_EXEC);
 		dispatch_dequeue(rq, p);
 	}
···

 	/* see dequeue_task_scx() on why we skip when !QUEUED */
 	if (SCX_HAS_OP(running) && (p->scx.flags & SCX_TASK_QUEUED))
-		SCX_CALL_OP_TASK(SCX_KF_REST, running, p);
+		SCX_CALL_OP_TASK(SCX_KF_REST, running, rq, p);

 	clr_task_runnable(p, true);
···
 			.task = next,
 		};

-		SCX_CALL_OP(SCX_KF_CPU_RELEASE,
-			    cpu_release, cpu_of(rq), &args);
+		SCX_CALL_OP(SCX_KF_CPU_RELEASE, cpu_release, rq, cpu_of(rq), &args);
 	}
 	rq->scx.cpu_released = true;
 }
···

 	/* see dequeue_task_scx() on why we skip when !QUEUED */
 	if (SCX_HAS_OP(stopping) && (p->scx.flags & SCX_TASK_QUEUED))
-		SCX_CALL_OP_TASK(SCX_KF_REST, stopping, p, true);
+		SCX_CALL_OP_TASK(SCX_KF_REST, stopping, rq, p, true);

 	if (p->scx.flags & SCX_TASK_QUEUED) {
 		set_task_runnable(rq, p);
···
 	 * verifier.
 	 */
 	if (SCX_HAS_OP(core_sched_before) && !scx_rq_bypassing(task_rq(a)))
-		return SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, core_sched_before,
+		return SCX_CALL_OP_2TASKS_RET(SCX_KF_REST, core_sched_before, NULL,
 					      (struct task_struct *)a,
 					      (struct task_struct *)b);
 	else
···
 	*ddsp_taskp = p;

 	cpu = SCX_CALL_OP_TASK_RET(SCX_KF_ENQUEUE | SCX_KF_SELECT_CPU,
-				   select_cpu, p, prev_cpu, wake_flags);
+				   select_cpu, NULL, p, prev_cpu, wake_flags);
 	p->scx.selected_cpu = cpu;
 	*ddsp_taskp = NULL;
 	if (ops_cpu_valid(cpu, "from ops.select_cpu()"))
···
 	 * designation pointless. Cast it away when calling the operation.
 	 */
 	if (SCX_HAS_OP(set_cpumask))
-		SCX_CALL_OP_TASK(SCX_KF_REST, set_cpumask, p,
-				 (struct cpumask *)p->cpus_ptr);
+		SCX_CALL_OP_TASK(SCX_KF_REST, set_cpumask, NULL,
+				 p, (struct cpumask *)p->cpus_ptr);
 }

 static void handle_hotplug(struct rq *rq, bool online)
···
 	scx_idle_update_selcpu_topology(&scx_ops);

 	if (online && SCX_HAS_OP(cpu_online))
-		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, NULL, cpu);
 	else if (!online && SCX_HAS_OP(cpu_offline))
-		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_offline, cpu);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_offline, NULL, cpu);
 	else
 		scx_ops_exit(SCX_ECODE_ACT_RESTART | SCX_ECODE_RSN_HOTPLUG,
 			     "cpu %d going %s, exiting scheduler", cpu,
···
 		curr->scx.slice = 0;
 		touch_core_sched(rq, curr);
 	} else if (SCX_HAS_OP(tick)) {
-		SCX_CALL_OP_TASK(SCX_KF_REST, tick, curr);
+		SCX_CALL_OP_TASK(SCX_KF_REST, tick, rq, curr);
 	}

 	if (!curr->scx.slice)
···
 			.fork = fork,
 		};

-		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, init_task, p, &args);
+		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, init_task, NULL, p, &args);
 		if (unlikely(ret)) {
 			ret = ops_sanitize_err("init_task", ret);
 			return ret;
···

 static void scx_ops_enable_task(struct task_struct *p)
 {
+	struct rq *rq = task_rq(p);
 	u32 weight;

-	lockdep_assert_rq_held(task_rq(p));
+	lockdep_assert_rq_held(rq);

 	/*
 	 * Set the weight before calling ops.enable() so that the scheduler
···
 	p->scx.weight = sched_weight_to_cgroup(weight);

 	if (SCX_HAS_OP(enable))
-		SCX_CALL_OP_TASK(SCX_KF_REST, enable, p);
+		SCX_CALL_OP_TASK(SCX_KF_REST, enable, rq, p);
 	scx_set_task_state(p, SCX_TASK_ENABLED);

 	if (SCX_HAS_OP(set_weight))
-		SCX_CALL_OP_TASK(SCX_KF_REST, set_weight, p, p->scx.weight);
+		SCX_CALL_OP_TASK(SCX_KF_REST, set_weight, rq, p, p->scx.weight);
 }

 static void scx_ops_disable_task(struct task_struct *p)
 {
-	lockdep_assert_rq_held(task_rq(p));
+	struct rq *rq = task_rq(p);
+
+	lockdep_assert_rq_held(rq);
 	WARN_ON_ONCE(scx_get_task_state(p) != SCX_TASK_ENABLED);

 	if (SCX_HAS_OP(disable))
-		SCX_CALL_OP_TASK(SCX_KF_REST, disable, p);
+		SCX_CALL_OP_TASK(SCX_KF_REST, disable, rq, p);
 	scx_set_task_state(p, SCX_TASK_READY);
 }
···
 	}

 	if (SCX_HAS_OP(exit_task))
-		SCX_CALL_OP_TASK(SCX_KF_REST, exit_task, p, &args);
+		SCX_CALL_OP_TASK(SCX_KF_REST, exit_task, task_rq(p), p, &args);
 	scx_set_task_state(p, SCX_TASK_NONE);
 }
···

 	p->scx.weight = sched_weight_to_cgroup(scale_load_down(lw->weight));
 	if (SCX_HAS_OP(set_weight))
-		SCX_CALL_OP_TASK(SCX_KF_REST, set_weight, p, p->scx.weight);
+		SCX_CALL_OP_TASK(SCX_KF_REST, set_weight, rq, p, p->scx.weight);
 }

 static void prio_changed_scx(struct rq *rq, struct task_struct *p, int oldprio)
···
 	 * different scheduler class. Keep the BPF scheduler up-to-date.
 	 */
 	if (SCX_HAS_OP(set_cpumask))
-		SCX_CALL_OP_TASK(SCX_KF_REST, set_cpumask, p,
-				 (struct cpumask *)p->cpus_ptr);
+		SCX_CALL_OP_TASK(SCX_KF_REST, set_cpumask, rq,
+				 p, (struct cpumask *)p->cpus_ptr);
 }

 static void switched_from_scx(struct rq *rq, struct task_struct *p)
···
 		struct scx_cgroup_init_args args =
 			{ .weight = tg->scx_weight };

-		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_init,
+		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_init, NULL,
 				      tg->css.cgroup, &args);
 		if (ret)
 			ret = ops_sanitize_err("cgroup_init", ret);
···
 	percpu_down_read(&scx_cgroup_rwsem);

 	if (SCX_HAS_OP(cgroup_exit) && (tg->scx_flags & SCX_TG_INITED))
-		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_exit, tg->css.cgroup);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_exit, NULL, tg->css.cgroup);
 	tg->scx_flags &= ~(SCX_TG_ONLINE | SCX_TG_INITED);

 	percpu_up_read(&scx_cgroup_rwsem);
···
 			continue;

 		if (SCX_HAS_OP(cgroup_prep_move)) {
-			ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_prep_move,
+			ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_prep_move, NULL,
 					      p, from, css->cgroup);
 			if (ret)
 				goto err;
···
 err:
 	cgroup_taskset_for_each(p, css, tset) {
 		if (SCX_HAS_OP(cgroup_cancel_move) && p->scx.cgrp_moving_from)
-			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_cancel_move, p,
-				    p->scx.cgrp_moving_from, css->cgroup);
+			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_cancel_move, NULL,
+				    p, p->scx.cgrp_moving_from, css->cgroup);
 		p->scx.cgrp_moving_from = NULL;
 	}
···
 	 * cgrp_moving_from set.
 	 */
 	if (SCX_HAS_OP(cgroup_move) && !WARN_ON_ONCE(!p->scx.cgrp_moving_from))
-		SCX_CALL_OP_TASK(SCX_KF_UNLOCKED, cgroup_move, p,
-				 p->scx.cgrp_moving_from, tg_cgrp(task_group(p)));
+		SCX_CALL_OP_TASK(SCX_KF_UNLOCKED, cgroup_move, NULL,
+				 p, p->scx.cgrp_moving_from, tg_cgrp(task_group(p)));
 	p->scx.cgrp_moving_from = NULL;
 }
···

 	cgroup_taskset_for_each(p, css, tset) {
 		if (SCX_HAS_OP(cgroup_cancel_move) && p->scx.cgrp_moving_from)
-			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_cancel_move, p,
-				    p->scx.cgrp_moving_from, css->cgroup);
+			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_cancel_move, NULL,
+				    p, p->scx.cgrp_moving_from, css->cgroup);
 		p->scx.cgrp_moving_from = NULL;
 	}
 out_unlock:
···

 	if (scx_cgroup_enabled && tg->scx_weight != weight) {
 		if (SCX_HAS_OP(cgroup_set_weight))
-			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight,
+			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight, NULL,
 				    tg_cgrp(tg), weight);
 		tg->scx_weight = weight;
 	}
···
 			continue;
 		rcu_read_unlock();

-		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_exit, css->cgroup);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_exit, NULL, css->cgroup);

 		rcu_read_lock();
 		css_put(css);
···
 			continue;
 		rcu_read_unlock();

-		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_init,
+		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, cgroup_init, NULL,
 				      css->cgroup, &args);
 		if (ret) {
 			css_put(css);
···
 	}

 	if (scx_ops.exit)
-		SCX_CALL_OP(SCX_KF_UNLOCKED, exit, ei);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, exit, NULL, ei);

 	cancel_delayed_work_sync(&scx_watchdog_work);
···

 	if (SCX_HAS_OP(dump_task)) {
 		ops_dump_init(s, "  ");
-		SCX_CALL_OP(SCX_KF_REST, dump_task, dctx, p);
+		SCX_CALL_OP(SCX_KF_REST, dump_task, NULL, dctx, p);
 		ops_dump_exit();
 	}
···

 	if (SCX_HAS_OP(dump)) {
 		ops_dump_init(&s, "");
-		SCX_CALL_OP(SCX_KF_UNLOCKED, dump, &dctx);
+		SCX_CALL_OP(SCX_KF_UNLOCKED, dump, NULL, &dctx);
 		ops_dump_exit();
 	}
···
 		used = seq_buf_used(&ns);
 		if (SCX_HAS_OP(dump_cpu)) {
 			ops_dump_init(&ns, "  ");
-			SCX_CALL_OP(SCX_KF_REST, dump_cpu, &dctx, cpu, idle);
+			SCX_CALL_OP(SCX_KF_REST, dump_cpu, NULL, &dctx, cpu, idle);
 			ops_dump_exit();
 		}
···
 	scx_idle_enable(ops);

 	if (scx_ops.init) {
-		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, init);
+		ret = SCX_CALL_OP_RET(SCX_KF_UNLOCKED, init, NULL);
 		if (ret) {
 			ret = ops_sanitize_err("init", ret);
 			cpus_read_unlock();
···
 	BUILD_BUG_ON(__alignof__(struct bpf_iter_scx_dsq_kern) !=
 		     __alignof__(struct bpf_iter_scx_dsq));

+	/*
+	 * next() and destroy() will be called regardless of the return value.
+	 * Always clear $kit->dsq.
+	 */
+	kit->dsq = NULL;
+
 	if (flags & ~__SCX_DSQ_ITER_USER_FLAGS)
 		return -EINVAL;
···
 	}

 	if (ops_cpu_valid(cpu, NULL)) {
-		struct rq *rq = cpu_rq(cpu);
+		struct rq *rq = cpu_rq(cpu), *locked_rq = scx_locked_rq();
+		struct rq_flags rf;
+
+		/*
+		 * When called with an rq lock held, restrict the operation
+		 * to the corresponding CPU to prevent ABBA deadlocks.
+		 */
+		if (locked_rq && rq != locked_rq) {
+			scx_ops_error("Invalid target CPU %d", cpu);
+			return;
+		}
+
+		/*
+		 * If no rq lock is held, allow to operate on any CPU by
+		 * acquiring the corresponding rq lock.
+		 */
+		if (!locked_rq) {
+			rq_lock_irqsave(rq, &rf);
+			update_rq_clock(rq);
+		}

 		rq->scx.cpuperf_target = perf;
+		cpufreq_update_util(rq, 0);

-		rcu_read_lock_sched_notrace();
-		cpufreq_update_util(cpu_rq(cpu), 0);
-		rcu_read_unlock_sched_notrace();
+		if (!locked_rq)
+			rq_unlock_irqrestore(rq, &rf);
 	}
 }
···
 BTF_ID_FLAGS(func, scx_bpf_get_possible_cpumask, KF_ACQUIRE)
 BTF_ID_FLAGS(func, scx_bpf_get_online_cpumask, KF_ACQUIRE)
 BTF_ID_FLAGS(func, scx_bpf_put_cpumask, KF_RELEASE)
-BTF_ID_FLAGS(func, scx_bpf_get_idle_cpumask, KF_ACQUIRE)
-BTF_ID_FLAGS(func, scx_bpf_get_idle_smtmask, KF_ACQUIRE)
-BTF_ID_FLAGS(func, scx_bpf_put_idle_cpumask, KF_RELEASE)
-BTF_ID_FLAGS(func, scx_bpf_test_and_clear_cpu_idle)
-BTF_ID_FLAGS(func, scx_bpf_pick_idle_cpu, KF_RCU)
-BTF_ID_FLAGS(func, scx_bpf_pick_any_cpu, KF_RCU)
 BTF_ID_FLAGS(func, scx_bpf_task_running, KF_RCU)
 BTF_ID_FLAGS(func, scx_bpf_task_cpu, KF_RCU)
 BTF_ID_FLAGS(func, scx_bpf_cpu_rq)
+1-1
kernel/sched/ext_idle.c
···
 	 * managed by put_prev_task_idle()/set_next_task_idle().
 	 */
 	if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq))
-		SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
+		SCX_CALL_OP(SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);

 	/*
 	 * Update the idle masks:
+2-1
kernel/trace/fprobe.c
···
 	struct fprobe_hlist_node *node;
 	int ret = 0;

-	hlist_for_each_entry_rcu(node, head, hlist) {
+	hlist_for_each_entry_rcu(node, head, hlist,
+				 lockdep_is_held(&fprobe_mutex)) {
 		if (!within_module(node->addr, mod))
 			continue;
 		if (delete_fprobe_node(node))
+5-3
kernel/trace/ring_buffer.c
···

 	head_page = cpu_buffer->head_page;

-	/* If both the head and commit are on the reader_page then we are done. */
-	if (head_page == cpu_buffer->reader_page &&
-	    head_page == cpu_buffer->commit_page)
+	/* If the commit_buffer is the reader page, update the commit page */
+	if (meta->commit_buffer == (unsigned long)cpu_buffer->reader_page->page) {
+		cpu_buffer->commit_page = cpu_buffer->reader_page;
+		/* Nothing more to do, the only page is the reader page */
 		goto done;
+	}

 	/* Iterate until finding the commit page */
 	for (i = 0; i < meta->nr_subbufs + 1; i++, rb_inc_page(&head_page)) {
+15-1
kernel/trace/trace_dynevent.c
···
 #include "trace_output.h"	/* for trace_event_sem */
 #include "trace_dynevent.h"

-static DEFINE_MUTEX(dyn_event_ops_mutex);
+DEFINE_MUTEX(dyn_event_ops_mutex);
 static LIST_HEAD(dyn_event_ops_list);

 bool trace_event_dyn_try_get_ref(struct trace_event_call *dyn_call)
···
 	}
 	tracing_reset_all_online_cpus();
 	mutex_unlock(&event_mutex);
+	return ret;
+}
+
+/*
+ * Locked version of event creation. The event creation must be protected by
+ * dyn_event_ops_mutex because of protecting trace_probe_log.
+ */
+int dyn_event_create(const char *raw_command, struct dyn_event_operations *type)
+{
+	int ret;
+
+	mutex_lock(&dyn_event_ops_mutex);
+	ret = type->create(raw_command);
+	mutex_unlock(&dyn_event_ops_mutex);
 	return ret;
 }
+1
kernel/trace/trace_dynevent.h
···
 void dyn_event_seq_stop(struct seq_file *m, void *v);
 int dyn_events_release_all(struct dyn_event_operations *type);
 int dyn_event_release(const char *raw_command, struct dyn_event_operations *type);
+int dyn_event_create(const char *raw_command, struct dyn_event_operations *type);

 /*
  * for_each_dyn_event - iterate over the dyn_event list
···

 	/* Stabilize the mapcount vs. refcount and recheck. */
 	folio_lock_large_mapcount(folio);
-	VM_WARN_ON_ONCE(folio_large_mapcount(folio) < folio_ref_count(folio));
+	VM_WARN_ON_ONCE_FOLIO(folio_large_mapcount(folio) > folio_ref_count(folio), folio);

 	if (folio_test_large_maybe_mapped_shared(folio))
 		goto unlock;
···
 #endif
 
 static bool page_contains_unaccepted(struct page *page, unsigned int order);
-static bool cond_accept_memory(struct zone *zone, unsigned int order);
+static bool cond_accept_memory(struct zone *zone, unsigned int order,
+			       int alloc_flags);
 static bool __free_unaccepted(struct page *page);
 
 int page_group_by_mobility_disabled __read_mostly;
···
 	__pgalloc_tag_sub(page, nr);
 }
 
-static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr)
+/* When tag is not NULL, assuming mem_alloc_profiling_enabled */
+static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
 {
-	struct alloc_tag *tag;
-
-	if (!mem_alloc_profiling_enabled())
-		return;
-
-	tag = __pgalloc_tag_get(page);
 	if (tag)
 		this_cpu_sub(tag->counters->bytes, PAGE_SIZE * nr);
 }
···
 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
 				   unsigned int nr) {}
 static inline void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
-static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr) {}
+static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
···
 		}
 	}
 
-	cond_accept_memory(zone, order);
+	cond_accept_memory(zone, order, alloc_flags);
 
 	/*
 	 * Detect whether the number of free pages is below high
···
 				       gfp_mask)) {
 			int ret;
 
-			if (cond_accept_memory(zone, order))
+			if (cond_accept_memory(zone, order, alloc_flags))
 				goto try_this_zone;
 
 			/*
···
 
 			return page;
 		} else {
-			if (cond_accept_memory(zone, order))
+			if (cond_accept_memory(zone, order, alloc_flags))
 				goto try_this_zone;
 
 		/* Try again if zone has deferred pages */
···
 			goto failed;
 		}
 
-		cond_accept_memory(zone, 0);
+		cond_accept_memory(zone, 0, alloc_flags);
retry_this_zone:
 		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
 		if (zone_watermark_fast(zone, 0, mark,
···
 			break;
 		}
 
-		if (cond_accept_memory(zone, 0))
+		if (cond_accept_memory(zone, 0, alloc_flags))
 			goto retry_this_zone;
 
 		/* Try again if zone has deferred pages */
···
 {
 	/* get PageHead before we drop reference */
 	int head = PageHead(page);
+	/* get alloc tag in case the page is released by others */
+	struct alloc_tag *tag = pgalloc_tag_get(page);
 
 	if (put_page_testzero(page))
 		__free_frozen_pages(page, order, fpi_flags);
 	else if (!head) {
-		pgalloc_tag_sub_pages(page, (1 << order) - 1);
+		pgalloc_tag_sub_pages(tag, (1 << order) - 1);
 		while (order-- > 0)
 			__free_frozen_pages(page + (1 << order), order,
 					    fpi_flags);
···
 
 #ifdef CONFIG_UNACCEPTED_MEMORY
 
-/* Counts number of zones with unaccepted pages. */
-static DEFINE_STATIC_KEY_FALSE(zones_with_unaccepted_pages);
-
 static bool lazy_accept = true;
-
-void unaccepted_cleanup_work(struct work_struct *work)
-{
-	static_branch_dec(&zones_with_unaccepted_pages);
-}
 
 static int __init accept_memory_parse(char *p)
 {
···
 static void __accept_page(struct zone *zone, unsigned long *flags,
 			  struct page *page)
 {
-	bool last;
-
 	list_del(&page->lru);
-	last = list_empty(&zone->unaccepted_pages);
-
 	account_freepages(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
 	__ClearPageUnaccepted(page);
···
 	accept_memory(page_to_phys(page), PAGE_SIZE << MAX_PAGE_ORDER);
 
 	__free_pages_ok(page, MAX_PAGE_ORDER, FPI_TO_TAIL);
-
-	if (last) {
-		/*
-		 * There are two corner cases:
-		 *
-		 * - If allocation occurs during the CPU bring up,
-		 *   static_branch_dec() cannot be used directly as
-		 *   it causes a deadlock on cpu_hotplug_lock.
-		 *
-		 *   Instead, use schedule_work() to prevent deadlock.
-		 *
-		 * - If allocation occurs before workqueues are initialized,
-		 *   static_branch_dec() should be called directly.
-		 *
-		 *   Workqueues are initialized before CPU bring up, so this
-		 *   will not conflict with the first scenario.
-		 */
-		if (system_wq)
-			schedule_work(&zone->unaccepted_cleanup);
-		else
-			unaccepted_cleanup_work(&zone->unaccepted_cleanup);
-	}
 }
 
 void accept_page(struct page *page)
···
 	return true;
 }
 
-static inline bool has_unaccepted_memory(void)
-{
-	return static_branch_unlikely(&zones_with_unaccepted_pages);
-}
-
-static bool cond_accept_memory(struct zone *zone, unsigned int order)
+static bool cond_accept_memory(struct zone *zone, unsigned int order,
+			       int alloc_flags)
 {
 	long to_accept, wmark;
 	bool ret = false;
 
-	if (!has_unaccepted_memory())
+	if (list_empty(&zone->unaccepted_pages))
 		return false;
 
-	if (list_empty(&zone->unaccepted_pages))
+	/* Bailout, since try_to_accept_memory_one() needs to take a lock */
+	if (alloc_flags & ALLOC_TRYLOCK)
 		return false;
 
 	wmark = promo_wmark_pages(zone);
···
 {
 	struct zone *zone = page_zone(page);
 	unsigned long flags;
-	bool first = false;
 
 	if (!lazy_accept)
 		return false;
 
 	spin_lock_irqsave(&zone->lock, flags);
-	first = list_empty(&zone->unaccepted_pages);
 	list_add_tail(&page->lru, &zone->unaccepted_pages);
 	account_freepages(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
 	__SetPageUnaccepted(page);
 	spin_unlock_irqrestore(&zone->lock, flags);
-
-	if (first)
-		static_branch_inc(&zones_with_unaccepted_pages);
 
 	return true;
 }
···
 	return false;
 }
 
-static bool cond_accept_memory(struct zone *zone, unsigned int order)
+static bool cond_accept_memory(struct zone *zone, unsigned int order,
+			       int alloc_flags)
 {
 	return false;
 }
···
 	if (!pcp_allowed_order(order))
 		return NULL;
 
-#ifdef CONFIG_UNACCEPTED_MEMORY
-	/* Bailout, since try_to_accept_memory_one() needs to take a lock */
-	if (has_unaccepted_memory())
-		return NULL;
-#endif
 	/* Bailout, since _deferred_grow_zone() needs to take a lock */
 	if (deferred_pages_enabled())
 		return NULL;
+9
mm/swapfile.c
···
 	}
 
 	/*
+	 * The swap subsystem needs a major overhaul to support this.
+	 * It doesn't work yet so just disable it for now.
+	 */
+	if (mapping_min_folio_order(mapping) > 0) {
+		error = -EINVAL;
+		goto bad_swap_unlock_inode;
+	}
+
+	/*
 	 * Read the swap header.
 	 */
 	if (!mapping->a_ops->read_folio) {
+10-2
mm/userfaultfd.c
···
 	src_folio->index = linear_page_index(dst_vma, dst_addr);
 
 	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
-	/* Follow mremap() behavior and treat the entry dirty after the move */
-	orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
+	/* Set soft dirty bit so userspace can notice the pte was moved */
+#ifdef CONFIG_MEM_SOFT_DIRTY
+	orig_dst_pte = pte_mksoft_dirty(orig_dst_pte);
+#endif
+	if (pte_dirty(orig_src_pte))
+		orig_dst_pte = pte_mkdirty(orig_dst_pte);
+	orig_dst_pte = pte_mkwrite(orig_dst_pte, dst_vma);
 
 	set_pte_at(mm, dst_addr, dst_pte, orig_dst_pte);
out:
···
 	}
 
 	orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
+#ifdef CONFIG_MEM_SOFT_DIRTY
+	orig_src_pte = pte_swp_mksoft_dirty(orig_src_pte);
+#endif
 	set_pte_at(mm, dst_addr, dst_pte, orig_src_pte);
 	double_pt_unlock(dst_ptl, src_ptl);
+4-4
mm/zsmalloc.c
···
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
-	if (off + class->size <= PAGE_SIZE) {
+	if (!ZsHugePage(zspage))
+		off += ZS_HANDLE_SIZE;
+
+	if (off + mem_len <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
 		void *dst = kmap_local_zpdesc(zpdesc);
 
-		if (!ZsHugePage(zspage))
-			off += ZS_HANDLE_SIZE;
 		memcpy(dst + off, handle_mem, mem_len);
 		kunmap_local(dst);
 	} else {
 		/* this object spans two pages */
 		size_t sizes[2];
 
-		off += ZS_HANDLE_SIZE;
 		sizes[0] = PAGE_SIZE - off;
 		sizes[1] = mem_len - sizes[0];
+18-13
net/batman-adv/hard-interface.c
···
 	return false;
 }
 
-static void batadv_check_known_mac_addr(const struct net_device *net_dev)
+static void batadv_check_known_mac_addr(const struct batadv_hard_iface *hard_iface)
 {
-	const struct batadv_hard_iface *hard_iface;
+	const struct net_device *mesh_iface = hard_iface->mesh_iface;
+	const struct batadv_hard_iface *tmp_hard_iface;
 
-	rcu_read_lock();
-	list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) {
-		if (hard_iface->if_status != BATADV_IF_ACTIVE &&
-		    hard_iface->if_status != BATADV_IF_TO_BE_ACTIVATED)
+	if (!mesh_iface)
+		return;
+
+	list_for_each_entry(tmp_hard_iface, &batadv_hardif_list, list) {
+		if (tmp_hard_iface == hard_iface)
 			continue;
 
-		if (hard_iface->net_dev == net_dev)
+		if (tmp_hard_iface->mesh_iface != mesh_iface)
 			continue;
 
-		if (!batadv_compare_eth(hard_iface->net_dev->dev_addr,
-					net_dev->dev_addr))
+		if (tmp_hard_iface->if_status == BATADV_IF_NOT_IN_USE)
+			continue;
+
+		if (!batadv_compare_eth(tmp_hard_iface->net_dev->dev_addr,
+					hard_iface->net_dev->dev_addr))
 			continue;
 
 		pr_warn("The newly added mac address (%pM) already exists on: %s\n",
-			net_dev->dev_addr, hard_iface->net_dev->name);
+			hard_iface->net_dev->dev_addr, tmp_hard_iface->net_dev->name);
 		pr_warn("It is strongly recommended to keep mac addresses unique to avoid problems!\n");
 	}
-	rcu_read_unlock();
 }
 
 /**
···
 			  hard_iface->net_dev->name, hardif_mtu,
 			  required_mtu);
 
+	batadv_check_known_mac_addr(hard_iface);
+
 	if (batadv_hardif_is_iface_up(hard_iface))
 		batadv_hardif_activate_interface(hard_iface);
 	else
···
 
 	batadv_v_hardif_init(hard_iface);
 
-	batadv_check_known_mac_addr(hard_iface->net_dev);
 	kref_get(&hard_iface->refcount);
 	list_add_tail_rcu(&hard_iface->list, &batadv_hardif_list);
 	batadv_hardif_generation++;
···
 	if (hard_iface->if_status == BATADV_IF_NOT_IN_USE)
 		goto hardif_put;
 
-	batadv_check_known_mac_addr(hard_iface->net_dev);
+	batadv_check_known_mac_addr(hard_iface);
 
 	bat_priv = netdev_priv(hard_iface->mesh_iface);
 	bat_priv->algo_ops->iface.update_mac(hard_iface);
···
 	struct sg_table *sgt;
 	struct net_device *dev;
 	struct gen_pool *chunk_pool;
+	/* Protect dev */
+	struct mutex lock;
 
 	/* The user holds a ref (via the netlink API) for as long as they want
 	 * the binding to remain alive. Each page pool using this binding holds
···
 	/* Drop excess packets if new limit is lower */
 	qlen = sch->q.qlen;
 	while (sch->q.qlen > sch->limit) {
-		struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
+		struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
 
 		dropped += qdisc_pkt_len(skb);
 		qdisc_qstats_backlog_dec(sch, skb);
+2-1
net/tls/tls_strp.c
···
 		return 0;
 
 	shinfo = skb_shinfo(strp->anchor);
-	shinfo->frag_list = NULL;
 
 	/* If we don't know the length go max plus page for cipher overhead */
 	need_spc = strp->stm.full_len ?: TLS_MAX_PAYLOAD_SIZE + PAGE_SIZE;
···
 		skb_fill_page_desc(strp->anchor, shinfo->nr_frags++,
 				   page, 0, 0);
 	}
+
+	shinfo->frag_list = NULL;
 
 	strp->copy_mode = 1;
 	strp->stm.offset = 0;
+1-1
samples/ftrace/sample-trace-array.c
···
 	/*
 	 * If context specific per-cpu buffers havent already been allocated.
 	 */
-	trace_printk_init_buffers();
+	trace_array_init_printk(tr);
 
 	simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
 	if (IS_ERR(simple_tsk)) {
+12
scripts/Makefile.extrawarn
···
 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
 KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
 KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
+
+# Clang may emit a warning when a const variable, such as the dummy variables
+# in typecheck(), or const member of an aggregate type are not initialized,
+# which can result in unexpected behavior. However, in many audited cases of
+# the "field" variant of the warning, this is intentional because the field is
+# never used within a particular call path, the field is within a union with
+# other non-const members, or the containing object is not const so the field
+# can be modified via memcpy() / memset(). While the variable warning also gets
+# disabled with this same switch, there should not be too much coverage lost
+# because -Wuninitialized will still flag when an uninitialized const variable
+# is used.
+KBUILD_CFLAGS += $(call cc-disable-warning, default-const-init-unsafe)
 else
 
 # gcc inanely warns about local variables called 'main'
···
 
 targets += vmlinux.o
 
-# module.builtin.modinfo
+# modules.builtin.modinfo
 # ---------------------------------------------------------------------------
 
 OBJCOPYFLAGS_modules.builtin.modinfo := -j .modinfo -O binary
···
 modules.builtin.modinfo: vmlinux.o FORCE
	$(call if_changed,objcopy)
 
-# module.builtin
+# modules.builtin
 # ---------------------------------------------------------------------------
 
 # The second line aids cases where multiple modules share the same object.
+1
scripts/package/kernel.spec
···
 Source2: diff.patch
 Provides: kernel-%{KERNELRELEASE}
 BuildRequires: bc binutils bison dwarves
+BuildRequires: (elfutils-devel or libdw-devel)
 BuildRequires: (elfutils-libelf-devel or libelf-devel) flex
 BuildRequires: gcc make openssl openssl-devel perl python3 rsync
···
  */
 static int __deliver_to_subscribers(struct snd_seq_client *client,
 				    struct snd_seq_event *event,
-				    struct snd_seq_client_port *src_port,
-				    int atomic, int hop)
+				    int port, int atomic, int hop)
 {
+	struct snd_seq_client_port *src_port;
 	struct snd_seq_subscribers *subs;
 	int err, result = 0, num_ev = 0;
 	union __snd_seq_event event_saved;
 	size_t saved_size;
 	struct snd_seq_port_subs_info *grp;
+
+	if (port < 0)
+		return 0;
+	src_port = snd_seq_port_use_ptr(client, port);
+	if (!src_port)
+		return 0;
 
 	/* save original event record */
 	saved_size = snd_seq_event_packet_size(event);
···
 		read_unlock(&grp->list_lock);
 	else
 		up_read(&grp->list_mutex);
+	snd_seq_port_unlock(src_port);
 	memcpy(event, &event_saved, saved_size);
 	return (result < 0) ? result : num_ev;
 }
···
 				  struct snd_seq_event *event,
 				  int atomic, int hop)
 {
-	struct snd_seq_client_port *src_port;
-	int ret = 0, ret2;
+	int ret;
+#if IS_ENABLED(CONFIG_SND_SEQ_UMP)
+	int ret2;
+#endif
 
-	src_port = snd_seq_port_use_ptr(client, event->source.port);
-	if (src_port) {
-		ret = __deliver_to_subscribers(client, event, src_port, atomic, hop);
-		snd_seq_port_unlock(src_port);
-	}
-
-	if (client->ump_endpoint_port < 0 ||
-	    event->source.port == client->ump_endpoint_port)
+	ret = __deliver_to_subscribers(client, event,
+				       event->source.port, atomic, hop);
+#if IS_ENABLED(CONFIG_SND_SEQ_UMP)
+	if (!snd_seq_client_is_ump(client) || client->ump_endpoint_port < 0)
 		return ret;
-
-	src_port = snd_seq_port_use_ptr(client, client->ump_endpoint_port);
-	if (!src_port)
-		return ret;
-	ret2 = __deliver_to_subscribers(client, event, src_port, atomic, hop);
-	snd_seq_port_unlock(src_port);
-	return ret2 < 0 ? ret2 : ret;
+	/* If it's an event from EP port (and with a UMP group),
+	 * deliver to subscribers of the corresponding UMP group port, too.
+	 * Or, if it's from non-EP port, deliver to subscribers of EP port, too.
+	 */
+	if (event->source.port == client->ump_endpoint_port)
+		ret2 = __deliver_to_subscribers(client, event,
+						snd_seq_ump_group_port(event),
+						atomic, hop);
+	else
+		ret2 = __deliver_to_subscribers(client, event,
+						client->ump_endpoint_port,
+						atomic, hop);
+	if (ret2 < 0)
+		return ret2;
+#endif
+	return ret;
 }
 
 /* deliver an event to the destination port(s).
+18
sound/core/seq/seq_ump_convert.c
···
 	else
 		return cvt_to_ump_midi1(dest, dest_port, event, atomic, hop);
 }
+
+/* return the UMP group-port number of the event;
+ * return -1 if groupless or non-UMP event
+ */
+int snd_seq_ump_group_port(const struct snd_seq_event *event)
+{
+	const struct snd_seq_ump_event *ump_ev =
+		(const struct snd_seq_ump_event *)event;
+	unsigned char type;
+
+	if (!snd_seq_ev_is_ump(event))
+		return -1;
+	type = ump_message_type(ump_ev->ump[0]);
+	if (ump_is_groupless_msg(type))
+		return -1;
+	/* group-port number starts from 1 */
+	return ump_message_group(ump_ev->ump[0]) + 1;
+}
+1
sound/core/seq/seq_ump_convert.h
···
 			 struct snd_seq_client_port *dest_port,
 			 struct snd_seq_event *event,
 			 int atomic, int hop);
+int snd_seq_ump_group_port(const struct snd_seq_event *event);
 
 #endif /* __SEQ_UMP_CONVERT_H */
+1-1
sound/hda/intel-sdw-acpi.c
···
  * sdw_intel_startup() is required for creation of devices and bus
  * startup
  */
-int sdw_intel_acpi_scan(acpi_handle *parent_handle,
+int sdw_intel_acpi_scan(acpi_handle parent_handle,
 			struct sdw_intel_acpi_info *info)
 {
 	acpi_status status;
···
 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
 	DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+	DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */
+		   QUIRK_FLAG_GET_SAMPLE_RATE),
 	DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
 	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
···
 		   QUIRK_FLAG_FIXED_RATE),
 	DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */
 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+	DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */
+		   QUIRK_FLAG_GET_SAMPLE_RATE),
 	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
 		   QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
 	DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+15-7
tools/net/ynl/pyynl/ethtool.py
···
         print('Capabilities:')
         [print(f'\t{v}') for v in bits_to_dict(tsinfo['timestamping'])]
 
-        print(f'PTP Hardware Clock: {tsinfo["phc-index"]}')
+        print(f'PTP Hardware Clock: {tsinfo.get("phc-index", "none")}')
 
-        print('Hardware Transmit Timestamp Modes:')
-        [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
+        if 'tx-types' in tsinfo:
+            print('Hardware Transmit Timestamp Modes:')
+            [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
+        else:
+            print('Hardware Transmit Timestamp Modes: none')
 
-        print('Hardware Receive Filter Modes:')
-        [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
+        if 'rx-filters' in tsinfo:
+            print('Hardware Receive Filter Modes:')
+            [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
+        else:
+            print('Hardware Receive Filter Modes: none')
 
-        print('Statistics:')
-        [print(f'\t{k}: {v}') for k, v in tsinfo['stats'].items()]
+        if 'stats' in tsinfo and tsinfo['stats']:
+            print('Statistics:')
+            [print(f'\t{k}: {v}') for k, v in tsinfo['stats'].items()]
+
         return
 
     print(f'Settings for {args.device}:')
+3-4
tools/net/ynl/pyynl/ynl_gen_c.py
···
                     self.pure_nested_structs[nested].request = True
                 if attr in rs_members['reply']:
                     self.pure_nested_structs[nested].reply = True
-
-                if spec.is_multi_val():
-                    child = self.pure_nested_structs.get(nested)
-                    child.in_multi_val = True
+                if spec.is_multi_val():
+                    child = self.pure_nested_structs.get(nested)
+                    child.in_multi_val = True
 
         self._sort_pure_types()
+9
tools/objtool/arch/x86/decode.c
···
 	op2 = ins.opcode.bytes[1];
 	op3 = ins.opcode.bytes[2];
 
+	/*
+	 * XXX hack, decoder is buggered and thinks 0xea is 7 bytes long.
+	 */
+	if (op1 == 0xea) {
+		insn->len = 1;
+		insn->type = INSN_BUG;
+		return 0;
+	}
+
 	if (ins.rex_prefix.nbytes) {
 		rex = ins.rex_prefix.bytes[0];
 		rex_w = X86_REX_W(rex) >> 3;
+1
tools/testing/selftests/Makefile
···
 TARGETS += vDSO
 TARGETS += mm
 TARGETS += x86
+TARGETS += x86/bugs
 TARGETS += zram
 #Please keep the TARGETS list alphabetically sorted
 # Run "make quicktest=1 run_tests" or
+22-33
tools/testing/selftests/drivers/net/hw/ncdevmem.c
···
 	return 0;
 }
 
+static struct netdev_queue_id *create_queues(void)
+{
+	struct netdev_queue_id *queues;
+	size_t i = 0;
+
+	queues = calloc(num_queues, sizeof(*queues));
+	for (i = 0; i < num_queues; i++) {
+		queues[i]._present.type = 1;
+		queues[i]._present.id = 1;
+		queues[i].type = NETDEV_QUEUE_TYPE_RX;
+		queues[i].id = start_queue + i;
+	}
+
+	return queues;
+}
+
 int do_server(struct memory_buffer *mem)
 {
 	char ctrl_data[sizeof(int) * 20000];
···
 	char buffer[256];
 	int socket_fd;
 	int client_fd;
-	size_t i = 0;
 	int ret;
 
 	ret = parse_address(server_ip, atoi(port), &server_sin);
···
 
 	sleep(1);
 
-	queues = malloc(sizeof(*queues) * num_queues);
-
-	for (i = 0; i < num_queues; i++) {
-		queues[i]._present.type = 1;
-		queues[i]._present.id = 1;
-		queues[i].type = NETDEV_QUEUE_TYPE_RX;
-		queues[i].id = start_queue + i;
-	}
-
-	if (bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
+	if (bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
 		error(1, 0, "Failed to bind\n");
 
 	tmp_mem = malloc(mem->size);
···
 		goto cleanup;
 	}
 
-	i++;
 	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
 		if (cm->cmsg_level != SOL_SOCKET ||
 		    (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
···
 
 void run_devmem_tests(void)
 {
-	struct netdev_queue_id *queues;
 	struct memory_buffer *mem;
 	struct ynl_sock *ys;
-	size_t i = 0;
 
 	mem = provider->alloc(getpagesize() * NUM_PAGES);
···
 	if (configure_rss())
 		error(1, 0, "rss error\n");
 
-	queues = calloc(num_queues, sizeof(*queues));
-
 	if (configure_headersplit(1))
 		error(1, 0, "Failed to configure header split\n");
 
-	if (!bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
+	if (!bind_rx_queue(ifindex, mem->fd,
+			   calloc(num_queues, sizeof(struct netdev_queue_id)),
+			   num_queues, &ys))
 		error(1, 0, "Binding empty queues array should have failed\n");
-
-	for (i = 0; i < num_queues; i++) {
-		queues[i]._present.type = 1;
-		queues[i]._present.id = 1;
-		queues[i].type = NETDEV_QUEUE_TYPE_RX;
-		queues[i].id = start_queue + i;
-	}
 
 	if (configure_headersplit(0))
 		error(1, 0, "Failed to configure header split\n");
 
-	if (!bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
+	if (!bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
 		error(1, 0, "Configure dmabuf with header split off should have failed\n");
 
 	if (configure_headersplit(1))
 		error(1, 0, "Failed to configure header split\n");
 
-	for (i = 0; i < num_queues; i++) {
-		queues[i]._present.type = 1;
-		queues[i]._present.id = 1;
-		queues[i].type = NETDEV_QUEUE_TYPE_RX;
-		queues[i].id = start_queue + i;
-	}
-
-	if (bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
+	if (bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
 		error(1, 0, "Failed to bind\n");
 
 	/* Deactivating a bound queue should not be legal */
···
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2025 Intel Corporation
+#
+# Test for indirect target selection (ITS) mitigation.
+#
+# Tests if the RETs are correctly patched by evaluating the
+# vmlinux .return_sites in /proc/kcore.
+#
+# Install dependencies
+# add-apt-repository ppa:michel-slm/kernel-utils
+# apt update
+# apt install -y python3-drgn python3-pyelftools python3-capstone
+#
+# Run on target machine
+# mkdir -p /usr/lib/debug/lib/modules/$(uname -r)
+# cp $VMLINUX /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
+#
+# Usage: ./its_ret_alignment.py
+
+import os, sys, argparse
+from pathlib import Path
+
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.insert(0, this_dir + '/../../kselftest')
+import ksft
+import common as c
+
+bug = "indirect_target_selection"
+mitigation = c.get_sysfs(bug)
+if not mitigation or "Aligned branch/return thunks" not in mitigation:
+    ksft.test_result_skip("Skipping its_ret_alignment.py: Aligned branch/return thunks not enabled")
+    ksft.finished()
+
+c.check_dependencies_or_skip(['drgn', 'elftools', 'capstone'], script_name="its_ret_alignment.py")
+
+from elftools.elf.elffile import ELFFile
+from drgn.helpers.common.memory import identify_address
+
+cap = c.init_capstone()
+
+if len(os.sys.argv) > 1:
+    arg_vmlinux = os.sys.argv[1]
+    if not os.path.exists(arg_vmlinux):
+        ksft.test_result_fail(f"its_ret_alignment.py: vmlinux not found at user-supplied path: {arg_vmlinux}")
+        ksft.exit_fail()
+    os.makedirs(f"/usr/lib/debug/lib/modules/{os.uname().release}", exist_ok=True)
+    os.system(f'cp {arg_vmlinux} /usr/lib/debug/lib/modules/$(uname -r)/vmlinux')
+
+vmlinux = f"/usr/lib/debug/lib/modules/{os.uname().release}/vmlinux"
+if not os.path.exists(vmlinux):
+    ksft.test_result_fail(f"its_ret_alignment.py: vmlinux not found at {vmlinux}")
+    ksft.exit_fail()
+
+ksft.print_msg(f"Using vmlinux: {vmlinux}")
+
+rethunks_start_vmlinux, rethunks_sec_offset, size = c.get_section_info(vmlinux, '.return_sites')
+ksft.print_msg(f"vmlinux: Section .return_sites (0x{rethunks_start_vmlinux:x}) found at 0x{rethunks_sec_offset:x} with size 0x{size:x}")
+
+sites_offset = c.get_patch_sites(vmlinux, rethunks_sec_offset, size)
+total_rethunk_tests = len(sites_offset)
+ksft.print_msg(f"Found {total_rethunk_tests} rethunk sites")
+
+prog = c.get_runtime_kernel()
+rethunks_start_kcore = prog.symbol('__return_sites').address
+ksft.print_msg(f'kcore: __rethunk_sites: 0x{rethunks_start_kcore:x}')
+
+its_return_thunk = prog.symbol('its_return_thunk').address
+ksft.print_msg(f'kcore: its_return_thunk: 0x{its_return_thunk:x}')
+
+tests_passed = 0
+tests_failed = 0
+tests_unknown = 0
+tests_skipped = 0
+
+with open(vmlinux, 'rb') as f:
+    elffile = ELFFile(f)
+    text_section = elffile.get_section_by_name('.text')
+
+    for i in range(len(sites_offset)):
+        site = rethunks_start_kcore + sites_offset[i]
+        vmlinux_site = rethunks_start_vmlinux + sites_offset[i]
+        try:
+            passed = unknown = failed = skipped = False
+
+            symbol = identify_address(prog, site)
+            vmlinux_insn = c.get_instruction_from_vmlinux(elffile, text_section, text_section['sh_addr'], vmlinux_site)
+            kcore_insn = list(cap.disasm(prog.read(site, 16), site))[0]
+
+            insn_end = site + kcore_insn.size - 1
+
+            safe_site = insn_end & 0x20
+            site_status = "" if safe_site else "(unsafe)"
+
+            ksft.print_msg(f"\nSite {i}: {symbol} <0x{site:x}> {site_status}")
+            ksft.print_msg(f"\tvmlinux: 0x{vmlinux_insn.address:x}:\t{vmlinux_insn.mnemonic}\t{vmlinux_insn.op_str}")
+            ksft.print_msg(f"\tkcore: 0x{kcore_insn.address:x}:\t{kcore_insn.mnemonic}\t{kcore_insn.op_str}")
+
+            if safe_site:
+                tests_passed += 1
+                passed = True
+                ksft.print_msg(f"\tPASSED: At safe address")
+                continue
+
+            if "jmp" in kcore_insn.mnemonic:
+                passed = True
+            elif "ret" not in kcore_insn.mnemonic:
+                skipped = True
+
+            if passed:
+                ksft.print_msg(f"\tPASSED: Found {kcore_insn.mnemonic} {kcore_insn.op_str}")
+                tests_passed += 1
+            elif skipped:
+                ksft.print_msg(f"\tSKIPPED: Found '{kcore_insn.mnemonic}'")
+                tests_skipped += 1
+            elif unknown:
+                ksft.print_msg(f"UNKNOWN: An unknown instruction: {kcore_insn}")
+                tests_unknown += 1
+            else:
+                ksft.print_msg(f'\t************* FAILED *************')
+                ksft.print_msg(f"\tFound {kcore_insn.mnemonic} {kcore_insn.op_str}")
+                ksft.print_msg(f'\t**********************************')
+                tests_failed += 1
+        except Exception as e:
+            ksft.print_msg(f"UNKNOWN: An unexpected error occurred: {e}")
+            tests_unknown += 1
+
+ksft.print_msg(f"\n\nSummary:")
+ksft.print_msg(f"PASSED: \t{tests_passed} \t/ {total_rethunk_tests}")
+ksft.print_msg(f"FAILED: \t{tests_failed} \t/ {total_rethunk_tests}")
+ksft.print_msg(f"SKIPPED: \t{tests_skipped} \t/ {total_rethunk_tests}")
+ksft.print_msg(f"UNKNOWN: \t{tests_unknown} \t/ {total_rethunk_tests}")
+
+if tests_failed == 0:
+    ksft.test_result_pass("All ITS return thunk sites passed.")
+else:
+    ksft.test_result_fail(f"{tests_failed} failed sites need ITS return thunks.")
+ksft.finished()
+65
tools/testing/selftests/x86/bugs/its_sysfs.py
···
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2025 Intel Corporation
+#
+# Test for Indirect Target Selection (ITS) mitigation sysfs status.
+
+import sys, os, re
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.insert(0, this_dir + '/../../kselftest')
+import ksft
+
+from common import *
+
+bug = "indirect_target_selection"
+mitigation = get_sysfs(bug)
+
+ITS_MITIGATION_ALIGNED_THUNKS = "Mitigation: Aligned branch/return thunks"
+ITS_MITIGATION_RETPOLINE_STUFF = "Mitigation: Retpolines, Stuffing RSB"
+ITS_MITIGATION_VMEXIT_ONLY = "Mitigation: Vulnerable, KVM: Not affected"
+ITS_MITIGATION_VULNERABLE = "Vulnerable"
+
+def check_mitigation():
+    if mitigation == ITS_MITIGATION_ALIGNED_THUNKS:
+        if cmdline_has(f'{bug}=stuff') and sysfs_has("spectre_v2", "Retpolines"):
+            bug_check_fail(bug, ITS_MITIGATION_ALIGNED_THUNKS, ITS_MITIGATION_RETPOLINE_STUFF)
+            return
+        if cmdline_has(f'{bug}=vmexit') and cpuinfo_has('its_native_only'):
+            bug_check_fail(bug, ITS_MITIGATION_ALIGNED_THUNKS, ITS_MITIGATION_VMEXIT_ONLY)
+            return
+        bug_check_pass(bug, ITS_MITIGATION_ALIGNED_THUNKS)
+        return
+
+    if mitigation == ITS_MITIGATION_RETPOLINE_STUFF:
+        if cmdline_has(f'{bug}=stuff') and sysfs_has("spectre_v2", "Retpolines"):
+            bug_check_pass(bug, ITS_MITIGATION_RETPOLINE_STUFF)
+            return
+        if sysfs_has('retbleed', 'Stuffing'):
+            bug_check_pass(bug, ITS_MITIGATION_RETPOLINE_STUFF)
+            return
+        bug_check_fail(bug, ITS_MITIGATION_RETPOLINE_STUFF, ITS_MITIGATION_ALIGNED_THUNKS)
+
+    if mitigation == ITS_MITIGATION_VMEXIT_ONLY:
+        if cmdline_has(f'{bug}=vmexit') and cpuinfo_has('its_native_only'):
+            bug_check_pass(bug, ITS_MITIGATION_VMEXIT_ONLY)
+            return
+        bug_check_fail(bug, ITS_MITIGATION_VMEXIT_ONLY, ITS_MITIGATION_ALIGNED_THUNKS)
+
+    if mitigation == ITS_MITIGATION_VULNERABLE:
+        if sysfs_has("spectre_v2", "Vulnerable"):
+            bug_check_pass(bug, ITS_MITIGATION_VULNERABLE)
+        else:
+            bug_check_fail(bug, "Mitigation", ITS_MITIGATION_VULNERABLE)
+
+    bug_status_unknown(bug, mitigation)
+    return
+
+ksft.print_header()
+ksft.set_plan(1)
+ksft.print_msg(f'{bug}: {mitigation} ...')
+
+if not basic_checks_sufficient(bug, mitigation):
+    check_mitigation()
+
+ksft.finished()
+16-12
tools/testing/vsock/vsock_test.c
···
 	send_buf(fd, buf, sizeof(buf), 0, sizeof(buf));
 	control_expectln("RECEIVED");
 
-	ret = ioctl(fd, SIOCOUTQ, &sock_bytes_unsent);
-	if (ret < 0) {
-		if (errno == EOPNOTSUPP) {
-			fprintf(stderr, "Test skipped, SIOCOUTQ not supported.\n");
-		} else {
+	/* SIOCOUTQ isn't guaranteed to instantly track sent data. Even though
+	 * the "RECEIVED" message means that the other side has received the
+	 * data, there can be a delay in our kernel before updating the "unsent
+	 * bytes" counter. Repeat SIOCOUTQ until it returns 0.
+	 */
+	timeout_begin(TIMEOUT);
+	do {
+		ret = ioctl(fd, SIOCOUTQ, &sock_bytes_unsent);
+		if (ret < 0) {
+			if (errno == EOPNOTSUPP) {
+				fprintf(stderr, "Test skipped, SIOCOUTQ not supported.\n");
+				break;
+			}
 			perror("ioctl");
 			exit(EXIT_FAILURE);
 		}
-	} else if (ret == 0 && sock_bytes_unsent != 0) {
-		fprintf(stderr,
-			"Unexpected 'SIOCOUTQ' value, expected 0, got %i\n",
-			sock_bytes_unsent);
-		exit(EXIT_FAILURE);
-	}
-
+		timeout_check("SIOCOUTQ");
+	} while (sock_bytes_unsent != 0);
+	timeout_end();
 	close(fd);
 }