···
 			for Movable pages. "nn[KMGTPE]", "nn%", and "mirror"
 			are exclusive, so you cannot specify multiple forms.
 
+	kfence.burst=	[MM,KFENCE] The number of additional successive
+			allocations to be attempted through KFENCE for each
+			sample interval.
+			Format: <unsigned integer>
+			Default: 0
+
+	kfence.check_on_panic=
+			[MM,KFENCE] Whether to check all KFENCE-managed objects'
+			canaries on panic.
+			Format: <bool>
+			Default: false
+
+	kfence.deferrable=
+			[MM,KFENCE] Whether to use a deferrable timer to trigger
+			allocations. This avoids forcing CPU wake-ups if the
+			system is idle, at the risk of a less predictable
+			sample interval.
+			Format: <bool>
+			Default: CONFIG_KFENCE_DEFERRABLE
+
+	kfence.sample_interval=
+			[MM,KFENCE] KFENCE's sample interval in milliseconds.
+			Format: <unsigned integer>
+			0 - Disable KFENCE.
+			>0 - Enable KFENCE with the given sample interval.
+			Default: CONFIG_KFENCE_SAMPLE_INTERVAL
+
+	kfence.skip_covered_thresh=
+			[MM,KFENCE] If pool utilization reaches this threshold
+			(pool usage%), KFENCE limits currently covered
+			allocations of the same source from further filling
+			up the pool.
+			Format: <unsigned integer>
+			Default: 75
+
 	kgdbdbgp=	[KGDB,HW,EARLY] kgdb over EHCI usb debug port.
 			Format: <Controller#>[,poll interval]
 			The controller # is the number of the ehci usb debug
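Taken together, a kernel command line exercising these switches could look like the fragment below; the values are illustrative, not recommendations:

```
kfence.sample_interval=100 kfence.burst=2 kfence.deferrable=1 kfence.check_on_panic=1
```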
+8
Documentation/admin-guide/sysctl/net.rst
···
 Maximum number of packets, queued on the INPUT side, when the interface
 receives packets faster than kernel can process them.
 
+qdisc_max_burst
+---------------
+
+Maximum number of packets that can be temporarily stored before they
+reach the qdisc.
+
+Default: 1000
+
 netdev_rss_key
 --------------
 
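A sysctl.d fragment is the usual way to pin such a knob. The `net.core` prefix below is an assumption inferred from the knob's placement next to its neighbours in this file, not something this patch states:

```
# /etc/sysctl.d/90-qdisc.conf (net.core prefix assumed)
net.core.qdisc_max_burst = 1000
```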
···
     enum: [1, 2]
     default: 2
     description:
-      Number of lanes available per direction. Note that it is assume same
-      number of lanes is used both directions at once.
+      Number of lanes available per direction. Note that it is assumed that
+      the same number of lanes is used in both directions at once.
 
   vdd-hba-supply:
     description:
···
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+#
+# Copyright (c) 2025 Valve Corporation.
+#
+---
+name: dev-energymodel
+
+doc: |
+  Energy model netlink interface to notify its changes.
+
+protocol: genetlink
+
+uapi-header: linux/dev_energymodel.h
+
+definitions:
+  -
+    type: flags
+    name: perf-state-flags
+    entries:
+      -
+        name: perf-state-inefficient
+        doc: >-
+          The performance state is inefficient. There is, in this perf-domain,
+          another performance state with a higher frequency but a lower or
+          equal power cost.
+  -
+    type: flags
+    name: perf-domain-flags
+    entries:
+      -
+        name: perf-domain-microwatts
+        doc: >-
+          The power values are in micro-Watts or some other scale.
+      -
+        name: perf-domain-skip-inefficiencies
+        doc: >-
+          Skip inefficient states when estimating energy consumption.
+      -
+        name: perf-domain-artificial
+        doc: >-
+          The power values are artificial and might be created by a platform
+          missing real power information.
+
+attribute-sets:
+  -
+    name: perf-domain
+    doc: >-
+      Information on a single performance domain.
+    attributes:
+      -
+        name: pad
+        type: pad
+      -
+        name: perf-domain-id
+        type: u32
+        doc: >-
+          A unique ID number for each performance domain.
+      -
+        name: flags
+        type: u64
+        doc: >-
+          Bitmask of performance domain flags.
+        enum: perf-domain-flags
+      -
+        name: cpus
+        type: u64
+        multi-attr: true
+        doc: >-
+          CPUs that belong to this performance domain.
+  -
+    name: perf-table
+    doc: >-
+      Performance states table.
+    attributes:
+      -
+        name: perf-domain-id
+        type: u32
+        doc: >-
+          A unique ID number for each performance domain.
+      -
+        name: perf-state
+        type: nest
+        nested-attributes: perf-state
+        multi-attr: true
+  -
+    name: perf-state
+    doc: >-
+      Performance state of a performance domain.
+    attributes:
+      -
+        name: pad
+        type: pad
+      -
+        name: performance
+        type: u64
+        doc: >-
+          CPU performance (capacity) at a given frequency.
+      -
+        name: frequency
+        type: u64
+        doc: >-
+          The frequency in KHz, for consistency with CPUFreq.
+      -
+        name: power
+        type: u64
+        doc: >-
+          The power consumed at this level (by 1 CPU or by a registered
+          device). It can be a total power: static and dynamic.
+      -
+        name: cost
+        type: u64
+        doc: >-
+          The cost coefficient associated with this level, used during energy
+          calculation. Equal to: power * max_frequency / frequency.
+      -
+        name: flags
+        type: u64
+        doc: >-
+          Bitmask of performance state flags.
+        enum: perf-state-flags
+
+operations:
+  list:
+    -
+      name: get-perf-domains
+      attribute-set: perf-domain
+      doc: Get the list of information for all performance domains.
+      do:
+        request:
+          attributes:
+            - perf-domain-id
+        reply:
+          attributes: &perf-domain-attrs
+            - pad
+            - perf-domain-id
+            - flags
+            - cpus
+      dump:
+        reply:
+          attributes: *perf-domain-attrs
+    -
+      name: get-perf-table
+      attribute-set: perf-table
+      doc: Get the energy model table of a performance domain.
+      do:
+        request:
+          attributes:
+            - perf-domain-id
+        reply:
+          attributes:
+            - perf-domain-id
+            - perf-state
+    -
+      name: perf-domain-created
+      doc: A performance domain is created.
+      notify: get-perf-table
+      mcgrp: event
+    -
+      name: perf-domain-updated
+      doc: A performance domain is updated.
+      notify: get-perf-table
+      mcgrp: event
+    -
+      name: perf-domain-deleted
+      doc: A performance domain is deleted.
+      attribute-set: perf-table
+      event:
+        attributes:
+          - perf-domain-id
+      mcgrp: event
+
+mcast-groups:
+  list:
+    -
+      name: event
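Assuming the spec is wired up like the other generic netlink families, the in-tree ynl CLI should be able to exercise it. The paths and flags below are assumptions based on how other specs are queried, not something this patch documents:

```
# Sketch, untested: list all performance domains via the new family
python3 tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/dev-energymodel.yaml \
    --dump get-perf-domains

# Sketch, untested: fetch the energy model table for domain 0
python3 tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/dev-energymodel.yaml \
    --do get-perf-table --json '{"perf-domain-id": 0}'
```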
-113
Documentation/netlink/specs/em.yaml
···
-# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
-
-name: em
-
-doc: |
-  Energy model netlink interface to notify its changes.
-
-protocol: genetlink
-
-uapi-header: linux/energy_model.h
-
-attribute-sets:
-  -
-    name: pds
-    attributes:
-      -
-        name: pd
-        type: nest
-        nested-attributes: pd
-        multi-attr: true
-  -
-    name: pd
-    attributes:
-      -
-        name: pad
-        type: pad
-      -
-        name: pd-id
-        type: u32
-      -
-        name: flags
-        type: u64
-      -
-        name: cpus
-        type: string
-  -
-    name: pd-table
-    attributes:
-      -
-        name: pd-id
-        type: u32
-      -
-        name: ps
-        type: nest
-        nested-attributes: ps
-        multi-attr: true
-  -
-    name: ps
-    attributes:
-      -
-        name: pad
-        type: pad
-      -
-        name: performance
-        type: u64
-      -
-        name: frequency
-        type: u64
-      -
-        name: power
-        type: u64
-      -
-        name: cost
-        type: u64
-      -
-        name: flags
-        type: u64
-
-operations:
-  list:
-    -
-      name: get-pds
-      attribute-set: pds
-      doc: Get the list of information for all performance domains.
-      do:
-        reply:
-          attributes:
-            - pd
-    -
-      name: get-pd-table
-      attribute-set: pd-table
-      doc: Get the energy model table of a performance domain.
-      do:
-        request:
-          attributes:
-            - pd-id
-        reply:
-          attributes:
-            - pd-id
-            - ps
-    -
-      name: pd-created
-      doc: A performance domain is created.
-      notify: get-pd-table
-      mcgrp: event
-    -
-      name: pd-updated
-      doc: A performance domain is updated.
-      notify: get-pd-table
-      mcgrp: event
-    -
-      name: pd-deleted
-      doc: A performance domain is deleted.
-      attribute-set: pd-table
-      event:
-        attributes:
-          - pd-id
-      mcgrp: event
-
-mcast-groups:
-  list:
-    -
-      name: event
···
 static struct kcore_list kcore_kseg0;
 #endif
 
+static inline void __init highmem_init(void)
+{
+#ifdef CONFIG_HIGHMEM
+	unsigned long tmp;
+
+	/*
+	 * If the CPU cannot support HIGHMEM, discard the memory above
+	 * highstart_pfn.
+	 */
+	if (cpu_has_dc_aliases) {
+		memblock_remove(PFN_PHYS(highstart_pfn), -1);
+		return;
+	}
+
+	for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
+		struct page *page = pfn_to_page(tmp);
+
+		if (!memblock_is_memory(PFN_PHYS(tmp)))
+			SetPageReserved(page);
+	}
+#endif
+}
+
 void __init arch_mm_preinit(void)
 {
 	/*
···
 
 	maar_init();
 	setup_zero_pages();	/* Setup zeroed pages. */
+	highmem_init();
 
 #ifdef CONFIG_64BIT
 	if ((unsigned long) &_text > (unsigned long) CKSEG0)
+10-5
arch/powerpc/kernel/watchdog.c
···
 #include <linux/delay.h>
 #include <linux/processor.h>
 #include <linux/smp.h>
+#include <linux/sys_info.h>
 
 #include <asm/interrupt.h>
 #include <asm/paca.h>
···
 	pr_emerg("CPU %d TB:%lld, last SMP heartbeat TB:%lld (%lldms ago)\n",
 		 cpu, tb, last_reset, tb_to_ns(tb - last_reset) / 1000000);
 
-	if (!sysctl_hardlockup_all_cpu_backtrace) {
+	if (sysctl_hardlockup_all_cpu_backtrace ||
+	    (hardlockup_si_mask & SYS_INFO_ALL_BT)) {
+		trigger_allbutcpu_cpu_backtrace(cpu);
+		cpumask_clear(&wd_smp_cpus_ipi);
+	} else {
 		/*
 		 * Try to trigger the stuck CPUs, unless we are going to
 		 * get a backtrace on all of them anyway.
···
 			smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
 			__cpumask_clear_cpu(c, &wd_smp_cpus_ipi);
 		}
-	} else {
-		trigger_allbutcpu_cpu_backtrace(cpu);
-		cpumask_clear(&wd_smp_cpus_ipi);
 	}
 
+	sys_info(hardlockup_si_mask & ~SYS_INFO_ALL_BT);
 	if (hardlockup_panic)
 		nmi_panic(NULL, "Hard LOCKUP");
···
 
 	xchg(&__wd_nmi_output, 1); // see wd_lockup_ipi
 
-	if (sysctl_hardlockup_all_cpu_backtrace)
+	if (sysctl_hardlockup_all_cpu_backtrace ||
+	    (hardlockup_si_mask & SYS_INFO_ALL_BT))
 		trigger_allbutcpu_cpu_backtrace(cpu);
 
+	sys_info(hardlockup_si_mask & ~SYS_INFO_ALL_BT);
 	if (hardlockup_panic)
 		nmi_panic(regs, "Hard LOCKUP");
+2-4
arch/riscv/net/bpf_jit_comp64.c
···
 
 	store_args(nr_arg_slots, args_off, ctx);
 
-	/* skip to actual body of traced function */
-	if (flags & BPF_TRAMP_F_ORIG_STACK)
-		orig_call += RV_FENTRY_NINSNS * 4;
-
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
 		ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
···
 	}
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		/* skip to actual body of traced function */
+		orig_call += RV_FENTRY_NINSNS * 4;
 		restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
 		restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx);
 		ret = emit_call((const u64)orig_call, true, ctx);
+17-4
arch/x86/kernel/cpu/resctrl/core.c
···
 
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
 		return __get_mem_config_intel(&hw_res->r_resctrl);
-	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+		 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
 		return __rdt_get_mem_config_amd(&hw_res->r_resctrl);
 
 	return false;
···
 {
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
 		rdt_init_res_defs_intel();
-	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+		 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
 		rdt_init_res_defs_amd();
 }
 
···
 		c->x86_cache_occ_scale = ebx;
 		c->x86_cache_mbm_width_offset = eax & 0xff;
 
-		if (c->x86_vendor == X86_VENDOR_AMD && !c->x86_cache_mbm_width_offset)
-			c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_AMD;
+		if (!c->x86_cache_mbm_width_offset) {
+			switch (c->x86_vendor) {
+			case X86_VENDOR_AMD:
+				c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_AMD;
+				break;
+			case X86_VENDOR_HYGON:
+				c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_HYGON;
+				break;
+			default:
+				/* Leave c->x86_cache_mbm_width_offset as 0 */
+				break;
+			}
+		}
 	}
 }
 
+3
arch/x86/kernel/cpu/resctrl/internal.h
···
 
 #define MBM_CNTR_WIDTH_OFFSET_AMD	20
 
+/* Hygon MBM counter width as an offset from MBM_CNTR_WIDTH_BASE */
+#define MBM_CNTR_WIDTH_OFFSET_HYGON	8
+
 #define RMID_VAL_ERROR			BIT_ULL(63)
 #define RMID_VAL_UNAVAIL		BIT_ULL(62)
+29-3
arch/x86/kernel/fpu/core.c
···
 #ifdef CONFIG_X86_64
 void fpu_update_guest_xfd(struct fpu_guest *guest_fpu, u64 xfd)
 {
+	struct fpstate *fpstate = guest_fpu->fpstate;
+
 	fpregs_lock();
-	guest_fpu->fpstate->xfd = xfd;
-	if (guest_fpu->fpstate->in_use)
-		xfd_update_state(guest_fpu->fpstate);
+
+	/*
+	 * KVM's guest ABI is that setting XFD[i]=1 *can* immediately revert the
+	 * save state to its initial configuration. Likewise, KVM_GET_XSAVE does
+	 * the same as XSAVE and returns XSTATE_BV[i]=0 whenever XFD[i]=1.
+	 *
+	 * If the guest's FPU state is in hardware, just update XFD: the XSAVE
+	 * in fpu_swap_kvm_fpstate will clear XSTATE_BV[i] whenever XFD[i]=1.
+	 *
+	 * If however the guest's FPU state is NOT resident in hardware, clear
+	 * disabled components in XSTATE_BV now, or a subsequent XRSTOR will
+	 * attempt to load disabled components and generate #NM _in the host_.
+	 */
+	if (xfd && test_thread_flag(TIF_NEED_FPU_LOAD))
+		fpstate->regs.xsave.header.xfeatures &= ~xfd;
+
+	fpstate->xfd = xfd;
+	if (fpstate->in_use)
+		xfd_update_state(fpstate);
+
 	fpregs_unlock();
 }
 EXPORT_SYMBOL_FOR_KVM(fpu_update_guest_xfd);
···
 	}
 
 	if (ustate->xsave.header.xfeatures & ~xcr0)
+		return -EINVAL;
+
+	/*
+	 * Disabled features must be in their initial state, otherwise XRSTOR
+	 * causes an exception.
+	 */
+	if (WARN_ON_ONCE(ustate->xsave.header.xfeatures & kstate->xfd))
 		return -EINVAL;
 
 	/*
+16-3
arch/x86/kernel/kvm.c
···
 	struct swait_queue_head wq;
 	u32 token;
 	int cpu;
+	bool dummy;
 };
 
 static struct kvm_task_sleep_head {
···
 	raw_spin_lock(&b->lock);
 	e = _find_apf_task(b, token);
 	if (e) {
-		/* dummy entry exist -> wake up was delivered ahead of PF */
-		hlist_del(&e->link);
+		struct kvm_task_sleep_node *dummy = NULL;
+
+		/*
+		 * The entry can either be a 'dummy' entry (which is put on the
+		 * list when wake-up happens ahead of APF handling completion)
+		 * or a token from another task which should not be touched.
+		 */
+		if (e->dummy) {
+			hlist_del(&e->link);
+			dummy = e;
+		}
+
 		raw_spin_unlock(&b->lock);
-		kfree(e);
+		kfree(dummy);
 		return false;
 	}
 
 	n->token = token;
 	n->cpu = smp_processor_id();
+	n->dummy = false;
 	init_swait_queue_head(&n->wq);
 	hlist_add_head(&n->link, &b->list);
 	raw_spin_unlock(&b->lock);
···
 	}
 	dummy->token = token;
 	dummy->cpu = smp_processor_id();
+	dummy->dummy = true;
 	init_swait_queue_head(&dummy->wq);
 	hlist_add_head(&dummy->link, &b->list);
 	dummy = NULL;
+9
arch/x86/kvm/x86.c
···
 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 					struct kvm_xsave *guest_xsave)
 {
+	union fpregs_state *xstate = (union fpregs_state *)guest_xsave->region;
+
 	if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
 		return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
+
+	/*
+	 * For backwards compatibility, do not expect disabled features to be in
+	 * their initial state. XSTATE_BV[i] must still be cleared whenever
+	 * XFD[i]=1, or XRSTOR would cause a #NM.
+	 */
+	xstate->xsave.header.xfeatures &= ~vcpu->arch.guest_fpu.fpstate->xfd;
 
 	return fpu_copy_uabi_to_guest_fpstate(&vcpu->arch.guest_fpu,
 					      guest_xsave->region,
+5-5
arch/x86/mm/kaslr.c
···
 
 	/*
 	 * Adapt physical memory region size based on available memory,
-	 * except when CONFIG_PCI_P2PDMA is enabled. P2PDMA exposes the
-	 * device BAR space assuming the direct map space is large enough
-	 * for creating a ZONE_DEVICE mapping in the direct map corresponding
-	 * to the physical BAR address.
+	 * except when CONFIG_ZONE_DEVICE is enabled. ZONE_DEVICE wants to map
+	 * any physical address into the direct-map. KASLR wants to reliably
+	 * steal some physical address bits. Those design choices are in direct
+	 * conflict.
 	 */
-	if (!IS_ENABLED(CONFIG_PCI_P2PDMA) && (memory_tb < kaslr_regions[0].size_tb))
+	if (!IS_ENABLED(CONFIG_ZONE_DEVICE) && (memory_tb < kaslr_regions[0].size_tb))
 		kaslr_regions[0].size_tb = memory_tb;
 
 	/*
···
 
 static u64 cxl_apply_xor_maps(struct cxl_root_decoder *cxlrd, u64 addr)
 {
-	struct cxl_cxims_data *cximsd = cxlrd->platform_data;
+	int hbiw = cxlrd->cxlsd.nr_targets;
+	struct cxl_cxims_data *cximsd;
 
-	return cxl_do_xormap_calc(cximsd, addr, cxlrd->cxlsd.nr_targets);
+	/* No xormaps for host bridge interleave ways of 1 or 3 */
+	if (hbiw == 1 || hbiw == 3)
+		return addr;
+
+	cximsd = cxlrd->platform_data;
+
+	return cxl_do_xormap_calc(cximsd, addr, hbiw);
 }
 
 struct cxl_cxims_context {
+2-2
drivers/cxl/core/hdm.c
···
 	 * is not set.
 	 */
 	if (cxled->part < 0)
-		for (int i = 0; cxlds->nr_partitions; i++)
+		for (int i = 0; i < cxlds->nr_partitions; i++)
 			if (resource_contains(&cxlds->part[i].res, res)) {
 				cxled->part = i;
 				break;
···
 
 resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled)
 {
-	resource_size_t base = -1;
+	resource_size_t base = RESOURCE_SIZE_MAX;
 
 	lockdep_assert_held(&cxl_rwsem.dpa);
 	if (cxled->dpa_res)
···
 /**
  * struct dev_dax - instance data for a subdivision of a dax region, and
  * data while the device is activated in the driver.
- * @region - parent region
- * @dax_dev - core dax functionality
+ * @region: parent region
+ * @dax_dev: core dax functionality
+ * @align: alignment of this instance
  * @target_node: effective numa node if dev_dax memory range is onlined
  * @dyn_id: is this a dynamic or statically created instance
  * @id: ida allocated id when the dax_region is not static
  * @ida: mapping id allocator
- * @dev - device core
- * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+ * @dev: device core
+ * @pgmap: pgmap for memmap setup / lifetime (driver owned)
+ * @memmap_on_memory: allow kmem to put the memmap in the memory
  * @nr_range: size of @ranges
  * @ranges: range tuples of memory used
  */
···
 {
 	struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
 	struct ti_am335x_xbar_data *xbar = platform_get_drvdata(pdev);
-	struct ti_am335x_xbar_map *map;
+	struct ti_am335x_xbar_map *map = ERR_PTR(-EINVAL);
 
 	if (dma_spec->args_count != 3)
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 
 	if (dma_spec->args[2] >= xbar->xbar_events) {
 		dev_err(&pdev->dev, "Invalid XBAR event number: %d\n",
 			dma_spec->args[2]);
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 	}
 
 	if (dma_spec->args[0] >= xbar->dma_requests) {
 		dev_err(&pdev->dev, "Invalid DMA request line number: %d\n",
 			dma_spec->args[0]);
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 	}
 
 	/* The of_node_put() will be done in the core for the node */
 	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
 	if (!dma_spec->np) {
 		dev_err(&pdev->dev, "Can't get DMA master\n");
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 	}
 
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		of_node_put(dma_spec->np);
-		return ERR_PTR(-ENOMEM);
+		map = ERR_PTR(-ENOMEM);
+		goto out_put_pdev;
 	}
 
 	map->dma_line = (u16)dma_spec->args[0];
···
 		map->mux_val, map->dma_line);
 
 	ti_am335x_xbar_write(xbar->iomem, map->dma_line, map->mux_val);
+
+out_put_pdev:
+	put_device(&pdev->dev);
 
 	return map;
 }
···
 {
 	struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
 	struct ti_dra7_xbar_data *xbar = platform_get_drvdata(pdev);
-	struct ti_dra7_xbar_map *map;
+	struct ti_dra7_xbar_map *map = ERR_PTR(-EINVAL);
 
 	if (dma_spec->args[0] >= xbar->xbar_requests) {
 		dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
 			dma_spec->args[0]);
-		put_device(&pdev->dev);
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 	}
 
 	/* The of_node_put() will be done in the core for the node */
 	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
 	if (!dma_spec->np) {
 		dev_err(&pdev->dev, "Can't get DMA master\n");
-		put_device(&pdev->dev);
-		return ERR_PTR(-EINVAL);
+		goto out_put_pdev;
 	}
 
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		of_node_put(dma_spec->np);
-		put_device(&pdev->dev);
-		return ERR_PTR(-ENOMEM);
+		map = ERR_PTR(-ENOMEM);
+		goto out_put_pdev;
 	}
 
 	mutex_lock(&xbar->mutex);
···
 		dev_err(&pdev->dev, "Run out of free DMA requests\n");
 		kfree(map);
 		of_node_put(dma_spec->np);
-		put_device(&pdev->dev);
-		return ERR_PTR(-ENOMEM);
+		map = ERR_PTR(-ENOMEM);
+		goto out_put_pdev;
 	}
 	set_bit(map->xbar_out, xbar->dma_inuse);
 	mutex_unlock(&xbar->mutex);
···
 		map->xbar_in, map->xbar_out);
 
 	ti_dra7_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in);
+
+out_put_pdev:
+	put_device(&pdev->dev);
 
 	return map;
 }
+1-1
drivers/dma/ti/k3-udma-private.c
···
 	}
 
 	ud = platform_get_drvdata(pdev);
+	put_device(&pdev->dev);
 	if (!ud) {
 		pr_debug("UDMA has not been probed\n");
-		put_device(&pdev->dev);
 		return ERR_PTR(-EPROBE_DEFER);
 	}
 
+4
drivers/dma/ti/omap-dma.c
···
 	if (rc) {
 		pr_warn("OMAP-DMA: failed to register slave DMA engine device: %d\n",
 			rc);
+		if (od->ll123_supported)
+			dma_pool_destroy(od->desc_pool);
 		omap_dma_free(od);
 		return rc;
 	}
···
 		if (rc) {
 			pr_warn("OMAP-DMA: failed to register DMA controller\n");
 			dma_async_device_unregister(&od->ddev);
+			if (od->ll123_supported)
+				dma_pool_destroy(od->desc_pool);
 			omap_dma_free(od);
 		}
 	}
+1
drivers/dma/xilinx/xdma-regs.h
···
 
 /* The length of register space exposed to host */
 #define XDMA_REG_SPACE_LEN	65536
+#define XDMA_MAX_REG_OFFSET	(XDMA_REG_SPACE_LEN - 4)
 
 /*
  * maximum number of DMA channels for each direction:
···
  * Copyright (c) 2007, MontaVista Software, Inc. <source@mvista.com>
  */
 
+#include <linux/cleanup.h>
 #include <linux/gpio/driver.h>
 #include <linux/errno.h>
 #include <linux/kernel.h>
···
 	return __davinci_direction(chip, offset, true, value);
 }
 
+static int davinci_get_direction(struct gpio_chip *chip, unsigned int offset)
+{
+	struct davinci_gpio_controller *d = gpiochip_get_data(chip);
+	struct davinci_gpio_regs __iomem *g;
+	u32 mask = __gpio_mask(offset), val;
+	int bank = offset / 32;
+
+	g = d->regs[bank];
+
+	guard(spinlock_irqsave)(&d->lock);
+
+	val = readl_relaxed(&g->dir);
+
+	return (val & mask) ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT;
+}
+
 /*
  * Read the pin's value (works even if it's set up as output);
  * returns zero/nonzero.
···
 	chips->chip.get = davinci_gpio_get;
 	chips->chip.direction_output = davinci_direction_out;
 	chips->chip.set = davinci_gpio_set;
+	chips->chip.get_direction = davinci_get_direction;
 
 	chips->chip.ngpio = ngpio;
 	chips->chip.base = -1;
-3
drivers/gpio/gpiolib.c
···
 	    test_bit(GPIOD_FLAG_IS_OUT, &flags))
 		return 0;
 
-	if (!guard.gc->get_direction)
-		return -ENOTSUPP;
-
 	ret = gpiochip_get_direction(guard.gc, offset);
 	if (ret < 0)
 		return ret;
+2
drivers/gpu/drm/amd/amdgpu/amdgpu.h
···
 extern int amdgpu_wbrf;
 extern int amdgpu_user_queue;
 
+extern uint amdgpu_hdmi_hpd_debounce_delay_ms;
+
 #define AMDGPU_VM_MAX_NUM_CTX			4096
 #define AMDGPU_SG_THRESHOLD			(256*1024*1024)
 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS		3000
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 
 	amdgpu_ttm_set_buffer_funcs_status(adev, false);
 
+	/*
+	 * device went through surprise hotplug; we need to destroy topology
+	 * before ip_fini_early to prevent kfd locking refcount issues by calling
+	 * amdgpu_amdkfd_suspend()
+	 */
+	if (drm_dev_is_unplugged(adev_to_drm(adev)))
+		amdgpu_amdkfd_device_fini_sw(adev);
+
 	amdgpu_device_ip_fini_early(adev);
 
 	amdgpu_irq_fini_hw(adev);
···
 	    bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC)
 		attach->peer2peer = false;
 
-	/*
-	 * Disable peer-to-peer access for DCC-enabled VRAM surfaces on GFX12+.
-	 * Such buffers cannot be safely accessed over P2P due to device-local
-	 * compression metadata. Fallback to system-memory path instead.
-	 * Device supports GFX12 (GC 12.x or newer)
-	 * BO was created with the AMDGPU_GEM_CREATE_GFX12_DCC flag
-	 *
-	 */
-	if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0) &&
-	    bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC)
-		attach->peer2peer = false;
-
 	if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&
 	    pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
 		attach->peer2peer = false;
+11
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
 int amdgpu_umsch_mm_fwlog;
 int amdgpu_rebar = -1; /* auto */
 int amdgpu_user_queue = -1;
+uint amdgpu_hdmi_hpd_debounce_delay_ms;
 
 DECLARE_DYNDBG_CLASSMAP(drm_debug_classes, DD_CLASS_TYPE_DISJOINT_BITS, 0,
 			"DRM_UT_CORE",
···
 */
 MODULE_PARM_DESC(user_queue, "Enable user queues (-1 = auto (default), 0 = disable, 1 = enable, 2 = enable UQs and disable KQs)");
 module_param_named(user_queue, amdgpu_user_queue, int, 0444);
+
+/*
+ * DOC: hdmi_hpd_debounce_delay_ms (uint)
+ * HDMI HPD disconnect debounce delay in milliseconds.
+ *
+ * Used to filter short disconnect->reconnect HPD toggles some HDMI sinks
+ * generate while entering/leaving power save. Set to 0 (the default) to
+ * disable.
+ */
+MODULE_PARM_DESC(hdmi_hpd_debounce_delay_ms, "HDMI HPD disconnect debounce delay in milliseconds (0 to disable (default), 1500 is common)");
+module_param_named(hdmi_hpd_debounce_delay_ms, amdgpu_hdmi_hpd_debounce_delay_ms, uint, 0644);
 
 /* These devices are not supported by amdgpu.
  * They are supported by the mach64, r128, radeon drivers
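For users, the new parameter can be made persistent with a modprobe fragment; 1500 ms is the "common" value the description mentions:

```
# /etc/modprobe.d/amdgpu-hpd.conf
options amdgpu hdmi_hpd_debounce_delay_ms=1500
```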
+2-2
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
···
  * @start_page: first page to map in the GART aperture
  * @num_pages: number of pages to be mapped
  * @flags: page table entry flags
- * @dst: CPU address of the GART table
+ * @dst: valid CPU address of the GART table, must not be NULL
  *
  * Binds a BO that is allocated in VRAM to the GART page table
  * (all ASICs).
···
 		return;
 
 	for (i = 0; i < num_pages; ++i) {
-		amdgpu_gmc_set_pte_pde(adev, adev->gart.ptr,
+		amdgpu_gmc_set_pte_pde(adev, dst,
 				       start_page + i, pa + AMDGPU_GPU_PAGE_SIZE * i, flags);
 	}
 
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
···
 		return 0;
 
 	if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) {
+
+		if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid)
+			return 0;
+
 		if (adev->gmc.flush_tlb_needs_extra_type_2)
 			adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
 								 2, all_hub,
+16
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
···
 	return 0;
 }
 
+bool amdgpu_userq_enabled(struct drm_device *dev)
+{
+	struct amdgpu_device *adev = drm_to_adev(dev);
+	int i;
+
+	for (i = 0; i < AMDGPU_HW_IP_NUM; i++) {
+		if (adev->userq_funcs[i])
+			return true;
+	}
+
+	return false;
+}
+
 int amdgpu_userq_ioctl(struct drm_device *dev, void *data,
 		       struct drm_file *filp)
 {
 	union drm_amdgpu_userq *args = data;
 	int r;
+
+	if (!amdgpu_userq_enabled(dev))
+		return -ENOTSUPP;
 
 	if (amdgpu_userq_input_args_validate(dev, args, filp) < 0)
 		return -EINVAL;
···
 void
 amdgpu_userq_fence_driver_free(struct amdgpu_usermode_queue *userq)
 {
+	dma_fence_put(userq->last_fence);
+
 	amdgpu_userq_walk_and_drop_fence_drv(&userq->fence_drv_xa);
 	xa_destroy(&userq->fence_drv_xa);
 	/* Drop the fence_drv reference held by user queue */
···
 	struct drm_exec exec;
 	u64 wptr;
 
+	if (!amdgpu_userq_enabled(dev))
+		return -ENOTSUPP;
+
 	num_syncobj_handles = args->num_syncobj_handles;
 	syncobj_handles = memdup_user(u64_to_user_ptr(args->syncobj_handles),
 				      size_mul(sizeof(u32), num_syncobj_handles));
···
 	u16 num_points, num_fences = 0;
 	int r, i, rentry, wentry, cnt;
 	struct drm_exec exec;
+
+	if (!amdgpu_userq_enabled(dev))
+		return -ENOTSUPP;
 
 	num_read_bo_handles = wait_info->num_bo_read_handles;
 	bo_handles_read = memdup_user(u64_to_user_ptr(wait_info->bo_read_handles),
+1-3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···
 	}
 
 	/* Prepare a TLB flush fence to be attached to PTs */
-	if (!params->unlocked &&
-	    /* SI doesn't support pasid or KIQ/MES */
-	    params->adev->family > AMDGPU_FAMILY_SI) {
+	if (!params->unlocked) {
 		amdgpu_vm_tlb_fence_create(params->adev, vm, fence);
 
 		/* Makes sure no PD/PT is freed before the flush */
···
 
 	unsigned long ref_clk_rate;
 	struct regmap *regm;
+	int main_irq;
 
 	unsigned long tmds_char_rate;
 };
···
 
 	dw_hdmi_qp_init_hw(hdmi);
 
+	hdmi->main_irq = plat_data->main_irq;
 	ret = devm_request_threaded_irq(dev, plat_data->main_irq,
 					dw_hdmi_qp_main_hardirq, NULL,
 					IRQF_SHARED, dev_name(dev), hdmi);
···
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_qp_bind);
 
+void dw_hdmi_qp_suspend(struct device *dev, struct dw_hdmi_qp *hdmi)
+{
+	disable_irq(hdmi->main_irq);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_qp_suspend);
+
 void dw_hdmi_qp_resume(struct device *dev, struct dw_hdmi_qp *hdmi)
 {
 	dw_hdmi_qp_init_hw(hdmi);
+	enable_irq(hdmi->main_irq);
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_qp_resume);
 
+51-24
drivers/gpu/drm/drm_gpuvm.c
···
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create);
 
+/*
+ * drm_gpuvm_bo_destroy_not_in_lists() - final part of drm_gpuvm_bo cleanup
+ * @vm_bo: the &drm_gpuvm_bo to destroy
+ *
+ * It is illegal to call this method if the @vm_bo is present in the GEMs gpuva
+ * list, the extobj list, or the evicted list.
+ *
+ * Note that this puts a refcount on the GEM object, which may destroy the GEM
+ * object if the refcount reaches zero. It's illegal for this to happen if the
+ * caller holds the GEMs gpuva mutex because it would free the mutex.
+ */
 static void
-drm_gpuvm_bo_destroy(struct kref *kref)
+drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo)
 {
-	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
-						  kref);
 	struct drm_gpuvm *gpuvm = vm_bo->vm;
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gem_object *obj = vm_bo->obj;
-	bool lock = !drm_gpuvm_resv_protected(gpuvm);
-
-	if (!lock)
-		drm_gpuvm_resv_assert_held(gpuvm);
-
-	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
-	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
-
-	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
-	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
 		ops->vm_bo_free(vm_bo);
···
 	drm_gpuvm_put(gpuvm);
 	drm_gem_object_put(obj);
+}
+
+static void
+drm_gpuvm_bo_destroy_not_in_lists_kref(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+
+	drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
+}
+
+static void
+drm_gpuvm_bo_destroy(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+	struct drm_gpuvm *gpuvm = vm_bo->vm;
+	bool lock = !drm_gpuvm_resv_protected(gpuvm);
+
+	if (!lock)
+		drm_gpuvm_resv_assert_held(gpuvm);
+
+	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
+	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
+
+	drm_gem_gpuva_assert_lock_held(gpuvm, vm_bo->obj);
+	list_del(&vm_bo->list.entry.gem);
+
+	drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
 }
 
 /**
···
 void
 drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
 {
-	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gpuvm_bo *vm_bo;
-	struct drm_gem_object *obj;
 	struct llist_node *bo_defer;
 
 	bo_defer = llist_del_all(&gpuvm->bo_defer);
···
 	while (bo_defer) {
 		vm_bo = llist_entry(bo_defer, struct drm_gpuvm_bo, list.entry.bo_defer);
 		bo_defer = bo_defer->next;
-		obj = vm_bo->obj;
-		if (ops && ops->vm_bo_free)
-			ops->vm_bo_free(vm_bo);
-		else
-			kfree(vm_bo);
-
-		drm_gpuvm_put(gpuvm);
-		drm_gem_object_put(obj);
+		drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
 	}
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
···
  * count is decreased. If not found @__vm_bo is returned without further
  * increase of the reference count.
  *
+ * The provided @__vm_bo must not already be in the gpuva, evict, or extobj
+ * lists prior to calling this method.
+ *
  * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
  *
  * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing
···
 	struct drm_gem_object *obj = __vm_bo->obj;
 	struct drm_gpuvm_bo *vm_bo;
 
+	drm_WARN_ON(gpuvm->drm, !drm_gpuvm_immediate_mode(gpuvm));
+
+	mutex_lock(&obj->gpuva.lock);
 	vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
 	if (vm_bo) {
-		drm_gpuvm_bo_put(__vm_bo);
+		mutex_unlock(&obj->gpuva.lock);
+		kref_put(&__vm_bo->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
 		return vm_bo;
 	}
 
 	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
+	mutex_unlock(&obj->gpuva.lock);
 
 	return __vm_bo;
 }
···686686}687687688688/* This list includes registers that are useful in debugging GuC hangs. */689689-const struct {689689+static const struct {690690 u32 start;691691 u32 count;692692} guc_hw_reg_state[] = {
···4343 union nv50_head_atom_mask clr = {4444 .mask = asyh->clr.mask & ~(flush ? 0 : asyh->set.mask),4545 };4646+4747+ lockdep_assert_held(&head->disp->mutex);4848+4649 if (clr.crc) nv50_crc_atomic_clr(head);4750 if (clr.olut) head->func->olut_clr(head);4851 if (clr.core) head->func->core_clr(head);···6865void6966nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)7067{6868+ lockdep_assert_held(&head->disp->mutex);6969+7170 if (asyh->set.view ) head->func->view (head, asyh);7271 if (asyh->set.mode ) head->func->mode (head, asyh);7372 if (asyh->set.core ) head->func->core_set(head, asyh);
+55-55
drivers/gpu/drm/panel/panel-simple.c
···
 	if (IS_ERR(desc))
 		return ERR_CAST(desc);
 
+	connector_type = desc->connector_type;
+	/* Catch common mistakes for panels. */
+	switch (connector_type) {
+	case 0:
+		dev_warn(dev, "Specify missing connector_type\n");
+		connector_type = DRM_MODE_CONNECTOR_DPI;
+		break;
+	case DRM_MODE_CONNECTOR_LVDS:
+		WARN_ON(desc->bus_flags &
+			~(DRM_BUS_FLAG_DE_LOW |
+			  DRM_BUS_FLAG_DE_HIGH |
+			  DRM_BUS_FLAG_DATA_MSB_TO_LSB |
+			  DRM_BUS_FLAG_DATA_LSB_TO_MSB));
+		WARN_ON(desc->bus_format != MEDIA_BUS_FMT_RGB666_1X7X3_SPWG &&
+			desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_SPWG &&
+			desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA);
+		WARN_ON(desc->bus_format == MEDIA_BUS_FMT_RGB666_1X7X3_SPWG &&
+			desc->bpc != 6);
+		WARN_ON((desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_SPWG ||
+			 desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA) &&
+			desc->bpc != 8);
+		break;
+	case DRM_MODE_CONNECTOR_eDP:
+		dev_warn(dev, "eDP panels moved to panel-edp\n");
+		return ERR_PTR(-EINVAL);
+	case DRM_MODE_CONNECTOR_DSI:
+		if (desc->bpc != 6 && desc->bpc != 8)
+			dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc);
+		break;
+	case DRM_MODE_CONNECTOR_DPI:
+		bus_flags = DRM_BUS_FLAG_DE_LOW |
+			    DRM_BUS_FLAG_DE_HIGH |
+			    DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE |
+			    DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE |
+			    DRM_BUS_FLAG_DATA_MSB_TO_LSB |
+			    DRM_BUS_FLAG_DATA_LSB_TO_MSB |
+			    DRM_BUS_FLAG_SYNC_SAMPLE_POSEDGE |
+			    DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE;
+		if (desc->bus_flags & ~bus_flags)
+			dev_warn(dev, "Unexpected bus_flags(%d)\n", desc->bus_flags & ~bus_flags);
+		if (!(desc->bus_flags & bus_flags))
+			dev_warn(dev, "Specify missing bus_flags\n");
+		if (desc->bus_format == 0)
+			dev_warn(dev, "Specify missing bus_format\n");
+		if (desc->bpc != 6 && desc->bpc != 8)
+			dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc);
+		break;
+	default:
+		dev_warn(dev, "Specify a valid connector_type: %d\n", desc->connector_type);
+		connector_type = DRM_MODE_CONNECTOR_DPI;
+		break;
+	}
+
 	panel = devm_drm_panel_alloc(dev, struct panel_simple, base,
-				     &panel_simple_funcs, desc->connector_type);
+				     &panel_simple_funcs, connector_type);
 	if (IS_ERR(panel))
 		return ERR_CAST(panel);
···
 		err = panel_simple_override_nondefault_lvds_datamapping(dev, panel);
 		if (err)
 			goto free_ddc;
-	}
-
-	connector_type = desc->connector_type;
-	/* Catch common mistakes for panels. */
-	switch (connector_type) {
-	case 0:
-		dev_warn(dev, "Specify missing connector_type\n");
-		connector_type = DRM_MODE_CONNECTOR_DPI;
-		break;
-	case DRM_MODE_CONNECTOR_LVDS:
-		WARN_ON(desc->bus_flags &
-			~(DRM_BUS_FLAG_DE_LOW |
-			  DRM_BUS_FLAG_DE_HIGH |
-			  DRM_BUS_FLAG_DATA_MSB_TO_LSB |
-			  DRM_BUS_FLAG_DATA_LSB_TO_MSB));
-		WARN_ON(desc->bus_format != MEDIA_BUS_FMT_RGB666_1X7X3_SPWG &&
-			desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_SPWG &&
-			desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA);
-		WARN_ON(desc->bus_format == MEDIA_BUS_FMT_RGB666_1X7X3_SPWG &&
-			desc->bpc != 6);
-		WARN_ON((desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_SPWG ||
-			 desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA) &&
-			desc->bpc != 8);
-		break;
-	case DRM_MODE_CONNECTOR_eDP:
-		dev_warn(dev, "eDP panels moved to panel-edp\n");
-		err = -EINVAL;
-		goto free_ddc;
-	case DRM_MODE_CONNECTOR_DSI:
-		if (desc->bpc != 6 && desc->bpc != 8)
-			dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc);
-		break;
-	case DRM_MODE_CONNECTOR_DPI:
-		bus_flags = DRM_BUS_FLAG_DE_LOW |
-			    DRM_BUS_FLAG_DE_HIGH |
-			    DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE |
-			    DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE |
-			    DRM_BUS_FLAG_DATA_MSB_TO_LSB |
-			    DRM_BUS_FLAG_DATA_LSB_TO_MSB |
-			    DRM_BUS_FLAG_SYNC_SAMPLE_POSEDGE |
-			    DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE;
-		if (desc->bus_flags & ~bus_flags)
-			dev_warn(dev, "Unexpected bus_flags(%d)\n", desc->bus_flags & ~bus_flags);
-		if (!(desc->bus_flags & bus_flags))
-			dev_warn(dev, "Specify missing bus_flags\n");
-		if (desc->bus_format == 0)
-			dev_warn(dev, "Specify missing bus_format\n");
-		if (desc->bpc != 6 && desc->bpc != 8)
-			dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc);
-		break;
-	default:
-		dev_warn(dev, "Specify a valid connector_type: %d\n", desc->connector_type);
-		connector_type = DRM_MODE_CONNECTOR_DPI;
-		break;
 	}
 
 	dev_set_drvdata(dev, panel);
···
 	},
 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
 	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
+	.connector_type = DRM_MODE_CONNECTOR_DPI,
 };
 
 static const struct display_timing dlc_dlc0700yzg_1_timing = {
-10
drivers/gpu/drm/panthor/panthor_mmu.c
···12521252 goto err_cleanup;12531253 }1254125412551255- /* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our12561256- * pre-allocated BO if the <BO,VM> association exists. Given we12571257- * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will12581258- * be called immediately, and we have to hold the VM resv lock when12591259- * calling this function.12601260- */12611261- dma_resv_lock(panthor_vm_resv(vm), NULL);12621262- mutex_lock(&bo->base.base.gpuva.lock);12631255 op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);12641264- mutex_unlock(&bo->base.base.gpuva.lock);12651265- dma_resv_unlock(panthor_vm_resv(vm));1266125612671257 op_ctx->map.bo_offset = offset;12681258
+12-2
drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
···121121 struct drm_crtc *crtc = encoder->crtc;122122123123 /* Unconditionally switch to TMDS as FRL is not yet supported */124124- gpiod_set_value(hdmi->frl_enable_gpio, 0);124124+ gpiod_set_value_cansleep(hdmi->frl_enable_gpio, 0);125125126126 if (!crtc || !crtc->state)127127 return;···640640 component_del(&pdev->dev, &dw_hdmi_qp_rockchip_ops);641641}642642643643+static int __maybe_unused dw_hdmi_qp_rockchip_suspend(struct device *dev)644644+{645645+ struct rockchip_hdmi_qp *hdmi = dev_get_drvdata(dev);646646+647647+ dw_hdmi_qp_suspend(dev, hdmi->hdmi);648648+649649+ return 0;650650+}651651+643652static int __maybe_unused dw_hdmi_qp_rockchip_resume(struct device *dev)644653{645654 struct rockchip_hdmi_qp *hdmi = dev_get_drvdata(dev);···664655}665656666657static const struct dev_pm_ops dw_hdmi_qp_rockchip_pm = {667667- SET_SYSTEM_SLEEP_PM_OPS(NULL, dw_hdmi_qp_rockchip_resume)658658+ SET_SYSTEM_SLEEP_PM_OPS(dw_hdmi_qp_rockchip_suspend,659659+ dw_hdmi_qp_rockchip_resume)668660};669661670662struct platform_driver dw_hdmi_qp_rockchip_pltfm_driver = {
+13-4
drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
···21042104 * Spin until the previous port_mux figuration is done.21052105 */21062106 ret = readx_poll_timeout_atomic(rk3568_vop2_read_port_mux, vop2, port_mux_sel,21072107- port_mux_sel == vop2->old_port_sel, 0, 50 * 1000);21072107+ port_mux_sel == vop2->old_port_sel, 10, 50 * 1000);21082108 if (ret)21092109 DRM_DEV_ERROR(vop2->dev, "wait port_mux done timeout: 0x%x--0x%x\n",21102110 port_mux_sel, vop2->old_port_sel);···21242124 * Spin until the previous layer configuration is done.21252125 */21262126 ret = readx_poll_timeout_atomic(rk3568_vop2_read_layer_cfg, vop2, atv_layer_cfg,21272127- atv_layer_cfg == cfg, 0, 50 * 1000);21272127+ atv_layer_cfg == cfg, 10, 50 * 1000);21282128 if (ret)21292129 DRM_DEV_ERROR(vop2->dev, "wait layer cfg done timeout: 0x%x--0x%x\n",21302130 atv_layer_cfg, cfg);···21442144 u8 layer_sel_id;21452145 unsigned int ofs;21462146 u32 ovl_ctrl;21472147+ u32 cfg_done;21472148 int i;21482149 struct vop2_video_port *vp0 = &vop2->vps[0];21492150 struct vop2_video_port *vp1 = &vop2->vps[1];···22992298 rk3568_vop2_wait_for_port_mux_done(vop2);23002299 }2301230023022302- if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel)23032303- rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel);23012301+ if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel) {23022302+ cfg_done = vop2_readl(vop2, RK3568_REG_CFG_DONE);23032303+ cfg_done &= (BIT(vop2->data->nr_vps) - 1);23042304+ cfg_done &= ~BIT(vp->id);23052305+ /*23062306+ * Changes of other VPs' overlays have not taken effect23072307+ */23082308+ if (cfg_done)23092309+ rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel);23102310+ }2304231123052312 vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel);23062313 mutex_unlock(&vop2->ovl_lock);
···515515/**516516 * vmw_event_fence_action_seq_passed517517 *518518- * @action: The struct vmw_fence_action embedded in a struct519519- * vmw_event_fence_action.518518+ * @f: The struct dma_fence which provides timestamp for the action event519519+ * @cb: The struct dma_fence_cb callback for the action event.520520 *521521- * This function is called when the seqno of the fence where @action is522522- * attached has passed. It queues the event on the submitter's event list.523523- * This function is always called from atomic context.521521+ * This function is called when the seqno of the fence has passed522522+ * and it is always called from atomic context.523523+ * It queues the event on the submitter's event list.524524 */525525static void vmw_event_fence_action_seq_passed(struct dma_fence *f,526526 struct dma_fence_cb *cb)
+8-6
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···766766 return ERR_PTR(ret);767767 }768768769769- ttm_bo_reserve(&bo->tbo, false, false, NULL);770770- ret = vmw_bo_dirty_add(bo);771771- if (!ret && surface && surface->res.func->dirty_alloc) {772772- surface->res.coherent = true;773773- ret = surface->res.func->dirty_alloc(&surface->res);769769+ if (bo) {770770+ ttm_bo_reserve(&bo->tbo, false, false, NULL);771771+ ret = vmw_bo_dirty_add(bo);772772+ if (!ret && surface && surface->res.func->dirty_alloc) {773773+ surface->res.coherent = true;774774+ ret = surface->res.func->dirty_alloc(&surface->res);775775+ }776776+ ttm_bo_unreserve(&bo->tbo);774777 }775775- ttm_bo_unreserve(&bo->tbo);776778777779 return &vfb->base;778780}
+3-1
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
···923923 ttm_bo_unreserve(&buf->tbo);924924925925 res = vmw_shader_alloc(dev_priv, buf, size, 0, shader_type);926926- if (unlikely(ret != 0))926926+ if (IS_ERR(res)) {927927+ ret = PTR_ERR(res);927928 goto no_reserve;929929+ }928930929931 ret = vmw_cmdbuf_res_add(man, vmw_cmdbuf_res_shader,930932 vmw_shader_key(user_key, shader_type),
+2
drivers/hv/mshv_common.c
···142142}143143EXPORT_SYMBOL_GPL(hv_call_get_partition_property);144144145145+#ifdef CONFIG_X86145146/*146147 * Corresponding sleep states have to be initialized in order for a subsequent147148 * HVCALL_ENTER_SLEEP_STATE call to succeed. Currently only S5 state as per···238237 BUG();239238240239}240240+#endif
+11-9
drivers/hv/mshv_regions.c
···
 
 	page_order = folio_order(page_folio(page));
 	/* The hypervisor only supports 4K and 2M page sizes */
-	if (page_order && page_order != HPAGE_PMD_ORDER)
+	if (page_order && page_order != PMD_ORDER)
 		return -EINVAL;
 
 	stride = 1 << page_order;
···
 	unsigned long mstart, mend;
 	int ret = -EPERM;
 
-	if (mmu_notifier_range_blockable(range))
-		mutex_lock(&region->mutex);
-	else if (!mutex_trylock(&region->mutex))
-		goto out_fail;
-
-	mmu_interval_set_seq(mni, cur_seq);
-
 	mstart = max(range->start, region->start_uaddr);
 	mend = min(range->end, region->start_uaddr +
 		   (region->nr_pages << HV_HYP_PAGE_SHIFT));
···
 	page_offset = HVPFN_DOWN(mstart - region->start_uaddr);
 	page_count = HVPFN_DOWN(mend - mstart);
 
+	if (mmu_notifier_range_blockable(range))
+		mutex_lock(&region->mutex);
+	else if (!mutex_trylock(&region->mutex))
+		goto out_fail;
+
+	mmu_interval_set_seq(mni, cur_seq);
+
 	ret = mshv_region_remap_pages(region, HV_MAP_GPA_NO_ACCESS,
 				      page_offset, page_count);
 	if (ret)
-		goto out_fail;
+		goto out_unlock;
 
 	mshv_region_invalidate_pages(region, page_offset, page_count);
···
 
 	return true;
 
+out_unlock:
+	mutex_unlock(&region->mutex);
 out_fail:
 	WARN_ONCE(ret,
 		  "Failed to invalidate region %#llx-%#llx (range %#lx-%#lx, event: %u, pages %#llx-%#llx, mm: %#llx): %d\n",
+7
drivers/i2c/busses/i2c-imx-lpi2c.c
···
 		return false;
 
+	/*
+	 * A system-wide suspend or resume transition is in progress. LPI2C
+	 * should use PIO to transfer data, to avoid issues caused by DMA
+	 * hardware resources that are not yet ready.
+	 */
+	if (pm_suspend_in_progress())
+		return false;
+
 	/*
 	 * When the length of data is less than I2C_DMA_THRESHOLD,
 	 * cpu mode is used directly to avoid low performance.
 	 */
+7-4
drivers/i2c/busses/i2c-qcom-geni.c
···116116 dma_addr_t dma_addr;117117 struct dma_chan *tx_c;118118 struct dma_chan *rx_c;119119+ bool no_dma;119120 bool gpi_mode;120121 bool abort_done;121122 bool is_tx_multi_desc_xfer;···448447 size_t len = msg->len;449448 struct i2c_msg *cur;450449451451- dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);450450+ dma_buf = gi2c->no_dma ? NULL : i2c_get_dma_safe_msg_buf(msg, 32);452451 if (dma_buf)453452 geni_se_select_mode(se, GENI_SE_DMA);454453 else···487486 size_t len = msg->len;488487 struct i2c_msg *cur;489488490490- dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);489489+ dma_buf = gi2c->no_dma ? NULL : i2c_get_dma_safe_msg_buf(msg, 32);491490 if (dma_buf)492491 geni_se_select_mode(se, GENI_SE_DMA);493492 else···10811080 goto err_resources;10821081 }1083108210841084- if (desc && desc->no_dma_support)10831083+ if (desc && desc->no_dma_support) {10851084 fifo_disable = false;10861086- else10851085+ gi2c->no_dma = true;10861086+ } else {10871087 fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;10881088+ }1088108910891090 if (fifo_disable) {10901091 /* FIFO is disabled, so we can only use GPI DMA */
+39-7
drivers/i2c/busses/i2c-riic.c
···
 
 static int riic_i2c_suspend(struct device *dev)
 {
-	struct riic_dev *riic = dev_get_drvdata(dev);
-	int ret;
+	/*
+	 * Some I2C devices may need the I2C controller to remain active
+	 * during resume_noirq() or suspend_noirq(). If the controller is
+	 * autosuspended, there is no way to wake it up once runtime PM is
+	 * disabled (in suspend_late()).
+	 *
+	 * During system resume, the I2C controller will be available only
+	 * after runtime PM is re-enabled (in resume_early()). However, this
+	 * may be too late for some devices.
+	 *
+	 * Wake up the controller in the suspend() callback while runtime PM
+	 * is still enabled. The I2C controller will remain available until
+	 * the suspend_noirq() callback (pm_runtime_force_suspend()) is
+	 * called. During resume, the I2C controller can be restored by the
+	 * resume_noirq() callback (pm_runtime_force_resume()).
+	 *
+	 * Finally, the resume() callback re-enables autosuspend, ensuring
+	 * the I2C controller remains available until the system enters
+	 * suspend_noirq() and from resume_noirq().
+	 */
+	return pm_runtime_resume_and_get(dev);
+}
 
-	ret = pm_runtime_resume_and_get(dev);
-	if (ret)
-		return ret;
+static int riic_i2c_resume(struct device *dev)
+{
+	pm_runtime_put_autosuspend(dev);
+
+	return 0;
+}
+
+static int riic_i2c_suspend_noirq(struct device *dev)
+{
+	struct riic_dev *riic = dev_get_drvdata(dev);
 
 	i2c_mark_adapter_suspended(&riic->adapter);
···
 	riic_clear_set_bit(riic, ICCR1_ICE, 0, RIIC_ICCR1);
 
 	pm_runtime_mark_last_busy(dev);
-	pm_runtime_put_sync(dev);
+	pm_runtime_force_suspend(dev);
 
 	return reset_control_assert(riic->rstc);
 }
 
-static int riic_i2c_resume(struct device *dev)
+static int riic_i2c_resume_noirq(struct device *dev)
 {
 	struct riic_dev *riic = dev_get_drvdata(dev);
 	int ret;
 
 	ret = reset_control_deassert(riic->rstc);
+	if (ret)
+		return ret;
+
+	ret = pm_runtime_force_resume(dev);
 	if (ret)
 		return ret;
···
 }
 
 static const struct dev_pm_ops riic_i2c_pm_ops = {
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(riic_i2c_suspend_noirq, riic_i2c_resume_noirq)
 	SYSTEM_SLEEP_PM_OPS(riic_i2c_suspend, riic_i2c_resume)
 };
···158158 tmp_vec.local_id = new_vec->local_id;159159160160 /* Point device to the temporary vector */161161- imsic_msi_update_msg(d, &tmp_vec);161161+ imsic_msi_update_msg(irq_get_irq_data(d->irq), &tmp_vec);162162 }163163164164 /* Point device to the new vector */165165- imsic_msi_update_msg(d, new_vec);165165+ imsic_msi_update_msg(irq_get_irq_data(d->irq), new_vec);166166167167 /* Update irq descriptors with the new vector */168168 d->chip_data = new_vec;
···6677config IPU_BRIDGE88 tristate "Intel IPU Bridge"99- depends on ACPI || COMPILE_TEST99+ depends on ACPI1010 depends on I2C1111 help1212 The IPU bridge is a helper library for Intel IPU drivers to
+29
drivers/media/pci/intel/ipu-bridge.c
···
 #include <acpi/acpi_bus.h>
 #include <linux/cleanup.h>
 #include <linux/device.h>
+#include <linux/dmi.h>
 #include <linux/i2c.h>
 #include <linux/mei_cl_bus.h>
 #include <linux/platform_device.h>
···
 	IPU_SENSOR_CONFIG("SONY471A", 1, 200000000),
 	/* Toshiba T4KA3 */
 	IPU_SENSOR_CONFIG("XMCC0003", 1, 321468000),
+};
+
+/*
+ * DMI matches for laptops which have their sensor mounted upside-down
+ * without reporting a rotation of 180° in either the SSDB or the _PLD.
+ */
+static const struct dmi_system_id upside_down_sensor_dmi_ids[] = {
+	{
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 13 9350"),
+		},
+		.driver_data = "OVTI02C1",
+	},
+	{
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 16 9640"),
+		},
+		.driver_data = "OVTI02C1",
+	},
+	{}	/* Terminating entry */
 };
 
 static const struct ipu_property_names prop_names = {
···
 static u32 ipu_bridge_parse_rotation(struct acpi_device *adev,
 				     struct ipu_sensor_ssdb *ssdb)
 {
+	const struct dmi_system_id *dmi_id;
+
+	dmi_id = dmi_first_match(upside_down_sensor_dmi_ids);
+	if (dmi_id && acpi_dev_hid_match(adev, dmi_id->driver_data))
+		return 180;
+
 	switch (ssdb->degree) {
 	case IPU_SENSOR_ROTATION_NORMAL:
 		return 0;
···11# SPDX-License-Identifier: GPL-2.0-only2233menuconfig CAN_DEV44- bool "CAN Device Drivers"44+ tristate "CAN Device Drivers"55 default y66 depends on CAN77 help···1717 virtual ones. If you own such devices or plan to use the virtual CAN1818 interfaces to develop applications, say Y here.19192020-if CAN_DEV && CAN2020+ To compile as a module, choose M here: the module will be called2121+ can-dev.2222+2323+if CAN_DEV21242225config CAN_VCAN2326 tristate "Virtual Local CAN Interface (vcan)"
···751751 hf, parent->hf_size_rx,752752 gs_usb_receive_bulk_callback, parent);753753754754+ usb_anchor_urb(urb, &parent->rx_submitted);755755+754756 rc = usb_submit_urb(urb, GFP_ATOMIC);755757756758 /* USB failure take down all interfaces */
+15
drivers/net/can/vcan.c
···130130 return NETDEV_TX_OK;131131}132132133133+static void vcan_set_cap_info(struct net_device *dev)134134+{135135+ u32 can_cap = CAN_CAP_CC;136136+137137+ if (dev->mtu > CAN_MTU)138138+ can_cap |= CAN_CAP_FD;139139+140140+ if (dev->mtu >= CANXL_MIN_MTU)141141+ can_cap |= CAN_CAP_XL;142142+143143+ can_set_cap(dev, can_cap);144144+}145145+133146static int vcan_change_mtu(struct net_device *dev, int new_mtu)134147{135148 /* Do not allow changing the MTU while running */···154141 return -EINVAL;155142156143 WRITE_ONCE(dev->mtu, new_mtu);144144+ vcan_set_cap_info(dev);157145 return 0;158146}159147···176162 dev->tx_queue_len = 0;177163 dev->flags = IFF_NOARP;178164 can_set_ml_priv(dev, netdev_priv(dev));165165+ vcan_set_cap_info(dev);179166180167 /* set flags according to driver capabilities */181168 if (echo)
+15
drivers/net/can/vxcan.c
···125125 return iflink;126126}127127128128+static void vxcan_set_cap_info(struct net_device *dev)129129+{130130+ u32 can_cap = CAN_CAP_CC;131131+132132+ if (dev->mtu > CAN_MTU)133133+ can_cap |= CAN_CAP_FD;134134+135135+ if (dev->mtu >= CANXL_MIN_MTU)136136+ can_cap |= CAN_CAP_XL;137137+138138+ can_set_cap(dev, can_cap);139139+}140140+128141static int vxcan_change_mtu(struct net_device *dev, int new_mtu)129142{130143 /* Do not allow changing the MTU while running */···149136 return -EINVAL;150137151138 WRITE_ONCE(dev->mtu, new_mtu);139139+ vxcan_set_cap_info(dev);152140 return 0;153141}154142···181167182168 can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);183169 can_set_ml_priv(dev, can_ml);170170+ vxcan_set_cap_info(dev);184171}185172186173/* forward declaration for rtnl_create_link() */
···
 
 void mlx5e_priv_cleanup(struct mlx5e_priv *priv)
 {
+	bool destroying = test_bit(MLX5E_STATE_DESTROYING, &priv->state);
 	int i;
 
 	/* bail if change profile failed and also rollback failed */
···
 	}
 
 	memset(priv, 0, sizeof(*priv));
+	if (destroying) /* restore destroying bit, to allow unload */
+		set_bit(MLX5E_STATE_DESTROYING, &priv->state);
 }
 
 static unsigned int mlx5e_get_max_num_txqs(struct mlx5_core_dev *mdev,
···
 	return err;
 }
 
-int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
-				const struct mlx5e_profile *new_profile, void *new_ppriv)
+int mlx5e_netdev_change_profile(struct net_device *netdev,
+				struct mlx5_core_dev *mdev,
+				const struct mlx5e_profile *new_profile,
+				void *new_ppriv)
 {
-	const struct mlx5e_profile *orig_profile = priv->profile;
-	struct net_device *netdev = priv->netdev;
-	struct mlx5_core_dev *mdev = priv->mdev;
-	void *orig_ppriv = priv->ppriv;
+	struct mlx5e_priv *priv = netdev_priv(netdev);
+	const struct mlx5e_profile *orig_profile;
 	int err, rollback_err;
+	void *orig_ppriv;
 
-	/* cleanup old profile */
-	mlx5e_detach_netdev(priv);
-	priv->profile->cleanup(priv);
-	mlx5e_priv_cleanup(priv);
+	orig_profile = priv->profile;
+	orig_ppriv = priv->ppriv;
+
+	/* NULL could happen if previous change_profile failed to rollback */
+	if (priv->profile) {
+		WARN_ON_ONCE(priv->mdev != mdev);
+		/* cleanup old profile */
+		mlx5e_detach_netdev(priv);
+		priv->profile->cleanup(priv);
+		mlx5e_priv_cleanup(priv);
+	}
+	/* priv members are not valid from this point ... */
 
 	if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
 		mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);
···
 		return 0;
 
 rollback:
+	if (!orig_profile) {
+		netdev_warn(netdev, "no original profile to rollback to\n");
+		priv->profile = NULL;
+		return err;
+	}
+
 	rollback_err = mlx5e_netdev_attach_profile(netdev, mdev, orig_profile, orig_ppriv);
-	if (rollback_err)
-		netdev_err(netdev, "%s: failed to rollback to orig profile, %d\n",
-			   __func__, rollback_err);
+	if (rollback_err) {
+		netdev_err(netdev, "failed to rollback to orig profile, %d\n",
+			   rollback_err);
+		priv->profile = NULL;
+	}
 	return err;
 }
 
-void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv)
+void mlx5e_netdev_attach_nic_profile(struct net_device *netdev,
+				     struct mlx5_core_dev *mdev)
 {
-	mlx5e_netdev_change_profile(priv, &mlx5e_nic_profile, NULL);
+	mlx5e_netdev_change_profile(netdev, mdev, &mlx5e_nic_profile, NULL);
 }
 
-void mlx5e_destroy_netdev(struct mlx5e_priv *priv)
+void mlx5e_destroy_netdev(struct net_device *netdev)
 {
-	struct net_device *netdev = priv->netdev;
+	struct mlx5e_priv *priv = netdev_priv(netdev);
 
-	mlx5e_priv_cleanup(priv);
+	if (priv->profile)
+		mlx5e_priv_cleanup(priv);
 	free_netdev(netdev);
 }
 
···
 {
 	struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
 	struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
-	struct mlx5e_priv *priv = mlx5e_dev->priv;
-	struct net_device *netdev = priv->netdev;
+	struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev);
+	struct net_device *netdev = mlx5e_dev->netdev;
 	struct mlx5_core_dev *mdev = edev->mdev;
 	struct mlx5_core_dev *pos, *to;
 	int err, i;
···
 
 static int _mlx5e_suspend(struct auxiliary_device *adev, bool pre_netdev_reg)
 {
+	struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
 	struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
-	struct mlx5e_priv *priv = mlx5e_dev->priv;
-	struct net_device *netdev = priv->netdev;
-	struct mlx5_core_dev *mdev = priv->mdev;
+	struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev);
+	struct net_device *netdev = mlx5e_dev->netdev;
+	struct mlx5_core_dev *mdev = edev->mdev;
 	struct mlx5_core_dev *pos;
 	int i;
···
 		goto err_devlink_port_unregister;
 	}
 	SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port);
+	mlx5e_dev->netdev = netdev;
 
 	mlx5e_build_nic_netdev(netdev);
 
 	priv = netdev_priv(netdev);
-	mlx5e_dev->priv = priv;
 
 	priv->profile = profile;
 	priv->ppriv = NULL;
···
 err_profile_cleanup:
 	profile->cleanup(priv);
 err_destroy_netdev:
-	mlx5e_destroy_netdev(priv);
+	mlx5e_destroy_netdev(netdev);
 err_devlink_port_unregister:
 	mlx5e_devlink_port_unregister(mlx5e_dev);
 err_devlink_unregister:
···
 {
 	struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
 	struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
-	struct mlx5e_priv *priv = mlx5e_dev->priv;
+	struct net_device *netdev = mlx5e_dev->netdev;
+	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct mlx5_core_dev *mdev = edev->mdev;
 
 	mlx5_core_uplink_netdev_set(mdev, NULL);
-	mlx5e_dcbnl_delete_app(priv);
+
+	if (priv->profile)
+		mlx5e_dcbnl_delete_app(priv);
 	/* When unload driver, the netdev is in registered state
 	 * if it's from legacy mode. If from switchdev mode, it
 	 * is already unregistered before changing to NIC profile.
 	 */
-	if (priv->netdev->reg_state == NETREG_REGISTERED) {
-		unregister_netdev(priv->netdev);
+	if (netdev->reg_state == NETREG_REGISTERED) {
+		unregister_netdev(netdev);
 		_mlx5e_suspend(adev, false);
 	} else {
 		struct mlx5_core_dev *pos;
···
 	/* Avoid cleanup if profile rollback failed. */
 	if (priv->profile)
 		priv->profile->cleanup(priv);
-	mlx5e_destroy_netdev(priv);
+	mlx5e_destroy_netdev(netdev);
 	mlx5e_devlink_port_unregister(mlx5e_dev);
 	mlx5e_destroy_devlink(mlx5e_dev);
 }
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | +7 -8
···
{
 	struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep);
 	struct net_device *netdev;
-	struct mlx5e_priv *priv;
 	int err;

 	netdev = mlx5_uplink_netdev_get(dev);
 	if (!netdev)
 		return 0;

-	priv = netdev_priv(netdev);
-	rpriv->netdev = priv->netdev;
-	err = mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile,
-					  rpriv);
+	/* must not use netdev_priv(netdev), it might not be initialized yet */
+	rpriv->netdev = netdev;
+	err = mlx5e_netdev_change_profile(netdev, dev,
+					  &mlx5e_uplink_rep_profile, rpriv);
 	mlx5_uplink_netdev_put(dev, netdev);
 	return err;
}
···
 	if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY))
 		unregister_netdev(netdev);

-	mlx5e_netdev_attach_nic_profile(priv);
+	mlx5e_netdev_attach_nic_profile(netdev, priv->mdev);
}

static int
···
 	priv->profile->cleanup(priv);

err_destroy_netdev:
-	mlx5e_destroy_netdev(netdev_priv(netdev));
+	mlx5e_destroy_netdev(netdev);
 	return err;
}
···
 	mlx5e_rep_vnic_reporter_destroy(priv);
 	mlx5e_detach_netdev(priv);
 	priv->profile->cleanup(priv);
-	mlx5e_destroy_netdev(priv);
+	mlx5e_destroy_netdev(netdev);
free_ppriv:
 	kvfree(ppriv); /* mlx5e_rep_priv */
}
drivers/net/hyperv/netvsc_drv.c | +3
···
 	    rxfh->hfunc != ETH_RSS_HASH_TOP)
 		return -EOPNOTSUPP;

+	if (!ndc->rx_table_sz)
+		return -EOPNOTSUPP;
+
 	rndis_dev = ndev->extension;
 	if (rxfh->indir) {
 		for (i = 0; i < ndc->rx_table_sz; i++)
···
 		val |= YT8521_LED_1000_ON_EN;

 	if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules))
-		val |= YT8521_LED_HDX_ON_EN;
+		val |= YT8521_LED_FDX_ON_EN;

 	if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules))
-		val |= YT8521_LED_FDX_ON_EN;
+		val |= YT8521_LED_HDX_ON_EN;

 	if (test_bit(TRIGGER_NETDEV_TX, &rules) ||
 	    test_bit(TRIGGER_NETDEV_RX, &rules))
drivers/net/virtio_net.c | +43 -132
···
 	u16 rss_indir_table_size;
 	u32 rss_hash_types_supported;
 	u32 rss_hash_types_saved;
-	struct virtio_net_rss_config_hdr *rss_hdr;
-	struct virtio_net_rss_config_trailer rss_trailer;
-	u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];

 	/* Has control virtqueue */
 	bool has_cvq;
···
 	/* Packet virtio header size */
 	u8 hdr_len;

-	/* Work struct for delayed refilling if we run low on memory. */
-	struct delayed_work refill;
-
 	/* UDP tunnel support */
 	bool tx_tnl;

 	bool rx_tnl;

 	bool rx_tnl_csum;
-
-	/* Is delayed refill enabled? */
-	bool refill_enabled;
-
-	/* The lock to synchronize the access to refill_enabled */
-	spinlock_t refill_lock;

 	/* Work struct for config space updates */
 	struct work_struct config_work;
···
 	struct failover *failover;

 	u64 device_stats_cap;
+
+	struct virtio_net_rss_config_hdr *rss_hdr;
+
+	/* Must be last as it ends in a flexible-array member. */
+	TRAILING_OVERLAP(struct virtio_net_rss_config_trailer, rss_trailer, hash_key_data,
+		u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+	);
};
+static_assert(offsetof(struct virtnet_info, rss_trailer.hash_key_data) ==
+	      offsetof(struct virtnet_info, rss_hash_key_data));

struct padded_vnet_hdr {
 	struct virtio_net_hdr_v1_hash hdr;
···
 		give_pages(rq, buf);
 	else
 		put_page(virt_to_head_page(buf));
-}
-
-static void enable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = true;
-	spin_unlock_bh(&vi->refill_lock);
-}
-
-static void disable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = false;
-	spin_unlock_bh(&vi->refill_lock);
}

static void enable_rx_mode_work(struct virtnet_info *vi)
···
 	napi_disable(napi);
}

-static void refill_work(struct work_struct *work)
-{
-	struct virtnet_info *vi =
-		container_of(work, struct virtnet_info, refill.work);
-	bool still_empty;
-	int i;
-
-	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
-
-		/*
-		 * When queue API support is added in the future and the call
-		 * below becomes napi_disable_locked, this driver will need to
-		 * be refactored.
-		 *
-		 * One possible solution would be to:
-		 * - cancel refill_work with cancel_delayed_work (note:
-		 *   non-sync)
-		 * - cancel refill_work with cancel_delayed_work_sync in
-		 *   virtnet_remove after the netdev is unregistered
-		 * - wrap all of the work in a lock (perhaps the netdev
-		 *   instance lock)
-		 * - check netif_running() and return early to avoid a race
-		 */
-		napi_disable(&rq->napi);
-		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
-		virtnet_napi_do_enable(rq->vq, &rq->napi);
-
-		/* In theory, this can happen: if we don't get any buffers in
-		 * we will *never* try to fill again.
-		 */
-		if (still_empty)
-			schedule_delayed_work(&vi->refill, HZ/2);
-	}
-}
-
static int virtnet_receive_xsk_bufs(struct virtnet_info *vi,
				    struct receive_queue *rq,
				    int budget,
···
 	else
 		packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);

+	u64_stats_set(&stats.packets, packets);
 	if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
-		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
-			spin_lock(&vi->refill_lock);
-			if (vi->refill_enabled)
-				schedule_delayed_work(&vi->refill, 0);
-			spin_unlock(&vi->refill_lock);
-		}
+		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
+			/* We need to retry refilling in the next NAPI poll so
+			 * we must return budget to make sure the NAPI is
+			 * repolled.
+			 */
+			packets = budget;
 	}

-	u64_stats_set(&stats.packets, packets);
 	u64_stats_update_begin(&rq->stats.syncp);
 	for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) {
 		size_t offset = virtnet_rq_stats_desc[i].offset;
···
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i, err;

-	enable_delayed_refill(vi);
-
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
-			/* Make sure we have some buffers: if oom use wq. */
-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+			/* Pre-fill rq aggressively, to make sure we are ready to
+			 * get packets immediately.
+			 */
+			try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);

 		err = virtnet_enable_queue_pair(vi, i);
 		if (err < 0)
···
 	return 0;

err_enable_qp:
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
-
 	for (i--; i >= 0; i--) {
 		virtnet_disable_queue_pair(vi, i);
 		virtnet_cancel_dim(vi, &vi->rq[i].dim);
···
 	return NETDEV_TX_OK;
}

-static void __virtnet_rx_pause(struct virtnet_info *vi,
-			       struct receive_queue *rq)
+static void virtnet_rx_pause(struct virtnet_info *vi,
+			     struct receive_queue *rq)
{
 	bool running = netif_running(vi->dev);
···
{
 	int i;

-	/*
-	 * Make sure refill_work does not run concurrently to
-	 * avoid napi_disable race which leads to deadlock.
-	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
 	for (i = 0; i < vi->max_queue_pairs; i++)
-		__virtnet_rx_pause(vi, &vi->rq[i]);
+		virtnet_rx_pause(vi, &vi->rq[i]);
}

-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
+static void virtnet_rx_resume(struct virtnet_info *vi,
+			      struct receive_queue *rq,
+			      bool refill)
{
-	/*
-	 * Make sure refill_work does not run concurrently to
-	 * avoid napi_disable race which leads to deadlock.
-	 */
-	disable_delayed_refill(vi);
-	cancel_delayed_work_sync(&vi->refill);
-	__virtnet_rx_pause(vi, rq);
-}
+	if (netif_running(vi->dev)) {
+		/* Pre-fill rq aggressively, to make sure we are ready to get
+		 * packets immediately.
+		 */
+		if (refill)
+			try_fill_recv(vi, rq, GFP_KERNEL);

-static void __virtnet_rx_resume(struct virtnet_info *vi,
-				struct receive_queue *rq,
-				bool refill)
-{
-	bool running = netif_running(vi->dev);
-	bool schedule_refill = false;
-
-	if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
-		schedule_refill = true;
-	if (running)
 		virtnet_napi_enable(rq);
-
-	if (schedule_refill)
-		schedule_delayed_work(&vi->refill, 0);
+	}
}

static void virtnet_rx_resume_all(struct virtnet_info *vi)
{
 	int i;

-	enable_delayed_refill(vi);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
-			__virtnet_rx_resume(vi, &vi->rq[i], true);
+			virtnet_rx_resume(vi, &vi->rq[i], true);
 		else
-			__virtnet_rx_resume(vi, &vi->rq[i], false);
+			virtnet_rx_resume(vi, &vi->rq[i], false);
 	}
-}
-
-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
-{
-	enable_delayed_refill(vi);
-	__virtnet_rx_resume(vi, rq, true);
}

static int virtnet_rx_resize(struct virtnet_info *vi,
···
 	if (err)
 		netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);

-	virtnet_rx_resume(vi, rq);
+	virtnet_rx_resume(vi, rq, true);
 	return err;
}
···
 	}
succ:
 	vi->curr_queue_pairs = queue_pairs;
-	/* virtnet_open() will refill when device is going to up. */
-	spin_lock_bh(&vi->refill_lock);
-	if (dev->flags & IFF_UP && vi->refill_enabled)
-		schedule_delayed_work(&vi->refill, 0);
-	spin_unlock_bh(&vi->refill_lock);
+	if (dev->flags & IFF_UP) {
+		local_bh_disable();
+		for (int i = 0; i < vi->curr_queue_pairs; ++i)
+			virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
+		local_bh_enable();
+	}

 	return 0;
}
···
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i;

-	/* Make sure NAPI doesn't schedule refill work */
-	disable_delayed_refill(vi);
-	/* Make sure refill_work doesn't re-enable napi! */
-	cancel_delayed_work_sync(&vi->refill);
 	/* Prevent the config change callback from changing carrier
 	 * after close
 	 */
···
 	virtio_device_ready(vdev);

-	enable_delayed_refill(vi);
 	enable_rx_mode_work(vi);

 	if (netif_running(vi->dev)) {
···
 	rq->xsk_pool = pool;

-	virtnet_rx_resume(vi, rq);
+	virtnet_rx_resume(vi, rq, true);

 	if (pool)
 		return 0;
···
 	if (!vi->rq)
 		goto err_rq;

-	INIT_DELAYED_WORK(&vi->refill, refill_work);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		vi->rq[i].pages = NULL;
 		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
···
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
 	INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work);
-	spin_lock_init(&vi->refill_lock);

 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
 		vi->mergeable_rx_bufs = true;
···
 	net_failover_destroy(vi->failover);
free_vqs:
 	virtio_reset_device(vdev);
-	cancel_delayed_work_sync(&vi->refill);
 	free_receive_page_frags(vi);
 	virtnet_del_vqs(vi);
free:
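The virtio_net change above leans on a core NAPI rule: a poll callback that consumes its full budget is guaranteed to be polled again, so a failed buffer refill can simply return `budget` instead of scheduling delayed work. A minimal user-space sketch of that contract; all names (`toy_rq`, `toy_poll`, `toy_napi_loop`) are illustrative stand-ins, not the driver's real symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the NAPI budget contract: returning the full budget from
 * the poll callback keeps the "NAPI" scheduled, so a failed refill is
 * retried on the next poll rather than via a delayed workqueue. */
struct toy_rq {
	int packets_pending;	/* packets the fake device has queued */
	bool refill_failed;	/* set while the fake refill is out of memory */
};

static int toy_poll(struct toy_rq *rq, int budget)
{
	int done = rq->packets_pending < budget ? rq->packets_pending : budget;

	rq->packets_pending -= done;

	/* Report the whole budget so the loop below polls us again. */
	if (rq->refill_failed)
		done = budget;
	return done;
}

/* Core loop: keep polling while the callback exhausts its budget. */
static int toy_napi_loop(struct toy_rq *rq, int budget, int max_rounds)
{
	int rounds = 0;

	while (rounds < max_rounds) {
		rounds++;
		if (toy_poll(rq, budget) < budget)
			break;	/* under budget: polling completes */
		if (rq->refill_failed && rounds == 2)
			rq->refill_failed = false; /* refill succeeds later */
	}
	return rounds;
}
```

The point of the sketch is that the retry needs no extra state: as long as the refill keeps failing, the poll function keeps the queue scheduled by itself.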
···
 	 * code path with duplicate ctrl subsysnqn. In order to prevent that we
 	 * mask the passthru-ctrl subsysnqn with the target ctrl subsysnqn.
 	 */
-	memcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));
+	strscpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));

 	/* use fabric id-ctrl values */
 	id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
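The memcpy-to-strscpy swap above matters because `memcpy` of `sizeof(id->subnqn)` bytes copies past the end of a shorter NUL-terminated source, while `strscpy` stops at the NUL and always terminates the destination. A user-space model of those semantics (`toy_strscpy` is a simplified stand-in for the kernel helper, not its actual implementation; the real one returns `-E2BIG` where this returns -1):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal model of strscpy(): copy at most size - 1 bytes, always
 * NUL-terminate, return bytes copied or -1 on truncation. */
static long toy_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;
	len = strnlen(src, size);
	if (len == size) {		/* source does not fit */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;
	}
	memcpy(dst, src, len + 1);	/* includes the terminating NUL */
	return (long)len;
}
```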
drivers/nvme/target/tcp.c | +16 -5
···
 		pr_err("H2CData PDU len %u is invalid\n", cmd->pdu_len);
 		goto err_proto;
 	}
+	/*
+	 * Ensure command data structures are initialized. We must check both
+	 * cmd->req.sg and cmd->iov because they can have different NULL states:
+	 * - Uninitialized commands: both NULL
+	 * - READ commands: cmd->req.sg allocated, cmd->iov NULL
+	 * - WRITE commands: both allocated
+	 */
+	if (unlikely(!cmd->req.sg || !cmd->iov)) {
+		pr_err("queue %d: H2CData PDU received for invalid command state (ttag %u)\n",
+		       queue->idx, data->ttag);
+		goto err_proto;
+	}
 	cmd->pdu_recv = 0;
 	nvmet_tcp_build_pdu_iovec(cmd);
 	queue->cmd = cmd;
···
 	trace_sk_data_ready(sk);

+	if (sk->sk_state != TCP_LISTEN)
+		return;
+
 	read_lock_bh(&sk->sk_callback_lock);
 	port = sk->sk_user_data;
-	if (!port)
-		goto out;
-
-	if (sk->sk_state == TCP_LISTEN)
+	if (port)
 		queue_work(nvmet_wq, &port->accept_work);
-out:
 	read_unlock_bh(&sk->sk_callback_lock);
}
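The data_ready hunk above follows a common callback pattern: do the cheap state test before taking the callback lock, then queue work only when user data is registered. A user-space model of that shape (`toy_sock`, `toy_data_ready` and the counters are stand-ins, and the lock is reduced to a counter; none of this is the kernel socket API):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the refactored data_ready callback: bail out early for
 * non-listening sockets, so they never pay for the callback lock. */
enum toy_state { TOY_LISTEN, TOY_ESTABLISHED };

struct toy_sock {
	enum toy_state state;
	void *user_data;	/* the registered port, or NULL */
	int lock_taken;		/* counts callback-lock acquisitions */
	int work_queued;	/* counts queued accept work */
};

static void toy_data_ready(struct toy_sock *sk)
{
	/* Cheap check first: only listening sockets queue accept work. */
	if (sk->state != TOY_LISTEN)
		return;

	sk->lock_taken++;	/* read_lock_bh(&sk->sk_callback_lock) */
	if (sk->user_data)
		sk->work_queued++;
	/* read_unlock_bh(...) */
}
```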
drivers/pci/Kconfig | -6
···
 	  P2P DMA transactions must be between devices behind the same root
 	  port.

-	  Enabling this option will reduce the entropy of x86 KASLR memory
-	  regions. For example - on a 46 bit system, the entropy goes down
-	  from 16 bits to 15 bits. The actual reduction in entropy depends
-	  on the physical address bits, on processor features, kernel config
-	  (5 level page table) and physical memory present on the system.
-
 	  If unsure, say N.

config PCI_LABEL
···
config PHY_SPARX5_SERDES
 	tristate "Microchip Sparx5 SerDes PHY driver"
 	select GENERIC_PHY
-	depends on ARCH_SPARX5 || COMPILE_TEST
+	depends on ARCH_SPARX5 || ARCH_LAN969X || COMPILE_TEST
 	depends on OF
 	depends on HAS_IOMEM
 	help
drivers/phy/qualcomm/phy-qcom-qusb2.c | +8 -8
···
 		or->hsdisc_trim.override = true;
 	}

-	pm_runtime_set_active(dev);
-	pm_runtime_enable(dev);
+	dev_set_drvdata(dev, qphy);
+
 	/*
-	 * Prevent runtime pm from being ON by default. Users can enable
-	 * it using power/control in sysfs.
+	 * Enable runtime PM support, but forbid it by default.
+	 * Users can allow it again via the power/control attribute in sysfs.
 	 */
+	pm_runtime_set_active(dev);
 	pm_runtime_forbid(dev);
+	ret = devm_pm_runtime_enable(dev);
+	if (ret)
+		return ret;

 	generic_phy = devm_phy_create(dev, NULL, &qusb2_phy_gen_ops);
 	if (IS_ERR(generic_phy)) {
 		ret = PTR_ERR(generic_phy);
 		dev_err(dev, "failed to create phy, %d\n", ret);
-		pm_runtime_disable(dev);
 		return ret;
 	}
 	qphy->phy = generic_phy;

-	dev_set_drvdata(dev, qphy);
 	phy_set_drvdata(generic_phy, qphy);

 	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
-	if (IS_ERR(phy_provider))
-		pm_runtime_disable(dev);

 	return PTR_ERR_OR_ZERO(phy_provider);
}
drivers/phy/rockchip/phy-rockchip-inno-usb2.c | +9 -5
···
 		container_of(work, struct rockchip_usb2phy_port, chg_work.work);
 	struct rockchip_usb2phy *rphy = dev_get_drvdata(rport->phy->dev.parent);
 	struct regmap *base = get_reg_base(rphy);
-	bool is_dcd, tmout, vout;
+	bool is_dcd, tmout, vout, vbus_attach;
 	unsigned long delay;
+
+	vbus_attach = property_enabled(rphy->grf, &rport->port_cfg->utmi_bvalid);

 	dev_dbg(&rport->phy->dev, "chg detection work state = %d\n",
 		rphy->chg_state);
 	switch (rphy->chg_state) {
 	case USB_CHG_STATE_UNDEFINED:
-		if (!rport->suspended)
+		if (!rport->suspended && !vbus_attach)
 			rockchip_usb2phy_power_off(rport->phy);
 		/* put the controller in non-driving mode */
-		property_enable(base, &rphy->phy_cfg->chg_det.opmode, false);
+		if (!vbus_attach)
+			property_enable(base, &rphy->phy_cfg->chg_det.opmode, false);
 		/* Start DCD processing stage 1 */
 		rockchip_chg_enable_dcd(rphy, true);
 		rphy->chg_state = USB_CHG_STATE_WAIT_FOR_DCD;
···
 		fallthrough;
 	case USB_CHG_STATE_DETECTED:
 		/* put the controller in normal mode */
-		property_enable(base, &rphy->phy_cfg->chg_det.opmode, true);
+		if (!vbus_attach)
+			property_enable(base, &rphy->phy_cfg->chg_det.opmode, true);
 		rockchip_usb2phy_otg_sm_work(&rport->otg_sm_work.work);
 		dev_dbg(&rport->phy->dev, "charger = %s\n",
 			chg_to_string(rphy->chg_type));
···
 				       rphy);
 		if (ret) {
 			dev_err_probe(rphy->dev, ret, "failed to request usb2phy irq handle\n");
-			goto put_child;
+			return ret;
 		}
 	}
drivers/phy/st/phy-stm32-usbphyc.c | +1 -1
···
 	}

 	ret = of_property_read_u32(child, "reg", &index);
-	if (ret || index > usbphyc->nphys) {
+	if (ret || index >= usbphyc->nphys) {
 		dev_err(&phy->dev, "invalid reg property: %d\n", ret);
 		if (!ret)
 			ret = -EINVAL;
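The stm32-usbphyc fix above tightens a classic off-by-one: for an array of `nphys` PHYs the valid indices are `0 .. nphys - 1`, so `index == nphys` must be rejected too, which `index > nphys` failed to do. A trivial checker capturing the corrected predicate (`reg_index_valid` is an illustrative helper, not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>

/* Valid array indices for n elements are 0 .. n - 1, so accept the
 * index only when it is strictly below the element count. */
static bool reg_index_valid(unsigned int index, unsigned int nphys)
{
	return index < nphys;	/* i.e. reject index >= nphys */
}
```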
drivers/phy/tegra/xusb-tegra186.c | +3
···
#define XUSB_PADCTL_USB2_BIAS_PAD_CTL0		0x284
#define  BIAS_PAD_PD				BIT(11)
#define  HS_SQUELCH_LEVEL(x)			(((x) & 0x7) << 0)
+#define  HS_DISCON_LEVEL(x)			(((x) & 0x7) << 3)

#define XUSB_PADCTL_USB2_BIAS_PAD_CTL1		0x288
#define  USB2_TRK_START_TIMER(x)		(((x) & 0x7f) << 12)
···
 	value &= ~BIAS_PAD_PD;
 	value &= ~HS_SQUELCH_LEVEL(~0);
 	value |= HS_SQUELCH_LEVEL(priv->calib.hs_squelch);
+	value &= ~HS_DISCON_LEVEL(~0);
+	value |= HS_DISCON_LEVEL(0x7);
 	padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL0);

 	udelay(1);
drivers/phy/ti/phy-da8xx-usb.c | +4 -3
···
 	struct da8xx_usb_phy_platform_data *pdata = dev->platform_data;
 	struct device_node *node = dev->of_node;
 	struct da8xx_usb_phy *d_phy;
+	int ret;

 	d_phy = devm_kzalloc(dev, sizeof(*d_phy), GFP_KERNEL);
 	if (!d_phy)
···
 			return PTR_ERR(d_phy->phy_provider);
 		}
 	} else {
-		int ret;
-
 		ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy",
 					"ohci-da8xx");
 		if (ret)
···
 			   PHY_INIT_BITS, PHY_INIT_BITS);

 	pm_runtime_set_active(dev);
-	devm_pm_runtime_enable(dev);
+	ret = devm_pm_runtime_enable(dev);
+	if (ret)
+		return ret;
 	/*
 	 * Prevent runtime pm from being ON by default. Users can enable
 	 * it using power/control in sysfs.
drivers/phy/ti/phy-gmii-sel.c | +1 -1
···
 		return dev_err_probe(dev, PTR_ERR(base),
 				     "failed to get base memory resource\n");

-	priv->regmap = regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg);
+	priv->regmap = devm_regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg);
 	if (IS_ERR(priv->regmap))
 		return dev_err_probe(dev, PTR_ERR(priv->regmap),
 				     "Failed to get syscon\n");
drivers/resctrl/mpam_internal.h | +6 -3
···
#include <linux/jump_label.h>
#include <linux/llist.h>
#include <linux/mutex.h>
-#include <linux/srcu.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>
#include <linux/types.h>
···
} PACKED_FOR_KUNIT;

#define mpam_has_feature(_feat, x)	test_bit(_feat, (x)->features)
-#define mpam_set_feature(_feat, x)	set_bit(_feat, (x)->features)
-#define mpam_clear_feature(_feat, x)	clear_bit(_feat, (x)->features)
+/*
+ * The non-atomic get/set operations are used because if struct mpam_props is
+ * packed, the alignment requirements for atomics aren't met.
+ */
+#define mpam_set_feature(_feat, x)	__set_bit(_feat, (x)->features)
+#define mpam_clear_feature(_feat, x)	__clear_bit(_feat, (x)->features)

/* The values for MSMON_CFG_MBWU_FLT.RWBW */
enum mon_filter_options {
drivers/scsi/bfa/bfa_fcs.c | +1 -1
···
 * This function should be used only if there is any requirement
 * to check for FOS version below 6.3.
 * To check if the attached fabric is a brocade fabric, use
- * bfa_lps_is_brcd_fabric() which works for FOS versions 6.3
+ * fabric->lps->brcd_switch which works for FOS versions 6.3
 * or above only.
 */
drivers/scsi/scsi_error.c | +24
···
 		unsigned char *cmnd, int cmnd_size, unsigned sense_bytes)
{
 	struct scsi_device *sdev = scmd->device;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct request *rq = scsi_cmd_to_rq(scmd);
+#endif

 	/*
 	 * We need saved copies of a number of fields - this is because
···
 			(sdev->lun << 5 & 0xe0);

 	/*
+	 * Encryption must be disabled for the commands submitted by the error handler.
+	 * Hence, clear the encryption context information.
+	 */
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	ses->rq_crypt_keyslot = rq->crypt_keyslot;
+	ses->rq_crypt_ctx = rq->crypt_ctx;
+
+	rq->crypt_keyslot = NULL;
+	rq->crypt_ctx = NULL;
+#endif
+
+	/*
 	 * Zero the sense buffer. The scsi spec mandates that any
 	 * untransferred sense data should be interpreted as being zero.
 	 */
···
 */
void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)
{
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct request *rq = scsi_cmd_to_rq(scmd);
+#endif
+
 	/*
 	 * Restore original data
 	 */
···
 	scmd->underflow = ses->underflow;
 	scmd->prot_op = ses->prot_op;
 	scmd->eh_eflags = ses->eh_eflags;
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	rq->crypt_keyslot = ses->rq_crypt_keyslot;
+	rq->crypt_ctx = ses->rq_crypt_ctx;
+#endif
}
EXPORT_SYMBOL(scsi_eh_restore_cmnd);
drivers/scsi/scsi_lib.c | +1 -1
···
 *	@retries: number of retries before failing
 *	@sshdr: outpout pointer for decoded sense information.
 *
- *	Returns zero if unsuccessful or an error if TUR failed. For
+ *	Returns zero if successful or an error if TUR failed. For
 *	removable media, UNIT_ATTENTION sets ->changed flag.
 **/
int
drivers/soundwire/bus_type.c | +1 -1
···
 	if (ret)
 		return ret;

-	ret = ida_alloc_max(&slave->bus->slave_ida, SDW_FW_MAX_DEVICES, GFP_KERNEL);
+	ret = ida_alloc_max(&slave->bus->slave_ida, SDW_FW_MAX_DEVICES - 1, GFP_KERNEL);
 	if (ret < 0) {
 		dev_err(dev, "Failed to allocated ID: %d\n", ret);
 		return ret;
	}
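The soundwire fix above hinges on `ida_alloc_max()` taking an *inclusive* upper bound: passing `SDW_FW_MAX_DEVICES` would hand out one ID too many, so the bound must be `SDW_FW_MAX_DEVICES - 1`. A toy allocator with the same inclusive-max contract (this is an illustrative bitmap sketch, not the kernel's IDA; -1 stands in for `-ENOSPC`):

```c
#include <assert.h>

#define TOY_IDA_CAPACITY 64

struct toy_ida {
	unsigned long long used;	/* bitmap of allocated IDs 0..63 */
};

/* Allocate the lowest free ID in the inclusive range [0, max]. */
static int toy_ida_alloc_max(struct toy_ida *ida, int max)
{
	for (int id = 0; id <= max && id < TOY_IDA_CAPACITY; id++) {
		if (!(ida->used & (1ULL << id))) {
			ida->used |= 1ULL << id;
			return id;
		}
	}
	return -1;	/* range exhausted */
}
```

With `max = N - 1` the allocator yields exactly N distinct IDs, which is why the off-by-one in the original call over-allocated by one device.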
···
 	__u8 cap_type;
 	int ret;

+	if (dev->quirks & USB_QUIRK_NO_BOS) {
+		dev_dbg(ddev, "skipping BOS descriptor\n");
+		return -ENOMSG;
+	}
+
 	bos = kzalloc(sizeof(*bos), GFP_KERNEL);
 	if (!bos)
 		return -ENOMEM;
drivers/usb/core/quirks.c | +3
···
 	{ USB_DEVICE(0x0c45, 0x7056), .driver_info =
 			USB_QUIRK_IGNORE_REMOTE_WAKEUP },

+	/* Elgato 4K X - BOS descriptor fetch hangs at SuperSpeed Plus */
+	{ USB_DEVICE(0x0fd9, 0x009b), .driver_info = USB_QUIRK_NO_BOS },
+
 	/* Sony Xperia XZ1 Compact (lilac) smartphone in fastboot mode */
 	{ USB_DEVICE(0x0fce, 0x0dde), .driver_info = USB_QUIRK_NO_LPM },
drivers/usb/dwc3/core.c | +2
···
 	reg = dwc3_readl(dwc->regs, DWC3_GSNPSID);
 	dwc->ip = DWC3_GSNPS_ID(reg);
+	if (dwc->ip == DWC4_IP)
+		dwc->ip = DWC32_IP;

 	/* This should read as U3 followed by revision number */
 	if (DWC3_IP_IS(DWC3)) {
···
 	return ret;
}

-static void dwc3_apple_phy_set_mode(struct dwc3_apple *appledwc, enum phy_mode mode)
-{
-	lockdep_assert_held(&appledwc->lock);
-
-	/*
-	 * This platform requires SUSPHY to be enabled here already in order to properly configure
-	 * the PHY and switch dwc3's PIPE interface to USB3 PHY.
-	 */
-	dwc3_enable_susphy(&appledwc->dwc, true);
-	phy_set_mode(appledwc->dwc.usb2_generic_phy[0], mode);
-	phy_set_mode(appledwc->dwc.usb3_generic_phy[0], mode);
-}
-
static int dwc3_apple_init(struct dwc3_apple *appledwc, enum dwc3_apple_state state)
{
 	int ret, ret_reset;

 	lockdep_assert_held(&appledwc->lock);
+
+	/*
+	 * The USB2 PHY on this platform must be configured for host or device mode while it is
+	 * still powered off and before dwc3 tries to access it. Otherwise, the new configuration
+	 * will sometimes only take effect after the *next* time dwc3 is brought up, which causes
+	 * the connected device to just not work.
+	 * The USB3 PHY must be configured later, after dwc3 has already been initialized.
+	 */
+	switch (state) {
+	case DWC3_APPLE_HOST:
+		phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_HOST);
+		break;
+	case DWC3_APPLE_DEVICE:
+		phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_DEVICE);
+		break;
+	default:
+		/* Unreachable unless there's a bug in this driver */
+		return -EINVAL;
+	}

 	ret = reset_control_deassert(appledwc->reset);
 	if (ret) {
···
 	case DWC3_APPLE_HOST:
 		appledwc->dwc.dr_mode = USB_DR_MODE_HOST;
 		dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_HOST);
-		dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_HOST);
+		/*
+		 * This platform requires SUSPHY to be enabled here already in order to properly
+		 * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY
+		 * has already been configured to the correct mode earlier.
+		 */
+		dwc3_enable_susphy(&appledwc->dwc, true);
+		phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_HOST);
 		ret = dwc3_host_init(&appledwc->dwc);
 		if (ret) {
 			dev_err(appledwc->dev, "Failed to initialize host, ret=%d\n", ret);
···
 	case DWC3_APPLE_DEVICE:
 		appledwc->dwc.dr_mode = USB_DR_MODE_PERIPHERAL;
 		dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_DEVICE);
-		dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_DEVICE);
+		/*
+		 * This platform requires SUSPHY to be enabled here already in order to properly
+		 * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY
+		 * has already been configured to the correct mode earlier.
+		 */
+		dwc3_enable_susphy(&appledwc->dwc, true);
+		phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_DEVICE);
 		ret = dwc3_gadget_init(&appledwc->dwc);
 		if (ret) {
 			dev_err(appledwc->dev, "Failed to initialize gadget, ret=%d\n", ret);
···
 	int ret;

 	guard(mutex)(&appledwc->lock);
+
+	/*
+	 * Skip role switches if appledwc is already in the desired state. The
+	 * USB-C port controller on M2 and M1/M2 Pro/Max/Ultra devices issues
+	 * additional interrupts which results in usb_role_switch_set_role()
+	 * calls with the current role.
+	 * Ignore those calls here to ensure the USB-C port controller and
+	 * appledwc are in a consistent state.
+	 * This matches the behaviour in __dwc3_set_mode().
+	 * Do not handle USB_ROLE_NONE for DWC3_APPLE_NO_CABLE and
+	 * DWC3_APPLE_PROBE_PENDING since that is a no-op anyway.
+	 */
+	if (appledwc->state == DWC3_APPLE_HOST && role == USB_ROLE_HOST)
+		return 0;
+	if (appledwc->state == DWC3_APPLE_DEVICE && role == USB_ROLE_DEVICE)
+		return 0;

 	/*
 	 * We need to tear all of dwc3 down and re-initialize it every time a cable is
···
 	unsigned int width;
 	unsigned int height;
 	unsigned int imagesize;
-	unsigned int interval;
+	unsigned int interval; /* in 100ns units */
 	struct mutex mutex;	/* protects frame parameters */

 	unsigned int uvc_num_requests;
···
 	/* Requests */
 	bool is_enabled; /* tracks whether video stream is enabled */
 	unsigned int req_size;
+	unsigned int max_req_size;
 	struct list_head ureqs; /* all uvc_requests allocated by uvc_video */

 	/* USB requests that the video pump thread can encode into */
drivers/usb/gadget/function/uvc_queue.c | +19 -4
···
 		buf->bytesused = 0;
 	} else {
 		buf->bytesused = vb2_get_plane_payload(vb, 0);
-		buf->req_payload_size =
-			DIV_ROUND_UP(buf->bytesused +
-				     (video->reqs_per_frame * UVCG_REQUEST_HEADER_LEN),
-				     video->reqs_per_frame);
+
+		if (video->reqs_per_frame != 0) {
+			buf->req_payload_size =
+				DIV_ROUND_UP(buf->bytesused +
+					     (video->reqs_per_frame * UVCG_REQUEST_HEADER_LEN),
+					     video->reqs_per_frame);
+			if (buf->req_payload_size > video->req_size)
+				buf->req_payload_size = video->req_size;
+		} else {
+			buf->req_payload_size = video->max_req_size;
+		}
 	}

 	return 0;
···
{
 	int ret;

+retry:
 	ret = vb2_reqbufs(&queue->queue, rb);
+	if (ret < 0 && queue->use_sg) {
+		uvc_trace(UVC_TRACE_IOCTL,
+			  "failed to alloc buffer with sg enabled, try non-sg mode\n");
+		queue->use_sg = 0;
+		queue->queue.mem_ops = &vb2_vmalloc_memops;
+		goto retry;
+	}

 	return ret ? ret : rb->count;
}
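The uvc_queue hunk above computes the per-USB-request payload with `DIV_ROUND_UP`, and now guards the zero-divisor case and clamps to the allocated request size. A user-space model of the corrected computation; `TOY_HEADER_LEN` and `toy_req_payload_size` are illustrative stand-ins, not the gadget's real constants:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define TOY_HEADER_LEN		2U	/* per-request header, illustrative */

/* Split a frame of bytesused payload bytes across reqs_per_frame
 * requests, each carrying a header; never divide by zero and never
 * exceed the allocated request size. */
static unsigned int toy_req_payload_size(unsigned int bytesused,
					 unsigned int reqs_per_frame,
					 unsigned int req_size,
					 unsigned int max_req_size)
{
	unsigned int payload;

	if (reqs_per_frame == 0)
		return max_req_size;	/* fall back to the largest request */

	payload = DIV_ROUND_UP(bytesused + reqs_per_frame * TOY_HEADER_LEN,
			       reqs_per_frame);
	if (payload > req_size)
		payload = req_size;	/* clamp to the allocated size */
	return payload;
}
```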
···
 	for (i = 0; i < tegra->soc->max_num_wakes; i++) {
 		struct irq_data *data;

-		tegra->wake_irqs[i] = platform_get_irq(pdev, i + WAKE_IRQ_START_INDEX);
+		tegra->wake_irqs[i] = platform_get_irq_optional(pdev, i + WAKE_IRQ_START_INDEX);
 		if (tegra->wake_irqs[i] < 0)
 			break;
drivers/usb/host/xhci.c | +12 -3
···
 			      gfp_t gfp_flags)
{
 	struct xhci_command *command;
+	struct xhci_ep_ctx *ep_ctx;
 	unsigned long flags;
-	int ret;
+	int ret = -ENODEV;

 	command = xhci_alloc_command(xhci, true, gfp_flags);
 	if (!command)
 		return -ENOMEM;

 	spin_lock_irqsave(&xhci->lock, flags);
-	ret = xhci_queue_stop_endpoint(xhci, command, ep->vdev->slot_id,
-				       ep->ep_index, suspend);
+
+	/* make sure endpoint exists and is running before stopping it */
+	if (ep->ring) {
+		ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
+		if (GET_EP_CTX_STATE(ep_ctx) == EP_STATE_RUNNING)
+			ret = xhci_queue_stop_endpoint(xhci, command,
+						       ep->vdev->slot_id,
+						       ep->ep_index, suspend);
+	}
+
 	if (ret < 0) {
 		spin_unlock_irqrestore(&xhci->lock, flags);
 		goto out;
+47 -30
drivers/usb/serial/f81232.c

@@ -70 +70 @@
 #define F81232_REGISTER_REQUEST		0xa0
 #define F81232_GET_REGISTER		0xc0
 #define F81232_SET_REGISTER		0x40
-#define F81534A_ACCESS_REG_RETRY	2

 #define SERIAL_BASE_ADDRESS		0x0120
 #define RECEIVE_BUFFER_REGISTER		(0x00 + SERIAL_BASE_ADDRESS)
@@ -823 +824 @@
 static int f81534a_ctrl_set_register(struct usb_interface *intf, u16 reg,
 				     u16 size, void *val)
 {
-	struct usb_device *dev = interface_to_usbdev(intf);
-	int retry = F81534A_ACCESS_REG_RETRY;
-	int status;
+	return usb_control_msg_send(interface_to_usbdev(intf),
+				    0,
+				    F81232_REGISTER_REQUEST,
+				    F81232_SET_REGISTER,
+				    reg,
+				    0,
+				    val,
+				    size,
+				    USB_CTRL_SET_TIMEOUT,
+				    GFP_KERNEL);
+}

-	while (retry--) {
-		status = usb_control_msg_send(dev,
-					      0,
-					      F81232_REGISTER_REQUEST,
-					      F81232_SET_REGISTER,
-					      reg,
-					      0,
-					      val,
-					      size,
-					      USB_CTRL_SET_TIMEOUT,
-					      GFP_KERNEL);
-		if (status) {
-			status = usb_translate_errors(status);
-			if (status == -EIO)
-				continue;
-		}
-
-		break;
-	}
-
-	if (status) {
-		dev_err(&intf->dev, "failed to set register 0x%x: %d\n",
-			reg, status);
-	}
-
-	return status;
+static int f81534a_ctrl_get_register(struct usb_interface *intf, u16 reg,
+				     u16 size, void *val)
+{
+	return usb_control_msg_recv(interface_to_usbdev(intf),
+				    0,
+				    F81232_REGISTER_REQUEST,
+				    F81232_GET_REGISTER,
+				    reg,
+				    0,
+				    val,
+				    size,
+				    USB_CTRL_GET_TIMEOUT,
+				    GFP_KERNEL);
 }

 static int f81534a_ctrl_enable_all_ports(struct usb_interface *intf, bool en)
@@ -863 +869 @@
 	 * bit 0~11	: Serial port enable bit.
 	 */
 	if (en) {
+		/*
+		 * The Fintek F81532A/534A/535/536 family relies on the
+		 * F81534A_CTRL_CMD_ENABLE_PORT (116h) register during
+		 * initialization to both determine serial port status and
+		 * control port creation.
+		 *
+		 * If the driver experiences fast load/unload cycles, the
+		 * device state may become unstable, resulting in the
+		 * incomplete generation of serial ports.
+		 *
+		 * Performing a dummy read operation on the register prior
+		 * to the initial write command resolves the issue.
+		 *
+		 * This clears the device's stale internal state. Subsequent
+		 * write operations will correctly generate all serial ports.
+		 */
+		status = f81534a_ctrl_get_register(intf,
+						   F81534A_CTRL_CMD_ENABLE_PORT,
+						   sizeof(enable),
+						   enable);
+		if (status)
+			return status;
+
 		enable[0] = 0xff;
 		enable[1] = 0x8f;
 	}
@@ -7890 +7890 @@
 	port->partner_desc.identity = &port->partner_ident;

 	port->role_sw = fwnode_usb_role_switch_get(tcpc->fwnode);
-	if (!port->role_sw)
+	if (IS_ERR_OR_NULL(port->role_sw))
 		port->role_sw = usb_role_switch_get(port->dev);
 	if (IS_ERR(port->role_sw)) {
 		err = PTR_ERR(port->role_sw);
+5 -1
fs/btrfs/Kconfig

@@ -115 +115 @@

 	  - extent tree v2 - complex rework of extent tracking

-	  - large folio support
+	  - large folio and block size (> page size) support
+
+	  - shutdown ioctl and auto-degradation support
+
+	  - asynchronous checksum generation for data writes

 	  If unsure, say N.
+9
fs/btrfs/inode.c

@@ -4180 +4180 @@

 	return 0;
 out:
+	/*
+	 * We may have a read locked leaf and iget_failed() triggers inode
+	 * eviction which needs to release the delayed inode and that needs
+	 * to lock the delayed inode's mutex. This can cause an ABBA deadlock
+	 * with a task running delayed items, as that requires first locking
+	 * the delayed inode's mutex and then modifying its subvolume btree.
+	 * So release the path before iget_failed().
+	 */
+	btrfs_release_path(path);
 	iget_failed(vfs_inode);
 	return ret;
 }
+13 -10
fs/btrfs/reflink.c

@@ -705 +705 @@
 	struct inode *src = file_inode(file_src);
 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
 	int ret;
-	int wb_ret;
 	u64 len = olen;
 	u64 bs = fs_info->sectorsize;
 	u64 end;
@@ -749 +750 @@
 	btrfs_lock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state);
 	ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);
 	btrfs_unlock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state);
+	if (ret < 0)
+		return ret;

 	/*
 	 * We may have copied an inline extent into a page of the destination
-	 * range, so wait for writeback to complete before truncating pages
+	 * range, so wait for writeback to complete before invalidating pages
 	 * from the page cache. This is a rare case.
 	 */
-	wb_ret = btrfs_wait_ordered_range(BTRFS_I(inode), destoff, len);
-	ret = ret ? ret : wb_ret;
+	ret = btrfs_wait_ordered_range(BTRFS_I(inode), destoff, len);
+	if (ret < 0)
+		return ret;
+
 	/*
-	 * Truncate page cache pages so that future reads will see the cloned
-	 * data immediately and not the previous data.
+	 * Invalidate page cache so that future reads will see the cloned data
+	 * immediately and not the previous data.
 	 */
-	truncate_inode_pages_range(&inode->i_data,
-				   round_down(destoff, PAGE_SIZE),
-				   round_up(destoff + len, PAGE_SIZE) - 1);
+	ret = filemap_invalidate_inode(inode, false, destoff, end);
+	if (ret < 0)
+		return ret;

 	btrfs_btree_balance_dirty(fs_info);

-	return ret;
+	return 0;
 }

 static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
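The `truncate_inode_pages_range()` call removed above computed an inclusive, page-aligned range with the kernel's power-of-two `round_down()`/`round_up()` helpers. A minimal standalone sketch of those macros (the 4096-byte page size is an assumption for the example):

```c
#include <assert.h>

#define EX_PAGE_SIZE 4096UL	/* assumed page size for this sketch */

/* Power-of-two rounding, in the spirit of the kernel's helpers.
 * Valid only when y is a power of two and x > 0 for round_up(). */
#define ex_round_down(x, y)	((x) & ~((y) - 1))
#define ex_round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)
```

With destoff = 5000 and len = 100, the old code truncated the inclusive range [ex_round_down(5000, EX_PAGE_SIZE), ex_round_up(5100, EX_PAGE_SIZE) - 1] = [4096, 8191], i.e. whole pages covering the cloned bytes.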
+2
fs/btrfs/send.c

@@ -6383 +6383 @@
 		extent_end = btrfs_file_extent_end(path);
 		if (extent_end <= start)
 			goto next;
+		if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE)
+			return 0;
 		if (btrfs_file_extent_disk_bytenr(leaf, fi) == 0) {
 			search_start = extent_end;
 			goto next;
@@ -2025 +2025 @@
 }

 /**
+ * nfs_wb_folio_reclaim - Write back all requests on one page
+ * @inode: pointer to inode
+ * @folio: pointer to folio
+ *
+ * Assumes that the folio has been locked by the caller
+ */
+int nfs_wb_folio_reclaim(struct inode *inode, struct folio *folio)
+{
+	loff_t range_start = folio_pos(folio);
+	size_t len = folio_size(folio);
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_ALL,
+		.nr_to_write = 0,
+		.range_start = range_start,
+		.range_end = range_start + len - 1,
+		.for_sync = 1,
+	};
+	int ret;
+
+	if (folio_test_writeback(folio))
+		return -EBUSY;
+	if (folio_clear_dirty_for_io(folio)) {
+		trace_nfs_writeback_folio_reclaim(inode, range_start, len);
+		ret = nfs_writepage_locked(folio, &wbc);
+		trace_nfs_writeback_folio_reclaim_done(inode, range_start, len,
+						       ret);
+		return ret;
+	}
+	nfs_commit_inode(inode, 0);
+	return 0;
+}
+
+/**
  * nfs_wb_folio - Write back all requests on one page
  * @inode: pointer to page
  * @folio: pointer to folio
+6 -5
fs/xfs/libxfs/xfs_ialloc.c

@@ -848 +848 @@
 		 * invalid inode records, such as records that start at agbno 0
 		 * or extend beyond the AG.
 		 *
-		 * Set min agbno to the first aligned, non-zero agbno and max to
-		 * the last aligned agbno that is at least one full chunk from
-		 * the end of the AG.
+		 * Set min agbno to the first chunk aligned, non-zero agbno and
+		 * max to one less than the last chunk aligned agbno from the
+		 * end of the AG. We subtract 1 from max so that the cluster
+		 * allocation alignment takes over and allows allocation within
+		 * the last full inode chunk in the AG.
 		 */
 		args.min_agbno = args.mp->m_sb.sb_inoalignmt;
 		args.max_agbno = round_down(xfs_ag_block_count(args.mp,
							       pag_agno(pag)),
-				args.mp->m_sb.sb_inoalignmt) -
-				igeo->ialloc_blks;
+				args.mp->m_sb.sb_inoalignmt) - 1;

 		error = xfs_alloc_vextent_near_bno(&args,
 				xfs_agbno_to_fsb(pag,
@@ -552 +552 @@
 			 void *buffer, size_t size);

 /**
+ * drm_dp_dpcd_readb() - read a single byte from the DPCD
+ * @aux: DisplayPort AUX channel
+ * @offset: address of the register to read
+ * @valuep: location where the value of the register will be stored
+ *
+ * Returns the number of bytes transferred (1) on success, or a negative
+ * error code on failure. In most of the cases you should be using
+ * drm_dp_dpcd_read_byte() instead.
+ */
+static inline ssize_t drm_dp_dpcd_readb(struct drm_dp_aux *aux,
+					unsigned int offset, u8 *valuep)
+{
+	return drm_dp_dpcd_read(aux, offset, valuep, 1);
+}
+
+/**
  * drm_dp_dpcd_read_data() - read a series of bytes from the DPCD
  * @aux: DisplayPort AUX channel (SST or MST)
  * @offset: address of the (first) register to read
@@ -586 +570 @@
 			void *buffer, size_t size)
 {
 	int ret;
+	size_t i;
+	u8 *buf = buffer;

 	ret = drm_dp_dpcd_read(aux, offset, buffer, size);
-	if (ret < 0)
-		return ret;
-	if (ret < size)
-		return -EPROTO;
+	if (ret >= 0) {
+		if (ret < size)
+			return -EPROTO;
+		return 0;
+	}
+
+	/*
+	 * Workaround for USB-C hubs/adapters with buggy firmware that fail
+	 * multi-byte AUX reads but work with single-byte reads.
+	 * Known affected devices:
+	 * - Lenovo USB-C to VGA adapter (VIA VL817, idVendor=17ef, idProduct=7217)
+	 * - Dell DA310 USB-C hub (idVendor=413c, idProduct=c010)
+	 * Attempt byte-by-byte reading as a fallback.
+	 */
+	for (i = 0; i < size; i++) {
+		ret = drm_dp_dpcd_readb(aux, offset + i, &buf[i]);
+		if (ret < 0)
+			return ret;
+	}

 	return 0;
 }
@@ -640 +607 @@
 		return -EPROTO;

 	return 0;
-}
-
-/**
- * drm_dp_dpcd_readb() - read a single byte from the DPCD
- * @aux: DisplayPort AUX channel
- * @offset: address of the register to read
- * @valuep: location where the value of the register will be stored
- *
- * Returns the number of bytes transferred (1) on success, or a negative
- * error code on failure. In most of the cases you should be using
- * drm_dp_dpcd_read_byte() instead.
- */
-static inline ssize_t drm_dp_dpcd_readb(struct drm_dp_aux *aux,
-					unsigned int offset, u8 *valuep)
-{
-	return drm_dp_dpcd_read(aux, offset, valuep, 1);
 }

 /**
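The byte-by-byte fallback added above can be modeled with a toy transport: a backend whose multi-byte reads fail (like the buggy hub firmware listed in the comment) but whose single-byte reads succeed. All names and errno values here are illustrative, not the DRM API; the point is the control flow of the fallback loop.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Fake register file standing in for the DPCD. */
static const unsigned char regs[8] = {
	0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88
};

/* Toy transport: emulate firmware that rejects multi-byte reads. */
static int bulk_read(unsigned int offset, void *buf, size_t size)
{
	if (size > 1)
		return -5;	/* stand-in for -EIO */
	memcpy(buf, &regs[offset], 1);
	return 1;		/* bytes transferred */
}

/* Sketch of the fallback logic: try the bulk read first; on failure,
 * re-read the range one byte at a time. */
static int read_data(unsigned int offset, void *buffer, size_t size)
{
	unsigned char *buf = buffer;
	size_t i;
	int ret;

	ret = bulk_read(offset, buffer, size);
	if (ret >= 0)
		return (size_t)ret < size ? -71 /* stand-in for -EPROTO */ : 0;

	for (i = 0; i < size; i++) {
		ret = bulk_read(offset + i, &buf[i], 1);
		if (ret < 0)
			return ret;
	}
	return 0;
}
```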
@@ -626 +626 @@
 #endif

 	/* All ancestors including self */
-	struct cgroup *ancestors[];
+	union {
+		DECLARE_FLEX_ARRAY(struct cgroup *, ancestors);
+		struct {
+			struct cgroup *_root_ancestor;
+			DECLARE_FLEX_ARRAY(struct cgroup *, _low_ancestors);
+		};
+	};
 };

 /*
@@ -653 +647 @@
 	struct list_head root_list;
 	struct rcu_head rcu;	/* Must be near the top */

-	/*
-	 * The root cgroup. The containing cgroup_root will be destroyed on its
-	 * release. cgrp->ancestors[0] will be used overflowing into the
-	 * following field. cgrp_ancestor_storage must immediately follow.
-	 */
-	struct cgroup cgrp;
-
-	/* must follow cgrp for cgrp->ancestors[0], see above */
-	struct cgroup *cgrp_ancestor_storage;
-
 	/* Number of cgroups in the hierarchy, used only for /proc/cgroups */
 	atomic_t nr_cgrps;
@@ -664 +668 @@

 	/* The name for this hierarchy - may be empty */
 	char name[MAX_CGROUP_ROOT_NAMELEN];
+
+	/*
+	 * The root cgroup. The containing cgroup_root will be destroyed on its
+	 * release. This must be embedded last due to flexible array at the end
+	 * of struct cgroup.
+	 */
+	struct cgroup cgrp;
 };

 /*
+1 -1
include/linux/energy_model.h

@@ -18 +18 @@
 * @power:	The power consumed at this level (by 1 CPU or by a registered
 *		device). It can be a total power: static and dynamic.
 * @cost:	The cost coefficient associated with this level, used during
- *		energy calculation. Equal to: power * max_frequency / frequency
+ *		energy calculation. Equal to: 10 * power * max_frequency / frequency
 * @flags:	see "em_perf_state flags" description below.
 */
struct em_perf_state {
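The updated @cost formula is plain integer arithmetic and easy to check in isolation. This helper is a sketch of the formula in the comment above, not the kernel's actual cost computation; the sample power/frequency values are made up.

```c
#include <assert.h>

/* Cost coefficient per the @cost comment above:
 * cost = 10 * power * max_frequency / frequency (integer division).
 * Lower frequencies yield proportionally higher cost, which is why
 * running slower is not automatically cheaper per unit of work. */
static unsigned long em_state_cost(unsigned long power,
				   unsigned long max_freq,
				   unsigned long freq)
{
	return 10UL * power * max_freq / freq;
}
```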
+1
include/linux/kfence.h

@@ -211 +211 @@
 * __kfence_obj_info() - fill kmem_obj_info struct
 * @kpp: kmem_obj_info to be filled
 * @object: the object
+ * @slab: the slab
 *
 * Return:
 * * false - not a KFENCE object
+1
include/linux/nfs_fs.h

@@ -637 +637 @@
extern int nfs_sync_inode(struct inode *inode);
extern int nfs_wb_all(struct inode *inode);
extern int nfs_wb_folio(struct inode *inode, struct folio *folio);
+extern int nfs_wb_folio_reclaim(struct inode *inode, struct folio *folio);
int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio);
extern int nfs_commit_inode(struct inode *, int);
extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+1
include/linux/nmi.h

@@ -83 +83 @@
#if defined(CONFIG_HARDLOCKUP_DETECTOR)
extern void hardlockup_detector_disable(void);
extern unsigned int hardlockup_panic;
+extern unsigned long hardlockup_si_mask;
#else
static inline void hardlockup_detector_disable(void) {}
#endif
@@ -1874 +1874 @@
extern int can_nice(const struct task_struct *p, const int nice);
extern int task_curr(const struct task_struct *p);
extern int idle_cpu(int cpu);
-extern int available_idle_cpu(int cpu);
extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *);
extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *);
extern void sched_set_fifo(struct task_struct *p);
+1
include/linux/sched/mm.h

@@ -325 +325 @@

/**
 * memalloc_flags_save - Add a PF_* flag to current->flags, save old value
+ * @flags: Flags to add.
 *
 * This allows PF_* flags to be conveniently added, irrespective of current
 * value, and then the old version restored with memalloc_flags_restore().
+2 -2
include/linux/soc/airoha/airoha_offload.h

@@ -52 +52 @@
{
}

-static inline int airoha_ppe_setup_tc_block_cb(struct airoha_ppe_dev *dev,
-					       void *type_data)
+static inline int airoha_ppe_dev_setup_tc_block_cb(struct airoha_ppe_dev *dev,
+						   void *type_data)
{
	return -EOPNOTSUPP;
}
+1
include/linux/textsearch.h

@@ -35 +35 @@
 * @get_pattern: return head of pattern
 * @get_pattern_len: return length of pattern
 * @owner: module reference to algorithm
+ * @list: list to search
 */
struct ts_ops
{
@@ -67 +67 @@
	FN(TC_EGRESS)			\
	FN(SECURITY_HOOK)		\
	FN(QDISC_DROP)			\
+	FN(QDISC_BURST_DROP)		\
	FN(QDISC_OVERLIMIT)		\
	FN(QDISC_CONGESTED)		\
	FN(CAKE_FLOOD)			\
@@ -375 +374 @@
	 * failed to enqueue to current qdisc)
	 */
	SKB_DROP_REASON_QDISC_DROP,
+	/**
+	 * @SKB_DROP_REASON_QDISC_BURST_DROP: dropped when net.core.qdisc_max_burst
+	 * limit is hit.
+	 */
+	SKB_DROP_REASON_QDISC_BURST_DROP,
	/**
	 * @SKB_DROP_REASON_QDISC_OVERLIMIT: dropped by qdisc when a qdisc
	 * instance exceeds its total buffer size limit.
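The drop-reason list above is an X-macro: a single `FN(...)` list is expanded once into the enum and again (elsewhere) into name strings, so adding `QDISC_BURST_DROP` in one place updates both. A minimal sketch of the pattern with made-up names (this is not the kernel's actual macro set):

```c
#include <assert.h>
#include <string.h>

/* One list, expanded twice: once for the enum, once for the names. */
#define DROP_REASONS(FN)	\
	FN(QDISC_DROP)		\
	FN(QDISC_BURST_DROP)	\
	FN(QDISC_OVERLIMIT)

#define ENUM_ENTRY(name) DROP_##name,
enum drop_reason { DROP_REASONS(ENUM_ENTRY) DROP_MAX };
#undef ENUM_ENTRY

#define NAME_ENTRY(name) #name,
static const char *drop_names[] = { DROP_REASONS(NAME_ENTRY) };
#undef NAME_ENTRY
```

The enum values and the string table stay in sync by construction, which is exactly why the real list only needed a one-line `FN(QDISC_BURST_DROP)` addition.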
+1
include/net/hotdata.h

@@ -42 +42 @@
	int			netdev_budget_usecs;
	int			tstamp_prequeue;
	int			max_backlog;
+	int			qdisc_max_burst;
	int			dev_tx_weight;
	int			dev_rx_weight;
	int			sysctl_max_skb_frags;
@@ -1402 +1402 @@
#define snd_pcm_lib_mmap_iomem	NULL
#endif

-void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime);
+int snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime);

/**
 * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer
+82
include/uapi/linux/dev_energymodel.h

+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* Do not edit directly, auto-generated from: */
+/*	Documentation/netlink/specs/dev-energymodel.yaml */
+/* YNL-GEN uapi header */
+/* To regenerate run: tools/net/ynl/ynl-regen.sh */
+
+#ifndef _UAPI_LINUX_DEV_ENERGYMODEL_H
+#define _UAPI_LINUX_DEV_ENERGYMODEL_H
+
+#define DEV_ENERGYMODEL_FAMILY_NAME	"dev-energymodel"
+#define DEV_ENERGYMODEL_FAMILY_VERSION	1
+
+/**
+ * enum dev_energymodel_perf_state_flags
+ * @DEV_ENERGYMODEL_PERF_STATE_FLAGS_PERF_STATE_INEFFICIENT: The performance
+ *   state is inefficient. There is in this perf-domain, another performance
+ *   state with a higher frequency but a lower or equal power cost.
+ */
+enum dev_energymodel_perf_state_flags {
+	DEV_ENERGYMODEL_PERF_STATE_FLAGS_PERF_STATE_INEFFICIENT = 1,
+};
+
+/**
+ * enum dev_energymodel_perf_domain_flags
+ * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_MICROWATTS: The power values
+ *   are in micro-Watts or some other scale.
+ * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_SKIP_INEFFICIENCIES: Skip
+ *   inefficient states when estimating energy consumption.
+ * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_ARTIFICIAL: The power values
+ *   are artificial and might be created by platform missing real power
+ *   information.
+ */
+enum dev_energymodel_perf_domain_flags {
+	DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_MICROWATTS = 1,
+	DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_SKIP_INEFFICIENCIES = 2,
+	DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_ARTIFICIAL = 4,
+};
+
+enum {
+	DEV_ENERGYMODEL_A_PERF_DOMAIN_PAD = 1,
+	DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID,
+	DEV_ENERGYMODEL_A_PERF_DOMAIN_FLAGS,
+	DEV_ENERGYMODEL_A_PERF_DOMAIN_CPUS,
+
+	__DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX,
+	DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX = (__DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX - 1)
+};
+
+enum {
+	DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID = 1,
+	DEV_ENERGYMODEL_A_PERF_TABLE_PERF_STATE,
+
+	__DEV_ENERGYMODEL_A_PERF_TABLE_MAX,
+	DEV_ENERGYMODEL_A_PERF_TABLE_MAX = (__DEV_ENERGYMODEL_A_PERF_TABLE_MAX - 1)
+};
+
+enum {
+	DEV_ENERGYMODEL_A_PERF_STATE_PAD = 1,
+	DEV_ENERGYMODEL_A_PERF_STATE_PERFORMANCE,
+	DEV_ENERGYMODEL_A_PERF_STATE_FREQUENCY,
+	DEV_ENERGYMODEL_A_PERF_STATE_POWER,
+	DEV_ENERGYMODEL_A_PERF_STATE_COST,
+	DEV_ENERGYMODEL_A_PERF_STATE_FLAGS,
+
+	__DEV_ENERGYMODEL_A_PERF_STATE_MAX,
+	DEV_ENERGYMODEL_A_PERF_STATE_MAX = (__DEV_ENERGYMODEL_A_PERF_STATE_MAX - 1)
+};
+
+enum {
+	DEV_ENERGYMODEL_CMD_GET_PERF_DOMAINS = 1,
+	DEV_ENERGYMODEL_CMD_GET_PERF_TABLE,
+	DEV_ENERGYMODEL_CMD_PERF_DOMAIN_CREATED,
+	DEV_ENERGYMODEL_CMD_PERF_DOMAIN_UPDATED,
+	DEV_ENERGYMODEL_CMD_PERF_DOMAIN_DELETED,
+
+	__DEV_ENERGYMODEL_CMD_MAX,
+	DEV_ENERGYMODEL_CMD_MAX = (__DEV_ENERGYMODEL_CMD_MAX - 1)
+};
+
+#define DEV_ENERGYMODEL_MCGRP_EVENT	"event"
+
+#endif /* _UAPI_LINUX_DEV_ENERGYMODEL_H */
@@ -216 +216 @@
 * :manpage:`ftruncate(2)`, :manpage:`creat(2)`, or :manpage:`open(2)` with
 * ``O_TRUNC``. This access right is available since the third version of the
 * Landlock ABI.
+ * - %LANDLOCK_ACCESS_FS_IOCTL_DEV: Invoke :manpage:`ioctl(2)` commands on an opened
+ *   character or block device.
+ *
+ *   This access right applies to all `ioctl(2)` commands implemented by device
+ *   drivers. However, the following common IOCTL commands continue to be
+ *   invokable independent of the %LANDLOCK_ACCESS_FS_IOCTL_DEV right:
+ *
+ *   * IOCTL commands targeting file descriptors (``FIOCLEX``, ``FIONCLEX``),
+ *   * IOCTL commands targeting file descriptions (``FIONBIO``, ``FIOASYNC``),
+ *   * IOCTL commands targeting file systems (``FIFREEZE``, ``FITHAW``,
+ *     ``FIGETBSZ``, ``FS_IOC_GETFSUUID``, ``FS_IOC_GETFSSYSFSPATH``)
+ *   * Some IOCTL commands which do not make sense when used with devices, but
+ *     whose implementations are safe and return the right error codes
+ *     (``FS_IOC_FIEMAP``, ``FICLONE``, ``FICLONERANGE``, ``FIDEDUPERANGE``)
+ *
+ *   This access right is available since the fifth version of the Landlock
+ *   ABI.
 *
 * Whether an opened file can be truncated with :manpage:`ftruncate(2)` or used
 * with `ioctl(2)` is determined during :manpage:`open(2)`, in the same way as
@@ -291 +274 @@
 *
 * If multiple requirements are not met, the ``EACCES`` error code takes
 * precedence over ``EXDEV``.
- *
- * The following access right applies both to files and directories:
- *
- * - %LANDLOCK_ACCESS_FS_IOCTL_DEV: Invoke :manpage:`ioctl(2)` commands on an opened
- *   character or block device.
- *
- *   This access right applies to all `ioctl(2)` commands implemented by device
- *   drivers. However, the following common IOCTL commands continue to be
- *   invokable independent of the %LANDLOCK_ACCESS_FS_IOCTL_DEV right:
- *
- *   * IOCTL commands targeting file descriptors (``FIOCLEX``, ``FIONCLEX``),
- *   * IOCTL commands targeting file descriptions (``FIONBIO``, ``FIOASYNC``),
- *   * IOCTL commands targeting file systems (``FIFREEZE``, ``FITHAW``,
- *     ``FIGETBSZ``, ``FS_IOC_GETFSUUID``, ``FS_IOC_GETFSSYSFSPATH``)
- *   * Some IOCTL commands which do not make sense when used with devices, but
- *     whose implementations are safe and return the right error codes
- *     (``FS_IOC_FIEMAP``, ``FICLONE``, ``FICLONERANGE``, ``FIDEDUPERANGE``)
- *
- *   This access right is available since the fifth version of the Landlock
- *   ABI.
 *
 * .. warning::
 *
-9
include/uapi/linux/media/arm/mali-c55-config.h

@@ -195 +195 @@
} __attribute__((packed));

/**
- * enum mali_c55_param_buffer_version - Mali-C55 parameters block versioning
- *
- * @MALI_C55_PARAM_BUFFER_V1: First version of Mali-C55 parameters block
- */
-enum mali_c55_param_buffer_version {
-	MALI_C55_PARAM_BUFFER_V1,
-};
-
-/**
 * enum mali_c55_param_block_type - Enumeration of Mali-C55 parameter blocks
 *
 * This enumeration defines the types of Mali-C55 parameters block. Each block
+4 -4
io_uring/io_uring.c

@@ -3003 +3003 @@
		mutex_unlock(&ctx->uring_lock);
	}

-	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
-		io_move_task_work_from_local(ctx);
-
	/* The SQPOLL thread never reaches this path */
-	while (io_uring_try_cancel_requests(ctx, NULL, true, false))
+	do {
+		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+			io_move_task_work_from_local(ctx);
		cond_resched();
+	} while (io_uring_try_cancel_requests(ctx, NULL, true, false));

	if (ctx->sq_data) {
		struct io_sq_data *sqd = ctx->sq_data;
+5
kernel/bpf/verifier.c

@@ -9609 +9609 @@
	if (reg->type != PTR_TO_MAP_VALUE)
		return -EINVAL;

+	if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {
+		verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);
+		return -EACCES;
+	}
+
	if (!bpf_map_is_rdonly(map)) {
		verbose(env, "R%d does not point to a readonly map'\n", regno);
		return -EACCES;
+2 -5
kernel/cgroup/cgroup.c

@@ -1 +1 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  *  Generic process-grouping system.
  *
@@ -21 +20 @@
  *  2003-10-22 Updates by Stephen Hemminger.
  *  2004 May-July Rework by Paul Jackson.
  *  ---------------------------------------------------
- *
- *  This file is subject to the terms and conditions of the GNU General Public
- *  License.  See the file COPYING in the main directory of the Linux
- *  distribution for more details.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -5844 +5847 @@
	int ret;

	/* allocate the cgroup and its ID, 0 is reserved for the root */
-	cgrp = kzalloc(struct_size(cgrp, ancestors, (level + 1)), GFP_KERNEL);
+	cgrp = kzalloc(struct_size(cgrp, _low_ancestors, level), GFP_KERNEL);
	if (!cgrp)
		return ERR_PTR(-ENOMEM);
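The `kzalloc()` call above sizes the cgroup together with its trailing flexible ancestors array via `struct_size()`. A simplified userspace sketch of that macro (the real kernel version also saturates on arithmetic overflow, which this toy omits; the struct here is made up):

```c
#include <assert.h>
#include <stddef.h>

/* Toy struct with a flexible array member, in the spirit of
 * struct cgroup's ancestors[] array. */
struct demo_cgroup {
	int level;
	struct demo_cgroup *ancestors[];
};

/* Simplified struct_size(): base struct plus n trailing elements.
 * p is only used inside sizeof(), so it is never dereferenced. */
#define struct_size_sketch(p, member, n) \
	(sizeof(*(p)) + sizeof((p)->member[0]) * (size_t)(n))
```

Using the member count rather than hand-written byte math keeps the allocation size in sync with the struct definition, which is what lets the patch switch from `ancestors, level + 1` to `_low_ancestors, level` with a one-line change.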
+1 -4
kernel/cgroup/cpuset.c

@@ -1 +1 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  *  kernel/cpuset.c
  *
@@ -17 +16 @@
  *  2006 Rework by Paul Menage to use generic cgroups
  *  2008 Rework of the scheduler domains and CPU hotplug handling
  *       by Max Krasnyansky
- *
- *  This file is subject to the terms and conditions of the GNU General Public
- *  License.  See the file COPYING in the main directory of the Linux
- *  distribution for more details.
 */

#include "cpuset-internal.h"
+1 -8
kernel/cgroup/legacy_freezer.c

@@ -1 +1 @@
+// SPDX-License-Identifier: LGPL-2.1
 /*
  * cgroup_freezer.c -  control group freezer subsystem
  *
  * Copyright IBM Corporation, 2007
  *
  * Author : Cedric Le Goater <clg@fr.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of version 2.1 of the GNU Lesser General Public License
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it would be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 */

#include <linux/export.h>
+19 -18
kernel/liveupdate/kexec_handover.c

@@ -460 +460 @@
	}
}

-/* Return true if memory was deserizlied */
-static bool __init kho_mem_deserialize(const void *fdt)
+/* Returns physical address of the preserved memory map from FDT */
+static phys_addr_t __init kho_get_mem_map_phys(const void *fdt)
{
-	struct khoser_mem_chunk *chunk;
	const void *mem_ptr;
-	u64 mem;
	int len;

	mem_ptr = fdt_getprop(fdt, 0, PROP_PRESERVED_MEMORY_MAP, &len);
	if (!mem_ptr || len != sizeof(u64)) {
		pr_err("failed to get preserved memory bitmaps\n");
-		return false;
+		return 0;
	}

-	mem = get_unaligned((const u64 *)mem_ptr);
-	chunk = mem ? phys_to_virt(mem) : NULL;
+	return get_unaligned((const u64 *)mem_ptr);
+}

-	/* No preserved physical pages were passed, no deserialization */
-	if (!chunk)
-		return false;
-
+static void __init kho_mem_deserialize(struct khoser_mem_chunk *chunk)
+{
	while (chunk) {
		unsigned int i;
@@ -485 +489 @@
					  &chunk->bitmaps[i]);
		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
	}
-
-	return true;
}

/*
@@ -1247 +1253 @@
struct kho_in {
	phys_addr_t fdt_phys;
	phys_addr_t scratch_phys;
+	phys_addr_t mem_map_phys;
	struct kho_debugfs dbg;
};
@@ -1429 +1434 @@
void __init kho_memory_init(void)
{
-	if (kho_in.scratch_phys) {
+	if (kho_in.mem_map_phys) {
		kho_scratch = phys_to_virt(kho_in.scratch_phys);
		kho_release_scratch();
-
-		if (!kho_mem_deserialize(kho_get_fdt()))
-			kho_in.fdt_phys = 0;
+		kho_mem_deserialize(phys_to_virt(kho_in.mem_map_phys));
	} else {
		kho_reserve_scratch();
	}
@@ -1441 +1448 @@
void __init kho_populate(phys_addr_t fdt_phys, u64 fdt_len,
			 phys_addr_t scratch_phys, u64 scratch_len)
{
-	void *fdt = NULL;
	struct kho_scratch *scratch = NULL;
+	phys_addr_t mem_map_phys;
+	void *fdt = NULL;
	int err = 0;
	unsigned int scratch_cnt = scratch_len / sizeof(*kho_scratch);
@@ -1466 +1472 @@
		pr_warn("setup: handover FDT (0x%llx) is incompatible with '%s': %d\n",
			fdt_phys, KHO_FDT_COMPATIBLE, err);
		err = -EINVAL;
+		goto out;
+	}
+
+	mem_map_phys = kho_get_mem_map_phys(fdt);
+	if (!mem_map_phys) {
+		err = -ENOENT;
		goto out;
	}
@@ -1515 +1515 @@

	kho_in.fdt_phys = fdt_phys;
	kho_in.scratch_phys = scratch_phys;
+	kho_in.mem_map_phys = mem_map_phys;
	kho_scratch_cnt = scratch_cnt;
	pr_info("found kexec handover data.\n");
@@ -449 +449 @@
	INIT_LIST_HEAD(&pd->node);

	id = ida_alloc(&em_pd_ida, GFP_KERNEL);
-	if (id < 0)
-		return -ENOMEM;
+	if (id < 0) {
+		kfree(pd);
+		return id;
+	}
	pd->id = id;

	em_table = em_table_alloc(pd);
+18 -20
kernel/printk/nbcon.c

@@ -1557 +1557 @@
	ctxt->allow_unsafe_takeover = nbcon_allow_unsafe_takeover();

	while (nbcon_seq_read(con) < stop_seq) {
-		if (!nbcon_context_try_acquire(ctxt, false))
-			return -EPERM;
-
		/*
-		 * nbcon_emit_next_record() returns false when the console was
-		 * handed over or taken over. In both cases the context is no
-		 * longer valid.
+		 * Atomic flushing does not use console driver synchronization
+		 * (i.e. it does not hold the port lock for uart consoles).
+		 * Therefore IRQs must be disabled to avoid being interrupted
+		 * and then calling into a driver that will deadlock trying
+		 * to acquire console ownership.
		 */
-		if (!nbcon_emit_next_record(&wctxt, true))
-			return -EAGAIN;
+		scoped_guard(irqsave) {
+			if (!nbcon_context_try_acquire(ctxt, false))
+				return -EPERM;

-		nbcon_context_release(ctxt);
+			/*
+			 * nbcon_emit_next_record() returns false when
+			 * the console was handed over or taken over.
+			 * In both cases the context is no longer valid.
+			 */
+			if (!nbcon_emit_next_record(&wctxt, true))
+				return -EAGAIN;
+
+			nbcon_context_release(ctxt);
+		}

		if (!ctxt->backlog) {
			/* Are there reserved but not yet finalized records? */
@@ -1604 +1595 @@
static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
{
	struct console_flush_type ft;
-	unsigned long flags;
	int err;

again:
-	/*
-	 * Atomic flushing does not use console driver synchronization (i.e.
-	 * it does not hold the port lock for uart consoles). Therefore IRQs
-	 * must be disabled to avoid being interrupted and then calling into
-	 * a driver that will deadlock trying to acquire console ownership.
-	 */
-	local_irq_save(flags);
-
	err = __nbcon_atomic_flush_pending_con(con, stop_seq);
-
-	local_irq_restore(flags);

	/*
	 * If there was a new owner (-EPERM, -EAGAIN), that context is
@@ -1364 +1364 @@
#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
#define raw_rq()		raw_cpu_ptr(&runqueues)

+static inline bool idle_rq(struct rq *rq)
+{
+	return rq->curr == rq->idle && !rq->nr_running && !rq->ttwu_pending;
+}
+
+/**
+ * available_idle_cpu - is a given CPU idle for enqueuing work.
+ * @cpu: the CPU in question.
+ *
+ * Return: 1 if the CPU is currently idle. 0 otherwise.
+ */
+static inline bool available_idle_cpu(int cpu)
+{
+	if (!idle_rq(cpu_rq(cpu)))
+		return 0;
+
+	if (vcpu_is_preempted(cpu))
+		return 0;
+
+	return 1;
+}
+
#ifdef CONFIG_SCHED_PROXY_EXEC
static inline void rq_set_donor(struct rq *rq, struct task_struct *t)
{
@@ -2388 +2366 @@
 *        should preserve as much state as possible.
 *
 * MOVE - paired with SAVE/RESTORE, explicitly does not preserve the location
- *        in the runqueue.
+ *        in the runqueue. IOW the priority is allowed to change. Callers
+ *        must expect to deal with balance callbacks.
 *
 * NOCLOCK - skip the update_rq_clock() (avoids double updates)
 *
@@ -3970 +3947 @@
extern bool dequeue_task(struct rq *rq, struct task_struct *p, int flags);

extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
+
+extern void __balance_callbacks(struct rq *rq, struct rq_flags *rf);
extern void balance_callbacks(struct rq *rq, struct balance_callback *head);

/*
+2-30
kernel/sched/syscalls.c
  */
 int idle_cpu(int cpu)
 {
-        struct rq *rq = cpu_rq(cpu);
-
-        if (rq->curr != rq->idle)
-                return 0;
-
-        if (rq->nr_running)
-                return 0;
-
-        if (rq->ttwu_pending)
-                return 0;
-
-        return 1;
-}
-
-/**
- * available_idle_cpu - is a given CPU idle for enqueuing work.
- * @cpu: the CPU in question.
- *
- * Return: 1 if the CPU is currently idle. 0 otherwise.
- */
-int available_idle_cpu(int cpu)
-{
-        if (!idle_cpu(cpu))
-                return 0;
-
-        if (vcpu_is_preempted(cpu))
-                return 0;
-
-        return 1;
+        return idle_rq(cpu_rq(cpu));
 }
 
 /**
···
          * itself.
          */
         newprio = rt_effective_prio(p, newprio);
-        if (newprio == oldprio)
+        if (newprio == oldprio && !dl_prio(newprio))
                 queue_flags &= ~DEQUEUE_MOVE;
         }
 
+1-1
kernel/time/hrtimer.c
                 return true;
 
         /* Extra check for softirq clock bases */
-        if (base->clockid < HRTIMER_BASE_MONOTONIC_SOFT)
+        if (base->index < HRTIMER_BASE_MONOTONIC_SOFT)
                 continue;
         if (cpu_base->softirq_activated)
                 continue;
+15-14
kernel/trace/ftrace.c
 };
 
 #define ENTRY_SIZE sizeof(struct dyn_ftrace)
-#define ENTRIES_PER_PAGE (PAGE_SIZE / ENTRY_SIZE)
 
 static struct ftrace_page *ftrace_pages_start;
 static struct ftrace_page *ftrace_pages;
···
         return 0;
 }
 
-static int ftrace_allocate_records(struct ftrace_page *pg, int count)
+static int ftrace_allocate_records(struct ftrace_page *pg, int count,
+                                   unsigned long *num_pages)
 {
         int order;
         int pages;
···
                 return -EINVAL;
 
         /* We want to fill as much as possible, with no empty pages */
-        pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE);
+        pages = DIV_ROUND_UP(count * ENTRY_SIZE, PAGE_SIZE);
         order = fls(pages) - 1;
 
  again:
···
         }
 
         ftrace_number_of_pages += 1 << order;
+        *num_pages += 1 << order;
         ftrace_number_of_groups++;
 
         cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
···
 }
 
 static struct ftrace_page *
-ftrace_allocate_pages(unsigned long num_to_init)
+ftrace_allocate_pages(unsigned long num_to_init, unsigned long *num_pages)
 {
         struct ftrace_page *start_pg;
         struct ftrace_page *pg;
         int cnt;
+
+        *num_pages = 0;
 
         if (!num_to_init)
                 return NULL;
···
          * waste as little space as possible.
          */
         for (;;) {
-                cnt = ftrace_allocate_records(pg, num_to_init);
+                cnt = ftrace_allocate_records(pg, num_to_init, num_pages);
                 if (cnt < 0)
                         goto free_pages;
 
···
         if (!count)
                 return 0;
 
-        pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE);
-
         /*
          * Sorting mcount in vmlinux at build time depend on
          * CONFIG_BUILDTIME_MCOUNT_SORT, while mcount loc in
···
                 test_is_sorted(start, count);
         }
 
-        start_pg = ftrace_allocate_pages(count);
+        start_pg = ftrace_allocate_pages(count, &pages);
         if (!start_pg)
                 return -ENOMEM;
 
···
         /* We should have used all pages unless we skipped some */
         if (pg_unuse) {
                 unsigned long pg_remaining, remaining = 0;
-                unsigned long skip;
+                long skip;
 
                 /* Count the number of entries unused and compare it to skipped. */
-                pg_remaining = (ENTRIES_PER_PAGE << pg->order) - pg->index;
+                pg_remaining = (PAGE_SIZE << pg->order) / ENTRY_SIZE - pg->index;
 
                 if (!WARN(skipped < pg_remaining, "Extra allocated pages for ftrace")) {
 
                         skip = skipped - pg_remaining;
 
-                        for (pg = pg_unuse; pg; pg = pg->next)
+                        for (pg = pg_unuse; pg && skip > 0; pg = pg->next) {
                                 remaining += 1 << pg->order;
+                                skip -= (PAGE_SIZE << pg->order) / ENTRY_SIZE;
+                        }
 
                         pages -= remaining;
-
-                        skip = DIV_ROUND_UP(skip, ENTRIES_PER_PAGE);
 
                         /*
                          * Check to see if the number of pages remaining would
                          * just fit the number of entries skipped.
                          */
-                        WARN(skip != remaining, "Extra allocated pages for ftrace: %lu with %lu skipped",
+                        WARN(pg || skip > 0, "Extra allocated pages for ftrace: %lu with %lu skipped",
                              remaining, skipped);
                 }
                 /* Need to synchronize with ftrace_location_range() */
+1-1
kernel/watchdog.c
  * hard lockup is detected, it could be task, memory, lock etc.
  * Refer include/linux/sys_info.h for detailed bit definition.
  */
-static unsigned long hardlockup_si_mask;
+unsigned long hardlockup_si_mask;
 
 #ifdef CONFIG_SYSFS
 
+20-12
lib/buildid.c
 #include <linux/elf.h>
 #include <linux/kernel.h>
 #include <linux/pagemap.h>
+#include <linux/fs.h>
 #include <linux/secretmem.h>
 
 #define BUILD_ID 3
···
         freader_put_folio(r);
 
-        /* reject secretmem folios created with memfd_secret() */
-        if (secretmem_mapping(r->file->f_mapping))
-                return -EFAULT;
-
+        /* only use page cache lookup - fail if not already cached */
         r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT);
-
-        /* if sleeping is allowed, wait for the page, if necessary */
-        if (r->may_fault && (IS_ERR(r->folio) || !folio_test_uptodate(r->folio))) {
-                filemap_invalidate_lock_shared(r->file->f_mapping);
-                r->folio = read_cache_folio(r->file->f_mapping, file_off >> PAGE_SHIFT,
-                                            NULL, r->file);
-                filemap_invalidate_unlock_shared(r->file->f_mapping);
-        }
 
         if (IS_ERR(r->folio) || !folio_test_uptodate(r->folio)) {
                 if (!IS_ERR(r->folio))
···
                         return NULL;
                 }
                 return r->data + file_off;
+        }
+
+        /* reject secretmem folios created with memfd_secret() */
+        if (secretmem_mapping(r->file->f_mapping)) {
+                r->err = -EFAULT;
+                return NULL;
+        }
+
+        /* use __kernel_read() for sleepable context */
+        if (r->may_fault) {
+                ssize_t ret;
+
+                ret = __kernel_read(r->file, r->buf, sz, &file_off);
+                if (ret != sz) {
+                        r->err = (ret < 0) ? ret : -EIO;
+                        return NULL;
+                }
+                return r->buf;
         }
 
         /* fetch or reuse folio for given file offset */
+7-3
mm/Kconfig
           Device memory hotplug support allows for establishing pmem,
           or other device driver discovered memory regions, in the
           memmap. This allows pfn_to_page() lookups of otherwise
-          "device-physical" addresses which is needed for using a DAX
-          mapping in an O_DIRECT operation, among other things.
+          "device-physical" addresses which is needed for DAX, PCI_P2PDMA, and
+          DEVICE_PRIVATE features among others.
 
-          If FS_DAX is enabled, then say Y.
+          Enabling this option will reduce the entropy of x86 KASLR memory
+          regions. For example - on a 46 bit system, the entropy goes down
+          from 16 bits to 15 bits. The actual reduction in entropy depends
+          on the physical address bits, on processor features, kernel config
+          (5 level page table) and physical memory present on the system.
 
 #
 # Helpers to mirror range of the CPU page tables of a process into device page
+37-4
mm/damon/core.c
         return running;
 }
 
+/*
+ * damon_call_handle_inactive_ctx() - handle DAMON call request that added to
+ * an inactive context.
+ * @ctx:        The inactive DAMON context.
+ * @control:    Control variable of the call request.
+ *
+ * This function is called in a case that @control is added to @ctx but @ctx is
+ * not running (inactive). See if @ctx handled @control or not, and cleanup
+ * @control if it was not handled.
+ *
+ * Returns 0 if @control was handled by @ctx, negative error code otherwise.
+ */
+static int damon_call_handle_inactive_ctx(
+                struct damon_ctx *ctx, struct damon_call_control *control)
+{
+        struct damon_call_control *c;
+
+        mutex_lock(&ctx->call_controls_lock);
+        list_for_each_entry(c, &ctx->call_controls, list) {
+                if (c == control) {
+                        list_del(&control->list);
+                        mutex_unlock(&ctx->call_controls_lock);
+                        return -EINVAL;
+                }
+        }
+        mutex_unlock(&ctx->call_controls_lock);
+        return 0;
+}
+
 /**
  * damon_call() - Invoke a given function on DAMON worker thread (kdamond).
  * @ctx:        DAMON context to call the function for.
···
         list_add_tail(&control->list, &ctx->call_controls);
         mutex_unlock(&ctx->call_controls_lock);
         if (!damon_is_running(ctx))
-                return -EINVAL;
+                return damon_call_handle_inactive_ctx(ctx, control);
         if (control->repeat)
                 return 0;
         wait_for_completion(&control->completion);
···
 
         rcu_read_lock();
         memcg = mem_cgroup_from_id(goal->memcg_id);
-        rcu_read_unlock();
-        if (!memcg) {
+        if (!memcg || !mem_cgroup_tryget(memcg)) {
+                rcu_read_unlock();
                 if (goal->metric == DAMOS_QUOTA_NODE_MEMCG_USED_BP)
                         return 0;
                 else    /* DAMOS_QUOTA_NODE_MEMCG_FREE_BP */
                         return 10000;
         }
+        rcu_read_unlock();
+
         mem_cgroup_flush_stats(memcg);
         lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(goal->nid));
         used_pages = lruvec_page_state(lruvec, NR_ACTIVE_ANON);
         used_pages += lruvec_page_state(lruvec, NR_INACTIVE_ANON);
         used_pages += lruvec_page_state(lruvec, NR_ACTIVE_FILE);
         used_pages += lruvec_page_state(lruvec, NR_INACTIVE_FILE);
+
+        mem_cgroup_put(memcg);
 
         si_meminfo_node(&i, goal->nid);
         if (goal->metric == DAMOS_QUOTA_NODE_MEMCG_USED_BP)
···
         if (ctx->ops.cleanup)
                 ctx->ops.cleanup(ctx);
         kfree(ctx->regions_score_histogram);
+        kdamond_call(ctx, true);
 
         pr_debug("kdamond (%d) finishes\n", current->pid);
         mutex_lock(&ctx->kdamond_lock);
         ctx->kdamond = NULL;
         mutex_unlock(&ctx->kdamond_lock);
 
-        kdamond_call(ctx, true);
         damos_walk_cancel(ctx);
 
         mutex_lock(&damon_lock);
         pcp_trylock_finish(UP_flags);                           \
 })
 
+/*
+ * With the UP spinlock implementation, when we spin_lock(&pcp->lock) (for i.e.
+ * a potentially remote cpu drain) and get interrupted by an operation that
+ * attempts pcp_spin_trylock(), we can't rely on the trylock failure due to UP
+ * spinlock assumptions making the trylock a no-op. So we have to turn that
+ * spin_lock() to a spin_lock_irqsave(). This works because on UP there are no
+ * remote cpu's so we can only be locking the only existing local one.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
+static inline void __flags_noop(unsigned long *flags) { }
+#define pcp_spin_lock_maybe_irqsave(ptr, flags)         \
+({                                                      \
+        __flags_noop(&(flags));                         \
+        spin_lock(&(ptr)->lock);                        \
+})
+#define pcp_spin_unlock_maybe_irqrestore(ptr, flags)    \
+({                                                      \
+        spin_unlock(&(ptr)->lock);                      \
+        __flags_noop(&(flags));                         \
+})
+#else
+#define pcp_spin_lock_maybe_irqsave(ptr, flags)         \
+        spin_lock_irqsave(&(ptr)->lock, flags)
+#define pcp_spin_unlock_maybe_irqrestore(ptr, flags)    \
+        spin_unlock_irqrestore(&(ptr)->lock, flags)
+#endif
+
 #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
 DEFINE_PER_CPU(int, numa_node);
 EXPORT_PER_CPU_SYMBOL(numa_node);
···
 bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
         int high_min, to_drain, to_drain_batched, batch;
+        unsigned long UP_flags;
         bool todo = false;
 
         high_min = READ_ONCE(pcp->high_min);
···
         to_drain = pcp->count - pcp->high;
         while (to_drain > 0) {
                 to_drain_batched = min(to_drain, batch);
-                spin_lock(&pcp->lock);
+                pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
                 free_pcppages_bulk(zone, to_drain_batched, pcp, 0);
-                spin_unlock(&pcp->lock);
+                pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
                 todo = true;
 
                 to_drain -= to_drain_batched;
···
  */
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
+        unsigned long UP_flags;
         int to_drain, batch;
 
         batch = READ_ONCE(pcp->batch);
         to_drain = min(pcp->count, batch);
         if (to_drain > 0) {
-                spin_lock(&pcp->lock);
+                pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
                 free_pcppages_bulk(zone, to_drain, pcp, 0);
-                spin_unlock(&pcp->lock);
+                pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
         }
 }
 #endif
···
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
         struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+        unsigned long UP_flags;
         int count;
 
         do {
-                spin_lock(&pcp->lock);
+                pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
                 count = pcp->count;
                 if (count) {
                         int to_drain = min(count,
                         free_pcppages_bulk(zone, to_drain, pcp, 0);
                         count -= to_drain;
                 }
-                spin_unlock(&pcp->lock);
+                pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
         } while (count);
 }
···
 {
         struct per_cpu_pages *pcp;
         struct cpu_cacheinfo *cci;
+        unsigned long UP_flags;
 
         pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
         cci = get_cpu_cacheinfo(cpu);
···
          * This can reduce zone lock contention without hurting
          * cache-hot pages sharing.
          */
-        spin_lock(&pcp->lock);
+        pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
         if ((cci->per_cpu_data_slice_size >> PAGE_SHIFT) > 3 * pcp->batch)
                 pcp->flags |= PCPF_FREE_HIGH_BATCH;
         else
                 pcp->flags &= ~PCPF_FREE_HIGH_BATCH;
-        spin_unlock(&pcp->lock);
+        pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
 }
 
 void setup_pcp_cacheinfo(unsigned int cpu)
···
         int old_percpu_pagelist_high_fraction;
         int ret;
 
+        /*
+         * Avoid using pcp_batch_high_lock for reads as the value is read
+         * atomically and a race with offlining is harmless.
+         */
+
+        if (!write)
+                return proc_dointvec_minmax(table, write, buffer, length, ppos);
+
         mutex_lock(&pcp_batch_high_lock);
         old_percpu_pagelist_high_fraction = percpu_pagelist_high_fraction;
 
         ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
-        if (!write || ret < 0)
+        if (ret < 0)
                 goto out;
 
         /* Sanity checking to avoid pcp imbalance */
+74-37
mm/vma.c
         .state = VMA_MERGE_START,               \
 }
 
-/*
- * If, at any point, the VMA had unCoW'd mappings from parents, it will maintain
- * more than one anon_vma_chain connecting it to more than one anon_vma. A merge
- * would mean a wider range of folios sharing the root anon_vma lock, and thus
- * potential lock contention, we do not wish to encourage merging such that this
- * scales to a problem.
- */
-static bool vma_had_uncowed_parents(struct vm_area_struct *vma)
+/* Was this VMA ever forked from a parent, i.e. maybe contains CoW mappings? */
+static bool vma_is_fork_child(struct vm_area_struct *vma)
 {
         /*
          * The list_is_singular() test is to avoid merging VMA cloned from
-         * parents. This can improve scalability caused by anon_vma lock.
+         * parents. This can improve scalability caused by the anon_vma root
+         * lock.
          */
         return vma && vma->anon_vma && !list_is_singular(&vma->anon_vma_chain);
 }
···
         VM_WARN_ON(src && src_anon != src->anon_vma);
 
         /* Case 1 - we will dup_anon_vma() from src into tgt. */
-        if (!tgt_anon && src_anon)
-                return !vma_had_uncowed_parents(src);
+        if (!tgt_anon && src_anon) {
+                struct vm_area_struct *copied_from = vmg->copied_from;
+
+                if (vma_is_fork_child(src))
+                        return false;
+                if (vma_is_fork_child(copied_from))
+                        return false;
+
+                return true;
+        }
         /* Case 2 - we will simply use tgt's anon_vma. */
         if (tgt_anon && !src_anon)
-                return !vma_had_uncowed_parents(tgt);
+                return !vma_is_fork_child(tgt);
         /* Case 3 - the anon_vma's are already shared. */
         return src_anon == tgt_anon;
 }
···
         VM_WARN_ON_VMG(middle &&
                        !(vma_iter_addr(vmg->vmi) >= middle->vm_start &&
                          vma_iter_addr(vmg->vmi) < middle->vm_end), vmg);
+        /* An existing merge can never be used by the mremap() logic. */
+        VM_WARN_ON_VMG(vmg->copied_from, vmg);
 
         vmg->state = VMA_MERGE_NOMERGE;
···
 }
 
 /*
+ * vma_merge_copied_range - Attempt to merge a VMA that is being copied by
+ * mremap()
+ *
+ * @vmg: Describes the VMA we are adding, in the copied-to range @vmg->start to
+ * @vmg->end (exclusive), which we try to merge with any adjacent VMAs if
+ * possible.
+ *
+ * vmg->prev, next, start, end, pgoff should all be relative to the COPIED TO
+ * range, i.e. the target range for the VMA.
+ *
+ * Returns: In instances where no merge was possible, NULL. Otherwise, a pointer
+ * to the VMA we expanded.
+ *
+ * ASSUMPTIONS: Same as vma_merge_new_range(), except vmg->middle must contain
+ * the copied-from VMA.
+ */
+static struct vm_area_struct *vma_merge_copied_range(struct vma_merge_struct *vmg)
+{
+        /* We must have a copied-from VMA. */
+        VM_WARN_ON_VMG(!vmg->middle, vmg);
+
+        vmg->copied_from = vmg->middle;
+        vmg->middle = NULL;
+        return vma_merge_new_range(vmg);
+}
+
+/*
  * vma_expand - Expand an existing VMA
  *
  * @vmg: Describes a VMA expansion operation.
···
 int vma_expand(struct vma_merge_struct *vmg)
 {
         struct vm_area_struct *anon_dup = NULL;
-        bool remove_next = false;
         struct vm_area_struct *target = vmg->target;
         struct vm_area_struct *next = vmg->next;
+        bool remove_next = false;
         vm_flags_t sticky_flags;
-
-        sticky_flags = vmg->vm_flags & VM_STICKY;
-        sticky_flags |= target->vm_flags & VM_STICKY;
-
-        VM_WARN_ON_VMG(!target, vmg);
+        int ret = 0;
 
         mmap_assert_write_locked(vmg->mm);
-
         vma_start_write(target);
-        if (next && (target != next) && (vmg->end == next->vm_end)) {
-                int ret;
 
-                sticky_flags |= next->vm_flags & VM_STICKY;
+        if (next && target != next && vmg->end == next->vm_end)
                 remove_next = true;
-                /* This should already have been checked by this point. */
-                VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
-                vma_start_write(next);
-                /*
-                 * In this case we don't report OOM, so vmg->give_up_on_mm is
-                 * safe.
-                 */
-                ret = dup_anon_vma(target, next, &anon_dup);
-                if (ret)
-                        return ret;
-        }
 
+        /* We must have a target. */
+        VM_WARN_ON_VMG(!target, vmg);
+        /* This should have already been checked by this point. */
+        VM_WARN_ON_VMG(remove_next && !can_merge_remove_vma(next), vmg);
         /* Not merging but overwriting any part of next is not handled. */
         VM_WARN_ON_VMG(next && !remove_next &&
                        next != target && vmg->end > next->vm_start, vmg);
-        /* Only handles expanding */
+        /* Only handles expanding. */
         VM_WARN_ON_VMG(target->vm_start < vmg->start ||
                        target->vm_end > vmg->end, vmg);
 
+        sticky_flags = vmg->vm_flags & VM_STICKY;
+        sticky_flags |= target->vm_flags & VM_STICKY;
         if (remove_next)
-                vmg->__remove_next = true;
+                sticky_flags |= next->vm_flags & VM_STICKY;
 
+        /*
+         * If we are removing the next VMA or copying from a VMA
+         * (e.g. mremap()'ing), we must propagate anon_vma state.
+         *
+         * Note that, by convention, callers ignore OOM for this case, so
+         * we don't need to account for vmg->give_up_on_mm here.
+         */
+        if (remove_next)
+                ret = dup_anon_vma(target, next, &anon_dup);
+        if (!ret && vmg->copied_from)
+                ret = dup_anon_vma(target, vmg->copied_from, &anon_dup);
+        if (ret)
+                return ret;
+
+        if (remove_next) {
+                vma_start_write(next);
+                vmg->__remove_next = true;
+        }
         if (commit_merge(vmg))
                 goto nomem;
···
         if (new_vma && new_vma->vm_start < addr + len)
                 return NULL;    /* should never get here */
 
-        vmg.middle = NULL; /* New VMA range. */
         vmg.pgoff = pgoff;
         vmg.next = vma_iter_next_rewind(&vmi, NULL);
-        new_vma = vma_merge_new_range(&vmg);
+        new_vma = vma_merge_copied_range(&vmg);
 
         if (new_vma) {
                 /*
+3
mm/vma.h
         struct anon_vma_name *anon_name;
         enum vma_merge_state state;
 
+        /* If copied from (i.e. mremap()'d) the VMA from which we are copying. */
+        struct vm_area_struct *copied_from;
+
         /* Flags which callers can use to modify merge behaviour: */
 
         /*
         if (bis_capable(hdev)) {
                 events[1] |= 0x20;      /* LE PA Report */
                 events[1] |= 0x40;      /* LE PA Sync Established */
+                events[1] |= 0x80;      /* LE PA Sync Lost */
                 events[3] |= 0x04;      /* LE Create BIG Complete */
                 events[3] |= 0x08;      /* LE Terminate BIG Complete */
                 events[3] |= 0x10;      /* LE BIG Sync Established */
+17-8
net/bpf/test_run.c
                 batch_size = NAPI_POLL_WEIGHT;
         else if (batch_size > TEST_XDP_MAX_BATCH)
                 return -E2BIG;
-
-                headroom += sizeof(struct xdp_page_head);
         } else if (batch_size) {
                 return -EINVAL;
         }
···
         /* There can't be user provided data before the meta data */
         if (ctx->data_meta || ctx->data_end > kattr->test.data_size_in ||
             ctx->data > ctx->data_end ||
-            unlikely(xdp_metalen_invalid(ctx->data)) ||
             (do_live && (kattr->test.data_out || kattr->test.ctx_out)))
                 goto free_ctx;
-        /* Meta data is allocated from the headroom */
-        headroom -= ctx->data;
 
         meta_sz = ctx->data;
+        if (xdp_metalen_invalid(meta_sz) || meta_sz > headroom - sizeof(struct xdp_frame))
+                goto free_ctx;
+
+        /* Meta data is allocated from the headroom */
+        headroom -= meta_sz;
         linear_sz = ctx->data_end;
         }
+
+        /* The xdp_page_head structure takes up space in each page, limiting the
+         * size of the packet data; add the extra size to headroom here to make
+         * sure it's accounted in the length checks below, but not in the
+         * metadata size check above.
+         */
+        if (do_live)
+                headroom += sizeof(struct xdp_page_head);
 
         max_linear_sz = PAGE_SIZE - headroom - tailroom;
         linear_sz = min_t(u32, linear_sz, max_linear_sz);
···
 
         if (sinfo->nr_frags == MAX_SKB_FRAGS) {
                 ret = -ENOMEM;
-                goto out;
+                goto out_put_dev;
         }
 
         page = alloc_page(GFP_KERNEL);
         if (!page) {
                 ret = -ENOMEM;
-                goto out;
+                goto out_put_dev;
         }
 
         frag = &sinfo->frags[sinfo->nr_frags++];
···
         if (copy_from_user(page_address(page), data_in + size,
                            data_len)) {
                 ret = -EFAULT;
-                goto out;
+                goto out_put_dev;
         }
         sinfo->xdp_frags_size += data_len;
         size += data_len;
···
                 ret = bpf_test_run_xdp_live(prog, &xdp, repeat, batch_size, &duration);
         else
                 ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
+out_put_dev:
         /* We convert the xdp_buff back to an xdp_md before checking the return
          * code so the reference count of any held netdevice will be decremented
          * even if the test run failed.
         if (test_bit(BR_FDB_LOCAL, &dst->flags))
                 return br_pass_frame_up(skb, false);
 
-        if (now != dst->used)
-                dst->used = now;
+        if (now != READ_ONCE(dst->used))
+                WRITE_ONCE(dst->used, now);
         br_forward(dst->dst, skb, local_rcv, false);
         } else {
                 if (!mcast_hit)
+9-1
net/can/j1939/transport.c
 
         j1939_session_timers_cancel(session);
         j1939_session_cancel(session, J1939_XTP_ABORT_BUSY);
-        if (session->transmission)
+        if (session->transmission) {
                 j1939_session_deactivate_activate_next(session);
+        } else if (session->state == J1939_SESSION_WAITING_ABORT) {
+                /* Force deactivation for the receiver.
+                 * If we rely on the timer starting in j1939_session_cancel,
+                 * a second RTS call here will cancel that timer and fail
+                 * to restart it because the state is already WAITING_ABORT.
+                 */
+                j1939_session_deactivate_activate_next(session);
+        }
 
         return -EBUSY;
 }
+10-41
net/can/raw.c
 #include <linux/if_arp.h>
 #include <linux/skbuff.h>
 #include <linux/can.h>
+#include <linux/can/can-ml.h>
 #include <linux/can/core.h>
-#include <linux/can/dev.h> /* for can_is_canxl_dev_mtu() */
 #include <linux/can/skb.h>
 #include <linux/can/raw.h>
 #include <net/sock.h>
···
         }
 }
 
-static inline bool raw_dev_cc_enabled(struct net_device *dev,
-                                      struct can_priv *priv)
-{
-        /* The CANXL-only mode disables error-signalling on the CAN bus
-         * which is needed to send CAN CC/FD frames
-         */
-        if (priv)
-                return !can_dev_in_xl_only_mode(priv);
-
-        /* virtual CAN interfaces always support CAN CC */
-        return true;
-}
-
-static inline bool raw_dev_fd_enabled(struct net_device *dev,
-                                      struct can_priv *priv)
-{
-        /* check FD ctrlmode on real CAN interfaces */
-        if (priv)
-                return (priv->ctrlmode & CAN_CTRLMODE_FD);
-
-        /* check MTU for virtual CAN FD interfaces */
-        return (READ_ONCE(dev->mtu) >= CANFD_MTU);
-}
-
-static inline bool raw_dev_xl_enabled(struct net_device *dev,
-                                      struct can_priv *priv)
-{
-        /* check XL ctrlmode on real CAN interfaces */
-        if (priv)
-                return (priv->ctrlmode & CAN_CTRLMODE_XL);
-
-        /* check MTU for virtual CAN XL interfaces */
-        return can_is_canxl_dev_mtu(READ_ONCE(dev->mtu));
-}
-
 static unsigned int raw_check_txframe(struct raw_sock *ro, struct sk_buff *skb,
                                       struct net_device *dev)
 {
-        struct can_priv *priv = safe_candev_priv(dev);
-
         /* Classical CAN */
-        if (can_is_can_skb(skb) && raw_dev_cc_enabled(dev, priv))
+        if (can_is_can_skb(skb) && can_cap_enabled(dev, CAN_CAP_CC))
                 return CAN_MTU;
 
         /* CAN FD */
         if (ro->fd_frames && can_is_canfd_skb(skb) &&
-            raw_dev_fd_enabled(dev, priv))
+            can_cap_enabled(dev, CAN_CAP_FD))
                 return CANFD_MTU;
 
         /* CAN XL */
         if (ro->xl_frames && can_is_canxl_skb(skb) &&
-            raw_dev_xl_enabled(dev, priv))
+            can_cap_enabled(dev, CAN_CAP_XL))
                 return CANXL_MTU;
 
         return 0;
···
         dev = dev_get_by_index(sock_net(sk), ifindex);
         if (!dev)
                 return -ENXIO;
+
+        /* no sending on a CAN device in read-only mode */
+        if (can_cap_enabled(dev, CAN_CAP_RO)) {
+                err = -EACCES;
+                goto put_dev;
+        }
 
         skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
                                   msg->msg_flags & MSG_DONTWAIT, &err);
+22-9
net/core/dev.c
          ARPHRD_IEEE1394, ARPHRD_EUI64, ARPHRD_INFINIBAND, ARPHRD_SLIP,
          ARPHRD_CSLIP, ARPHRD_SLIP6, ARPHRD_CSLIP6, ARPHRD_RSRVD,
          ARPHRD_ADAPT, ARPHRD_ROSE, ARPHRD_X25, ARPHRD_HWX25,
+         ARPHRD_CAN, ARPHRD_MCTP,
          ARPHRD_PPP, ARPHRD_CISCO, ARPHRD_LAPB, ARPHRD_DDCMP,
-         ARPHRD_RAWHDLC, ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD,
+         ARPHRD_RAWHDLC, ARPHRD_RAWIP,
+         ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD,
          ARPHRD_SKIP, ARPHRD_LOOPBACK, ARPHRD_LOCALTLK, ARPHRD_FDDI,
          ARPHRD_BIF, ARPHRD_SIT, ARPHRD_IPDDP, ARPHRD_IPGRE,
          ARPHRD_PIMREG, ARPHRD_HIPPI, ARPHRD_ASH, ARPHRD_ECONET,
          ARPHRD_IRDA, ARPHRD_FCPP, ARPHRD_FCAL, ARPHRD_FCPL,
          ARPHRD_FCFABRIC, ARPHRD_IEEE80211, ARPHRD_IEEE80211_PRISM,
-         ARPHRD_IEEE80211_RADIOTAP, ARPHRD_PHONET, ARPHRD_PHONET_PIPE,
-         ARPHRD_IEEE802154, ARPHRD_VOID, ARPHRD_NONE};
+         ARPHRD_IEEE80211_RADIOTAP,
+         ARPHRD_IEEE802154, ARPHRD_IEEE802154_MONITOR,
+         ARPHRD_PHONET, ARPHRD_PHONET_PIPE,
+         ARPHRD_CAIF, ARPHRD_IP6GRE, ARPHRD_NETLINK, ARPHRD_6LOWPAN,
+         ARPHRD_VSOCKMON,
+         ARPHRD_VOID, ARPHRD_NONE};
 
 static const char *const netdev_lock_name[] = {
         "_xmit_NETROM", "_xmit_ETHER", "_xmit_EETHER", "_xmit_AX25",
···
         "_xmit_IEEE1394", "_xmit_EUI64", "_xmit_INFINIBAND", "_xmit_SLIP",
         "_xmit_CSLIP", "_xmit_SLIP6", "_xmit_CSLIP6", "_xmit_RSRVD",
         "_xmit_ADAPT", "_xmit_ROSE", "_xmit_X25", "_xmit_HWX25",
+        "_xmit_CAN", "_xmit_MCTP",
         "_xmit_PPP", "_xmit_CISCO", "_xmit_LAPB", "_xmit_DDCMP",
-        "_xmit_RAWHDLC", "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD",
+        "_xmit_RAWHDLC", "_xmit_RAWIP",
+        "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD",
         "_xmit_SKIP", "_xmit_LOOPBACK", "_xmit_LOCALTLK", "_xmit_FDDI",
         "_xmit_BIF", "_xmit_SIT", "_xmit_IPDDP", "_xmit_IPGRE",
         "_xmit_PIMREG", "_xmit_HIPPI", "_xmit_ASH", "_xmit_ECONET",
         "_xmit_IRDA", "_xmit_FCPP", "_xmit_FCAL", "_xmit_FCPL",
         "_xmit_FCFABRIC", "_xmit_IEEE80211", "_xmit_IEEE80211_PRISM",
-        "_xmit_IEEE80211_RADIOTAP", "_xmit_PHONET", "_xmit_PHONET_PIPE",
-        "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"};
+        "_xmit_IEEE80211_RADIOTAP",
+        "_xmit_IEEE802154", "_xmit_IEEE802154_MONITOR",
+        "_xmit_PHONET", "_xmit_PHONET_PIPE",
+        "_xmit_CAIF", "_xmit_IP6GRE", "_xmit_NETLINK", "_xmit_6LOWPAN",
+        "_xmit_VSOCKMON",
+        "_xmit_VOID", "_xmit_NONE"};
 
 static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
 static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)];
···
                 if (netdev_lock_type[i] == dev_type)
                         return i;
         /* the last key is used by default */
+        WARN_ONCE(1, "netdev_lock_pos() could not find dev_type=%u\n", dev_type);
         return ARRAY_SIZE(netdev_lock_type) - 1;
 }
···
         do {
                 if (first_n && !defer_count) {
                         defer_count = atomic_long_inc_return(&q->defer_count);
-                        if (unlikely(defer_count > READ_ONCE(q->limit))) {
-                                kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
+                        if (unlikely(defer_count > READ_ONCE(net_hotdata.qdisc_max_burst))) {
+                                kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_BURST_DROP);
                                 return NET_XMIT_DROP;
                         }
                 }
···
         ll_list = llist_del_all(&q->defer_list);
         /* There is a small race because we clear defer_count not atomically
          * with the prior llist_del_all(). This means defer_list could grow
-         * over q->limit.
+         * over qdisc_max_burst.
          */
         atomic_long_set(&q->defer_count, 0);
 
         int err;
 
         if (family == AF_INET &&
+            (!x->dir || x->dir == XFRM_SA_DIR_OUT) &&
             READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc))
                 x->props.flags |= XFRM_STATE_NOPMTUDISC;
 
+42
rust/helpers/bitops.c
···
 // SPDX-License-Identifier: GPL-2.0
 
 #include <linux/bitops.h>
+#include <linux/find.h>
 
 void rust_helper___set_bit(unsigned long nr, unsigned long *addr)
 {
···
 {
	clear_bit(nr, addr);
 }
+
+/*
+ * The rust_helper_ prefix is intentionally omitted below so that the
+ * declarations in include/linux/find.h are compatible with these helpers.
+ *
+ * Note that the below #ifdefs mean that the helper is only created if C does
+ * not provide a definition.
+ */
+#ifdef find_first_zero_bit
+__rust_helper
+unsigned long _find_first_zero_bit(const unsigned long *p, unsigned long size)
+{
+	return find_first_zero_bit(p, size);
+}
+#endif /* find_first_zero_bit */
+
+#ifdef find_next_zero_bit
+__rust_helper
+unsigned long _find_next_zero_bit(const unsigned long *addr,
+				  unsigned long size, unsigned long offset)
+{
+	return find_next_zero_bit(addr, size, offset);
+}
+#endif /* find_next_zero_bit */
+
+#ifdef find_first_bit
+__rust_helper
+unsigned long _find_first_bit(const unsigned long *addr, unsigned long size)
+{
+	return find_first_bit(addr, size);
+}
+#endif /* find_first_bit */
+
+#ifdef find_next_bit
+__rust_helper
+unsigned long _find_next_bit(const unsigned long *addr, unsigned long size,
+			     unsigned long offset)
+{
+	return find_next_bit(addr, size, offset);
+}
+#endif /* find_next_bit */
+1 -1
security/landlock/audit.c
···
	long youngest_layer = -1;
 
	for_each_set_bit(access_bit, &access_req, layer_masks_size) {
-		const access_mask_t mask = (*layer_masks)[access_bit];
+		const layer_mask_t mask = (*layer_masks)[access_bit];
		long layer;
 
		if (!mask)
+1 -1
security/landlock/domain.h
···
	 */
	atomic64_t num_denials;
	/**
-	 * @id: Landlock domain ID, sets once at domain creation time.
+	 * @id: Landlock domain ID, set once at domain creation time.
	 */
	u64 id;
	/**
+1 -1
security/landlock/errata/abi-6.h
···
 * This fix addresses an issue where signal scoping was overly restrictive,
 * preventing sandboxed threads from signaling other threads within the same
 * process if they belonged to different domains. Because threads are not
- * security boundaries, user space might assume that any thread within the same
+ * security boundaries, user space might assume that all threads within the same
 * process can send signals between themselves (see :manpage:`nptl(7)` and
 * :manpage:`libpsx(3)`). Consistent with :manpage:`ptrace(2)` behavior, direct
 * interaction between threads of the same process should always be allowed.
+11 -3
security/landlock/fs.c
···
	}
	path_put(&walker_path);
 
-	if (!allowed_parent1) {
+	/*
+	 * Check CONFIG_AUDIT to enable elision of log_request_parent* and
+	 * associated caller's stack variables thanks to dead code elimination.
+	 */
+#ifdef CONFIG_AUDIT
+	if (!allowed_parent1 && log_request_parent1) {
		log_request_parent1->type = LANDLOCK_REQUEST_FS_ACCESS;
		log_request_parent1->audit.type = LSM_AUDIT_DATA_PATH;
		log_request_parent1->audit.u.path = *path;
···
			ARRAY_SIZE(*layer_masks_parent1);
	}
 
-	if (!allowed_parent2) {
+	if (!allowed_parent2 && log_request_parent2) {
		log_request_parent2->type = LANDLOCK_REQUEST_FS_ACCESS;
		log_request_parent2->audit.type = LSM_AUDIT_DATA_PATH;
		log_request_parent2->audit.u.path = *path;
···
		log_request_parent2->layer_masks_size =
			ARRAY_SIZE(*layer_masks_parent2);
	}
+#endif /* CONFIG_AUDIT */
+
	return allowed_parent1 && allowed_parent2;
 }
···
	 * second call to iput() for the same Landlock object. Also
	 * checks I_NEW because such inode cannot be tied to an object.
	 */
-	if (inode_state_read(inode) & (I_FREEING | I_WILL_FREE | I_NEW)) {
+	if (inode_state_read(inode) &
+	    (I_FREEING | I_WILL_FREE | I_NEW)) {
		spin_unlock(&inode->i_lock);
		continue;
	}
+67 -51
security/landlock/net.c
···
 
	switch (address->sa_family) {
	case AF_UNSPEC:
+		if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP) {
+			/*
+			 * Connecting to an address with AF_UNSPEC dissolves
+			 * the TCP association, which has the same effect as
+			 * closing the connection while retaining the socket
+			 * object (i.e., the file descriptor). As for dropping
+			 * privileges, closing connections is always allowed.
+			 *
+			 * For a TCP access control system, this request is
+			 * legitimate. Let the network stack handle potential
+			 * inconsistencies and return -EINVAL if needed.
+			 */
+			return 0;
+		} else if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) {
+			/*
+			 * Binding to an AF_UNSPEC address is treated
+			 * differently by IPv4 and IPv6 sockets. The socket's
+			 * family may change under our feet due to
+			 * setsockopt(IPV6_ADDRFORM), but that's ok: we either
+			 * reject entirely or require
+			 * %LANDLOCK_ACCESS_NET_BIND_TCP for the given port, so
+			 * it cannot be used to bypass the policy.
+			 *
+			 * IPv4 sockets map AF_UNSPEC to AF_INET for backward
+			 * compatibility for bind accesses, only if the
+			 * address is INADDR_ANY (cf. __inet_bind). IPv6
+			 * sockets always reject it.
+			 *
+			 * Checking the address is required to not wrongfully
+			 * return -EACCES instead of -EAFNOSUPPORT or -EINVAL.
+			 * We could return 0 and let the network stack handle
+			 * these checks, but it is safer to return a proper
+			 * error and test consistency thanks to kselftest.
+			 */
+			if (sock->sk->__sk_common.skc_family == AF_INET) {
+				const struct sockaddr_in *const sockaddr =
+					(struct sockaddr_in *)address;
+
+				if (addrlen < sizeof(struct sockaddr_in))
+					return -EINVAL;
+
+				if (sockaddr->sin_addr.s_addr !=
+				    htonl(INADDR_ANY))
+					return -EAFNOSUPPORT;
+			} else {
+				if (addrlen < SIN6_LEN_RFC2133)
+					return -EINVAL;
+				else
+					return -EAFNOSUPPORT;
+			}
+		} else {
+			WARN_ON_ONCE(1);
+		}
+		/* Only for bind(AF_UNSPEC+INADDR_ANY) on IPv4 socket. */
+		fallthrough;
	case AF_INET: {
		const struct sockaddr_in *addr4;
···
		return 0;
	}
 
-	/* Specific AF_UNSPEC handling. */
-	if (address->sa_family == AF_UNSPEC) {
-		/*
-		 * Connecting to an address with AF_UNSPEC dissolves the TCP
-		 * association, which have the same effect as closing the
-		 * connection while retaining the socket object (i.e., the file
-		 * descriptor). As for dropping privileges, closing
-		 * connections is always allowed.
-		 *
-		 * For a TCP access control system, this request is legitimate.
-		 * Let the network stack handle potential inconsistencies and
-		 * return -EINVAL if needed.
-		 */
-		if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP)
-			return 0;
-
-		/*
-		 * For compatibility reason, accept AF_UNSPEC for bind
-		 * accesses (mapped to AF_INET) only if the address is
-		 * INADDR_ANY (cf. __inet_bind). Checking the address is
-		 * required to not wrongfully return -EACCES instead of
-		 * -EAFNOSUPPORT.
-		 *
-		 * We could return 0 and let the network stack handle these
-		 * checks, but it is safer to return a proper error and test
-		 * consistency thanks to kselftest.
-		 */
-		if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) {
-			/* addrlen has already been checked for AF_UNSPEC. */
-			const struct sockaddr_in *const sockaddr =
-				(struct sockaddr_in *)address;
-
-			if (sock->sk->__sk_common.skc_family != AF_INET)
-				return -EINVAL;
-
-			if (sockaddr->sin_addr.s_addr != htonl(INADDR_ANY))
-				return -EAFNOSUPPORT;
-		}
-	} else {
-		/*
-		 * Checks sa_family consistency to not wrongfully return
-		 * -EACCES instead of -EINVAL. Valid sa_family changes are
-		 * only (from AF_INET or AF_INET6) to AF_UNSPEC.
-		 *
-		 * We could return 0 and let the network stack handle this
-		 * check, but it is safer to return a proper error and test
-		 * consistency thanks to kselftest.
-		 */
-		if (address->sa_family != sock->sk->__sk_common.skc_family)
-			return -EINVAL;
-	}
+	/*
+	 * Checks sa_family consistency to not wrongfully return
+	 * -EACCES instead of -EINVAL. Valid sa_family changes are
+	 * only (from AF_INET or AF_INET6) to AF_UNSPEC.
+	 *
+	 * We could return 0 and let the network stack handle this
+	 * check, but it is safer to return a proper error and test
+	 * consistency thanks to kselftest.
+	 */
+	if (address->sa_family != sock->sk->__sk_common.skc_family &&
+	    address->sa_family != AF_UNSPEC)
+		return -EINVAL;
 
	id.key.data = (__force uintptr_t)port;
	BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data));
···
			  const unsigned int mode)
 {
	const struct landlock_cred_security *parent_subject;
-	const struct landlock_ruleset *child_dom;
	int err;
 
	/* Quick return for non-landlocked tasks. */
···
 
	scoped_guard(rcu)
	{
-		child_dom = landlock_get_task_domain(child);
+		const struct landlock_ruleset *const child_dom =
+			landlock_get_task_domain(child);
		err = domain_ptrace(parent_subject->domain, child_dom);
	}
···
 }
 
 /**
- * domain_is_scoped - Checks if the client domain is scoped in the same
- * domain as the server.
+ * domain_is_scoped - Check if an interaction from a client/sender to a
+ * server/receiver should be restricted based on scope controls.
 *
 * @client: IPC sender domain.
 * @server: IPC receiver domain.
 * @scope: The scope restriction criteria.
 *
- * Returns: True if the @client domain is scoped to access the @server,
- * unless the @server is also scoped in the same domain as @client.
+ * Returns: True if @server is in a different domain from @client, and @client
+ * is scoped to access @server (i.e. access should be denied).
 */
 static bool domain_is_scoped(const struct landlock_ruleset *const client,
			     const struct landlock_ruleset *const server,
···
 }
 
 /* fill the PCM buffer with the current silence format; called from pcm_oss.c */
-void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime)
+int snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime)
 {
-	snd_pcm_buffer_access_lock(runtime);
+	int err;
+
+	err = snd_pcm_buffer_access_lock(runtime);
+	if (err < 0)
+		return err;
	if (runtime->dma_area)
		snd_pcm_format_set_silence(runtime->format, runtime->dma_area,
					   bytes_to_samples(runtime, runtime->dma_bytes));
	snd_pcm_buffer_access_unlock(runtime);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence);
···
	struct test_xdp_context_test_run *skel = NULL;
	char data[sizeof(pkt_v4) + sizeof(__u32)];
	char bad_ctx[sizeof(struct xdp_md) + 1];
+	char large_data[256];
	struct xdp_md ctx_in, ctx_out;
	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
			    .data_in = &data,
···
	test_xdp_context_error(prog_fd, opts, 4, sizeof(__u32), sizeof(data),
			       0, 0, 0);
 
-	/* Meta data must be 255 bytes or smaller */
-	test_xdp_context_error(prog_fd, opts, 0, 256, sizeof(data), 0, 0, 0);
-
	/* Total size of data must be data_end - data_meta or larger */
	test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32),
			       sizeof(data) + 1, 0, 0, 0);
···
	/* The egress cannot be specified */
	test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32), sizeof(data),
			       0, 0, 1);
+
+	/* Metadata must be 216 bytes or smaller (256 - sizeof(struct
+	 * xdp_frame)). Test both the nearest invalid size and the nearest
+	 * invalid 4-byte-aligned size, and make sure data_in is large enough
+	 * that we actually hit the check on metadata length.
+	 */
+	opts.data_in = large_data;
+	opts.data_size_in = sizeof(large_data);
+	test_xdp_context_error(prog_fd, opts, 0, 217, sizeof(large_data), 0, 0, 0);
+	test_xdp_context_error(prog_fd, opts, 0, 220, sizeof(large_data), 0, 0, 0);
 
	test_xdp_context_test_run__destroy(skel);
 }
+2 -2
tools/testing/selftests/drivers/net/hw/toeplitz.c
···
 
	bitmap = strtoul(arg, NULL, 0);
 
-	if (bitmap & ~(RPS_MAX_CPUS - 1))
-		error(1, 0, "rps bitmap 0x%lx out of bounds 0..%lu",
+	if (bitmap & ~((1UL << RPS_MAX_CPUS) - 1))
+		error(1, 0, "rps bitmap 0x%lx out of bounds, max cpu %lu",
		      bitmap, RPS_MAX_CPUS - 1);
 
	for (i = 0; i < RPS_MAX_CPUS; i++)
···
	mask = 0
	for cpu in rps_cpus:
		mask |= (1 << cpu)
-	mask = hex(mask)[2:]
+
+	mask = hex(mask)
 
	# Set RPS bitmap for all rx queues
	for rps_file in glob.glob(f"/sys/class/net/{cfg.ifname}/queues/rx-*/rps_cpus"):
		with open(rps_file, "w", encoding="utf-8") as fp:
-			fp.write(mask)
+			# sysfs expects hex without '0x' prefix, toeplitz.c needs the prefix
+			fp.write(mask[2:])
 
	return mask
+85 -59
tools/testing/selftests/kvm/x86/amx_test.c
···
		: : "a"(tile), "d"(0));
 }
 
+static inline int tileloadd_safe(void *tile)
+{
+	return kvm_asm_safe(".byte 0xc4,0xe2,0x7b,0x4b,0x04,0x10",
+			    "a"(tile), "d"(0));
+}
+
 static inline void __tilerelease(void)
 {
	asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::);
···
	}
 }
 
+enum {
+	/* Retrieve TMM0 from guest, stash it for TEST_RESTORE_TILEDATA */
+	TEST_SAVE_TILEDATA = 1,
+
+	/* Check TMM0 against tiledata */
+	TEST_COMPARE_TILEDATA = 2,
+
+	/* Restore TMM0 from earlier save */
+	TEST_RESTORE_TILEDATA = 4,
+
+	/* Full VM save/restore */
+	TEST_SAVE_RESTORE = 8,
+};
+
 static void __attribute__((__flatten__)) guest_code(struct tile_config *amx_cfg,
						    struct tile_data *tiledata,
						    struct xstate *xstate)
 {
+	int vector;
+
	GUEST_ASSERT(this_cpu_has(X86_FEATURE_XSAVE) &&
		     this_cpu_has(X86_FEATURE_OSXSAVE));
	check_xtile_info();
-	GUEST_SYNC(1);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
 
	/* xfd=0, enable amx */
	wrmsr(MSR_IA32_XFD, 0);
-	GUEST_SYNC(2);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == 0);
	set_tilecfg(amx_cfg);
	__ldtilecfg(amx_cfg);
-	GUEST_SYNC(3);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
	/* Check save/restore when trap to userspace */
	__tileloadd(tiledata);
-	GUEST_SYNC(4);
+	GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE);
+
+	/* xfd=0x40000, disable amx tiledata */
+	wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA);
+
+	/* host tries setting tiledata while guest XFD is set */
+	GUEST_SYNC(TEST_RESTORE_TILEDATA);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
+
+	wrmsr(MSR_IA32_XFD, 0);
	__tilerelease();
-	GUEST_SYNC(5);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
	/*
	 * After XSAVEC, XTILEDATA is cleared in the xstate_bv but is set in
	 * the xcomp_bv.
···
	__xsavec(xstate, XFEATURE_MASK_XTILE_DATA);
	GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA));
	GUEST_ASSERT(xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA);
+
+	/* #NM test */
 
	/* xfd=0x40000, disable amx tiledata */
	wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA);
···
	GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA));
	GUEST_ASSERT((xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA));
 
-	GUEST_SYNC(6);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);
	set_tilecfg(amx_cfg);
	__ldtilecfg(amx_cfg);
+
	/* Trigger #NM exception */
-	__tileloadd(tiledata);
-	GUEST_SYNC(10);
+	vector = tileloadd_safe(tiledata);
+	__GUEST_ASSERT(vector == NM_VECTOR,
+		       "Wanted #NM on tileloadd with XFD[18]=1, got %s",
+		       ex_str(vector));
 
-	GUEST_DONE();
-}
-
-void guest_nm_handler(struct ex_regs *regs)
-{
-	/* Check if #NM is triggered by XFEATURE_MASK_XTILE_DATA */
-	GUEST_SYNC(7);
	GUEST_ASSERT(!(get_cr0() & X86_CR0_TS));
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA);
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);
-	GUEST_SYNC(8);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA);
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);
	/* Clear xfd_err */
	wrmsr(MSR_IA32_XFD_ERR, 0);
	/* xfd=0, enable amx */
	wrmsr(MSR_IA32_XFD, 0);
-	GUEST_SYNC(9);
+	GUEST_SYNC(TEST_SAVE_RESTORE);
+
+	__tileloadd(tiledata);
+	GUEST_SYNC(TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE);
+
+	GUEST_DONE();
 }
 
 int main(int argc, char *argv[])
···
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;
	struct kvm_x86_state *state;
+	struct kvm_x86_state *tile_state = NULL;
	int xsave_restore_size;
	vm_vaddr_t amx_cfg, tiledata, xstate;
	struct ucall uc;
-	u32 amx_offset;
	int ret;
 
	/*
···
 
	vcpu_regs_get(vcpu, &regs1);
 
-	/* Register #NM handler */
-	vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);
-
	/* amx cfg for guest_code */
	amx_cfg = vm_vaddr_alloc_page(vm);
	memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());
···
	memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
	vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate);
 
+	int iter = 0;
	for (;;) {
		vcpu_run(vcpu);
		TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
···
			REPORT_GUEST_ASSERT(uc);
			/* NOT REACHED */
		case UCALL_SYNC:
-			switch (uc.args[1]) {
-			case 1:
-			case 2:
-			case 3:
-			case 5:
-			case 6:
-			case 7:
-			case 8:
-				fprintf(stderr, "GUEST_SYNC(%ld)\n", uc.args[1]);
-				break;
-			case 4:
-			case 10:
-				fprintf(stderr,
-					"GUEST_SYNC(%ld), check save/restore status\n", uc.args[1]);
+			++iter;
+			if (uc.args[1] & TEST_SAVE_TILEDATA) {
+				fprintf(stderr, "GUEST_SYNC #%d, save tiledata\n", iter);
+				tile_state = vcpu_save_state(vcpu);
+			}
+			if (uc.args[1] & TEST_COMPARE_TILEDATA) {
+				fprintf(stderr, "GUEST_SYNC #%d, check TMM0 contents\n", iter);
 
				/* Compacted mode, get amx offset by xsave area
				 * size subtract 8K amx size.
				 */
-				amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE;
-				state = vcpu_save_state(vcpu);
-				void *amx_start = (void *)state->xsave + amx_offset;
+				u32 amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE;
+				void *amx_start = (void *)tile_state->xsave + amx_offset;
				void *tiles_data = (void *)addr_gva2hva(vm, tiledata);
				/* Only check TMM0 register, 1 tile */
				ret = memcmp(amx_start, tiles_data, TILE_SIZE);
				TEST_ASSERT(ret == 0, "memcmp failed, ret=%d", ret);
+			}
+			if (uc.args[1] & TEST_RESTORE_TILEDATA) {
+				fprintf(stderr, "GUEST_SYNC #%d, before KVM_SET_XSAVE\n", iter);
+				vcpu_xsave_set(vcpu, tile_state->xsave);
+				fprintf(stderr, "GUEST_SYNC #%d, after KVM_SET_XSAVE\n", iter);
+			}
+			if (uc.args[1] & TEST_SAVE_RESTORE) {
+				fprintf(stderr, "GUEST_SYNC #%d, save/restore VM state\n", iter);
+				state = vcpu_save_state(vcpu);
+				memset(&regs1, 0, sizeof(regs1));
+				vcpu_regs_get(vcpu, &regs1);
+
+				kvm_vm_release(vm);
+
+				/* Restore state in a new VM. */
+				vcpu = vm_recreate_with_one_vcpu(vm);
+				vcpu_load_state(vcpu, state);
				kvm_x86_state_cleanup(state);
-				break;
-			case 9:
-				fprintf(stderr,
-					"GUEST_SYNC(%ld), #NM exception and enable amx\n", uc.args[1]);
-				break;
+
+				memset(&regs2, 0, sizeof(regs2));
+				vcpu_regs_get(vcpu, &regs2);
+				TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)),
+					    "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx",
+					    (ulong) regs2.rdi, (ulong) regs2.rsi);
			}
			break;
		case UCALL_DONE:
···
			TEST_FAIL("Unknown ucall %lu", uc.cmd);
		}
 
-		state = vcpu_save_state(vcpu);
-		memset(&regs1, 0, sizeof(regs1));
-		vcpu_regs_get(vcpu, &regs1);
-
-		kvm_vm_release(vm);
-
-		/* Restore state in a new VM. */
-		vcpu = vm_recreate_with_one_vcpu(vm);
-		vcpu_load_state(vcpu, state);
-		kvm_x86_state_cleanup(state);
-
-		memset(&regs2, 0, sizeof(regs2));
-		vcpu_regs_get(vcpu, &regs2);
-		TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)),
-			    "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx",
-			    (ulong) regs2.rdi, (ulong) regs2.rsi);
	}
 done:
	kvm_vm_free(vm);
···
	if (rw && shared && fs_is_unknown(fs_type)) {
		ksft_print_msg("Unknown filesystem\n");
		result = KSFT_SKIP;
-		return;
+		break;
	}
	/*
	 * R/O pinning or pinning in a private mapping is always
+357 -27
tools/testing/selftests/mm/merge.c
···
	struct procmap_fd procmap;
 };
 
+static char *map_carveout(unsigned int page_size)
+{
+	return mmap(NULL, 30 * page_size, PROT_NONE,
+		    MAP_ANON | MAP_PRIVATE, -1, 0);
+}
+
+static pid_t do_fork(struct procmap_fd *procmap)
+{
+	pid_t pid = fork();
+
+	if (pid == -1)
+		return -1;
+	if (pid != 0) {
+		wait(NULL);
+		return pid;
+	}
+
+	/* Reopen for child. */
+	if (close_procmap(procmap))
+		return -1;
+	if (open_self_procmap(procmap))
+		return -1;
+
+	return 0;
+}
+
 FIXTURE_SETUP(merge)
 {
	self->page_size = psize();
	/* Carve out PROT_NONE region to map over. */
-	self->carveout = mmap(NULL, 30 * self->page_size, PROT_NONE,
-			      MAP_ANON | MAP_PRIVATE, -1, 0);
+	self->carveout = map_carveout(self->page_size);
	ASSERT_NE(self->carveout, MAP_FAILED);
	/* Setup PROCMAP_QUERY interface. */
	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
···
 FIXTURE_TEARDOWN(merge)
 {
	ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0);
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
+	/* May fail for parent of forked process. */
+	close_procmap(&self->procmap);
	/*
	 * Clear unconditionally, as some tests set this. It is no issue if this
	 * fails (KSM may be disabled for instance).
	 */
+	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
+}
+
+FIXTURE(merge_with_fork)
+{
+	unsigned int page_size;
+	char *carveout;
+	struct procmap_fd procmap;
+};
+
+FIXTURE_VARIANT(merge_with_fork)
+{
+	bool forked;
+};
+
+FIXTURE_VARIANT_ADD(merge_with_fork, forked)
+{
+	.forked = true,
+};
+
+FIXTURE_VARIANT_ADD(merge_with_fork, unforked)
+{
+	.forked = false,
+};
+
+FIXTURE_SETUP(merge_with_fork)
+{
+	self->page_size = psize();
+	self->carveout = map_carveout(self->page_size);
+	ASSERT_NE(self->carveout, MAP_FAILED);
+	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
+}
+
+FIXTURE_TEARDOWN(merge_with_fork)
+{
+	ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0);
+	ASSERT_EQ(close_procmap(&self->procmap), 0);
+	/* See above. */
	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
 }
···
	unsigned int page_size = self->page_size;
	char *carveout = self->carveout;
	struct procmap_fd *procmap = &self->procmap;
-	pid_t pid;
	char *ptr, *ptr2;
+	pid_t pid;
	int i;
 
	/*
···
	 */
	ptr[0] = 'x';
 
-	pid = fork();
+	pid = do_fork(&self->procmap);
	ASSERT_NE(pid, -1);
-
-	if (pid != 0) {
-		wait(NULL);
+	if (pid != 0)
		return;
-	}
-
-	/* Child process below: */
-
-	/* Reopen for child. */
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
-	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
 
	/* unCOWing everything does not cause the AVC to go away. */
	for (i = 0; i < 5 * page_size; i += page_size)
···
	unsigned int page_size = self->page_size;
	char *carveout = self->carveout;
	struct procmap_fd *procmap = &self->procmap;
-	pid_t pid;
	char *ptr, *ptr2;
+	pid_t pid;
	int i;
 
	/*
···
	 */
	ptr[0] = 'x';
 
-	pid = fork();
+	pid = do_fork(&self->procmap);
	ASSERT_NE(pid, -1);
-
-	if (pid != 0) {
-		wait(NULL);
+	if (pid != 0)
		return;
-	}
-
-	/* Child process below: */
-
-	/* Reopen for child. */
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
-	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
 
	/* unCOWing everything does not cause the AVC to go away. */
	for (i = 0; i < 5 * page_size; i += page_size)
···
	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 15 * page_size);
+}
+
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	unsigned long offset;
+	char *ptr_a, *ptr_b;
+
+	/*
+	 * mremap() such that A and B merge:
+	 *
+	 *             |------------|
+	 *             |          \ |
+	 * |-----------|  |       / |---------|
+	 * | unfaulted |  v       \ | faulted |
+	 * |-----------|          / |---------|
+	 *       B                \      A
+	 */
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size + 3 * page_size],
+		     3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+	/* Fault it in. */
+	ptr_a[0] = 'x';
+
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
+	/*
+	 * Now move it out of the way so we can place VMA B in position,
+	 * unfaulted.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/*
+	 * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* The VMAs should have merged, if not forked. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+
+	offset = variant->forked ? 3 * page_size : 6 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + offset);
+}
+
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	unsigned long offset;
+	char *ptr_a, *ptr_b;
+
+	/*
+	 * mremap() such that A and B merge:
+	 *
+	 *      |---------------------------|
+	 *      |                         \ |
+	 *      |    |-----------|        / |---------|
+	 *      v    | unfaulted |        \ | faulted |
+	 *           |-----------|        / |---------|
+	 *                 B              \      A
+	 *
+	 * Then unmap VMA A to trigger the bug.
+	 */
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+	/* Fault it in. */
+	ptr_a[0] = 'x';
+
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
+	/*
+	 * Now move it out of the way so we can place VMA B in position,
+	 * unfaulted.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/*
+	 * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* The VMAs should have merged, if not forked. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+	offset = variant->forked ? 3 * page_size : 6 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset);
+}
+
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_unfaulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	unsigned long offset;
+	char *ptr_a, *ptr_b, *ptr_c;
+
+	/*
+	 * mremap() with MREMAP_DONTUNMAP such that A, B and C merge:
+	 *
+	 *                  |---------------------------|
+	 *                  |                         \ |
+	 * |-----------|    |    |-----------|        / |---------|
+	 * | unfaulted |    v    | unfaulted |        \ | faulted |
+	 * |-----------|         |-----------|        / |---------|
+	 *       A                     C              \      B
+	 */
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+	/* Fault it in. */
+	ptr_b[0] = 'x';
+
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
+	/*
+	 * Now move it out of the way so we can place VMAs A, C in position,
+	 * unfaulted.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* Map VMA A into place. */
+
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA C into place. */
+	ptr_c = mmap(&self->carveout[page_size + 3 * page_size + 3 * page_size],
+		     3 * page_size, PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_c, MAP_FAILED);
+
+	/*
+	 * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* The VMAs should have merged, if not forked. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+	offset = variant->forked ? 3 * page_size : 9 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset);
+
+	/* If forked, B and C should also not have merged. */
+	if (variant->forked) {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 3 * page_size);
+	}
+}
+
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_faulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	char *ptr_a, *ptr_b, *ptr_bc;
+
+	/*
+	 * mremap() with MREMAP_DONTUNMAP such that A, B and C merge:
+	 *
+	 *                  |---------------------------|
+	 *                  |                         \ |
+	 * |-----------|    |    |-----------|        / |---------|
+	 * | unfaulted |    v    | faulted   |        \ | faulted |
+	 * |-----------|         |-----------|        / |---------|
+	 *       A                     C              \      B
+	 */
+
+	/*
+	 * Map VMA B and C into place. We have to map them together so their
+	 * anon_vma is the same and the vma->vm_pgoff's are correctly aligned.
+	 */
+	ptr_bc = mmap(&self->carveout[page_size + 3 * page_size],
+		      3 * page_size + 3 * page_size,
+		      PROT_READ | PROT_WRITE,
+		      MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_bc, MAP_FAILED);
+
+	/* Fault it in. */
+	ptr_bc[0] = 'x';
+
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
+	/*
+	 * Now move VMA B out the way (splitting VMA BC) so we can place VMA A
+	 * in position, unfaulted, and leave the remainder of the VMA we just
+	 * moved in place, faulted, as VMA C.
+	 */
+	ptr_b = mremap(ptr_bc, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/*
+	 * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* The VMAs should have merged. A,B,C if unforked, B, C if forked. */
+	if (variant->forked) {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 6 * page_size);
+	} else {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+	}
 }
 
 TEST_HARNESS_MAIN
···
 		printf("ok\n");
 	}
+
+	printf("All tests have been executed. Waiting for the other peer...");
+	fflush(stdout);
+
+	/*
+	 * Final full barrier, to ensure that all tests have been run and that
+	 * even the last one succeeded on both sides.
+	 */
+	control_writeln("COMPLETED");
+	control_expectln("COMPLETED");
+
+	printf("ok\n");
 }

 void list_tests(const struct test_case *test_cases)