···
   power-supply: true

+  power-domains:
+    maxItems: 1
+
   resets:
     description: |
       A number of phandles to resets that need to be asserted during
···
 kernel versions by including an arbitrary string of "salt" in it.
 This is specified by the Kconfig symbol ``CONFIG_BUILD_SALT``.

+Git
+---
+
+Uncommitted changes or different commit ids in git can also lead
+to different compilation results. For example, after executing
+``git reset HEAD^``, even if the code is the same, the
+``include/config/kernel.release`` generated during compilation
+will be different, which will eventually lead to binary differences.
+See ``scripts/setlocalversion`` for details.
+
 .. _KBUILD_BUILD_TIMESTAMP: kbuild.html#kbuild-build-timestamp
 .. _KBUILD_BUILD_USER and KBUILD_BUILD_HOST: kbuild.html#kbuild-build-user-kbuild-build-host
 .. _KCFLAGS: kbuild.html#kcflags
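The mechanism this section describes can be sketched in C: the release string derived from the git tree is compiled into the image, so any change in the suffix computed by ``scripts/setlocalversion`` changes the emitted bytes. ``make_release()`` below is a hypothetical illustration of that dependency, not the kernel's actual logic.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch of why git state leaks into the binary: the release
 * string (think include/config/kernel.release) is the base version plus a
 * suffix derived from the git tree, and that string ends up compiled into
 * the image. make_release() is illustrative only, not setlocalversion.
 */
static void make_release(char *out, size_t cap, const char *base,
			 const char *scm_suffix)
{
	snprintf(out, cap, "%s%s", base, scm_suffix);
}
```

Two trees with identical code but different git state (e.g. after `git reset HEAD^`) produce different suffixes, hence different embedded strings, hence non-identical binaries.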
+3
Documentation/virt/kvm/devices/vm.rst
···
 :Parameters: address of a buffer in user space to store the data (u8) to
 :Returns:    -EFAULT if the given address is not accessible from kernel space;
              -EINVAL if setting the TOD clock extension to != 0 is not supported
+             -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)

 3.2. ATTRIBUTE: KVM_S390_VM_TOD_LOW
 -----------------------------------
···
 :Parameters: address of a buffer in user space to store the data (u64) to
 :Returns:    -EFAULT if the given address is not accessible from kernel space
+             -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)

 3.3. ATTRIBUTE: KVM_S390_VM_TOD_EXT
 -----------------------------------
···
              (kvm_s390_vm_tod_clock) to
 :Returns:    -EFAULT if the given address is not accessible from kernel space;
              -EINVAL if setting the TOD clock extension to != 0 is not supported
+             -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)

 4. GROUP: KVM_S390_VM_CRYPTO
 ============================
+13 -10
MAINTAINERS
···
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	Documentation/devicetree/bindings/arm/bcm/brcm,bcmbca.yaml
 F:	arch/arm64/boot/dts/broadcom/bcmbca/*
 N:	bcmbca
···
 L:	linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
 F:	drivers/pci/controller/pcie-brcmstb.c
 F:	drivers/staging/vc04_services
···
 M:	Scott Branden <sbranden@broadcom.com>
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 S:	Maintained
-T:	git git://github.com/broadcom/mach-bcm
+T:	git https://github.com/broadcom/mach-bcm
 F:	arch/arm/mach-bcm/
 N:	bcm281*
 N:	bcm113*
···
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
 F:	arch/arm/boot/dts/bcm7*.dts*
 F:	arch/arm/include/asm/hardware/cache-b15-rac.h
···
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-mips@vger.kernel.org
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	arch/mips/bmips/*
 F:	arch/mips/boot/dts/brcm/bcm*.dts*
 F:	arch/mips/include/asm/mach-bmips/*
···
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	arch/arm64/boot/dts/broadcom/northstar2/*
 F:	arch/arm64/boot/dts/broadcom/stingray/*
 F:	drivers/clk/bcm/clk-ns*
···
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
-T:	git git://github.com/broadcom/stblinux.git
+T:	git https://github.com/broadcom/stblinux.git
 F:	drivers/soc/bcm/bcm63xx/bcm-pmb.c
 F:	include/dt-bindings/soc/bcm-pmb.h
···
 CISCO VIC ETHERNET NIC DRIVER
 M:	Christian Benvenuti <benve@cisco.com>
-M:	Govindarajulu Varadarajan <_govind@gmx.com>
+M:	Satish Kharat <satishkh@cisco.com>
 S:	Supported
 F:	drivers/net/ethernet/cisco/enic/
···
 F:	drivers/i2c/busses/i2c-hisi.c

 HISILICON LPC BUS DRIVER
-M:	john.garry@huawei.com
+M:	Jay Fang <f.fangjian@huawei.com>
 S:	Maintained
 W:	http://www.hisilicon.com
 F:	Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml
···
 F:	drivers/pci/hotplug/rpaphp*

 IBM Power SRIOV Virtual NIC Device Driver
-M:	Dany Madden <drt@linux.ibm.com>
+M:	Haren Myneni <haren@linux.ibm.com>
+M:	Rick Lindsley <ricklind@linux.ibm.com>
+R:	Nick Child <nnac123@linux.ibm.com>
+R:	Dany Madden <danymadden@us.ibm.com>
 R:	Thomas Falcon <tlfalcon@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
+2 -2
Makefile
···
 VERSION = 6
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
···
 cmd_ar_vmlinux.a = \
 	rm -f $@; \
 	$(AR) cDPrST $@ $(KBUILD_VMLINUX_OBJS); \
-	$(AR) mPiT $$($(AR) t $@ | head -n1) $@ $$($(AR) t $@ | grep -F --file=$(srctree)/scripts/head-object-list.txt)
+	$(AR) mPiT $$($(AR) t $@ | sed -n 1p) $@ $$($(AR) t $@ | grep -F -f $(srctree)/scripts/head-object-list.txt)

 targets += vmlinux.a
 vmlinux.a: $(KBUILD_VMLINUX_OBJS) scripts/head-object-list.txt autoksyms_recursive FORCE
+4
arch/arm/boot/dts/imx6q-yapp4-crux.dts
···
 	status = "okay";
 };

+&reg_pu {
+	regulator-always-on;
+};
+
 &reg_usb_h1_vbus {
 	status = "okay";
 };
···
 #define VE_GET_PORT_NUM(e)	((e) >> 16)
 #define VE_IS_IO_STRING(e)	((e) & BIT(4))

+#define ATTR_SEPT_VE_DISABLE	BIT(28)
+
 /*
  * Wrapper for standard use of __tdx_hypercall with no output aside from
  * return code.
···
 		panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
 }

-static u64 get_cc_mask(void)
+static void tdx_parse_tdinfo(u64 *cc_mask)
 {
 	struct tdx_module_output out;
 	unsigned int gpa_width;
+	u64 td_attr;

 	/*
 	 * TDINFO TDX module call is used to get the TD execution environment
 	 * information, etc. More details about the ABI can be found in TDX
 	 * Guest-Host-Communication Interface (GHCI), section 2.4.2 TDCALL
 	 * [TDG.VP.INFO].
-	 *
-	 * The GPA width that comes out of this call is critical. TDX guests
-	 * can not meaningfully run without it.
 	 */
 	tdx_module_call(TDX_GET_INFO, 0, 0, 0, 0, &out);
-
-	gpa_width = out.rcx & GENMASK(5, 0);

 	/*
 	 * The highest bit of a guest physical address is the "sharing" bit.
 	 * Set it for shared pages and clear it for private pages.
+	 *
+	 * The GPA width that comes out of this call is critical. TDX guests
+	 * can not meaningfully run without it.
 	 */
-	return BIT_ULL(gpa_width - 1);
+	gpa_width = out.rcx & GENMASK(5, 0);
+	*cc_mask = BIT_ULL(gpa_width - 1);
+
+	/*
+	 * The kernel can not handle #VE's when accessing normal kernel
+	 * memory. Ensure that no #VE will be delivered for accesses to
+	 * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
+	 */
+	td_attr = out.rdx;
+	if (!(td_attr & ATTR_SEPT_VE_DISABLE))
+		panic("TD misconfiguration: SEPT_VE_DISABLE attibute must be set.\n");
 }

 /*
···
 	setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);

 	cc_set_vendor(CC_VENDOR_INTEL);
-	cc_mask = get_cc_mask();
+	tdx_parse_tdinfo(&cc_mask);
 	cc_set_mask(cc_mask);

 	/*
···
 	case RAPL_UNIT_QUIRK_INTEL_HSW:
 		rapl_hw_unit[PERF_RAPL_RAM] = 16;
 		break;
-	/*
-	 * SPR shares the same DRAM domain energy unit as HSW, plus it
-	 * also has a fixed energy unit for Psys domain.
-	 */
+	/* SPR uses a fixed energy unit for Psys domain. */
 	case RAPL_UNIT_QUIRK_INTEL_SPR:
-		rapl_hw_unit[PERF_RAPL_RAM] = 16;
 		rapl_hw_unit[PERF_RAPL_PSYS] = 0;
 		break;
 	default:
···
 		pr_warn("ACPI NUMA: Failed to add memblk for CFMWS node %d [mem %#llx-%#llx]\n",
 			node, start, end);
 	}
+	node_set(node, numa_nodes_parsed);

 	/* Set the next available fake_pxm value */
 	(*fake_pxm)++;
···
 	/* remap drive's physical memory address */
 	mem = devm_platform_ioremap_resource(pdev, 0);
-	if (!mem)
-		return -ENOMEM;
+	if (IS_ERR(mem))
+		return PTR_ERR(mem);

 	/* request and activate power and reset GPIOs */
 	lda->power = devm_gpiod_get(dev, "power", GPIOD_OUT_HIGH);
+6
drivers/block/Kconfig
···
 	  definition isn't finalized yet, and might change according to future
 	  requirement, so mark is as experimental now.

+	  Say Y if you want to get better performance because task_work_add()
+	  can be used in IO path for replacing io_uring cmd, which will become
+	  shared between IO tasks and ubq daemon, meantime task_work_add() can
+	  can handle batch more effectively, but task_work_add() isn't exported
+	  for module, so ublk has to be built to kernel.
+
 source "drivers/block/rnbd/Kconfig"

 endif # BLK_DEV
+68 -47
drivers/block/ublk_drv.c
···
 #define UBLK_PARAM_TYPE_ALL (UBLK_PARAM_TYPE_BASIC | UBLK_PARAM_TYPE_DISCARD)

 struct ublk_rq_data {
-	struct callback_head work;
+	union {
+		struct callback_head work;
+		struct llist_node node;
+	};
 };

 struct ublk_uring_cmd_pdu {
-	struct request *req;
+	struct ublk_queue *ubq;
 };

 /*
···
 	unsigned long flags;
 	struct task_struct	*ubq_daemon;
 	char *io_cmd_buf;
+
+	struct llist_head	io_cmds;

 	unsigned long io_addr;	/* mapped vm address */
 	unsigned int max_io_sz;
···
 static void ublk_rq_task_work_cb(struct io_uring_cmd *cmd)
 {
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+	struct ublk_queue *ubq = pdu->ubq;
+	struct llist_node *io_cmds = llist_del_all(&ubq->io_cmds);
+	struct ublk_rq_data *data;

-	__ublk_rq_task_work(pdu->req);
+	llist_for_each_entry(data, io_cmds, node)
+		__ublk_rq_task_work(blk_mq_rq_from_pdu(data));
 }

 static void ublk_rq_task_work_fn(struct callback_head *work)
···
 	struct request *req = blk_mq_rq_from_pdu(data);

 	__ublk_rq_task_work(req);
+}
+
+static void ublk_submit_cmd(struct ublk_queue *ubq, const struct request *rq)
+{
+	struct ublk_io *io = &ubq->ios[rq->tag];
+
+	/*
+	 * If the check pass, we know that this is a re-issued request aborted
+	 * previously in monitor_work because the ubq_daemon(cmd's task) is
+	 * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
+	 * because this ioucmd's io_uring context may be freed now if no inflight
+	 * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
+	 *
+	 * Note: monitor_work sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
+	 * the tag). Then the request is re-started(allocating the tag) and we are here.
+	 * Since releasing/allocating a tag implies smp_mb(), finding UBLK_IO_FLAG_ABORTED
+	 * guarantees that here is a re-issued request aborted previously.
+	 */
+	if (unlikely(io->flags & UBLK_IO_FLAG_ABORTED)) {
+		struct llist_node *io_cmds = llist_del_all(&ubq->io_cmds);
+		struct ublk_rq_data *data;
+
+		llist_for_each_entry(data, io_cmds, node)
+			__ublk_abort_rq(ubq, blk_mq_rq_from_pdu(data));
+	} else {
+		struct io_uring_cmd *cmd = io->cmd;
+		struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+
+		pdu->ubq = ubq;
+		io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
+	}
+}
+
+static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq,
+		bool last)
+{
+	struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
+
+	if (ublk_can_use_task_work(ubq)) {
+		enum task_work_notify_mode notify_mode = last ?
+			TWA_SIGNAL_NO_IPI : TWA_NONE;
+
+		if (task_work_add(ubq->ubq_daemon, &data->work, notify_mode))
+			__ublk_abort_rq(ubq, rq);
+	} else {
+		if (llist_add(&data->node, &ubq->io_cmds))
+			ublk_submit_cmd(ubq, rq);
+	}
 }

 static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
···
 	res = ublk_setup_iod(ubq, rq);
 	if (unlikely(res != BLK_STS_OK))
 		return BLK_STS_IOERR;
+
 	/* With recovery feature enabled, force_abort is set in
 	 * ublk_stop_dev() before calling del_gendisk(). We have to
 	 * abort all requeued and new rqs here to let del_gendisk()
···
 	blk_mq_start_request(bd->rq);

 	if (unlikely(ubq_daemon_is_dying(ubq))) {
- fail:
 		__ublk_abort_rq(ubq, rq);
 		return BLK_STS_OK;
 	}

-	if (ublk_can_use_task_work(ubq)) {
-		struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
-		enum task_work_notify_mode notify_mode = bd->last ?
-			TWA_SIGNAL_NO_IPI : TWA_NONE;
-
-		if (task_work_add(ubq->ubq_daemon, &data->work, notify_mode))
-			goto fail;
-	} else {
-		struct ublk_io *io = &ubq->ios[rq->tag];
-		struct io_uring_cmd *cmd = io->cmd;
-		struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
-
-		/*
-		 * If the check pass, we know that this is a re-issued request aborted
-		 * previously in monitor_work because the ubq_daemon(cmd's task) is
-		 * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
-		 * because this ioucmd's io_uring context may be freed now if no inflight
-		 * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
-		 *
-		 * Note: monitor_work sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
-		 * the tag). Then the request is re-started(allocating the tag) and we are here.
-		 * Since releasing/allocating a tag implies smp_mb(), finding UBLK_IO_FLAG_ABORTED
-		 * guarantees that here is a re-issued request aborted previously.
-		 */
-		if ((io->flags & UBLK_IO_FLAG_ABORTED))
-			goto fail;
-
-		pdu->req = rq;
-		io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
-	}
+	ublk_queue_cmd(ubq, rq, bd->last);

 	return BLK_STS_OK;
 }
···
 }

 static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
-		int tag, struct io_uring_cmd *cmd)
+		int tag)
 {
 	struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
 	struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);

-	if (ublk_can_use_task_work(ubq)) {
-		struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
-
-		/* should not fail since we call it just in ubq->ubq_daemon */
-		task_work_add(ubq->ubq_daemon, &data->work, TWA_SIGNAL_NO_IPI);
-	} else {
-		struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
-
-		pdu->req = req;
-		io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
-	}
+	ublk_queue_cmd(ubq, req, true);
 }

 static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
···
 		io->addr = ub_cmd->addr;
 		io->cmd = cmd;
 		io->flags |= UBLK_IO_FLAG_ACTIVE;
-		ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag, cmd);
+		ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
 		break;
 	default:
 		goto out;
···
 	 * (features) to handle.
 	 */
 	ub->dev_info.flags &= UBLK_F_ALL;
+
+	if (!IS_BUILTIN(CONFIG_BLK_DEV_UBLK))
+		ub->dev_info.flags |= UBLK_F_URING_CMD_COMP_IN_TASK;

 	/* We are not ready to support zero copy */
 	ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
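The batching the ublk rework above relies on is a property of the kernel's llist API: `llist_add()` returns true only when the list was previously empty, so only the first queued request schedules the completion callback, and `llist_del_all()` hands the consumer the whole batch in one shot. A minimal userspace model of those two operations (a simplified C11 lock-free stack, not the kernel implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of <linux/llist.h>: a lock-free singly linked stack. */
struct lnode { struct lnode *next; };
struct lhead { _Atomic(struct lnode *) first; };

/* Push a node; returns true iff the list was empty beforehand (the caller
 * should then kick the consumer, mirroring how ublk schedules its
 * task-work callback only for the first request of a batch). */
static bool llist_add(struct lnode *n, struct lhead *h)
{
	struct lnode *first = atomic_load(&h->first);

	do {
		n->next = first;
	} while (!atomic_compare_exchange_weak(&h->first, &first, n));
	return first == NULL;
}

/* Detach the whole batch at once for the consumer to walk. */
static struct lnode *llist_del_all(struct lhead *h)
{
	return atomic_exchange(&h->first, NULL);
}
```

This is why `ublk_queue_cmd()` only calls `ublk_submit_cmd()` when `llist_add()` reports an empty list: later producers piggyback on the callback already scheduled.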
···
 	     iter = to_cxl_port(iter->dev.parent)) {
 		cxl_rr = cxl_rr_load(iter, cxlr);
 		cxld = cxl_rr->decoder;
-		rc = cxld->commit(cxld);
+		if (cxld->commit)
+			rc = cxld->commit(cxld);
 		if (rc)
 			break;
 	}
···
 	xa_for_each(&port->regions, index, iter) {
 		struct cxl_region_params *ip = &iter->region->params;

+		if (!ip->res)
+			continue;
+
 		if (ip->res->start > p->res->start) {
 			dev_dbg(&cxlr->dev,
 				"%s: HPA order violation %s:%pr vs %pr\n",
···
 	return cxl_rr;
 }

-static void free_region_ref(struct cxl_region_ref *cxl_rr)
+static void cxl_rr_free_decoder(struct cxl_region_ref *cxl_rr)
 {
-	struct cxl_port *port = cxl_rr->port;
 	struct cxl_region *cxlr = cxl_rr->region;
 	struct cxl_decoder *cxld = cxl_rr->decoder;
+
+	if (!cxld)
+		return;

 	dev_WARN_ONCE(&cxlr->dev, cxld->region != cxlr, "region mismatch\n");
 	if (cxld->region == cxlr) {
 		cxld->region = NULL;
 		put_device(&cxlr->dev);
 	}
+}

+static void free_region_ref(struct cxl_region_ref *cxl_rr)
+{
+	struct cxl_port *port = cxl_rr->port;
+	struct cxl_region *cxlr = cxl_rr->region;
+
+	cxl_rr_free_decoder(cxl_rr);
 	xa_erase(&port->regions, (unsigned long)cxlr);
 	xa_destroy(&cxl_rr->endpoints);
 	kfree(cxl_rr);
···
 		get_device(&cxlr->dev);
 	}

+	return 0;
+}
+
+static int cxl_rr_alloc_decoder(struct cxl_port *port, struct cxl_region *cxlr,
+				struct cxl_endpoint_decoder *cxled,
+				struct cxl_region_ref *cxl_rr)
+{
+	struct cxl_decoder *cxld;
+
+	if (port == cxled_to_port(cxled))
+		cxld = &cxled->cxld;
+	else
+		cxld = cxl_region_find_decoder(port, cxlr);
+	if (!cxld) {
+		dev_dbg(&cxlr->dev, "%s: no decoder available\n",
+			dev_name(&port->dev));
+		return -EBUSY;
+	}
+
+	if (cxld->region) {
+		dev_dbg(&cxlr->dev, "%s: %s already attached to %s\n",
+			dev_name(&port->dev), dev_name(&cxld->dev),
+			dev_name(&cxld->region->dev));
+		return -EBUSY;
+	}
+
+	cxl_rr->decoder = cxld;
 	return 0;
 }
···
 			cxl_rr->nr_targets++;
 			nr_targets_inc = true;
 		}
-
-		/*
-		 * The decoder for @cxlr was allocated when the region was first
-		 * attached to @port.
-		 */
-		cxld = cxl_rr->decoder;
 	} else {
 		cxl_rr = alloc_region_ref(port, cxlr);
 		if (IS_ERR(cxl_rr)) {
···
 		}
 		nr_targets_inc = true;

-		if (port == cxled_to_port(cxled))
-			cxld = &cxled->cxld;
-		else
-			cxld = cxl_region_find_decoder(port, cxlr);
-		if (!cxld) {
-			dev_dbg(&cxlr->dev, "%s: no decoder available\n",
-				dev_name(&port->dev));
+		rc = cxl_rr_alloc_decoder(port, cxlr, cxled, cxl_rr);
+		if (rc)
 			goto out_erase;
-		}
-
-		if (cxld->region) {
-			dev_dbg(&cxlr->dev, "%s: %s already attached to %s\n",
-				dev_name(&port->dev), dev_name(&cxld->dev),
-				dev_name(&cxld->region->dev));
-			rc = -EBUSY;
-			goto out_erase;
-		}
-
-		cxl_rr->decoder = cxld;
 	}
+	cxld = cxl_rr->decoder;

 	rc = cxl_rr_ep_add(cxl_rr, cxled);
 	if (rc) {
···
 	if (cxl_rr->nr_targets_set) {
 		int i, distance;

-		distance = p->nr_targets / cxl_rr->nr_targets;
+		/*
+		 * Passthrough ports impose no distance requirements between
+		 * peers
+		 */
+		if (port->nr_dports == 1)
+			distance = 0;
+		else
+			distance = p->nr_targets / cxl_rr->nr_targets;
 		for (i = 0; i < cxl_rr->nr_targets_set; i++)
 			if (ep->dport == cxlsd->target[i]) {
 				rc = check_last_peer(cxled, ep, cxl_rr,
···
 static void cxl_region_release(struct device *dev)
 {
+	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev->parent);
 	struct cxl_region *cxlr = to_cxl_region(dev);
+	int id = atomic_read(&cxlrd->region_id);
+
+	/*
+	 * Try to reuse the recently idled id rather than the cached
+	 * next id to prevent the region id space from increasing
+	 * unnecessarily.
+	 */
+	if (cxlr->id < id)
+		if (atomic_try_cmpxchg(&cxlrd->region_id, &id, cxlr->id)) {
+			memregion_free(id);
+			goto out;
+		}

 	memregion_free(cxlr->id);
+out:
+	put_device(dev->parent);
 	kfree(cxlr);
 }
···
 static void unregister_region(void *dev)
 {
 	struct cxl_region *cxlr = to_cxl_region(dev);
+	struct cxl_region_params *p = &cxlr->params;
+	int i;

 	device_del(dev);
+
+	/*
+	 * Now that region sysfs is shutdown, the parameter block is now
+	 * read-only, so no need to hold the region rwsem to access the
+	 * region parameters.
+	 */
+	for (i = 0; i < p->interleave_ways; i++)
+		detach_target(cxlr, i);
+
 	cxl_region_iomem_release(cxlr);
 	put_device(dev);
 }
···
 	device_initialize(dev);
 	lockdep_set_class(&dev->mutex, &cxl_region_key);
 	dev->parent = &cxlrd->cxlsd.cxld.dev;
+	/*
+	 * Keep root decoder pinned through cxl_region_release to fixup
+	 * region id allocations
+	 */
+	get_device(dev->parent);
 	device_set_pm_not_required(dev);
 	dev->bus = &cxl_bus_type;
 	dev->type = &cxl_region_type;
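The `cxl_region_release()` change above reuses a just-freed region id via compare-and-swap instead of always consuming a new one. The shape of that optimization, modeled with userspace C11 atomics (`memregion_free()` is represented by the returned id; this is a sketch, not the kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Model of the id-reuse in cxl_region_release(): if the released id is
 * lower than the cached next id, try to swap it in so the id space stays
 * dense; whichever id loses the race goes back to the allocator
 * (returned here instead of calling memregion_free()).
 */
static int release_region_id(_Atomic int *next_id, int freed_id)
{
	int id = atomic_load(next_id);

	if (freed_id < id &&
	    atomic_compare_exchange_strong(next_id, &id, freed_id))
		return id;	/* the old cached id is freed instead */
	return freed_id;	/* the released id is freed as usual */
}
```

The cmpxchg makes the swap safe against a concurrent region create that grabs the cached id first, which is why the kernel version pins the root decoder across release.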
+3 -1
drivers/cxl/cxl.h
···
 	struct device dev;
 	struct cxl_memdev *cxlmd;
 	struct cxl_nvdimm_bridge *bridge;
-	struct cxl_pmem_region *region;
+	struct xarray pmem_regions;
 };

 struct cxl_pmem_region_mapping {
···
  * @regions: cxl_region_ref instances, regions mapped by this port
  * @parent_dport: dport that points to this port in the parent
  * @decoder_ida: allocator for decoder ids
+ * @nr_dports: number of entries in @dports
  * @hdm_end: track last allocated HDM decoder instance for allocation ordering
  * @commit_end: cursor to track highest committed decoder for commit ordering
  * @component_reg_phys: component register capability base address (optional)
···
 	struct xarray regions;
 	struct cxl_dport *parent_dport;
 	struct ida decoder_ida;
+	int nr_dports;
 	int hdm_end;
 	int commit_end;
 	resource_size_t component_reg_phys;
+68 -39
drivers/cxl/pmem.c
···
 	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
 	struct cxl_nvdimm_bridge *cxl_nvb = cxl_nvd->bridge;
 	struct cxl_pmem_region *cxlr_pmem;
+	unsigned long index;

 	device_lock(&cxl_nvb->dev);
-	cxlr_pmem = cxl_nvd->region;
 	dev_set_drvdata(&cxl_nvd->dev, NULL);
-	cxl_nvd->region = NULL;
-	device_unlock(&cxl_nvb->dev);
+	xa_for_each(&cxl_nvd->pmem_regions, index, cxlr_pmem) {
+		get_device(&cxlr_pmem->dev);
+		device_unlock(&cxl_nvb->dev);

-	if (cxlr_pmem) {
 		device_release_driver(&cxlr_pmem->dev);
 		put_device(&cxlr_pmem->dev);
+
+		device_lock(&cxl_nvb->dev);
 	}
+	device_unlock(&cxl_nvb->dev);

 	nvdimm_delete(nvdimm);
 	cxl_nvd->bridge = NULL;
···
 	*cmd = (struct nd_cmd_get_config_size) {
 		.config_size = cxlds->lsa_size,
-		.max_xfer = cxlds->payload_size,
+		.max_xfer = cxlds->payload_size - sizeof(struct cxl_mbox_set_lsa),
 	};

 	return 0;
···
 		return -EINVAL;

 	/* 4-byte status follows the input data in the payload */
-	if (struct_size(cmd, in_buf, cmd->in_length) + 4 > buf_len)
+	if (size_add(struct_size(cmd, in_buf, cmd->in_length), 4) > buf_len)
 		return -EINVAL;

 	set_lsa =
···

 static void unregister_nvdimm_region(void *nd_region)
 {
-	struct cxl_nvdimm_bridge *cxl_nvb;
-	struct cxl_pmem_region *cxlr_pmem;
-	int i;
+	nvdimm_region_delete(nd_region);
+}

-	cxlr_pmem = nd_region_provider_data(nd_region);
-	cxl_nvb = cxlr_pmem->bridge;
+static int cxl_nvdimm_add_region(struct cxl_nvdimm *cxl_nvd,
+				 struct cxl_pmem_region *cxlr_pmem)
+{
+	int rc;
+
+	rc = xa_insert(&cxl_nvd->pmem_regions, (unsigned long)cxlr_pmem,
+		       cxlr_pmem, GFP_KERNEL);
+	if (rc)
+		return rc;
+
+	get_device(&cxlr_pmem->dev);
+	return 0;
+}
+
+static void cxl_nvdimm_del_region(struct cxl_nvdimm *cxl_nvd,
+				  struct cxl_pmem_region *cxlr_pmem)
+{
+	/*
+	 * It is possible this is called without a corresponding
+	 * cxl_nvdimm_add_region for @cxlr_pmem
+	 */
+	cxlr_pmem = xa_erase(&cxl_nvd->pmem_regions, (unsigned long)cxlr_pmem);
+	if (cxlr_pmem)
+		put_device(&cxlr_pmem->dev);
+}
+
+static void release_mappings(void *data)
+{
+	int i;
+	struct cxl_pmem_region *cxlr_pmem = data;
+	struct cxl_nvdimm_bridge *cxl_nvb = cxlr_pmem->bridge;
+
 	device_lock(&cxl_nvb->dev);
 	for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
 		struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
 		struct cxl_nvdimm *cxl_nvd = m->cxl_nvd;

-		if (cxl_nvd->region) {
-			put_device(&cxlr_pmem->dev);
-			cxl_nvd->region = NULL;
-		}
+		cxl_nvdimm_del_region(cxl_nvd, cxlr_pmem);
 	}
 	device_unlock(&cxl_nvb->dev);
-
-	nvdimm_region_delete(nd_region);
 }

 static void cxlr_pmem_remove_resource(void *res)
···
 	if (!cxl_nvb->nvdimm_bus) {
 		dev_dbg(dev, "nvdimm bus not found\n");
 		rc = -ENXIO;
-		goto err;
+		goto out_nvb;
 	}

 	memset(&mappings, 0, sizeof(mappings));
···
 	res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
 	if (!res) {
 		rc = -ENOMEM;
-		goto err;
+		goto out_nvb;
 	}

 	res->name = "Persistent Memory";
···

 	rc = insert_resource(&iomem_resource, res);
 	if (rc)
-		goto err;
+		goto out_nvb;

 	rc = devm_add_action_or_reset(dev, cxlr_pmem_remove_resource, res);
 	if (rc)
-		goto err;
+		goto out_nvb;

 	ndr_desc.res = res;
 	ndr_desc.provider_data = cxlr_pmem;
···
 	nd_set = devm_kzalloc(dev, sizeof(*nd_set), GFP_KERNEL);
 	if (!nd_set) {
 		rc = -ENOMEM;
-		goto err;
+		goto out_nvb;
 	}

 	ndr_desc.memregion = cxlr->id;
···
 	info = kmalloc_array(cxlr_pmem->nr_mappings, sizeof(*info), GFP_KERNEL);
 	if (!info) {
 		rc = -ENOMEM;
-		goto err;
+		goto out_nvb;
 	}
+
+	rc = devm_add_action_or_reset(dev, release_mappings, cxlr_pmem);
+	if (rc)
+		goto out_nvd;

 	for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
 		struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
···
 			dev_dbg(dev, "[%d]: %s: no cxl_nvdimm found\n", i,
 				dev_name(&cxlmd->dev));
 			rc = -ENODEV;
-			goto err;
+			goto out_nvd;
 		}

 		/* safe to drop ref now with bridge lock held */
···
 			dev_dbg(dev, "[%d]: %s: no nvdimm found\n", i,
 				dev_name(&cxlmd->dev));
 			rc = -ENODEV;
-			goto err;
+			goto out_nvd;
 		}
-		cxl_nvd->region = cxlr_pmem;
-		get_device(&cxlr_pmem->dev);
+
+		/*
+		 * Pin the region per nvdimm device as those may be released
+		 * out-of-order with respect to the region, and a single nvdimm
+		 * maybe associated with multiple regions
+		 */
+		rc = cxl_nvdimm_add_region(cxl_nvd, cxlr_pmem);
+		if (rc)
+			goto out_nvd;
 		m->cxl_nvd = cxl_nvd;
 		mappings[i] = (struct nd_mapping_desc) {
 			.nvdimm = nvdimm,
···
 		nvdimm_pmem_region_create(cxl_nvb->nvdimm_bus, &ndr_desc);
 	if (!cxlr_pmem->nd_region) {
 		rc = -ENOMEM;
-		goto err;
+		goto out_nvd;
 	}

 	rc = devm_add_action_or_reset(dev, unregister_nvdimm_region,
 				      cxlr_pmem->nd_region);
-out:
+out_nvd:
 	kfree(info);
+out_nvb:
 	device_unlock(&cxl_nvb->dev);
 	put_device(&cxl_nvb->dev);

 	return rc;
-
-err:
-	dev_dbg(dev, "failed to create nvdimm region\n");
-	for (i--; i >= 0; i--) {
-		nvdimm = mappings[i].nvdimm;
-		cxl_nvd = nvdimm_provider_data(nvdimm);
-		put_device(&cxl_nvd->region->dev);
-		cxl_nvd->region = NULL;
-	}
-	goto out;
 }

 static struct cxl_driver cxl_pmem_region_driver = {
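The `set_lsa` bounds-check change above swaps a raw addition for `size_add()` because `struct_size(cmd, in_buf, cmd->in_length)` can be driven to a value near `SIZE_MAX`, making `+ 4` wrap around to a small number that slips past the `> buf_len` test. `size_add()` saturates at `SIZE_MAX` instead. A userspace re-implementation of the `<linux/overflow.h>` helper's behavior, for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Saturating add, mirroring size_add() from <linux/overflow.h>: on
 * wraparound, return SIZE_MAX so a subsequent "> buf_len" comparison
 * always rejects the request. */
static size_t size_add_sat(size_t a, size_t b)
{
	size_t sum = a + b;	/* unsigned overflow is well-defined */

	return sum < a ? SIZE_MAX : sum;
}

/* The set_lsa-style check, in both broken (raw add) and fixed form. */
static bool rejects_payload(size_t payload, size_t buf_len, bool use_sat)
{
	size_t need = use_sat ? size_add_sat(payload, 4) : payload + 4;

	return need > buf_len;
}
```

With a near-`SIZE_MAX` payload size, the raw addition wraps to a tiny value and the bad request is accepted; the saturating form rejects it.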
···
 		msg_tx_prepare(channel->req.msg, xfer);
 		ret = invoke_process_msg_channel(channel, msg_command_size(xfer));
 	} else {
-		shmem_tx_prepare(channel->req.shmem, xfer);
+		shmem_tx_prepare(channel->req.shmem, xfer, cinfo);
 		ret = invoke_process_smt_channel(channel);
 	}
+27 -4
drivers/firmware/arm_scmi/shmem.c
···
  * Copyright (C) 2019 ARM Ltd.
  */

+#include <linux/ktime.h>
 #include <linux/io.h>
 #include <linux/processor.h>
 #include <linux/types.h>
+
+#include <asm-generic/bug.h>

 #include "common.h"
···
 };

 void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
-		      struct scmi_xfer *xfer)
+		      struct scmi_xfer *xfer, struct scmi_chan_info *cinfo)
 {
+	ktime_t stop;
+
 	/*
 	 * Ideally channel must be free by now unless OS timeout last
 	 * request and platform continued to process the same, wait
 	 * until it releases the shared memory, otherwise we may endup
-	 * overwriting its response with new message payload or vice-versa
+	 * overwriting its response with new message payload or vice-versa.
+	 * Giving up anyway after twice the expected channel timeout so as
+	 * not to bail-out on intermittent issues where the platform is
+	 * occasionally a bit slower to answer.
+	 *
+	 * Note that after a timeout is detected we bail-out and carry on but
+	 * the transport functionality is probably permanently compromised:
+	 * this is just to ease debugging and avoid complete hangs on boot
+	 * due to a misbehaving SCMI firmware.
 	 */
-	spin_until_cond(ioread32(&shmem->channel_status) &
-			SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+	stop = ktime_add_ms(ktime_get(), 2 * cinfo->rx_timeout_ms);
+	spin_until_cond((ioread32(&shmem->channel_status) &
+			 SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE) ||
+			 ktime_after(ktime_get(), stop));
+	if (!(ioread32(&shmem->channel_status) &
+	      SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE)) {
+		WARN_ON_ONCE(1);
+		dev_err(cinfo->dev,
+			"Timeout waiting for a free TX channel !\n");
+		return;
+	}
+
 	/* Mark channel busy + clear error */
 	iowrite32(0x0, &shmem->channel_status);
 	iowrite32(xfer->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
···
 {
 	unsigned long flags;
 	DECLARE_COMPLETION_ONSTACK(vioch_shutdown_done);
-	void *deferred_wq = NULL;

 	/*
 	 * Prepare to wait for the last release if not already released
···

 	vioch->shutdown_done = &vioch_shutdown_done;
 	virtio_break_device(vioch->vqueue->vdev);
-	if (!vioch->is_rx && vioch->deferred_tx_wq) {
-		deferred_wq = vioch->deferred_tx_wq;
+	if (!vioch->is_rx && vioch->deferred_tx_wq)
 		/* Cannot be kicked anymore after this...*/
 		vioch->deferred_tx_wq = NULL;
-	}
 	spin_unlock_irqrestore(&vioch->lock, flags);
-
-	if (deferred_wq)
-		destroy_workqueue(deferred_wq);

 	scmi_vio_channel_release(vioch);
···
 	return vioch && !vioch->cinfo;
 }

+static void scmi_destroy_tx_workqueue(void *deferred_tx_wq)
+{
+	destroy_workqueue(deferred_tx_wq);
+}
+
 static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
 			     bool tx)
 {
···

 	/* Setup a deferred worker for polling. */
 	if (tx && !vioch->deferred_tx_wq) {
+		int ret;
+
 		vioch->deferred_tx_wq =
 			alloc_workqueue(dev_name(&scmi_vdev->dev),
 					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
 					0);
 		if (!vioch->deferred_tx_wq)
 			return -ENOMEM;
+
+		ret = devm_add_action_or_reset(dev, scmi_destroy_tx_workqueue,
+					       vioch->deferred_tx_wq);
+		if (ret)
+			return ret;

 		INIT_WORK(&vioch->deferred_tx_work,
 			  scmi_vio_deferred_tx_worker);
···
 	for (i = 0; i < vioch->max_msg; i++) {
 		struct scmi_vio_msg *msg;

-		msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL);
+		msg = devm_kzalloc(dev, sizeof(*msg), GFP_KERNEL);
 		if (!msg)
 			return -ENOMEM;

 		if (tx) {
-			msg->request = devm_kzalloc(cinfo->dev,
+			msg->request = devm_kzalloc(dev,
 						    VIRTIO_SCMI_MAX_PDU_SIZE,
 						    GFP_KERNEL);
 			if (!msg->request)
···
 			refcount_set(&msg->users, 1);
 		}

-		msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE,
+		msg->input = devm_kzalloc(dev, VIRTIO_SCMI_MAX_PDU_SIZE,
 					  GFP_KERNEL);
 		if (!msg->input)
 			return -ENOMEM;
+1 -1
drivers/firmware/efi/efi.c
···

         seed = early_memremap(efi_rng_seed, sizeof(*seed));
         if (seed != NULL) {
-                size = READ_ONCE(seed->size);
+                size = min(seed->size, EFI_RANDOM_SEED_SIZE);
                 early_memunmap(seed, sizeof(*seed));
         } else {
                 pr_err("Could not map UEFI random seed!\n");
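The efi.c fix above stops trusting a size field read from externally provided memory and clamps it to the known buffer capacity. A minimal userspace sketch of that clamp (names and the capacity constant are illustrative):

```c
#include <stddef.h>
#include <string.h>

#define SEED_CAPACITY 32u /* hypothetical fixed buffer size */

struct seed_hdr {
        unsigned int size; /* length claimed by untrusted external data */
        unsigned char bits[SEED_CAPACITY];
};

/* Copy at most SEED_CAPACITY bytes, never trusting hdr->size as-is. */
static size_t copy_seed(unsigned char *dst, const struct seed_hdr *hdr)
{
        size_t n = hdr->size < SEED_CAPACITY ? hdr->size : SEED_CAPACITY;

        memcpy(dst, hdr->bits, n);
        return n;
}
```

Without the clamp, a corrupted or hostile `size` would drive an out-of-bounds read, which is exactly what the `min(seed->size, EFI_RANDOM_SEED_SIZE)` change prevents.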
+6 -1
drivers/firmware/efi/libstub/random.c
···
         if (status != EFI_SUCCESS)
                 return status;

-        status = efi_bs_call(allocate_pool, EFI_RUNTIME_SERVICES_DATA,
+        /*
+         * Use EFI_ACPI_RECLAIM_MEMORY here so that it is guaranteed that the
+         * allocation will survive a kexec reboot (although we refresh the seed
+         * beforehand)
+         */
+        status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
                              sizeof(*seed) + EFI_RANDOM_SEED_SIZE,
                              (void **)&seed);
         if (status != EFI_SUCCESS)
···

 static DEFINE_SEMAPHORE(efivars_lock);

-static efi_status_t check_var_size(u32 attributes, unsigned long size)
+static efi_status_t check_var_size(bool nonblocking, u32 attributes,
+                                   unsigned long size)
 {
         const struct efivar_operations *fops;
+        efi_status_t status;

         fops = __efivars->ops;

         if (!fops->query_variable_store)
+                status = EFI_UNSUPPORTED;
+        else
+                status = fops->query_variable_store(attributes, size,
+                                                    nonblocking);
+        if (status == EFI_UNSUPPORTED)
                 return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
-
-        return fops->query_variable_store(attributes, size, false);
-}
-
-static
-efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
-{
-        const struct efivar_operations *fops;
-
-        fops = __efivars->ops;
-
-        if (!fops->query_variable_store)
-                return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
-
-        return fops->query_variable_store(attributes, size, true);
+        return status;
 }

 /**
···
 EXPORT_SYMBOL_NS_GPL(efivar_get_next_variable, EFIVAR);

 /*
- * efivar_set_variable_blocking() - local helper function for set_variable
- *
- * Must be called with efivars_lock held.
- */
-static efi_status_t
-efivar_set_variable_blocking(efi_char16_t *name, efi_guid_t *vendor,
-                             u32 attr, unsigned long data_size, void *data)
-{
-        efi_status_t status;
-
-        if (data_size > 0) {
-                status = check_var_size(attr, data_size +
-                                              ucs2_strsize(name, 1024));
-                if (status != EFI_SUCCESS)
-                        return status;
-        }
-        return __efivars->ops->set_variable(name, vendor, attr, data_size, data);
-}
-
-/*
  * efivar_set_variable_locked() - set a variable identified by name/vendor
  *
  * Must be called with efivars_lock held. If @nonblocking is set, it will use
···
         efi_set_variable_t *setvar;
         efi_status_t status;

-        if (!nonblocking)
-                return efivar_set_variable_blocking(name, vendor, attr,
-                                                    data_size, data);
+        if (data_size > 0) {
+                status = check_var_size(nonblocking, attr,
+                                        data_size + ucs2_strsize(name, 1024));
+                if (status != EFI_SUCCESS)
+                        return status;
+        }

         /*
          * If no _nonblocking variant exists, the ordinary one
          * is assumed to be non-blocking.
          */
-        setvar = __efivars->ops->set_variable_nonblocking ?:
-                 __efivars->ops->set_variable;
+        setvar = __efivars->ops->set_variable_nonblocking;
+        if (!setvar || !nonblocking)
+                setvar = __efivars->ops->set_variable;

-        if (data_size > 0) {
-                status = check_var_size_nonblocking(attr, data_size +
-                                                          ucs2_strsize(name, 1024));
-                if (status != EFI_SUCCESS)
-                        return status;
-        }
         return setvar(name, vendor, attr, data_size, data);
 }
 EXPORT_SYMBOL_NS_GPL(efivar_set_variable_locked, EFIVAR);
···
         if (efivar_lock())
                 return EFI_ABORTED;

-        status = efivar_set_variable_blocking(name, vendor, attr, data_size, data);
+        status = efivar_set_variable_locked(name, vendor, attr, data_size,
+                                            data, false);
         efivar_unlock();
         return status;
 }
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
···

 void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
 {
+        /* Temporary workaround to fix issues observed in some
+         * compute applications when GFXOFF is enabled on GFX11.
+         */
+        if (IP_VERSION_MAJ(adev->ip_versions[GC_HWIP][0]) == 11) {
+                pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
+                amdgpu_gfx_off_ctrl(adev, idle);
+        }
         amdgpu_dpm_switch_power_profile(adev,
                                         PP_SMC_POWER_PROFILE_COMPUTE,
                                         !idle);
+10 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
  * at suspend time.
  *
  */
-static void amdgpu_device_evict_resources(struct amdgpu_device *adev)
+static int amdgpu_device_evict_resources(struct amdgpu_device *adev)
 {
+        int ret;
+
         /* No need to evict vram on APUs for suspend to ram or s2idle */
         if ((adev->in_s3 || adev->in_s0ix) && (adev->flags & AMD_IS_APU))
-                return;
+                return 0;

-        if (amdgpu_ttm_evict_resources(adev, TTM_PL_VRAM))
+        ret = amdgpu_ttm_evict_resources(adev, TTM_PL_VRAM);
+        if (ret)
                 DRM_WARN("evicting device resources failed\n");
-
+        return ret;
 }

 /*
···
         if (!adev->in_s0ix)
                 amdgpu_amdkfd_suspend(adev, adev->in_runpm);

-        amdgpu_device_evict_resources(adev);
+        r = amdgpu_device_evict_resources(adev);
+        if (r)
+                return r;

         amdgpu_fence_driver_hw_fini(adev);

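The amdgpu_device.c change turns a `void` helper into one that returns the eviction status so the suspend path can abort instead of continuing on a half-evicted device. A tiny userspace sketch of that void-to-int conversion (function names are illustrative):

```c
#include <errno.h>

/* Toy "evict" that can fail; failure_mode selects the outcome. */
static int evict_resources(int failure_mode)
{
        if (failure_mode)
                return -ENOMEM;
        return 0;
}

/*
 * Suspend path: before the fix the eviction result was dropped and the
 * suspend sequence carried on regardless; propagating it lets the
 * caller abort cleanly.
 */
static int device_suspend(int failure_mode)
{
        int ret = evict_resources(failure_mode);

        if (ret)
                return ret; /* abort suspend instead of continuing */
        /* ... rest of the suspend sequence ... */
        return 0;
}
```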
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
                 pm_runtime_forbid(dev->dev);
         }

-        if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2)) {
+        if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
+            !amdgpu_sriov_vf(adev)) {
                 bool need_to_reset_gpu = false;

                 if (adev->gmc.xgmi.num_physical_nodes > 1) {
···
         s_getreg_b32    s_save_trapsts, hwreg(HW_REG_TRAPSTS)

 #if SW_SA_TRAP
+        // If ttmp1[30] is set then issue s_barrier to unblock dependent waves.
+        s_bitcmp1_b32   s_save_pc_hi, 30
+        s_cbranch_scc0  L_TRAP_NO_BARRIER
+        s_barrier
+
+L_TRAP_NO_BARRIER:
         // If ttmp1[31] is set then trap may occur early.
         // Spin wait until SAVECTX exception is raised.
         s_bitcmp1_b32   s_save_pc_hi, 31
···

         adev->dm.dc->debug.visual_confirm = amdgpu_dc_visual_confirm;

+        /* TODO: Remove after DP2 receiver gets proper support of Cable ID feature */
+        adev->dm.dc->debug.ignore_cable_id = true;
+
         r = dm_dmub_hw_init(adev);
         if (r) {
                 DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
···
         struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
         unsigned int num_levels;
         struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
+        unsigned int i;

         memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
         clk_mgr_base->clks.p_state_change_support = true;
···
                 clk_mgr->dpm_present = true;

         if (clk_mgr_base->ctx->dc->debug.min_disp_clk_khz) {
-                unsigned int i;
-
                 for (i = 0; i < num_levels; i++)
                         if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
                                         < khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz))
                                 clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
                                         = khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz);
         }
+        for (i = 0; i < num_levels; i++)
+                if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz > 1950)
+                        clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz = 1950;

         if (clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz) {
-                unsigned int i;
-
                 for (i = 0; i < num_levels; i++)
                         if (clk_mgr_base->bw_params->clk_table.entries[i].dppclk_mhz
                                         < khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz))
···
         dcn32_init_single_clock(clk_mgr, PPCLK_UCLK,
                         &clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz,
                         &num_entries_per_clk->num_memclk_levels);
+
+        /* memclk must have at least one level */
+        num_entries_per_clk->num_memclk_levels = num_entries_per_clk->num_memclk_levels ? num_entries_per_clk->num_memclk_levels : 1;

         dcn32_init_single_clock(clk_mgr, PPCLK_FCLK,
                         &clk_mgr_base->bw_params->clk_table.entries[0].fclk_mhz,
···
         mode_lib->vba.skip_dio_check[mode_lib->vba.NumberOfActivePlanes] =
                 dout->is_virtual;

-        if (!dout->dsc_enable)
+        if (dout->dsc_enable)
                 mode_lib->vba.ForcedOutputLinkBPP[mode_lib->vba.NumberOfActivePlanes] = dout->output_bpp;
         else
                 mode_lib->vba.ForcedOutputLinkBPP[mode_lib->vba.NumberOfActivePlanes] = 0.0;
+47 -19
drivers/gpu/drm/drm_format_helper.c
···
         return false;
 }

+static const uint32_t conv_from_xrgb8888[] = {
+        DRM_FORMAT_XRGB8888,
+        DRM_FORMAT_ARGB8888,
+        DRM_FORMAT_XRGB2101010,
+        DRM_FORMAT_ARGB2101010,
+        DRM_FORMAT_RGB565,
+        DRM_FORMAT_RGB888,
+};
+
+static const uint32_t conv_from_rgb565_888[] = {
+        DRM_FORMAT_XRGB8888,
+        DRM_FORMAT_ARGB8888,
+};
+
+static bool is_conversion_supported(uint32_t from, uint32_t to)
+{
+        switch (from) {
+        case DRM_FORMAT_XRGB8888:
+        case DRM_FORMAT_ARGB8888:
+                return is_listed_fourcc(conv_from_xrgb8888, ARRAY_SIZE(conv_from_xrgb8888), to);
+        case DRM_FORMAT_RGB565:
+        case DRM_FORMAT_RGB888:
+                return is_listed_fourcc(conv_from_rgb565_888, ARRAY_SIZE(conv_from_rgb565_888), to);
+        case DRM_FORMAT_XRGB2101010:
+                return to == DRM_FORMAT_ARGB2101010;
+        case DRM_FORMAT_ARGB2101010:
+                return to == DRM_FORMAT_XRGB2101010;
+        default:
+                return false;
+        }
+}
+
 /**
  * drm_fb_build_fourcc_list - Filters a list of supported color formats against
  *                            the device's native formats
···
  * be handed over to drm_universal_plane_init() et al. Native formats
  * will go before emulated formats. Other heuristics might be applied
  * to optimize the order. Formats near the beginning of the list are
- * usually preferred over formats near the end of the list.
+ * usually preferred over formats near the end of the list. Formats
+ * without conversion helpers will be skipped. New drivers should only
+ * pass in XRGB8888 and avoid exposing additional emulated formats.
  *
  * Returns:
  * The number of color-formats 4CC codes returned in @fourccs_out.
···
 {
         u32 *fourccs = fourccs_out;
         const u32 *fourccs_end = fourccs_out + nfourccs_out;
-        bool found_native = false;
+        uint32_t native_format = 0;
         size_t i;

         /*
···

                 drm_dbg_kms(dev, "adding native format %p4cc\n", &fourcc);

-                if (!found_native)
-                        found_native = is_listed_fourcc(driver_fourccs, driver_nfourccs, fourcc);
+                /*
+                 * There should only be one native format with the current API.
+                 * This API needs to be refactored to correctly support arbitrary
+                 * sets of native formats, since it needs to report which native
+                 * format to use for each emulated format.
+                 */
+                if (!native_format)
+                        native_format = fourcc;
                 *fourccs = fourcc;
                 ++fourccs;
-        }
-
-        /*
-         * The plane's atomic_update helper converts the framebuffer's color format
-         * to a native format when copying to device memory.
-         *
-         * If there is not a single format supported by both, device and
-         * driver, the native formats are likely not supported by the conversion
-         * helpers. Therefore *only* support the native formats and add a
-         * conversion helper ASAP.
-         */
-        if (!found_native) {
-                drm_warn(dev, "Format conversion helpers required to add extra formats.\n");
-                goto out;
         }

         /*
···
                 } else if (fourccs == fourccs_end) {
                         drm_warn(dev, "Ignoring emulated format %p4cc\n", &fourcc);
                         continue; /* end of available output buffer */
+                } else if (!is_conversion_supported(fourcc, native_format)) {
+                        drm_dbg_kms(dev, "Unsupported emulated format %p4cc\n", &fourcc);
+                        continue; /* format is not supported for conversion */
                 }

                 drm_dbg_kms(dev, "adding emulated format %p4cc\n", &fourcc);
···
                 ++fourccs;
         }

-out:
         return fourccs - fourccs_out;
 }
 EXPORT_SYMBOL(drm_fb_build_fourcc_list);
···
                                         encoder->devdata, IS_ERR(edid) ? NULL : edid);

         intel_panel_add_edid_fixed_modes(intel_connector,
-                                         intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
+                                         intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE ||
                                          intel_vrr_is_capable(intel_connector));

         /* MSO requires information from the EDID */
+27 -32
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
···

 #include "intel_de.h"
 #include "intel_display_types.h"
+#include "intel_dkl_phy.h"
 #include "intel_dpio_phy.h"
 #include "intel_dpll.h"
 #include "intel_dpll_mgr.h"
···
          * All registers read here have the same HIP_INDEX_REG even though
          * they are on different building blocks
          */
-        intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
-                       HIP_INDEX_VAL(tc_port, 0x2));
-
-        hw_state->mg_refclkin_ctl = intel_de_read(dev_priv,
-                                                  DKL_REFCLKIN_CTL(tc_port));
+        hw_state->mg_refclkin_ctl = intel_dkl_phy_read(dev_priv,
+                                                       DKL_REFCLKIN_CTL(tc_port), 2);
         hw_state->mg_refclkin_ctl &= MG_REFCLKIN_CTL_OD_2_MUX_MASK;

         hw_state->mg_clktop2_hsclkctl =
-                intel_de_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port));
+                intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2);
         hw_state->mg_clktop2_hsclkctl &=
                 MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL_MASK |
                 MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL_MASK |
···
                 MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK;

         hw_state->mg_clktop2_coreclkctl1 =
-                intel_de_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port));
+                intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2);
         hw_state->mg_clktop2_coreclkctl1 &=
                 MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO_MASK;

-        hw_state->mg_pll_div0 = intel_de_read(dev_priv, DKL_PLL_DIV0(tc_port));
+        hw_state->mg_pll_div0 = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV0(tc_port), 2);
         val = DKL_PLL_DIV0_MASK;
         if (dev_priv->display.vbt.override_afc_startup)
                 val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
         hw_state->mg_pll_div0 &= val;

-        hw_state->mg_pll_div1 = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
+        hw_state->mg_pll_div1 = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV1(tc_port), 2);
         hw_state->mg_pll_div1 &= (DKL_PLL_DIV1_IREF_TRIM_MASK |
                                   DKL_PLL_DIV1_TDC_TARGET_CNT_MASK);

-        hw_state->mg_pll_ssc = intel_de_read(dev_priv, DKL_PLL_SSC(tc_port));
+        hw_state->mg_pll_ssc = intel_dkl_phy_read(dev_priv, DKL_PLL_SSC(tc_port), 2);
         hw_state->mg_pll_ssc &= (DKL_PLL_SSC_IREF_NDIV_RATIO_MASK |
                                  DKL_PLL_SSC_STEP_LEN_MASK |
                                  DKL_PLL_SSC_STEP_NUM_MASK |
                                  DKL_PLL_SSC_EN);

-        hw_state->mg_pll_bias = intel_de_read(dev_priv, DKL_PLL_BIAS(tc_port));
+        hw_state->mg_pll_bias = intel_dkl_phy_read(dev_priv, DKL_PLL_BIAS(tc_port), 2);
         hw_state->mg_pll_bias &= (DKL_PLL_BIAS_FRAC_EN_H |
                                   DKL_PLL_BIAS_FBDIV_FRAC_MASK);

         hw_state->mg_pll_tdc_coldst_bias =
-                intel_de_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+                intel_dkl_phy_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
         hw_state->mg_pll_tdc_coldst_bias &= (DKL_PLL_TDC_SSC_STEP_SIZE_MASK |
                                              DKL_PLL_TDC_FEED_FWD_GAIN_MASK);

···
          * All registers programmed here have the same HIP_INDEX_REG even
          * though on different building block
          */
-        intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
-                       HIP_INDEX_VAL(tc_port, 0x2));
-
         /* All the registers are RMW */
-        val = intel_de_read(dev_priv, DKL_REFCLKIN_CTL(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_REFCLKIN_CTL(tc_port), 2);
         val &= ~MG_REFCLKIN_CTL_OD_2_MUX_MASK;
         val |= hw_state->mg_refclkin_ctl;
-        intel_de_write(dev_priv, DKL_REFCLKIN_CTL(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_REFCLKIN_CTL(tc_port), 2, val);

-        val = intel_de_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2);
         val &= ~MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO_MASK;
         val |= hw_state->mg_clktop2_coreclkctl1;
-        intel_de_write(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_CLKTOP2_CORECLKCTL1(tc_port), 2, val);

-        val = intel_de_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2);
         val &= ~(MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL_MASK |
                  MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL_MASK |
                  MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_MASK |
                  MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK);
         val |= hw_state->mg_clktop2_hsclkctl;
-        intel_de_write(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), 2, val);

         val = DKL_PLL_DIV0_MASK;
         if (dev_priv->display.vbt.override_afc_startup)
                 val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
-        intel_de_rmw(dev_priv, DKL_PLL_DIV0(tc_port), val,
-                     hw_state->mg_pll_div0);
+        intel_dkl_phy_rmw(dev_priv, DKL_PLL_DIV0(tc_port), 2, val,
+                          hw_state->mg_pll_div0);

-        val = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_PLL_DIV1(tc_port), 2);
         val &= ~(DKL_PLL_DIV1_IREF_TRIM_MASK |
                  DKL_PLL_DIV1_TDC_TARGET_CNT_MASK);
         val |= hw_state->mg_pll_div1;
-        intel_de_write(dev_priv, DKL_PLL_DIV1(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_PLL_DIV1(tc_port), 2, val);

-        val = intel_de_read(dev_priv, DKL_PLL_SSC(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_PLL_SSC(tc_port), 2);
         val &= ~(DKL_PLL_SSC_IREF_NDIV_RATIO_MASK |
                  DKL_PLL_SSC_STEP_LEN_MASK |
                  DKL_PLL_SSC_STEP_NUM_MASK |
                  DKL_PLL_SSC_EN);
         val |= hw_state->mg_pll_ssc;
-        intel_de_write(dev_priv, DKL_PLL_SSC(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_PLL_SSC(tc_port), 2, val);

-        val = intel_de_read(dev_priv, DKL_PLL_BIAS(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_PLL_BIAS(tc_port), 2);
         val &= ~(DKL_PLL_BIAS_FRAC_EN_H |
                  DKL_PLL_BIAS_FBDIV_FRAC_MASK);
         val |= hw_state->mg_pll_bias;
-        intel_de_write(dev_priv, DKL_PLL_BIAS(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_PLL_BIAS(tc_port), 2, val);

-        val = intel_de_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+        val = intel_dkl_phy_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
         val &= ~(DKL_PLL_TDC_SSC_STEP_SIZE_MASK |
                  DKL_PLL_TDC_FEED_FWD_GAIN_MASK);
         val |= hw_state->mg_pll_tdc_coldst_bias;
-        intel_de_write(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), val);
+        intel_dkl_phy_write(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2, val);

-        intel_de_posting_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port));
+        intel_dkl_phy_posting_read(dev_priv, DKL_PLL_TDC_COLDST_BIAS(tc_port), 2);
 }

 static void icl_pll_power_enable(struct drm_i915_private *dev_priv,
+1 -2
drivers/gpu/drm/i915/display/intel_lvds.c
···

         /* Try EDID first */
         intel_panel_add_edid_fixed_modes(intel_connector,
-                                         intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
-                                         false);
+                                         intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);

         /* Failed to get EDID, what about VBT? */
         if (!intel_panel_preferred_fixed_mode(intel_connector))
···
 #include <linux/pfn.h>
 #include <linux/scatterlist.h>
-#include <linux/swiotlb.h>
+#include <linux/dma-mapping.h>
+#include <xen/xen.h>

 #include "i915_gem.h"
···
         return page_sizes;
 }

-static inline unsigned int i915_sg_segment_size(void)
+static inline unsigned int i915_sg_segment_size(struct device *dev)
 {
-        unsigned int size = swiotlb_max_segment();
+        size_t max = min_t(size_t, UINT_MAX, dma_max_mapping_size(dev));

-        if (size == 0)
-                size = UINT_MAX;
-
-        size = rounddown(size, PAGE_SIZE);
-        /* swiotlb_max_segment_size can return 1 byte when it means one page. */
-        if (size < PAGE_SIZE)
-                size = PAGE_SIZE;
-
-        return size;
+        /*
+         * For Xen PV guests pages aren't contiguous in DMA (machine) address
+         * space. The DMA API takes care of that both in dma_alloc_* (by
+         * calling into the hypervisor to make the pages contiguous) and in
+         * dma_map_* (by bounce buffering). But i915 ignores the
+         * coherency aspects of the DMA API and thus can't cope with bounce
+         * buffering actually happening, so add a hack here to force small
+         * allocations and mappings when running in PV mode on Xen.
+         *
+         * Note this will still break if bounce buffering is required for other
+         * reasons, like confidential computing hypervisors or PCIe root ports
+         * with addressing limitations.
+         */
+        if (xen_pv_domain())
+                max = PAGE_SIZE;
+        return round_down(max, PAGE_SIZE);
 }

 bool i915_sg_trim(struct sg_table *orig_st);
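The segment-size helper above clamps a device's maximum DMA mapping size to `UINT_MAX`, optionally forces it down to one page, and rounds the result down to page granularity. The arithmetic can be sketched in userspace C (`PAGE_SZ` and the function name are illustrative, not the kernel API):

```c
#include <stdint.h>

#define PAGE_SZ 4096u

/*
 * Round a DMA mapping limit down to whole pages, mimicking
 * round_down(max, PAGE_SIZE) after a min_t(size_t, UINT_MAX, ...) clamp.
 * pv_guest forces single-page segments, like the Xen PV special case.
 */
static uint32_t seg_size(uint64_t dma_max, int pv_guest)
{
        uint64_t max = dma_max < UINT32_MAX ? dma_max : UINT32_MAX;

        if (pv_guest) /* force single-page segments */
                max = PAGE_SZ;
        return (uint32_t)(max & ~(uint64_t)(PAGE_SZ - 1));
}
```

Masking with `~(PAGE_SZ - 1)` is the power-of-two form of `round_down()`, so the returned size is always a whole number of pages.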
-1
drivers/gpu/drm/imx/Kconfig
···
         select DRM_KMS_HELPER
         select VIDEOMODE_HELPERS
         select DRM_GEM_DMA_HELPER
-        select DRM_KMS_HELPER
         depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM || COMPILE_TEST)
         depends on IMX_IPUV3_CORE
         help
···
 static void dw_mipi_dsi_rockchip_set_lcdsel(struct dw_mipi_dsi_rockchip *dsi,
                                             int mux)
 {
-        if (dsi->cdata->lcdsel_grf_reg < 0)
+        if (dsi->cdata->lcdsel_grf_reg)
                 regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
                         mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
 }
···
         if (ret) {
                 DRM_DEV_ERROR(dsi->dev, "Failed to register component: %d\n",
                               ret);
-                return ret;
+                goto out;
         }

         second = dw_mipi_dsi_rockchip_find_second(dsi);
-        if (IS_ERR(second))
-                return PTR_ERR(second);
+        if (IS_ERR(second)) {
+                ret = PTR_ERR(second);
+                goto out;
+        }
         if (second) {
                 ret = component_add(second, &dw_mipi_dsi_rockchip_ops);
                 if (ret) {
                         DRM_DEV_ERROR(second,
                                       "Failed to register component: %d\n",
                                       ret);
-                        return ret;
+                        goto out;
                 }
         }

         return 0;
+
+out:
+        mutex_lock(&dsi->usage_mutex);
+        dsi->usage_mode = DW_DSI_USAGE_IDLE;
+        mutex_unlock(&dsi->usage_mutex);
+        return ret;
 }

 static int dw_mipi_dsi_rockchip_host_detach(void *priv_data,
···
 static const struct rockchip_dw_dsi_chip_data rk3568_chip_data[] = {
         {
                 .reg = 0xfe060000,
-                .lcdsel_grf_reg = -1,
                 .lanecfg1_grf_reg = RK3568_GRF_VO_CON2,
                 .lanecfg1 = HIWORD_UPDATE(0, RK3568_DSI0_SKEWCALHS |
                                           RK3568_DSI0_FORCETXSTOPMODE |
···
         },
         {
                 .reg = 0xfe070000,
-                .lcdsel_grf_reg = -1,
                 .lanecfg1_grf_reg = RK3568_GRF_VO_CON3,
                 .lanecfg1 = HIWORD_UPDATE(0, RK3568_DSI1_SKEWCALHS |
                                           RK3568_DSI1_FORCETXSTOPMODE |
···
                 .of_match_table = dw_mipi_dsi_rockchip_dt_ids,
                 .pm = &dw_mipi_dsi_rockchip_pm_ops,
                 .name = "dw-mipi-dsi-rockchip",
+                /*
+                 * For dual-DSI display, one DSI pokes at the other DSI's
+                 * drvdata in dw_mipi_dsi_rockchip_find_second(). This is not
+                 * safe for asynchronous probe.
+                 */
+                .probe_type = PROBE_FORCE_SYNCHRONOUS,
         },
 };
+2 -1
drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
···

         ret = rockchip_hdmi_parse_dt(hdmi);
         if (ret) {
-                DRM_DEV_ERROR(hdmi->dev, "Unable to parse OF data\n");
+                if (ret != -EPROBE_DEFER)
+                        DRM_DEV_ERROR(hdmi->dev, "Unable to parse OF data\n");
                 return ret;
         }

···
         }
         if (card->irq > 0)
                 free_irq(card->irq, card);
-        if (card->isac.dch.dev.dev.class)
+        if (device_is_registered(&card->isac.dch.dev.dev))
                 mISDN_unregister_device(&card->isac.dch.dev);

         for (i = 0; i < 2; i++) {
···

 #define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2)

+static void dsa_loop_phydevs_unregister(void)
+{
+        unsigned int i;
+
+        for (i = 0; i < NUM_FIXED_PHYS; i++)
+                if (!IS_ERR(phydevs[i])) {
+                        fixed_phy_unregister(phydevs[i]);
+                        phy_device_free(phydevs[i]);
+                }
+}
+
 static int __init dsa_loop_init(void)
 {
         struct fixed_phy_status status = {
···
                 .speed = SPEED_100,
                 .duplex = DUPLEX_FULL,
         };
-        unsigned int i;
+        unsigned int i, ret;

         for (i = 0; i < NUM_FIXED_PHYS; i++)
                 phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL);

-        return mdio_driver_register(&dsa_loop_drv);
+        ret = mdio_driver_register(&dsa_loop_drv);
+        if (ret)
+                dsa_loop_phydevs_unregister();
+
+        return ret;
 }
 module_init(dsa_loop_init);

 static void __exit dsa_loop_exit(void)
 {
-        unsigned int i;
-
         mdio_driver_unregister(&dsa_loop_drv);
-        for (i = 0; i < NUM_FIXED_PHYS; i++)
-                if (!IS_ERR(phydevs[i]))
-                        fixed_phy_unregister(phydevs[i]);
+        dsa_loop_phydevs_unregister();
 }
 module_exit(dsa_loop_exit);

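The dsa_loop fix factors the teardown into one helper that both the failed init path and the module exit path call, so registered fixed PHYs are never leaked. The structure of that "shared cleanup" pattern in plain C (counters stand in for the real register/unregister calls; names are illustrative):

```c
#define NPHYS 4

/* 1 = "registered"; stands in for the fixed PHY bookkeeping. */
static int registered[NPHYS];

/* One cleanup helper, reused by the init error path and the exit path. */
static void phydevs_unregister(void)
{
        for (int i = 0; i < NPHYS; i++)
                registered[i] = 0; /* unregister + free analogue */
}

/* Init path: on driver-registration failure, undo what was acquired. */
static int loop_init(int driver_register_fails)
{
        for (int i = 0; i < NPHYS; i++)
                registered[i] = 1;

        if (driver_register_fails) {
                phydevs_unregister();
                return -1;
        }
        return 0;
}
```

Keeping one helper avoids the original bug class: two hand-rolled teardown loops that drift apart (here, the exit path originally forgot `phy_device_free()`).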
+29 -9
drivers/net/ethernet/adi/adin1110.c
···
         .notifier_call = adin1110_switchdev_event,
 };

-static void adin1110_unregister_notifiers(void *data)
+static void adin1110_unregister_notifiers(void)
 {
         unregister_switchdev_blocking_notifier(&adin1110_switchdev_blocking_notifier);
         unregister_switchdev_notifier(&adin1110_switchdev_notifier);
         unregister_netdevice_notifier(&adin1110_netdevice_nb);
 }

-static int adin1110_setup_notifiers(struct adin1110_priv *priv)
+static int adin1110_setup_notifiers(void)
 {
-        struct device *dev = &priv->spidev->dev;
         int ret;

         ret = register_netdevice_notifier(&adin1110_netdevice_nb);
···
         if (ret < 0)
                 goto err_sdev;

-        return devm_add_action_or_reset(dev, adin1110_unregister_notifiers, NULL);
+        return 0;

 err_sdev:
         unregister_switchdev_notifier(&adin1110_switchdev_notifier);

 err_netdev:
         unregister_netdevice_notifier(&adin1110_netdevice_nb);
+
         return ret;
 }

···
                                adin1110_irq,
                                IRQF_TRIGGER_LOW | IRQF_ONESHOT,
                                dev_name(dev), priv);
-        if (ret < 0)
-                return ret;
-
-        ret = adin1110_setup_notifiers(priv);
         if (ret < 0)
                 return ret;

···
         .probe = adin1110_probe,
         .id_table = adin1110_spi_id,
 };
-module_spi_driver(adin1110_driver);
+
+static int __init adin1110_driver_init(void)
+{
+        int ret;
+
+        ret = adin1110_setup_notifiers();
+        if (ret < 0)
+                return ret;
+
+        ret = spi_register_driver(&adin1110_driver);
+        if (ret < 0) {
+                adin1110_unregister_notifiers();
+                return ret;
+        }
+
+        return 0;
+}
+
+static void __exit adin1110_exit(void)
+{
+        adin1110_unregister_notifiers();
+        spi_unregister_driver(&adin1110_driver);
+}
+module_init(adin1110_driver_init);
+module_exit(adin1110_exit);

 MODULE_DESCRIPTION("ADIN1110 Network driver");
 MODULE_AUTHOR("Alexandru Tachici <alexandru.tachici@analog.com>");
···
         rwi = get_next_rwi(adapter);

         /*
-         * If there is another reset queued, free the previous rwi
-         * and process the new reset even if previous reset failed
-         * (the previous reset could have failed because of a fail
-         * over for instance, so process the fail over).
-         *
          * If there are no resets queued and the previous reset failed,
          * the adapter would be in an undefined state. So retry the
          * previous reset as a hard reset.
+         *
+         * Else, free the previous rwi and, if there is another reset
+         * queued, process the new reset even if previous reset failed
+         * (the previous reset could have failed because of a fail
+         * over for instance, so process the fail over).
          */
-        if (rwi)
-                kfree(tmprwi);
-        else if (rc)
+        if (!rwi && rc)
                 rwi = tmprwi;
+        else
+                kfree(tmprwi);

         if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER ||
                     rwi->reset_reason == VNIC_RESET_MOBILITY || rc))
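The ibmvnic hunk rewrites the decision about the previous reset work item: it is reused only when nothing else is queued and the last reset failed; in every other case it must be freed. The old `if (rwi) kfree(); else if (rc) reuse;` shape leaked the item in the "nothing queued, last reset succeeded" case. The decision table can be captured as a pure function (illustrative, not the driver's API):

```c
#include <stdbool.h>

/*
 * Decide the fate of the previous reset work item:
 * reuse it only when no further reset is queued AND the last reset
 * failed (rc != 0); otherwise the caller must free it.
 * Returns true when the item is reused.
 */
static bool reuse_prev_rwi(bool next_queued, int rc)
{
        return !next_queued && rc != 0;
}
```

Because the new code frees in the `else` branch, "not reused" now always implies "freed", closing the leak.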
···
         struct stmmac_resources res;
         struct device_node *np;
         int ret, i, phy_mode;
-        bool mdio = false;

         np = dev_of_node(&pdev->dev);
···
         if (!plat)
                 return -ENOMEM;

+        plat->mdio_node = of_get_child_by_name(np, "mdio");
         if (plat->mdio_node) {
-                dev_err(&pdev->dev, "Found MDIO subnode\n");
-                mdio = true;
-        }
+                dev_info(&pdev->dev, "Found MDIO subnode\n");

-        if (mdio) {
                 plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
                                                    sizeof(*plat->mdio_bus_data),
                                                    GFP_KERNEL);
+1 -1
drivers/net/ethernet/xilinx/xilinx_emaclite.c
···
  * @next_tx_buf_to_use: next Tx buffer to write to
  * @next_rx_buf_to_use: next Rx buffer to read from
  * @base_addr:          base address of the Emaclite device
- * @reset_lock:         lock used for synchronization
+ * @reset_lock:         lock to serialize xmit and tx_timeout execution
 * @deferred_skb:       holds an skb (for transmission at a later time) when the
 *                      Tx buffer is not free
 * @phy_dev:            pointer to the PHY device
+1 -1
drivers/net/phy/mdio_bus.c
···
         }

         for (i = 0; i < PHY_MAX_ADDR; i++) {
-                if ((bus->phy_mask & (1 << i)) == 0) {
+                if ((bus->phy_mask & BIT(i)) == 0) {
                         struct phy_device *phydev;

                         phydev = mdiobus_scan(bus, i);
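The mdio_bus change swaps the open-coded `(1 << i)` for the kernel's `BIT()` macro, which expands to an unsigned shift and so avoids signed-shift pitfalls at high bit positions. A userspace sketch of the same idiom (the local `BIT` define mirrors the kernel macro's shape; `addr_masked` is illustrative):

```c
#include <stdint.h>

#define BIT(n) (1ul << (n)) /* same shape as the kernel's BIT() macro */

/* Check whether PHY address i is masked out of the bus scan. */
static int addr_masked(uint32_t phy_mask, unsigned int i)
{
        return (phy_mask & BIT(i)) != 0;
}
```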
+2-1
drivers/net/tun.c
@@ -1459 +1459 @@
	int err;
	int i;

-	if (it->nr_segs > MAX_SKB_FRAGS + 1)
+	if (it->nr_segs > MAX_SKB_FRAGS + 1 ||
+	    len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN))
		return ERR_PTR(-EMSGSIZE);

	local_bh_disable();
+9-1
drivers/nfc/fdp/fdp.c
@@ -249 +249 @@
 static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
 {
	struct fdp_nci_info *info = nci_get_drvdata(ndev);
+	int ret;

	if (atomic_dec_and_test(&info->data_pkt_counter))
		info->data_pkt_counter_cb(ndev);

-	return info->phy_ops->write(info->phy, skb);
+	ret = info->phy_ops->write(info->phy, skb);
+	if (ret < 0) {
+		kfree_skb(skb);
+		return ret;
+	}
+
+	consume_skb(skb);
+	return 0;
 }

 static int fdp_nci_request_firmware(struct nci_dev *ndev)
+7-2
drivers/nfc/nfcmrvl/i2c.c
@@ -132 +132 @@
			ret = -EREMOTEIO;
		} else
			ret = 0;
-		kfree_skb(skb);
	}

-	return ret;
+	if (ret) {
+		kfree_skb(skb);
+		return ret;
+	}
+
+	consume_skb(skb);
+	return 0;
 }

 static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
+5-2
drivers/nfc/nxp-nci/core.c
@@ -80 +80 @@
		return -EINVAL;

	r = info->phy_ops->write(info->phy_id, skb);
-	if (r < 0)
+	if (r < 0) {
		kfree_skb(skb);
+		return r;
+	}

-	return r;
+	consume_skb(skb);
+	return 0;
 }

 static int nxp_nci_rf_pll_unlocked_ntf(struct nci_dev *ndev,
+6-2
drivers/nfc/s3fwrn5/core.c
@@ -110 +110 @@
	}

	ret = s3fwrn5_write(info, skb);
-	if (ret < 0)
+	if (ret < 0) {
		kfree_skb(skb);
+		mutex_unlock(&info->mutex);
+		return ret;
+	}

+	consume_skb(skb);
	mutex_unlock(&info->mutex);
-	return ret;
+	return 0;
 }

 static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
+15-2
drivers/soc/imx/imx93-pd.c
@@ -135 +135 @@

	ret = pm_genpd_init(&domain->genpd, NULL, domain->init_off);
	if (ret)
-		return ret;
+		goto err_clk_unprepare;

	platform_set_drvdata(pdev, domain);

-	return of_genpd_add_provider_simple(np, &domain->genpd);
+	ret = of_genpd_add_provider_simple(np, &domain->genpd);
+	if (ret)
+		goto err_genpd_remove;
+
+	return 0;
+
+err_genpd_remove:
+	pm_genpd_remove(&domain->genpd);
+
+err_clk_unprepare:
+	if (!domain->init_off)
+		clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+
+	return ret;
 }

 static const struct of_device_id imx93_pd_ids[] = {
+34-20
fs/btrfs/backref.c
@@ -289 +289 @@
	struct prelim_ref *ref, *next_ref;

	rbtree_postorder_for_each_entry_safe(ref, next_ref,
-					     &preftree->root.rb_root, rbnode)
+					     &preftree->root.rb_root, rbnode) {
+		free_inode_elem_list(ref->inode_list);
		free_pref(ref);
+	}

	preftree->root = RB_ROOT_CACHED;
	preftree->count = 0;
@@ -650 +648 @@
	return (struct extent_inode_elem *)(uintptr_t)node->aux;
 }

+static void free_leaf_list(struct ulist *ulist)
+{
+	struct ulist_node *node;
+	struct ulist_iterator uiter;
+
+	ULIST_ITER_INIT(&uiter);
+	while ((node = ulist_next(ulist, &uiter)))
+		free_inode_elem_list(unode_aux_to_inode_list(node));
+
+	ulist_free(ulist);
+}
+
 /*
  * We maintain three separate rbtrees: one for direct refs, one for
  * indirect refs which have a key, and one for indirect refs which do not
@@ -776 +762 @@
		cond_resched();
	}
 out:
-	ulist_free(parents);
+	/*
+	 * We may have inode lists attached to refs in the parents ulist, so we
+	 * must free them before freeing the ulist and its refs.
+	 */
+	free_leaf_list(parents);
	return ret;
 }
@@ -1386 +1368 @@
			if (ret < 0)
				goto out;
			ref->inode_list = eie;
+			/*
+			 * We transferred the list ownership to the ref,
+			 * so set to NULL to avoid a double free in case
+			 * an error happens after this.
+			 */
+			eie = NULL;
		}
		ret = ulist_add_merge_ptr(refs, ref->parent,
					  ref->inode_list,
@@ -1417 +1393 @@
				eie->next = ref->inode_list;
			}
			eie = NULL;
+			/*
+			 * We have transferred the inode list ownership from
+			 * this ref to the ref we added to the 'refs' ulist.
+			 * So set this ref's inode list to NULL to avoid
+			 * use-after-free when our caller uses it or double
+			 * frees in case an error happens before we return.
+			 */
+			ref->inode_list = NULL;
		}
		cond_resched();
	}
@@ -1439 +1407 @@
	if (ret < 0)
		free_inode_elem_list(eie);
	return ret;
-}
-
-static void free_leaf_list(struct ulist *blocks)
-{
-	struct ulist_node *node = NULL;
-	struct extent_inode_elem *eie;
-	struct ulist_iterator uiter;
-
-	ULIST_ITER_INIT(&uiter);
-	while ((node = ulist_next(blocks, &uiter))) {
-		if (!node->aux)
-			continue;
-		eie = unode_aux_to_inode_list(node);
-		free_inode_elem_list(eie);
-		node->aux = 0;
-	}
-
-	ulist_free(blocks);
 }

 /*
@@ -1598 +1598 @@
					 write_bytes);
			else
				btrfs_check_nocow_unlock(BTRFS_I(inode));
+
+			if (nowait && ret == -ENOSPC)
+				ret = -EAGAIN;
			break;
		}

		release_bytes = reserve_bytes;
 again:
		ret = balance_dirty_pages_ratelimited_flags(inode->i_mapping, bdp_flags);
-		if (ret)
+		if (ret) {
+			btrfs_delalloc_release_extents(BTRFS_I(inode), reserve_bytes);
			break;
+		}

		/*
		 * This is going to setup the pages array with the number of
@@ -1770 +1765 @@
	loff_t endbyte;
	ssize_t err;
	unsigned int ilock_flags = 0;
+	struct iomap_dio *dio;

	if (iocb->ki_flags & IOCB_NOWAIT)
		ilock_flags |= BTRFS_ILOCK_TRY;
@@ -1831 +1825 @@
	 * So here we disable page faults in the iov_iter and then retry if we
	 * got -EFAULT, faulting in the pages before the retry.
	 */
-again:
	from->nofault = true;
-	err = btrfs_dio_rw(iocb, from, written);
+	dio = btrfs_dio_write(iocb, from, written);
	from->nofault = false;
+
+	/*
+	 * iomap_dio_complete() will call btrfs_sync_file() if we have a dsync
+	 * iocb, and that needs to lock the inode. So unlock it before calling
+	 * iomap_dio_complete() to avoid a deadlock.
+	 */
+	btrfs_inode_unlock(inode, ilock_flags);
+
+	if (IS_ERR_OR_NULL(dio))
+		err = PTR_ERR_OR_ZERO(dio);
+	else
+		err = iomap_dio_complete(dio);

	/* No increment (+=) because iomap returns a cumulative value. */
	if (err > 0)
@@ -1872 +1855 @@
		} else {
			fault_in_iov_iter_readable(from, left);
			prev_left = left;
-			goto again;
+			goto relock;
		}
	}
-
-	btrfs_inode_unlock(inode, ilock_flags);

	/*
	 * If 'err' is -ENOTBLK or we have not written all data, then it means
@@ -4050 +4035 @@
	 */
	pagefault_disable();
	to->nofault = true;
-	ret = btrfs_dio_rw(iocb, to, read);
+	ret = btrfs_dio_read(iocb, to, read);
	to->nofault = false;
	pagefault_enable();
@@ -77 +77 @@
		goto unlock;

	addr = kmap_local_page(page);
-	if (!offset)
+	if (!offset) {
		clear_page(addr);
+		SetPageUptodate(page);
+	}
	memcpy(addr + offset, dirent, reclen);
	kunmap_local(addr);
	fi->rdc.size = (index << PAGE_SHIFT) + offset + reclen;
@@ -518 +516 @@

	page = find_get_page_flags(file->f_mapping, index,
				   FGP_ACCESSED | FGP_LOCK);
+	/* Page gone missing, then re-added to cache, but not initialized? */
+	if (page && !PageUptodate(page)) {
+		unlock_page(page);
+		put_page(page);
+		page = NULL;
+	}
	spin_lock(&fi->rdc.lock);
	if (!page) {
		/*
+15
fs/xfs/libxfs/xfs_ag.h
@@ -133 +133 @@
	return true;
 }

+static inline bool
+xfs_verify_agbext(
+	struct xfs_perag	*pag,
+	xfs_agblock_t		agbno,
+	xfs_agblock_t		len)
+{
+	if (agbno + len <= agbno)
+		return false;
+
+	if (!xfs_verify_agbno(pag, agbno))
+		return false;
+
+	return xfs_verify_agbno(pag, agbno + len - 1);
+}
+
 /*
  * Verify that an AG inode number pointer neither points outside the AG
  * nor points at static metadata.
+1-5
fs/xfs/libxfs/xfs_alloc.c
@@ -263 +263 @@
		goto out_bad_rec;

	/* check for valid extent range, including overflow */
-	if (!xfs_verify_agbno(pag, *bno))
-		goto out_bad_rec;
-	if (*bno > *bno + *len)
-		goto out_bad_rec;
-	if (!xfs_verify_agbno(pag, *bno + *len - 1))
+	if (!xfs_verify_agbext(pag, *bno, *len))
		goto out_bad_rec;

	return 0;
+7-2
fs/xfs/libxfs/xfs_dir2_leaf.c
@@ -146 +146 @@
	xfs_dir2_leaf_tail_t	*ltp;
	int			stale;
	int			i;
+	bool			isleaf1 = (hdr->magic == XFS_DIR2_LEAF1_MAGIC ||
+					   hdr->magic == XFS_DIR3_LEAF1_MAGIC);

	ltp = xfs_dir2_leaf_tail_p(geo, leaf);
@@ -160 +158 @@
		return __this_address;

	/* Leaves and bests don't overlap in leaf format. */
-	if ((hdr->magic == XFS_DIR2_LEAF1_MAGIC ||
-	     hdr->magic == XFS_DIR3_LEAF1_MAGIC) &&
+	if (isleaf1 &&
	    (char *)&hdr->ents[hdr->count] > (char *)xfs_dir2_leaf_bests_p(ltp))
		return __this_address;
@@ -176 +175 @@
		}
		if (hdr->ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
			stale++;
+		if (isleaf1 && xfs_dir2_dataptr_to_db(geo,
+				be32_to_cpu(hdr->ents[i].address)) >=
+				be32_to_cpu(ltp->bestcount))
+			return __this_address;
	}
	if (hdr->stale != stale)
		return __this_address;
+1-21
fs/xfs/libxfs/xfs_format.h
@@ -1564 +1564 @@
 #define RMAPBT_UNUSED_OFFSET_BITLEN	7
 #define RMAPBT_OFFSET_BITLEN		54

-#define XFS_RMAP_ATTR_FORK	(1 << 0)
-#define XFS_RMAP_BMBT_BLOCK	(1 << 1)
-#define XFS_RMAP_UNWRITTEN	(1 << 2)
-#define XFS_RMAP_KEY_FLAGS	(XFS_RMAP_ATTR_FORK | \
-				 XFS_RMAP_BMBT_BLOCK)
-#define XFS_RMAP_REC_FLAGS	(XFS_RMAP_UNWRITTEN)
-struct xfs_rmap_irec {
-	xfs_agblock_t	rm_startblock;	/* extent start block */
-	xfs_extlen_t	rm_blockcount;	/* extent length */
-	uint64_t	rm_owner;	/* extent owner */
-	uint64_t	rm_offset;	/* offset within the owner */
-	unsigned int	rm_flags;	/* state flags */
-};
-
 /*
  * Key structure
  *
@@ -1612 +1626 @@
  * on the startblock. This speeds up mount time deletion of stale
  * staging extents because they're all at the right side of the tree.
  */
-#define XFS_REFC_COW_START	((xfs_agblock_t)(1U << 31))
+#define XFS_REFC_COWFLAG	(1U << 31)
 #define REFCNTBT_COWFLAG_BITLEN	1
 #define REFCNTBT_AGBLOCK_BITLEN	31
@@ -1624 +1638 @@

 struct xfs_refcount_key {
	__be32		rc_startblock;	/* starting block number */
-};
-
-struct xfs_refcount_irec {
-	xfs_agblock_t	rc_startblock;	/* starting block number */
-	xfs_extlen_t	rc_blockcount;	/* count of free blocks */
-	xfs_nlink_t	rc_refcount;	/* number of inodes linked here */
 };

 #define MAXREFCOUNT	((xfs_nlink_t)~0U)
+54-6
fs/xfs/libxfs/xfs_log_format.h
@@ -613 +613 @@
	uint16_t		efi_size;	/* size of this item */
	uint32_t		efi_nextents;	/* # extents to free */
	uint64_t		efi_id;		/* efi identifier */
-	xfs_extent_t		efi_extents[1];	/* array of extents to free */
+	xfs_extent_t		efi_extents[];	/* array of extents to free */
 } xfs_efi_log_format_t;
+
+static inline size_t
+xfs_efi_log_format_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efi_log_format) +
+			nr * sizeof(struct xfs_extent);
+}

 typedef struct xfs_efi_log_format_32 {
	uint16_t		efi_type;	/* efi log item type */
	uint16_t		efi_size;	/* size of this item */
	uint32_t		efi_nextents;	/* # extents to free */
	uint64_t		efi_id;		/* efi identifier */
-	xfs_extent_32_t		efi_extents[1];	/* array of extents to free */
+	xfs_extent_32_t		efi_extents[];	/* array of extents to free */
 } __attribute__((packed)) xfs_efi_log_format_32_t;
+
+static inline size_t
+xfs_efi_log_format32_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efi_log_format_32) +
+			nr * sizeof(struct xfs_extent_32);
+}

 typedef struct xfs_efi_log_format_64 {
	uint16_t		efi_type;	/* efi log item type */
	uint16_t		efi_size;	/* size of this item */
	uint32_t		efi_nextents;	/* # extents to free */
	uint64_t		efi_id;		/* efi identifier */
-	xfs_extent_64_t		efi_extents[1];	/* array of extents to free */
+	xfs_extent_64_t		efi_extents[];	/* array of extents to free */
 } xfs_efi_log_format_64_t;
+
+static inline size_t
+xfs_efi_log_format64_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efi_log_format_64) +
+			nr * sizeof(struct xfs_extent_64);
+}

 /*
  * This is the structure used to lay out an efd log item in the
@@ -666 +642 @@
	uint16_t		efd_size;	/* size of this item */
	uint32_t		efd_nextents;	/* # of extents freed */
	uint64_t		efd_efi_id;	/* id of corresponding efi */
-	xfs_extent_t		efd_extents[1];	/* array of extents freed */
+	xfs_extent_t		efd_extents[];	/* array of extents freed */
 } xfs_efd_log_format_t;
+
+static inline size_t
+xfs_efd_log_format_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efd_log_format) +
+			nr * sizeof(struct xfs_extent);
+}

 typedef struct xfs_efd_log_format_32 {
	uint16_t		efd_type;	/* efd log item type */
	uint16_t		efd_size;	/* size of this item */
	uint32_t		efd_nextents;	/* # of extents freed */
	uint64_t		efd_efi_id;	/* id of corresponding efi */
-	xfs_extent_32_t		efd_extents[1];	/* array of extents freed */
+	xfs_extent_32_t		efd_extents[];	/* array of extents freed */
 } __attribute__((packed)) xfs_efd_log_format_32_t;
+
+static inline size_t
+xfs_efd_log_format32_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efd_log_format_32) +
+			nr * sizeof(struct xfs_extent_32);
+}

 typedef struct xfs_efd_log_format_64 {
	uint16_t		efd_type;	/* efd log item type */
	uint16_t		efd_size;	/* size of this item */
	uint32_t		efd_nextents;	/* # of extents freed */
	uint64_t		efd_efi_id;	/* id of corresponding efi */
-	xfs_extent_64_t		efd_extents[1];	/* array of extents freed */
+	xfs_extent_64_t		efd_extents[];	/* array of extents freed */
 } xfs_efd_log_format_64_t;
+
+static inline size_t
+xfs_efd_log_format64_sizeof(
+	unsigned int		nr)
+{
+	return sizeof(struct xfs_efd_log_format_64) +
+			nr * sizeof(struct xfs_extent_64);
+}

 /*
  * RUI/RUD (reverse mapping) log format definitions
+199-87
fs/xfs/libxfs/xfs_refcount.c
@@ -46 +46 @@
 int
 xfs_refcount_lookup_le(
	struct xfs_btree_cur	*cur,
+	enum xfs_refc_domain	domain,
	xfs_agblock_t		bno,
	int			*stat)
 {
-	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+			xfs_refcount_encode_startblock(bno, domain),
			XFS_LOOKUP_LE);
	cur->bc_rec.rc.rc_startblock = bno;
	cur->bc_rec.rc.rc_blockcount = 0;
+	cur->bc_rec.rc.rc_domain = domain;
	return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
 }

@@ -66 +63 @@
 int
 xfs_refcount_lookup_ge(
	struct xfs_btree_cur	*cur,
+	enum xfs_refc_domain	domain,
	xfs_agblock_t		bno,
	int			*stat)
 {
-	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+			xfs_refcount_encode_startblock(bno, domain),
			XFS_LOOKUP_GE);
	cur->bc_rec.rc.rc_startblock = bno;
	cur->bc_rec.rc.rc_blockcount = 0;
+	cur->bc_rec.rc.rc_domain = domain;
	return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
 }

@@ -86 +80 @@
 int
 xfs_refcount_lookup_eq(
	struct xfs_btree_cur	*cur,
+	enum xfs_refc_domain	domain,
	xfs_agblock_t		bno,
	int			*stat)
 {
-	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno, bno,
+	trace_xfs_refcount_lookup(cur->bc_mp, cur->bc_ag.pag->pag_agno,
+			xfs_refcount_encode_startblock(bno, domain),
			XFS_LOOKUP_LE);
	cur->bc_rec.rc.rc_startblock = bno;
	cur->bc_rec.rc.rc_blockcount = 0;
+	cur->bc_rec.rc.rc_domain = domain;
	return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
 }

@@ -105 +96 @@
	const union xfs_btree_rec	*rec,
	struct xfs_refcount_irec	*irec)
 {
-	irec->rc_startblock = be32_to_cpu(rec->refc.rc_startblock);
+	uint32_t			start;
+
+	start = be32_to_cpu(rec->refc.rc_startblock);
+	if (start & XFS_REFC_COWFLAG) {
+		start &= ~XFS_REFC_COWFLAG;
+		irec->rc_domain = XFS_REFC_DOMAIN_COW;
+	} else {
+		irec->rc_domain = XFS_REFC_DOMAIN_SHARED;
+	}
+
+	irec->rc_startblock = start;
	irec->rc_blockcount = be32_to_cpu(rec->refc.rc_blockcount);
	irec->rc_refcount = be32_to_cpu(rec->refc.rc_refcount);
 }
@@ -133 +114 @@
	struct xfs_perag		*pag = cur->bc_ag.pag;
	union xfs_btree_rec		*rec;
	int				error;
-	xfs_agblock_t			realstart;

	error = xfs_btree_get_rec(cur, &rec, stat);
	if (error || !*stat)
@@ -142 +124 @@
	if (irec->rc_blockcount == 0 || irec->rc_blockcount > MAXREFCEXTLEN)
		goto out_bad_rec;

-	/* handle special COW-staging state */
-	realstart = irec->rc_startblock;
-	if (realstart & XFS_REFC_COW_START) {
-		if (irec->rc_refcount != 1)
-			goto out_bad_rec;
-		realstart &= ~XFS_REFC_COW_START;
-	} else if (irec->rc_refcount < 2) {
+	if (!xfs_refcount_check_domain(irec))
		goto out_bad_rec;
-	}

	/* check for valid extent range, including overflow */
-	if (!xfs_verify_agbno(pag, realstart))
-		goto out_bad_rec;
-	if (realstart > realstart + irec->rc_blockcount)
-		goto out_bad_rec;
-	if (!xfs_verify_agbno(pag, realstart + irec->rc_blockcount - 1))
+	if (!xfs_verify_agbext(pag, irec->rc_startblock, irec->rc_blockcount))
		goto out_bad_rec;

	if (irec->rc_refcount == 0 || irec->rc_refcount > MAXREFCOUNT)
@@ -176 +169 @@
	struct xfs_refcount_irec	*irec)
 {
	union xfs_btree_rec	rec;
+	uint32_t		start;
	int			error;

	trace_xfs_refcount_update(cur->bc_mp, cur->bc_ag.pag->pag_agno, irec);
-	rec.refc.rc_startblock = cpu_to_be32(irec->rc_startblock);
+
+	start = xfs_refcount_encode_startblock(irec->rc_startblock,
+			irec->rc_domain);
+	rec.refc.rc_startblock = cpu_to_be32(start);
	rec.refc.rc_blockcount = cpu_to_be32(irec->rc_blockcount);
	rec.refc.rc_refcount = cpu_to_be32(irec->rc_refcount);
+
	error = xfs_btree_update(cur, &rec);
	if (error)
		trace_xfs_refcount_update_error(cur->bc_mp,
@@ -208 +196 @@
	int				error;

	trace_xfs_refcount_insert(cur->bc_mp, cur->bc_ag.pag->pag_agno, irec);
+
	cur->bc_rec.rc.rc_startblock = irec->rc_startblock;
	cur->bc_rec.rc.rc_blockcount = irec->rc_blockcount;
	cur->bc_rec.rc.rc_refcount = irec->rc_refcount;
+	cur->bc_rec.rc.rc_domain = irec->rc_domain;
+
	error = xfs_btree_insert(cur, i);
	if (error)
		goto out_error;
@@ -259 +244 @@
	}
	if (error)
		goto out_error;
-	error = xfs_refcount_lookup_ge(cur, irec.rc_startblock, &found_rec);
+	error = xfs_refcount_lookup_ge(cur, irec.rc_domain, irec.rc_startblock,
+			&found_rec);
 out_error:
	if (error)
		trace_xfs_refcount_delete_error(cur->bc_mp,
@@ -359 +343 @@
 STATIC int
 xfs_refcount_split_extent(
	struct xfs_btree_cur		*cur,
+	enum xfs_refc_domain		domain,
	xfs_agblock_t			agbno,
	bool				*shape_changed)
 {
@@ -368 +351 @@
	int				error;

	*shape_changed = false;
-	error = xfs_refcount_lookup_le(cur, agbno, &found_rec);
+	error = xfs_refcount_lookup_le(cur, domain, agbno, &found_rec);
	if (error)
		goto out_error;
	if (!found_rec)
@@ -381 +364 @@
		error = -EFSCORRUPTED;
		goto out_error;
	}
+	if (rcext.rc_domain != domain)
+		return 0;
	if (rcext.rc_startblock == agbno || xfs_refc_next(&rcext) <= agbno)
		return 0;
@@ -434 +415 @@
	trace_xfs_refcount_merge_center_extents(cur->bc_mp,
			cur->bc_ag.pag->pag_agno, left, center, right);

+	ASSERT(left->rc_domain == center->rc_domain);
+	ASSERT(right->rc_domain == center->rc_domain);
+
	/*
	 * Make sure the center and right extents are not in the btree.
	 * If the center extent was synthesized, the first delete call
@@ -445 +423 @@
	 * call removes the center and the second one removes the right
	 * extent.
	 */
-	error = xfs_refcount_lookup_ge(cur, center->rc_startblock,
-			&found_rec);
+	error = xfs_refcount_lookup_ge(cur, center->rc_domain,
+			center->rc_startblock, &found_rec);
	if (error)
		goto out_error;
	if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -473 +451 @@
	}

	/* Enlarge the left extent. */
-	error = xfs_refcount_lookup_le(cur, left->rc_startblock,
-			&found_rec);
+	error = xfs_refcount_lookup_le(cur, left->rc_domain,
+			left->rc_startblock, &found_rec);
	if (error)
		goto out_error;
	if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -513 +491 @@
	trace_xfs_refcount_merge_left_extent(cur->bc_mp,
			cur->bc_ag.pag->pag_agno, left, cleft);

+	ASSERT(left->rc_domain == cleft->rc_domain);
+
	/* If the extent at agbno (cleft) wasn't synthesized, remove it. */
	if (cleft->rc_refcount > 1) {
-		error = xfs_refcount_lookup_le(cur, cleft->rc_startblock,
-				&found_rec);
+		error = xfs_refcount_lookup_le(cur, cleft->rc_domain,
+				cleft->rc_startblock, &found_rec);
		if (error)
			goto out_error;
		if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -536 +512 @@
	}

	/* Enlarge the left extent. */
-	error = xfs_refcount_lookup_le(cur, left->rc_startblock,
-			&found_rec);
+	error = xfs_refcount_lookup_le(cur, left->rc_domain,
+			left->rc_startblock, &found_rec);
	if (error)
		goto out_error;
	if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -576 +552 @@
	trace_xfs_refcount_merge_right_extent(cur->bc_mp,
			cur->bc_ag.pag->pag_agno, cright, right);

+	ASSERT(right->rc_domain == cright->rc_domain);
+
	/*
	 * If the extent ending at agbno+aglen (cright) wasn't synthesized,
	 * remove it.
	 */
	if (cright->rc_refcount > 1) {
-		error = xfs_refcount_lookup_le(cur, cright->rc_startblock,
-				&found_rec);
+		error = xfs_refcount_lookup_le(cur, cright->rc_domain,
+				cright->rc_startblock, &found_rec);
		if (error)
			goto out_error;
		if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -602 +576 @@
	}

	/* Enlarge the right extent. */
-	error = xfs_refcount_lookup_le(cur, right->rc_startblock,
-			&found_rec);
+	error = xfs_refcount_lookup_le(cur, right->rc_domain,
+			right->rc_startblock, &found_rec);
	if (error)
		goto out_error;
	if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
@@ -626 +600 @@
	return error;
 }

-#define XFS_FIND_RCEXT_SHARED	1
-#define XFS_FIND_RCEXT_COW	2
 /*
  * Find the left extent and the one after it (cleft). This function assumes
  * that we've already split any extent crossing agbno.
@@ -635 +611 @@
	struct xfs_btree_cur		*cur,
	struct xfs_refcount_irec	*left,
	struct xfs_refcount_irec	*cleft,
+	enum xfs_refc_domain		domain,
	xfs_agblock_t			agbno,
-	xfs_extlen_t			aglen,
-	int				flags)
+	xfs_extlen_t			aglen)
 {
	struct xfs_refcount_irec	tmp;
	int				error;
	int				found_rec;

	left->rc_startblock = cleft->rc_startblock = NULLAGBLOCK;
-	error = xfs_refcount_lookup_le(cur, agbno - 1, &found_rec);
+	error = xfs_refcount_lookup_le(cur, domain, agbno - 1, &found_rec);
	if (error)
		goto out_error;
	if (!found_rec)
@@ -658 +634 @@
		goto out_error;
	}

+	if (tmp.rc_domain != domain)
+		return 0;
	if (xfs_refc_next(&tmp) != agbno)
-		return 0;
-	if ((flags & XFS_FIND_RCEXT_SHARED) && tmp.rc_refcount < 2)
-		return 0;
-	if ((flags & XFS_FIND_RCEXT_COW) && tmp.rc_refcount > 1)
		return 0;
	/* We have a left extent; retrieve (or invent) the next right one */
	*left = tmp;
@@ -676 +654 @@
		error = -EFSCORRUPTED;
		goto out_error;
	}
+
+	if (tmp.rc_domain != domain)
+		goto not_found;

	/* if tmp starts at the end of our range, just use that */
	if (tmp.rc_startblock == agbno)
@@ -696 +671 @@
			cleft->rc_blockcount = min(aglen,
					tmp.rc_startblock - agbno);
			cleft->rc_refcount = 1;
+			cleft->rc_domain = domain;
		}
	} else {
+not_found:
		/*
		 * No extents, so pretend that there's one covering the whole
		 * range.
@@ -707 +680 @@
		cleft->rc_startblock = agbno;
		cleft->rc_blockcount = aglen;
		cleft->rc_refcount = 1;
+		cleft->rc_domain = domain;
	}
	trace_xfs_refcount_find_left_extent(cur->bc_mp, cur->bc_ag.pag->pag_agno,
			left, cleft, agbno);
@@ -728 +700 @@
	struct xfs_btree_cur		*cur,
	struct xfs_refcount_irec	*right,
	struct xfs_refcount_irec	*cright,
+	enum xfs_refc_domain		domain,
	xfs_agblock_t			agbno,
-	xfs_extlen_t			aglen,
-	int				flags)
+	xfs_extlen_t			aglen)
 {
	struct xfs_refcount_irec	tmp;
	int				error;
	int				found_rec;

	right->rc_startblock = cright->rc_startblock = NULLAGBLOCK;
-	error = xfs_refcount_lookup_ge(cur, agbno + aglen, &found_rec);
+	error = xfs_refcount_lookup_ge(cur, domain, agbno + aglen, &found_rec);
	if (error)
		goto out_error;
	if (!found_rec)
@@ -751 +723 @@
		goto out_error;
	}

+	if (tmp.rc_domain != domain)
+		return 0;
	if (tmp.rc_startblock != agbno + aglen)
-		return 0;
-	if ((flags & XFS_FIND_RCEXT_SHARED) && tmp.rc_refcount < 2)
-		return 0;
-	if ((flags & XFS_FIND_RCEXT_COW) && tmp.rc_refcount > 1)
		return 0;
	/* We have a right extent; retrieve (or invent) the next left one */
	*right = tmp;
@@ -769 +743 @@
		error = -EFSCORRUPTED;
		goto out_error;
	}
+
+	if (tmp.rc_domain != domain)
+		goto not_found;

	/* if tmp ends at the end of our range, just use that */
	if (xfs_refc_next(&tmp) == agbno + aglen)
@@ -789 +760 @@
			cright->rc_blockcount = right->rc_startblock -
					cright->rc_startblock;
			cright->rc_refcount = 1;
+			cright->rc_domain = domain;
		}
	} else {
+not_found:
		/*
		 * No extents, so pretend that there's one covering the whole
		 * range.
@@ -800 +769 @@
		cright->rc_startblock = agbno;
		cright->rc_blockcount = aglen;
		cright->rc_refcount = 1;
+		cright->rc_domain = domain;
	}
	trace_xfs_refcount_find_right_extent(cur->bc_mp, cur->bc_ag.pag->pag_agno,
			cright, right, agbno + aglen);
@@ -826 +794 @@
 STATIC int
 xfs_refcount_merge_extents(
	struct xfs_btree_cur	*cur,
+	enum xfs_refc_domain	domain,
	xfs_agblock_t		*agbno,
	xfs_extlen_t		*aglen,
	enum xfs_refc_adjust_op adjust,
-	int			flags,
	bool			*shape_changed)
 {
	struct xfs_refcount_irec	left = {0}, cleft = {0};
@@ -844 +812 @@
	 * just below (agbno + aglen) [cright], and just above (agbno + aglen)
	 * [right].
	 */
-	error = xfs_refcount_find_left_extents(cur, &left, &cleft, *agbno,
-			*aglen, flags);
+	error = xfs_refcount_find_left_extents(cur, &left, &cleft, domain,
+			*agbno, *aglen);
	if (error)
		return error;
-	error = xfs_refcount_find_right_extents(cur, &right, &cright, *agbno,
-			*aglen, flags);
+	error = xfs_refcount_find_right_extents(cur, &right, &cright, domain,
+			*agbno, *aglen);
	if (error)
		return error;
@@ -902 +870 @@
				aglen);
	}

-	return error;
+	return 0;
 }
@@ -965 +933 @@
	if (*aglen == 0)
		return 0;

-	error = xfs_refcount_lookup_ge(cur, *agbno, &found_rec);
+	error = xfs_refcount_lookup_ge(cur, XFS_REFC_DOMAIN_SHARED, *agbno,
+			&found_rec);
	if (error)
		goto out_error;
@@ -974 +941 @@
		error = xfs_refcount_get_rec(cur, &ext, &found_rec);
		if (error)
			goto out_error;
-		if (!found_rec) {
+		if (!found_rec || ext.rc_domain != XFS_REFC_DOMAIN_SHARED) {
			ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks;
			ext.rc_blockcount = 0;
			ext.rc_refcount = 0;
+			ext.rc_domain = XFS_REFC_DOMAIN_SHARED;
		}

		/*
@@ -991 +957 @@
			tmp.rc_blockcount = min(*aglen,
					ext.rc_startblock - *agbno);
			tmp.rc_refcount = 1 + adj;
+			tmp.rc_domain = XFS_REFC_DOMAIN_SHARED;
+
			trace_xfs_refcount_modify_extent(cur->bc_mp,
					cur->bc_ag.pag->pag_agno, &tmp);
@@ -1022 +986 @@
			(*agbno) += tmp.rc_blockcount;
			(*aglen) -= tmp.rc_blockcount;

-			error = xfs_refcount_lookup_ge(cur, *agbno,
+			/* Stop if there's nothing left to modify */
+			if (*aglen == 0 || !xfs_refcount_still_have_space(cur))
+				break;
+
+			/* Move the cursor to the start of ext. */
+			error = xfs_refcount_lookup_ge(cur,
+					XFS_REFC_DOMAIN_SHARED, *agbno,
					&found_rec);
			if (error)
				goto out_error;
		}

-		/* Stop if there's nothing left to modify */
-		if (*aglen == 0 || !xfs_refcount_still_have_space(cur))
-			break;
+		/*
+		 * A previous step trimmed agbno/aglen such that the end of the
+		 * range would not be in the middle of the record. If this is
+		 * no longer the case, something is seriously wrong with the
+		 * btree. Make sure we never feed the synthesized record into
+		 * the processing loop below.
+		 */
+		if (XFS_IS_CORRUPT(cur->bc_mp, ext.rc_blockcount == 0) ||
+		    XFS_IS_CORRUPT(cur->bc_mp, ext.rc_blockcount > *aglen)) {
+			error = -EFSCORRUPTED;
+			goto out_error;
+		}

		/*
		 * Adjust the reference count and either update the tree
@@ -1121 +1070 @@
	/*
	 * Ensure that no rcextents cross the boundary of the adjustment range.
	 */
-	error = xfs_refcount_split_extent(cur, agbno, &shape_changed);
+	error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_SHARED,
+			agbno, &shape_changed);
	if (error)
		goto out_error;
	if (shape_changed)
		shape_changes++;

-	error = xfs_refcount_split_extent(cur, agbno + aglen, &shape_changed);
+	error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_SHARED,
+			agbno + aglen, &shape_changed);
	if (error)
		goto out_error;
	if (shape_changed)
@@ -1138 +1085 @@
	/*
	 * Try to merge with the left or right extents of the range.
	 */
-	error = xfs_refcount_merge_extents(cur, new_agbno, new_aglen, adj,
-			XFS_FIND_RCEXT_SHARED, &shape_changed);
+	error = xfs_refcount_merge_extents(cur, XFS_REFC_DOMAIN_SHARED,
+			new_agbno, new_aglen, adj, &shape_changed);
	if (error)
		goto out_error;
	if (shape_changed)
@@ -1175 +1122 @@
	xfs_btree_del_cursor(rcur, error);
	if (error)
		xfs_trans_brelse(tp, agbp);
+}
+
+/*
+ * Set up a continuation a deferred refcount operation by updating the intent.
+ * Checks to make sure we're not going to run off the end of the AG.
+ */
+static inline int
+xfs_refcount_continue_op(
+	struct xfs_btree_cur		*cur,
+	xfs_fsblock_t			startblock,
+	xfs_agblock_t			new_agbno,
+	xfs_extlen_t			new_len,
+	xfs_fsblock_t			*new_fsbno)
+{
+	struct xfs_mount		*mp = cur->bc_mp;
+	struct xfs_perag		*pag = cur->bc_ag.pag;
+
+	if (XFS_IS_CORRUPT(mp, !xfs_verify_agbext(pag, new_agbno, new_len)))
+		return -EFSCORRUPTED;
+
+	*new_fsbno = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+
+	ASSERT(xfs_verify_fsbext(mp, *new_fsbno, new_len));
+	ASSERT(pag->pag_agno == XFS_FSB_TO_AGNO(mp, *new_fsbno));
+
+	return 0;
 }

 /*
@@ -1270 +1191 @@
	case XFS_REFCOUNT_INCREASE:
		error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
				new_len, XFS_REFCOUNT_ADJUST_INCREASE);
-		*new_fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+		if (error)
+			goto out_drop;
+		if (*new_len > 0)
+			error = xfs_refcount_continue_op(rcur, startblock,
+					new_agbno, *new_len, new_fsb);
		break;
	case XFS_REFCOUNT_DECREASE:
		error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
				new_len, XFS_REFCOUNT_ADJUST_DECREASE);
-		*new_fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, new_agbno);
+		if (error)
+			goto out_drop;
+		if (*new_len > 0)
+			error = xfs_refcount_continue_op(rcur, startblock,
+					new_agbno, *new_len, new_fsb);
		break;
	case XFS_REFCOUNT_ALLOC_COW:
		*new_fsb = startblock + blockcount;
@@ -1394 +1307 @@
	*flen = 0;

	/* Try to find a refcount extent that crosses the start */
-	error = xfs_refcount_lookup_le(cur, agbno, &have);
+	error = xfs_refcount_lookup_le(cur, XFS_REFC_DOMAIN_SHARED, agbno,
+			&have);
	if (error)
		goto out_error;
	if (!have) {
@@ -1413 +1325 @@
		error = -EFSCORRUPTED;
		goto out_error;
	}
+	if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED)
+		goto done;

	/* If the extent ends before the start, look at the next one */
	if (tmp.rc_startblock + tmp.rc_blockcount <= agbno) {
@@ -1430 +1340 @@
			error = -EFSCORRUPTED;
			goto out_error;
		}
+		if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED)
+			goto done;
	}

	/* If the extent starts after the range we want, bail out */
@@ -1463 +1371 @@
			error = -EFSCORRUPTED;
			goto out_error;
		}
-		if (tmp.rc_startblock >= agbno + aglen ||
+		if (tmp.rc_domain != XFS_REFC_DOMAIN_SHARED ||
+		    tmp.rc_startblock >= agbno + aglen ||
		    tmp.rc_startblock != *fbno + *flen)
			break;
		*flen = min(*flen + tmp.rc_blockcount, agbno + aglen - *fbno);
@@ -1548 +1455 @@
		return 0;

	/* Find any overlapping refcount records */
-	error = xfs_refcount_lookup_ge(cur, agbno, &found_rec);
+	error = xfs_refcount_lookup_ge(cur, XFS_REFC_DOMAIN_COW, agbno,
+			&found_rec);
	if (error)
		goto out_error;
	error = xfs_refcount_get_rec(cur, &ext, &found_rec);
	if (error)
		goto out_error;
+	if (XFS_IS_CORRUPT(cur->bc_mp, found_rec &&
+				ext.rc_domain != XFS_REFC_DOMAIN_COW)) {
+		error = -EFSCORRUPTED;
+		goto out_error;
+	}
	if (!found_rec) {
-		ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks +
-				XFS_REFC_COW_START;
+		ext.rc_startblock = cur->bc_mp->m_sb.sb_agblocks;
		ext.rc_blockcount = 0;
		ext.rc_refcount = 0;
+		ext.rc_domain = XFS_REFC_DOMAIN_COW;
	}

	switch (adj) {
@@ -1579 +1480 @@
		tmp.rc_startblock = agbno;
		tmp.rc_blockcount = aglen;
		tmp.rc_refcount = 1;
+		tmp.rc_domain = XFS_REFC_DOMAIN_COW;
+
		trace_xfs_refcount_modify_extent(cur->bc_mp,
				cur->bc_ag.pag->pag_agno, &tmp);
@@ -1643 +1542 @@
	bool		shape_changed;
	int		error;

-	agbno += XFS_REFC_COW_START;
-
	/*
	 * Ensure that no rcextents cross the boundary of the adjustment range.
	 */
-	error = xfs_refcount_split_extent(cur, agbno, &shape_changed);
+	error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_COW,
+			agbno, &shape_changed);
	if (error)
		goto out_error;

-	error = xfs_refcount_split_extent(cur, agbno + aglen, &shape_changed);
+	error = xfs_refcount_split_extent(cur, XFS_REFC_DOMAIN_COW,
+			agbno + aglen, &shape_changed);
	if (error)
		goto out_error;

	/*
	 * Try to merge with the left or right extents of the range.
	 */
-	error = xfs_refcount_merge_extents(cur, &agbno, &aglen, adj,
-			XFS_FIND_RCEXT_COW, &shape_changed);
+	error = xfs_refcount_merge_extents(cur, XFS_REFC_DOMAIN_COW, &agbno,
+			&aglen, adj, &shape_changed);
	if (error)
		goto out_error;
@@ -1767 +1666 @@
			be32_to_cpu(rec->refc.rc_refcount) != 1))
		return -EFSCORRUPTED;

-	rr = kmem_alloc(sizeof(struct xfs_refcount_recovery), 0);
+	rr = kmalloc(sizeof(struct xfs_refcount_recovery),
+			GFP_KERNEL | __GFP_NOFAIL);
+	INIT_LIST_HEAD(&rr->rr_list);
	xfs_refcount_btrec_to_irec(rec, &rr->rr_rrec);
-	list_add_tail(&rr->rr_list, debris);

+	if (XFS_IS_CORRUPT(cur->bc_mp,
+			rr->rr_rrec.rc_domain != XFS_REFC_DOMAIN_COW)) {
+		kfree(rr);
+		return -EFSCORRUPTED;
+	}
+
+	list_add_tail(&rr->rr_list, debris);
	return 0;
 }
@@ -1796 +1687 @@
	union xfs_btree_irec		low;
	union
xfs_btree_irec high;17981689 xfs_fsblock_t fsb;17991799- xfs_agblock_t agbno;18001690 int error;1801169118021802- if (mp->m_sb.sb_agblocks >= XFS_REFC_COW_START)16921692+ /* reflink filesystems mustn't have AGs larger than 2^31-1 blocks */16931693+ BUILD_BUG_ON(XFS_MAX_CRC_AG_BLOCKS >= XFS_REFC_COWFLAG);16941694+ if (mp->m_sb.sb_agblocks > XFS_MAX_CRC_AG_BLOCKS)18031695 return -EOPNOTSUPP;1804169618051697 INIT_LIST_HEAD(&debris);···18271717 /* Find all the leftover CoW staging extents. */18281718 memset(&low, 0, sizeof(low));18291719 memset(&high, 0, sizeof(high));18301830- low.rc.rc_startblock = XFS_REFC_COW_START;17201720+ low.rc.rc_domain = high.rc.rc_domain = XFS_REFC_DOMAIN_COW;18311721 high.rc.rc_startblock = -1U;18321722 error = xfs_btree_query_range(cur, &low, &high,18331723 xfs_refcount_recover_extent, &debris);···18481738 &rr->rr_rrec);1849173918501740 /* Free the orphan record */18511851- agbno = rr->rr_rrec.rc_startblock - XFS_REFC_COW_START;18521852- fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno, agbno);17411741+ fsb = XFS_AGB_TO_FSB(mp, pag->pag_agno,17421742+ rr->rr_rrec.rc_startblock);18531743 xfs_refcount_free_cow_extent(tp, fsb,18541744 rr->rr_rrec.rc_blockcount);18551745···18611751 goto out_free;1862175218631753 list_del(&rr->rr_list);18641864- kmem_free(rr);17541754+ kfree(rr);18651755 }1866175618671757 return error;···18711761 /* Free the leftover list */18721762 list_for_each_entry_safe(rr, n, &debris, rr_list) {18731763 list_del(&rr->rr_list);18741874- kmem_free(rr);17641764+ kfree(rr);18751765 }18761766 return error;18771767}···18801770int18811771xfs_refcount_has_record(18821772 struct xfs_btree_cur *cur,17731773+ enum xfs_refc_domain domain,18831774 xfs_agblock_t bno,18841775 xfs_extlen_t len,18851776 bool *exists)···18921781 low.rc.rc_startblock = bno;18931782 memset(&high, 0xFF, sizeof(high));18941783 high.rc.rc_startblock = bno + len - 1;17841784+ low.rc.rc_domain = high.rc.rc_domain = domain;1895178518961786 return xfs_btree_has_record(cur, 
&low, &high, exists);18971787}
fs/xfs/libxfs/xfs_refcount.h | +36 -4

···
 struct xfs_refcount_irec;
 
 extern int xfs_refcount_lookup_le(struct xfs_btree_cur *cur,
-		xfs_agblock_t bno, int *stat);
+		enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
 extern int xfs_refcount_lookup_ge(struct xfs_btree_cur *cur,
-		xfs_agblock_t bno, int *stat);
+		enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
 extern int xfs_refcount_lookup_eq(struct xfs_btree_cur *cur,
-		xfs_agblock_t bno, int *stat);
+		enum xfs_refc_domain domain, xfs_agblock_t bno, int *stat);
 extern int xfs_refcount_get_rec(struct xfs_btree_cur *cur,
 		struct xfs_refcount_irec *irec, int *stat);
+
+static inline uint32_t
+xfs_refcount_encode_startblock(
+	xfs_agblock_t startblock,
+	enum xfs_refc_domain domain)
+{
+	uint32_t start;
+
+	/*
+	 * low level btree operations need to handle the generic btree range
+	 * query functions (which set rc_domain == -1U), so we check that the
+	 * domain is /not/ shared.
+	 */
+	start = startblock & ~XFS_REFC_COWFLAG;
+	if (domain != XFS_REFC_DOMAIN_SHARED)
+		start |= XFS_REFC_COWFLAG;
+
+	return start;
+}
 
 enum xfs_refcount_intent_type {
 	XFS_REFCOUNT_INCREASE = 1,
···
 	xfs_extlen_t ri_blockcount;
 	xfs_fsblock_t ri_startblock;
 };
+
+/* Check that the refcount is appropriate for the record domain. */
+static inline bool
+xfs_refcount_check_domain(
+	const struct xfs_refcount_irec *irec)
+{
+	if (irec->rc_domain == XFS_REFC_DOMAIN_COW && irec->rc_refcount != 1)
+		return false;
+	if (irec->rc_domain == XFS_REFC_DOMAIN_SHARED && irec->rc_refcount < 2)
+		return false;
+	return true;
+}
 
 void xfs_refcount_increase_extent(struct xfs_trans *tp,
 		struct xfs_bmbt_irec *irec);
···
 #define XFS_REFCOUNT_ITEM_OVERHEAD	32
 
 extern int xfs_refcount_has_record(struct xfs_btree_cur *cur,
-		xfs_agblock_t bno, xfs_extlen_t len, bool *exists);
+		enum xfs_refc_domain domain, xfs_agblock_t bno,
+		xfs_extlen_t len, bool *exists);
 union xfs_btree_rec;
 extern void xfs_refcount_btrec_to_irec(const union xfs_btree_rec *rec,
 		struct xfs_refcount_irec *irec);
···
 		goto out_bad_rec;
 	} else {
 		/* check for valid extent range, including overflow */
-		if (!xfs_verify_agbno(pag, irec->rm_startblock))
-			goto out_bad_rec;
-		if (irec->rm_startblock >
-				irec->rm_startblock + irec->rm_blockcount)
-			goto out_bad_rec;
-		if (!xfs_verify_agbno(pag,
-				irec->rm_startblock + irec->rm_blockcount - 1))
+		if (!xfs_verify_agbext(pag, irec->rm_startblock,
+					   irec->rm_blockcount))
 			goto out_bad_rec;
 	}
 
fs/xfs/libxfs/xfs_trans_resv.c | +2 -2

···
 
 /*
  * In renaming files we can modify:
- *    the five inodes involved: 4 * inode size
+ *    the five inodes involved: 5 * inode size
  *    the two directory btrees: 2 * (max depth + v2) * dir block size
  *    the two directory bmap btrees: 2 * max depth * block size
 * And the bmap_finish transaction can free dir and bmap blocks (two sets
···
 	struct xfs_mount *mp)
 {
 	return XFS_DQUOT_LOGRES(mp) +
-		max((xfs_calc_inode_res(mp, 4) +
+		max((xfs_calc_inode_res(mp, 5) +
 		     xfs_calc_buf_res(2 * XFS_DIROP_LOG_COUNT(mp),
 				      XFS_FSB_TO_B(mp, 1))),
 		    (xfs_calc_buf_res(7, mp->m_sb.sb_sectsize) +
···
 	bno = be32_to_cpu(rec->alloc.ar_startblock);
 	len = be32_to_cpu(rec->alloc.ar_blockcount);
 
-	if (bno + len <= bno ||
-	    !xfs_verify_agbno(pag, bno) ||
-	    !xfs_verify_agbno(pag, bno + len - 1))
+	if (!xfs_verify_agbext(pag, bno, len))
 		xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
 
 	xchk_allocbt_xref(bs->sc, bno, len);
fs/xfs/scrub/ialloc.c | +2 -3

···
 	xfs_agblock_t bno;
 
 	bno = XFS_AGINO_TO_AGBNO(mp, agino);
-	if (bno + len <= bno ||
-	    !xfs_verify_agbno(pag, bno) ||
-	    !xfs_verify_agbno(pag, bno + len - 1))
+
+	if (!xfs_verify_agbext(pag, bno, len))
 		xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
 
 	xchk_iallocbt_chunk_xref(bs->sc, irec, agino, bno, len);
fs/xfs/scrub/refcount.c | +30 -42

···
 STATIC void
 xchk_refcountbt_xref_rmap(
 	struct xfs_scrub *sc,
-	xfs_agblock_t bno,
-	xfs_extlen_t len,
-	xfs_nlink_t refcount)
+	const struct xfs_refcount_irec *irec)
 {
 	struct xchk_refcnt_check refchk = {
-		.sc = sc,
-		.bno = bno,
-		.len = len,
-		.refcount = refcount,
+		.sc = sc,
+		.bno = irec->rc_startblock,
+		.len = irec->rc_blockcount,
+		.refcount = irec->rc_refcount,
 		.seen = 0,
 	};
 	struct xfs_rmap_irec low;
···
 
 	/* Cross-reference with the rmapbt to confirm the refcount. */
 	memset(&low, 0, sizeof(low));
-	low.rm_startblock = bno;
+	low.rm_startblock = irec->rc_startblock;
 	memset(&high, 0xFF, sizeof(high));
-	high.rm_startblock = bno + len - 1;
+	high.rm_startblock = irec->rc_startblock + irec->rc_blockcount - 1;
 
 	INIT_LIST_HEAD(&refchk.fragments);
 	error = xfs_rmap_query_range(sc->sa.rmap_cur, &low, &high,
···
 		goto out_free;
 
 	xchk_refcountbt_process_rmap_fragments(&refchk);
-	if (refcount != refchk.seen)
+	if (irec->rc_refcount != refchk.seen)
 		xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
 
 out_free:
···
 /* Cross-reference with the other btrees. */
 STATIC void
 xchk_refcountbt_xref(
-	struct xfs_scrub *sc,
-	xfs_agblock_t agbno,
-	xfs_extlen_t len,
-	xfs_nlink_t refcount)
+	struct xfs_scrub *sc,
+	const struct xfs_refcount_irec *irec)
 {
 	if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
 		return;
 
-	xchk_xref_is_used_space(sc, agbno, len);
-	xchk_xref_is_not_inode_chunk(sc, agbno, len);
-	xchk_refcountbt_xref_rmap(sc, agbno, len, refcount);
+	xchk_xref_is_used_space(sc, irec->rc_startblock, irec->rc_blockcount);
+	xchk_xref_is_not_inode_chunk(sc, irec->rc_startblock,
+			irec->rc_blockcount);
+	xchk_refcountbt_xref_rmap(sc, irec);
 }
 
 /* Scrub a refcountbt record. */
···
 	struct xchk_btree *bs,
 	const union xfs_btree_rec *rec)
 {
+	struct xfs_refcount_irec irec;
 	xfs_agblock_t *cow_blocks = bs->private;
 	struct xfs_perag *pag = bs->cur->bc_ag.pag;
-	xfs_agblock_t bno;
-	xfs_extlen_t len;
-	xfs_nlink_t refcount;
-	bool has_cowflag;
 
-	bno = be32_to_cpu(rec->refc.rc_startblock);
-	len = be32_to_cpu(rec->refc.rc_blockcount);
-	refcount = be32_to_cpu(rec->refc.rc_refcount);
+	xfs_refcount_btrec_to_irec(rec, &irec);
 
-	/* Only CoW records can have refcount == 1. */
-	has_cowflag = (bno & XFS_REFC_COW_START);
-	if ((refcount == 1 && !has_cowflag) || (refcount != 1 && has_cowflag))
+	/* Check the domain and refcount are not incompatible. */
+	if (!xfs_refcount_check_domain(&irec))
 		xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
-	if (has_cowflag)
-		(*cow_blocks) += len;
+
+	if (irec.rc_domain == XFS_REFC_DOMAIN_COW)
+		(*cow_blocks) += irec.rc_blockcount;
 
 	/* Check the extent. */
-	bno &= ~XFS_REFC_COW_START;
-	if (bno + len <= bno ||
-	    !xfs_verify_agbno(pag, bno) ||
-	    !xfs_verify_agbno(pag, bno + len - 1))
+	if (!xfs_verify_agbext(pag, irec.rc_startblock, irec.rc_blockcount))
 		xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
 
-	if (refcount == 0)
+	if (irec.rc_refcount == 0)
 		xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
 
-	xchk_refcountbt_xref(bs->sc, bno, len, refcount);
+	xchk_refcountbt_xref(bs->sc, &irec);
 
 	return 0;
 }
···
 	xfs_extlen_t len)
 {
 	struct xfs_refcount_irec rc;
-	bool has_cowflag;
 	int has_refcount;
 	int error;
···
 		return;
 
 	/* Find the CoW staging extent. */
-	error = xfs_refcount_lookup_le(sc->sa.refc_cur,
-			agbno + XFS_REFC_COW_START, &has_refcount);
+	error = xfs_refcount_lookup_le(sc->sa.refc_cur, XFS_REFC_DOMAIN_COW,
+			agbno, &has_refcount);
 	if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
 		return;
 	if (!has_refcount) {
···
 		return;
 	}
 
-	/* CoW flag must be set, refcount must be 1. */
-	has_cowflag = (rc.rc_startblock & XFS_REFC_COW_START);
-	if (!has_cowflag || rc.rc_refcount != 1)
+	/* CoW lookup returned a shared extent record? */
+	if (rc.rc_domain != XFS_REFC_DOMAIN_COW)
 		xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
 
 	/* Must be at least as long as what was passed in */
···
 	if (!sc->sa.refc_cur || xchk_skip_xref(sc->sm))
 		return;
 
-	error = xfs_refcount_has_record(sc->sa.refc_cur, agbno, len, &shared);
+	error = xfs_refcount_has_record(sc->sa.refc_cur, XFS_REFC_DOMAIN_SHARED,
+			agbno, len, &shared);
 	if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
 		return;
 	if (shared)
fs/xfs/xfs_attr_item.c | +33 -34

···
 	return attrip;
 }
 
-/*
- * Copy an attr format buffer from the given buf, and into the destination attr
- * format structure.
- */
-STATIC int
-xfs_attri_copy_format(
-	struct xfs_log_iovec *buf,
-	struct xfs_attri_log_format *dst_attr_fmt)
-{
-	struct xfs_attri_log_format *src_attr_fmt = buf->i_addr;
-	size_t len;
-
-	len = sizeof(struct xfs_attri_log_format);
-	if (buf->i_len != len) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
-		return -EFSCORRUPTED;
-	}
-
-	memcpy((char *)dst_attr_fmt, (char *)src_attr_fmt, len);
-	return 0;
-}
-
 static inline struct xfs_attrd_log_item *ATTRD_ITEM(struct xfs_log_item *lip)
 {
 	return container_of(lip, struct xfs_attrd_log_item, attrd_item);
···
 	struct xfs_attri_log_nameval *nv;
 	const void *attr_value = NULL;
 	const void *attr_name;
-	int error;
+	size_t len;
 
 	attri_formatp = item->ri_buf[0].i_addr;
 	attr_name = item->ri_buf[1].i_addr;
 
 	/* Validate xfs_attri_log_format before the large memory allocation */
+	len = sizeof(struct xfs_attri_log_format);
+	if (item->ri_buf[0].i_len != len) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+		return -EFSCORRUPTED;
+	}
+
 	if (!xfs_attri_validate(mp, attri_formatp)) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+		return -EFSCORRUPTED;
+	}
+
+	/* Validate the attr name */
+	if (item->ri_buf[1].i_len !=
+			xlog_calc_iovec_len(attri_formatp->alfi_name_len)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
 		return -EFSCORRUPTED;
 	}
 
 	if (!xfs_attr_namecheck(attr_name, attri_formatp->alfi_name_len)) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[1].i_addr, item->ri_buf[1].i_len);
 		return -EFSCORRUPTED;
 	}
 
-	if (attri_formatp->alfi_value_len)
+	/* Validate the attr value, if present */
+	if (attri_formatp->alfi_value_len != 0) {
+		if (item->ri_buf[2].i_len !=
+				xlog_calc_iovec_len(attri_formatp->alfi_value_len)) {
+			XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+					item->ri_buf[0].i_addr,
+					item->ri_buf[0].i_len);
+			return -EFSCORRUPTED;
+		}
+
 		attr_value = item->ri_buf[2].i_addr;
+	}
 
 	/*
 	 * Memory alloc failure will cause replay to abort. We attach the
···
 			attri_formatp->alfi_value_len);
 
 	attrip = xfs_attri_init(mp, nv);
-	error = xfs_attri_copy_format(&item->ri_buf[0], &attrip->attri_format);
-	if (error)
-		goto out;
+	memcpy(&attrip->attri_format, attri_formatp, len);
 
 	/*
 	 * The ATTRI has two references. One for the ATTRD and one for ATTRI to
···
 	xfs_attri_release(attrip);
 	xfs_attri_log_nameval_put(nv);
 	return 0;
-out:
-	xfs_attri_item_free(attrip);
-	xfs_attri_log_nameval_put(nv);
-	return error;
 }
 
 /*
···
 
 	attrd_formatp = item->ri_buf[0].i_addr;
 	if (item->ri_buf[0].i_len != sizeof(struct xfs_attrd_log_format)) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
 		return -EFSCORRUPTED;
 	}
 
fs/xfs/xfs_bmap_item.c | +29 -27

···
 	.iop_relog	= xfs_bui_item_relog,
 };
 
-/*
- * Copy an BUI format buffer from the given buf, and into the destination
- * BUI format structure.  The BUI/BUD items were designed not to need any
- * special alignment handling.
- */
-static int
+static inline void
 xfs_bui_copy_format(
-	struct xfs_log_iovec *buf,
-	struct xfs_bui_log_format *dst_bui_fmt)
+	struct xfs_bui_log_format *dst,
+	const struct xfs_bui_log_format *src)
 {
-	struct xfs_bui_log_format *src_bui_fmt;
-	uint len;
+	unsigned int i;
 
-	src_bui_fmt = buf->i_addr;
-	len = xfs_bui_log_format_sizeof(src_bui_fmt->bui_nextents);
+	memcpy(dst, src, offsetof(struct xfs_bui_log_format, bui_extents));
 
-	if (buf->i_len == len) {
-		memcpy(dst_bui_fmt, src_bui_fmt, len);
-		return 0;
-	}
-	XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
-	return -EFSCORRUPTED;
+	for (i = 0; i < src->bui_nextents; i++)
+		memcpy(&dst->bui_extents[i], &src->bui_extents[i],
+				sizeof(struct xfs_map_extent));
 }
 
 /*
···
 	struct xlog_recover_item *item,
 	xfs_lsn_t lsn)
 {
-	int error;
 	struct xfs_mount *mp = log->l_mp;
 	struct xfs_bui_log_item *buip;
 	struct xfs_bui_log_format *bui_formatp;
+	size_t len;
 
 	bui_formatp = item->ri_buf[0].i_addr;
 
-	if (bui_formatp->bui_nextents != XFS_BUI_MAX_FAST_EXTENTS) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
+	if (item->ri_buf[0].i_len < xfs_bui_log_format_sizeof(0)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
 		return -EFSCORRUPTED;
 	}
-	buip = xfs_bui_init(mp);
-	error = xfs_bui_copy_format(&item->ri_buf[0], &buip->bui_format);
-	if (error) {
-		xfs_bui_item_free(buip);
-		return error;
+
+	if (bui_formatp->bui_nextents != XFS_BUI_MAX_FAST_EXTENTS) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+		return -EFSCORRUPTED;
 	}
+
+	len = xfs_bui_log_format_sizeof(bui_formatp->bui_nextents);
+	if (item->ri_buf[0].i_len != len) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+		return -EFSCORRUPTED;
+	}
+
+	buip = xfs_bui_init(mp);
+	xfs_bui_copy_format(&buip->bui_format, bui_formatp);
 	atomic_set(&buip->bui_next_extent, bui_formatp->bui_nextents);
 	/*
 	 * Insert the intent into the AIL directly and drop one reference so
···
 
 	bud_formatp = item->ri_buf[0].i_addr;
 	if (item->ri_buf[0].i_len != sizeof(struct xfs_bud_log_format)) {
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
 		return -EFSCORRUPTED;
 	}
 
fs/xfs/xfs_error.c | +7 -2

···
 xfs_errortag_init(
 	struct xfs_mount *mp)
 {
+	int ret;
+
 	mp->m_errortag = kmem_zalloc(sizeof(unsigned int) * XFS_ERRTAG_MAX,
 			KM_MAYFAIL);
 	if (!mp->m_errortag)
 		return -ENOMEM;
 
-	return xfs_sysfs_init(&mp->m_errortag_kobj, &xfs_errortag_ktype,
-			&mp->m_kobj, "errortag");
+	ret = xfs_sysfs_init(&mp->m_errortag_kobj, &xfs_errortag_ktype,
+			&mp->m_kobj, "errortag");
+	if (ret)
+		kmem_free(mp->m_errortag);
+	return ret;
 }
 
 void
fs/xfs/xfs_extfree_item.c | +45 -49

···
 	xfs_efi_item_free(efip);
 }
 
-/*
- * This returns the number of iovecs needed to log the given efi item.
- * We only need 1 iovec for an efi item.  It just logs the efi_log_format
- * structure.
- */
-static inline int
-xfs_efi_item_sizeof(
-	struct xfs_efi_log_item *efip)
-{
-	return sizeof(struct xfs_efi_log_format) +
-			(efip->efi_format.efi_nextents - 1) * sizeof(xfs_extent_t);
-}
-
 STATIC void
 xfs_efi_item_size(
 	struct xfs_log_item *lip,
 	int *nvecs,
 	int *nbytes)
 {
+	struct xfs_efi_log_item *efip = EFI_ITEM(lip);
+
 	*nvecs += 1;
-	*nbytes += xfs_efi_item_sizeof(EFI_ITEM(lip));
+	*nbytes += xfs_efi_log_format_sizeof(efip->efi_format.efi_nextents);
 }
 
 /*
···
 
 	xlog_copy_iovec(lv, &vecp, XLOG_REG_TYPE_EFI_FORMAT,
 			&efip->efi_format,
-			xfs_efi_item_sizeof(efip));
+			xfs_efi_log_format_sizeof(efip->efi_format.efi_nextents));
 }
 
···
 {
 	struct xfs_efi_log_item *efip;
-	uint size;
 
 	ASSERT(nextents > 0);
 	if (nextents > XFS_EFI_MAX_FAST_EXTENTS) {
-		size = (uint)(sizeof(struct xfs_efi_log_item) +
-			((nextents - 1) * sizeof(xfs_extent_t)));
-		efip = kmem_zalloc(size, 0);
+		efip = kzalloc(xfs_efi_log_item_sizeof(nextents),
+				GFP_KERNEL | __GFP_NOFAIL);
 	} else {
 		efip = kmem_cache_zalloc(xfs_efi_cache,
 				GFP_KERNEL | __GFP_NOFAIL);
···
 {
 	xfs_efi_log_format_t *src_efi_fmt = buf->i_addr;
 	uint i;
-	uint len = sizeof(xfs_efi_log_format_t) +
-		(src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_t);
-	uint len32 = sizeof(xfs_efi_log_format_32_t) +
-		(src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_32_t);
-	uint len64 = sizeof(xfs_efi_log_format_64_t) +
-		(src_efi_fmt->efi_nextents - 1) * sizeof(xfs_extent_64_t);
+	uint len = xfs_efi_log_format_sizeof(src_efi_fmt->efi_nextents);
+	uint len32 = xfs_efi_log_format32_sizeof(src_efi_fmt->efi_nextents);
+	uint len64 = xfs_efi_log_format64_sizeof(src_efi_fmt->efi_nextents);
 
 	if (buf->i_len == len) {
-		memcpy((char *)dst_efi_fmt, (char *)src_efi_fmt, len);
+		memcpy(dst_efi_fmt, src_efi_fmt,
+			offsetof(struct xfs_efi_log_format, efi_extents));
+		for (i = 0; i < src_efi_fmt->efi_nextents; i++)
+			memcpy(&dst_efi_fmt->efi_extents[i],
+				&src_efi_fmt->efi_extents[i],
+				sizeof(struct xfs_extent));
 		return 0;
 	} else if (buf->i_len == len32) {
 		xfs_efi_log_format_32_t *src_efi_fmt_32 = buf->i_addr;
···
 		}
 		return 0;
 	}
-	XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
+	XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, NULL, buf->i_addr,
+			buf->i_len);
 	return -EFSCORRUPTED;
 }
 
···
 	kmem_cache_free(xfs_efd_cache, efdp);
 }
 
-/*
- * This returns the number of iovecs needed to log the given efd item.
- * We only need 1 iovec for an efd item.  It just logs the efd_log_format
- * structure.
- */
-static inline int
-xfs_efd_item_sizeof(
-	struct xfs_efd_log_item *efdp)
-{
-	return sizeof(xfs_efd_log_format_t) +
-			(efdp->efd_format.efd_nextents - 1) * sizeof(xfs_extent_t);
-}
-
 STATIC void
 xfs_efd_item_size(
 	struct xfs_log_item *lip,
 	int *nvecs,
 	int *nbytes)
 {
+	struct xfs_efd_log_item *efdp = EFD_ITEM(lip);
+
 	*nvecs += 1;
-	*nbytes += xfs_efd_item_sizeof(EFD_ITEM(lip));
+	*nbytes += xfs_efd_log_format_sizeof(efdp->efd_format.efd_nextents);
 }
 
 /*
···
 
 	xlog_copy_iovec(lv, &vecp, XLOG_REG_TYPE_EFD_FORMAT,
 			&efdp->efd_format,
-			xfs_efd_item_sizeof(efdp));
+			xfs_efd_log_format_sizeof(efdp->efd_format.efd_nextents));
 }
 
 /*
···
 	ASSERT(nextents > 0);
 
 	if (nextents > XFS_EFD_MAX_FAST_EXTENTS) {
-		efdp = kmem_zalloc(sizeof(struct xfs_efd_log_item) +
-				(nextents - 1) * sizeof(struct xfs_extent),
-				0);
+		efdp = kzalloc(xfs_efd_log_item_sizeof(nextents),
+				GFP_KERNEL | __GFP_NOFAIL);
 	} else {
 		efdp = kmem_cache_zalloc(xfs_efd_cache,
 				GFP_KERNEL | __GFP_NOFAIL);
···
 
 	efi_formatp = item->ri_buf[0].i_addr;
 
+	if (item->ri_buf[0].i_len < xfs_efi_log_format_sizeof(0)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp,
+				item->ri_buf[0].i_addr, item->ri_buf[0].i_len);
+		return -EFSCORRUPTED;
+	}
+
 	efip = xfs_efi_init(mp, efi_formatp->efi_nextents);
 	error = xfs_efi_copy_format(&item->ri_buf[0], &efip->efi_format);
 	if (error) {
···
 	xfs_lsn_t lsn)
 {
 	struct xfs_efd_log_format *efd_formatp;
+	int buflen = item->ri_buf[0].i_len;
 
 	efd_formatp = item->ri_buf[0].i_addr;
-	ASSERT((item->ri_buf[0].i_len == (sizeof(xfs_efd_log_format_32_t) +
-		((efd_formatp->efd_nextents - 1) * sizeof(xfs_extent_32_t)))) ||
-	       (item->ri_buf[0].i_len == (sizeof(xfs_efd_log_format_64_t) +
-		((efd_formatp->efd_nextents - 1) * sizeof(xfs_extent_64_t)))));
+
+	if (buflen < sizeof(struct xfs_efd_log_format)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+				efd_formatp, buflen);
+		return -EFSCORRUPTED;
+	}
+
+	if (item->ri_buf[0].i_len != xfs_efd_log_format32_sizeof(
+						efd_formatp->efd_nextents) &&
+	    item->ri_buf[0].i_len != xfs_efd_log_format64_sizeof(
+						efd_formatp->efd_nextents)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, log->l_mp,
+				efd_formatp, buflen);
+		return -EFSCORRUPTED;
+	}
 
 	xlog_recover_release_intent(log, XFS_LI_EFI, efd_formatp->efd_efi_id);
 	return 0;
fs/xfs/xfs_extfree_item.h | +16

···
 	xfs_efi_log_format_t efi_format;
 };
 
+static inline size_t
+xfs_efi_log_item_sizeof(
+	unsigned int nr)
+{
+	return offsetof(struct xfs_efi_log_item, efi_format) +
+			xfs_efi_log_format_sizeof(nr);
+}
+
 /*
  * This is the "extent free done" log item.  It is used to log
  * the fact that some extents earlier mentioned in an efi item
···
 	uint efd_next_extent;
 	xfs_efd_log_format_t efd_format;
 };
+
+static inline size_t
+xfs_efd_log_item_sizeof(
+	unsigned int nr)
+{
+	return offsetof(struct xfs_efd_log_item, efd_format) +
+			xfs_efd_log_format_sizeof(nr);
+}
 
 /*
  * Max number of extents in fast allocation path.
···
 	 * Lock all the participating inodes. Depending upon whether
 	 * the target_name exists in the target directory, and
 	 * whether the target directory is the same as the source
-	 * directory, we can lock from 2 to 4 inodes.
+	 * directory, we can lock from 2 to 5 inodes.
 	 */
 	xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
 
fs/xfs/xfs_log_recover.c | +8 -2

···
 	for (lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
 	     lip != NULL;
 	     lip = xfs_trans_ail_cursor_next(ailp, &cur)) {
+		const struct xfs_item_ops *ops;
+
 		if (!xlog_item_is_intent(lip))
 			break;
···
 		 * deferred ops, you /must/ attach them to the capture list in
 		 * the recover routine or else those subsequent intents will be
 		 * replayed in the wrong order!
+		 *
+		 * The recovery function can free the log item, so we must not
+		 * access lip after it returns.
 		 */
 		spin_unlock(&ailp->ail_lock);
-		error = lip->li_ops->iop_recover(lip, &capture_list);
+		ops = lip->li_ops;
+		error = ops->iop_recover(lip, &capture_list);
 		spin_lock(&ailp->ail_lock);
 		if (error) {
 			trace_xlog_intent_recovery_failed(log->l_mp, error,
-					ops->iop_recover);
+					ops->iop_recover);
 			break;
 		}
 	}
···
 {
 	/* Most test cases want 2 distinct CPUs. */
 	if (num_online_cpus() < 2)
-		return -EINVAL;
+		kunit_skip(test, "not enough cpus");
 
 	/* Want the system to not use breakpoints elsewhere. */
 	if (hw_breakpoint_is_used())
-		return -EBUSY;
+		kunit_skip(test, "hw breakpoint already in use");
 
 	return 0;
 }
kernel/kprobes.c | +4 -1

···
 	if (!kprobes_all_disarmed && kprobe_disabled(p)) {
 		p->flags &= ~KPROBE_FLAG_DISABLED;
 		ret = arm_kprobe(p);
-		if (ret)
+		if (ret) {
 			p->flags |= KPROBE_FLAG_DISABLED;
+			if (p != kp)
+				kp->flags |= KPROBE_FLAG_DISABLED;
+		}
 	}
 out:
 	mutex_unlock(&kprobe_mutex);
kernel/trace/fprobe.c | +4 -1

···
 		return -E2BIG;
 
 	fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler);
+	if (!fp->rethook)
+		return -ENOMEM;
 	for (i = 0; i < size; i++) {
 		struct fprobe_rethook_node *node;
···
 {
 	int ret;
 
-	if (!fp || fp->ops.func != fprobe_handler)
+	if (!fp || (fp->ops.saved_func != fprobe_handler &&
+		    fp->ops.saved_func != fprobe_kprobe_handler))
 		return -EINVAL;
 
 	/*
kernel/trace/ftrace.c | +3 -13

···
 		command |= FTRACE_UPDATE_TRACE_FUNC;
 	}
 
-	if (!command || !ftrace_enabled) {
-		/*
-		 * If these are dynamic or per_cpu ops, they still
-		 * need their data freed. Since, function tracing is
-		 * not currently active, we can just free them
-		 * without synchronizing all CPUs.
-		 */
-		if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
-			goto free_ops;
-
-		return 0;
-	}
+	if (!command || !ftrace_enabled)
+		goto out;
 
 	/*
 	 * If the ops uses a trampoline, then it needs to be
···
 	removed_ops = NULL;
 	ops->flags &= ~FTRACE_OPS_FL_REMOVING;
 
+out:
 	/*
 	 * Dynamic ops may be freed, we must make sure that all
 	 * callers are done before leaving this function.
···
 		if (IS_ENABLED(CONFIG_PREEMPTION))
 			synchronize_rcu_tasks();
 
- free_ops:
 		ftrace_trampoline_free(ops);
 	}
 
+7-11
kernel/trace/kprobe_event_gen_test.c
···100100 KPROBE_GEN_TEST_FUNC,101101 KPROBE_GEN_TEST_ARG0, KPROBE_GEN_TEST_ARG1);102102 if (ret)103103- goto free;103103+ goto out;104104105105 /* Use kprobe_event_add_fields to add the rest of the fields */106106107107 ret = kprobe_event_add_fields(&cmd, KPROBE_GEN_TEST_ARG2, KPROBE_GEN_TEST_ARG3);108108 if (ret)109109- goto free;109109+ goto out;110110111111 /*112112 * This actually creates the event.113113 */114114 ret = kprobe_event_gen_cmd_end(&cmd);115115 if (ret)116116- goto free;116116+ goto out;117117118118 /*119119 * Now get the gen_kprobe_test event file. We need to prevent···136136 goto delete;137137 }138138 out:139139+ kfree(buf);139140 return ret;140141 delete:141142 /* We got an error after creating the event, delete it */142143 ret = kprobe_event_delete("gen_kprobe_test");143143- free:144144- kfree(buf);145145-146144 goto out;147145}148146···168170 KPROBE_GEN_TEST_FUNC,169171 "$retval");170172 if (ret)171171- goto free;173173+ goto out;172174173175 /*174176 * This actually creates the event.175177 */176178 ret = kretprobe_event_gen_cmd_end(&cmd);177179 if (ret)178178- goto free;180180+ goto out;179181180182 /*181183 * Now get the gen_kretprobe_test event file. We need to···199201 goto delete;200202 }201203 out:204204+ kfree(buf);202205 return ret;203206 delete:204207 /* We got an error after creating the event, delete it */205208 ret = kprobe_event_delete("gen_kretprobe_test");206206- free:207207- kfree(buf);208208-209209 goto out;210210}211211
+11
kernel/trace/ring_buffer.c
···937937 struct ring_buffer_per_cpu *cpu_buffer;938938 struct rb_irq_work *rbwork;939939940940+ if (!buffer)941941+ return;942942+940943 if (cpu == RING_BUFFER_ALL_CPUS) {941944942945 /* Wake up individual ones too. One level recursion */···948945949946 rbwork = &buffer->irq_work;950947 } else {948948+ if (WARN_ON_ONCE(!buffer->buffers))949949+ return;950950+ if (WARN_ON_ONCE(cpu >= nr_cpu_ids))951951+ return;952952+951953 cpu_buffer = buffer->buffers[cpu];954954+ /* The CPU buffer may not have been initialized yet */955955+ if (!cpu_buffer)956956+ return;952957 rbwork = &cpu_buffer->irq_work;953958 }954959
+15-26
lib/nlattr.c
···124124 range->max = U8_MAX;125125 break;126126 case NLA_U16:127127+ case NLA_BE16:127128 case NLA_BINARY:128129 range->max = U16_MAX;129130 break;130131 case NLA_U32:132132+ case NLA_BE32:131133 range->max = U32_MAX;132134 break;133135 case NLA_U64:···161159 }162160}163161164164-static u64 nla_get_attr_bo(const struct nla_policy *pt,165165- const struct nlattr *nla)166166-{167167- switch (pt->type) {168168- case NLA_U16:169169- if (pt->network_byte_order)170170- return ntohs(nla_get_be16(nla));171171-172172- return nla_get_u16(nla);173173- case NLA_U32:174174- if (pt->network_byte_order)175175- return ntohl(nla_get_be32(nla));176176-177177- return nla_get_u32(nla);178178- case NLA_U64:179179- if (pt->network_byte_order)180180- return be64_to_cpu(nla_get_be64(nla));181181-182182- return nla_get_u64(nla);183183- }184184-185185- WARN_ON_ONCE(1);186186- return 0;187187-}188188-189162static int nla_validate_range_unsigned(const struct nla_policy *pt,190163 const struct nlattr *nla,191164 struct netlink_ext_ack *extack,···174197 value = nla_get_u8(nla);175198 break;176199 case NLA_U16:200200+ value = nla_get_u16(nla);201201+ break;177202 case NLA_U32:203203+ value = nla_get_u32(nla);204204+ break;178205 case NLA_U64:179179- value = nla_get_attr_bo(pt, nla);206206+ value = nla_get_u64(nla);180207 break;181208 case NLA_MSECS:182209 value = nla_get_u64(nla);183210 break;184211 case NLA_BINARY:185212 value = nla_len(nla);213213+ break;214214+ case NLA_BE16:215215+ value = ntohs(nla_get_be16(nla));216216+ break;217217+ case NLA_BE32:218218+ value = ntohl(nla_get_be32(nla));186219 break;187220 default:188221 return -EINVAL;···321334 case NLA_U64:322335 case NLA_MSECS:323336 case NLA_BINARY:337337+ case NLA_BE16:338338+ case NLA_BE32:324339 return nla_validate_range_unsigned(pt, nla, extack, validate);325340 case NLA_S8:326341 case NLA_S16:
+12-6
net/bluetooth/hci_conn.c
···10671067 hdev->acl_cnt += conn->sent;10681068 } else {10691069 struct hci_conn *acl = conn->link;10701070+10701071 if (acl) {10711072 acl->link = NULL;10721073 hci_conn_drop(acl);10741074+ }10751075+10761076+ /* Unacked ISO frames */10771077+ if (conn->type == ISO_LINK) {10781078+ if (hdev->iso_pkts)10791079+ hdev->iso_cnt += conn->sent;10801080+ else if (hdev->le_pkts)10811081+ hdev->le_cnt += conn->sent;10821082+ else10831083+ hdev->acl_cnt += conn->sent;10731084 }10741085 }10751086···17721761 if (!cis)17731762 return ERR_PTR(-ENOMEM);17741763 cis->cleanup = cis_cleanup;17641764+ cis->dst_type = dst_type;17751765 }1776176617771767 if (cis->state == BT_CONNECTED)···21512139{21522140 struct hci_conn *le;21532141 struct hci_conn *cis;21542154-21552155- /* Convert from ISO socket address type to HCI address type */21562156- if (dst_type == BDADDR_LE_PUBLIC)21572157- dst_type = ADDR_LE_DEV_PUBLIC;21582158- else21592159- dst_type = ADDR_LE_DEV_RANDOM;2160214221612143 if (hci_dev_test_flag(hdev, HCI_ADVERTISING))21622144 le = hci_connect_le(hdev, dst, dst_type, false,
+12-2
net/bluetooth/iso.c
···235235 return err;236236}237237238238+static inline u8 le_addr_type(u8 bdaddr_type)239239+{240240+ if (bdaddr_type == BDADDR_LE_PUBLIC)241241+ return ADDR_LE_DEV_PUBLIC;242242+ else243243+ return ADDR_LE_DEV_RANDOM;244244+}245245+238246static int iso_connect_bis(struct sock *sk)239247{240248 struct iso_conn *conn;···336328 /* Just bind if DEFER_SETUP has been set */337329 if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {338330 hcon = hci_bind_cis(hdev, &iso_pi(sk)->dst,339339- iso_pi(sk)->dst_type, &iso_pi(sk)->qos);331331+ le_addr_type(iso_pi(sk)->dst_type),332332+ &iso_pi(sk)->qos);340333 if (IS_ERR(hcon)) {341334 err = PTR_ERR(hcon);342335 goto done;343336 }344337 } else {345338 hcon = hci_connect_cis(hdev, &iso_pi(sk)->dst,346346- iso_pi(sk)->dst_type, &iso_pi(sk)->qos);339339+ le_addr_type(iso_pi(sk)->dst_type),340340+ &iso_pi(sk)->qos);347341 if (IS_ERR(hcon)) {348342 err = PTR_ERR(hcon);349343 goto done;
+73-13
net/bluetooth/l2cap_core.c
···19901990 if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)19911991 continue;1992199219931993- if (c->psm == psm) {19931993+ if (c->chan_type != L2CAP_CHAN_FIXED && c->psm == psm) {19941994 int src_match, dst_match;19951995 int src_any, dst_any;19961996···37643764 l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,37653765 sizeof(rfc), (unsigned long) &rfc, endptr - ptr);3766376637673767- if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) {37673767+ if (remote_efs &&37683768+ test_bit(FLAG_EFS_ENABLE, &chan->flags)) {37683769 chan->remote_id = efs.id;37693770 chan->remote_stype = efs.stype;37703771 chan->remote_msdu = le16_to_cpu(efs.msdu);···58145813 BT_DBG("psm 0x%2.2x scid 0x%4.4x mtu %u mps %u", __le16_to_cpu(psm),58155814 scid, mtu, mps);5816581558165816+ /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A58175817+ * page 1059:58185818+ *58195819+ * Valid range: 0x0001-0x00ff58205820+ *58215821+ * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges58225822+ */58235823+ if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {58245824+ result = L2CAP_CR_LE_BAD_PSM;58255825+ chan = NULL;58265826+ goto response;58275827+ }58285828+58175829 /* Check if we have socket listening on psm */58185830 pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,58195831 &conn->hcon->dst, LE_LINK);···60146000 }6015600160166002 psm = req->psm;60036003+60046004+ /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A60056005+ * page 1059:60066006+ *60076007+ * Valid range: 0x0001-0x00ff60086008+ *60096009+ * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges60106010+ */60116011+ if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {60126012+ result = L2CAP_CR_LE_BAD_PSM;60136013+ goto response;60146014+ }6017601560186016 BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);60196017···69116885 struct l2cap_ctrl *control,69126886 struct sk_buff *skb, u8 event)69136887{68886888+ struct l2cap_ctrl local_control;69146889 int err = 
0;69156890 bool skb_in_use = false;69166891···69366909 chan->buffer_seq = chan->expected_tx_seq;69376910 skb_in_use = true;6938691169126912+ /* l2cap_reassemble_sdu may free skb, hence invalidate69136913+ * control, so make a copy in advance to use it after69146914+ * l2cap_reassemble_sdu returns and to avoid the race69156915+ * condition, for example:69166916+ *69176917+ * The current thread calls:69186918+ * l2cap_reassemble_sdu69196919+ * chan->ops->recv == l2cap_sock_recv_cb69206920+ * __sock_queue_rcv_skb69216921+ * Another thread calls:69226922+ * bt_sock_recvmsg69236923+ * skb_recv_datagram69246924+ * skb_free_datagram69256925+ * Then the current thread tries to access control, but69266926+ * it was freed by skb_free_datagram.69276927+ */69286928+ local_control = *control;69396929 err = l2cap_reassemble_sdu(chan, skb, control);69406930 if (err)69416931 break;6942693269436943- if (control->final) {69336933+ if (local_control.final) {69446934 if (!test_and_clear_bit(CONN_REJ_ACT,69456935 &chan->conn_state)) {69466946- control->final = 0;69476947- l2cap_retransmit_all(chan, control);69366936+ local_control.final = 0;69376937+ l2cap_retransmit_all(chan, &local_control);69486938 l2cap_ertm_send(chan);69496939 }69506940 }···73417297static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,73427298 struct sk_buff *skb)73437299{73007300+ /* l2cap_reassemble_sdu may free skb, hence invalidate control, so store73017301+ * the txseq field in advance to use it after l2cap_reassemble_sdu73027302+ * returns and to avoid the race condition, for example:73037303+ *73047304+ * The current thread calls:73057305+ * l2cap_reassemble_sdu73067306+ * chan->ops->recv == l2cap_sock_recv_cb73077307+ * __sock_queue_rcv_skb73087308+ * Another thread calls:73097309+ * bt_sock_recvmsg73107310+ * skb_recv_datagram73117311+ * skb_free_datagram73127312+ * Then the current thread tries to access control, but it was freed by73137313+ * skb_free_datagram.73147314+ 
*/73157315+ u16 txseq = control->txseq;73167316+73447317 BT_DBG("chan %p, control %p, skb %p, state %d", chan, control, skb,73457318 chan->rx_state);7346731973477347- if (l2cap_classify_txseq(chan, control->txseq) ==73487348- L2CAP_TXSEQ_EXPECTED) {73207320+ if (l2cap_classify_txseq(chan, txseq) == L2CAP_TXSEQ_EXPECTED) {73497321 l2cap_pass_to_tx(chan, control);7350732273517323 BT_DBG("buffer_seq %u->%u", chan->buffer_seq,···73847324 }73857325 }7386732673877387- chan->last_acked_seq = control->txseq;73887388- chan->expected_tx_seq = __next_seq(chan, control->txseq);73277327+ chan->last_acked_seq = txseq;73287328+ chan->expected_tx_seq = __next_seq(chan, txseq);7389732973907330 return 0;73917331}···76417581 return;76427582 }7643758375847584+ l2cap_chan_hold(chan);76447585 l2cap_chan_lock(chan);76457586 } else {76467587 BT_DBG("unknown cid 0x%4.4x", cid);···84878426 * expected length.84888427 */84898428 if (skb->len < L2CAP_LEN_SIZE) {84908490- if (l2cap_recv_frag(conn, skb, conn->mtu) < 0)84918491- goto drop;84928492- return;84298429+ l2cap_recv_frag(conn, skb, conn->mtu);84308430+ break;84938431 }8494843284958433 len = get_unaligned_le16(skb->data) + L2CAP_HDR_SIZE;···8532847285338473 /* Header still could not be read just continue */85348474 if (conn->rx_skb->len < L2CAP_LEN_SIZE)85358535- return;84758475+ break;85368476 }8537847785388478 if (skb->len > conn->rx_len) {
···4242#define AHASH_MAX_SIZE (6 * AHASH_INIT_SIZE)4343/* Max muber of elements in the array block when tuned */4444#define AHASH_MAX_TUNED 644545-4645#define AHASH_MAX(h) ((h)->bucketsize)4747-4848-/* Max number of elements can be tuned */4949-#ifdef IP_SET_HASH_WITH_MULTI5050-static u85151-tune_bucketsize(u8 curr, u32 multi)5252-{5353- u32 n;5454-5555- if (multi < curr)5656- return curr;5757-5858- n = curr + AHASH_INIT_SIZE;5959- /* Currently, at listing one hash bucket must fit into a message.6060- * Therefore we have a hard limit here.6161- */6262- return n > curr && n <= AHASH_MAX_TUNED ? n : curr;6363-}6464-#define TUNE_BUCKETSIZE(h, multi) \6565- ((h)->bucketsize = tune_bucketsize((h)->bucketsize, multi))6666-#else6767-#define TUNE_BUCKETSIZE(h, multi)6868-#endif69467047/* A hash bucket */7148struct hbucket {···913936 goto set_full;914937 /* Create a new slot */915938 if (n->pos >= n->size) {916916- TUNE_BUCKETSIZE(h, multi);939939+#ifdef IP_SET_HASH_WITH_MULTI940940+ if (h->bucketsize >= AHASH_MAX_TUNED)941941+ goto set_full;942942+ else if (h->bucketsize < multi)943943+ h->bucketsize += AHASH_INIT_SIZE;944944+#endif917945 if (n->size >= AHASH_MAX(h)) {918946 /* Trigger rehashing */919947 mtype_data_next(&h->next, d);
···722722 if (!expr_eq(prop->menu->dep, prop->visible.expr))723723 get_dep_str(r, prop->visible.expr, " Visible if: ");724724725725- menu = prop->menu->parent;726726- for (i = 0; menu && i < 8; menu = menu->parent) {725725+ menu = prop->menu;726726+ for (i = 0; menu != &rootmenu && i < 8; menu = menu->parent) {727727 bool accessible = menu_is_visible(menu);728728729729 submenu[i++] = menu;···733733 if (head && location) {734734 jump = xmalloc(sizeof(struct jump_key));735735736736- if (menu_is_visible(prop->menu)) {737737- /*738738- * There is not enough room to put the hint at the739739- * beginning of the "Prompt" line. Put the hint on the740740- * last "Location" line even when it would belong on741741- * the former.742742- */743743- jump->target = prop->menu;744744- } else745745- jump->target = location;736736+ jump->target = location;746737747738 if (list_empty(head))748739 jump->index = 0;···749758 menu = submenu[i];750759 if (jump && menu == location)751760 jump->offset = strlen(r->s);752752-753753- if (menu == &rootmenu)754754- /* The real rootmenu prompt is ugly */755755- str_printf(r, "%*cMain menu", j, ' ');756756- else757757- str_printf(r, "%*c-> %s", j, ' ', menu_get_prompt(menu));758758-761761+ str_printf(r, "%*c-> %s", j, ' ', menu_get_prompt(menu));759762 if (menu->sym) {760763 str_printf(r, " (%s [=%s])", menu->sym->name ?761764 menu->sym->name : "<choice>",
+280-21
tools/testing/cxl/test/cxl.c
···1212#include "mock.h"13131414#define NR_CXL_HOST_BRIDGES 21515+#define NR_CXL_SINGLE_HOST 11516#define NR_CXL_ROOT_PORTS 21617#define NR_CXL_SWITCH_PORTS 21718#define NR_CXL_PORT_DECODERS 818191920static struct platform_device *cxl_acpi;2021static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];2121-static struct platform_device2222- *cxl_root_port[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];2323-static struct platform_device2424- *cxl_switch_uport[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];2525-static struct platform_device2626- *cxl_switch_dport[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS *2727- NR_CXL_SWITCH_PORTS];2828-struct platform_device2929- *cxl_mem[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS * NR_CXL_SWITCH_PORTS];2222+#define NR_MULTI_ROOT (NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS)2323+static struct platform_device *cxl_root_port[NR_MULTI_ROOT];2424+static struct platform_device *cxl_switch_uport[NR_MULTI_ROOT];2525+#define NR_MEM_MULTI \2626+ (NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS * NR_CXL_SWITCH_PORTS)2727+static struct platform_device *cxl_switch_dport[NR_MEM_MULTI];2828+2929+static struct platform_device *cxl_hb_single[NR_CXL_SINGLE_HOST];3030+static struct platform_device *cxl_root_single[NR_CXL_SINGLE_HOST];3131+static struct platform_device *cxl_swu_single[NR_CXL_SINGLE_HOST];3232+#define NR_MEM_SINGLE (NR_CXL_SINGLE_HOST * NR_CXL_SWITCH_PORTS)3333+static struct platform_device *cxl_swd_single[NR_MEM_SINGLE];3434+3535+struct platform_device *cxl_mem[NR_MEM_MULTI];3636+struct platform_device *cxl_mem_single[NR_MEM_SINGLE];3737+3838+3939+static inline bool is_multi_bridge(struct device *dev)4040+{4141+ int i;4242+4343+ for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)4444+ if (&cxl_host_bridge[i]->dev == dev)4545+ return true;4646+ return false;4747+}4848+4949+static inline bool is_single_bridge(struct device *dev)5050+{5151+ int i;5252+5353+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++)5454+ if (&cxl_hb_single[i]->dev == dev)5555+ return 
true;5656+ return false;5757+}30583159static struct acpi_device acpi0017_mock;3232-static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {6060+static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES + NR_CXL_SINGLE_HOST] = {3361 [0] = {3462 .handle = &host_bridge[0],3563 },3664 [1] = {3765 .handle = &host_bridge[1],3866 },6767+ [2] = {6868+ .handle = &host_bridge[2],6969+ },7070+3971};40724173static bool is_mock_dev(struct device *dev)···76447745 for (i = 0; i < ARRAY_SIZE(cxl_mem); i++)7846 if (dev == &cxl_mem[i]->dev)4747+ return true;4848+ for (i = 0; i < ARRAY_SIZE(cxl_mem_single); i++)4949+ if (dev == &cxl_mem_single[i]->dev)7950 return true;8051 if (dev == &cxl_acpi->dev)8152 return true;···1016610267static struct {10368 struct acpi_table_cedt cedt;104104- struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES];6969+ struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES + NR_CXL_SINGLE_HOST];10570 struct {10671 struct acpi_cedt_cfmws cfmws;10772 u32 target[1];···11883 struct acpi_cedt_cfmws cfmws;11984 u32 target[2];12085 } cfmws3;8686+ struct {8787+ struct acpi_cedt_cfmws cfmws;8888+ u32 target[1];8989+ } cfmws4;12190} __packed mock_cedt = {12291 .cedt = {12392 .header = {···144105 .length = sizeof(mock_cedt.chbs[0]),145106 },146107 .uid = 1,108108+ .cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,109109+ },110110+ .chbs[2] = {111111+ .header = {112112+ .type = ACPI_CEDT_TYPE_CHBS,113113+ .length = sizeof(mock_cedt.chbs[0]),114114+ },115115+ .uid = 2,147116 .cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,148117 },149118 .cfmws0 = {···214167 },215168 .target = { 0, 1, },216169 },170170+ .cfmws4 = {171171+ .cfmws = {172172+ .header = {173173+ .type = ACPI_CEDT_TYPE_CFMWS,174174+ .length = sizeof(mock_cedt.cfmws4),175175+ },176176+ .interleave_ways = 0,177177+ .granularity = 4,178178+ .restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |179179+ ACPI_CEDT_CFMWS_RESTRICT_PMEM,180180+ .qtg_id = 4,181181+ .window_size = SZ_256M * 4UL,182182+ },183183+ .target = { 2 },184184+ 
},217185};218186219219-struct acpi_cedt_cfmws *mock_cfmws[4] = {187187+struct acpi_cedt_cfmws *mock_cfmws[] = {220188 [0] = &mock_cedt.cfmws0.cfmws,221189 [1] = &mock_cedt.cfmws1.cfmws,222190 [2] = &mock_cedt.cfmws2.cfmws,223191 [3] = &mock_cedt.cfmws3.cfmws,192192+ [4] = &mock_cedt.cfmws4.cfmws,224193};225194226195struct cxl_mock_res {···367304 for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)368305 if (dev == &cxl_host_bridge[i]->dev)369306 return true;307307+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++)308308+ if (dev == &cxl_hb_single[i]->dev)309309+ return true;370310 return false;371311}372312···390324391325 for (i = 0; i < ARRAY_SIZE(cxl_switch_dport); i++)392326 if (dev == &cxl_switch_dport[i]->dev)327327+ return true;328328+329329+ for (i = 0; i < ARRAY_SIZE(cxl_root_single); i++)330330+ if (dev == &cxl_root_single[i]->dev)331331+ return true;332332+333333+ for (i = 0; i < ARRAY_SIZE(cxl_swu_single); i++)334334+ if (dev == &cxl_swu_single[i]->dev)335335+ return true;336336+337337+ for (i = 0; i < ARRAY_SIZE(cxl_swd_single); i++)338338+ if (dev == &cxl_swd_single[i]->dev)393339 return true;394340395341 if (is_cxl_memdev(dev))···639561 int i, array_size;640562641563 if (port->depth == 1) {642642- array_size = ARRAY_SIZE(cxl_root_port);643643- array = cxl_root_port;564564+ if (is_multi_bridge(port->uport)) {565565+ array_size = ARRAY_SIZE(cxl_root_port);566566+ array = cxl_root_port;567567+ } else if (is_single_bridge(port->uport)) {568568+ array_size = ARRAY_SIZE(cxl_root_single);569569+ array = cxl_root_single;570570+ } else {571571+ dev_dbg(&port->dev, "%s: unknown bridge type\n",572572+ dev_name(port->uport));573573+ return -ENXIO;574574+ }644575 } else if (port->depth == 2) {645645- array_size = ARRAY_SIZE(cxl_switch_dport);646646- array = cxl_switch_dport;576576+ struct cxl_port *parent = to_cxl_port(port->dev.parent);577577+578578+ if (is_multi_bridge(parent->uport)) {579579+ array_size = ARRAY_SIZE(cxl_switch_dport);580580+ array = 
cxl_switch_dport;581581+ } else if (is_single_bridge(parent->uport)) {582582+ array_size = ARRAY_SIZE(cxl_swd_single);583583+ array = cxl_swd_single;584584+ } else {585585+ dev_dbg(&port->dev, "%s: unknown bridge type\n",586586+ dev_name(port->uport));587587+ return -ENXIO;588588+ }647589 } else {648590 dev_WARN_ONCE(&port->dev, 1, "unexpected depth %d\n",649591 port->depth);···674576 struct platform_device *pdev = array[i];675577 struct cxl_dport *dport;676578677677- if (pdev->dev.parent != port->uport)579579+ if (pdev->dev.parent != port->uport) {580580+ dev_dbg(&port->dev, "%s: mismatch parent %s\n",581581+ dev_name(port->uport),582582+ dev_name(pdev->dev.parent));678583 continue;584584+ }679585680586 dport = devm_cxl_add_dport(port, &pdev->dev, pdev->id,681587 CXL_RESOURCE_NONE);···728626#ifndef SZ_512G729627#define SZ_512G (SZ_64G * 8)730628#endif629629+630630+static __init int cxl_single_init(void)631631+{632632+ int i, rc;633633+634634+ for (i = 0; i < ARRAY_SIZE(cxl_hb_single); i++) {635635+ struct acpi_device *adev =636636+ &host_bridge[NR_CXL_HOST_BRIDGES + i];637637+ struct platform_device *pdev;638638+639639+ pdev = platform_device_alloc("cxl_host_bridge",640640+ NR_CXL_HOST_BRIDGES + i);641641+ if (!pdev)642642+ goto err_bridge;643643+644644+ mock_companion(adev, &pdev->dev);645645+ rc = platform_device_add(pdev);646646+ if (rc) {647647+ platform_device_put(pdev);648648+ goto err_bridge;649649+ }650650+651651+ cxl_hb_single[i] = pdev;652652+ rc = sysfs_create_link(&pdev->dev.kobj, &pdev->dev.kobj,653653+ "physical_node");654654+ if (rc)655655+ goto err_bridge;656656+ }657657+658658+ for (i = 0; i < ARRAY_SIZE(cxl_root_single); i++) {659659+ struct platform_device *bridge =660660+ cxl_hb_single[i % ARRAY_SIZE(cxl_hb_single)];661661+ struct platform_device *pdev;662662+663663+ pdev = platform_device_alloc("cxl_root_port",664664+ NR_MULTI_ROOT + i);665665+ if (!pdev)666666+ goto err_port;667667+ pdev->dev.parent = &bridge->dev;668668+669669+ rc = 
platform_device_add(pdev);670670+ if (rc) {671671+ platform_device_put(pdev);672672+ goto err_port;673673+ }674674+ cxl_root_single[i] = pdev;675675+ }676676+677677+ for (i = 0; i < ARRAY_SIZE(cxl_swu_single); i++) {678678+ struct platform_device *root_port = cxl_root_single[i];679679+ struct platform_device *pdev;680680+681681+ pdev = platform_device_alloc("cxl_switch_uport",682682+ NR_MULTI_ROOT + i);683683+ if (!pdev)684684+ goto err_uport;685685+ pdev->dev.parent = &root_port->dev;686686+687687+ rc = platform_device_add(pdev);688688+ if (rc) {689689+ platform_device_put(pdev);690690+ goto err_uport;691691+ }692692+ cxl_swu_single[i] = pdev;693693+ }694694+695695+ for (i = 0; i < ARRAY_SIZE(cxl_swd_single); i++) {696696+ struct platform_device *uport =697697+ cxl_swu_single[i % ARRAY_SIZE(cxl_swu_single)];698698+ struct platform_device *pdev;699699+700700+ pdev = platform_device_alloc("cxl_switch_dport",701701+ i + NR_MEM_MULTI);702702+ if (!pdev)703703+ goto err_dport;704704+ pdev->dev.parent = &uport->dev;705705+706706+ rc = platform_device_add(pdev);707707+ if (rc) {708708+ platform_device_put(pdev);709709+ goto err_dport;710710+ }711711+ cxl_swd_single[i] = pdev;712712+ }713713+714714+ for (i = 0; i < ARRAY_SIZE(cxl_mem_single); i++) {715715+ struct platform_device *dport = cxl_swd_single[i];716716+ struct platform_device *pdev;717717+718718+ pdev = platform_device_alloc("cxl_mem", NR_MEM_MULTI + i);719719+ if (!pdev)720720+ goto err_mem;721721+ pdev->dev.parent = &dport->dev;722722+ set_dev_node(&pdev->dev, i % 2);723723+724724+ rc = platform_device_add(pdev);725725+ if (rc) {726726+ platform_device_put(pdev);727727+ goto err_mem;728728+ }729729+ cxl_mem_single[i] = pdev;730730+ }731731+732732+ return 0;733733+734734+err_mem:735735+ for (i = ARRAY_SIZE(cxl_mem_single) - 1; i >= 0; i--)736736+ platform_device_unregister(cxl_mem_single[i]);737737+err_dport:738738+ for (i = ARRAY_SIZE(cxl_swd_single) - 1; i >= 0; i--)739739+ 
platform_device_unregister(cxl_swd_single[i]);740740+err_uport:741741+ for (i = ARRAY_SIZE(cxl_swu_single) - 1; i >= 0; i--)742742+ platform_device_unregister(cxl_swu_single[i]);743743+err_port:744744+ for (i = ARRAY_SIZE(cxl_root_single) - 1; i >= 0; i--)745745+ platform_device_unregister(cxl_root_single[i]);746746+err_bridge:747747+ for (i = ARRAY_SIZE(cxl_hb_single) - 1; i >= 0; i--) {748748+ struct platform_device *pdev = cxl_hb_single[i];749749+750750+ if (!pdev)751751+ continue;752752+ sysfs_remove_link(&pdev->dev.kobj, "physical_node");753753+ platform_device_unregister(cxl_hb_single[i]);754754+ }755755+756756+ return rc;757757+}758758+759759+static void cxl_single_exit(void)760760+{761761+ int i;762762+763763+ for (i = ARRAY_SIZE(cxl_mem_single) - 1; i >= 0; i--)764764+ platform_device_unregister(cxl_mem_single[i]);765765+ for (i = ARRAY_SIZE(cxl_swd_single) - 1; i >= 0; i--)766766+ platform_device_unregister(cxl_swd_single[i]);767767+ for (i = ARRAY_SIZE(cxl_swu_single) - 1; i >= 0; i--)768768+ platform_device_unregister(cxl_swu_single[i]);769769+ for (i = ARRAY_SIZE(cxl_root_single) - 1; i >= 0; i--)770770+ platform_device_unregister(cxl_root_single[i]);771771+ for (i = ARRAY_SIZE(cxl_hb_single) - 1; i >= 0; i--) {772772+ struct platform_device *pdev = cxl_hb_single[i];773773+774774+ if (!pdev)775775+ continue;776776+ sysfs_remove_link(&pdev->dev.kobj, "physical_node");777777+ platform_device_unregister(cxl_hb_single[i]);778778+ }779779+}731780732781static __init int cxl_test_init(void)733782{···948695949696 pdev = platform_device_alloc("cxl_switch_uport", i);950697 if (!pdev)951951- goto err_port;698698+ goto err_uport;952699 pdev->dev.parent = &root_port->dev;953700954701 rc = platform_device_add(pdev);···966713967714 pdev = platform_device_alloc("cxl_switch_dport", i);968715 if (!pdev)969969- goto err_port;716716+ goto err_dport;970717 pdev->dev.parent = &uport->dev;971718972719 rc = platform_device_add(pdev);···977724 cxl_switch_dport[i] = pdev;978725 
}979726980980- BUILD_BUG_ON(ARRAY_SIZE(cxl_mem) != ARRAY_SIZE(cxl_switch_dport));981727 for (i = 0; i < ARRAY_SIZE(cxl_mem); i++) {982728 struct platform_device *dport = cxl_switch_dport[i];983729 struct platform_device *pdev;···995743 cxl_mem[i] = pdev;996744 }997745746746+ rc = cxl_single_init();747747+ if (rc)748748+ goto err_mem;749749+998750 cxl_acpi = platform_device_alloc("cxl_acpi", 0);999751 if (!cxl_acpi)10001000- goto err_mem;752752+ goto err_single;10017531002754 mock_companion(&acpi0017_mock, &cxl_acpi->dev);1003755 acpi0017_mock.dev.bus = &platform_bus_type;···10147581015759err_add:1016760 platform_device_put(cxl_acpi);761761+err_single:762762+ cxl_single_exit();1017763err_mem:1018764 for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--)1019765 platform_device_unregister(cxl_mem[i]);···1051793 int i;10527941053795 platform_device_unregister(cxl_acpi);796796+ cxl_single_exit();1054797 for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--)1055798 platform_device_unregister(cxl_mem[i]);1056799 for (i = ARRAY_SIZE(cxl_switch_dport) - 1; i >= 0; i--)
+4-3
tools/testing/selftests/landlock/Makefile
···33# First run: make -C ../../../.. headers_install4455CFLAGS += -Wall -O2 $(KHDR_INCLUDES)66-LDLIBS += -lcap7687LOCAL_HDRS += common.h98···12131314TEST_GEN_PROGS_EXTENDED := true14151515-# Static linking for short targets:1616+# Short targets:1717+$(TEST_GEN_PROGS): LDLIBS += -lcap1618$(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static17191820include ../lib.mk19212020-# Static linking for targets with $(OUTPUT)/ prefix:2222+# Targets with $(OUTPUT)/ prefix:2323+$(TEST_GEN_PROGS): LDLIBS += -lcap2124$(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static