···
 - phy-mode : See ethernet.txt file in the same directory

 Optional properties:
-- phy-reset-gpios : Should specify the gpio for phy reset
-- phy-reset-duration : Reset duration in milliseconds. Should present
-  only if property "phy-reset-gpios" is available. Missing the property
-  will have the duration be 1 millisecond. Numbers greater than 1000 are
-  invalid and 1 millisecond will be used instead.
-- phy-reset-active-high : If present then the reset sequence using the GPIO
-  specified in the "phy-reset-gpios" property is reversed (H=reset state,
-  L=operation state).
-- phy-reset-post-delay : Post reset delay in milliseconds. If present then
-  a delay of phy-reset-post-delay milliseconds will be observed after the
-  phy-reset-gpios has been toggled. Can be omitted thus no delay is
-  observed. Delay is in range of 1ms to 1000ms. Other delays are invalid.
 - phy-supply : regulator that powers the Ethernet PHY.
 - phy-handle : phandle to the PHY device connected to this device.
 - fixed-link : Assume a fixed link. See fixed-link.txt in the same directory.
···
 For imx6sx, "int0" handles all 3 queues and ENET_MII. "pps" is for the pulse
 per second interrupt associated with 1588 precision time protocol(PTP).

-
 Optional subnodes:
 - mdio : specifies the mdio bus in the FEC, used as a container for phy nodes
   according to phy.txt in the same directory
+
+Deprecated optional properties:
+	To avoid these, create a phy node according to phy.txt in the same
+	directory, and point the fec's "phy-handle" property to it. Then use
+	the phy's reset binding, again described by phy.txt.
+- phy-reset-gpios : Should specify the gpio for phy reset
+- phy-reset-duration : Reset duration in milliseconds. Should present
+  only if property "phy-reset-gpios" is available. Missing the property
+  will have the duration be 1 millisecond. Numbers greater than 1000 are
+  invalid and 1 millisecond will be used instead.
+- phy-reset-active-high : If present then the reset sequence using the GPIO
+  specified in the "phy-reset-gpios" property is reversed (H=reset state,
+  L=operation state).
+- phy-reset-post-delay : Post reset delay in milliseconds. If present then
+  a delay of phy-reset-post-delay milliseconds will be observed after the
+  phy-reset-gpios has been toggled. Can be omitted thus no delay is
+  observed. Delay is in range of 1ms to 1000ms. Other delays are invalid.

 Example:

···
   hwlocks: true

   st,syscfg:
-    $ref: "/schemas/types.yaml#/definitions/phandle-array"
+    allOf:
+      - $ref: "/schemas/types.yaml#/definitions/phandle-array"
     description: Should be phandle/offset/mask
     items:
       - description: Phandle to the syscon node which includes IRQ mux selection.
+16-2
MAINTAINERS
···
 F:	drivers/perf/fsl_imx8_ddr_perf.c
 F:	Documentation/devicetree/bindings/perf/fsl-imx-ddr.txt

+FREESCALE IMX I2C DRIVER
+M:	Oleksij Rempel <o.rempel@pengutronix.de>
+R:	Pengutronix Kernel Team <kernel@pengutronix.de>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	drivers/i2c/busses/i2c-imx.c
+F:	Documentation/devicetree/bindings/i2c/i2c-imx.txt
+
 FREESCALE IMX LPI2C DRIVER
 M:	Dong Aisheng <aisheng.dong@nxp.com>
 L:	linux-i2c@vger.kernel.org
···
 F:	drivers/scsi/storvsc_drv.c
 F:	drivers/uio/uio_hv_generic.c
 F:	drivers/video/fbdev/hyperv_fb.c
-F:	drivers/iommu/hyperv_iommu.c
+F:	drivers/iommu/hyperv-iommu.c
 F:	net/vmw_vsock/hyperv_transport.c
 F:	include/clocksource/hyperv_timer.h
 F:	include/linux/hyperv.h
···
 S:	Supported
 F:	drivers/scsi/isci/

+INTEL CPU family model numbers
+M:	Tony Luck <tony.luck@intel.com>
+M:	x86@kernel.org
+L:	linux-kernel@vger.kernel.org
+S:	Supported
+F:	arch/x86/include/asm/intel-family.h
+
 INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
 M:	Jani Nikula <jani.nikula@linux.intel.com>
 M:	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
···
 L:	linux-fsdevel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git
 S:	Supported
-F:	fs/iomap.c
 F:	fs/iomap/
 F:	include/linux/iomap.h
···
 };

 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+	/*
+	 * We already refuse to boot CPUs that don't support our configured
+	 * page size, so we can only detect mismatches for a page size other
+	 * than the one we're currently using. Unfortunately, SoCs like this
+	 * exist in the wild so, even though we don't like it, we'll have to go
+	 * along with it and treat them as non-strict.
+	 */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
 	/* Linux shouldn't care about secure memory */
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+13-9
arch/arm64/kernel/ftrace.c
···

 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
-		struct plt_entry trampoline;
+		struct plt_entry trampoline, *dst;
 		struct module *mod;

 		/*
···
 		 * to check if the actual opcodes are in fact identical,
 		 * regardless of the offset in memory so use memcmp() instead.
 		 */
-		trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
-		if (memcmp(mod->arch.ftrace_trampoline, &trampoline,
-			   sizeof(trampoline))) {
-			if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
+		dst = mod->arch.ftrace_trampoline;
+		trampoline = get_plt_entry(addr, dst);
+		if (memcmp(dst, &trampoline, sizeof(trampoline))) {
+			if (plt_entry_is_initialized(dst)) {
 				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
 				return -EINVAL;
 			}

 			/* point the trampoline to our ftrace entry point */
 			module_disable_ro(mod);
-			*mod->arch.ftrace_trampoline = trampoline;
+			*dst = trampoline;
 			module_enable_ro(mod, true);

-			/* update trampoline before patching in the branch */
-			smp_wmb();
+			/*
+			 * Ensure updated trampoline is visible to instruction
+			 * fetch before we patch in the branch.
+			 */
+			__flush_icache_range((unsigned long)&dst[0],
+					     (unsigned long)&dst[1]);
 		}
-		addr = (unsigned long)(void *)mod->arch.ftrace_trampoline;
+		addr = (unsigned long)dst;
 #else /* CONFIG_ARM64_MODULE_PLTS */
 		return -EINVAL;
 #endif /* CONFIG_ARM64_MODULE_PLTS */
···
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Contains common dma routines for all powerpc platforms.
- *
- * Copyright (C) 2019 Shawn Anastasio.
- */
-
-#include <linux/mm.h>
-#include <linux/dma-noncoherent.h>
-
-pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
-		unsigned long attrs)
-{
-	if (!dev_is_dma_coherent(dev))
-		return pgprot_noncached(prot);
-	return prot;
-}
+2
arch/riscv/configs/defconfig
···
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
 CONFIG_SPI=y
 CONFIG_SPI_SIFIVE=y
 # CONFIG_PTP_1588_CLOCK is not set
+3
arch/riscv/configs/rv32_defconfig
···
 CONFIG_PCI_HOST_GENERIC=y
 CONFIG_PCIE_XILINX=y
 CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_DEV_SD=y
···
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
···
 			       unsigned long sp)
 {
 	regs->sstatus = SR_SPIE;
-	if (has_fpu)
+	if (has_fpu) {
 		regs->sstatus |= SR_FS_INITIAL;
+		/*
+		 * Restore the initial value to the FP register
+		 * before starting the user program.
+		 */
+		fstate_restore(current, regs);
+	}
 	regs->sepc = pc;
 	regs->sp = sp;
 	set_fs(USER_DS);
···
 {
 #ifdef CONFIG_FPU
 	/*
-	 * Reset FPU context
+	 * Reset FPU state and context
 	 * frm: round to nearest, ties to even (IEEE default)
 	 * fflags: accrued exceptions cleared
 	 */
+	fstate_off(current, task_pt_regs(current));
 	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
 #endif
 }
···
 	switch (sh_type) {
 	case SH_BREAKPOINT_READ:
 		*gen_type = HW_BREAKPOINT_R;
+		break;
 	case SH_BREAKPOINT_WRITE:
 		*gen_type = HW_BREAKPOINT_W;
 		break;
+48-15
arch/x86/include/asm/bootparam_utils.h
···
  * Note: efi_info is commonly left uninitialized, but that field has a
  * private magic, so it is better to leave it unchanged.
  */
+
+#define sizeof_mbr(type, member) ({ sizeof(((type *)0)->member); })
+
+#define BOOT_PARAM_PRESERVE(struct_member)				\
+	{								\
+		.start = offsetof(struct boot_params, struct_member),	\
+		.len   = sizeof_mbr(struct boot_params, struct_member),	\
+	}
+
+struct boot_params_to_save {
+	unsigned int start;
+	unsigned int len;
+};
+
 static void sanitize_boot_params(struct boot_params *boot_params)
 {
 	/*
···
 	 * problems again.
 	 */
 	if (boot_params->sentinel) {
-		/* fields in boot_params are left uninitialized, clear them */
-		boot_params->acpi_rsdp_addr = 0;
-		memset(&boot_params->ext_ramdisk_image, 0,
-		       (char *)&boot_params->efi_info -
-			(char *)&boot_params->ext_ramdisk_image);
-		memset(&boot_params->kbd_status, 0,
-		       (char *)&boot_params->hdr -
-		       (char *)&boot_params->kbd_status);
-		memset(&boot_params->_pad7[0], 0,
-		       (char *)&boot_params->edd_mbr_sig_buffer[0] -
-			(char *)&boot_params->_pad7[0]);
-		memset(&boot_params->_pad8[0], 0,
-		       (char *)&boot_params->eddbuf[0] -
-			(char *)&boot_params->_pad8[0]);
-		memset(&boot_params->_pad9[0], 0, sizeof(boot_params->_pad9));
+		static struct boot_params scratch;
+		char *bp_base = (char *)boot_params;
+		char *save_base = (char *)&scratch;
+		int i;
+
+		const struct boot_params_to_save to_save[] = {
+			BOOT_PARAM_PRESERVE(screen_info),
+			BOOT_PARAM_PRESERVE(apm_bios_info),
+			BOOT_PARAM_PRESERVE(tboot_addr),
+			BOOT_PARAM_PRESERVE(ist_info),
+			BOOT_PARAM_PRESERVE(acpi_rsdp_addr),
+			BOOT_PARAM_PRESERVE(hd0_info),
+			BOOT_PARAM_PRESERVE(hd1_info),
+			BOOT_PARAM_PRESERVE(sys_desc_table),
+			BOOT_PARAM_PRESERVE(olpc_ofw_header),
+			BOOT_PARAM_PRESERVE(efi_info),
+			BOOT_PARAM_PRESERVE(alt_mem_k),
+			BOOT_PARAM_PRESERVE(scratch),
+			BOOT_PARAM_PRESERVE(e820_entries),
+			BOOT_PARAM_PRESERVE(eddbuf_entries),
+			BOOT_PARAM_PRESERVE(edd_mbr_sig_buf_entries),
+			BOOT_PARAM_PRESERVE(edd_mbr_sig_buffer),
+			BOOT_PARAM_PRESERVE(e820_table),
+			BOOT_PARAM_PRESERVE(eddbuf),
+		};
+
+		memset(&scratch, 0, sizeof(scratch));
+
+		for (i = 0; i < ARRAY_SIZE(to_save); i++) {
+			memcpy(save_base + to_save[i].start,
+			       bp_base + to_save[i].start, to_save[i].len);
+		}
+
+		memcpy(boot_params, save_base, sizeof(*boot_params));
 	}
 }

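The bootparam change above replaces a list of "clear these ranges" memsets with a whitelist of fields to preserve: copy the wanted fields into a zeroed scratch struct, then copy the scratch back over the original. Here is a minimal userspace C sketch of that offsetof/sizeof table pattern; `struct params`, its field names, and `sanitize()` are hypothetical stand-ins, not the kernel's actual types.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for struct boot_params: fields that must
 * survive, interleaved with fields that must end up zeroed. */
struct params {
	unsigned int magic;       /* preserve */
	unsigned int junk1;       /* clear */
	unsigned long addr;       /* preserve */
	unsigned char junk2[8];   /* clear */
};

struct field_to_save {
	unsigned int start;
	unsigned int len;
};

/* Describe a field by its byte offset and size, like BOOT_PARAM_PRESERVE */
#define PRESERVE(member)					\
	{							\
		.start = offsetof(struct params, member),	\
		.len   = sizeof(((struct params *)0)->member),	\
	}

/* Copy only whitelisted fields into a zeroed scratch copy, then copy
 * the scratch back: everything not on the list becomes zero. */
static void sanitize(struct params *p)
{
	static const struct field_to_save to_save[] = {
		PRESERVE(magic),
		PRESERVE(addr),
	};
	struct params scratch;
	char *src = (char *)p, *dst = (char *)&scratch;
	size_t i;

	memset(&scratch, 0, sizeof(scratch));
	for (i = 0; i < sizeof(to_save) / sizeof(to_save[0]); i++)
		memcpy(dst + to_save[i].start, src + to_save[i].start,
		       to_save[i].len);
	memcpy(p, &scratch, sizeof(*p));
}
```

The design choice the patch makes is the same as here: a whitelist stays correct when new fields are added to the struct, whereas offset-range memsets silently miss them.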
+2-1
arch/x86/kernel/apic/probe_32.c
···
 		def_to_bigsmp = 0;
 		break;
 	}
-	/* If P4 and above fall through */
+	/* P4 and above */
+	/* fall through */
 	case X86_VENDOR_HYGON:
 	case X86_VENDOR_AMD:
 		def_to_bigsmp = 1;
+38-1
arch/x86/kernel/cpu/umwait.c
···
 static u32 umwait_control_cached = UMWAIT_CTRL_VAL(100000, UMWAIT_C02_ENABLE);

 /*
+ * Cache the original IA32_UMWAIT_CONTROL MSR value which is configured by
+ * hardware or BIOS before kernel boot.
+ */
+static u32 orig_umwait_control_cached __ro_after_init;
+
+/*
  * Serialize access to umwait_control_cached and IA32_UMWAIT_CONTROL MSR in
  * the sysfs write functions.
  */
···
 	local_irq_disable();
 	umwait_update_control_msr(NULL);
 	local_irq_enable();
+	return 0;
+}
+
+/*
+ * The CPU hotplug callback sets the control MSR to the original control
+ * value.
+ */
+static int umwait_cpu_offline(unsigned int cpu)
+{
+	/*
+	 * This code is protected by the CPU hotplug already and
+	 * orig_umwait_control_cached is never changed after it caches
+	 * the original control MSR value in umwait_init(). So there
+	 * is no race condition here.
+	 */
+	wrmsr(MSR_IA32_UMWAIT_CONTROL, orig_umwait_control_cached, 0);
+
 	return 0;
 }
···
 	if (!boot_cpu_has(X86_FEATURE_WAITPKG))
 		return -ENODEV;

+	/*
+	 * Cache the original control MSR value before the control MSR is
+	 * changed. This is the only place where orig_umwait_control_cached
+	 * is modified.
+	 */
+	rdmsrl(MSR_IA32_UMWAIT_CONTROL, orig_umwait_control_cached);
+
 	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "umwait:online",
-				umwait_cpu_online, NULL);
+				umwait_cpu_online, umwait_cpu_offline);
+	if (ret < 0) {
+		/*
+		 * On failure, the control MSR on all CPUs has the
+		 * original control value.
+		 */
+		return ret;
+	}

 	register_syscore_ops(&umwait_syscore_ops);
+3-2
arch/x86/math-emu/errors.c
···
 	for (i = 0; i < 8; i++) {
 		FPU_REG *r = &st(i);
 		u_char tagi = FPU_gettagi(i);
+
 		switch (tagi) {
 		case TAG_Empty:
 			continue;
-			break;
 		case TAG_Zero:
 		case TAG_Special:
+			/* Update tagi for the printk below */
 			tagi = FPU_Special(r);
+			/* fall through */
 		case TAG_Valid:
 			printk("st(%d) %c .%04lx %04lx %04lx %04lx e%+-6d ", i,
 			       getsign(r) ? '-' : '+',
···
 			printk("Whoops! Error in errors.c: tag%d is %d ", i,
 			       tagi);
 			continue;
-			break;
 		}
 		printk("%s\n", tag_desc[(int)(unsigned)tagi]);
 	}
+1-1
arch/x86/math-emu/fpu_trig.c
···
 	case TW_Denormal:
 		if (denormal_operand() < 0)
 			return;
-
+		/* fall through */
 	case TAG_Zero:
 	case TAG_Valid:
 		setsign(st0_ptr, getsign(st0_ptr) ^ getsign(st1_ptr));
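Several of the x86 patches above annotate intentional case fall-through so `-Wimplicit-fallthrough` can flag the unintentional kind (as the sh hw_breakpoint fix shows, a missing `break` is a real bug class). A tiny self-contained C sketch of the pattern; the enum and function names are illustrative, not from any of the patched files.

```c
#include <assert.h>

/* Toy tag classifier: TAG_SPECIAL performs a pre-processing step and
 * then deliberately shares TAG_VALID's handling, with the fall through
 * made explicit by a comment, mirroring the kernel patches above. */
enum tag { TAG_EMPTY, TAG_SPECIAL, TAG_VALID };

static int steps_taken(enum tag t)
{
	int count = 0;

	switch (t) {
	case TAG_EMPTY:
		break;			/* nothing to do */
	case TAG_SPECIAL:
		count++;		/* special pre-processing step */
		/* fall through */
	case TAG_VALID:
		count++;		/* handling shared with TAG_SPECIAL */
		break;
	}
	return count;
}
```

With the comment (or, in newer kernels, the `fallthrough;` pseudo-keyword) present, compilers treat the missing `break` as intended and stay silent; without it they warn.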
···

 	blk_free_queue_stats(q->stats);

+	if (queue_is_mq(q))
+		cancel_delayed_work_sync(&q->requeue_work);
+
 	blk_exit_queue(q);

 	blk_queue_free_zone_bitmaps(q);
+5
drivers/auxdisplay/Kconfig
···
 choice
 	prompt "Backlight initial state"
 	default CHARLCD_BL_FLASH
+	---help---
+	  Select the initial backlight state on boot or module load.
+
+	  Previously, there was no option for this: the backlight flashed
+	  briefly on init. Now you can also turn it off/on.

 config CHARLCD_BL_OFF
 	bool "Off"
···
 #include <linux/property.h>
 #include <linux/slab.h>

-#include <misc/charlcd.h>
-
+#include "charlcd.h"

 enum hd44780_pin {
 	/* Order does matter due to writing to GPIO array subsets! */
+3-1
drivers/auxdisplay/panel.c
···
 #include <linux/io.h>
 #include <linux/uaccess.h>

-#include <misc/charlcd.h>
+#include "charlcd.h"

 #define KEYPAD_MINOR		185
···
 		return;

 err_lcd_unreg:
+	if (scan_timer.function)
+		del_timer_sync(&scan_timer);
 	if (lcd.enabled)
 		charlcd_unregister(lcd.charlcd);
 err_unreg_device:
+1-1
drivers/base/regmap/Kconfig
···

 config REGMAP_SOUNDWIRE
 	tristate
-	depends on SOUNDWIRE_BUS
+	depends on SOUNDWIRE

 config REGMAP_SCCB
 	tristate
+3-3
drivers/block/xen-blkback/xenbus.c
···
 		}
 	}

+	err = -ENOMEM;
 	for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) {
 		req = kzalloc(sizeof(*req), GFP_KERNEL);
 		if (!req)
···
 	err = xen_blkif_map(ring, ring_ref, nr_grefs, evtchn);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "mapping ring-ref port %u", evtchn);
-		return err;
+		goto fail;
 	}

 	return 0;
···
 		}
 		kfree(req);
 	}
-	return -ENOMEM;
-
+	return err;
 }

 static int connect_ring(struct backend_info *be)
···
 	struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
 	int slots;

-	/* When restoring duplicated states, we need to make sure that the
-	 * bw remains the same and avoid recalculating it, as the connector's
-	 * bpc may have changed after the state was duplicated
-	 */
-	if (!state->duplicated)
-		asyh->dp.pbn =
-			drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock,
-					     connector->display_info.bpc * 3);
+	if (crtc_state->mode_changed || crtc_state->connectors_changed) {
+		/*
+		 * When restoring duplicated states, we need to make sure that
+		 * the bw remains the same and avoid recalculating it, as the
+		 * connector's bpc may have changed after the state was
+		 * duplicated
+		 */
+		if (!state->duplicated) {
+			const int bpp = connector->display_info.bpc * 3;
+			const int clock = crtc_state->adjusted_mode.clock;

-	if (crtc_state->mode_changed) {
+			asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, bpp);
+		}
+
 		slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr,
 						      mstc->port,
 						      asyh->dp.pbn);
+2-2
drivers/gpu/drm/scheduler/sched_entity.c
···
 	rmb(); /* for list_empty to work without lock */

 	if (list_empty(&entity->list) ||
-	    spsc_queue_peek(&entity->job_queue) == NULL)
+	    spsc_queue_count(&entity->job_queue) == 0)
 		return true;

 	return false;
···
 	/* Consumption of existing IBs wasn't completed. Forcefully
 	 * remove them here.
 	 */
-	if (spsc_queue_peek(&entity->job_queue)) {
+	if (spsc_queue_count(&entity->job_queue)) {
 		if (sched) {
 			/* Park the kernel for a moment to make sure it isn't processing
 			 * our enity.
···
 	int ret;

 	port_counter = &dev->port_data[port].port_counter;
+	if (!port_counter->hstats)
+		return -EOPNOTSUPP;
+
 	mutex_lock(&port_counter->lock);
 	if (on) {
 		ret = __counter_set_mode(&port_counter->mode,
···

 	if (!rdma_is_port_valid(dev, port))
 		return -EINVAL;
+
+	if (!dev->port_data[port].port_counter.hstats)
+		return -EOPNOTSUPP;

 	qp = rdma_counter_get_qp(dev, qp_num);
 	if (!qp)
···
 		 * prevent any further fault handling on this MR.
 		 */
 		ib_umem_notifier_start_account(umem_odp);
-		umem_odp->dying = 1;
-		/* Make sure that the fact the umem is dying is out before we release
-		 * all pending page faults. */
-		smp_wmb();
 		complete_all(&umem_odp->notifier_completion);
 		umem_odp->umem.context->invalidate_range(
 			umem_odp, ib_umem_start(umem_odp), ib_umem_end(umem_odp));
+6-5
drivers/infiniband/hw/mlx5/devx.c
···
 		event_sub->eventfd =
 			eventfd_ctx_fdget(redirect_fd);

-		if (IS_ERR(event_sub)) {
+		if (IS_ERR(event_sub->eventfd)) {
 			err = PTR_ERR(event_sub->eventfd);
 			event_sub->eventfd = NULL;
 			goto err;
···
 	struct devx_async_event_file *ev_file = filp->private_data;
 	struct devx_event_subscription *event_sub, *event_sub_tmp;
 	struct devx_async_event_data *entry, *tmp;
+	struct mlx5_ib_dev *dev = ev_file->dev;

-	mutex_lock(&ev_file->dev->devx_event_table.event_xa_lock);
+	mutex_lock(&dev->devx_event_table.event_xa_lock);
 	/* delete the subscriptions which are related to this FD */
 	list_for_each_entry_safe(event_sub, event_sub_tmp,
 				 &ev_file->subscribed_events_list, file_list) {
-		devx_cleanup_subscription(ev_file->dev, event_sub);
+		devx_cleanup_subscription(dev, event_sub);
 		if (event_sub->eventfd)
 			eventfd_ctx_put(event_sub->eventfd);

···
 		kfree_rcu(event_sub, rcu);
 	}

-	mutex_unlock(&ev_file->dev->devx_event_table.event_xa_lock);
+	mutex_unlock(&dev->devx_event_table.event_xa_lock);

 	/* free the pending events allocation */
 	if (!ev_file->omit_data) {
···
 	}

 	uverbs_close_fd(filp);
-	put_device(&ev_file->dev->ib_dev.dev);
+	put_device(&dev->ib_dev.dev);
 	return 0;
 }
+8-14
drivers/infiniband/hw/mlx5/odp.c
···
 			u32 flags)
 {
 	int npages = 0, current_seq, page_shift, ret, np;
-	bool implicit = false;
 	struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem);
 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
 	bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH;
···
 		if (IS_ERR(odp))
 			return PTR_ERR(odp);
 		mr = odp->private;
-		implicit = true;
 	} else {
 		odp = odp_mr;
 	}
···

 out:
 	if (ret == -EAGAIN) {
-		if (implicit || !odp->dying) {
-			unsigned long timeout =
-				msecs_to_jiffies(MMU_NOTIFIER_TIMEOUT);
+		unsigned long timeout = msecs_to_jiffies(MMU_NOTIFIER_TIMEOUT);

-			if (!wait_for_completion_timeout(
-					&odp->notifier_completion,
-					timeout)) {
-				mlx5_ib_warn(dev, "timeout waiting for mmu notifier. seq %d against %d. notifiers_count=%d\n",
-					     current_seq, odp->notifiers_seq, odp->notifiers_count);
-			}
-		} else {
-			/* The MR is being killed, kill the QP as well. */
-			ret = -EFAULT;
+		if (!wait_for_completion_timeout(&odp->notifier_completion,
+						 timeout)) {
+			mlx5_ib_warn(
+				dev,
+				"timeout waiting for mmu notifier. seq %d against %d. notifiers_count=%d\n",
+				current_seq, odp->notifiers_seq,
+				odp->notifiers_count);
 		}
 	}
+1-1
drivers/infiniband/sw/siw/Kconfig
···
 config RDMA_SIW
 	tristate "Software RDMA over TCP/IP (iWARP) driver"
-	depends on INET && INFINIBAND && LIBCRC32C && 64BIT
+	depends on INET && INFINIBAND && LIBCRC32C
 	select DMA_VIRT_OPS
 	help
 	This driver implements the iWARP RDMA transport over
···

 out_err:
 	siw_cpu_info.num_nodes = 0;
-	while (i) {
+	while (--i >= 0)
 		kfree(siw_cpu_info.tx_valid_cpus[i]);
-		siw_cpu_info.tx_valid_cpus[i--] = NULL;
-	}
 	kfree(siw_cpu_info.tx_valid_cpus);
 	siw_cpu_info.tx_valid_cpus = NULL;
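The siw fix above corrects a partial-allocation unwind: when allocation fails at index `i`, only entries `0..i-1` exist, so the cleanup must count down from `i`, never touching `bufs[i]` itself. A small self-contained C sketch of that `while (--i >= 0)` idiom; `alloc_all` and its buffer size are illustrative, not the driver's code.

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate n buffers; on failure free exactly the 0..i-1 entries that
 * were already allocated, counting down the way the fixed loop does. */
static int alloc_all(void **bufs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		bufs[i] = malloc(16);
		if (!bufs[i])
			goto out_err;
	}
	return 0;

out_err:
	/* i is the index that failed: walk back over what succeeded.
	 * --i first skips the failed slot, and the loop stops at 0. */
	while (--i >= 0) {
		free(bufs[i]);
		bufs[i] = NULL;
	}
	return -1;
}
```

The buggy original (`while (i) { kfree(...[i]); ...[i--] = NULL; }`) freed the failed index `i` (never allocated) and stopped before freeing index 0, which is exactly what the countdown form avoids.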
+10-4
drivers/infiniband/sw/siw/siw_qp.c
···
  */
 static bool siw_cq_notify_now(struct siw_cq *cq, u32 flags)
 {
-	u64 cq_notify;
+	u32 cq_notify;

 	if (!cq->base_cq.comp_handler)
 		return false;

-	cq_notify = READ_ONCE(*cq->notify);
+	/* Read application shared notification state */
+	cq_notify = READ_ONCE(cq->notify->flags);

 	if ((cq_notify & SIW_NOTIFY_NEXT_COMPLETION) ||
 	    ((cq_notify & SIW_NOTIFY_SOLICITED) &&
 	     (flags & SIW_WQE_SOLICITED))) {
-		/* dis-arm CQ */
-		smp_store_mb(*cq->notify, SIW_NOTIFY_NOT);
+		/*
+		 * CQ notification is one-shot: Since the
+		 * current CQE causes user notification,
+		 * the CQ gets dis-armed and must be re-armed
+		 * by the user for a new notification.
+		 */
+		WRITE_ONCE(cq->notify->flags, SIW_NOTIFY_NOT);

 		return true;
 	}
+11-5
drivers/infiniband/sw/siw/siw_verbs.c
···

 	spin_lock_init(&cq->lock);

-	cq->notify = &((struct siw_cq_ctrl *)&cq->queue[size])->notify;
+	cq->notify = (struct siw_cq_ctrl *)&cq->queue[size];

 	if (udata) {
 		struct siw_uresp_create_cq uresp = {};
···
 	siw_dbg_cq(cq, "flags: 0x%02x\n", flags);

 	if ((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED)
-		/* CQ event for next solicited completion */
-		smp_store_mb(*cq->notify, SIW_NOTIFY_SOLICITED);
+		/*
+		 * Enable CQ event for next solicited completion,
+		 * and make it visible to all associated producers.
+		 */
+		smp_store_mb(cq->notify->flags, SIW_NOTIFY_SOLICITED);
 	else
-		/* CQ event for any signalled completion */
-		smp_store_mb(*cq->notify, SIW_NOTIFY_ALL);
+		/*
+		 * Enable CQ event for any signalled completion,
+		 * and make it visible to all associated producers.
+		 */
+		smp_store_mb(cq->notify->flags, SIW_NOTIFY_ALL);

 	if (flags & IB_CQ_REPORT_MISSED_EVENTS)
 		return cq->cq_put - cq->cq_get;
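The two siw patches above implement a one-shot, shared-memory notification flag: the consumer arms it with `smp_store_mb()` (store plus full barrier) and the producer clears it before reporting an event, so each arming yields at most one notification. A userspace C11 sketch of the same protocol using `stdatomic.h`; the names (`notify_flags`, `arm_notification`, `notify_now`) are illustrative analogues of `cq->notify->flags`, not the driver's API, and sequentially consistent atomics stand in for the kernel's barriers.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NOTIFY_NOT 0u
#define NOTIFY_ALL 1u

/* Shared notification word, analogous to the application-visible
 * cq->notify->flags field in siw. */
static _Atomic unsigned int notify_flags = NOTIFY_NOT;

/* Consumer side: re-arm notification. seq_cst store acts like the
 * kernel's smp_store_mb() (store followed by a full barrier). */
static void arm_notification(unsigned int flags)
{
	atomic_store(&notify_flags, flags);
}

/* Producer side: deliver at most one event per arming. The one-shot
 * behaviour comes from clearing the flag before reporting true. */
static bool notify_now(void)
{
	unsigned int cur = atomic_load(&notify_flags);

	if (cur & NOTIFY_ALL) {
		atomic_store(&notify_flags, NOTIFY_NOT); /* dis-arm */
		return true;
	}
	return false;
}
```

Note the sketch, like the patch, keeps arming on the consumer side and dis-arming on the producer side; with multiple producers a `atomic_exchange` of `NOTIFY_NOT` would be needed to guarantee a single winner.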
···

 config XILINX_SDFEC
 	tristate "Xilinx SDFEC 16"
+	depends on HAS_IOMEM
 	help
 	  This option enables support for the Xilinx SDFEC (Soft Decision
 	  Forward Error Correction) driver. This enables a char driver
+2-3
drivers/misc/habanalabs/device.c
···
 	rc = hl_ctx_init(hdev, hdev->kernel_ctx, true);
 	if (rc) {
 		dev_err(hdev->dev, "failed to initialize kernel context\n");
-		goto free_ctx;
+		kfree(hdev->kernel_ctx);
+		goto mmu_fini;
 	}

 	rc = hl_cb_pool_init(hdev);
···
 	if (hl_ctx_put(hdev->kernel_ctx) != 1)
 		dev_err(hdev->dev,
 			"kernel ctx is still alive on initialization failure\n");
-free_ctx:
-	kfree(hdev->kernel_ctx);
 mmu_fini:
 	hl_mmu_fini(hdev);
 eq_fini:
+47-25
drivers/misc/habanalabs/goya/goya.c
···
 			GOYA_ASYNC_EVENT_ID_PI_UPDATE);
 }

-void goya_flush_pq_write(struct hl_device *hdev, u64 *pq, u64 exp_val)
+void goya_pqe_write(struct hl_device *hdev, __le64 *pqe, struct hl_bd *bd)
 {
-	/* Not needed in Goya */
+	/* The QMANs are on the SRAM so need to copy to IO space */
+	memcpy_toio((void __iomem *) pqe, bd, sizeof(struct hl_bd));
 }

 static void *goya_dma_alloc_coherent(struct hl_device *hdev, size_t size,
···
 	int rc;

 	dev_dbg(hdev->dev, "DMA packet details:\n");
-	dev_dbg(hdev->dev, "source == 0x%llx\n", user_dma_pkt->src_addr);
-	dev_dbg(hdev->dev, "destination == 0x%llx\n", user_dma_pkt->dst_addr);
-	dev_dbg(hdev->dev, "size == %u\n", user_dma_pkt->tsize);
+	dev_dbg(hdev->dev, "source == 0x%llx\n",
+		le64_to_cpu(user_dma_pkt->src_addr));
+	dev_dbg(hdev->dev, "destination == 0x%llx\n",
+		le64_to_cpu(user_dma_pkt->dst_addr));
+	dev_dbg(hdev->dev, "size == %u\n", le32_to_cpu(user_dma_pkt->tsize));

 	ctl = le32_to_cpu(user_dma_pkt->ctl);
 	user_dir = (ctl & GOYA_PKT_LIN_DMA_CTL_DMA_DIR_MASK) >>
···
 				struct packet_lin_dma *user_dma_pkt)
 {
 	dev_dbg(hdev->dev, "DMA packet details:\n");
-	dev_dbg(hdev->dev, "source == 0x%llx\n", user_dma_pkt->src_addr);
-	dev_dbg(hdev->dev, "destination == 0x%llx\n", user_dma_pkt->dst_addr);
-	dev_dbg(hdev->dev, "size == %u\n", user_dma_pkt->tsize);
+	dev_dbg(hdev->dev, "source == 0x%llx\n",
+		le64_to_cpu(user_dma_pkt->src_addr));
+	dev_dbg(hdev->dev, "destination == 0x%llx\n",
+		le64_to_cpu(user_dma_pkt->dst_addr));
+	dev_dbg(hdev->dev, "size == %u\n", le32_to_cpu(user_dma_pkt->tsize));

 	/*
 	 * WA for HW-23.
···

 	dev_dbg(hdev->dev, "WREG32 packet details:\n");
 	dev_dbg(hdev->dev, "reg_offset == 0x%x\n", reg_offset);
-	dev_dbg(hdev->dev, "value == 0x%x\n", wreg_pkt->value);
+	dev_dbg(hdev->dev, "value == 0x%x\n",
+		le32_to_cpu(wreg_pkt->value));

 	if (reg_offset != (mmDMA_CH_0_WR_COMP_ADDR_LO & 0x1FFF)) {
 		dev_err(hdev->dev, "WREG32 packet with illegal address 0x%x\n",
···
 	while (cb_parsed_length < parser->user_cb_size) {
 		enum packet_id pkt_id;
 		u16 pkt_size;
-		void *user_pkt;
+		struct goya_packet *user_pkt;

-		user_pkt = (void *) (uintptr_t)
+		user_pkt = (struct goya_packet *) (uintptr_t)
 			(parser->user_cb->kernel_address + cb_parsed_length);

-		pkt_id = (enum packet_id) (((*(u64 *) user_pkt) &
+		pkt_id = (enum packet_id) (
+				(le64_to_cpu(user_pkt->header) &
 				PACKET_HEADER_PACKET_ID_MASK) >>
 					PACKET_HEADER_PACKET_ID_SHIFT);
···
 		 * need to validate here as well because patch_cb() is
 		 * not called in MMU path while this function is called
 		 */
-		rc = goya_validate_wreg32(hdev, parser, user_pkt);
+		rc = goya_validate_wreg32(hdev,
+			parser, (struct packet_wreg32 *) user_pkt);
 		break;

 	case PACKET_WREG_BULK:
···
 	case PACKET_LIN_DMA:
 		if (is_mmu)
 			rc = goya_validate_dma_pkt_mmu(hdev, parser,
-					user_pkt);
+					(struct packet_lin_dma *) user_pkt);
 		else
 			rc = goya_validate_dma_pkt_no_mmu(hdev, parser,
-					user_pkt);
+					(struct packet_lin_dma *) user_pkt);
 		break;

 	case PACKET_MSG_LONG:
···
 		enum packet_id pkt_id;
 		u16 pkt_size;
 		u32 new_pkt_size = 0;
-		void *user_pkt, *kernel_pkt;
+		struct goya_packet *user_pkt, *kernel_pkt;

-		user_pkt = (void *) (uintptr_t)
+		user_pkt = (struct goya_packet *) (uintptr_t)
 			(parser->user_cb->kernel_address + cb_parsed_length);
-		kernel_pkt = (void *) (uintptr_t)
+		kernel_pkt = (struct goya_packet *) (uintptr_t)
 			(parser->patched_cb->kernel_address +
 					cb_patched_cur_length);

-		pkt_id = (enum packet_id) (((*(u64 *) user_pkt) &
+		pkt_id = (enum packet_id) (
+				(le64_to_cpu(user_pkt->header) &
 				PACKET_HEADER_PACKET_ID_MASK) >>
 					PACKET_HEADER_PACKET_ID_SHIFT);
···

 		switch (pkt_id) {
 		case PACKET_LIN_DMA:
-			rc = goya_patch_dma_packet(hdev, parser, user_pkt,
-						kernel_pkt, &new_pkt_size);
+			rc = goya_patch_dma_packet(hdev, parser,
+					(struct packet_lin_dma *) user_pkt,
+					(struct packet_lin_dma *) kernel_pkt,
+					&new_pkt_size);
 			cb_patched_cur_length += new_pkt_size;
 			break;

 		case PACKET_WREG_32:
 			memcpy(kernel_pkt, user_pkt, pkt_size);
 			cb_patched_cur_length += pkt_size;
-			rc = goya_validate_wreg32(hdev, parser, kernel_pkt);
+			rc = goya_validate_wreg32(hdev, parser,
+					(struct packet_wreg32 *) kernel_pkt);
 			break;

 		case PACKET_WREG_BULK:
···
 	size_t total_pkt_size;
 	long result;
 	int rc;
+	int irq_num_entries, irq_arr_index;
+	__le32 *goya_irq_arr;

 	total_pkt_size = sizeof(struct armcp_unmask_irq_arr_packet) +
 			irq_arr_size;
···
 	if (!pkt)
 		return -ENOMEM;

-	pkt->length = cpu_to_le32(irq_arr_size / sizeof(irq_arr[0]));
-	memcpy(&pkt->irqs, irq_arr, irq_arr_size);
+	irq_num_entries = irq_arr_size / sizeof(irq_arr[0]);
+	pkt->length = cpu_to_le32(irq_num_entries);
+
+	/* We must perform any necessary endianness conversion on the irq
+	 * array being passed to the goya hardware
+	 */
+	for (irq_arr_index = 0, goya_irq_arr = (__le32 *) &pkt->irqs;
+			irq_arr_index < irq_num_entries ; irq_arr_index++)
+		goya_irq_arr[irq_arr_index] =
+				cpu_to_le32(irq_arr[irq_arr_index]);

 	pkt->armcp_pkt.ctl = cpu_to_le32(ARMCP_PACKET_UNMASK_RAZWI_IRQ_ARRAY <<
 						ARMCP_PKT_CTL_OPCODE_SHIFT);
···
 	.resume = goya_resume,
 	.cb_mmap = goya_cb_mmap,
 	.ring_doorbell = goya_ring_doorbell,
-	.flush_pq_write = goya_flush_pq_write,
+	.pqe_write = goya_pqe_write,
 	.asic_dma_alloc_coherent = goya_dma_alloc_coherent,
 	.asic_dma_free_coherent = goya_dma_free_coherent,
 	.get_int_queue_base = goya_get_int_queue_base,
···
  * @resume: handles IP specific H/W or SW changes for resume.
  * @cb_mmap: maps a CB.
  * @ring_doorbell: increment PI on a given QMAN.
- * @flush_pq_write: flush PQ entry write if necessary, WARN if flushing failed.
+ * @pqe_write: Write the PQ entry to the PQ. This is ASIC-specific
+ *             function because the PQs are located in different memory areas
+ *             per ASIC (SRAM, DRAM, Host memory) and therefore, the method of
+ *             writing the PQE must match the destination memory area
+ *             properties.
  * @asic_dma_alloc_coherent: Allocate coherent DMA memory by calling
  *                           dma_alloc_coherent(). This is ASIC function because
  *                           its implementation is not trivial when the driver
···
 	int (*cb_mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
 			u64 kaddress, phys_addr_t paddress, u32 size);
 	void (*ring_doorbell)(struct hl_device *hdev, u32 hw_queue_id, u32 pi);
-	void (*flush_pq_write)(struct hl_device *hdev, u64 *pq, u64 exp_val);
+	void (*pqe_write)(struct hl_device *hdev, __le64 *pqe,
+			struct hl_bd *bd);
 	void* (*asic_dma_alloc_coherent)(struct hl_device *hdev, size_t size,
 			dma_addr_t *dma_handle, gfp_t flag);
 	void (*asic_dma_free_coherent)(struct hl_device *hdev, size_t size,
···
 #define GOYA_PKT_CTL_MB_SHIFT		31
 #define GOYA_PKT_CTL_MB_MASK		0x80000000

+/* All packets have, at least, an 8-byte header, which contains
+ * the packet type. The kernel driver uses the packet header for packet
+ * validation and to perform any required preparation before
+ * sending them off to the hardware.
+ */
+struct goya_packet {
+	__le64 header;
+	/* The rest of the packet data follows. Use the corresponding
+	 * packet_XXX struct to dereference the data, based on packet type
+	 */
+	u8 contents[0];
+};
+
 struct packet_nop {
 	__le32 reserved;
 	__le32 ctl;
drivers/misc/habanalabs/irq.c | +13 -14
···
 	struct hl_cs_job *job;
 	bool shadow_index_valid;
 	u16 shadow_index;
-	u32 *cq_entry;
-	u32 *cq_base;
+	struct hl_cq_entry *cq_entry, *cq_base;

 	if (hdev->disabled) {
 		dev_dbg(hdev->dev,
···
 		return IRQ_HANDLED;
 	}

-	cq_base = (u32 *) (uintptr_t) cq->kernel_address;
+	cq_base = (struct hl_cq_entry *) (uintptr_t) cq->kernel_address;

 	while (1) {
-		bool entry_ready = ((cq_base[cq->ci] & CQ_ENTRY_READY_MASK)
+		bool entry_ready = ((le32_to_cpu(cq_base[cq->ci].data) &
+					CQ_ENTRY_READY_MASK)
 						>> CQ_ENTRY_READY_SHIFT);

 		if (!entry_ready)
 			break;

-		cq_entry = (u32 *) &cq_base[cq->ci];
+		cq_entry = (struct hl_cq_entry *) &cq_base[cq->ci];

-		/*
-		 * Make sure we read CQ entry contents after we've
+		/* Make sure we read CQ entry contents after we've
 		 * checked the ownership bit.
 		 */
 		dma_rmb();

-		shadow_index_valid =
-			((*cq_entry & CQ_ENTRY_SHADOW_INDEX_VALID_MASK)
+		shadow_index_valid = ((le32_to_cpu(cq_entry->data) &
+					CQ_ENTRY_SHADOW_INDEX_VALID_MASK)
 					>> CQ_ENTRY_SHADOW_INDEX_VALID_SHIFT);

-		shadow_index = (u16)
-			((*cq_entry & CQ_ENTRY_SHADOW_INDEX_MASK)
+		shadow_index = (u16) ((le32_to_cpu(cq_entry->data) &
+					CQ_ENTRY_SHADOW_INDEX_MASK)
 					>> CQ_ENTRY_SHADOW_INDEX_SHIFT);

 		queue = &hdev->kernel_queues[cq->hw_queue_id];
···
 			queue_work(hdev->cq_wq, &job->finish_work);
 		}

-		/*
-		 * Update ci of the context's queue. There is no
+		/* Update ci of the context's queue. There is no
 		 * need to protect it with spinlock because this update is
 		 * done only inside IRQ and there is a different IRQ per
 		 * queue
···
 		queue->ci = hl_queue_inc_ptr(queue->ci);

 		/* Clear CQ entry ready bit */
-		cq_base[cq->ci] &= ~CQ_ENTRY_READY_MASK;
+		cq_entry->data = cpu_to_le32(le32_to_cpu(cq_entry->data) &
+						~CQ_ENTRY_READY_MASK);

 		cq->ci = hl_cq_inc_ptr(cq->ci);
drivers/misc/habanalabs/memory.c | +2 -0
···
 			dev_dbg(hdev->dev,
 				"page list 0x%p of asid %d is still alive\n",
 				phys_pg_list, ctx->asid);
+			atomic64_sub(phys_pg_list->total_size,
+					&hdev->dram_used_mem);
 			free_phys_pg_pack(hdev, phys_pg_list);
 			idr_remove(&vm->phys_pg_pack_handles, i);
 		}
drivers/mtd/spi-nor/spi-nor.c | +3 -2
···
 	default:
 		/* Kept only for backward compatibility purpose. */
 		params->quad_enable = spansion_quad_enable;
-		if (nor->clear_sr_bp)
-			nor->clear_sr_bp = spi_nor_spansion_clear_sr_bp;
 		break;
 	}
···
 	int err;

 	if (nor->clear_sr_bp) {
+		if (nor->quad_enable == spansion_quad_enable)
+			nor->clear_sr_bp = spi_nor_spansion_clear_sr_bp;
+
 		err = nor->clear_sr_bp(nor);
 		if (err) {
 			dev_err(nor->dev,
drivers/nvme/host/core.c | +14 -1
···
 	 */
 	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
 		mutex_lock(&ctrl->scan_lock);
+		mutex_lock(&ctrl->subsys->lock);
+		nvme_mpath_start_freeze(ctrl->subsys);
+		nvme_mpath_wait_freeze(ctrl->subsys);
 		nvme_start_freeze(ctrl);
 		nvme_wait_freeze(ctrl);
 	}
···
 	nvme_update_formats(ctrl);
 	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
 		nvme_unfreeze(ctrl);
+		nvme_mpath_unfreeze(ctrl->subsys);
+		mutex_unlock(&ctrl->subsys->lock);
 		mutex_unlock(&ctrl->scan_lock);
 	}
 	if (effects & NVME_CMD_EFFECTS_CCC)
···
 	if (ns->head->disk) {
 		nvme_update_disk_info(ns->head->disk, ns, id);
 		blk_queue_stack_limits(ns->head->disk->queue, ns->queue);
+		revalidate_disk(ns->head->disk);
 	}
 #endif
 }
···
 	if (ret) {
 		dev_err(ctrl->device,
 			"failed to register subsystem device.\n");
+		put_device(&subsys->dev);
 		goto out_unlock;
 	}
 	ida_init(&subsys->ns_ida);
···
 	nvme_put_subsystem(subsys);
 out_unlock:
 	mutex_unlock(&nvme_subsystems_lock);
-	put_device(&subsys->dev);
 	return ret;
 }
···
 {
 	struct nvme_ns *ns, *next;
 	LIST_HEAD(ns_list);
+
+	/*
+	 * make sure to requeue I/O to all namespaces as these
+	 * might result from the scan itself and must complete
+	 * for the scan_work to make progress
+	 */
+	nvme_mpath_clear_ctrl_paths(ctrl);

 	/* prevent racing with ns scanning */
 	flush_work(&ctrl->scan_work);
drivers/nvme/host/multipath.c | +70 -6
···
 MODULE_PARM_DESC(multipath,
 	"turn on native support for multiple controllers per subsystem");

+void nvme_mpath_unfreeze(struct nvme_subsystem *subsys)
+{
+	struct nvme_ns_head *h;
+
+	lockdep_assert_held(&subsys->lock);
+	list_for_each_entry(h, &subsys->nsheads, entry)
+		if (h->disk)
+			blk_mq_unfreeze_queue(h->disk->queue);
+}
+
+void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys)
+{
+	struct nvme_ns_head *h;
+
+	lockdep_assert_held(&subsys->lock);
+	list_for_each_entry(h, &subsys->nsheads, entry)
+		if (h->disk)
+			blk_mq_freeze_queue_wait(h->disk->queue);
+}
+
+void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
+{
+	struct nvme_ns_head *h;
+
+	lockdep_assert_held(&subsys->lock);
+	list_for_each_entry(h, &subsys->nsheads, entry)
+		if (h->disk)
+			blk_freeze_queue_start(h->disk->queue);
+}
+
 /*
  * If multipathing is enabled we need to always use the subsystem instance
  * number for numbering our devices to avoid conflicts between subsystems that
···
 	[NVME_ANA_CHANGE]	= "change",
 };

-void nvme_mpath_clear_current_path(struct nvme_ns *ns)
+bool nvme_mpath_clear_current_path(struct nvme_ns *ns)
 {
 	struct nvme_ns_head *head = ns->head;
+	bool changed = false;
 	int node;

 	if (!head)
-		return;
+		goto out;

 	for_each_node(node) {
-		if (ns == rcu_access_pointer(head->current_path[node]))
+		if (ns == rcu_access_pointer(head->current_path[node])) {
 			rcu_assign_pointer(head->current_path[node], NULL);
+			changed = true;
+		}
 	}
+out:
+	return changed;
+}
+
+void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns;
+
+	mutex_lock(&ctrl->scan_lock);
+	list_for_each_entry(ns, &ctrl->namespaces, list)
+		if (nvme_mpath_clear_current_path(ns))
+			kblockd_schedule_work(&ns->head->requeue_work);
+	mutex_unlock(&ctrl->scan_lock);
 }

 static bool nvme_path_is_disabled(struct nvme_ns *ns)
···
 	return ns;
 }

+static bool nvme_available_path(struct nvme_ns_head *head)
+{
+	struct nvme_ns *ns;
+
+	list_for_each_entry_rcu(ns, &head->list, siblings) {
+		switch (ns->ctrl->state) {
+		case NVME_CTRL_LIVE:
+		case NVME_CTRL_RESETTING:
+		case NVME_CTRL_CONNECTING:
+			/* fallthru */
+			return true;
+		default:
+			break;
+		}
+	}
+	return false;
+}
+
 static blk_qc_t nvme_ns_head_make_request(struct request_queue *q,
 		struct bio *bio)
 {
···
 				      disk_devt(ns->head->disk),
 				      bio->bi_iter.bi_sector);
 		ret = direct_make_request(bio);
-	} else if (!list_empty_careful(&head->list)) {
-		dev_warn_ratelimited(dev, "no path available - requeuing I/O\n");
+	} else if (nvme_available_path(head)) {
+		dev_warn_ratelimited(dev, "no usable path - requeuing I/O\n");

 		spin_lock_irq(&head->requeue_lock);
 		bio_list_add(&head->requeue_list, bio);
 		spin_unlock_irq(&head->requeue_lock);
 	} else {
-		dev_warn_ratelimited(dev, "no path - failing I/O\n");
+		dev_warn_ratelimited(dev, "no available path - failing I/O\n");

 		bio->bi_status = BLK_STS_IOERR;
 		bio_endio(bio);
···
 {
 	struct nvme_dev *dev = data;

-	nvme_reset_ctrl_sync(&dev->ctrl);
+	flush_work(&dev->ctrl.reset_work);
 	flush_work(&dev->ctrl.scan_work);
 	nvme_put_ctrl(&dev->ctrl);
 }
···
 	dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));

+	nvme_reset_ctrl(&dev->ctrl);
 	nvme_get_ctrl(&dev->ctrl);
 	async_schedule(nvme_async_probe, dev);
···
 	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));
 	struct nvme_ctrl *ctrl = &ndev->ctrl;

-	if (pm_resume_via_firmware() || !ctrl->npss ||
+	if (ndev->last_ps == U32_MAX ||
 	    nvme_set_power_state(ctrl, ndev->last_ps) != 0)
 		nvme_reset_ctrl(ctrl);
 	return 0;
···
 	struct nvme_ctrl *ctrl = &ndev->ctrl;
 	int ret = -EBUSY;

+	ndev->last_ps = U32_MAX;
+
 	/*
 	 * The platform does not remove power for a kernel managed suspend so
 	 * use host managed nvme power settings for lowest idle power if
···
 	 * shutdown. But if the firmware is involved after the suspend or the
 	 * device does not support any non-default power states, shut down the
 	 * device fully.
+	 *
+	 * If ASPM is not enabled for the device, shut down the device and allow
+	 * the PCI bus layer to put it into D3 in order to take the PCIe link
+	 * down, so as to allow the platform to achieve its minimum low-power
+	 * state (which may not be possible if the link is up).
 	 */
-	if (pm_suspend_via_firmware() || !ctrl->npss) {
+	if (pm_suspend_via_firmware() || !ctrl->npss ||
+	    !pcie_aspm_enabled(pdev)) {
 		nvme_dev_disable(ndev, true);
 		return 0;
 	}
···
 	    ctrl->state != NVME_CTRL_ADMIN_ONLY)
 		goto unfreeze;

-	ndev->last_ps = 0;
 	ret = nvme_get_power_state(ctrl, &ndev->last_ps);
 	if (ret < 0)
 		goto unfreeze;
···

 found:
 	list_del(&p->entry);
+	nvmet_port_del_ctrls(port, subsys);
 	nvmet_port_disc_changed(port, subsys);

 	if (list_empty(&port->subsystems))
drivers/nvme/target/core.c | +15 -0
···
 	u16 status;

 	switch (errno) {
+	case 0:
+		status = NVME_SC_SUCCESS;
+		break;
 	case -ENOSPC:
 		req->error_loc = offsetof(struct nvme_rw_command, length);
 		status = NVME_SC_CAP_EXCEEDED | NVME_SC_DNR;
···
 	up_write(&nvmet_config_sem);
 }
 EXPORT_SYMBOL_GPL(nvmet_unregister_transport);
+
+void nvmet_port_del_ctrls(struct nvmet_port *port, struct nvmet_subsys *subsys)
+{
+	struct nvmet_ctrl *ctrl;
+
+	mutex_lock(&subsys->lock);
+	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
+		if (ctrl->port == port)
+			ctrl->ops->delete_ctrl(ctrl);
+	}
+	mutex_unlock(&subsys->lock);
+}

 int nvmet_enable_port(struct nvmet_port *port)
 {
drivers/nvme/target/loop.c | +8 -0
···
 	mutex_lock(&nvme_loop_ports_mutex);
 	list_del_init(&port->entry);
 	mutex_unlock(&nvme_loop_ports_mutex);
+
+	/*
+	 * Ensure any ctrls that are in the process of being
+	 * deleted are in fact deleted before we return
+	 * and free the port. This is to prevent active
+	 * ctrls from using a port after it's freed.
+	 */
+	flush_workqueue(nvme_delete_wq);
 }

 static const struct nvmet_fabrics_ops nvme_loop_ops = {
···
  * of_irq_parse_one - Resolve an interrupt for a device
  * @device: the device whose interrupt is to be resolved
  * @index: index of the interrupt to resolve
- * @out_irq: structure of_irq filled by this function
+ * @out_irq: structure of_phandle_args filled by this function
  *
  * This function resolves an interrupt for a node by walking the interrupt tree,
  * finding which interrupt controller node it is attached to, and returning the
drivers/of/resolver.c | +9 -3
···
 	for_each_child_of_node(local_fixups, child) {

 		for_each_child_of_node(overlay, overlay_child)
-			if (!node_name_cmp(child, overlay_child))
+			if (!node_name_cmp(child, overlay_child)) {
+				of_node_put(overlay_child);
 				break;
+			}

-		if (!overlay_child)
+		if (!overlay_child) {
+			of_node_put(child);
 			return -EINVAL;
+		}

 		err = adjust_local_phandle_references(child, overlay_child,
 				phandle_delta);
-		if (err)
+		if (err) {
+			of_node_put(child);
 			return err;
+		}
 	}

 	return 0;
drivers/pci/pcie/aspm.c | +20 -0
···
 module_param_call(policy, pcie_aspm_set_policy, pcie_aspm_get_policy,
 	NULL, 0644);

+/**
+ * pcie_aspm_enabled - Check if PCIe ASPM has been enabled for a device.
+ * @pdev: Target device.
+ */
+bool pcie_aspm_enabled(struct pci_dev *pdev)
+{
+	struct pci_dev *bridge = pci_upstream_bridge(pdev);
+	bool ret;
+
+	if (!bridge)
+		return false;
+
+	mutex_lock(&aspm_lock);
+	ret = bridge->link_state ? !!bridge->link_state->aspm_enabled : false;
+	mutex_unlock(&aspm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pcie_aspm_enabled);
+
 #ifdef CONFIG_PCIEASPM_DEBUG
 static ssize_t link_state_show(struct device *dev,
 		struct device_attribute *attr,
drivers/scsi/lpfc/lpfc_init.c | +21 -2
···
 	/* This loop sets up all CPUs that are affinitized with a
 	 * irq vector assigned to the driver. All affinitized CPUs
 	 * will get a link to that vectors IRQ and EQ.
+	 *
+	 * NULL affinity mask handling:
+	 * If irq count is greater than one, log an error message.
+	 * If the null mask is received for the first irq, find the
+	 * first present cpu, and assign the eq index to ensure at
+	 * least one EQ is assigned.
 	 */
 	for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
 		/* Get a CPU mask for all CPUs affinitized to this vector */
 		maskp = pci_irq_get_affinity(phba->pcidev, idx);
-		if (!maskp)
-			continue;
+		if (!maskp) {
+			if (phba->cfg_irq_chann > 1)
+				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+						"3329 No affinity mask found "
+						"for vector %d (%d)\n",
+						idx, phba->cfg_irq_chann);
+			if (!idx) {
+				cpu = cpumask_first(cpu_present_mask);
+				cpup = &phba->sli4_hba.cpu_map[cpu];
+				cpup->eq = idx;
+				cpup->irq = pci_irq_vector(phba->pcidev, idx);
+				cpup->flag |= LPFC_CPU_FIRST_IRQ;
+			}
+			break;
+		}

 		i = 0;
 		/* Loop through all CPUs associated with vector idx */
drivers/soundwire/Kconfig | +1 -6
···
 #

 menuconfig SOUNDWIRE
-	bool "SoundWire support"
+	tristate "SoundWire support"
 	help
 	  SoundWire is a 2-Pin interface with data and clock line ratified
 	  by the MIPI Alliance. SoundWire is used for transporting data
···

 comment "SoundWire Devices"

-config SOUNDWIRE_BUS
-	tristate
-	select REGMAP_SOUNDWIRE
-
 config SOUNDWIRE_CADENCE
 	tristate

 config SOUNDWIRE_INTEL
 	tristate "Intel SoundWire Master driver"
 	select SOUNDWIRE_CADENCE
-	select SOUNDWIRE_BUS
 	depends on X86 && ACPI && SND_SOC
 	help
 	  SoundWire Intel Master driver.
···
 static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec,
 			    unsigned int flags)
 {
-	int divider, base, prescale;
+	unsigned int divider, base, prescale;

-	/* This function needs improvment */
+	/* This function needs improvement */
 	/* Don't know if divider==0 works. */

 	for (prescale = 0; prescale < 16; prescale++) {
···
 			divider = (*nanosec) / base;
 			break;
 		case CMDF_ROUND_UP:
-			divider = (*nanosec) / base;
+			divider = DIV_ROUND_UP(*nanosec, base);
 			break;
 		}
 		if (divider < 65536) {
···
 	}

 	prescale = 15;
-	base = timer_base * (1 << prescale);
+	base = timer_base * (prescale + 1);
 	divider = 65535;
 	*nanosec = divider * base;
 	return (prescale << 16) | (divider);
drivers/usb/chipidea/ci_hdrc_imx.c | +12 -7
···
 	imx_disable_unprepare_clks(dev);
 disable_hsic_regulator:
 	if (data->hsic_pad_regulator)
-		ret = regulator_disable(data->hsic_pad_regulator);
+		/* don't overwrite original ret (cf. EPROBE_DEFER) */
+		regulator_disable(data->hsic_pad_regulator);
 	if (pdata.flags & CI_HDRC_PMQOS)
 		pm_qos_remove_request(&data->pm_qos_req);
+	data->ci_pdev = NULL;
 	return ret;
 }
···
 		pm_runtime_disable(&pdev->dev);
 		pm_runtime_put_noidle(&pdev->dev);
 	}
-	ci_hdrc_remove_device(data->ci_pdev);
+	if (data->ci_pdev)
+		ci_hdrc_remove_device(data->ci_pdev);
 	if (data->override_phy_control)
 		usb_phy_shutdown(data->phy);
-	imx_disable_unprepare_clks(&pdev->dev);
-	if (data->plat_data->flags & CI_HDRC_PMQOS)
-		pm_qos_remove_request(&data->pm_qos_req);
-	if (data->hsic_pad_regulator)
-		regulator_disable(data->hsic_pad_regulator);
+	if (data->ci_pdev) {
+		imx_disable_unprepare_clks(&pdev->dev);
+		if (data->plat_data->flags & CI_HDRC_PMQOS)
+			pm_qos_remove_request(&data->pm_qos_req);
+		if (data->hsic_pad_regulator)
+			regulator_disable(data->hsic_pad_regulator);
+	}

 	return 0;
 }
···
 	char		name[16];
 	int		i, size;

-	if (!IS_ENABLED(CONFIG_HAS_DMA) ||
-	    (!is_device_dma_capable(hcd->self.sysdev) &&
-	     !hcd->localmem_pool))
+	if (hcd->localmem_pool || !hcd_uses_dma(hcd))
 		return 0;

 	for (i = 0; i < HCD_BUFFER_POOLS; i++) {
···
 		return gen_pool_dma_alloc(hcd->localmem_pool, size, dma);

 	/* some USB hosts just use PIO */
-	if (!IS_ENABLED(CONFIG_HAS_DMA) ||
-	    !is_device_dma_capable(bus->sysdev)) {
+	if (!hcd_uses_dma(hcd)) {
 		*dma = ~(dma_addr_t) 0;
 		return kmalloc(size, mem_flags);
 	}
···
 		return;
 	}

-	if (!IS_ENABLED(CONFIG_HAS_DMA) ||
-	    !is_device_dma_capable(bus->sysdev)) {
+	if (!hcd_uses_dma(hcd)) {
 		kfree(addr);
 		return;
 	}
drivers/usb/core/file.c | +5 -5
···
 		intf->minor = minor;
 		break;
 	}
-	up_write(&minor_rwsem);
-	if (intf->minor < 0)
+	if (intf->minor < 0) {
+		up_write(&minor_rwsem);
 		return -EXFULL;
+	}

 	/* create a usb class device for this usb interface */
 	snprintf(name, sizeof(name), class_driver->name, minor - minor_base);
···
 				MKDEV(USB_MAJOR, minor), class_driver,
 				"%s", kbasename(name));
 	if (IS_ERR(intf->usb_dev)) {
-		down_write(&minor_rwsem);
 		usb_minors[minor] = NULL;
 		intf->minor = -1;
-		up_write(&minor_rwsem);
 		retval = PTR_ERR(intf->usb_dev);
 	}
+	up_write(&minor_rwsem);
 	return retval;
 }
 EXPORT_SYMBOL_GPL(usb_register_dev);
···
 		return;

 	dev_dbg(&intf->dev, "removing %d minor\n", intf->minor);
+	device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor));

 	down_write(&minor_rwsem);
 	usb_minors[intf->minor] = NULL;
 	up_write(&minor_rwsem);

-	device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor));
 	intf->usb_dev = NULL;
 	intf->minor = -1;
 	destroy_usb_class();
drivers/usb/core/hcd.c | +2 -2
···
 	if (usb_endpoint_xfer_control(&urb->ep->desc)) {
 		if (hcd->self.uses_pio_for_control)
 			return ret;
-		if (IS_ENABLED(CONFIG_HAS_DMA) && hcd->self.uses_dma) {
+		if (hcd_uses_dma(hcd)) {
 			if (is_vmalloc_addr(urb->setup_packet)) {
 				WARN_ONCE(1, "setup packet is not dma capable\n");
 				return -EAGAIN;
···
 	dir = usb_urb_dir_in(urb) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
 	if (urb->transfer_buffer_length != 0
 	    && !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP)) {
-		if (IS_ENABLED(CONFIG_HAS_DMA) && hcd->self.uses_dma) {
+		if (hcd_uses_dma(hcd)) {
 			if (urb->num_sgs) {
 				int n;
drivers/usb/core/message.c | +2 -2
···
 				(struct usb_cdc_dmm_desc *)buffer;
 		break;
 	case USB_CDC_MDLM_TYPE:
-		if (elength < sizeof(struct usb_cdc_mdlm_desc *))
+		if (elength < sizeof(struct usb_cdc_mdlm_desc))
 			goto next_desc;
 		if (desc)
 			return -EINVAL;
 		desc = (struct usb_cdc_mdlm_desc *)buffer;
 		break;
 	case USB_CDC_MDLM_DETAIL_TYPE:
-		if (elength < sizeof(struct usb_cdc_mdlm_detail_desc *))
+		if (elength < sizeof(struct usb_cdc_mdlm_detail_desc))
 			goto next_desc;
 		if (detail)
 			return -EINVAL;
drivers/usb/dwc2/hcd.c | +1 -1
···

 	buf = urb->transfer_buffer;

-	if (hcd->self.uses_dma) {
+	if (hcd_uses_dma(hcd)) {
 		if (!buf && (urb->transfer_dma & 3)) {
 			dev_err(hsotg->dev,
 				"%s: unaligned transfer with no transfer_buffer",
drivers/usb/gadget/composite.c | +1 -0
···
 	 * disconnect callbacks?
 	 */
 	spin_lock_irqsave(&cdev->lock, flags);
+	cdev->suspended = 0;
 	if (cdev->config)
 		reset_config(cdev);
 	if (cdev->driver->disconnect)
drivers/usb/gadget/function/f_mass_storage.c | +18 -10
···
 struct fsg_common {
 	struct usb_gadget	*gadget;
 	struct usb_composite_dev *cdev;
-	struct fsg_dev		*fsg, *new_fsg;
+	struct fsg_dev		*fsg;
 	wait_queue_head_t	io_wait;
 	wait_queue_head_t	fsg_wait;
···
 	unsigned int		bulk_out_maxpacket;
 	enum fsg_state		state;		/* For exception handling */
 	unsigned int		exception_req_tag;
+	void			*exception_arg;

 	enum data_direction	data_dir;
 	u32			data_size;
···

 /* These routines may be called in process context or in_irq */

-static void raise_exception(struct fsg_common *common, enum fsg_state new_state)
+static void __raise_exception(struct fsg_common *common, enum fsg_state new_state,
+			      void *arg)
 {
 	unsigned long		flags;

···
 	if (common->state <= new_state) {
 		common->exception_req_tag = common->ep0_req_tag;
 		common->state = new_state;
+		common->exception_arg = arg;
 		if (common->thread_task)
 			send_sig_info(SIGUSR1, SEND_SIG_PRIV,
 				      common->thread_task);
···
 	spin_unlock_irqrestore(&common->lock, flags);
 }

+static void raise_exception(struct fsg_common *common, enum fsg_state new_state)
+{
+	__raise_exception(common, new_state, NULL);
+}

 /*-------------------------------------------------------------------------*/

···
 static int fsg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
 {
 	struct fsg_dev *fsg = fsg_from_func(f);
-	fsg->common->new_fsg = fsg;
-	raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
+
+	__raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, fsg);
 	return USB_GADGET_DELAYED_STATUS;
 }

 static void fsg_disable(struct usb_function *f)
 {
 	struct fsg_dev *fsg = fsg_from_func(f);
-	fsg->common->new_fsg = NULL;
-	raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
+
+	__raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, NULL);
 }

···
 	enum fsg_state		old_state;
 	struct fsg_lun		*curlun;
 	unsigned int		exception_req_tag;
+	struct fsg_dev		*new_fsg;

 	/*
 	 * Clear the existing signals.  Anything but SIGUSR1 is converted
···
 	common->next_buffhd_to_fill = &common->buffhds[0];
 	common->next_buffhd_to_drain = &common->buffhds[0];
 	exception_req_tag = common->exception_req_tag;
+	new_fsg = common->exception_arg;
 	old_state = common->state;
 	common->state = FSG_STATE_NORMAL;

···
 		break;

 	case FSG_STATE_CONFIG_CHANGE:
-		do_set_interface(common, common->new_fsg);
-		if (common->new_fsg)
+		do_set_interface(common, new_fsg);
+		if (new_fsg)
 			usb_composite_setup_continue(common->cdev);
 		break;

···

 	DBG(fsg, "unbind\n");
 	if (fsg->common->fsg == fsg) {
-		fsg->common->new_fsg = NULL;
-		raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
+		__raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, NULL);
 		/* FIXME: make interruptible or killable somehow? */
 		wait_event(common->fsg_wait, common->fsg != fsg);
 	}
drivers/usb/gadget/udc/renesas_usb3.c | +3 -2
···
 #include <linux/pm_runtime.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <linux/sys_soc.h>
 #include <linux/uaccess.h>
 #include <linux/usb/ch9.h>
···
 	if (usb3->forced_b_device)
 		return -EBUSY;

-	if (!strncmp(buf, "host", strlen("host")))
+	if (sysfs_streq(buf, "host"))
 		new_mode_is_host = true;
-	else if (!strncmp(buf, "peripheral", strlen("peripheral")))
+	else if (sysfs_streq(buf, "peripheral"))
 		new_mode_is_host = false;
 	else
 		return -EINVAL;
drivers/usb/host/fotg210-hcd.c | +4 -0
···
 		/* see what we found out */
 		temp = check_reset_complete(fotg210, wIndex, status_reg,
 				fotg210_readl(fotg210, status_reg));
+
+		/* restart schedule */
+		fotg210->command |= CMD_RUN;
+		fotg210_writel(fotg210, fotg210->command, &fotg210->regs->command);
 	}

 	if (!(temp & (PORT_RESUME|PORT_RESET))) {
···
  * iterate through the data blob that lists the contents of an AFS directory
  */
 static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
-			   struct key *key)
+			   struct key *key, afs_dataversion_t *_dir_version)
 {
 	struct afs_vnode *dvnode = AFS_FS_I(dir);
 	struct afs_xdr_dir_page *dbuf;
···
 	req = afs_read_dir(dvnode, key);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
+	*_dir_version = req->data_version;

 	/* round the file position up to the next entry boundary */
 	ctx->pos += sizeof(union afs_xdr_dirent) - 1;
···
  */
 static int afs_readdir(struct file *file, struct dir_context *ctx)
 {
-	return afs_dir_iterate(file_inode(file), ctx, afs_file_key(file));
+	afs_dataversion_t dir_version;
+
+	return afs_dir_iterate(file_inode(file), ctx, afs_file_key(file),
+			       &dir_version);
 }

 /*
···
  * - just returns the FID the dentry name maps to if found
  */
 static int afs_do_lookup_one(struct inode *dir, struct dentry *dentry,
-			     struct afs_fid *fid, struct key *key)
+			     struct afs_fid *fid, struct key *key,
+			     afs_dataversion_t *_dir_version)
 {
 	struct afs_super_info *as = dir->i_sb->s_fs_info;
 	struct afs_lookup_one_cookie cookie = {
···
 	_enter("{%lu},%p{%pd},", dir->i_ino, dentry, dentry);

 	/* search the directory */
-	ret = afs_dir_iterate(dir, &cookie.ctx, key);
+	ret = afs_dir_iterate(dir, &cookie.ctx, key, _dir_version);
 	if (ret < 0) {
 		_leave(" = %d [iter]", ret);
 		return ret;
···
 	struct afs_server *server;
 	struct afs_vnode *dvnode = AFS_FS_I(dir), *vnode;
 	struct inode *inode = NULL, *ti;
+	afs_dataversion_t data_version = READ_ONCE(dvnode->status.data_version);
 	int ret, i;

 	_enter("{%lu},%p{%pd},", dir->i_ino, dentry, dentry);
···
 		cookie->fids[i].vid = as->volume->vid;

 	/* search the directory */
-	ret = afs_dir_iterate(dir, &cookie->ctx, key);
+	ret = afs_dir_iterate(dir, &cookie->ctx, key, &data_version);
 	if (ret < 0) {
 		inode = ERR_PTR(ret);
 		goto out;
 	}
+
+	dentry->d_fsdata = (void *)(unsigned long)data_version;

 	inode = ERR_PTR(-ENOENT);
 	if (!cookie->found)
···
 	struct dentry *parent;
 	struct inode *inode;
 	struct key *key;
-	long dir_version, de_version;
+	afs_dataversion_t dir_version;
+	long de_version;
 	int ret;

 	if (flags & LOOKUP_RCU)
···
 	 * on a 32-bit system, we only have 32 bits in the dentry to store the
 	 * version.
 	 */
-	dir_version = (long)dir->status.data_version;
+	dir_version = dir->status.data_version;
 	de_version = (long)dentry->d_fsdata;
-	if (de_version == dir_version)
-		goto out_valid;
+	if (de_version == (long)dir_version)
+		goto out_valid_noupdate;

-	dir_version = (long)dir->invalid_before;
-	if (de_version - dir_version >= 0)
+	dir_version = dir->invalid_before;
+	if (de_version - (long)dir_version >= 0)
 		goto out_valid;

 	_debug("dir modified");
 	afs_stat_v(dir, n_reval);

 	/* search the directory for this vnode */
-	ret = afs_do_lookup_one(&dir->vfs_inode, dentry, &fid, key);
+	ret = afs_do_lookup_one(&dir->vfs_inode, dentry, &fid, key, &dir_version);
 	switch (ret) {
 	case 0:
 		/* the filename maps to something */
···
 	}

 out_valid:
-	dentry->d_fsdata = (void *)dir_version;
+	dentry->d_fsdata = (void *)(unsigned long)dir_version;
+out_valid_noupdate:
 	dput(parent);
 	key_put(key);
 	_leave(" = 1 [valid]");
···
 }

 /*
+ * Note that a dentry got changed.  We need to set d_fsdata to the data version
+ * number derived from the result of the operation.  It doesn't matter if
+ * d_fsdata goes backwards as we'll just revalidate.
+ */
+static void afs_update_dentry_version(struct afs_fs_cursor *fc,
+				      struct dentry *dentry,
+				      struct afs_status_cb *scb)
+{
+	if (fc->ac.error == 0)
+		dentry->d_fsdata =
+			(void *)(unsigned long)scb->status.data_version;
+}
+
+/*
  * create a directory on an AFS filesystem
  */
 static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
···
 		afs_check_for_remote_deletion(&fc, dvnode);
 		afs_vnode_commit_status(&fc, dvnode, fc.cb_break,
 					&data_version, &scb[0]);
+		afs_update_dentry_version(&fc, dentry, &scb[0]);
 		afs_vnode_new_inode(&fc, dentry, &iget_data, &scb[1]);
 		ret = afs_end_vnode_operation(&fc);
 		if (ret < 0)
···

 		afs_vnode_commit_status(&fc, dvnode, fc.cb_break,
 					&data_version, scb);
+		afs_update_dentry_version(&fc, dentry, scb);
 		ret = afs_end_vnode_operation(&fc);
 		if (ret == 0) {
 			afs_dir_remove_subdir(dentry);
···
 					&data_version, &scb[0]);
 		afs_vnode_commit_status(&fc, vnode, fc.cb_break_2,
 					&data_version_2, &scb[1]);
+		afs_update_dentry_version(&fc, dentry, &scb[0]);
 		ret = afs_end_vnode_operation(&fc);
 		if (ret == 0 && !(scb[1].have_status || scb[1].have_error))
 			ret = afs_dir_remove_link(dvnode, dentry, key);
···
 		afs_check_for_remote_deletion(&fc, dvnode);
 		afs_vnode_commit_status(&fc, dvnode, fc.cb_break,
 					&data_version, &scb[0]);
+		afs_update_dentry_version(&fc, dentry, &scb[0]);
 		afs_vnode_new_inode(&fc, dentry, &iget_data, &scb[1]);
 		ret = afs_end_vnode_operation(&fc);
 		if (ret < 0)
···
 		afs_vnode_commit_status(&fc, vnode, fc.cb_break_2,
 					NULL, &scb[1]);
 		ihold(&vnode->vfs_inode);
+		afs_update_dentry_version(&fc, dentry, &scb[0]);
 		d_instantiate(dentry, &vnode->vfs_inode);

 		mutex_unlock(&vnode->io_lock);
···
 		afs_check_for_remote_deletion(&fc, dvnode);
 		afs_vnode_commit_status(&fc, dvnode, fc.cb_break,
 					&data_version, &scb[0]);
+		afs_update_dentry_version(&fc, dentry, &scb[0]);
 		afs_vnode_new_inode(&fc, dentry, &iget_data, &scb[1]);
 		ret = afs_end_vnode_operation(&fc);
 		if (ret < 0)
···
 		}
 	}

+	/* This bit is potentially nasty as there's a potential race with
+	 * afs_d_revalidate{,_rcu}().  We have to change d_fsdata on the dentry
+	 * to reflect its new parent's new data_version after the op, but
+	 * d_revalidate may see old_dentry between the op having taken place
+	 * and the version being updated.
+	 *
+	 * So drop the old_dentry for now to make other threads go through
+	 * lookup instead - which we hold a lock against.
+	 */
+	d_drop(old_dentry);
+
 	ret = -ERESTARTSYS;
 	if (afs_begin_vnode_operation(&fc, orig_dvnode, key, true)) {
 		afs_dataversion_t orig_data_version;
···
 		if (orig_dvnode != new_dvnode) {
 			if (mutex_lock_interruptible_nested(&new_dvnode->io_lock, 1) < 0) {
 				afs_end_vnode_operation(&fc);
-				goto error_rehash;
+				goto error_rehash_old;
 			}
-			new_data_version = new_dvnode->status.data_version;
+			new_data_version = new_dvnode->status.data_version + 1;
 		} else {
 			new_data_version = orig_data_version;
 			new_scb = &scb[0];
···
 		}
 		ret = afs_end_vnode_operation(&fc);
 		if (ret < 0)
-			goto error_rehash_old;
 	}

 	if (ret == 0) {
···
 			drop_nlink(new_inode);
 			spin_unlock(&new_inode->i_lock);
 		}
+
+		/* Now we can update d_fsdata on the dentries to reflect their
+		 * new parent's data_version.
*18601860+ * Note that if we ever implement RENAME_EXCHANGE, we'll have18611861+ * to update both dentries with opposing dir versions.18621862+ */18631863+ if (new_dvnode != orig_dvnode) {18641864+ afs_update_dentry_version(&fc, old_dentry, &scb[1]);18651865+ afs_update_dentry_version(&fc, new_dentry, &scb[1]);18661866+ } else {18671867+ afs_update_dentry_version(&fc, old_dentry, &scb[0]);18681868+ afs_update_dentry_version(&fc, new_dentry, &scb[0]);18691869+ }18971870 d_move(old_dentry, new_dentry);18981871 goto error_tmp;18991872 }1900187318741874+error_rehash_old:18751875+ d_rehash(new_dentry);19011876error_rehash:19021877 if (rehash)19031878 d_rehash(rehash);
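The revalidation hunks above compare a 64-bit AFS directory data version against a pointer-sized value stashed in `d_fsdata`, truncating both sides to `long` before comparing. A minimal user-space sketch of why that truncated comparison stays valid (the helper names are ours, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for stashing a 64-bit data version in a
 * pointer-sized slot, as the afs code does with dentry->d_fsdata. */
static void *store_version(uint64_t version)
{
	return (void *)(unsigned long)version;
}

static int version_matches(void *stored, uint64_t version)
{
	/* Both sides are truncated to long before comparing, so the
	 * check also works where long (and the pointer) is 32 bits. */
	return (long)stored == (long)(unsigned long)version;
}
```

A stale dentry whose stored version differs in the low bits from the current directory version fails the check and triggers a fresh lookup.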
+7-5
fs/afs/file.c
···	int i;
 
 	if (refcount_dec_and_test(&req->usage)) {
-		for (i = 0; i < req->nr_pages; i++)
-			if (req->pages[i])
-				put_page(req->pages[i]);
-		if (req->pages != req->array)
-			kfree(req->pages);
+		if (req->pages) {
+			for (i = 0; i < req->nr_pages; i++)
+				if (req->pages[i])
+					put_page(req->pages[i]);
+			if (req->pages != req->array)
+				kfree(req->pages);
+		}
 		kfree(req);
 	}
 }
+6-5
fs/afs/vlclient.c
···	struct afs_uuid__xdr *xdr;
 	struct afs_uuid *uuid;
 	int j;
+	int n = entry->nr_servers;
 
 	tmp = ntohl(uvldb->serverFlags[i]);
 	if (tmp & AFS_VLSF_DONTUSE ||
 	    (new_only && !(tmp & AFS_VLSF_NEWREPSITE)))
 		continue;
 	if (tmp & AFS_VLSF_RWVOL) {
-		entry->fs_mask[i] |= AFS_VOL_VTM_RW;
+		entry->fs_mask[n] |= AFS_VOL_VTM_RW;
 		if (vlflags & AFS_VLF_BACKEXISTS)
-			entry->fs_mask[i] |= AFS_VOL_VTM_BAK;
+			entry->fs_mask[n] |= AFS_VOL_VTM_BAK;
 	}
 	if (tmp & AFS_VLSF_ROVOL)
-		entry->fs_mask[i] |= AFS_VOL_VTM_RO;
-	if (!entry->fs_mask[i])
+		entry->fs_mask[n] |= AFS_VOL_VTM_RO;
+	if (!entry->fs_mask[n])
 		continue;
 
 	xdr = &uvldb->serverNumber[i];
-	uuid = (struct afs_uuid *)&entry->fs_server[i];
+	uuid = (struct afs_uuid *)&entry->fs_server[n];
 	uuid->time_low = xdr->time_low;
 	uuid->time_mid = htons(ntohl(xdr->time_mid));
 	uuid->time_hi_and_version = htons(ntohl(xdr->time_hi_and_version));
+5-44
fs/block_dev.c
···	struct bio *bio;
 	bool is_poll = (iocb->ki_flags & IOCB_HIPRI) != 0;
 	bool is_read = (iov_iter_rw(iter) == READ), is_sync;
-	bool nowait = (iocb->ki_flags & IOCB_NOWAIT) != 0;
 	loff_t pos = iocb->ki_pos;
 	blk_qc_t qc = BLK_QC_T_NONE;
-	gfp_t gfp;
-	int ret;
+	int ret = 0;
 
 	if ((pos | iov_iter_alignment(iter)) &
 	    (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
-	if (nowait)
-		gfp = GFP_NOWAIT;
-	else
-		gfp = GFP_KERNEL;
-
-	bio = bio_alloc_bioset(gfp, nr_pages, &blkdev_dio_pool);
-	if (!bio)
-		return -EAGAIN;
+	bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
 
 	dio = container_of(bio, struct blkdev_dio, bio);
 	dio->is_sync = is_sync = is_sync_kiocb(iocb);
···	if (!is_poll)
 		blk_start_plug(&plug);
 
-	ret = 0;
 	for (;;) {
 		bio_set_dev(bio, bdev);
 		bio->bi_iter.bi_sector = pos >> 9;
···			task_io_account_write(bio->bi_iter.bi_size);
 		}
 
-		/*
-		 * Tell underlying layer to not block for resource shortage.
-		 * And if we would have blocked, return error inline instead
-		 * of through the bio->bi_end_io() callback.
-		 */
-		if (nowait)
-			bio->bi_opf |= (REQ_NOWAIT | REQ_NOWAIT_INLINE);
-
+		dio->size += bio->bi_iter.bi_size;
 		pos += bio->bi_iter.bi_size;
 
 		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
···				polled = true;
 			}
 
-			dio->size += bio->bi_iter.bi_size;
 			qc = submit_bio(bio);
-			if (qc == BLK_QC_T_EAGAIN) {
-				dio->size -= bio->bi_iter.bi_size;
-				ret = -EAGAIN;
-				goto error;
-			}
 
 			if (polled)
 				WRITE_ONCE(iocb->ki_cookie, qc);
···			atomic_inc(&dio->ref);
 		}
 
-		dio->size += bio->bi_iter.bi_size;
-		qc = submit_bio(bio);
-		if (qc == BLK_QC_T_EAGAIN) {
-			dio->size -= bio->bi_iter.bi_size;
-			ret = -EAGAIN;
-			goto error;
-		}
-
-		bio = bio_alloc(gfp, nr_pages);
-		if (!bio) {
-			ret = -EAGAIN;
-			goto error;
-		}
+		submit_bio(bio);
+		bio = bio_alloc(GFP_KERNEL, nr_pages);
 	}
 
 	if (!is_poll)
···	}
 	__set_current_state(TASK_RUNNING);
 
-out:
 	if (!ret)
 		ret = blk_status_to_errno(dio->bio.bi_status);
 	if (likely(!ret))
···
 	bio_put(&dio->bio);
 	return ret;
-error:
-	if (!is_poll)
-		blk_finish_plug(&plug);
-	goto out;
 }
 
 static ssize_t
···	 */
 
 #include <linux/sched.h>
+#include <linux/sched/mm.h>
 #include <linux/sched/signal.h>
 #include <linux/pagemap.h>
 #include <linux/writeback.h>
···		return 0;
 }
 
-/* link_block_group will queue up kobjects to add when we're reclaim-safe */
-void btrfs_add_raid_kobjects(struct btrfs_fs_info *fs_info)
-{
-	struct btrfs_space_info *space_info;
-	struct raid_kobject *rkobj;
-	LIST_HEAD(list);
-	int ret = 0;
-
-	spin_lock(&fs_info->pending_raid_kobjs_lock);
-	list_splice_init(&fs_info->pending_raid_kobjs, &list);
-	spin_unlock(&fs_info->pending_raid_kobjs_lock);
-
-	list_for_each_entry(rkobj, &list, list) {
-		space_info = btrfs_find_space_info(fs_info, rkobj->flags);
-
-		ret = kobject_add(&rkobj->kobj, &space_info->kobj,
-				  "%s", btrfs_bg_type_to_raid_name(rkobj->flags));
-		if (ret) {
-			kobject_put(&rkobj->kobj);
-			break;
-		}
-	}
-	if (ret)
-		btrfs_warn(fs_info,
-			   "failed to add kobject for block cache, ignoring");
-}
-
 static void link_block_group(struct btrfs_block_group_cache *cache)
 {
 	struct btrfs_space_info *space_info = cache->space_info;
···	up_write(&space_info->groups_sem);
 
 	if (first) {
-		struct raid_kobject *rkobj = kzalloc(sizeof(*rkobj), GFP_NOFS);
+		struct raid_kobject *rkobj;
+		unsigned int nofs_flag;
+		int ret;
+
+		/*
+		 * Setup a NOFS context because kobject_add(), deep in its call
+		 * chain, does GFP_KERNEL allocations, and we are often called
+		 * in a context where if reclaim is triggered we can deadlock
+		 * (we are either holding a transaction handle or some lock
+		 * required for a transaction commit).
+		 */
+		nofs_flag = memalloc_nofs_save();
+		rkobj = kzalloc(sizeof(*rkobj), GFP_KERNEL);
 		if (!rkobj) {
+			memalloc_nofs_restore(nofs_flag);
 			btrfs_warn(cache->fs_info,
 				   "couldn't alloc memory for raid level kobject");
 			return;
 		}
 		rkobj->flags = cache->flags;
 		kobject_init(&rkobj->kobj, &btrfs_raid_ktype);
-
-		spin_lock(&fs_info->pending_raid_kobjs_lock);
-		list_add_tail(&rkobj->list, &fs_info->pending_raid_kobjs);
-		spin_unlock(&fs_info->pending_raid_kobjs_lock);
+		ret = kobject_add(&rkobj->kobj, &space_info->kobj, "%s",
+				  btrfs_bg_type_to_raid_name(rkobj->flags));
+		memalloc_nofs_restore(nofs_flag);
+		if (ret) {
+			kobject_put(&rkobj->kobj);
+			btrfs_warn(fs_info,
+				   "failed to add kobject for block cache, ignoring");
+			return;
+		}
 		space_info->block_group_kobjs[index] = &rkobj->kobj;
 	}
 }
···		inc_block_group_ro(cache, 1);
 	}
 
-	btrfs_add_raid_kobjects(info);
 	btrfs_init_global_block_rsv(info);
 	ret = check_chunk_block_group_mappings(info);
 error:
···	struct btrfs_device *device;
 	struct list_head *devices;
 	u64 group_trimmed;
+	u64 range_end = U64_MAX;
 	u64 start;
 	u64 end;
 	u64 trimmed = 0;
···	int dev_ret = 0;
 	int ret = 0;
 
+	/*
+	 * Check range overflow if range->len is set.
+	 * The default range->len is U64_MAX.
+	 */
+	if (range->len != U64_MAX &&
+	    check_add_overflow(range->start, range->len, &range_end))
+		return -EINVAL;
+
 	cache = btrfs_lookup_first_block_group(fs_info, range->start);
 	for (; cache; cache = next_block_group(cache)) {
-		if (cache->key.objectid >= (range->start + range->len)) {
+		if (cache->key.objectid >= range_end) {
 			btrfs_put_block_group(cache);
 			break;
 		}
 
 		start = max(range->start, cache->key.objectid);
-		end = min(range->start + range->len,
-			  cache->key.objectid + cache->key.offset);
+		end = min(range_end, cache->key.objectid + cache->key.offset);
 
 		if (end - start >= range->minlen) {
 			if (!block_group_cache_done(cache)) {
-13
fs/btrfs/volumes.c
···	if (ret)
 		return ret;
 
-	/*
-	 * We add the kobjects here (and after forcing data chunk creation)
-	 * since relocation is the only place we'll create chunks of a new
-	 * type at runtime. The only place where we'll remove the last
-	 * chunk of a type is the call immediately below this one. Even
-	 * so, we're protected against races with the cleaner thread since
-	 * we're covered by the delete_unused_bgs_mutex.
-	 */
-	btrfs_add_raid_kobjects(fs_info);
-
 	trans = btrfs_start_trans_remove_block_group(root->fs_info,
 						     chunk_offset);
 	if (IS_ERR(trans)) {
···			btrfs_end_transaction(trans);
 			if (ret < 0)
 				return ret;
-
-			btrfs_add_raid_kobjects(fs_info);
-
 			return 1;
 		}
 	}
+10-10
fs/io_uring.c
···
 		iter->bvec = bvec + seg_skip;
 		iter->nr_segs -= seg_skip;
-		iter->count -= (seg_skip << PAGE_SHIFT);
+		iter->count -= bvec->bv_len + offset;
 		iter->iov_offset = offset & ~PAGE_MASK;
-		if (iter->iov_offset)
-			iter->count -= iter->iov_offset;
 	}
 }
···{
 	int ret;
 
+	ret = io_req_defer(ctx, req, s->sqe);
+	if (ret) {
+		if (ret != -EIOCBQUEUED) {
+			io_free_req(req);
+			io_cqring_add_event(ctx, s->sqe->user_data, ret);
+		}
+		return 0;
+	}
+
 	ret = __io_submit_sqe(ctx, req, s, true);
 	if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
 		struct io_uring_sqe *sqe_copy;
···		io_free_req(req);
 err:
 		io_cqring_add_event(ctx, s->sqe->user_data, ret);
-		return;
-	}
-
-	ret = io_req_defer(ctx, req, s->sqe);
-	if (ret) {
-		if (ret != -EIOCBQUEUED)
-			goto err_req;
 		return;
 	}
+1-1
fs/seq_file.c
···		}
 		if (seq_has_overflowed(m))
 			goto Eoverflow;
+		p = m->op->next(m, p, &m->index);
 		if (pos + m->count > offset) {
 			m->from = offset - pos;
 			m->count -= m->from;
···		}
 		pos += m->count;
 		m->count = 0;
-		p = m->op->next(m, p, &m->index);
 		if (pos == offset)
 			break;
 	}
+21-8
fs/xfs/libxfs/xfs_bmap.c
···	XFS_STATS_INC(mp, xs_blk_mapr);
 
 	ifp = XFS_IFORK_PTR(ip, whichfork);
+	if (!ifp) {
+		/* No CoW fork?  Return a hole. */
+		if (whichfork == XFS_COW_FORK) {
+			mval->br_startoff = bno;
+			mval->br_startblock = HOLESTARTBLOCK;
+			mval->br_blockcount = len;
+			mval->br_state = XFS_EXT_NORM;
+			*nmap = 1;
+			return 0;
+		}
 
-	/* No CoW fork?  Return a hole. */
-	if (whichfork == XFS_COW_FORK && !ifp) {
-		mval->br_startoff = bno;
-		mval->br_startblock = HOLESTARTBLOCK;
-		mval->br_blockcount = len;
-		mval->br_state = XFS_EXT_NORM;
-		*nmap = 1;
-		return 0;
+		/*
+		 * A missing attr ifork implies that the inode says we're in
+		 * extents or btree format but failed to pass the inode fork
+		 * verifier while trying to load it.  Treat that as a file
+		 * corruption too.
+		 */
+#ifdef DEBUG
+		xfs_alert(mp, "%s: inode %llu missing fork %d",
+				__func__, ip->i_ino, whichfork);
+#endif /* DEBUG */
+		return -EFSCORRUPTED;
 	}
 
 	if (!(ifp->if_flags & XFS_IFEXTENTS)) {
+12-7
fs/xfs/libxfs/xfs_da_btree.c
···		ASSERT(state->path.active == 0);
 		oldblk = &state->path.blk[0];
 		error = xfs_da3_root_split(state, oldblk, addblk);
-		if (error) {
-			addblk->bp = NULL;
-			return error;	/* GROT: dir is inconsistent */
-		}
+		if (error)
+			goto out;
 
 		/*
 		 * Update pointers to the node which used to be block 0 and just got
···		 */
 		node = oldblk->bp->b_addr;
 		if (node->hdr.info.forw) {
-			ASSERT(be32_to_cpu(node->hdr.info.forw) == addblk->blkno);
+			if (be32_to_cpu(node->hdr.info.forw) != addblk->blkno) {
+				error = -EFSCORRUPTED;
+				goto out;
+			}
 			node = addblk->bp->b_addr;
 			node->hdr.info.back = cpu_to_be32(oldblk->blkno);
 			xfs_trans_log_buf(state->args->trans, addblk->bp,
···		}
 		node = oldblk->bp->b_addr;
 		if (node->hdr.info.back) {
-			ASSERT(be32_to_cpu(node->hdr.info.back) == addblk->blkno);
+			if (be32_to_cpu(node->hdr.info.back) != addblk->blkno) {
+				error = -EFSCORRUPTED;
+				goto out;
+			}
 			node = addblk->bp->b_addr;
 			node->hdr.info.forw = cpu_to_be32(oldblk->blkno);
 			xfs_trans_log_buf(state->args->trans, addblk->bp,
 					XFS_DA_LOGRANGE(node, &node->hdr.info,
 					sizeof(node->hdr.info)));
 		}
+out:
 	addblk->bp = NULL;
-	return 0;
+	return error;
 }
 
 /*
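The xfs_da_btree.c hunks above replace debug-only `ASSERT()`s on on-disk sibling pointers with `-EFSCORRUPTED` returns, so corrupt metadata is reported at runtime instead of only crashing debug builds. A toy illustration of the pattern (types and names are ours; the errno value 117 matches `EUCLEAN`, which Linux aliases to `EFSCORRUPTED`):

```c
#include <assert.h>

#define EFSCORRUPTED 117	/* illustration only; the kernel uses EUCLEAN */

struct node { unsigned forw; };

/* Before: ASSERT(node->forw == expected) fired only on DEBUG builds.
 * After: a mismatch in on-disk data is reported to the caller. */
static int check_sibling(const struct node *node, unsigned expected)
{
	if (node->forw != expected)
		return -EFSCORRUPTED;
	return 0;
}
```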
+2-1
fs/xfs/libxfs/xfs_dir2_node.c
···	ents = dp->d_ops->leaf_ents_p(leaf);
 
 	xfs_dir3_leaf_check(dp, bp);
-	ASSERT(leafhdr.count > 0);
+	if (leafhdr.count <= 0)
+		return -EFSCORRUPTED;
 
 	/*
 	 * Look up the hash value in the leaf entries.
···			/** @pgmap: Points to the hosting device page map. */
 			struct dev_pagemap *pgmap;
 			void *zone_device_data;
-			unsigned long _zd_pad_1;	/* uses mapping */
+			/*
+			 * ZONE_DEVICE private pages are counted as being
+			 * mapped so the next 3 words hold the mapping, index,
+			 * and private fields from the source anonymous or
+			 * page cache page while the page is migrated to device
+			 * private memory.
+			 * ZONE_DEVICE MEMORY_DEVICE_FS_DAX pages also
+			 * use the mapping, index, and private fields when
+			 * pmem backed DAX files are mapped.
+			 */
 		};
 
 		/** @rcu_head: You can use this to free a page by RCU. */
···  * field rather than determining a dma address themselves.
  *
  * Note that transfer_buffer must still be set if the controller
- * does not support DMA (as indicated by bus.uses_dma) and when talking
+ * does not support DMA (as indicated by hcd_uses_dma()) and when talking
  * to root hub. If you have to trasfer between highmem zone and the device
  * on such controller, create a bounce buffer or bail out with an error.
  * If transfer_buffer cannot be set (is in highmem) and the controller is DMA
+3
include/linux/usb/hcd.h
···	return hcd->high_prio_bh.completing_ep == ep;
 }
 
+#define hcd_uses_dma(hcd) \
+	(IS_ENABLED(CONFIG_HAS_DMA) && (hcd)->self.uses_dma)
+
 extern int usb_hcd_link_urb_to_ep(struct usb_hcd *hcd, struct urb *urb);
 extern int usb_hcd_check_unlink_urb(struct usb_hcd *hcd, struct urb *urb,
 		int status);
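The new `hcd_uses_dma()` macro folds the compile-time `CONFIG_HAS_DMA` check into the per-host-controller `uses_dma` flag, so DMA paths compile away entirely on no-DMA kernels. A compilable sketch with stand-in types (`HAS_DMA` here models `IS_ENABLED(CONFIG_HAS_DMA)`; these are not the real kernel structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel types, keeping only the field the macro reads. */
struct usb_bus { bool uses_dma; };
struct usb_hcd { struct usb_bus self; };

/* With HAS_DMA defined to 0 this whole expression is a compile-time 0,
 * letting the compiler drop the DMA-only branches. */
#define HAS_DMA 1
#define hcd_uses_dma(hcd) (HAS_DMA && (hcd)->self.uses_dma)
```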
···+// SPDX-License-Identifier: GPL-2.0-or-later
 /*
  * kernel/configs.c
  * Echo the kernel .config file used to build the kernel
···  * Copyright (C) 2002 Randy Dunlap <rdunlap@xenotime.net>
  * Copyright (C) 2002 Al Stone <ahs3@fc.hp.com>
  * Copyright (C) 2002 Hewlett-Packard Company
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or (at
- * your option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
- * NON INFRINGEMENT.  See the GNU General Public License for more
- * details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
 
 #include <linux/kernel.h>
+5-5
kernel/dma/direct.c
···{
 	u64 max_dma = phys_to_dma_direct(dev, (max_pfn - 1) << PAGE_SHIFT);
 
-	if (dev->bus_dma_mask && dev->bus_dma_mask < max_dma)
-		max_dma = dev->bus_dma_mask;
-
 	return (1ULL << (fls64(max_dma) - 1)) * 2 - 1;
 }
···	if (!page)
 		return NULL;
 
-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
+	    !force_dma_unencrypted(dev)) {
 		/* remove any dirty cache lines on the kernel alias */
 		if (!PageHighMem(page))
 			arch_dma_prep_coherent(page, size);
+		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 		/* return the page pointer as the opaque cookie */
 		return page;
 	}
···{
 	unsigned int page_order = get_order(size);
 
-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
+	    !force_dma_unencrypted(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		__dma_direct_free_pages(dev, size, cpu_addr);
 		return;
+18-1
kernel/dma/mapping.c
···}
 EXPORT_SYMBOL(dma_get_sgtable_attrs);
 
+#ifdef CONFIG_MMU
+/*
+ * Return the page attributes used for mapping dma_alloc_* memory, either in
+ * kernel space if remapping is needed, or to userspace through dma_mmap_*.
+ */
+pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
+{
+	if (dev_is_dma_coherent(dev) ||
+	    (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
+	     (attrs & DMA_ATTR_NON_CONSISTENT)))
+		return prot;
+	if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_MMAP_PGPROT))
+		return arch_dma_mmap_pgprot(dev, prot, attrs);
+	return pgprot_noncached(prot);
+}
+#endif /* CONFIG_MMU */
+
 /*
  * Create userspace mapping for the DMA-coherent memory.
  */
···	unsigned long pfn;
 	int ret = -ENXIO;
 
-	vma->vm_page_prot = arch_dma_mmap_pgprot(dev, vma->vm_page_prot, attrs);
+	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
 
 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
 		return ret;
+1-1
kernel/dma/remap.c
···
 	/* create a coherent mapping */
 	ret = dma_common_contiguous_remap(page, size, VM_USERMAP,
-			arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs),
+			dma_pgprot(dev, PAGE_KERNEL, attrs),
 			__builtin_return_address(0));
 	if (!ret) {
 		__dma_direct_free_pages(dev, size, page);
···  *	      available
  * never: never stall for any thp allocation
  */
-static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma, unsigned long addr)
 {
 	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+	gfp_t this_node = 0;
 
-	/* Always do synchronous compaction */
+#ifdef CONFIG_NUMA
+	struct mempolicy *pol;
+	/*
+	 * __GFP_THISNODE is used only when __GFP_DIRECT_RECLAIM is not
+	 * specified, to express a general desire to stay on the current
+	 * node for optimistic allocation attempts. If the defrag mode
+	 * and/or madvise hint requires the direct reclaim then we prefer
+	 * to fallback to other node rather than node reclaim because that
+	 * can lead to excessive reclaim even though there is free memory
+	 * on other nodes. We expect that NUMA preferences are specified
+	 * by memory policies.
+	 */
+	pol = get_vma_policy(vma, addr);
+	if (pol->mode != MPOL_BIND)
+		this_node = __GFP_THISNODE;
+	mpol_cond_put(pol);
+#endif
+
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
 		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
-
-	/* Kick kcompactd and fail quickly */
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
-		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
-
-	/* Synchronous compaction if madvised, otherwise kick kcompactd */
+		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM | this_node;
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
-		return GFP_TRANSHUGE_LIGHT |
-			(vma_madvised ? __GFP_DIRECT_RECLAIM :
-					__GFP_KSWAPD_RECLAIM);
-
-	/* Only do synchronous compaction if madvised */
+		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+							     __GFP_KSWAPD_RECLAIM | this_node);
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
-		return GFP_TRANSHUGE_LIGHT |
-		       (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);
-
-	return GFP_TRANSHUGE_LIGHT;
+		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+							     this_node);
+	return GFP_TRANSHUGE_LIGHT | this_node;
 }
 
 /* Caller must hold page table lock. */
···		pte_free(vma->vm_mm, pgtable);
 		return ret;
 	}
-	gfp = alloc_hugepage_direct_gfpmask(vma);
-	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
+	gfp = alloc_hugepage_direct_gfpmask(vma, haddr);
+	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, vma, haddr, numa_node_id());
 	if (unlikely(!page)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
···alloc:
 	if (__transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow()) {
-		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
-		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER);
+		huge_gfp = alloc_hugepage_direct_gfpmask(vma, haddr);
+		new_page = alloc_pages_vma(huge_gfp, HPAGE_PMD_ORDER, vma,
+					   haddr, numa_node_id());
 	} else
 		new_page = NULL;
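The gfp-mask logic above only ORs in `__GFP_THISNODE` on branches that do not request direct reclaim, so a THP allocation never reclaims aggressively on one node while other nodes have free memory. A toy sketch of that flag composition for the kswapd-or-madvise branch (the flag values are illustrative constants, not the real GFP bits):

```c
#include <assert.h>

/* Illustrative flag values; the real GFP bit values differ. */
#define T_TRANSHUGE_LIGHT	0x01u
#define T_DIRECT_RECLAIM	0x02u
#define T_KSWAPD_RECLAIM	0x04u
#define T_THISNODE		0x08u

/* Sketch of the decision above: the node-local hint is added only when
 * the mask asks for background (kswapd) reclaim, not direct reclaim. */
static unsigned int thp_gfp(int madvised, unsigned int this_node)
{
	return T_TRANSHUGE_LIGHT |
	       (madvised ? T_DIRECT_RECLAIM : T_KSWAPD_RECLAIM | this_node);
}
```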
+19
mm/hugetlb.c
···
 	page = alloc_huge_page(vma, haddr, 0);
 	if (IS_ERR(page)) {
+		/*
+		 * Returning error will result in faulting task being
+		 * sent SIGBUS. The hugetlb fault mutex prevents two
+		 * tasks from racing to fault in the same page which
+		 * could result in false unable to allocate errors.
+		 * Page migration does not take the fault mutex, but
+		 * does a clear then write of pte's under page table
+		 * lock. Page fault code could race with migration,
+		 * notice the clear pte and try to allocate a page
+		 * here. Before returning error, get ptl and make
+		 * sure there really is no pte entry.
+		 */
+		ptl = huge_pte_lock(h, mm, ptep);
+		if (!huge_pte_none(huge_ptep_get(ptep))) {
+			ret = 0;
+			spin_unlock(ptl);
+			goto out;
+		}
+		spin_unlock(ptl);
 		ret = vmf_error(PTR_ERR(page));
 		goto out;
 	}
···	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);
 }
 
+void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
+{
+	struct page *page = virt_to_head_page(p);
+	pg_data_t *pgdat = page_pgdat(page);
+	struct mem_cgroup *memcg;
+	struct lruvec *lruvec;
+
+	rcu_read_lock();
+	memcg = memcg_from_slab_page(page);
+
+	/* Untracked pages have no memcg, no lruvec. Update only the node */
+	if (!memcg || memcg == root_mem_cgroup) {
+		__mod_node_page_state(pgdat, idx, val);
+	} else {
+		lruvec = mem_cgroup_lruvec(pgdat, memcg);
+		__mod_lruvec_state(lruvec, idx, val);
+	}
+	rcu_read_unlock();
+}
+
 /**
  * __count_memcg_events - account VM events in a cgroup
  * @memcg: the memory cgroup
···	css_put(&prev->css);
 }
 
-static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+static void __invalidate_reclaim_iterators(struct mem_cgroup *from,
+					   struct mem_cgroup *dead_memcg)
 {
-	struct mem_cgroup *memcg = dead_memcg;
 	struct mem_cgroup_reclaim_iter *iter;
 	struct mem_cgroup_per_node *mz;
 	int nid;
 	int i;
 
-	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
-		for_each_node(nid) {
-			mz = mem_cgroup_nodeinfo(memcg, nid);
-			for (i = 0; i <= DEF_PRIORITY; i++) {
-				iter = &mz->iter[i];
-				cmpxchg(&iter->position,
-					dead_memcg, NULL);
-			}
+	for_each_node(nid) {
+		mz = mem_cgroup_nodeinfo(from, nid);
+		for (i = 0; i <= DEF_PRIORITY; i++) {
+			iter = &mz->iter[i];
+			cmpxchg(&iter->position,
+				dead_memcg, NULL);
 		}
 	}
+}
+
+static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+{
+	struct mem_cgroup *memcg = dead_memcg;
+	struct mem_cgroup *last;
+
+	do {
+		__invalidate_reclaim_iterators(memcg, dead_memcg);
+		last = memcg;
+	} while ((memcg = parent_mem_cgroup(memcg)));
+
+	/*
+	 * When cgroup1 non-hierarchy mode is used,
+	 * parent_mem_cgroup() does not walk all the way up to the
+	 * cgroup root (root_mem_cgroup). So we have to handle
+	 * dead_memcg from cgroup root separately.
+	 */
+	if (last != root_mem_cgroup)
+		__invalidate_reclaim_iterators(root_mem_cgroup,
+					       dead_memcg);
 }
 
 /**
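The rewritten `invalidate_reclaim_iterators()` above walks the parent chain and then covers the root group explicitly, because `parent_mem_cgroup()` stops short of the root in cgroup1 non-hierarchy mode. A minimal user-space sketch of that walk (stand-in types and names, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct group { struct group *parent; int invalidated; };

/* Walk from the dead group up through its parents; if the chain never
 * reached the root, handle the root separately, as the patch does. */
static void invalidate_up(struct group *dead, struct group *root)
{
	struct group *g = dead, *last = NULL;

	do {
		g->invalidated = 1;
		last = g;
	} while ((g = g->parent));

	if (last != root)
		root->invalidated = 1;
}
```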
+77-57
mm/mempolicy.c
···	},
 };
 
-static void migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_page_add(struct page *page, struct list_head *pagelist,
 				unsigned long flags);
 
 struct queue_pages {
···}
 
 /*
- * queue_pages_pmd() has three possible return values:
- * 1 - pages are placed on the right node or queued successfully.
- * 0 - THP was split.
- * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
- *        page was already on a node that does not follow the policy.
+ * queue_pages_pmd() has four possible return values:
+ * 0 - pages are placed on the right node or queued successfully.
+ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ *     specified.
+ * 2 - THP was split.
+ * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
+ *        existing page was already on a node that does not follow the
+ *        policy.
  */
 static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
···	if (is_huge_zero_page(page)) {
 		spin_unlock(ptl);
 		__split_huge_pmd(walk->vma, pmd, addr, false, NULL);
+		ret = 2;
 		goto out;
 	}
-	if (!queue_pages_required(page, qp)) {
-		ret = 1;
+	if (!queue_pages_required(page, qp))
 		goto unlock;
-	}
 
-	ret = 1;
 	flags = qp->flags;
 	/* go to thp migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-		if (!vma_migratable(walk->vma)) {
-			ret = -EIO;
+		if (!vma_migratable(walk->vma) ||
+		    migrate_page_add(page, qp->pagelist, flags)) {
+			ret = 1;
 			goto unlock;
 		}
-
-		migrate_page_add(page, qp->pagelist, flags);
 	} else
 		ret = -EIO;
 unlock:
···/*
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
+ *
+ * queue_pages_pte_range() has three possible return values:
+ * 0 - pages are placed on the right node or queued successfully.
+ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ *     specified.
+ * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
+ *        on a node that does not follow the policy.
  */
 static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
···	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	int ret;
+	bool has_unmovable = false;
 	pte_t *pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
 		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
-		if (ret > 0)
-			return 0;
-		else if (ret < 0)
+		if (ret != 2)
 			return ret;
 	}
+	/* THP was split, fall through to pte walk */
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
···		if (!queue_pages_required(page, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-			if (!vma_migratable(vma))
+			/* MPOL_MF_STRICT must be specified if we get here */
+			if (!vma_migratable(vma)) {
+				has_unmovable = true;
 				break;
-			migrate_page_add(page, qp->pagelist, flags);
+			}
+
+			/*
+			 * Do not abort immediately since there may be
+			 * temporary off LRU pages in the range. Still
+			 * need migrate other LRU pages.
+			 */
+			if (migrate_page_add(page, qp->pagelist, flags))
+				has_unmovable = true;
 		} else
 			break;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
+
+	if (has_unmovable)
+		return 1;
+
 	return addr != end ? -EIO : 0;
 }
···  *
  * If pages found in a given range are on a set of nodes (determined by
  * @nodes and @flags,) it's isolated and queued to the pagelist which is
- * passed via @private.)
+ * passed via @private.
+ *
+ * queue_pages_range() has three possible return values:
+ * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ *     specified.
+ * 0 - queue pages successfully or no misplaced page.
+ * -EIO - there is misplaced page and only MPOL_MF_STRICT was specified.
  */
 static int
 queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
···/*
  * page migration, thp tail pages can be passed.
  */
-static void migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_page_add(struct page *page, struct list_head *pagelist,
 				unsigned long flags)
 {
 	struct page *head = compound_head(page);
···		mod_node_page_state(page_pgdat(head),
 			NR_ISOLATED_ANON + page_is_file_cache(head),
 			hpage_nr_pages(head));
+	} else if (flags & MPOL_MF_STRICT) {
+		/*
+		 * Non-movable page may reach here. And, there may be
+		 * temporary off LRU pages or non-LRU movable pages.
+		 * Treat them as unmovable pages since they can't be
+		 * isolated, so they can't be moved at the moment.
It962962+ * should return -EIO for this case too.963963+ */964964+ return -EIO;983965 }984966 }967967+968968+ return 0;985969}986970987971/* page allocation callback for NUMA node migration */···11801142 } else if (PageTransHuge(page)) {11811143 struct page *thp;1182114411831183- thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address,11841184- HPAGE_PMD_ORDER);11451145+ thp = alloc_pages_vma(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma,11461146+ address, numa_node_id());11851147 if (!thp)11861148 return NULL;11871149 prep_transhuge_page(thp);···11951157}11961158#else1197115911981198-static void migrate_page_add(struct page *page, struct list_head *pagelist,11601160+static int migrate_page_add(struct page *page, struct list_head *pagelist,11991161 unsigned long flags)12001162{11631163+ return -EIO;12011164}1202116512031166int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,···12211182 struct mempolicy *new;12221183 unsigned long end;12231184 int err;11851185+ int ret;12241186 LIST_HEAD(pagelist);1225118712261188 if (flags & ~(unsigned long)MPOL_MF_VALID)···12831243 if (err)12841244 goto mpol_out;1285124512861286- err = queue_pages_range(mm, start, end, nmask,12461246+ ret = queue_pages_range(mm, start, end, nmask,12871247 flags | MPOL_MF_INVERT, &pagelist);12881288- if (!err)12891289- err = mbind_range(mm, start, end, new);12481248+12491249+ if (ret < 0) {12501250+ err = -EIO;12511251+ goto up_out;12521252+ }12531253+12541254+ err = mbind_range(mm, start, end, new);1290125512911256 if (!err) {12921257 int nr_failed = 0;···13041259 putback_movable_pages(&pagelist);13051260 }1306126113071307- if (nr_failed && (flags & MPOL_MF_STRICT))12621262+ if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))13081263 err = -EIO;13091264 } else13101265 putback_movable_pages(&pagelist);1311126612671267+up_out:13121268 up_write(&mm->mmap_sem);13131313- mpol_out:12691269+mpol_out:13141270 mpol_put(new);13151271 return err;13161272}···17341688 * freeing by another task. 
It is the caller's responsibility to free the17351689 * extra reference for shared policies.17361690 */17371737-static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,16911691+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,17381692 unsigned long addr)17391693{17401694 struct mempolicy *pol = __get_vma_policy(vma, addr);···20832037 * @vma: Pointer to VMA or NULL if not available.20842038 * @addr: Virtual Address of the allocation. Must be inside the VMA.20852039 * @node: Which node to prefer for allocation (modulo policy).20862086- * @hugepage: for hugepages try only the preferred node if possible20872040 *20882041 * This function allocates a page from the kernel page pool and applies20892042 * a NUMA policy associated with the VMA or the current process.···20932048 */20942049struct page *20952050alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,20962096- unsigned long addr, int node, bool hugepage)20512051+ unsigned long addr, int node)20972052{20982053 struct mempolicy *pol;20992054 struct page *page;···21092064 mpol_cond_put(pol);21102065 page = alloc_page_interleave(gfp, order, nid);21112066 goto out;21122112- }21132113-21142114- if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {21152115- int hpage_node = node;21162116-21172117- /*21182118- * For hugepage allocation and non-interleave policy which21192119- * allows the current node (or other explicitly preferred21202120- * node) we only try to allocate from the current/preferred21212121- * node and don't fall back to other nodes, as the cost of21222122- * remote accesses would likely offset THP benefits.21232123- *21242124- * If the policy is interleave, or does not allow the current21252125- * node in its nodemask, we allocate the standard way.21262126- */21272127- if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))21282128- hpage_node = pol->v.preferred_node;21292129-21302130- nmask = policy_nodemask(gfp, pol);21312131- if (!nmask || 
node_isset(hpage_node, *nmask)) {21322132- mpol_cond_put(pol);21332133- page = __alloc_pages_node(hpage_node,21342134- gfp | __GFP_THISNODE, order);21352135- goto out;21362136- }21372067 }2138206821392069 nmask = policy_nodemask(gfp, pol);
+24
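The mempolicy changes above give `queue_pages_range()` a three-way contract (0 / 1 / -EIO) that `do_mbind()` folds into a single `-EIO` for userspace. A toy userspace model (not kernel code; `queue()` and `do_mbind_status()` are made-up names standing in for `queue_pages_range()` and the tail of `do_mbind()`) sketches how the two failure modes map to the final error:

```c
#include <errno.h>

#define MPOL_MF_STRICT	(1 << 0)
#define MPOL_MF_MOVE	(1 << 1)

/*
 * Toy model of the new contract:
 *  0    - pages queued successfully (or nothing misplaced)
 *  1    - unmovable page hit under MPOL_MF_MOVE* | MPOL_MF_STRICT
 *  -EIO - only MPOL_MF_STRICT given and a page is misplaced
 */
int queue(int flags, int misplaced, int unmovable)
{
	if (!misplaced)
		return 0;
	if (flags & MPOL_MF_MOVE) {
		if (unmovable && (flags & MPOL_MF_STRICT))
			return 1;	/* report, per the new semantics */
		return 0;		/* best effort: skip unmovable pages */
	}
	return (flags & MPOL_MF_STRICT) ? -EIO : 0;
}

/* Mirrors the (ret < 0) and (ret > 0) checks added to do_mbind(). */
int do_mbind_status(int flags, int misplaced, int unmovable)
{
	int ret = queue(flags, misplaced, unmovable);

	if (ret < 0)
		return -EIO;	/* misplaced page, STRICT only */
	if (ret > 0)
		return -EIO;	/* unmovable page, MOVE* | STRICT */
	return 0;
}
```

The point of the model: `MPOL_MF_MOVE` alone stays best-effort, while adding `MPOL_MF_STRICT` turns an unmovable page into a hard `-EIO`, which is what the `ret > 0` plumbing through `queue_pages_pte_range()` and `queue_pages_pmd()` exists to report.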
mm/memremap.c
···403403404404 mem_cgroup_uncharge(page);405405406406+ /*407407+ * When a device_private page is freed, the page->mapping field408408+ * may still contain a (stale) mapping value. For example, the409409+ * lower bits of page->mapping may still identify the page as410410+ * an anonymous page. Ultimately, this entire field is just411411+ * stale and wrong, and it will cause errors if not cleared.412412+ * One example is:413413+ *414414+ * migrate_vma_pages()415415+ * migrate_vma_insert_page()416416+ * page_add_new_anon_rmap()417417+ * __page_set_anon_rmap()418418+ * ...checks page->mapping, via PageAnon(page) call,419419+ * and incorrectly concludes that the page is an420420+ * anonymous page. Therefore, it incorrectly,421421+ * silently fails to set up the new anon rmap.422422+ *423423+ * For other types of ZONE_DEVICE pages, migration is either424424+ * handled differently or not done at all, so there is no need425425+ * to clear page->mapping.426426+ */427427+ if (is_device_private_page(page))428428+ page->mapping = NULL;429429+406430 page->pgmap->ops->page_free(page);407431 } else if (!count)408432 __put_page(page);
+8
mm/rmap.c
···14751475 /*14761476 * No need to invalidate here it will synchronize on14771477 * against the special swap migration pte.14781478+ *14791479+ * The assignment to subpage above was computed from a14801480+ * swap PTE which results in an invalid pointer.14811481+ * Since only PAGE_SIZE pages can currently be14821482+ * migrated, just set it to page. This will need to be14831483+ * changed when hugepage migrations to device private14841484+ * memory are supported.14781485 */14861486+ subpage = page;14791487 goto discard;14801488 }14811489
mm/usercopy.c
···147147 bool to_user)148148{149149 /* Reject if object wraps past end of memory. */150150- if (ptr + n < ptr)150150+ if (ptr + (n - 1) < ptr)151151 usercopy_abort("wrapped address", NULL, to_user, 0, ptr + n);152152153153 /* Reject if NULL or ZERO-allocation. */
+11-1
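The usercopy hunk fixes an off-by-one in the wraparound check: an n-byte object occupies [ptr, ptr + n - 1], so `ptr + n` is one past the end, and an object ending exactly at the top of the address space made the old `ptr + n < ptr` test wrap to 0 and fire a false positive. A small userspace sketch (plain C, not kernel code) of the two predicates:

```c
#include <stdint.h>

/* Old check: flags a legal object that ends at the top of memory,
 * because ptr + n (one past the end) wraps to 0. */
int old_check_wraps(uintptr_t ptr, uintptr_t n)
{
	return ptr + n < ptr;
}

/* New check: tests the last byte actually accessed, ptr + (n - 1),
 * so only ranges that genuinely wrap are rejected. */
int new_check_wraps(uintptr_t ptr, uintptr_t n)
{
	return ptr + (n - 1) < ptr;
}
```

With a 16-byte object whose last byte is the last byte of the address space (`ptr = UINTPTR_MAX - 15`), the old predicate rejects it while the new one accepts it; a range that truly wraps past the end is still rejected by both.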
mm/vmalloc.c
···32793279 goto overflow;3280328032813281 /*32823282+ * If required width exceeds current VA block, move32833283+ * base downwards and then recheck.32843284+ */32853285+ if (base + end > va->va_end) {32863286+ base = pvm_determine_end_from_reverse(&va, align) - end;32873287+ term_area = area;32883288+ continue;32893289+ }32903290+32913291+ /*32823292 * If this VA does not fit, move base downwards and recheck.32833293 */32843284- if (base + start < va->va_start || base + end > va->va_end) {32943294+ if (base + start < va->va_start) {32853295 va = node_to_va(rb_prev(&va->rb_node));32863296 base = pvm_determine_end_from_reverse(&va, align) - end;32873297 term_area = area;
+2-11
mm/vmscan.c
···8888 /* Can pages be swapped as part of reclaim? */8989 unsigned int may_swap:1;90909191- /* e.g. boosted watermark reclaim leaves slabs alone */9292- unsigned int may_shrinkslab:1;9393-9491 /*9592 * Cgroups are not reclaimed below their configured memory.low,9693 * unless we threaten to OOM. If any cgroups are skipped due to···27112714 shrink_node_memcg(pgdat, memcg, sc, &lru_pages);27122715 node_lru_pages += lru_pages;2713271627142714- if (sc->may_shrinkslab) {27152715- shrink_slab(sc->gfp_mask, pgdat->node_id,27162716- memcg, sc->priority);27172717- }27172717+ shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,27182718+ sc->priority);2718271927192720 /* Record the group's reclaim efficiency */27202721 vmpressure(sc->gfp_mask, memcg, false,···31893194 .may_writepage = !laptop_mode,31903195 .may_unmap = 1,31913196 .may_swap = 1,31923192- .may_shrinkslab = 1,31933197 };3194319831953199 /*···32323238 .may_unmap = 1,32333239 .reclaim_idx = MAX_NR_ZONES - 1,32343240 .may_swap = !noswap,32353235- .may_shrinkslab = 1,32363241 };32373242 unsigned long lru_pages;32383243···32793286 .may_writepage = !laptop_mode,32803287 .may_unmap = 1,32813288 .may_swap = may_swap,32823282- .may_shrinkslab = 1,32833289 };3284329032853291 set_task_reclaim_state(current, &sc.reclaim_state);···35903598 */35913599 sc.may_writepage = !laptop_mode && !nr_boost_reclaim;35923600 sc.may_swap = !nr_boost_reclaim;35933593- sc.may_shrinkslab = !nr_boost_reclaim;3594360135953602 /*35963603 * Do some background aging of the anon list, to give
mm/z3fold.c
···817817static void z3fold_destroy_pool(struct z3fold_pool *pool)818818{819819 kmem_cache_destroy(pool->c_handle);820820- z3fold_unregister_migration(pool);821821- destroy_workqueue(pool->release_wq);820820+821821+ /*822822+ * We need to destroy pool->compact_wq before pool->release_wq,823823+ * as any pending work on pool->compact_wq will call824824+ * queue_work(pool->release_wq, &pool->work).825825+ *826826+ * There are still outstanding pages until both workqueues are drained,827827+ * so we cannot unregister migration until then.828828+ */829829+822830 destroy_workqueue(pool->compact_wq);831831+ destroy_workqueue(pool->release_wq);832832+ z3fold_unregister_migration(pool);823833 kfree(pool);824834}825835
scripts/coccinelle/api/atomic_as_refcounter.cocci
···11+// SPDX-License-Identifier: GPL-2.0-only12// Check if refcount_t type and API should be used23// instead of atomic_t type when dealing with refcounters34//
-13
security/keys/trusted.c
···1228122812291229static int __init init_digests(void)12301230{12311231- u8 digest[TPM_MAX_DIGEST_SIZE];12321232- int ret;12331233- int i;12341234-12351235- ret = tpm_get_random(chip, digest, TPM_MAX_DIGEST_SIZE);12361236- if (ret < 0)12371237- return ret;12381238- if (ret < TPM_MAX_DIGEST_SIZE)12391239- return -EFAULT;12401240-12411231 digests = kcalloc(chip->nr_allocated_banks, sizeof(*digests),12421232 GFP_KERNEL);12431233 if (!digests)12441234 return -ENOMEM;12451245-12461246- for (i = 0; i < chip->nr_allocated_banks; i++)12471247- memcpy(digests[i].digest, digest, TPM_MAX_DIGEST_SIZE);1248123512491236 return 0;12501237}
+20-1
sound/pci/hda/hda_generic.c
···60516051}60526052EXPORT_SYMBOL_GPL(snd_hda_gen_free);6053605360546054+/**60556055+ * snd_hda_gen_reboot_notify - Make codec enter D3 before rebooting60566056+ * @codec: the HDA codec60576057+ *60586058+ * This can be put as patch_ops reboot_notify function.60596059+ */60606060+void snd_hda_gen_reboot_notify(struct hda_codec *codec)60616061+{60626062+ /* Make the codec enter D3 to avoid spurious noises from the internal60636063+ * speaker during (and after) reboot60646064+ */60656065+ snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);60666066+ snd_hda_codec_write(codec, codec->core.afg, 0,60676067+ AC_VERB_SET_POWER_STATE, AC_PWRST_D3);60686068+ msleep(10);60696069+}60706070+EXPORT_SYMBOL_GPL(snd_hda_gen_reboot_notify);60716071+60546072#ifdef CONFIG_PM60556073/**60566074 * snd_hda_gen_check_power_status - check the loopback power save state···60966078 .init = snd_hda_gen_init,60976079 .free = snd_hda_gen_free,60986080 .unsol_event = snd_hda_jack_unsol_event,60816081+ .reboot_notify = snd_hda_gen_reboot_notify,60996082#ifdef CONFIG_PM61006083 .check_power_status = snd_hda_gen_check_power_status,61016084#endif···6119610061206101 err = snd_hda_parse_pin_defcfg(codec, &spec->autocfg, NULL, 0);61216102 if (err < 0)61226122- return err;61036103+ goto error;6123610461246105 err = snd_hda_gen_parse_auto_config(codec, &spec->autocfg);61256106 if (err < 0)
sound/usb/mixer.c
···6868 unsigned char *buffer;6969 unsigned int buflen;7070 DECLARE_BITMAP(unitbitmap, MAX_ID_ELEMS);7171+ DECLARE_BITMAP(termbitmap, MAX_ID_ELEMS);7172 struct usb_audio_term oterm;7273 const struct usbmix_name_map *map;7374 const struct usbmix_selector_map *selector_map;···745744 return -EINVAL;746745 if (!desc->bNrInPins)747746 return -EINVAL;747747+ if (desc->bLength < sizeof(*desc) + desc->bNrInPins)748748+ return -EINVAL;748749749750 switch (state->mixer->protocol) {750751 case UAC_VERSION_1:···776773 * parse the source unit recursively until it reaches to a terminal777774 * or a branched unit.778775 */779779-static int check_input_term(struct mixer_build *state, int id,776776+static int __check_input_term(struct mixer_build *state, int id,780777 struct usb_audio_term *term)781778{782779 int protocol = state->mixer->protocol;783780 int err;784781 void *p1;782782+ unsigned char *hdr;785783786784 memset(term, 0, sizeof(*term));787787- while ((p1 = find_audio_control_unit(state, id)) != NULL) {788788- unsigned char *hdr = p1;785785+ for (;;) {786786+ /* a loop in the terminal chain? 
*/787787+ if (test_and_set_bit(id, state->termbitmap))788788+ return -EINVAL;789789+790790+ p1 = find_audio_control_unit(state, id);791791+ if (!p1)792792+ break;793793+794794+ hdr = p1;789795 term->id = id;790796791797 if (protocol == UAC_VERSION_1 || protocol == UAC_VERSION_2) {···812800813801 /* call recursively to verify that the814802 * referenced clock entity is valid */815815- err = check_input_term(state, d->bCSourceID, term);803803+ err = __check_input_term(state, d->bCSourceID, term);816804 if (err < 0)817805 return err;818806···846834 case UAC2_CLOCK_SELECTOR: {847835 struct uac_selector_unit_descriptor *d = p1;848836 /* call recursively to retrieve the channel info */849849- err = check_input_term(state, d->baSourceID[0], term);837837+ err = __check_input_term(state, d->baSourceID[0], term);850838 if (err < 0)851839 return err;852840 term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */···909897910898 /* call recursively to verify that the911899 * referenced clock entity is valid */912912- err = check_input_term(state, d->bCSourceID, term);900900+ err = __check_input_term(state, d->bCSourceID, term);913901 if (err < 0)914902 return err;915903···960948 case UAC3_CLOCK_SELECTOR: {961949 struct uac_selector_unit_descriptor *d = p1;962950 /* call recursively to retrieve the channel info */963963- err = check_input_term(state, d->baSourceID[0], term);951951+ err = __check_input_term(state, d->baSourceID[0], term);964952 if (err < 0)965953 return err;966954 term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */···976964 return -EINVAL;977965978966 /* call recursively to retrieve the channel info */979979- err = check_input_term(state, d->baSourceID[0], term);967967+ err = __check_input_term(state, d->baSourceID[0], term);980968 if (err < 0)981969 return err;982970···992980 }993981 }994982 return -ENODEV;983983+}984984+985985+986986+static int check_input_term(struct mixer_build *state, int id,987987+ struct usb_audio_term *term)988988+{989989+ 
memset(term, 0, sizeof(*term));990990+ memset(state->termbitmap, 0, sizeof(state->termbitmap));991991+ return __check_input_term(state, id, term);995992}996993997994/*
+1-1
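The mixer change guards `__check_input_term()` against descriptor chains that loop back on themselves: each unit id is marked in `termbitmap` before it is followed, and revisiting a marked id aborts with `-EINVAL` instead of recursing forever on a malicious or corrupt device. A minimal userspace sketch of the same guard (made-up names; `next[]` stands in for the `baSourceID`/`bCSourceID` links, 0 terminates the chain):

```c
#define MAX_ID 8

/* Example chains: index = unit id, value = source id, 0 = terminal. */
const int chain_linear[MAX_ID] = { 0, 2, 3, 0 };	/* 1 -> 2 -> 3 */
const int chain_looped[MAX_ID] = { 0, 2, 1, 0 };	/* 1 -> 2 -> 1 */

/* Follow the chain from id, marking visited ids in a bitmap.
 * Returns the terminal id, or -1 if the chain contains a cycle. */
int follow_chain(const int *next, int id)
{
	unsigned char seen[MAX_ID] = { 0 };

	for (;;) {
		if (seen[id])
			return -1;	/* loop in the terminal chain */
		seen[id] = 1;
		if (next[id] == 0)
			return id;	/* reached a terminal */
		id = next[id];
	}
}
```

The kernel version does the same thing with `test_and_set_bit()` on a per-walk bitmap that the `check_input_term()` wrapper clears before each top-level walk, which is why the recursive body was renamed to `__check_input_term()`.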
tools/hv/hv_get_dhcp_info.sh
···1313# the script prints the string "Disabled" to stdout.1414#1515# Each Distro is expected to implement this script in a distro specific1616-# fashion. For instance on Distros that ship with Network Manager enabled,1616+# fashion. For instance, on Distros that ship with Network Manager enabled,1717# this script can be based on the Network Manager APIs for retrieving DHCP1818# information.1919
+5-3
tools/hv/hv_kvp_daemon.c
···700700701701702702 /*703703- * Gather the DNS state.703703+ * Gather the DNS state.704704 * Since there is no standard way to get this information705705 * across various distributions of interest; we just invoke706706 * an external script that needs to be ported across distros···10511051 char *start;1052105210531053 /*10541054- * in_buf has sequence of characters that are seperated by10541054+ * in_buf has sequence of characters that are separated by10551055 * the character ';'. The last sequence does not have the10561056 * terminating ";" character.10571057 */···13861386 daemonize = 0;13871387 break;13881388 case 'h':13891389+ print_usage(argv);13901390+ exit(0);13891391 default:13901392 print_usage(argv);13911393 exit(EXIT_FAILURE);···14921490 case KVP_OP_GET_IP_INFO:14931491 kvp_ip_val = &hv_msg->body.kvp_ip_val;1494149214951495- error = kvp_mac_to_ip(kvp_ip_val);14931493+ error = kvp_mac_to_ip(kvp_ip_val);1496149414971495 if (error)14981496 hv_msg->error = error;
+1-1
tools/hv/hv_set_ifconfig.sh
···1212# be used to configure the interface.1313#1414# Each Distro is expected to implement this script in a distro specific1515-# fashion. For instance on Distros that ship with Network Manager enabled,1515+# fashion. For instance, on Distros that ship with Network Manager enabled,1616# this script can be based on the Network Manager APIs for configuring the1717# interface.1818#
+3-1
tools/hv/hv_vss_daemon.c
···4242 * If a partition is mounted more than once, only the first4343 * FREEZE/THAW can succeed and the later ones will get4444 * EBUSY/EINVAL respectively: there could be 2 cases:4545- * 1) a user may mount the same partition to differnt directories4545+ * 1) a user may mount the same partition to different directories4646 * by mistake or on purpose;4747 * 2) The subvolume of btrfs appears to have the same partition4848 * mounted more than once.···218218 daemonize = 0;219219 break;220220 case 'h':221221+ print_usage(argv);222222+ exit(0);221223 default:222224 print_usage(argv);223225 exit(EXIT_FAILURE);