···
     compensate for the board being designed with the lanes
     swapped.

+  enet-phy-lane-no-swap:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      If set, indicates that PHY will disable swap of the
+      TX/RX lanes. This property allows the PHY to work correctly after
+      e.g. wrong bootstrap configuration caused by issues in PCB
+      layout design.
+
   eee-broken-100tx:
     $ref: /schemas/types.yaml#/definitions/flag
     description:
···
 a virtual function (VF), jumbo frames must first be enabled in the physical
 function (PF). The VF MTU setting cannot be larger than the PF MTU.

+NBASE-T Support
+---------------
+The ixgbe driver supports NBASE-T on some devices. However, the advertisement
+of NBASE-T speeds is suppressed by default, to accommodate broken network
+switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
+command to enable advertising NBASE-T speeds on devices which support it::
+
+  ethtool -s eth? advertise 0x1800000001028
+
+On Linux systems with INTERFACES(5), this can be specified as a pre-up command
+in /etc/network/interfaces so that the interface is always brought up with
+NBASE-T support, e.g.::
+
+  iface eth? inet dhcp
+       pre-up ethtool -s eth? advertise 0x1800000001028 || true
+
 Generic Receive Offload, aka GRO
 --------------------------------
 The driver supports the in-kernel software implementation of GRO. GRO has
+16 -8
MAINTAINERS
···
 F:	drivers/phy/qualcomm/phy-ath79-usb.c

 ATHEROS ATH GENERIC UTILITIES
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 F:	drivers/net/wireless/ath/*
···
 F:	drivers/net/wireless/ath/ath5k/

 ATHEROS ATH6KL WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
···
 F:	drivers/iio/pressure/dps310.c

 INFINIBAND SUBSYSTEM
-M:	Doug Ledford <dledford@redhat.com>
 M:	Jason Gunthorpe <jgg@nvidia.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
···
 F:	scripts/Makefile.kcsan

 KDUMP
-M:	Dave Young <dyoung@redhat.com>
 M:	Baoquan He <bhe@redhat.com>
 R:	Vivek Goyal <vgoyal@redhat.com>
+R:	Dave Young <dyoung@redhat.com>
 L:	kexec@lists.infradead.org
 S:	Maintained
 W:	http://lse.sourceforge.net/kdump/
···
 F:	include/uapi/linux/netdevice.h

 NETWORKING DRIVERS (WIRELESS)
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 Q:	http://patchwork.kernel.org/project/linux-wireless/list/
···
 F:	drivers/media/tuners/qt1010*

 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath10k@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
···
 F:	drivers/net/wireless/ath/ath10k/

 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath11k@lists.infradead.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
···
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/qcom,ethqos.txt
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+
+QUALCOMM FASTRPC DRIVER
+M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
+M:	Amol Maheshwari <amahesh@qti.qualcomm.com>
+L:	linux-arm-msm@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/misc/qcom,fastrpc.txt
+F:	drivers/misc/fastrpc.c
+F:	include/uapi/misc/fastrpc.h

 QUALCOMM GENERIC INTERFACE I2C DRIVER
 M:	Akash Asthana <akashast@codeaurora.org>
···
 F:	drivers/media/platform/qcom/venus/

 QUALCOMM WCN36XX WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	wcn36xx@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
···
 	rockchip_boot_fn = __pa_symbol(secondary_startup);

 	/* copy the trampoline to sram, that runs during startup of the core */
-	memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
+	memcpy_toio(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
 	flush_cache_all();
 	outer_clean_range(0, trampoline_sz);
-1
arch/arm64/Kconfig.platforms
···

 config ARCH_MESON
 	bool "Amlogic Platforms"
-	select COMMON_CLK
 	help
 	  This enables support for the arm64 based Amlogic SoCs
 	  such as the s905, S905X/D, S912, A113X/D or S905X/D2
···
 				initrd_len, cmdline, 0);
 	if (!dtb) {
 		pr_err("Preparing for new dtb failed\n");
+		ret = -EINVAL;
 		goto out_err;
 	}
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_NET_SWITCHDEV=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
 CONFIG_INET=y
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+CONFIG_MLX5_ESWITCH=y
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
+2
arch/s390/configs/defconfig
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_NET_SWITCHDEV=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
 CONFIG_INET=y
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+CONFIG_MLX5_ESWITCH=y
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
···
 	__release(&t->lock);

 	/*
-	 * If this thread used poll, make sure we remove the waitqueue
-	 * from any epoll data structures holding it with POLLFREE.
-	 * waitqueue_active() is safe to use here because we're holding
-	 * the inner lock.
+	 * If this thread used poll, make sure we remove the waitqueue from any
+	 * poll data structures holding it.
 	 */
-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
-	    waitqueue_active(&thread->wait)) {
-		wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
-	}
+	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+		wake_up_pollfree(&thread->wait);

 	binder_inner_proc_unlock(thread->proc);

 	/*
-	 * This is needed to avoid races between wake_up_poll() above and
-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
-	 * lock, so we can be sure it's done after calling synchronize_rcu().
+	 * This is needed to avoid races between wake_up_pollfree() above and
+	 * someone else removing the last entry from the queue for other reasons
+	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
+	 * descriptor being closed). Such other users hold an RCU read lock, so
+	 * we can be sure they're done after we call synchronize_rcu().
 	 */
 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
 		synchronize_rcu();
+2 -1
drivers/ata/ahci_ceva.c
···
 static unsigned int ceva_ahci_read_id(struct ata_device *dev,
 				      struct ata_taskfile *tf, u16 *id)
 {
+	__le16 *__id = (__le16 *)id;
 	u32 err_mask;

 	err_mask = ata_do_dev_read_id(dev, tf, id);
···
 	 * Since CEVA controller does not support device sleep feature, we
 	 * need to clear DEVSLP (bit 8) in word78 of the IDENTIFY DEVICE data.
 	 */
-	id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8));
+	__id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8));

 	return 0;
 }
···

 	clk_prepare_lock();

+	/*
+	 * Set hw->core after grabbing the prepare_lock to synchronize with
+	 * callers of clk_core_fill_parent_index() where we treat hw->core
+	 * being NULL as the clk not being registered yet. This is crucial so
+	 * that clks aren't parented until their parent is fully registered.
+	 */
+	core->hw->core = core;
+
 	ret = clk_pm_runtime_get(core);
 	if (ret)
 		goto unlock;
···
 out:
 	clk_pm_runtime_put(core);
 unlock:
-	if (ret)
+	if (ret) {
 		hlist_del_init(&core->child_node);
+		core->hw->core = NULL;
+	}

 	clk_prepare_unlock();
···
 	core->num_parents = init->num_parents;
 	core->min_rate = 0;
 	core->max_rate = ULONG_MAX;
-	hw->core = core;

 	ret = clk_core_populate_parent_map(core, init);
 	if (ret)
···
 		goto fail_create_clk;
 	}

-	clk_core_link_consumer(hw->core, hw->clk);
+	clk_core_link_consumer(core, hw->clk);

 	ret = __clk_core_init(core);
 	if (!ret)
···
 void clk_trion_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
 			     const struct alpha_pll_config *config)
 {
+	/*
+	 * If the bootloader left the PLL enabled it's likely that there are
+	 * RCGs that will lock up if we disable the PLL below.
+	 */
+	if (trion_pll_is_enabled(pll, regmap)) {
+		pr_debug("Trion PLL is already enabled, skipping configuration\n");
+		return;
+	}
+
 	clk_alpha_pll_write_config(regmap, PLL_L_VAL(pll), config->l);
 	regmap_write(regmap, PLL_CAL_L_VAL(pll), TRION_PLL_CAL_VAL);
 	clk_alpha_pll_write_config(regmap, PLL_ALPHA_VAL(pll), config->alpha);
+1 -1
drivers/clk/qcom/clk-regmap-mux.c
···
 	val &= mask;

 	if (mux->parent_map)
-		return qcom_find_src_index(hw, mux->parent_map, val);
+		return qcom_find_cfg_index(hw, mux->parent_map, val);

 	return val;
 }
+12
drivers/clk/qcom/common.c
···
 }
 EXPORT_SYMBOL_GPL(qcom_find_src_index);

+int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, u8 cfg)
+{
+	int i, num_parents = clk_hw_get_num_parents(hw);
+
+	for (i = 0; i < num_parents; i++)
+		if (cfg == map[i].cfg)
+			return i;
+
+	return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(qcom_find_cfg_index);
+
 struct regmap *
 qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc)
 {
···

 static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0);

-static void erratum_set_next_event_generic(const int access, unsigned long evt,
-					   struct clock_event_device *clk)
+/*
+ * Force the inlining of this function so that the register accesses
+ * can be themselves correctly inlined.
+ */
+static __always_inline
+void erratum_set_next_event_generic(const int access, unsigned long evt,
+				    struct clock_event_device *clk)
 {
 	unsigned long ctrl;
 	u64 cval;
+1 -1
drivers/clocksource/dw_apb_timer_of.c
···
 		pr_warn("pclk for %pOFn is present, but could not be activated\n",
 			np);

-	if (!of_property_read_u32(np, "clock-freq", rate) &&
+	if (!of_property_read_u32(np, "clock-freq", rate) ||
 	    !of_property_read_u32(np, "clock-frequency", rate))
 		return 0;
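The hunk above fixes a short-circuit bug: of_property_read_u32() returns 0 on success, so chaining two candidate property names with `&&` only succeeds when *both* are present, while `||` accepts whichever is found first. A minimal userspace sketch of that fallback pattern (the property table and helper names here are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for of_property_read_u32(): returns 0 on
 * success and fills *val, non-zero when the property is absent. */
struct prop { const char *name; uint32_t val; };

static int read_u32(const struct prop *tbl, int n, const char *name,
		    uint32_t *val)
{
	for (int i = 0; i < n; i++) {
		if (strcmp(tbl[i].name, name) == 0) {
			*val = tbl[i].val;
			return 0;
		}
	}
	return -1;
}

/* Correct fallback: "||" accepts the first property that exists.
 * With "&&" this would fail unless both spellings were present. */
static int get_rate(const struct prop *tbl, int n, uint32_t *rate)
{
	if (!read_u32(tbl, n, "clock-freq", rate) ||
	    !read_u32(tbl, n, "clock-frequency", rate))
		return 0;
	return -1;
}
```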
···
 	 */
 	link_enc_cfg_init(dm->dc, dc_state);

-	amdgpu_dm_outbox_init(adev);
+	if (dc_enable_dmub_notifications(adev->dm.dc))
+		amdgpu_dm_outbox_init(adev);

 	r = dm_dmub_hw_init(adev);
 	if (r)
···
 	dm_state->context = dc_create_state(dm->dc);
 	/* TODO: Remove dc_state->dccg, use dc->dccg directly. */
 	dc_resource_state_construct(dm->dc, dm_state->context);
+
+	/* Re-enable outbox interrupts for DPIA. */
+	if (dc_enable_dmub_notifications(adev->dm.dc))
+		amdgpu_dm_outbox_init(adev);

 	/* Before powering on DC we need to re-initialize DMUB. */
 	r = dm_dmub_hw_init(adev);
+2
drivers/gpu/drm/amd/display/dc/dc_link.h
···
 	*edp_num = 0;
 	for (i = 0; i < dc->link_count; i++) {
 		// report any eDP links, even unconnected DDI's
+		if (!dc->links[i])
+			continue;
 		if (dc->links[i]->connector_signal == SIGNAL_TYPE_EDP) {
 			edp_links[*edp_num] = dc->links[i];
 			if (++(*edp_num) == MAX_NUM_EDP)
+10 -1
drivers/gpu/drm/drm_syncobj.c
···

 	if (*fence) {
 		ret = dma_fence_chain_find_seqno(fence, point);
-		if (!ret)
+		if (!ret) {
+			/* If the requested seqno is already signaled
+			 * drm_syncobj_find_fence may return a NULL
+			 * fence. To make sure the recipient gets
+			 * signalled, use a new fence instead.
+			 */
+			if (!*fence)
+				*fence = dma_fence_get_stub();
+
 			goto out;
+		}
 		dma_fence_put(*fence);
 	} else {
 		ret = -EINVAL;
+1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···
 	out_fence = eb_requests_create(&eb, in_fence, out_fence_fd);
 	if (IS_ERR(out_fence)) {
 		err = PTR_ERR(out_fence);
+		out_fence = NULL;
 		if (eb.requests[0])
 			goto err_request;
 		else
···
 /*
  * I2C command delays (in microseconds)
  */
-#define SHT4X_MEAS_DELAY	1000
+#define SHT4X_MEAS_DELAY_HPM	8200	/* see t_MEAS,h in datasheet */
 #define SHT4X_DELAY_EXTRA	10000

 /*
···
 	if (ret < 0)
 		goto unlock;

-	usleep_range(SHT4X_MEAS_DELAY, SHT4X_MEAS_DELAY + SHT4X_DELAY_EXTRA);
+	usleep_range(SHT4X_MEAS_DELAY_HPM, SHT4X_MEAS_DELAY_HPM + SHT4X_DELAY_EXTRA);

 	ret = i2c_master_recv(client, raw_data, SHT4X_RESPONSE_LENGTH);
 	if (ret != SHT4X_RESPONSE_LENGTH) {
+1 -1
drivers/i2c/busses/i2c-mpc.c
···
 	status = readb(i2c->base + MPC_I2C_SR);
 	if (status & CSR_MIF) {
 		/* Wait up to 100us for transfer to properly complete */
-		readb_poll_timeout(i2c->base + MPC_I2C_SR, status, !(status & CSR_MCF), 0, 100);
+		readb_poll_timeout_atomic(i2c->base + MPC_I2C_SR, status, status & CSR_MCF, 0, 100);
		writeb(0, i2c->base + MPC_I2C_SR);
 		mpc_i2c_do_intr(i2c, status);
 		return IRQ_HANDLED;
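Two things change in the hunk above: the driver switches to the `_atomic` variant (this runs in IRQ context), and the polarity of the condition flips, because in the readX_poll_timeout() family the cond expression is the *exit* condition, i.e. polling stops once it becomes true. A userspace sketch of that contract (register behavior and names are made up for illustration):

```c
#include <assert.h>

/* Illustrative completion flag, standing in for the MPC I2C CSR_MCF bit. */
#define CSR_MCF 0x01

static int fake_reg_reads;

/* Fake status register: "transfer" completes after three reads. */
static int read_status(void)
{
	return (++fake_reg_reads >= 3) ? CSR_MCF : 0;
}

/* Model of the poll-timeout contract: keep reading until the exit
 * condition (status & CSR_MCF) is true, or the budget runs out. */
static int poll_timeout(int max_reads, int *out_status)
{
	int status = 0;

	for (int i = 0; i < max_reads; i++) {
		status = read_status();
		if (status & CSR_MCF) {	/* exit condition true: done */
			*out_status = status;
			return 0;
		}
	}
	*out_status = status;
	return -1;	/* timed out */
}
```

Writing the condition as `!(status & CSR_MCF)`, as the removed line did, makes the helper return on the very first read with the transfer still in flight.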
+12 -20
drivers/i2c/busses/i2c-virtio.c
···
 /**
  * struct virtio_i2c - virtio I2C data
  * @vdev: virtio device for this controller
- * @completion: completion of virtio I2C message
  * @adap: I2C adapter for this controller
  * @vq: the virtio virtqueue for communication
  */
 struct virtio_i2c {
 	struct virtio_device *vdev;
-	struct completion completion;
 	struct i2c_adapter adap;
 	struct virtqueue *vq;
 };

 /**
  * struct virtio_i2c_req - the virtio I2C request structure
+ * @completion: completion of virtio I2C message
  * @out_hdr: the OUT header of the virtio I2C message
  * @buf: the buffer into which data is read, or from which it's written
  * @in_hdr: the IN header of the virtio I2C message
  */
 struct virtio_i2c_req {
+	struct completion completion;
 	struct virtio_i2c_out_hdr out_hdr	____cacheline_aligned;
 	uint8_t *buf				____cacheline_aligned;
 	struct virtio_i2c_in_hdr in_hdr		____cacheline_aligned;
···

 static void virtio_i2c_msg_done(struct virtqueue *vq)
 {
-	struct virtio_i2c *vi = vq->vdev->priv;
+	struct virtio_i2c_req *req;
+	unsigned int len;

-	complete(&vi->completion);
+	while ((req = virtqueue_get_buf(vq, &len)))
+		complete(&req->completion);
 }

 static int virtio_i2c_prepare_reqs(struct virtqueue *vq,
···

 	for (i = 0; i < num; i++) {
 		int outcnt = 0, incnt = 0;
+
+		init_completion(&reqs[i].completion);

 		/*
 		 * Only 7-bit mode supported for this moment. For the address
···
 				   struct virtio_i2c_req *reqs,
 				   struct i2c_msg *msgs, int num)
 {
-	struct virtio_i2c_req *req;
 	bool failed = false;
-	unsigned int len;
 	int i, j = 0;

 	for (i = 0; i < num; i++) {
-		/* Detach the ith request from the vq */
-		req = virtqueue_get_buf(vq, &len);
+		struct virtio_i2c_req *req = &reqs[i];

-		/*
-		 * Condition req == &reqs[i] should always meet since we have
-		 * total num requests in the vq. reqs[i] can never be NULL here.
-		 */
-		if (!failed && (WARN_ON(req != &reqs[i]) ||
-				req->in_hdr.status != VIRTIO_I2C_MSG_OK))
+		wait_for_completion(&req->completion);
+
+		if (!failed && req->in_hdr.status != VIRTIO_I2C_MSG_OK)
 			failed = true;

 		i2c_put_dma_safe_msg_buf(reqs[i].buf, &msgs[i], !failed);
···
 	 * remote here to clear the virtqueue, so we can try another set of
 	 * messages later on.
 	 */
-
-	reinit_completion(&vi->completion);
 	virtqueue_kick(vq);
-
-	wait_for_completion(&vi->completion);

 	count = virtio_i2c_complete_reqs(vq, reqs, msgs, count);
···

 	vdev->priv = vi;
 	vi->vdev = vdev;
-
-	init_completion(&vi->completion);

 	ret = virtio_i2c_setup_vqs(vi);
 	if (ret)
+2 -3
drivers/iio/accel/kxcjk-1013.c
···
 	return 0;

 err_buffer_cleanup:
-	if (data->dready_trig)
-		iio_triggered_buffer_cleanup(indio_dev);
+	iio_triggered_buffer_cleanup(indio_dev);
 err_trigger_unregister:
 	if (data->dready_trig)
 		iio_trigger_unregister(data->dready_trig);
···
 	pm_runtime_disable(&client->dev);
 	pm_runtime_set_suspended(&client->dev);

+	iio_triggered_buffer_cleanup(indio_dev);
 	if (data->dready_trig) {
-		iio_triggered_buffer_cleanup(indio_dev);
 		iio_trigger_unregister(data->dready_trig);
 		iio_trigger_unregister(data->motion_trig);
 	}
···
 config IMX8QXP_ADC
 	tristate "NXP IMX8QXP ADC driver"
-	depends on ARCH_MXC_ARM64 || COMPILE_TEST
+	depends on ARCH_MXC || COMPILE_TEST
 	depends on HAS_IOMEM
 	help
 	  Say yes here to build support for IMX8QXP ADC.
···

 	/* User MR data */
 	struct mlx5_cache_ent *cache_ent;
-	struct ib_umem *umem;

 	/* This is zero'd when the MR is allocated */
 	union {
···
 		struct list_head list;
 	};

-	/* Used only by kernel MRs (umem == NULL) */
+	/* Used only by kernel MRs */
 	struct {
 		void *descs;
 		void *descs_alloc;
···
 		int data_length;
 	};

-	/* Used only by User MRs (umem != NULL) */
+	/* Used only by User MRs */
 	struct {
+		struct ib_umem *umem;
 		unsigned int page_shift;
 		/* Current access_flags */
 		int access_flags;
···
 	}

 	data->num_parent_irqs = platform_irq_count(pdev);
+	put_device(&pdev->dev);
 	if (data->num_parent_irqs <= 0) {
 		pr_err("invalid number of parent interrupts\n");
 		ret = -ENOMEM;
···
 static u64 fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx, int metalen)
 {
 	u64 size = 0;
-	int i;
+	int oix;

 	size = ALIGN(metalen, FASTRPC_ALIGN);
-	for (i = 0; i < ctx->nscalars; i++) {
+	for (oix = 0; oix < ctx->nbufs; oix++) {
+		int i = ctx->olaps[oix].raix;
+
 		if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) {

-			if (ctx->olaps[i].offset == 0)
+			if (ctx->olaps[oix].offset == 0)
 				size = ALIGN(size, FASTRPC_ALIGN);

-			size += (ctx->olaps[i].mend - ctx->olaps[i].mstart);
+			size += (ctx->olaps[oix].mend - ctx->olaps[oix].mstart);
 		}
 	}
+3 -1
drivers/mmc/host/mtk-sd.c
···
 		sdr_set_field(host->base + PAD_DS_TUNE,
 			      PAD_DS_TUNE_DLY1, i);
 		ret = mmc_get_ext_csd(card, &ext_csd);
-		if (!ret)
+		if (!ret) {
 			result_dly1 |= (1 << i);
+			kfree(ext_csd);
+		}
 	}
 	host->hs400_tuning = false;
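The hunk above plugs a per-iteration leak: mmc_get_ext_csd() allocates the EXT_CSD buffer for the caller on success, so a tuning loop that calls it once per tap must free the buffer every time. A userspace sketch of the pattern under assumed ownership rules (the helper names and the allocation counter are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical model of a helper that, like mmc_get_ext_csd(),
 * allocates a buffer on success and hands ownership to the caller. */
static int live_allocs;

static int get_ext_csd(uint8_t **ext_csd)
{
	*ext_csd = malloc(512);
	if (!*ext_csd)
		return -1;
	live_allocs++;
	return 0;
}

static void put_ext_csd(uint8_t *ext_csd)
{
	free(ext_csd);
	live_allocs--;
}

/* Tuning-style loop: record the pass/fail bitmap, and free the
 * buffer inside every successful iteration, mirroring the added
 * kfree(ext_csd) in the hunk above. */
static uint32_t scan_delays(int taps)
{
	uint32_t result = 0;

	for (int i = 0; i < taps; i++) {
		uint8_t *ext_csd;

		if (!get_ext_csd(&ext_csd)) {
			result |= (1u << i);
			put_ext_csd(ext_csd);
		}
	}
	return result;
}
```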
+1 -1
drivers/mmc/host/renesas_sdhi_core.c
···

 	/* Issue CMD19 twice for each tap */
 	for (i = 0; i < 2 * priv->tap_num; i++) {
-		int cmd_error;
+		int cmd_error = 0;

 		/* Set sampling clock position */
 		sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num);
+4
drivers/net/dsa/mv88e6xxx/chip.c
···
 	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
+
+	if (!err && ops->port_set_speed_duplex)
+		err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED,
+						 DUPLEX_UNFORCED);
 	mv88e6xxx_reg_unlock(chip);

 	if (err)
+2 -2
drivers/net/dsa/mv88e6xxx/port.c
···
 	if (err)
 		return err;

-	if (speed)
+	if (speed != SPEED_UNFORCED)
 		dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
 	else
 		dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
···
 	if (err)
 		return err;

-	if (speed)
+	if (speed != SPEED_UNFORCED)
 		dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
 	else
 		dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
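The fix above replaces a truthiness test with an explicit sentinel comparison: `if (speed)` only distinguishes "unforced" when the sentinel happens to be 0, but the chip.c hunk now passes a dedicated SPEED_UNFORCED constant, so the debug path must compare against it. A minimal sketch of the bug class (the sentinel value below is hypothetical; the real SPEED_UNFORCED lives in the mv88e6xxx headers):

```c
#include <assert.h>

/* Hypothetical sentinel: any non-zero value exhibits the bug. */
#define SPEED_UNFORCED (-1)

/* Buggy truthiness test: a non-zero sentinel looks like a real speed. */
static int is_forced_buggy(int speed)
{
	return speed != 0;
}

/* Fixed: compare against the sentinel itself. */
static int is_forced(int speed)
{
	return speed != SPEED_UNFORCED;
}
```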
+4 -1
drivers/net/ethernet/broadcom/bcmsysport.c
···
 	struct bcm_sysport_priv *priv = netdev_priv(dev);
 	struct device *kdev = &priv->pdev->dev;
 	struct bcm_sysport_tx_ring *ring;
+	unsigned long flags, desc_flags;
 	struct bcm_sysport_cb *cb;
 	struct netdev_queue *txq;
 	u32 len_status, addr_lo;
 	unsigned int skb_len;
-	unsigned long flags;
 	dma_addr_t mapping;
 	u16 queue;
 	int ret;
···
 	ring->desc_count--;

 	/* Ports are latched, so write upper address first */
+	spin_lock_irqsave(&priv->desc_lock, desc_flags);
 	tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
 	tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
+	spin_unlock_irqrestore(&priv->desc_lock, desc_flags);

 	/* Check ring space and update SW control flow */
 	if (ring->desc_count == 0)
···
 	}

 	/* Initialize both hardware and software ring */
+	spin_lock_init(&priv->desc_lock);
 	for (i = 0; i < dev->num_tx_queues; i++) {
 		ret = bcm_sysport_init_tx_ring(priv, i);
 		if (ret) {
···
 	 * Internal or external PHY with MDIO access
 	 */
 	phydev = phy_attach(priv->dev, phy_name, pd->phy_interface);
-	if (!phydev) {
+	if (IS_ERR(phydev)) {
 		dev_err(kdev, "failed to register PHY device\n");
-		return -ENODEV;
+		return PTR_ERR(phydev);
 	}
 } else {
 	/*
+2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
···
 	__u64 bytes_per_cdan;
 };

+#define DPAA2_ETH_CH_STATS	7
+
 /* Maximum number of queues associated with a DPNI */
 #define DPAA2_ETH_MAX_TCS		8
 #define DPAA2_ETH_MAX_RX_QUEUES_PER_TC	16
···
 	if (ret)
 		return ret;

+	mutex_lock(&handle->dbgfs_lock);
 	save_buf = &hns3_dbg_cmd[index].buf;

 	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
···
 		read_buf = *save_buf;
 	} else {
 		read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
-		if (!read_buf)
-			return -ENOMEM;
+		if (!read_buf) {
+			ret = -ENOMEM;
+			goto out;
+		}

 		/* save the buffer addr until the last read operation */
 		*save_buf = read_buf;
-	}

-	/* get data ready for the first time to read */
-	if (!*ppos) {
+		/* get data ready for the first time to read */
 		ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
 					read_buf, hns3_dbg_cmd[index].buf_len);
 		if (ret)
···

 	size = simple_read_from_buffer(buffer, count, ppos, read_buf,
 				       strlen(read_buf));
-	if (size > 0)
+	if (size > 0) {
+		mutex_unlock(&handle->dbgfs_lock);
 		return size;
+	}

 out:
 	/* free the buffer for the last read operation */
···
 		*save_buf = NULL;
 	}

+	mutex_unlock(&handle->dbgfs_lock);
 	return ret;
 }
···
 		debugfs_create_dir(hns3_dbg_dentry[i].name,
 				   handle->hnae3_dbgfs);

+	mutex_init(&handle->dbgfs_lock);
+
 	for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
 		if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
 		     ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
···
 	return 0;

 out:
+	mutex_destroy(&handle->dbgfs_lock);
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
 	return ret;
···
 		hns3_dbg_cmd[i].buf = NULL;
 	}

+	mutex_destroy(&handle->dbgfs_lock);
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
 }
···
 		scaled_ppm = -scaled_ppm;
 	}

-	while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) {
+	while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) {
 		/* handle overflow by scaling down the scaled_ppm and
 		 * the divisor, losing some precision
 		 */
···
 		if (err)
 			continue;

-		/* Check if the timestamp is valid */
-		if (!(raw_tstamp & ICE_PTP_TS_VALID))
+		/* Check if the timestamp is invalid or stale */
+		if (!(raw_tstamp & ICE_PTP_TS_VALID) ||
+		    raw_tstamp == tx->tstamps[idx].cached_tstamp)
 			continue;
-
-		/* clear the timestamp register, so that it won't show valid
-		 * again when re-used.
-		 */
-		ice_clear_phy_tstamp(hw, tx->quad, phy_idx);

 		/* The timestamp is valid, so we'll go ahead and clear this
 		 * index and then send the timestamp up to the stack.
 		 */
 		spin_lock(&tx->lock);
+		tx->tstamps[idx].cached_tstamp = raw_tstamp;
 		clear_bit(idx, tx->in_use);
 		skb = tx->tstamps[idx].skb;
 		tx->tstamps[idx].skb = NULL;
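The div_u64() → div64_u64() change above matters because the kernel's div_u64() takes a 32-bit divisor: a 64-bit incval gets silently truncated, so the overflow guard computes the wrong bound. A userspace sketch of why (these `model_*` helpers only mimic the signatures; they are not the kernel implementations):

```c
#include <assert.h>
#include <stdint.h>

/* div_u64() accepts only a 32-bit divisor, so a 64-bit value passed
 * to it is truncated at the call site. div64_u64() divides by the
 * full 64-bit value. */
static uint64_t model_div_u64(uint64_t dividend, uint32_t divisor)
{
	return dividend / divisor;
}

static uint64_t model_div64_u64(uint64_t dividend, uint64_t divisor)
{
	return dividend / divisor;
}

/* Overflow guard in the style of the hunk above: would
 * scaled_ppm * incval overflow 64 bits? Only the full-width
 * division gives the correct bound when incval exceeds 32 bits. */
static int mul_would_overflow(uint64_t scaled_ppm, uint64_t incval)
{
	return scaled_ppm > model_div64_u64(UINT64_MAX, incval);
}
```

With incval = 0x100000002 the truncated divisor is 2, so a div_u64()-based guard would allow multiplications that overflow by a factor of roughly 2^32.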
+6
drivers/net/ethernet/intel/ice/ice_ptp.h
···
  * struct ice_tx_tstamp - Tracking for a single Tx timestamp
  * @skb: pointer to the SKB for this timestamp request
  * @start: jiffies when the timestamp was first requested
+ * @cached_tstamp: last read timestamp
  *
  * This structure tracks a single timestamp request. The SKB pointer is
  * provided when initiating a request. The start time is used to ensure that
  * we discard old requests that were not fulfilled within a 2 second time
  * window.
+ * Timestamp values in the PHY are read only and do not get cleared except at
+ * hardware reset or when a new timestamp value is captured. The cached_tstamp
+ * field is used to detect the case where a new timestamp has not yet been
+ * captured, ensuring that we avoid sending stale timestamp data to the stack.
  */
 struct ice_tx_tstamp {
 	struct sk_buff *skb;
 	unsigned long start;
+	u64 cached_tstamp;
 };

 /**
+14 -14
drivers/net/ethernet/intel/igb/igb_main.c
···
 	struct vf_mac_filter *entry = NULL;
 	int ret = 0;

+	if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
+	    !vf_data->trusted) {
+		dev_warn(&pdev->dev,
+			 "VF %d requested MAC filter but is administratively denied\n",
+			 vf);
+		return -EINVAL;
+	}
+	if (!is_valid_ether_addr(addr)) {
+		dev_warn(&pdev->dev,
+			 "VF %d attempted to set invalid MAC filter\n",
+			 vf);
+		return -EINVAL;
+	}
+
 	switch (info) {
 	case E1000_VF_MAC_FILTER_CLR:
 		/* remove all unicast MAC filters related to the current VF */
···
 		}
 		break;
 	case E1000_VF_MAC_FILTER_ADD:
-		if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
-		    !vf_data->trusted) {
-			dev_warn(&pdev->dev,
-				 "VF %d requested MAC filter but is administratively denied\n",
-				 vf);
-			return -EINVAL;
-		}
-		if (!is_valid_ether_addr(addr)) {
-			dev_warn(&pdev->dev,
-				 "VF %d attempted to set invalid MAC filter\n",
-				 vf);
-			return -EINVAL;
-		}
-
 		/* try to find empty slot in the list */
 		list_for_each(pos, &adapter->vf_macs.l) {
 			entry = list_entry(pos, struct vf_mac_filter, l);
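The igb hunk hoists checks that were only applied on the ADD path so that every opcode, including CLR, is validated before dispatch. A minimal sketch of that validate-before-switch shape (opcodes and the validity rule here are illustrative, not the E1000 mailbox protocol):

```c
#include <assert.h>

/* Illustrative request codes, not the real E1000_VF_* values. */
enum { FILTER_CLR, FILTER_ADD };

/* Hypothetical validity rule standing in for is_valid_ether_addr(). */
static int valid_addr(int addr)
{
	return addr != 0;
}

/* Validate once, ahead of the switch, so every opcode is covered
 * rather than only the ADD branch. Returns -1 on rejection. */
static int handle_filter(int op, int addr, int trusted)
{
	if (!trusted)
		return -1;
	if (!valid_addr(addr))
		return -1;

	switch (op) {
	case FILTER_CLR:
		return 0;
	case FILTER_ADD:
		return 1;
	default:
		return -1;
	}
}
```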
···
 	if (!speed && hw->mac.ops.get_link_capabilities) {
 		ret = hw->mac.ops.get_link_capabilities(hw, &speed,
 							&autoneg);
+		/* remove NBASE-T speeds from default autonegotiation
+		 * to accommodate broken network switches in the field
+		 * which cannot cope with advertised NBASE-T speeds
+		 */
 		speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
 			   IXGBE_LINK_SPEED_2_5GB_FULL);
 	}
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
···
 	/* flush pending Tx transactions */
 	ixgbe_clear_tx_pending(hw);

+	/* set MDIO speed before talking to the PHY in case it's the 1st time */
+	ixgbe_set_mdio_speed(hw);
+
 	/* PHY ops must be identified and initialized prior to reset */
 	status = hw->phy.ops.init(hw);
 	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
···
 	void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed);
 	void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed);
 	void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv);
+	bool regs_valid;
 	u32 regs[];
 };
···
 	.set_to_rmii = rk3568_set_to_rmii,
 	.set_rgmii_speed = rk3568_set_gmac_speed,
 	.set_rmii_speed = rk3568_set_gmac_speed,
+	.regs_valid = true,
 	.regs = {
 		0xfe2a0000, /* gmac0 */
 		0xfe010000, /* gmac1 */
···
 	 * to be distinguished.
 	 */
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (res) {
+	if (res && ops->regs_valid) {
 		int i = 0;

 		while (ops->regs[i]) {
+17
drivers/net/ethernet/stmicro/stmmac/stmmac.h
···
 	int is_l4;
 };

+/* Rx Frame Steering */
+enum stmmac_rfs_type {
+	STMMAC_RFS_T_VLAN,
+	STMMAC_RFS_T_MAX,
+};
+
+struct stmmac_rfs_entry {
+	unsigned long cookie;
+	int in_use;
+	int type;
+	int tc;
+};
+
 struct stmmac_priv {
 	/* Frequently used values are kept adjacent for cache effect */
 	u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
···
 	struct stmmac_tc_entry *tc_entries;
 	unsigned int flow_entries_max;
 	struct stmmac_flow_entry *flow_entries;
+	unsigned int rfs_entries_max[STMMAC_RFS_T_MAX];
+	unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX];
+	unsigned int rfs_entries_total;
+	struct stmmac_rfs_entry *rfs_entries;

 	/* Pulse Per Second output */
 	struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
+12 -4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 {
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
 	struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
+	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+	if (priv->dma_cap.addr64 <= 32)
+		gfp |= GFP_DMA32;

 	if (!buf->page) {
-		buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+		buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 		if (!buf->page)
 			return -ENOMEM;
 		buf->page_offset = stmmac_rx_offset(priv);
 	}

 	if (priv->sph && !buf->sec_page) {
-		buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
+		buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 		if (!buf->sec_page)
 			return -ENOMEM;
···
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
 	int dirty = stmmac_rx_dirty(priv, queue);
 	unsigned int entry = rx_q->dirty_rx;
+	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+	if (priv->dma_cap.addr64 <= 32)
+		gfp |= GFP_DMA32;

 	while (dirty-- > 0) {
 		struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
···
 		p = rx_q->dma_rx + entry;

 		if (!buf->page) {
-			buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+			buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 			if (!buf->page)
 				break;
 		}

 		if (priv->sph && !buf->sec_page) {
-			buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
+			buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 			if (!buf->sec_page)
 				break;
+73-13
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
···232232 }233233}234234235235+static int tc_rfs_init(struct stmmac_priv *priv)236236+{237237+ int i;238238+239239+ priv->rfs_entries_max[STMMAC_RFS_T_VLAN] = 8;240240+241241+ for (i = 0; i < STMMAC_RFS_T_MAX; i++)242242+ priv->rfs_entries_total += priv->rfs_entries_max[i];243243+244244+ priv->rfs_entries = devm_kcalloc(priv->device,245245+ priv->rfs_entries_total,246246+ sizeof(*priv->rfs_entries),247247+ GFP_KERNEL);248248+ if (!priv->rfs_entries)249249+ return -ENOMEM;250250+251251+ dev_info(priv->device, "Enabled RFS Flow TC (entries=%d)\n",252252+ priv->rfs_entries_total);253253+254254+ return 0;255255+}256256+235257static int tc_init(struct stmmac_priv *priv)236258{237259 struct dma_features *dma_cap = &priv->dma_cap;238260 unsigned int count;239239- int i;261261+ int ret, i;240262241263 if (dma_cap->l3l4fnum) {242264 priv->flow_entries_max = dma_cap->l3l4fnum;···272250 for (i = 0; i < priv->flow_entries_max; i++)273251 priv->flow_entries[i].idx = i;274252275275- dev_info(priv->device, "Enabled Flow TC (entries=%d)\n",253253+ dev_info(priv->device, "Enabled L3L4 Flow TC (entries=%d)\n",276254 priv->flow_entries_max);277255 }256256+257257+ ret = tc_rfs_init(priv);258258+ if (ret)259259+ return -ENOMEM;278260279261 if (!priv->plat->fpe_cfg) {280262 priv->plat->fpe_cfg = devm_kzalloc(priv->device,···633607 return ret;634608}635609610610+static struct stmmac_rfs_entry *tc_find_rfs(struct stmmac_priv *priv,611611+ struct flow_cls_offload *cls,612612+ bool get_free)613613+{614614+ int i;615615+616616+ for (i = 0; i < priv->rfs_entries_total; i++) {617617+ struct stmmac_rfs_entry *entry = &priv->rfs_entries[i];618618+619619+ if (entry->cookie == cls->cookie)620620+ return entry;621621+ if (get_free && entry->in_use == false)622622+ return entry;623623+ }624624+625625+ return NULL;626626+}627627+636628#define VLAN_PRIO_FULL_MASK (0x07)637629638630static int tc_add_vlan_flow(struct stmmac_priv *priv,639631 struct flow_cls_offload *cls)640632{633633+
struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);641634 struct flow_rule *rule = flow_cls_offload_flow_rule(cls);642635 struct flow_dissector *dissector = rule->match.dissector;643636 int tc = tc_classid_to_hwtc(priv->dev, cls->classid);644637 struct flow_match_vlan match;638638+639639+ if (!entry) {640640+ entry = tc_find_rfs(priv, cls, true);641641+ if (!entry)642642+ return -ENOENT;643643+ }644644+645645+ if (priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN] >=646646+ priv->rfs_entries_max[STMMAC_RFS_T_VLAN])647647+ return -ENOENT;645648646649 /* Nothing to do here */647650 if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))···693638694639 prio = BIT(match.key->vlan_priority);695640 stmmac_rx_queue_prio(priv, priv->hw, prio, tc);641641+642642+ entry->in_use = true;643643+ entry->cookie = cls->cookie;644644+ entry->tc = tc;645645+ entry->type = STMMAC_RFS_T_VLAN;646646+ priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]++;696647 }697648698649 return 0;···707646static int tc_del_vlan_flow(struct stmmac_priv *priv,708647 struct flow_cls_offload *cls)709648{710710- struct flow_rule *rule = flow_cls_offload_flow_rule(cls);711711- struct flow_dissector *dissector = rule->match.dissector;712712- int tc = tc_classid_to_hwtc(priv->dev, cls->classid);649649+ struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);713650714714- /* Nothing to do here */715715- if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))716716- return -EINVAL;651651+ if (!entry || !entry->in_use || entry->type != STMMAC_RFS_T_VLAN)652652+ return -ENOENT;717653718718- if (tc < 0) {719719- netdev_err(priv->dev, "Invalid traffic class\n");720720- return -EINVAL;721721- }654654+ stmmac_rx_queue_prio(priv, priv->hw, 0, entry->tc);722655723723- stmmac_rx_queue_prio(priv, priv->hw, 0, tc);656656+ entry->in_use = false;657657+ entry->cookie = 0;658658+ entry->tc = 0;659659+ entry->type = 0;660660+661661+ priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]--;724662725663 return 0;726664}
+20-9
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···18441844 if (ret < 0) {18451845 dev_err(dev, "%pOF error reading port_id %d\n",18461846 port_np, ret);18471847- return ret;18471847+ goto of_node_put;18481848 }1849184918501850 if (!port_id || port_id > common->port_num) {18511851 dev_err(dev, "%pOF has invalid port_id %u %s\n",18521852 port_np, port_id, port_np->name);18531853- return -EINVAL;18531853+ ret = -EINVAL;18541854+ goto of_node_put;18541855 }1855185618561857 port = am65_common_get_port(common, port_id);···18671866 (AM65_CPSW_NU_FRAM_PORT_OFFSET * (port_id - 1));1868186718691868 port->slave.mac_sl = cpsw_sl_get("am65", dev, port->port_base);18701870- if (IS_ERR(port->slave.mac_sl))18711871- return PTR_ERR(port->slave.mac_sl);18691869+ if (IS_ERR(port->slave.mac_sl)) {18701870+ ret = PTR_ERR(port->slave.mac_sl);18711871+ goto of_node_put;18721872+ }1872187318731874 port->disabled = !of_device_is_available(port_np);18741875 if (port->disabled) {···18831880 ret = PTR_ERR(port->slave.ifphy);18841881 dev_err(dev, "%pOF error retrieving port phy: %d\n",18851882 port_np, ret);18861886- return ret;18831883+ goto of_node_put;18871884 }1888188518891886 port->slave.mac_only =···18921889 /* get phy/link info */18931890 if (of_phy_is_fixed_link(port_np)) {18941891 ret = of_phy_register_fixed_link(port_np);18951895- if (ret)18961896- return dev_err_probe(dev, ret,18921892+ if (ret) {18931893+ ret = dev_err_probe(dev, ret,18971894 "failed to register fixed-link phy %pOF\n",18981895 port_np);18961896+ goto of_node_put;18971897+ }18991898 port->slave.phy_node = of_node_get(port_np);19001899 } else {19011900 port->slave.phy_node =···19071902 if (!port->slave.phy_node) {19081903 dev_err(dev,19091904 "slave[%d] no phy found\n", port_id);19101910- return -ENODEV;19051905+ ret = -ENODEV;19061906+ goto of_node_put;19111907 }1912190819131909 ret = of_get_phy_mode(port_np, &port->slave.phy_if);19141910 if (ret) {19151911 dev_err(dev, "%pOF read phy-mode err %d\n",19161912 port_np, ret);19171917- return ret;19131913+
goto of_node_put;19181914 }1919191519201916 ret = of_get_mac_address(port_np, port->slave.mac_addr);···19381932 }1939193319401934 return 0;19351935+19361936+of_node_put:19371937+ of_node_put(port_np);19381938+ of_node_put(node);19391939+ return ret;19411940}1942194119431942static void am65_cpsw_pcpu_stats_free(void *data)
···462462463463 if (addr == mdiodev->addr) {464464 device_set_node(dev, of_fwnode_handle(child));465465+ /* The refcount on "child" is passed to the mdio466466+ * device. Do _not_ use of_node_put(child) here.467467+ */465468 return;466469 }467470 }
···733733 pr_debug("%s: rx error: len %u exceeds max size %d\n",734734 dev->name, len, GOOD_PACKET_LEN);735735 dev->stats.rx_length_errors++;736736- goto err_len;736736+ goto err;737737 }738738739739 if (likely(!vi->xdp_enabled)) {···825825826826skip_xdp:827827 skb = build_skb(buf, buflen);828828- if (!skb) {829829- put_page(page);828828+ if (!skb)830829 goto err;831831- }832830 skb_reserve(skb, headroom - delta);833831 skb_put(skb, len);834832 if (!xdp_prog) {···837839 if (metasize)838840 skb_metadata_set(skb, metasize);839841840840-err:841842 return skb;842843843844err_xdp:844845 rcu_read_unlock();845846 stats->xdp_drops++;846846-err_len:847847+err:847848 stats->drops++;848849 put_page(page);849850xdp_xmit:
+5-1
drivers/net/wireless/ath/ath11k/mhi.c
···533533 ret = mhi_pm_suspend(ab_pci->mhi_ctrl);534534 break;535535 case ATH11K_MHI_RESUME:536536- ret = mhi_pm_resume(ab_pci->mhi_ctrl);536536+ /* Do force MHI resume as some devices like QCA6390, WCN6855537537+ * are not in M3 state but they are functional. So just ignore538538+ * the MHI state while resuming.539539+ */540540+ ret = mhi_pm_resume_force(ab_pci->mhi_ctrl);537541 break;538542 case ATH11K_MHI_TRIGGER_RDDM:539543 ret = mhi_force_rddm_mode(ab_pci->mhi_ctrl);
+9-5
drivers/net/wireless/broadcom/brcm80211/Kconfig
···77 depends on MAC8021188 depends on BCMA_POSSIBLE99 select BCMA1010- select NEW_LEDS if BCMA_DRIVER_GPIO1111- select LEDS_CLASS if BCMA_DRIVER_GPIO1210 select BRCMUTIL1311 select FW_LOADER1412 select CORDIC1513 help1614 This module adds support for PCIe wireless adapters based on Broadcom1717- IEEE802.11n SoftMAC chipsets. It also has WLAN led support, which will1818- be available if you select BCMA_DRIVER_GPIO. If you choose to build a1919- module, the driver will be called brcmsmac.ko.1515+ IEEE802.11n SoftMAC chipsets. If you choose to build a module, the1616+ driver will be called brcmsmac.ko.1717+1818+config BRCMSMAC_LEDS1919+ def_bool BRCMSMAC && BCMA_DRIVER_GPIO && MAC80211_LEDS2020+ help2121+ The brcmsmac LED support depends on the presence of the2222+ BCMA_DRIVER_GPIO driver, and it only works if LED support2323+ is enabled and reachable from the driver module.20242125source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig"2226
···22config IWLEGACY33 tristate44 select FW_LOADER55- select NEW_LEDS66- select LEDS_CLASS75 select LEDS_TRIGGERS86 select MAC80211_LEDS97108config IWL4965119 tristate "Intel Wireless WiFi 4965AGN (iwl4965)"1210 depends on PCI && MAC802111111+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802111312 select IWLEGACY1413 help1514 This option enables support for···3738config IWL39453839 tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)"3940 depends on PCI && MAC802114141+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802114042 select IWLEGACY4143 help4244 Select to build the driver supporting the:
+1-1
drivers/net/wireless/intel/iwlwifi/Kconfig
···47474848config IWLWIFI_LEDS4949 bool5050- depends on LEDS_CLASS=y || LEDS_CLASS=IWLWIFI5050+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802115151 depends on IWLMVM || IWLDVM5252 select LEDS_TRIGGERS5353 select MAC80211_LEDS
+3-2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
···269269 u8 rate_plcp;270270 u32 rate_flags = 0;271271 bool is_cck;272272- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);273272274273 /* info->control is only relevant for non HW rate control */275274 if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {275275+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);276276+276277 /* HT rate doesn't make sense for a non data frame */277278 WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&278279 !ieee80211_is_data(fc),279280 "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",280281 info->control.rates[0].flags,281282 info->control.rates[0].idx,282282- le16_to_cpu(fc), mvmsta->sta_state);283283+ le16_to_cpu(fc), sta ? mvmsta->sta_state : -1);283284284285 rate_idx = info->control.rates[0].idx;285286 }
···666666 struct request *rq)667667{668668 if (ctrl->state != NVME_CTRL_DELETING_NOIO &&669669+ ctrl->state != NVME_CTRL_DELETING &&669670 ctrl->state != NVME_CTRL_DEAD &&670671 !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&671672 !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))···17501749 */17511750 if (WARN_ON_ONCE(!(id->flbas & NVME_NS_FLBAS_META_EXT)))17521751 return -EINVAL;17531753- if (ctrl->max_integrity_segments)17541754- ns->features |=17551755- (NVME_NS_METADATA_SUPPORTED | NVME_NS_EXT_LBAS);17521752+17531753+ ns->features |= NVME_NS_EXT_LBAS;17541754+17551755+ /*17561756+ * The current fabrics transport drivers support namespace17571757+ * metadata formats only if nvme_ns_has_pi() returns true.17581758+ * Suppress support for all other formats so the namespace will17591759+ * have a 0 capacity and not be usable through the block stack.17601760+ *17611761+ * Note, this check will need to be modified if any drivers17621762+ * gain the ability to use other metadata formats.17631763+ */17641764+ if (ctrl->max_integrity_segments && nvme_ns_has_pi(ns))17651765+ ns->features |= NVME_NS_METADATA_SUPPORTED;17561766 } else {17571767 /*17581768 * For PCIe controllers, we can't easily remap the separate···2708269627092697 if (tmp->cntlid == ctrl->cntlid) {27102698 dev_err(ctrl->device,27112711- "Duplicate cntlid %u with %s, rejecting\n",27122712- ctrl->cntlid, dev_name(tmp->device));26992699+ "Duplicate cntlid %u with %s, subsys %s, rejecting\n",27002700+ ctrl->cntlid, dev_name(tmp->device),27012701+ subsys->subnqn);27132702 return false;27142703 }27152704
···922922 size_t data_len = le32_to_cpu(req->cmd->common.dptr.sgl.length);923923 int ret;924924925925- if (!nvme_is_write(cmd->req.cmd) ||925925+ /*926926+ * This command has not been processed yet, hence we are trying to927927+ * figure out if there is still pending data left to receive. If928928+ * we don't, we can simply prepare for the next pdu and bail out,929929+ * otherwise we will need to prepare a buffer and receive the930930+ * stale data before continuing forward.931931+ */932932+ if (!nvme_is_write(cmd->req.cmd) || !data_len ||926933 data_len > cmd->req.port->inline_data_size) {927934 nvmet_prepare_receive_pdu(queue);928935 return;
+25-2
drivers/of/irq.c
···7676}7777EXPORT_SYMBOL_GPL(of_irq_find_parent);78787979+/*8080+ * These interrupt controllers abuse interrupt-map for unspeakable8181+ * reasons and rely on the core code to *ignore* it (the drivers do8282+ * their own parsing of the property).8383+ *8484+ * If you think of adding to the list for something *new*, think8585+ * again. There is a high chance that you will be sent back to the8686+ * drawing board.8787+ */8888+static const char * const of_irq_imap_abusers[] = {8989+ "CBEA,platform-spider-pic",9090+ "sti,platform-spider-pic",9191+ "realtek,rtl-intc",9292+ "fsl,ls1021a-extirq",9393+ "fsl,ls1043a-extirq",9494+ "fsl,ls1088a-extirq",9595+ "renesas,rza1-irqc",9696+ NULL,9797+};9898+7999/**80100 * of_irq_parse_raw - Low level interrupt tree parsing81101 * @addr: address specifier (start of "reg" property of the device) in be32 format···179159 /*180160 * Now check if cursor is an interrupt-controller and181161 * if it is then we are done, unless there is an182182- * interrupt-map which takes precedence.162162+ * interrupt-map which takes precedence except on one163163+ * of these broken platforms that want to parse164164+ * interrupt-map themselves for $reason.183165 */184166 bool intc = of_property_read_bool(ipar, "interrupt-controller");185167186168 imap = of_get_property(ipar, "interrupt-map", &imaplen);187187- if (imap == NULL && intc) {169169+ if (intc &&170170+ (!imap || of_device_compatible_match(ipar, of_irq_imap_abusers))) {188171 pr_debug(" -> got it !\n");189172 return 0;190173 }
+2-2
drivers/pci/controller/Kconfig
···332332 If unsure, say Y if you have an Apple Silicon system.333333334334config PCIE_MT7621335335- tristate "MediaTek MT7621 PCIe Controller"336336- depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST)335335+ bool "MediaTek MT7621 PCIe Controller"336336+ depends on SOC_MT7621 || (MIPS && COMPILE_TEST)337337 select PHY_MT7621_PCI338338 default SOC_MT7621339339 help
-9
drivers/pci/controller/pci-aardvark.c
···3232#define PCIE_CORE_DEV_ID_REG 0x03333#define PCIE_CORE_CMD_STATUS_REG 0x43434#define PCIE_CORE_DEV_REV_REG 0x83535-#define PCIE_CORE_EXP_ROM_BAR_REG 0x303635#define PCIE_CORE_PCIEXP_CAP 0xc03736#define PCIE_CORE_ERR_CAPCTL_REG 0x1183837#define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5)···773774 *value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);774775 return PCI_BRIDGE_EMUL_HANDLED;775776776776- case PCI_ROM_ADDRESS1:777777- *value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG);778778- return PCI_BRIDGE_EMUL_HANDLED;779779-780777 case PCI_INTERRUPT_LINE: {781778 /*782779 * From the whole 32bit register we support reading from HW only···803808 switch (reg) {804809 case PCI_COMMAND:805810 advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG);806806- break;807807-808808- case PCI_ROM_ADDRESS1:809809- advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG);810811 break;811812812813 case PCI_INTERRUPT_LINE:
+12-2
drivers/pci/controller/pcie-apple.c
···516516 int ret, i;517517518518 reset = gpiod_get_from_of_node(np, "reset-gpios", 0,519519- GPIOD_OUT_LOW, "#PERST");519519+ GPIOD_OUT_LOW, "PERST#");520520 if (IS_ERR(reset))521521 return PTR_ERR(reset);522522···539539540540 rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK);541541542542+ /* Assert PERST# before setting up the clock */543543+ gpiod_set_value(reset, 1);544544+542545 ret = apple_pcie_setup_refclk(pcie, port);543546 if (ret < 0)544547 return ret;545548549549+ /* The minimal Tperst-clk value is 100us (PCIe CEM r5.0, 2.9.2) */550550+ usleep_range(100, 200);551551+552552+ /* Deassert PERST# */546553 rmw_set(PORT_PERST_OFF, port->base + PORT_PERST);547547- gpiod_set_value(reset, 1);554554+ gpiod_set_value(reset, 0);555555+556556+ /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */557557+ msleep(100);548558549559 ret = readl_relaxed_poll_timeout(port->base + PORT_STATUS, stat,550560 stat & PORT_STATUS_READY, 100, 250000);
···127127};128128129129/**130130- * Write register and read back masked value to confirm it is written130130+ * usb_phy_write_readback() - Write register and read back masked value to131131+ * confirm it is written131132 *132132- * @base - QCOM DWC3 PHY base virtual address.133133- * @offset - register offset.134134- * @mask - register bitmask specifying what should be updated135135- * @val - value to write.133133+ * @phy_dwc3: QCOM DWC3 phy context134134+ * @offset: register offset.135135+ * @mask: register bitmask specifying what should be updated136136+ * @val: value to write.136137 */137138static inline void usb_phy_write_readback(struct usb_phy *phy_dwc3,138139 u32 offset,···172171}173172174173/**175175- * Write SSPHY register174174+ * usb_ss_write_phycreg() - Write SSPHY register176175 *177177- * @base - QCOM DWC3 PHY base virtual address.178178- * @addr - SSPHY address to write.179179- * @val - value to write.176176+ * @phy_dwc3: QCOM DWC3 phy context177177+ * @addr: SSPHY address to write.178178+ * @val: value to write.180179 */181180static int usb_ss_write_phycreg(struct usb_phy *phy_dwc3,182181 u32 addr, u32 val)···210209}211210212211/**213213- * Read SSPHY register.212212+ * usb_ss_read_phycreg() - Read SSPHY register.214213 *215215- * @base - QCOM DWC3 PHY base virtual address.216216- * @addr - SSPHY address to read.214214+ * @phy_dwc3: QCOM DWC3 phy context215215+ * @addr: SSPHY address to read.216216+ * @val: pointer in which the read value is stored.217217 */218218static int usb_ss_read_phycreg(struct usb_phy *phy_dwc3,219219 u32 addr, u32 *val)
+3
drivers/phy/qualcomm/phy-qcom-qmp.c
···29732973 * @qmp: QMP phy to which this lane belongs29742974 * @lane_rst: lane's reset controller29752975 * @mode: current PHY mode29762976+ * @dp_aux_cfg: Display port aux config29772977+ * @dp_opts: Display port optional config29782978+ * @dp_clks: Display port clocks29762979 */29772980struct qmp_phy {29782981 struct phy *phy;
···11// SPDX-License-Identifier: GPL-2.022-/**22+/*33 * Wrapper driver for SERDES used in J721E44 *55 * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
+3-3
drivers/phy/ti/phy-omap-usb2.c
···8989}90909191/**9292- * omap_usb2_set_comparator - links the comparator present in the system with9393- * this phy9494- * @comparator - the companion phy(comparator) for this phy9292+ * omap_usb2_set_comparator() - links the comparator present in the system with this phy9393+ *9494+ * @comparator: the companion phy(comparator) for this phy9595 *9696 * The phy companion driver should call this API passing the phy_companion9797 * filled with set_vbus and start_srp to be used by usb phy.
···17171818#define BLK_SFT_RSTN 0x01919#define BLK_CLK_EN 0x42020+#define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano DISPLAY_BLK_CTRL only */20212122struct imx8m_blk_ctrl_domain;2223···3736 const char *gpc_name;3837 u32 rst_mask;3938 u32 clk_mask;3939+4040+ /*4141+ * i.MX8M Mini and Nano have a third DISPLAY_BLK_CTRL register4242+ * which is used to control the reset for the MIPI Phy.4343+ * Since it's only present in certain circumstances,4444+ * an if-statement should be used before setting and clearing this4545+ * register.4646+ */4747+ u32 mipi_phy_rst_mask;4048};41494250#define DOMAIN_MAX_CLKS 3···88788979 /* put devices into reset */9080 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);8181+ if (data->mipi_phy_rst_mask)8282+ regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);91839284 /* enable upstream and blk-ctrl clocks to allow reset to propagate */9385 ret = clk_bulk_prepare_enable(data->num_clks, domain->clks);···11199112100 /* release reset */113101 regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);102102+ if (data->mipi_phy_rst_mask)103103+ regmap_set_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);114104115105 /* disable upstream clocks */116106 clk_bulk_disable_unprepare(data->num_clks, domain->clks);···134120 struct imx8m_blk_ctrl *bc = domain->bc;135121136122 /* put devices into reset and disable clocks */123123+ if (data->mipi_phy_rst_mask)124124+ regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);125125+137126 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);138127 regmap_clear_bits(bc->regmap, BLK_CLK_EN, data->clk_mask);139128···497480 .gpc_name = "mipi-dsi",498481 .rst_mask = BIT(5),499482 .clk_mask = BIT(8) | BIT(9),483483+ .mipi_phy_rst_mask = BIT(17),500484 },501485 [IMX8MM_DISPBLK_PD_MIPI_CSI] = {502486 .name = "dispblk-mipi-csi",···506488 .gpc_name = "mipi-csi",507489 .rst_mask = BIT(3) | BIT(4),508490 .clk_mask = BIT(10) | BIT(11),491491+ 
.mipi_phy_rst_mask = BIT(16),509492 },510493};511494
+4
drivers/soc/imx/soc-imx.c
···3636 int ret;3737 int i;38383939+ /* Return early if this is running on devices with different SoCs */4040+ if (!__mxc_cpu_type)4141+ return 0;4242+3943 if (of_machine_is_compatible("fsl,ls1021a"))4044 return 0;4145
+1-1
drivers/soc/tegra/fuse/fuse-tegra.c
···320320};321321builtin_platform_driver(tegra_fuse_driver);322322323323-bool __init tegra_fuse_read_spare(unsigned int spare)323323+u32 __init tegra_fuse_read_spare(unsigned int spare)324324{325325 unsigned int offset = fuse->soc->info->spare + spare * 4;326326
+1-1
drivers/soc/tegra/fuse/fuse.h
···6565void tegra_init_revision(void);6666void tegra_init_apbmisc(void);67676868-bool __init tegra_fuse_read_spare(unsigned int spare);6868+u32 __init tegra_fuse_read_spare(unsigned int spare);6969u32 __init tegra_fuse_read_early(unsigned int offset);70707171u8 tegra_get_major_rev(void);
···406406 * the USB-2 spec requires such endpoints to have wMaxPacketSize = 0407407 * (see the end of section 5.6.3), so don't warn about them.408408 */409409- maxp = usb_endpoint_maxp(&endpoint->desc);409409+ maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize);410410 if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) {411411 dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n",412412 cfgno, inum, asnum, d->bEndpointAddress);···422422 maxpacket_maxes = full_speed_maxpacket_maxes;423423 break;424424 case USB_SPEED_HIGH:425425- /* Bits 12..11 are allowed only for HS periodic endpoints */425425+ /* Multiple-transactions bits are allowed only for HS periodic endpoints */426426 if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) {427427- i = maxp & (BIT(12) | BIT(11));427427+ i = maxp & USB_EP_MAXP_MULT_MASK;428428 maxp &= ~i;429429 }430430 fallthrough;
-15
drivers/usb/dwc3/dwc3-qcom.c
···649649 struct dwc3_qcom *qcom = platform_get_drvdata(pdev);650650 struct device_node *np = pdev->dev.of_node, *dwc3_np;651651 struct device *dev = &pdev->dev;652652- struct property *prop;653652 int ret;654653655654 dwc3_np = of_get_compatible_child(np, "snps,dwc3");656655 if (!dwc3_np) {657656 dev_err(dev, "failed to find dwc3 core child\n");658657 return -ENODEV;659659- }660660-661661- prop = devm_kzalloc(dev, sizeof(*prop), GFP_KERNEL);662662- if (!prop) {663663- ret = -ENOMEM;664664- dev_err(dev, "unable to allocate memory for property\n");665665- goto node_put;666666- }667667-668668- prop->name = "tx-fifo-resize";669669- ret = of_add_property(dwc3_np, prop);670670- if (ret) {671671- dev_err(dev, "unable to add property\n");672672- goto node_put;673658 }674659675660 ret = of_platform_populate(np, NULL, NULL, dev);
+13-1
drivers/usb/gadget/composite.c
···16791679 struct usb_function *f = NULL;16801680 u8 endp;1681168116821682+ if (w_length > USB_COMP_EP0_BUFSIZ) {16831683+ if (ctrl->bRequestType == USB_DIR_OUT) {16841684+ goto done;16851685+ } else {16861686+ /* Cast away the const, we are going to overwrite on purpose. */16871687+ __le16 *temp = (__le16 *)&ctrl->wLength;16881688+16891689+ *temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);16901690+ w_length = USB_COMP_EP0_BUFSIZ;16911691+ }16921692+ }16931693+16821694 /* partial re-init of the response message; the function or the16831695 * gadget might need to intercept e.g. a control-OUT completion16841696 * when we delegate to it.···22212209 if (!cdev->req)22222210 return -ENOMEM;2223221122242224- cdev->req->buf = kmalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);22122212+ cdev->req->buf = kzalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);22252213 if (!cdev->req->buf)22262214 goto fail;22272215
+14-1
drivers/usb/gadget/legacy/dbgp.c
···137137 goto fail_1;138138 }139139140140- req->buf = kmalloc(DBGP_REQ_LEN, GFP_KERNEL);140140+ req->buf = kzalloc(DBGP_REQ_LEN, GFP_KERNEL);141141 if (!req->buf) {142142 err = -ENOMEM;143143 stp = 2;···344344 int err = -EOPNOTSUPP;345345 void *data = NULL;346346 u16 len = 0;347347+348348+ if (length > DBGP_REQ_LEN) {349349+ if (ctrl->bRequestType == USB_DIR_OUT) {350350+ return err;351351+ } else {352352+ /* Cast away the const, we are going to overwrite on purpose. */353353+ __le16 *temp = (__le16 *)&ctrl->wLength;354354+355355+ *temp = cpu_to_le16(DBGP_REQ_LEN);356356+ length = DBGP_REQ_LEN;357357+ }358358+ }359359+347360348361 if (request == USB_REQ_GET_DESCRIPTOR) {349362 switch (value>>8) {
+15-1
drivers/usb/gadget/legacy/inode.c
···110110/* enough for the whole queue: most events invalidate others */111111#define N_EVENT 5112112113113+#define RBUF_SIZE 256114114+113115struct dev_data {114116 spinlock_t lock;115117 refcount_t count;···146144 struct dentry *dentry;147145148146 /* except this scratch i/o buffer for ep0 */149149- u8 rbuf [256];147147+ u8 rbuf[RBUF_SIZE];150148};151149152150static inline void get_dev (struct dev_data *data)···13321330 struct usb_gadgetfs_event *event;13331331 u16 w_value = le16_to_cpu(ctrl->wValue);13341332 u16 w_length = le16_to_cpu(ctrl->wLength);13331333+13341334+ if (w_length > RBUF_SIZE) {13351335+ if (ctrl->bRequestType == USB_DIR_OUT) {13361336+ return value;13371337+ } else {13381338+ /* Cast away the const, we are going to overwrite on purpose. */13391339+ __le16 *temp = (__le16 *)&ctrl->wLength;13401340+13411341+ *temp = cpu_to_le16(RBUF_SIZE);13421342+ w_length = RBUF_SIZE;13431343+ }13441344+ }1335134513361346 spin_lock (&dev->lock);13371347 dev->setup_abort = 0;
+1
drivers/usb/host/xhci-hub.c
···717717 continue;718718719719 retval = xhci_disable_slot(xhci, i);720720+ xhci_free_virt_device(xhci, i);720721 if (retval)721722 xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",722723 i, retval);
-1
drivers/usb/host/xhci-ring.c
···15251525 if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)15261526 /* Delete default control endpoint resources */15271527 xhci_free_device_endpoint_resources(xhci, virt_dev, true);15281528- xhci_free_virt_device(xhci, slot_id);15291528}1530152915311530static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id,
+15-11
drivers/usb/host/xhci.c
···39343934 struct xhci_slot_ctx *slot_ctx;39353935 int i, ret;3936393639373937-#ifndef CONFIG_USB_DEFAULT_PERSIST39383937 /*39393938 * We called pm_runtime_get_noresume when the device was attached.39403939 * Decrement the counter here to allow controller to runtime suspend···39413942 */39423943 if (xhci->quirks & XHCI_RESET_ON_RESUME)39433944 pm_runtime_put_noidle(hcd->self.controller);39443944-#endif3945394539463946 ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);39473947 /* If the host is halted due to driver unload, we still need to free the···39593961 del_timer_sync(&virt_dev->eps[i].stop_cmd_timer);39603962 }39613963 virt_dev->udev = NULL;39623962- ret = xhci_disable_slot(xhci, udev->slot_id);39633963- if (ret)39643964- xhci_free_virt_device(xhci, udev->slot_id);39643964+ xhci_disable_slot(xhci, udev->slot_id);39653965+ xhci_free_virt_device(xhci, udev->slot_id);39653966}3966396739673968int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)···39703973 u32 state;39713974 int ret = 0;3972397539733973- command = xhci_alloc_command(xhci, false, GFP_KERNEL);39763976+ command = xhci_alloc_command(xhci, true, GFP_KERNEL);39743977 if (!command)39753978 return -ENOMEM;39763979···39953998 }39963999 xhci_ring_cmd_db(xhci);39974000 spin_unlock_irqrestore(&xhci->lock, flags);40014001+40024002+ wait_for_completion(command->completion);40034003+40044004+ if (command->status != COMP_SUCCESS)40054005+ xhci_warn(xhci, "Unsuccessful disable slot %u command, status %d\n",40064006+ slot_id, command->status);40074007+40084008+ xhci_free_command(xhci, command);40094009+39984010 return ret;39994011}40004012···4100409441014095 xhci_debugfs_create_slot(xhci, slot_id);4102409641034103-#ifndef CONFIG_USB_DEFAULT_PERSIST41044097 /*41054098 * If resetting upon resume, we can't put the controller into runtime41064099 * suspend if there is a device attached.41074100 */41084101 if (xhci->quirks & XHCI_RESET_ON_RESUME)41094102 
pm_runtime_get_noresume(hcd->self.controller);41104110-#endif4111410341124104 /* Is this a LS or FS device under a HS hub? */41134105 /* Hub or peripherial? */41144106 return 1;4115410741164108disable_slot:41174117- ret = xhci_disable_slot(xhci, udev->slot_id);41184118- if (ret)41194119- xhci_free_virt_device(xhci, udev->slot_id);41094109+ xhci_disable_slot(xhci, udev->slot_id);41104110+ xhci_free_virt_device(xhci, udev->slot_id);4120411141214112 return 0;41224113}···4243424042444241 mutex_unlock(&xhci->mutex);42454242 ret = xhci_disable_slot(xhci, udev->slot_id);42434243+ xhci_free_virt_device(xhci, udev->slot_id);42464244 if (!ret)42474245 xhci_alloc_dev(hcd, udev);42484246 kfree(command->completion);
+2-1
drivers/vdpa/vdpa.c
···404404 goto msg_err;405405406406 while (mdev->id_table[i].device) {407407- supported_classes |= BIT(mdev->id_table[i].device);407407+ if (mdev->id_table[i].device <= 63)408408+ supported_classes |= BIT_ULL(mdev->id_table[i].device);408409 i++;409410 }410411
+4-2
drivers/vdpa/vdpa_user/vduse_dev.c
···655655{656656 struct vduse_dev *dev = vdpa_to_vduse(vdpa);657657658658- if (len > dev->config_size - offset)658658+ if (offset > dev->config_size ||659659+ len > dev->config_size - offset)659660 return;660661661662 memcpy(buf, dev->config + offset, len);···976975 break;977976978977 ret = -EINVAL;979979- if (config.length == 0 ||978978+ if (config.offset > dev->config_size ||979979+ config.length == 0 ||980980 config.length > dev->config_size - config.offset)981981 break;982982
+1-1
drivers/vhost/vdpa.c
···197197 struct vdpa_device *vdpa = v->vdpa;198198 long size = vdpa->config->get_config_size(vdpa);199199200200- if (c->len == 0)200200+ if (c->len == 0 || c->off > size)201201 return -EINVAL;202202203203 if (c->len > size - c->off)
···181181 struct file *file;182182 struct wait_queue_head *head;183183 __poll_t events;184184- bool done;185184 bool cancelled;185185+ bool work_scheduled;186186+ bool work_need_resched;186187 struct wait_queue_entry wait;187188 struct work_struct work;188189};···16201619 iocb_put(iocb);16211620}1622162116221622+/*16231623+ * Safely lock the waitqueue which the request is on, synchronizing with the16241624+ * case where the ->poll() provider decides to free its waitqueue early.16251625+ *16261626+ * Returns true on success, meaning that req->head->lock was locked, req->wait16271627+ * is on req->head, and an RCU read lock was taken. Returns false if the16281628+ * request was already removed from its waitqueue (which might no longer exist).16291629+ */16301630+static bool poll_iocb_lock_wq(struct poll_iocb *req)16311631+{16321632+ wait_queue_head_t *head;16331633+16341634+ /*16351635+ * While we hold the waitqueue lock and the waitqueue is nonempty,16361636+ * wake_up_pollfree() will wait for us. However, taking the waitqueue16371637+ * lock in the first place can race with the waitqueue being freed.16381638+ *16391639+ * We solve this as eventpoll does: by taking advantage of the fact that16401640+ * all users of wake_up_pollfree() will RCU-delay the actual free. 
If16411641+ * we enter rcu_read_lock() and see that the pointer to the queue is16421642+ * non-NULL, we can then lock it without the memory being freed out from16431643+ * under us, then check whether the request is still on the queue.16441644+ *16451645+ * Keep holding rcu_read_lock() as long as we hold the queue lock, in16461646+ * case the caller deletes the entry from the queue, leaving it empty.16471647+ * In that case, only RCU prevents the queue memory from being freed.16481648+ */16491649+ rcu_read_lock();16501650+ head = smp_load_acquire(&req->head);16511651+ if (head) {16521652+ spin_lock(&head->lock);16531653+ if (!list_empty(&req->wait.entry))16541654+ return true;16551655+ spin_unlock(&head->lock);16561656+ }16571657+ rcu_read_unlock();16581658+ return false;16591659+}16601660+16611661+static void poll_iocb_unlock_wq(struct poll_iocb *req)16621662+{16631663+ spin_unlock(&req->head->lock);16641664+ rcu_read_unlock();16651665+}16661666+16231667static void aio_poll_complete_work(struct work_struct *work)16241668{16251669 struct poll_iocb *req = container_of(work, struct poll_iocb, work);···16841638 * avoid further branches in the fast path.16851639 */16861640 spin_lock_irq(&ctx->ctx_lock);16871687- if (!mask && !READ_ONCE(req->cancelled)) {16881688- add_wait_queue(req->head, &req->wait);16891689- spin_unlock_irq(&ctx->ctx_lock);16901690- return;16911691- }16411641+ if (poll_iocb_lock_wq(req)) {16421642+ if (!mask && !READ_ONCE(req->cancelled)) {16431643+ /*16441644+ * The request isn't actually ready to be completed yet.16451645+ * Reschedule completion if another wakeup came in.16461646+ */16471647+ if (req->work_need_resched) {16481648+ schedule_work(&req->work);16491649+ req->work_need_resched = false;16501650+ } else {16511651+ req->work_scheduled = false;16521652+ }16531653+ poll_iocb_unlock_wq(req);16541654+ spin_unlock_irq(&ctx->ctx_lock);16551655+ return;16561656+ }16571657+ list_del_init(&req->wait.entry);16581658+ 
poll_iocb_unlock_wq(req);16591659+ } /* else, POLLFREE has freed the waitqueue, so we must complete */16921660 list_del_init(&iocb->ki_list);16931661 iocb->ki_res.res = mangle_poll(mask);16941694- req->done = true;16951662 spin_unlock_irq(&ctx->ctx_lock);1696166316971664 iocb_put(iocb);···17161657 struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw);17171658 struct poll_iocb *req = &aiocb->poll;1718165917191719- spin_lock(&req->head->lock);17201720- WRITE_ONCE(req->cancelled, true);17211721- if (!list_empty(&req->wait.entry)) {17221722- list_del_init(&req->wait.entry);17231723- schedule_work(&aiocb->poll.work);17241724- }17251725- spin_unlock(&req->head->lock);16601660+ if (poll_iocb_lock_wq(req)) {16611661+ WRITE_ONCE(req->cancelled, true);16621662+ if (!req->work_scheduled) {16631663+ schedule_work(&aiocb->poll.work);16641664+ req->work_scheduled = true;16651665+ }16661666+ poll_iocb_unlock_wq(req);16671667+ } /* else, the request was force-cancelled by POLLFREE already */1726166817271669 return 0;17281670}···17401680 if (mask && !(mask & req->events))17411681 return 0;1742168217431743- list_del_init(&req->wait.entry);17441744-17451745- if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {16831683+ /*16841684+ * Complete the request inline if possible. This requires that three16851685+ * conditions be met:16861686+ * 1. An event mask must have been passed. If a plain wakeup was done16871687+ * instead, then mask == 0 and we have to call vfs_poll() to get16881688+ * the events, so inline completion isn't possible.16891689+ * 2. The completion work must not have already been scheduled.16901690+ * 3. ctx_lock must not be busy. We have to use trylock because we16911691+ * already hold the waitqueue lock, so this inverts the normal16921692+ * locking order. Use irqsave/irqrestore because not all16931693+ * filesystems (e.g. 
fuse) call this function with IRQs disabled,16941694+ * yet IRQs have to be disabled before ctx_lock is obtained.16951695+ */16961696+ if (mask && !req->work_scheduled &&16971697+ spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {17461698 struct kioctx *ctx = iocb->ki_ctx;1747169917481748- /*17491749- * Try to complete the iocb inline if we can. Use17501750- * irqsave/irqrestore because not all filesystems (e.g. fuse)17511751- * call this function with IRQs disabled and because IRQs17521752- * have to be disabled before ctx_lock is obtained.17531753- */17001700+ list_del_init(&req->wait.entry);17541701 list_del(&iocb->ki_list);17551702 iocb->ki_res.res = mangle_poll(mask);17561756- req->done = true;17571757- if (iocb->ki_eventfd && eventfd_signal_allowed()) {17031703+ if (iocb->ki_eventfd && !eventfd_signal_allowed()) {17581704 iocb = NULL;17591705 INIT_WORK(&req->work, aio_poll_put_work);17601706 schedule_work(&req->work);···17691703 if (iocb)17701704 iocb_put(iocb);17711705 } else {17721772- schedule_work(&req->work);17061706+ /*17071707+ * Schedule the completion work if needed. If it was already17081708+ * scheduled, record that another wakeup came in.17091709+ *17101710+ * Don't remove the request from the waitqueue here, as it might17111711+ * not actually be complete yet (we won't know until vfs_poll()17121712+ * is called), and we must not miss any wakeups. POLLFREE is an17131713+ * exception to this; see below.17141714+ */17151715+ if (req->work_scheduled) {17161716+ req->work_need_resched = true;17171717+ } else {17181718+ schedule_work(&req->work);17191719+ req->work_scheduled = true;17201720+ }17211721+17221722+ /*17231723+ * If the waitqueue is being freed early but we can't complete17241724+ * the request inline, we have to tear down the request as best17251725+ * we can. That means immediately removing the request from its17261726+ * waitqueue and preventing all further accesses to the17271727+ * waitqueue via the request. 
We also need to schedule the17281728+ * completion work (done above). Also mark the request as17291729+ * cancelled, to potentially skip an unneeded call to ->poll().17301730+ */17311731+ if (mask & POLLFREE) {17321732+ WRITE_ONCE(req->cancelled, true);17331733+ list_del_init(&req->wait.entry);17341734+17351735+ /*17361736+ * Careful: this *must* be the last step, since as soon17371737+ * as req->head is NULL'ed out, the request can be17381738+ * completed and freed, since aio_poll_complete_work()17391739+ * will no longer need to take the waitqueue lock.17401740+ */17411741+ smp_store_release(&req->head, NULL);17421742+ }17731743 }17741744 return 1;17751745}···18131711struct aio_poll_table {18141712 struct poll_table_struct pt;18151713 struct aio_kiocb *iocb;17141714+ bool queued;18161715 int error;18171716};18181717···18241721 struct aio_poll_table *pt = container_of(p, struct aio_poll_table, pt);1825172218261723 /* multiple wait queues per file are not supported */18271827- if (unlikely(pt->iocb->poll.head)) {17241724+ if (unlikely(pt->queued)) {18281725 pt->error = -EINVAL;18291726 return;18301727 }1831172817291729+ pt->queued = true;18321730 pt->error = 0;18331731 pt->iocb->poll.head = head;18341732 add_wait_queue(head, &pt->iocb->poll.wait);···18541750 req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;1855175118561752 req->head = NULL;18571857- req->done = false;18581753 req->cancelled = false;17541754+ req->work_scheduled = false;17551755+ req->work_need_resched = false;1859175618601757 apt.pt._qproc = aio_poll_queue_proc;18611758 apt.pt._key = req->events;18621759 apt.iocb = aiocb;17601760+ apt.queued = false;18631761 apt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */1864176218651763 /* initialized the list so that we can do list_empty checks */···1870176418711765 mask = vfs_poll(req->file, &apt.pt) & req->events;18721766 spin_lock_irq(&ctx->ctx_lock);18731873- if (likely(req->head)) {18741874- 
spin_lock(&req->head->lock);18751875- if (unlikely(list_empty(&req->wait.entry))) {18761876- if (apt.error)17671767+ if (likely(apt.queued)) {17681768+ bool on_queue = poll_iocb_lock_wq(req);17691769+17701770+ if (!on_queue || req->work_scheduled) {17711771+ /*17721772+ * aio_poll_wake() already either scheduled the async17731773+ * completion work, or completed the request inline.17741774+ */17751775+ if (apt.error) /* unsupported case: multiple queues */18771776 cancel = true;18781777 apt.error = 0;18791778 mask = 0;18801779 }18811780 if (mask || apt.error) {17811781+ /* Steal to complete synchronously. */18821782 list_del_init(&req->wait.entry);18831783 } else if (cancel) {17841784+ /* Cancel if possible (may be too late though). */18841785 WRITE_ONCE(req->cancelled, true);18851885- } else if (!req->done) { /* actually waiting for an event */17861786+ } else if (on_queue) {17871787+ /*17881788+ * Actually waiting for an event, so add the request to17891789+ * active_reqs so that it can be cancelled if needed.17901790+ */18861791 list_add_tail(&aiocb->ki_list, &ctx->active_reqs);18871792 aiocb->ki_cancel = aio_poll_cancel;18881793 }18891889- spin_unlock(&req->head->lock);17941794+ if (on_queue)17951795+ poll_iocb_unlock_wq(req);18901796 }18911797 if (mask) { /* no async, we'd stolen it */18921798 aiocb->ki_res.res = mangle_poll(mask);
+9-3
fs/btrfs/delalloc-space.c
···143143144144 /* Use new btrfs_qgroup_reserve_data to reserve precious data space. */145145 ret = btrfs_qgroup_reserve_data(inode, reserved, start, len);146146- if (ret < 0)146146+ if (ret < 0) {147147 btrfs_free_reserved_data_space_noquota(fs_info, len);148148- else148148+ extent_changeset_free(*reserved);149149+ *reserved = NULL;150150+ } else {149151 ret = 0;152152+ }150153 return ret;151154}152155···455452 if (ret < 0)456453 return ret;457454 ret = btrfs_delalloc_reserve_metadata(inode, len);458458- if (ret < 0)455455+ if (ret < 0) {459456 btrfs_free_reserved_data_space(inode, *reserved, start, len);457457+ extent_changeset_free(*reserved);458458+ *reserved = NULL;459459+ }460460 return ret;461461}462462
···43144314 return;4315431543164316 /*43174317+ * A read may stumble upon this buffer later, make sure that it gets an43184318+ * error and knows there was an error.43194319+ */43204320+ clear_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);43214321+43224322+ /*43234323+ * We need to set the mapping with the io error as well because a write43244324+ * error will flip the file system readonly, and then syncfs() will43254325+ * return a 0 because we are readonly if we don't modify the err seq for43264326+ * the superblock.43274327+ */43284328+ mapping_set_error(page->mapping, -EIO);43294329+43304330+ /*43174331 * If we error out, we should add back the dirty_metadata_bytes43184332 * to make it consistent.43194333 */
+2-4
fs/btrfs/ioctl.c
···31873187 return -EPERM;3188318831893189 vol_args = memdup_user(arg, sizeof(*vol_args));31903190- if (IS_ERR(vol_args)) {31913191- ret = PTR_ERR(vol_args);31923192- goto out;31933193- }31903190+ if (IS_ERR(vol_args))31913191+ return PTR_ERR(vol_args);3194319231953193 if (vol_args->flags & ~BTRFS_DEVICE_REMOVE_ARGS_MASK) {31963194 ret = -EOPNOTSUPP;
···43504350{43514351 struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);43524352 int bits = (fmode << 1) | 1;43534353- bool is_opened = false;43534353+ bool already_opened = false;43544354 int i;4355435543564356 if (count == 1)···4358435843594359 spin_lock(&ci->i_ceph_lock);43604360 for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {43614361- if (bits & (1 << i))43624362- ci->i_nr_by_mode[i] += count;43634363-43644361 /*43654365- * If any of the mode ref is larger than 1,43624362+ * If any of the mode ref is larger than 0,43664363 * that means it has been already opened by43674364 * others. Just skip checking the PIN ref.43684365 */43694369- if (i && ci->i_nr_by_mode[i] > 1)43704370- is_opened = true;43664366+ if (i && ci->i_nr_by_mode[i])43674367+ already_opened = true;43684368+43694369+ if (bits & (1 << i))43704370+ ci->i_nr_by_mode[i] += count;43714371 }4372437243734373- if (!is_opened)43734373+ if (!already_opened)43744374 percpu_counter_inc(&mdsc->metric.opened_inodes);43754375 spin_unlock(&ci->i_ceph_lock);43764376}
···590590{591591 unsigned int tioffset; /* challenge message target info area */592592 unsigned int tilen; /* challenge message target info area length */593593-594593 CHALLENGE_MESSAGE *pblob = (CHALLENGE_MESSAGE *)bcc_ptr;594594+ __u32 server_flags;595595596596 if (blob_len < sizeof(CHALLENGE_MESSAGE)) {597597 cifs_dbg(VFS, "challenge blob len %d too small\n", blob_len);···609609 return -EINVAL;610610 }611611612612+ server_flags = le32_to_cpu(pblob->NegotiateFlags);613613+ cifs_dbg(FYI, "%s: negotiate=0x%08x challenge=0x%08x\n", __func__,614614+ ses->ntlmssp->client_flags, server_flags);615615+616616+ if ((ses->ntlmssp->client_flags & (NTLMSSP_NEGOTIATE_SEAL | NTLMSSP_NEGOTIATE_SIGN)) &&617617+ (!(server_flags & NTLMSSP_NEGOTIATE_56) && !(server_flags & NTLMSSP_NEGOTIATE_128))) {618618+ cifs_dbg(VFS, "%s: requested signing/encryption but server did not return either 56-bit or 128-bit session key size\n",619619+ __func__);620620+ return -EINVAL;621621+ }622622+ if (!(server_flags & NTLMSSP_NEGOTIATE_NTLM) && !(server_flags & NTLMSSP_NEGOTIATE_EXTENDED_SEC)) {623623+ cifs_dbg(VFS, "%s: server does not seem to support either NTLMv1 or NTLMv2\n", __func__);624624+ return -EINVAL;625625+ }626626+ if (ses->server->sign && !(server_flags & NTLMSSP_NEGOTIATE_SIGN)) {627627+ cifs_dbg(VFS, "%s: forced packet signing but server does not seem to support it\n",628628+ __func__);629629+ return -EOPNOTSUPP;630630+ }631631+ if ((ses->ntlmssp->client_flags & NTLMSSP_NEGOTIATE_KEY_XCH) &&632632+ !(server_flags & NTLMSSP_NEGOTIATE_KEY_XCH))633633+ pr_warn_once("%s: authentication has been weakened as server does not support key exchange\n",634634+ __func__);635635+636636+ ses->ntlmssp->server_flags = server_flags;637637+612638 memcpy(ses->ntlmssp->cryptkey, pblob->Challenge, CIFS_CRYPTO_KEY_SIZE);613613- /* BB we could decode pblob->NegotiateFlags; some may be useful */614639 /* In particular we can examine sign flags */615640 /* BB spec says that if AvId field of MsvAvTimestamp is 
populated then616641 we must set the MIC field of the AUTHENTICATE_MESSAGE */617617- ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags);642642+618643 tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset);619644 tilen = le16_to_cpu(pblob->TargetInfoArray.Length);620645 if (tioffset > blob_len || tioffset + tilen > blob_len) {···746721 flags = NTLMSSP_NEGOTIATE_56 | NTLMSSP_REQUEST_TARGET |747722 NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE |748723 NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC |749749- NTLMSSP_NEGOTIATE_SEAL;750750- if (server->sign)751751- flags |= NTLMSSP_NEGOTIATE_SIGN;724724+ NTLMSSP_NEGOTIATE_ALWAYS_SIGN | NTLMSSP_NEGOTIATE_SEAL |725725+ NTLMSSP_NEGOTIATE_SIGN;752726 if (!server->session_estab || ses->ntlmssp->sesskey_per_smbsess)753727 flags |= NTLMSSP_NEGOTIATE_KEY_XCH;754728755729 tmp = *pbuffer + sizeof(NEGOTIATE_MESSAGE);730730+ ses->ntlmssp->client_flags = flags;756731 sec_blob->NegotiateFlags = cpu_to_le32(flags);757732758733 /* these fields should be null in negotiate phase MS-NLMP 3.1.5.1.1 */···804779 memcpy(sec_blob->Signature, NTLMSSP_SIGNATURE, 8);805780 sec_blob->MessageType = NtLmAuthenticate;806781807807- flags = NTLMSSP_NEGOTIATE_56 |808808- NTLMSSP_REQUEST_TARGET | NTLMSSP_NEGOTIATE_TARGET_INFO |809809- NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE |810810- NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC |811811- NTLMSSP_NEGOTIATE_SEAL | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED;812812- if (ses->server->sign)813813- flags |= NTLMSSP_NEGOTIATE_SIGN;814814- if (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess)815815- flags |= NTLMSSP_NEGOTIATE_KEY_XCH;782782+ flags = ses->ntlmssp->server_flags | NTLMSSP_REQUEST_TARGET |783783+ NTLMSSP_NEGOTIATE_TARGET_INFO | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED;816784817785 tmp = *pbuffer + sizeof(AUTHENTICATE_MESSAGE);818786 sec_blob->NegotiateFlags = cpu_to_le32(flags);···852834 *pbuffer, &tmp,853835 nls_cp);854836855855- if 
(((ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_KEY_XCH) ||856856- (ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_EXTENDED_SEC))857857- && !calc_seckey(ses)) {837837+ if ((ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_KEY_XCH) &&838838+ (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess) &&839839+ !calc_seckey(ses)) {858840 memcpy(tmp, ses->ntlmssp->ciphertext, CIFS_CPHTXT_SIZE);859841 sec_blob->SessionKey.BufferOffset = cpu_to_le32(tmp - *pbuffer);860842 sec_blob->SessionKey.Length = cpu_to_le16(CIFS_CPHTXT_SIZE);
+56-16
fs/file.c
···841841 spin_unlock(&files->file_lock);842842}843843844844+static inline struct file *__fget_files_rcu(struct files_struct *files,845845+ unsigned int fd, fmode_t mask, unsigned int refs)846846+{847847+ for (;;) {848848+ struct file *file;849849+ struct fdtable *fdt = rcu_dereference_raw(files->fdt);850850+ struct file __rcu **fdentry;851851+852852+ if (unlikely(fd >= fdt->max_fds))853853+ return NULL;854854+855855+ fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds);856856+ file = rcu_dereference_raw(*fdentry);857857+ if (unlikely(!file))858858+ return NULL;859859+860860+ if (unlikely(file->f_mode & mask))861861+ return NULL;862862+863863+ /*864864+ * Ok, we have a file pointer. However, because we do865865+ * this all locklessly under RCU, we may be racing with866866+ * that file being closed.867867+ *868868+ * Such a race can take two forms:869869+ *870870+ * (a) the file ref already went down to zero,871871+ * and get_file_rcu_many() fails. Just try872872+ * again:873873+ */874874+ if (unlikely(!get_file_rcu_many(file, refs)))875875+ continue;876876+877877+ /*878878+ * (b) the file table entry has changed under us.879879+ * Note that we don't need to re-check the 'fdt->fd'880880+ * pointer having changed, because it always goes881881+ * hand-in-hand with 'fdt'.882882+ *883883+ * If so, we need to put our refs and try again.884884+ */885885+ if (unlikely(rcu_dereference_raw(files->fdt) != fdt) ||886886+ unlikely(rcu_dereference_raw(*fdentry) != file)) {887887+ fput_many(file, refs);888888+ continue;889889+ }890890+891891+ /*892892+ * Ok, we have a ref to the file, and checked that it893893+ * still exists.894894+ */895895+ return file;896896+ }897897+}898898+844899static struct file *__fget_files(struct files_struct *files, unsigned int fd,845900 fmode_t mask, unsigned int refs)846901{847902 struct file *file;848903849904 rcu_read_lock();850850-loop:851851- file = files_lookup_fd_rcu(files, fd);852852- if (file) {853853- /* File object ref couldn't be 
taken.854854- * dup2() atomicity guarantee is the reason855855- * we loop to catch the new file (or NULL pointer)856856- */857857- if (file->f_mode & mask)858858- file = NULL;859859- else if (!get_file_rcu_many(file, refs))860860- goto loop;861861- else if (files_lookup_fd_raw(files, fd) != file) {862862- fput_many(file, refs);863863- goto loop;864864- }865865- }905905+ file = __fget_files_rcu(files, fd, mask, refs);866906 rcu_read_unlock();867907868908 return file;
+23-6
fs/io-wq.c
···142142 struct io_wqe_acct *acct,143143 struct io_cb_cancel_data *match);144144static void create_worker_cb(struct callback_head *cb);145145+static void io_wq_cancel_tw_create(struct io_wq *wq);145146146147static bool io_worker_get(struct io_worker *worker)147148{···358357 test_and_set_bit_lock(0, &worker->create_state))359358 goto fail_release;360359360360+ atomic_inc(&wq->worker_refs);361361 init_task_work(&worker->create_work, func);362362 worker->create_index = acct->index;363363 if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL)) {364364- clear_bit_unlock(0, &worker->create_state);364364+ /*365365+ * EXIT may have been set after checking it above, check after366366+ * adding the task_work and remove any creation item if it is367367+ * now set. wq exit does that too, but we can have added this368368+ * work item after we canceled in io_wq_exit_workers().369369+ */370370+ if (test_bit(IO_WQ_BIT_EXIT, &wq->state))371371+ io_wq_cancel_tw_create(wq);372372+ io_worker_ref_put(wq);365373 return true;366374 }375375+ io_worker_ref_put(wq);367376 clear_bit_unlock(0, &worker->create_state);368377fail_release:369378 io_worker_release(worker);···12091198 set_bit(IO_WQ_BIT_EXIT, &wq->state);12101199}1211120012121212-static void io_wq_exit_workers(struct io_wq *wq)12011201+static void io_wq_cancel_tw_create(struct io_wq *wq)12131202{12141203 struct callback_head *cb;12151215- int node;12161216-12171217- if (!wq->task)12181218- return;1219120412201205 while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {12211206 struct io_worker *worker;···12191212 worker = container_of(cb, struct io_worker, create_work);12201213 io_worker_cancel_cb(worker);12211214 }12151215+}12161216+12171217+static void io_wq_exit_workers(struct io_wq *wq)12181218+{12191219+ int node;12201220+12211221+ if (!wq->task)12221222+ return;12231223+12241224+ io_wq_cancel_tw_create(wq);1222122512231226 rcu_read_lock();12241227 for_each_node(node) {
+4-2
fs/io_uring.c
···9824982498259825/*98269826 * Find any io_uring ctx that this task has registered or done IO on, and cancel98279827- * requests. @sqd should be not-null IIF it's an SQPOLL thread cancellation.98279827+ * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.98289828 */98299829static __cold void io_uring_cancel_generic(bool cancel_all,98309830 struct io_sq_data *sqd)···98669866 cancel_all);98679867 }9868986898699869- prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);98699869+ prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);98709870+ io_run_task_work();98709871 io_uring_drop_tctx_refs(current);98729872+98719873 /*98729874 * If we've seen completions, retry without waiting. This98739875 * avoids a race where a completion comes in before we did
···35353636void signalfd_cleanup(struct sighand_struct *sighand)3737{3838- wait_queue_head_t *wqh = &sighand->signalfd_wqh;3939- /*4040- * The lockless check can race with remove_wait_queue() in progress,4141- * but in this case its caller should run under rcu_read_lock() and4242- * sighand_cachep is SLAB_TYPESAFE_BY_RCU, we can safely return.4343- */4444- if (likely(!waitqueue_active(wqh)))4545- return;4646-4747- /* wait_queue_entry_t->func(POLLFREE) should do remove_wait_queue() */4848- wake_up_poll(wqh, EPOLLHUP | POLLFREE);3838+ wake_up_pollfree(&sighand->signalfd_wqh);4939}50405141struct signalfd_ctx {
-13
fs/smbfs_common/cifs_arc4.c
···7272 ctx->y = y;7373}7474EXPORT_SYMBOL_GPL(cifs_arc4_crypt);7575-7676-static int __init7777-init_smbfs_common(void)7878-{7979- return 0;8080-}8181-static void __init8282-exit_smbfs_common(void)8383-{8484-}8585-8686-module_init(init_smbfs_common)8787-module_exit(exit_smbfs_common)
+76
fs/tracefs/inode.c
···161161 struct tracefs_mount_opts mount_opts;162162};163163164164+static void change_gid(struct dentry *dentry, kgid_t gid)165165+{166166+ if (!dentry->d_inode)167167+ return;168168+ dentry->d_inode->i_gid = gid;169169+}170170+171171+/*172172+ * Taken from d_walk, but without the need for handling renames.173173+ * Nothing can be renamed while walking the list, as tracefs174174+ * does not support renames. This is only called when mounting175175+ * or remounting the file system, to set all the files to176176+ * the given gid.177177+ */178178+static void set_gid(struct dentry *parent, kgid_t gid)179179+{180180+ struct dentry *this_parent;181181+ struct list_head *next;182182+183183+ this_parent = parent;184184+ spin_lock(&this_parent->d_lock);185185+186186+ change_gid(this_parent, gid);187187+repeat:188188+ next = this_parent->d_subdirs.next;189189+resume:190190+ while (next != &this_parent->d_subdirs) {191191+ struct list_head *tmp = next;192192+ struct dentry *dentry = list_entry(tmp, struct dentry, d_child);193193+ next = tmp->next;194194+195195+ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);196196+197197+ change_gid(dentry, gid);198198+199199+ if (!list_empty(&dentry->d_subdirs)) {200200+ spin_unlock(&this_parent->d_lock);201201+ spin_release(&dentry->d_lock.dep_map, _RET_IP_);202202+ this_parent = dentry;203203+ spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_);204204+ goto repeat;205205+ }206206+ spin_unlock(&dentry->d_lock);207207+ }208208+ /*209209+ * All done at this level ... 
ascend and resume the search.210210+ */211211+ rcu_read_lock();212212+ascend:213213+ if (this_parent != parent) {214214+ struct dentry *child = this_parent;215215+ this_parent = child->d_parent;216216+217217+ spin_unlock(&child->d_lock);218218+ spin_lock(&this_parent->d_lock);219219+220220+ /* go into the first sibling still alive */221221+ do {222222+ next = child->d_child.next;223223+ if (next == &this_parent->d_subdirs)224224+ goto ascend;225225+ child = list_entry(next, struct dentry, d_child);226226+ } while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));227227+ rcu_read_unlock();228228+ goto resume;229229+ }230230+ rcu_read_unlock();231231+ spin_unlock(&this_parent->d_lock);232232+ return;233233+}234234+164235static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)165236{166237 substring_t args[MAX_OPT_ARGS];···264193 if (!gid_valid(gid))265194 return -EINVAL;266195 opts->gid = gid;196196+ set_gid(tracefs_mount->mnt_root, gid);267197 break;268198 case Opt_mode:269199 if (match_octal(&args[0], &option))···486414 inode->i_mode = mode;487415 inode->i_fop = fops ? fops : &tracefs_file_operations;488416 inode->i_private = data;417417+ inode->i_uid = d_inode(dentry->d_parent)->i_uid;418418+ inode->i_gid = d_inode(dentry->d_parent)->i_gid;489419 d_instantiate(dentry, inode);490420 fsnotify_create(dentry->d_parent->d_inode, dentry);491421 return end_creating(dentry);···510436 inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR| S_IRGRP | S_IXUSR | S_IXGRP;511437 inode->i_op = ops;512438 inode->i_fop = &simple_dir_operations;439439+ inode->i_uid = d_inode(dentry->d_parent)->i_uid;440440+ inode->i_gid = d_inode(dentry->d_parent)->i_gid;513441514442 /* directory inodes start off with i_nlink == 2 (for "." entry) */515443 inc_nlink(inode);
+11-3
fs/xfs/xfs_super.c
···17651765xfs_remount_ro(17661766 struct xfs_mount *mp)17671767{17681768- int error;17681768+ struct xfs_icwalk icw = {17691769+ .icw_flags = XFS_ICWALK_FLAG_SYNC,17701770+ };17711771+ int error;1769177217701773 /*17711774 * Cancel background eofb scanning so it cannot race with the final···17761773 */17771774 xfs_blockgc_stop(mp);1778177517791779- /* Get rid of any leftover CoW reservations... */17801780- error = xfs_blockgc_free_space(mp, NULL);17761776+ /*17771777+ * Clear out all remaining COW staging extents and speculative post-EOF17781778+ * preallocations so that we don't leave inodes requiring inactivation17791779+ * cleanups during reclaim on a read-only mount. We must process every17801780+ * cached inode, so this requires a synchronous cache scan.17811781+ */17821782+ error = xfs_blockgc_free_space(mp, &icw);17811783 if (error) {17821784 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);17831785 return error;
+13-1
include/linux/delay.h
···2020 */21212222#include <linux/math.h>2323+#include <linux/sched.h>23242425extern unsigned long loops_per_jiffy;2526···5958void __attribute__((weak)) calibration_delay_done(void);6059void msleep(unsigned int msecs);6160unsigned long msleep_interruptible(unsigned int msecs);6262-void usleep_range(unsigned long min, unsigned long max);6161+void usleep_range_state(unsigned long min, unsigned long max,6262+ unsigned int state);6363+6464+static inline void usleep_range(unsigned long min, unsigned long max)6565+{6666+ usleep_range_state(min, max, TASK_UNINTERRUPTIBLE);6767+}6868+6969+static inline void usleep_idle_range(unsigned long min, unsigned long max)7070+{7171+ usleep_range_state(min, max, TASK_IDLE);7272+}63736474static inline void ssleep(unsigned int seconds)6575{
+13
include/linux/mhi.h
···664664int mhi_pm_resume(struct mhi_controller *mhi_cntrl);665665666666/**667667+ * mhi_pm_resume_force - Force resume MHI from suspended state668668+ * @mhi_cntrl: MHI controller669669+ *670670+ * Resume the device irrespective of its MHI state. As per the MHI spec, devices671671+ * have to be in M3 state during resume. But some devices seem to be in an672672+ * MHI state other than M3, yet they continue working fine if allowed.673673+ * This API is intended to be used for such devices.674674+ *675675+ * Return: 0 if the resume succeeds, a negative error code otherwise676676+ */677677+int mhi_pm_resume_force(struct mhi_controller *mhi_cntrl);678678+679679+/**667680 * mhi_download_rddm_image - Download ramdump image from device for668681 * debugging purpose.669682 * @mhi_cntrl: MHI controller
···129129 * pm_runtime_active - Check whether or not a device is runtime-active.130130 * @dev: Target device.131131 *132132- * Return %true if runtime PM is enabled for @dev and its runtime PM status is132132+ * Return %true if runtime PM is disabled for @dev or its runtime PM status is133133 * %RPM_ACTIVE, or %false otherwise.134134 *135135 * Note that the return value of this function can only be trusted if it is
+26
include/linux/wait.h
···217217void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);218218void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);219219void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode);220220+void __wake_up_pollfree(struct wait_queue_head *wq_head);220221221222#define wake_up(x) __wake_up(x, TASK_NORMAL, 1, NULL)222223#define wake_up_nr(x, nr) __wake_up(x, TASK_NORMAL, nr, NULL)···245244 __wake_up_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m))246245#define wake_up_interruptible_sync_poll_locked(x, m) \247246 __wake_up_locked_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m))247247+248248+/**249249+ * wake_up_pollfree - signal that a polled waitqueue is going away250250+ * @wq_head: the wait queue head251251+ *252252+ * In the very rare cases where a ->poll() implementation uses a waitqueue whose253253+ * lifetime is tied to a task rather than to the 'struct file' being polled,254254+ * this function must be called before the waitqueue is freed so that255255+ * non-blocking polls (e.g. epoll) are notified that the queue is going away.256256+ *257257+ * The caller must also RCU-delay the freeing of the wait_queue_head, e.g. via258258+ * an explicit synchronize_rcu() or call_rcu(), or via SLAB_TYPESAFE_BY_RCU.259259+ */260260+static inline void wake_up_pollfree(struct wait_queue_head *wq_head)261261+{262262+ /*263263+ * For performance reasons, we don't always take the queue lock here.264264+ * Therefore, we might race with someone removing the last entry from265265+ * the queue, and proceed while they still hold the queue lock.266266+ * However, rcu_read_lock() is required to be held in such cases, so we267267+ * can safely proceed with an RCU-delayed free.268268+ */269269+ if (waitqueue_active(wq_head))270270+ __wake_up_pollfree(wq_head);271271+}248272249273#define ___wait_cond_timeout(condition) \250274({ \
+1-1
include/uapi/asm-generic/poll.h
···2929#define POLLRDHUP 0x20003030#endif31313232-#define POLLFREE (__force __poll_t)0x4000 /* currently only for epoll */3232+#define POLLFREE (__force __poll_t)0x400033333434#define POLL_BUSY_LOOP (__force __poll_t)0x80003535
+10-8
include/uapi/linux/mptcp.h
···136136 * MPTCP_EVENT_REMOVED: token, rem_id137137 * An address has been lost by the peer.138138 *139139- * MPTCP_EVENT_SUB_ESTABLISHED: token, family, saddr4 | saddr6,140140- * daddr4 | daddr6, sport, dport, backup,141141- * if_idx [, error]139139+ * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id,140140+ * saddr4 | saddr6, daddr4 | daddr6, sport,141141+ * dport, backup, if_idx [, error]142142 * A new subflow has been established. 'error' should not be set.143143 *144144- * MPTCP_EVENT_SUB_CLOSED: token, family, saddr4 | saddr6, daddr4 | daddr6,145145- * sport, dport, backup, if_idx [, error]144144+ * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6,145145+ * daddr4 | daddr6, sport, dport, backup, if_idx146146+ * [, error]146147 * A subflow has been closed. An error (copy of sk_err) could be set if an147148 * error has been detected for this subflow.148149 *149149- * MPTCP_EVENT_SUB_PRIORITY: token, family, saddr4 | saddr6, daddr4 | daddr6,150150- * sport, dport, backup, if_idx [, error]151151- * The priority of a subflow has changed. 'error' should not be set.150150+ * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6,151151+ * daddr4 | daddr6, sport, dport, backup, if_idx152152+ * [, error]153153+ * The priority of a subflow has changed. 'error' should not be set.152154 */153155enum mptcp_event_type {154156 MPTCP_EVENT_UNSPEC = 0,
+10-3
include/uapi/linux/resource.h
···6666#define _STK_LIM (8*1024*1024)67676868/*6969- * GPG2 wants 64kB of mlocked memory, to make sure pass phrases7070- * and other sensitive information are never written to disk.6969+ * Limit the amount of locked memory by some sane default:7070+ * root can always increase this limit if needed.7171+ *7272+ * The main use-cases are (1) preventing sensitive memory7373+ * from being swapped; (2) real-time operations; (3) via7474+ * IORING_REGISTER_BUFFERS.7575+ *7676+ * The first two don't need much. The last will take as7777+ * much as it can get. 8MB is a reasonably sane default.7178 */7272-#define MLOCK_LIMIT ((PAGE_SIZE > 64*1024) ? PAGE_SIZE : 64*1024)7979+#define MLOCK_LIMIT (8*1024*1024)73807481/*7582 * Due to binary compatibility, the actual resource numbers
+10-11
kernel/audit.c
···718718{719719 int rc = 0;720720 struct sk_buff *skb;721721- static unsigned int failed = 0;721721+ unsigned int failed = 0;722722723723 /* NOTE: kauditd_thread takes care of all our locking, we just use724724 * the netlink info passed to us (e.g. sk and portid) */···735735 continue;736736 }737737738738+retry:738739 /* grab an extra skb reference in case of error */739740 skb_get(skb);740741 rc = netlink_unicast(sk, skb, portid, 0);741742 if (rc < 0) {742742- /* fatal failure for our queue flush attempt? */743743+ /* send failed - try a few times unless fatal error */743744 if (++failed >= retry_limit ||744745 rc == -ECONNREFUSED || rc == -EPERM) {745745- /* yes - error processing for the queue */746746 sk = NULL;747747 if (err_hook)748748 (*err_hook)(skb);749749- if (!skb_hook)750750- goto out;751751- /* keep processing with the skb_hook */749749+ if (rc == -EAGAIN)750750+ rc = 0;751751+ /* continue to drain the queue */752752 continue;753753 } else754754- /* no - requeue to preserve ordering */755755- skb_queue_head(queue, skb);754754+ goto retry;756755 } else {757757- /* it worked - drop the extra reference and continue */756756+ /* skb sent - drop the extra reference and continue */758757 consume_skb(skb);759758 failed = 0;760759 }761760 }762761763763-out:764762 return (rc >= 0 ? 0 : rc);765763}766764···16071609 audit_panic("cannot initialize netlink socket in namespace");16081610 return -ENOMEM;16091611 }16101610- aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;16121612+ /* limit the timeout in case auditd is blocked/stopped */16131613+ aunet->sk->sk_sndtimeo = HZ / 10;1611161416121615 return 0;16131616}
+36-17
kernel/bpf/verifier.c
···13681368 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);13691369}1370137013711371+static bool __reg32_bound_s64(s32 a)13721372+{13731373+ return a >= 0 && a <= S32_MAX;13741374+}13751375+13711376static void __reg_assign_32_into_64(struct bpf_reg_state *reg)13721377{13731378 reg->umin_value = reg->u32_min_value;13741379 reg->umax_value = reg->u32_max_value;13751375- /* Attempt to pull 32-bit signed bounds into 64-bit bounds13761376- * but must be positive otherwise set to worse case bounds13771377- * and refine later from tnum.13801380+13811381+ /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must13821382+ * be positive otherwise set to worse case bounds and refine later13831383+ * from tnum.13781384 */13791379- if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0)13801380- reg->smax_value = reg->s32_max_value;13811381- else13821382- reg->smax_value = U32_MAX;13831383- if (reg->s32_min_value >= 0)13851385+ if (__reg32_bound_s64(reg->s32_min_value) &&13861386+ __reg32_bound_s64(reg->s32_max_value)) {13841387 reg->smin_value = reg->s32_min_value;13851385- else13881388+ reg->smax_value = reg->s32_max_value;13891389+ } else {13861390 reg->smin_value = 0;13911391+ reg->smax_value = U32_MAX;13921392+ }13871393}1388139413891395static void __reg_combine_32_into_64(struct bpf_reg_state *reg)···23872381 */23882382 if (insn->src_reg != BPF_REG_FP)23892383 return 0;23902390- if (BPF_SIZE(insn->code) != BPF_DW)23912391- return 0;2392238423932385 /* dreg = *(u64 *)[fp - off] was a fill from the stack.23942386 * that [fp - off] slot contains scalar that needs to be···24082404 return -ENOTSUPP;24092405 /* scalars can only be spilled into stack */24102406 if (insn->dst_reg != BPF_REG_FP)24112411- return 0;24122412- if (BPF_SIZE(insn->code) != BPF_DW)24132407 return 0;24142408 spi = (-insn->off - 1) / BPF_REG_SIZE;24152409 if (spi >= 64) {···4555455345564554 if (insn->imm == BPF_CMPXCHG) {45574555 /* Check comparison of R0 with memory location 
*/45584558- err = check_reg_arg(env, BPF_REG_0, SRC_OP);45564556+ const u32 aux_reg = BPF_REG_0;45574557+45584558+ err = check_reg_arg(env, aux_reg, SRC_OP);45594559 if (err)45604560 return err;45614561+45624562+ if (is_pointer_value(env, aux_reg)) {45634563+ verbose(env, "R%d leaks addr into mem\n", aux_reg);45644564+ return -EACCES;45654565+ }45614566 }4562456745634568 if (is_pointer_value(env, insn->src_reg)) {···45994590 load_reg = -1;46004591 }4601459246024602- /* check whether we can read the memory */45934593+ /* Check whether we can read the memory, with second call for fetch45944594+ * case to simulate the register fill.45954595+ */46034596 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,46044604- BPF_SIZE(insn->code), BPF_READ, load_reg, true);45974597+ BPF_SIZE(insn->code), BPF_READ, -1, true);45984598+ if (!err && load_reg >= 0)45994599+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,46004600+ BPF_SIZE(insn->code), BPF_READ, load_reg,46014601+ true);46054602 if (err)46064603 return err;4607460446084608- /* check whether we can write into the same memory */46054605+ /* Check whether we can write into the same memory. */46094606 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,46104607 BPF_SIZE(insn->code), BPF_WRITE, -1, true);46114608 if (err)···83798364 insn->dst_reg);83808365 }83818366 zext_32_to_64(dst_reg);83678367+83688368+ __update_reg_bounds(dst_reg);83698369+ __reg_deduce_bounds(dst_reg);83708370+ __reg_bound_offset(dst_reg);83828371 }83838372 } else {83848373 /* case: R = imm
+7
kernel/sched/wait.c
···238238}239239EXPORT_SYMBOL_GPL(__wake_up_sync); /* For internal use only */240240241241+void __wake_up_pollfree(struct wait_queue_head *wq_head)242242+{243243+ __wake_up(wq_head, TASK_NORMAL, 0, poll_to_key(EPOLLHUP | POLLFREE));244244+ /* POLLFREE must have cleared the queue. */245245+ WARN_ON_ONCE(waitqueue_active(wq_head));246246+}247247+241248/*242249 * Note: we use "set_current_state()" _after_ the wait-queue add,243250 * because we need a memory barrier there on SMP, so that any
+9-7
kernel/time/timer.c
···20542054EXPORT_SYMBOL(msleep_interruptible);2055205520562056/**20572057- * usleep_range - Sleep for an approximate time20582058- * @min: Minimum time in usecs to sleep20592059- * @max: Maximum time in usecs to sleep20572057+ * usleep_range_state - Sleep for an approximate time in a given state20582058+ * @min: Minimum time in usecs to sleep20592059+ * @max: Maximum time in usecs to sleep20602060+ * @state: State in which the current task will be while sleeping20602061 *20612062 * In non-atomic context where the exact wakeup time is flexible, use20622062- * usleep_range() instead of udelay(). The sleep improves responsiveness20632063+ * usleep_range_state() instead of udelay(). The sleep improves responsiveness20632064 * by avoiding the CPU-hogging busy-wait of udelay(), and the range reduces20642065 * power usage by allowing hrtimers to take advantage of an already-20652066 * scheduled interrupt instead of scheduling a new one just for this sleep.20662067 */20672067-void __sched usleep_range(unsigned long min, unsigned long max)20682068+void __sched usleep_range_state(unsigned long min, unsigned long max,20692069+ unsigned int state)20682070{20692071 ktime_t exp = ktime_add_us(ktime_get(), min);20702072 u64 delta = (u64)(max - min) * NSEC_PER_USEC;2071207320722074 for (;;) {20732073- __set_current_state(TASK_UNINTERRUPTIBLE);20752075+ __set_current_state(state);20742076 /* Do not return before the requested sleep time has elapsed */20752077 if (!schedule_hrtimeout_range(&exp, delta, HRTIMER_MODE_ABS))20762078 break;20772079 }20782080}20792079-EXPORT_SYMBOL(usleep_range);20812081+EXPORT_SYMBOL(usleep_range_state);
+7-1
kernel/trace/ftrace.c
···52175217{52185218 struct ftrace_direct_func *direct;52195219 struct ftrace_func_entry *entry;52205220+ struct ftrace_hash *hash;52205221 int ret = -ENODEV;5221522252225223 mutex_lock(&direct_mutex);···52265225 if (!entry)52275226 goto out_unlock;5228522752295229- if (direct_functions->count == 1)52285228+ hash = direct_ops.func_hash->filter_hash;52295229+ if (hash->count == 1)52305230 unregister_ftrace_function(&direct_ops);5231523152325232 ret = ftrace_set_filter_ip(&direct_ops, ip, 1, 0);···55425540 err = unregister_ftrace_function(ops);55435541 remove_direct_functions_hash(hash, addr);55445542 mutex_unlock(&direct_mutex);55435543+55445544+ /* clean up for a possible subsequent register call */55455545+ ops->func = NULL;55465546+ ops->trampoline = 0;55455547 return err;55465548}55475549EXPORT_SYMBOL_GPL(unregister_ftrace_direct_multi);
···428428# UP and nommu archs use km based percpu allocator429429#430430config NEED_PER_CPU_KM431431- depends on !SMP431431+ depends on !SMP || !MMU432432 bool433433 default y434434
+7
mm/backing-dev.c
···945945 wb_shutdown(&bdi->wb);946946 cgwb_bdi_unregister(bdi);947947948948+ /*949949+ * If this BDI's min ratio has been set, use bdi_set_min_ratio() to950950+ * update the global bdi_min_ratio.951951+ */952952+ if (bdi->min_ratio)953953+ bdi_set_min_ratio(bdi, 0);954954+948955 if (bdi->dev) {949956 bdi_debug_unregister(bdi);950957 device_unregister(bdi->dev);
+7-13
mm/damon/core.c
···282282 for (i = 0; i < nr_ids; i++) {283283 t = damon_new_target(ids[i]);284284 if (!t) {285285- pr_err("Failed to alloc damon_target\n");286285 /* The caller should do cleanup of the ids itself */287286 damon_for_each_target_safe(t, next, ctx)288287 damon_destroy_target(t);···311312 unsigned long aggr_int, unsigned long primitive_upd_int,312313 unsigned long min_nr_reg, unsigned long max_nr_reg)313314{314314- if (min_nr_reg < 3) {315315- pr_err("min_nr_regions (%lu) must be at least 3\n",316316- min_nr_reg);315315+ if (min_nr_reg < 3)317316 return -EINVAL;318318- }319319- if (min_nr_reg > max_nr_reg) {320320- pr_err("invalid nr_regions. min (%lu) > max (%lu)\n",321321- min_nr_reg, max_nr_reg);317317+ if (min_nr_reg > max_nr_reg)322318 return -EINVAL;323323- }324319325320 ctx->sample_interval = sample_int;326321 ctx->aggr_interval = aggr_int;···973980974981static void kdamond_usleep(unsigned long usecs)975982{976976- if (usecs > 100 * 1000)977977- schedule_timeout_interruptible(usecs_to_jiffies(usecs));983983+ /* See Documentation/timers/timers-howto.rst for the thresholds */984984+ if (usecs > 20 * USEC_PER_MSEC)985985+ schedule_timeout_idle(usecs_to_jiffies(usecs));978986 else979979- usleep_range(usecs, usecs + 1);987987+ usleep_idle_range(usecs, usecs + 1);980988}981989982990/* Returns negative error code if it's not activated but should return */···10321038 ctx->callback.after_sampling(ctx))10331039 done = true;1034104010351035- usleep_range(ctx->sample_interval, ctx->sample_interval + 1);10411041+ kdamond_usleep(ctx->sample_interval);1036104210371043 if (ctx->primitive.check_accesses)10381044 max_nr_accesses = ctx->primitive.check_accesses(ctx);
+1-3
mm/damon/dbgfs.c
···210210 &wmarks.low, &parsed);211211 if (ret != 18)212212 break;213213- if (!damos_action_valid(action)) {214214- pr_err("wrong action %d\n", action);213213+ if (!damos_action_valid(action))215214 goto fail;216216- }217215218216 pos += parsed;219217 scheme = damon_new_scheme(min_sz, max_sz, min_nr_a, max_nr_a,
···32533253 goto skip;32543254 if (!PageUptodate(page) || PageReadahead(page))32553255 goto skip;32563256- if (PageHWPoison(page))32573257- goto skip;32583256 if (!trylock_page(page))32593257 goto skip;32603258 if (page->mapping != mapping)
+1-1
mm/hugetlb.c
···29732973 struct huge_bootmem_page *m = NULL; /* initialize for clang */29742974 int nr_nodes, node;2975297529762976- if (nid >= nr_online_nodes)29762976+ if (nid != NUMA_NO_NODE && nid >= nr_online_nodes)29772977 return 0;29782978 /* do node specific alloc */29792979 if (nid != NUMA_NO_NODE) {
+53-53
mm/memcontrol.c
···776776 rcu_read_unlock();777777}778778779779-/*780780- * mod_objcg_mlstate() may be called with irq enabled, so781781- * mod_memcg_lruvec_state() should be used.782782- */783783-static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,784784- struct pglist_data *pgdat,785785- enum node_stat_item idx, int nr)786786-{787787- struct mem_cgroup *memcg;788788- struct lruvec *lruvec;789789-790790- rcu_read_lock();791791- memcg = obj_cgroup_memcg(objcg);792792- lruvec = mem_cgroup_lruvec(memcg, pgdat);793793- mod_memcg_lruvec_state(lruvec, idx, nr);794794- rcu_read_unlock();795795-}796796-797779/**798780 * __count_memcg_events - account VM events in a cgroup799781 * @memcg: the memory cgroup···21192137}21202138#endif2121213921222122-/*21232123- * Most kmem_cache_alloc() calls are from user context. The irq disable/enable21242124- * sequence used in this case to access content from object stock is slow.21252125- * To optimize for user context access, there are now two object stocks for21262126- * task context and interrupt context access respectively.21272127- *21282128- * The task context object stock can be accessed by disabling preemption only21292129- * which is cheap in non-preempt kernel. The interrupt context object stock21302130- * can only be accessed after disabling interrupt. 
User context code can21312131- * access interrupt object stock, but not vice versa.21322132- */21332133-static inline struct obj_stock *get_obj_stock(unsigned long *pflags)21342134-{21352135- struct memcg_stock_pcp *stock;21362136-21372137- if (likely(in_task())) {21382138- *pflags = 0UL;21392139- preempt_disable();21402140- stock = this_cpu_ptr(&memcg_stock);21412141- return &stock->task_obj;21422142- }21432143-21442144- local_irq_save(*pflags);21452145- stock = this_cpu_ptr(&memcg_stock);21462146- return &stock->irq_obj;21472147-}21482148-21492149-static inline void put_obj_stock(unsigned long flags)21502150-{21512151- if (likely(in_task()))21522152- preempt_enable();21532153- else21542154- local_irq_restore(flags);21552155-}21562156-21572140/**21582141 * consume_stock: Try to consume stocked charge on this cpu.21592142 * @memcg: memcg to consume from.···27622815 * reclaimable. So those GFP bits should be masked off.27632816 */27642817#define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)28182818+28192819+/*28202820+ * Most kmem_cache_alloc() calls are from user context. The irq disable/enable28212821+ * sequence used in this case to access content from object stock is slow.28222822+ * To optimize for user context access, there are now two object stocks for28232823+ * task context and interrupt context access respectively.28242824+ *28252825+ * The task context object stock can be accessed by disabling preemption only28262826+ * which is cheap in non-preempt kernel. The interrupt context object stock28272827+ * can only be accessed after disabling interrupt. 
User context code can28282828+ * access interrupt object stock, but not vice versa.28292829+ */28302830+static inline struct obj_stock *get_obj_stock(unsigned long *pflags)28312831+{28322832+ struct memcg_stock_pcp *stock;28332833+28342834+ if (likely(in_task())) {28352835+ *pflags = 0UL;28362836+ preempt_disable();28372837+ stock = this_cpu_ptr(&memcg_stock);28382838+ return &stock->task_obj;28392839+ }28402840+28412841+ local_irq_save(*pflags);28422842+ stock = this_cpu_ptr(&memcg_stock);28432843+ return &stock->irq_obj;28442844+}28452845+28462846+static inline void put_obj_stock(unsigned long flags)28472847+{28482848+ if (likely(in_task()))28492849+ preempt_enable();28502850+ else28512851+ local_irq_restore(flags);28522852+}28532853+28542854+/*28552855+ * mod_objcg_mlstate() may be called with irq enabled, so28562856+ * mod_memcg_lruvec_state() should be used.28572857+ */28582858+static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,28592859+ struct pglist_data *pgdat,28602860+ enum node_stat_item idx, int nr)28612861+{28622862+ struct mem_cgroup *memcg;28632863+ struct lruvec *lruvec;28642864+28652865+ rcu_read_lock();28662866+ memcg = obj_cgroup_memcg(objcg);28672867+ lruvec = mem_cgroup_lruvec(memcg, pgdat);28682868+ mod_memcg_lruvec_state(lruvec, idx, nr);28692869+ rcu_read_unlock();28702870+}2765287127662872int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,27672873 gfp_t gfp, bool new_page)
···12191219{12201220 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);1221122112221222- if (local->in_reconfig)12221222+ /* In reconfig don't transmit now, but mark for waking later */12231223+ if (local->in_reconfig) {12241224+ set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);12231225 return;12261226+ }1224122712251228 if (!check_sdata_in_driver(sdata))12261229 return;
+10-3
net/mac80211/mlme.c
···24522452 u16 tx_time)24532453{24542454 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;24552455- u16 tid = ieee80211_get_tid(hdr);24562456- int ac = ieee80211_ac_from_tid(tid);24572457- struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];24552455+ u16 tid;24562456+ int ac;24572457+ struct ieee80211_sta_tx_tspec *tx_tspec;24582458 unsigned long now = jiffies;24592459+24602460+ if (!ieee80211_is_data_qos(hdr->frame_control))24612461+ return;24622462+24632463+ tid = ieee80211_get_tid(hdr);24642464+ ac = ieee80211_ac_from_tid(tid);24652465+ tx_tspec = &ifmgd->tx_tspec[ac];2459246624602467 if (likely(!tx_tspec->admitted_time))24612468 return;
+1
net/mac80211/rx.c
···29442944 if (!fwd_skb)29452945 goto out;2946294629472947+ fwd_skb->dev = sdata->dev;29472948 fwd_hdr = (struct ieee80211_hdr *) fwd_skb->data;29482949 fwd_hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_RETRY);29492950 info = IEEE80211_SKB_CB(fwd_skb);
+12-9
net/mac80211/sta_info.c
···644644	/* check if STA exists already */645645	if (sta_info_get_bss(sdata, sta->sta.addr)) {646646		err = -EEXIST;647647-		goto out_err;647647+		goto out_cleanup;648648	}649649650650	sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL);651651	if (!sinfo) {652652		err = -ENOMEM;653653-		goto out_err;653653+		goto out_cleanup;654654	}655655656656	local->num_sta++;···667667668668	list_add_tail_rcu(&sta->list, &local->sta_list);669669670670+	/* update channel context before notifying the driver about state671671+	 * change, so that the driver can use the updated channel context right away.672672+	 */673673+	if (sta->sta_state >= IEEE80211_STA_ASSOC) {674674+		ieee80211_recalc_min_chandef(sta->sdata);675675+		if (!sta->sta.support_p2p_ps)676676+			ieee80211_recalc_p2p_go_ps_allowed(sta->sdata);677677+	}678678+670679	/* notify driver */671680	err = sta_info_insert_drv_state(local, sdata, sta);672681	if (err)673682		goto out_remove;674683675684	set_sta_flag(sta, WLAN_STA_INSERTED);676676-677677-	if (sta->sta_state >= IEEE80211_STA_ASSOC) {678678-		ieee80211_recalc_min_chandef(sta->sdata);679679-		if (!sta->sta.support_p2p_ps)680680-			ieee80211_recalc_p2p_go_ps_allowed(sta->sdata);681681-	}682685683686	/* accept BA sessions now */684687	clear_sta_flag(sta, WLAN_STA_BLOCK_BA);···709706 out_drop_sta:710707	local->num_sta--;711708	synchronize_net();709709+ out_cleanup:712710	cleanup_single_sta(sta);713713- out_err:714711	mutex_unlock(&local->sta_mtx);715712	kfree(sinfo);716713	rcu_read_lock();
+2
net/mac80211/sta_info.h
···176176 * @failed_bar_ssn: ssn of the last failed BAR tx attempt177177 * @bar_pending: BAR needs to be re-sent178178 * @amsdu: support A-MSDU within an A-MPDU179179+ * @ssn: starting sequence number of the session179180 *180181 * This structure's lifetime is managed by RCU, assignments to181182 * the array holding it must hold the aggregation mutex.···200199	u8 stop_initiator;201200	bool tx_stop;202201	u16 buf_size;202202+	u16 ssn;203203204204	u16 failed_bar_ssn;205205	bool bar_pending;
+5-5
net/mac80211/tx.c
···18221822 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb);18231823 ieee80211_tx_result res = TX_CONTINUE;1824182418251825+ if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL))18261826+ CALL_TXH(ieee80211_tx_h_rate_ctrl);18271827+18251828 if (unlikely(info->flags & IEEE80211_TX_INTFL_RETRANSMISSION)) {18261829 __skb_queue_tail(&tx->skbs, tx->skb);18271830 tx->skb = NULL;18281831 goto txh_done;18291832 }18301830-18311831- if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL))18321832- CALL_TXH(ieee80211_tx_h_rate_ctrl);1833183318341834 CALL_TXH(ieee80211_tx_h_michael_mic_add);18351835 CALL_TXH(ieee80211_tx_h_sequence);···4191419141924192 ieee80211_aggr_check(sdata, sta, skb);4193419341944194+ sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);41954195+41944196 if (sta) {41954197 struct ieee80211_fast_tx *fast_tx;41964196-41974197- sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);4198419841994199 fast_tx = rcu_dereference(sta->fast_tx);42004200
+14-9
net/mac80211/util.c
···943943 struct ieee802_11_elems *elems)944944{945945 const void *data = elem->data + 1;946946- u8 len = elem->datalen - 1;946946+ u8 len;947947+948948+ if (!elem->datalen)949949+ return;950950+951951+ len = elem->datalen - 1;947952948953 switch (elem->data[0]) {949954 case WLAN_EID_EXT_HE_MU_EDCA:···20682063 chandef.chan = chan;2069206420702065 skb = ieee80211_probereq_get(&local->hw, src, ssid, ssid_len,20712071- 100 + ie_len);20662066+ local->scan_ies_len + ie_len);20722067 if (!skb)20732068 return NULL;20742069···26512646 mutex_unlock(&local->sta_mtx);26522647 }2653264826492649+ /*26502650+ * If this is for hw restart things are still running.26512651+ * We may want to change that later, however.26522652+ */26532653+ if (local->open_count && (!suspended || reconfig_due_to_wowlan))26542654+ drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART);26552655+26542656 if (local->in_reconfig) {26552657 local->in_reconfig = false;26562658 barrier();···26752663 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,26762664 IEEE80211_QUEUE_STOP_REASON_SUSPEND,26772665 false);26782678-26792679- /*26802680- * If this is for hw restart things are still running.26812681- * We may want to change that later, however.26822682- */26832683- if (local->open_count && (!suspended || reconfig_due_to_wowlan))26842684- drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART);2685266626862667 if (!suspended)26872668 return 0;
···543543 case TCP_NODELAY:544544 case TCP_THIN_LINEAR_TIMEOUTS:545545 case TCP_CONGESTION:546546- case TCP_ULP:547546 case TCP_CORK:548547 case TCP_KEEPIDLE:549548 case TCP_KEEPINTVL:
···253253 * should end up here, but if it254254 * does, reset/destroy the connection.255255 */256256+ kfree(conn->c_path);256257 kmem_cache_free(rds_conn_slab, conn);257258 conn = ERR_PTR(-EOPNOTSUPP);258259 goto out;
···27362736 q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),27372737 GFP_KERNEL);27382738 if (!q->tins)27392739- goto nomem;27392739+ return -ENOMEM;2740274027412741 for (i = 0; i < CAKE_MAX_TINS; i++) {27422742 struct cake_tin_data *b = q->tins + i;···27662766 q->min_netlen = ~0;27672767 q->min_adjlen = ~0;27682768 return 0;27692769-27702770-nomem:27712771- cake_destroy(sch);27722772- return -ENOMEM;27732769}2774277027752771static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+2-2
net/sched/sch_ets.c
···666666 }667667 }668668 for (i = q->nbands; i < oldbands; i++) {669669- qdisc_tree_flush_backlog(q->classes[i].qdisc);670670- if (i >= q->nstrict)669669+ if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)671670 list_del(&q->classes[i].alist);671671+ qdisc_tree_flush_backlog(q->classes[i].qdisc);672672 }673673 q->nstrict = nstrict;674674 memcpy(q->prio2band, priomap, sizeof(priomap));
+3-1
net/smc/af_smc.c
···194194 /* cleanup for a dangling non-blocking connect */195195 if (smc->connect_nonblock && sk->sk_state == SMC_INIT)196196 tcp_abort(smc->clcsock->sk, ECONNABORTED);197197- flush_work(&smc->connect_work);197197+198198+ if (cancel_work_sync(&smc->connect_work))199199+ sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */198200199201 if (sk->sk_state == SMC_LISTEN)200202 /* smc_close_non_accepted() is called and acquires
+2-1
net/vmw_vsock/virtio_transport_common.c
···12991299 space_available = virtio_transport_space_update(sk, pkt);1300130013011301 /* Update CID in case it has changed after a transport reset event */13021302- vsk->local_addr.svm_cid = dst.svm_cid;13021302+ if (vsk->local_addr.svm_cid != VMADDR_CID_ANY)13031303+ vsk->local_addr.svm_cid = dst.svm_cid;1303130413041305 if (space_available)13051306 sk->sk_write_space(sk);
···264264 struct snd_ctl_elem_value *data,265265 int type, int count)266266{267267+ struct snd_ctl_elem_value32 __user *data32 = userdata;267268 int i, size;268269269270 if (type == SNDRV_CTL_ELEM_TYPE_BOOLEAN ||···281280 if (copy_to_user(valuep, data->value.bytes.data, size))282281 return -EFAULT;283282 }283283+ if (copy_to_user(&data32->id, &data->id, sizeof(data32->id)))284284+ return -EFAULT;284285 return 0;285286}286287
+25-12
sound/core/oss/pcm_oss.c
···147147 *148148 * Return the maximum value for field PAR.149149 */150150-static unsigned int150150+static int151151snd_pcm_hw_param_value_max(const struct snd_pcm_hw_params *params,152152 snd_pcm_hw_param_t var, int *dir)153153{···682682 struct snd_pcm_hw_params *oss_params,683683 struct snd_pcm_hw_params *slave_params)684684{685685- size_t s;686686- size_t oss_buffer_size, oss_period_size, oss_periods;687687- size_t min_period_size, max_period_size;685685+ ssize_t s;686686+ ssize_t oss_buffer_size;687687+ ssize_t oss_period_size, oss_periods;688688+ ssize_t min_period_size, max_period_size;688689 struct snd_pcm_runtime *runtime = substream->runtime;689690 size_t oss_frame_size;690691691692 oss_frame_size = snd_pcm_format_physical_width(params_format(oss_params)) *692693 params_channels(oss_params) / 8;693694695695+ oss_buffer_size = snd_pcm_hw_param_value_max(slave_params,696696+ SNDRV_PCM_HW_PARAM_BUFFER_SIZE,697697+ NULL);698698+ if (oss_buffer_size <= 0)699699+ return -EINVAL;694700 oss_buffer_size = snd_pcm_plug_client_size(substream,695695- snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, NULL)) * oss_frame_size;696696- if (!oss_buffer_size)701701+ oss_buffer_size * oss_frame_size);702702+ if (oss_buffer_size <= 0)697703 return -EINVAL;698704 oss_buffer_size = rounddown_pow_of_two(oss_buffer_size);699705 if (atomic_read(&substream->mmap_count)) {···736730737731 min_period_size = snd_pcm_plug_client_size(substream,738732 snd_pcm_hw_param_value_min(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));739739- if (min_period_size) {733733+ if (min_period_size > 0) {740734 min_period_size *= oss_frame_size;741735 min_period_size = roundup_pow_of_two(min_period_size);742736 if (oss_period_size < min_period_size)···745739746740 max_period_size = snd_pcm_plug_client_size(substream,747741 snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL));748748- if (max_period_size) {742742+ if (max_period_size > 0) {749743 
max_period_size *= oss_frame_size;750744 max_period_size = rounddown_pow_of_two(max_period_size);751745 if (oss_period_size > max_period_size)···758752 oss_periods = substream->oss.setup.periods;759753760754 s = snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIODS, NULL);761761- if (runtime->oss.maxfrags && s > runtime->oss.maxfrags)755755+ if (s > 0 && runtime->oss.maxfrags && s > runtime->oss.maxfrags)762756 s = runtime->oss.maxfrags;763757 if (oss_periods > s)764758 oss_periods = s;···884878 err = -EINVAL;885879 goto failure;886880 }887887- choose_rate(substream, sparams, runtime->oss.rate);888888- snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_CHANNELS, runtime->oss.channels, NULL);881881+882882+ err = choose_rate(substream, sparams, runtime->oss.rate);883883+ if (err < 0)884884+ goto failure;885885+ err = snd_pcm_hw_param_near(substream, sparams,886886+ SNDRV_PCM_HW_PARAM_CHANNELS,887887+ runtime->oss.channels, NULL);888888+ if (err < 0)889889+ goto failure;889890890891 format = snd_pcm_oss_format_from(runtime->oss.format);891892···19691956 if (runtime->oss.subdivision || runtime->oss.fragshift)19701957 return -EINVAL;19711958 fragshift = val & 0xffff;19721972- if (fragshift >= 31)19591959+ if (fragshift >= 25) /* should be large enough */19731960 return -EINVAL;19741961 runtime->oss.fragshift = fragshift;19751962 runtime->oss.maxfrags = (val >> 16) & 0xffff;
+62-18
sound/pci/hda/patch_realtek.c
···65036503/* for alc285_fixup_ideapad_s740_coef() */65046504#include "ideapad_s740_helper.c"6505650565066506-static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,65076507- const struct hda_fixup *fix,65086508- int action)65066506+static const struct coef_fw alc256_fixup_set_coef_defaults_coefs[] = {65076507+ WRITE_COEF(0x10, 0x0020), WRITE_COEF(0x24, 0x0000),65086508+ WRITE_COEF(0x26, 0x0000), WRITE_COEF(0x29, 0x3000),65096509+ WRITE_COEF(0x37, 0xfe05), WRITE_COEF(0x45, 0x5089),65106510+ {}65116511+};65126512+65136513+static void alc256_fixup_set_coef_defaults(struct hda_codec *codec,65146514+ const struct hda_fixup *fix,65156515+ int action)65096516{65106517 /*65116511- * A certain other OS sets these coeffs to different values. On at least one TongFang65126512- * barebone these settings might survive even a cold reboot. So to restore a clean slate the65136513- * values are explicitly reset to default here. Without this, the external microphone is65146514- * always in a plugged-in state, while the internal microphone is always in an unplugged65156515- * state, breaking the ability to use the internal microphone.65166516- */65176517- alc_write_coef_idx(codec, 0x24, 0x0000);65186518- alc_write_coef_idx(codec, 0x26, 0x0000);65196519- alc_write_coef_idx(codec, 0x29, 0x3000);65206520- alc_write_coef_idx(codec, 0x37, 0xfe05);65216521- alc_write_coef_idx(codec, 0x45, 0x5089);65186518+ * A certain other OS sets these coeffs to different values. On at least65196519+ * one TongFang barebone these settings might survive even a cold65206520+ * reboot. So to restore a clean slate the values are explicitly reset65216521+ * to default here. 
+ * Without this, the external microphone is always in a
+ * plugged-in state, while the internal microphone is always in an
+ * unplugged state, breaking the ability to use the internal microphone.
+ */
+	alc_process_coef_fw(codec, alc256_fixup_set_coef_defaults_coefs);
 }
 
 static const struct coef_fw alc233_fixup_no_audio_jack_coefs[] = {
···
 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
-	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
+	ALC256_FIXUP_SET_COEF_DEFAULTS,
 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
 	ALC233_FIXUP_NO_AUDIO_JACK,
 };
···
 		.chained = true,
 		.chain_id = ALC269_FIXUP_HEADSET_MODE,
 	},
-	[ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {
+	[ALC256_FIXUP_SET_COEF_DEFAULTS] = {
 		.type = HDA_FIXUP_FUNC,
-		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
+		.v.func = alc256_fixup_set_coef_defaults,
 	},
 	[ALC245_FIXUP_HP_GPIO_LED] = {
 		.type = HDA_FIXUP_FUNC,
···
 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
-	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),
+	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
···
 	}
 }
 
+static void alc897_hp_automute_hook(struct hda_codec *codec,
+				    struct hda_jack_callback *jack)
+{
+	struct alc_spec *spec = codec->spec;
+	int vref;
+
+	snd_hda_gen_hp_automute(codec, jack);
+	vref = spec->gen.hp_jack_present ? (PIN_HP | AC_PINCTL_VREF_100) : PIN_HP;
+	snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_PIN_WIDGET_CONTROL,
+			    vref);
+}
+
+static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+					    const struct hda_fixup *fix, int action)
+{
+	struct alc_spec *spec = codec->spec;
+
+	if (action == HDA_FIXUP_ACT_PRE_PROBE)
+		spec->gen.hp_automute_hook = alc897_hp_automute_hook;
+}
+
 static const struct coef_fw alc668_coefs[] = {
 	WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03, 0x0),
 	WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06, 0x0), WRITE_COEF(0x07, 0x0f80),
···
 	ALC668_FIXUP_ASUS_NO_HEADSET_MIC,
 	ALC668_FIXUP_HEADSET_MIC,
 	ALC668_FIXUP_MIC_DET_COEF,
+	ALC897_FIXUP_LENOVO_HEADSET_MIC,
+	ALC897_FIXUP_HEADSET_MIC_PIN,
 };
 
 static const struct hda_fixup alc662_fixups[] = {
···
 			{}
 		},
 	},
+	[ALC897_FIXUP_LENOVO_HEADSET_MIC] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc897_fixup_lenovo_headset_mic,
+	},
+	[ALC897_FIXUP_HEADSET_MIC_PIN] = {
+		.type = HDA_FIXUP_PINS,
+		.v.pins = (const struct hda_pintbl[]) {
+			{ 0x1a, 0x03a11050 },
+			{ }
+		},
+		.chained = true,
+		.chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC
+	},
 };
 
 static const struct snd_pci_quirk alc662_fixup_tbl[] = {
···
 	SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
+	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+	SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
 	SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+2-1
sound/soc/amd/yc/pci-acp6x.c
···
 {
 	struct acp6x_dev_data *adata;
 	struct platform_device_info pdevinfo[ACP6x_DEVS];
-	int ret, index;
+	int index = 0;
 	int val = 0x00;
 	u32 addr;
 	unsigned int irqflags;
+	int ret;
 
 	irqflags = IRQF_SHARED;
 	/* Yellow Carp device check */
+6-4
sound/soc/codecs/rt5682.c
···
 
 	for (i = 0; i < RT5682_DAI_NUM_CLKS; ++i) {
 		struct clk_init_data init = { };
+		struct clk_parent_data parent_data;
+		const struct clk_hw *parent;
 
 		dai_clk_hw = &rt5682->dai_clks_hw[i];
···
 		case RT5682_DAI_WCLK_IDX:
 			/* Make MCLK the parent of WCLK */
 			if (rt5682->mclk) {
-				init.parent_data = &(struct clk_parent_data){
+				parent_data = (struct clk_parent_data){
 					.fw_name = "mclk",
 				};
+				init.parent_data = &parent_data;
 				init.num_parents = 1;
 			}
 			break;
 		case RT5682_DAI_BCLK_IDX:
 			/* Make WCLK the parent of BCLK */
-			init.parent_hws = &(const struct clk_hw *){
-				&rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX]
-			};
+			parent = &rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX];
+			init.parent_hws = &parent;
 			init.num_parents = 1;
 			break;
 		default:
+6-4
sound/soc/codecs/rt5682s.c
···
 
 	for (i = 0; i < RT5682S_DAI_NUM_CLKS; ++i) {
 		struct clk_init_data init = { };
+		struct clk_parent_data parent_data;
+		const struct clk_hw *parent;
 
 		dai_clk_hw = &rt5682s->dai_clks_hw[i];
···
 		case RT5682S_DAI_WCLK_IDX:
 			/* Make MCLK the parent of WCLK */
 			if (rt5682s->mclk) {
-				init.parent_data = &(struct clk_parent_data){
+				parent_data = (struct clk_parent_data){
 					.fw_name = "mclk",
 				};
+				init.parent_data = &parent_data;
 				init.num_parents = 1;
 			}
 			break;
 		case RT5682S_DAI_BCLK_IDX:
 			/* Make WCLK the parent of BCLK */
-			init.parent_hws = &(const struct clk_hw *){
-				&rt5682s->dai_clks_hw[RT5682S_DAI_WCLK_IDX]
-			};
+			parent = &rt5682s->dai_clks_hw[RT5682S_DAI_WCLK_IDX];
+			init.parent_hws = &parent;
 			init.num_parents = 1;
 			break;
 		default:
+93-33
sound/soc/codecs/wcd934x.c
···
 	int value = ucontrol->value.integer.value[0];
 	int sel;
 
+	if (wcd->comp_enabled[comp] == value)
+		return 0;
+
 	wcd->comp_enabled[comp] = value;
 	sel = value ? WCD934X_HPH_GAIN_SRC_SEL_COMPANDER :
 		WCD934X_HPH_GAIN_SRC_SEL_REGISTER;
···
 	case COMPANDER_8:
 		break;
 	default:
-		break;
+		return 0;
 	}
 
-	return 0;
+	return 1;
 }
 
 static int wcd934x_rx_hph_mode_get(struct snd_kcontrol *kc,
···
 	return 0;
 }
 
+static int slim_rx_mux_to_dai_id(int mux)
+{
+	int aif_id;
+
+	switch (mux) {
+	case 1:
+		aif_id = AIF1_PB;
+		break;
+	case 2:
+		aif_id = AIF2_PB;
+		break;
+	case 3:
+		aif_id = AIF3_PB;
+		break;
+	case 4:
+		aif_id = AIF4_PB;
+		break;
+	default:
+		aif_id = -1;
+		break;
+	}
+
+	return aif_id;
+}
+
 static int slim_rx_mux_put(struct snd_kcontrol *kc,
 			   struct snd_ctl_elem_value *ucontrol)
 {
···
 	struct wcd934x_codec *wcd = dev_get_drvdata(w->dapm->dev);
 	struct soc_enum *e = (struct soc_enum *)kc->private_value;
 	struct snd_soc_dapm_update *update = NULL;
+	struct wcd934x_slim_ch *ch, *c;
 	u32 port_id = w->shift;
+	bool found = false;
+	int mux_idx;
+	int prev_mux_idx = wcd->rx_port_value[port_id];
+	int aif_id;
 
-	if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0])
+	mux_idx = ucontrol->value.enumerated.item[0];
+
+	if (mux_idx == prev_mux_idx)
 		return 0;
 
-	wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0];
-
-	switch (wcd->rx_port_value[port_id]) {
+	switch (mux_idx) {
 	case 0:
-		list_del_init(&wcd->rx_chs[port_id].list);
+		aif_id = slim_rx_mux_to_dai_id(prev_mux_idx);
+		if (aif_id < 0)
+			return 0;
+
+		list_for_each_entry_safe(ch, c, &wcd->dai[aif_id].slim_ch_list, list) {
+			if (ch->port == port_id + WCD934X_RX_START) {
+				found = true;
+				list_del_init(&ch->list);
+				break;
+			}
+		}
+		if (!found)
+			return 0;
+
 		break;
-	case 1:
-		list_add_tail(&wcd->rx_chs[port_id].list,
-			      &wcd->dai[AIF1_PB].slim_ch_list);
+	case 1 ... 4:
+		aif_id = slim_rx_mux_to_dai_id(mux_idx);
+		if (aif_id < 0)
+			return 0;
+
+		if (list_empty(&wcd->rx_chs[port_id].list)) {
+			list_add_tail(&wcd->rx_chs[port_id].list,
+				      &wcd->dai[aif_id].slim_ch_list);
+		} else {
+			dev_err(wcd->dev, "SLIM_RX%d PORT is busy\n", port_id);
+			return 0;
+		}
 		break;
-	case 2:
-		list_add_tail(&wcd->rx_chs[port_id].list,
-			      &wcd->dai[AIF2_PB].slim_ch_list);
-		break;
-	case 3:
-		list_add_tail(&wcd->rx_chs[port_id].list,
-			      &wcd->dai[AIF3_PB].slim_ch_list);
-		break;
-	case 4:
-		list_add_tail(&wcd->rx_chs[port_id].list,
-			      &wcd->dai[AIF4_PB].slim_ch_list);
-		break;
+
 	default:
-		dev_err(wcd->dev, "Unknown AIF %d\n",
-			wcd->rx_port_value[port_id]);
+		dev_err(wcd->dev, "Unknown AIF %d\n", mux_idx);
 		goto err;
 	}
 
+	wcd->rx_port_value[port_id] = mux_idx;
 	snd_soc_dapm_mux_update_power(w->dapm, kc, wcd->rx_port_value[port_id],
 				      e, update);
 
-	return 0;
+	return 1;
 err:
 	return -EINVAL;
 }
···
 	struct soc_mixer_control *mixer =
 		(struct soc_mixer_control *)kc->private_value;
 	int enable = ucontrol->value.integer.value[0];
+	struct wcd934x_slim_ch *ch, *c;
 	int dai_id = widget->shift;
 	int port_id = mixer->shift;
 
 	if (enable == wcd->tx_port_value[port_id])
 		return 0;
 
+	if (enable) {
+		if (list_empty(&wcd->tx_chs[port_id].list)) {
+			list_add_tail(&wcd->tx_chs[port_id].list,
+				      &wcd->dai[dai_id].slim_ch_list);
+		} else {
+			dev_err(wcd->dev, "SLIM_TX%d PORT is busy\n", port_id);
+			return 0;
+		}
+	} else {
+		bool found = false;
+
+		list_for_each_entry_safe(ch, c, &wcd->dai[dai_id].slim_ch_list, list) {
+			if (ch->port == port_id) {
+				found = true;
+				list_del_init(&wcd->tx_chs[port_id].list);
+				break;
+			}
+		}
+		if (!found)
+			return 0;
+	}
+
 	wcd->tx_port_value[port_id] = enable;
-
-	if (enable)
-		list_add_tail(&wcd->tx_chs[port_id].list,
-			      &wcd->dai[dai_id].slim_ch_list);
-	else
-		list_del_init(&wcd->tx_chs[port_id].list);
-
 	snd_soc_dapm_mixer_update_power(widget->dapm, kc, enable, update);
 
-	return 0;
+	return 1;
 }
 
 static const struct snd_kcontrol_new aif1_slim_cap_mixer[] = {
+12-4
sound/soc/codecs/wsa881x.c
···
 
 		usleep_range(1000, 1010);
 	}
-	return 0;
+
+	return 1;
 }
 
 static int wsa881x_get_port(struct snd_kcontrol *kcontrol,
···
 		(struct soc_mixer_control *)kcontrol->private_value;
 	int portidx = mixer->reg;
 
-	if (ucontrol->value.integer.value[0])
+	if (ucontrol->value.integer.value[0]) {
+		if (data->port_enable[portidx])
+			return 0;
+
 		data->port_enable[portidx] = true;
-	else
+	} else {
+		if (!data->port_enable[portidx])
+			return 0;
+
 		data->port_enable[portidx] = false;
+	}
 
 	if (portidx == WSA881X_PORT_BOOST) /* Boost Switch */
 		wsa881x_boost_ctrl(comp, data->port_enable[portidx]);
 
-	return 0;
+	return 1;
 }
 
 static const char * const smart_boost_lvl_text[] = {
···
 /* perf sample has 16 bits size limit */
 #define PERF_SAMPLE_MAX_SIZE (1 << 16)
 
+/* number of registers is bound by the number of bits in regs_dump::mask (64) */
+#define PERF_SAMPLE_REGS_CACHE_SIZE (8 * sizeof(u64))
+
 struct regs_dump {
 	u64 abi;
 	u64 mask;
 	u64 *regs;
 
 	/* Cached values/mask filled by first register access. */
-	u64 cache_regs[PERF_REGS_MAX];
+	u64 cache_regs[PERF_SAMPLE_REGS_CACHE_SIZE];
 	u64 cache_mask;
 };
···
 	ip link set $h1.10 address $h1_10_mac
 }
 
+rif_mac_profile_consolidation_test()
+{
+	local count=$1; shift
+	local h1_20_mac
+
+	RET=0
+
+	if [[ $count -eq 1 ]]; then
+		return
+	fi
+
+	h1_20_mac=$(mac_get $h1.20)
+
+	# Set the MAC of $h1.20 to that of $h1.10 and confirm that they are
+	# using the same MAC profile.
+	ip link set $h1.20 address 00:11:11:11:11:11
+	check_err $?
+
+	occ=$(devlink -j resource show $DEVLINK_DEV \
+	      | jq '.[][][] | select(.name=="rif_mac_profiles") |.["occ"]')
+
+	[[ $occ -eq $((count - 1)) ]]
+	check_err $? "MAC profile occupancy did not decrease"
+
+	log_test "RIF MAC profile consolidation"
+
+	ip link set $h1.20 address $h1_20_mac
+}
+
 rif_mac_profile_shared_replacement_test()
 {
 	local count=$1; shift
···
 	create_max_rif_mac_profiles $count
 
 	rif_mac_profile_replacement_test
+	rif_mac_profile_consolidation_test $count
 	rif_mac_profile_shared_replacement_test $count
 }
···
+// SPDX-License-Identifier: GPL-2.0
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+
+#define VCPU_ID 1
+
+static void guest_ins_port80(uint8_t *buffer, unsigned int count)
+{
+	unsigned long end;
+
+	if (count == 2)
+		end = (unsigned long)buffer + 1;
+	else
+		end = (unsigned long)buffer + 8192;
+
+	asm volatile("cld; rep; insb" : "+D"(buffer), "+c"(count) : "d"(0x80) : "memory");
+	GUEST_ASSERT_1(count == 0, count);
+	GUEST_ASSERT_2((unsigned long)buffer == end, buffer, end);
+}
+
+static void guest_code(void)
+{
+	uint8_t buffer[8192];
+	int i;
+
+	/*
+	 * Special case tests. main() will adjust RCX 2 => 1 and 3 => 8192 to
+	 * test that KVM doesn't explode when userspace modifies the "count" on
+	 * a userspace I/O exit. KVM isn't required to play nice with the I/O
+	 * itself as KVM doesn't support manipulating the count, it just needs
+	 * to not explode or overflow a buffer.
+	 */
+	guest_ins_port80(buffer, 2);
+	guest_ins_port80(buffer, 3);
+
+	/* Verify KVM fills the buffer correctly when not stuffing RCX. */
+	memset(buffer, 0, sizeof(buffer));
+	guest_ins_port80(buffer, 8192);
+	for (i = 0; i < 8192; i++)
+		GUEST_ASSERT_2(buffer[i] == 0xaa, i, buffer[i]);
+
+	GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_regs regs;
+	struct kvm_run *run;
+	struct kvm_vm *vm;
+	struct ucall uc;
+	int rc;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+	run = vcpu_state(vm, VCPU_ID);
+
+	memset(&regs, 0, sizeof(regs));
+
+	while (1) {
+		rc = _vcpu_run(vm, VCPU_ID);
+
+		TEST_ASSERT(rc == 0, "vcpu_run failed: %d\n", rc);
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+			    "Unexpected exit reason: %u (%s),\n",
+			    run->exit_reason,
+			    exit_reason_str(run->exit_reason));
+
+		if (get_ucall(vm, VCPU_ID, &uc))
+			break;
+
+		TEST_ASSERT(run->io.port == 0x80,
+			    "Expected I/O at port 0x80, got port 0x%x\n", run->io.port);
+
+		/*
+		 * Modify the rep string count in RCX: 2 => 1 and 3 => 8192.
+		 * Note, this abuses KVM's batching of rep string I/O to avoid
+		 * getting stuck in an infinite loop. That behavior isn't in
+		 * scope from a testing perspective as it's not ABI in any way,
+		 * i.e. it really is abusing internal KVM knowledge.
+		 */
+		vcpu_regs_get(vm, VCPU_ID, &regs);
+		if (regs.rcx == 2)
+			regs.rcx = 1;
+		if (regs.rcx == 3)
+			regs.rcx = 8192;
+		memset((void *)run + run->io.data_offset, 0xaa, 4096);
+		vcpu_regs_set(vm, VCPU_ID, &regs);
+	}
+
+	switch (uc.cmd) {
+	case UCALL_DONE:
+		break;
+	case UCALL_ABORT:
+		TEST_FAIL("%s at %s:%ld : argN+1 = 0x%lx, argN+2 = 0x%lx",
+			  (const char *)uc.args[0], __FILE__, uc.args[1],
+			  uc.args[2], uc.args[3]);
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+	}
+
+	kvm_vm_free(vm);
+	return 0;
+}
+34-11
tools/testing/selftests/net/fcnal-test.sh
···
 	ip netns del ${NSC} >/dev/null 2>&1
 }
 
+cleanup_vrf_dup()
+{
+	ip link del ${NSA_DEV2} >/dev/null 2>&1
+	ip netns pids ${NSC} | xargs kill 2>/dev/null
+	ip netns del ${NSC} >/dev/null 2>&1
+}
+
+setup_vrf_dup()
+{
+	# some VRF tests use ns-C which has the same config as
+	# ns-B but for a device NOT in the VRF
+	create_ns ${NSC} "-" "-"
+	connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
+		   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
+}
+
 setup()
 {
 	local with_vrf=${1}
···
 
 		ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}
 		ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}
-
-		# some VRF tests use ns-C which has the same config as
-		# ns-B but for a device NOT in the VRF
-		create_ns ${NSC} "-" "-"
-		connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
-			   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
 	else
 		ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}
 		ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}
···
 	log_test_addr ${a} $? 1 "Global server, local connection"
 
 	# run MD5 tests
+	setup_vrf_dup
 	ipv4_tcp_md5
+	cleanup_vrf_dup
 
 	#
 	# enable VRF global server
···
 	for a in ${NSA_IP} ${VRF_IP}
 	do
 		log_start
+		show_hint "Socket not bound to VRF, but address is in VRF"
 		run_cmd nettest -s -R -P icmp -l ${a} -b
-		log_test_addr ${a} $? 0 "Raw socket bind to local address"
+		log_test_addr ${a} $? 1 "Raw socket bind to local address"
 
 		log_start
 		run_cmd nettest -s -R -P icmp -l ${a} -I ${NSA_DEV} -b
···
 		log_start
 		show_hint "Fails since VRF device does not support linklocal or multicast"
 		run_cmd ${ping6} -c1 -w1 ${a}
-		log_test_addr ${a} $? 2 "ping out, VRF bind"
+		log_test_addr ${a} $? 1 "ping out, VRF bind"
 	done
 
 	for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
···
 	log_test_addr ${a} $? 1 "Global server, local connection"
 
 	# run MD5 tests
+	setup_vrf_dup
 	ipv6_tcp_md5
+	cleanup_vrf_dup
 
 	#
 	# enable VRF global server
···
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
 	log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
 
+	# Sadly, the kernel allows binding a socket to a device and then
+	# binding to an address not on the device. So this test passes
+	# when it really should not
 	a=${NSA_LO_IP6}
 	log_start
-	show_hint "Should fail with 'Cannot assign requested address'"
+	show_hint "Technically should fail since address is not on device but kernel allows"
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
-	log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
+	log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
 }
 
 ipv6_addr_bind_vrf()
···
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
 	log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
 
+	# Sadly, the kernel allows binding a socket to a device and then
+	# binding to an address not on the device. The only restriction
+	# is that the address is valid in the L3 domain. So this test
+	# passes when it really should not
 	a=${VRF_IP6}
 	log_start
+	show_hint "Technically should fail since address is not on device but kernel allows"
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
-	log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
+	log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
 
 	a=${NSA_LO_IP6}
 	log_start
···
 NETIFS[p6]=veth5
 NETIFS[p7]=veth6
 NETIFS[p8]=veth7
+NETIFS[p9]=veth8
+NETIFS[p10]=veth9
 
 # Port that does not have a cable connected.
 NETIF_NO_CABLE=eth8
+1-1
tools/testing/selftests/net/icmp_redirect.sh
···
 		ip -netns h1 ro get ${H1_VRF_ARG} ${H2_N2_IP} | \
 		grep -E -v 'mtu|redirected' | grep -q "cache"
 	fi
-	log_test $? 0 "IPv4: ${desc}"
+	log_test $? 0 "IPv4: ${desc}" 0
 
 	# No PMTU info for test "redirect" and "mtu exception plus redirect"
 	if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
+1-1
tools/testing/selftests/net/toeplitz.c
···
 	bool have_toeplitz = false;
 	int index, c;
 
-	while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:u:v", long_options, &index)) != -1) {
+	while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:uv", long_options, &index)) != -1) {
 		switch (c) {
 		case '4':
 			cfg_family = AF_INET;