···
 .. kernel-doc:: drivers/firmware/edd.c
    :internal:

+Generic System Framebuffers Interface
+-------------------------------------
+
+.. kernel-doc:: drivers/firmware/sysfb.c
+   :export:
+
 Intel Stratix10 SoC Service Layer
 ---------------------------------
 Some features of the Intel Stratix10 SoC require a level of privilege
+36
Documentation/process/maintainer-netdev.rst
···
 netdev FAQ
 ==========

+tl;dr
+-----
+
+ - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
+ - for fixes the ``Fixes:`` tag is required, regardless of the tree
+ - don't post large series (> 15 patches), break them up
+ - don't repost your patches within one 24h period
+ - reverse xmas tree
+
 What is netdev?
 ---------------
 It is a mailing list for all network-related Linux stuff. This
···
 version that should be applied. If there is any doubt, the maintainer
 will reply and ask what should be done.

+How do I divide my work into patches?
+-------------------------------------
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. Larger series takes longer
+to review as reviewers will defer looking at it until they find a large
+chunk of time. A small series can be reviewed in a short time, so Maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the mailing
+list traffic.
+
 I made changes to only a few patches in a patch series should I resend only those changed?
 ------------------------------------------------------------------------------------------
 No, please resend the entire patch series and make sure you do number your
···
   /* foobar blah blah blah
    * another line of text
    */
+
+What is "reverse xmas tree"?
+----------------------------
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+  struct scatterlist *sg;
+  struct sk_buff *skb;
+  int err, i;
+
+If there are dependencies between the variables preventing the ordering
+move the initialization out of line.

 I am working in existing code which uses non-standard formatting. Which formatting should I use?
 ------------------------------------------------------------------------------------------------
+109-38
MAINTAINERS
···
 ACPI VIOT DRIVER
 M:	Jean-Philippe Brucker <jean-philippe@linaro.org>
 L:	linux-acpi@vger.kernel.org
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 F:	drivers/acpi/viot.c
···
 AMD IOMMU (AMD-VI)
 M:	Joerg Roedel <joro@8bytes.org>
 R:	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 ARM/QUALCOMM SUPPORT
 M:	Andy Gross <agross@kernel.org>
 M:	Bjorn Andersson <bjorn.andersson@linaro.org>
+R:	Konrad Dybcio <konrad.dybcio@somainline.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
···
 F:	Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
 F:	drivers/iio/accel/bma400*

-BPF (Safe dynamic programs and tools)
+BPF [GENERAL] (Safe Dynamic Programs and Tools)
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Andrii Nakryiko <andrii@kernel.org>
-R:	Martin KaFai Lau <kafai@fb.com>
-R:	Song Liu <songliubraving@fb.com>
+R:	Martin KaFai Lau <martin.lau@linux.dev>
+R:	Song Liu <song@kernel.org>
 R:	Yonghong Song <yhs@fb.com>
 R:	John Fastabend <john.fastabend@gmail.com>
 R:	KP Singh <kpsingh@kernel.org>
-L:	netdev@vger.kernel.org
+R:	Stanislav Fomichev <sdf@google.com>
+R:	Hao Luo <haoluo@google.com>
+R:	Jiri Olsa <jolsa@kernel.org>
 L:	bpf@vger.kernel.org
 S:	Supported
 W:	https://bpf.io/
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
-N:	bpf
-K:	bpf

 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/arm/net/
···
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Zi Shen Lim <zlim.lnx@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/arm64/net/
···
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:	Johan Almbladh <johan.almbladh@anyfinetworks.com>
 M:	Paul Burton <paulburton@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/mips/net/

 BPF JIT for NFP NICs
 M:	Jakub Kicinski <kuba@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	drivers/net/ethernet/netronome/nfp/bpf/
···
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Michael Ellerman <mpe@ellerman.id.au>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/powerpc/net/
···
 BPF JIT for RISC-V (32-bit)
 M:	Luke Nelson <luke.r.nels@gmail.com>
 M:	Xi Wang <xi.wang@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···
 BPF JIT for RISC-V (64-bit)
 M:	Björn Töpel <bjorn@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···
 M:	Ilya Leoshkevich <iii@linux.ibm.com>
 M:	Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/s390/net/
···
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/sparc/net/

 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/x86/net/bpf_jit_comp32.c
···
 BPF JIT for X86 64-BIT
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c

-BPF LSM (Security Audit and Enforcement using BPF)
+BPF [CORE]
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/verifier.c
+F:	kernel/bpf/tnum.c
+F:	kernel/bpf/core.c
+F:	kernel/bpf/syscall.c
+F:	kernel/bpf/dispatcher.c
+F:	kernel/bpf/trampoline.c
+F:	include/linux/bpf*
+F:	include/linux/filter.h
+
+BPF [BTF]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/btf.c
+F:	include/linux/btf*
+
+BPF [TRACING]
+M:	Song Liu <song@kernel.org>
+R:	Jiri Olsa <jolsa@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/trace/bpf_trace.c
+F:	kernel/bpf/stackmap.c
+
+BPF [NETWORKING] (tc BPF, sock_addr)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	net/core/filter.c
+F:	net/sched/act_bpf.c
+F:	net/sched/cls_bpf.c
+
+BPF [NETWORKING] (struct_ops, reuseport)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/bpf_struct*
+
+BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
 M:	KP Singh <kpsingh@kernel.org>
 R:	Florent Revest <revest@chromium.org>
 R:	Brendan Jackman <jackmanb@chromium.org>
···
 F:	kernel/bpf/bpf_lsm.c
 F:	security/bpf/

-BPF L7 FRAMEWORK
+BPF [STORAGE & CGROUPS]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/cgroup.c
+F:	kernel/bpf/*storage.c
+F:	kernel/bpf/bpf_lru*
+
+BPF [RINGBUF]
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/ringbuf.c
+
+BPF [ITERATOR]
+M:	Yonghong Song <yhs@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/*iter.c
+
+BPF [L7 FRAMEWORK] (sockmap)
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Jakub Sitnicki <jakub@cloudflare.com>
 L:	netdev@vger.kernel.org
···
 F:	net/ipv4/udp_bpf.c
 F:	net/unix/unix_bpf.c

-BPFTOOL
+BPF [LIBRARY] (libbpf)
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/lib/bpf/
+
+BPF [TOOLING] (bpftool)
 M:	Quentin Monnet <quentin@isovalent.com>
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/disasm.*
 F:	tools/bpf/bpftool/
+
+BPF [SELFTESTS] (Test Runners & Infrastructure)
+M:	Andrii Nakryiko <andrii@kernel.org>
+R:	Mykola Lysenko <mykolal@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/testing/selftests/bpf/
+
+BPF [MISC]
+L:	bpf@vger.kernel.org
+S:	Odd Fixes
+K:	(?:\b|_)bpf(?:\b|_)

 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
 F:	Documentation/devicetree/bindings/clock/
 F:	drivers/clk/
+F:	include/dt-bindings/clock/
 F:	include/linux/clk-pr*
 F:	include/linux/clk/
 F:	include/linux/of_clk.h
···
 M:	Alison Schofield <alison.schofield@intel.com>
 M:	Vishal Verma <vishal.l.verma@intel.com>
 M:	Ira Weiny <ira.weiny@intel.com>
-M:	Ben Widawsky <ben.widawsky@intel.com>
+M:	Ben Widawsky <bwidawsk@kernel.org>
 M:	Dan Williams <dan.j.williams@intel.com>
 L:	linux-cxl@vger.kernel.org
 S:	Maintained
···
 M:	Christoph Hellwig <hch@lst.de>
 M:	Marek Szyprowski <m.szyprowski@samsung.com>
 R:	Robin Murphy <robin.murphy@arm.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 W:	http://git.infradead.org/users/hch/dma-mapping.git
···
 DMA MAPPING BENCHMARK
 M:	Xiang Chen <chenxiang66@hisilicon.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 F:	kernel/dma/map_benchmark.c
 F:	tools/testing/selftests/dma/
···
 EXYNOS SYSMMU (IOMMU) driver
 M:	Marek Szyprowski <m.szyprowski@samsung.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 F:	drivers/iommu/exynos-iommu.c
···
 M:	Cezary Rojewski <cezary.rojewski@intel.com>
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <liam.r.girdwood@linux.intel.com>
-M:	Jie Yang <yang.jie@linux.intel.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
+M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/intel/
···
 INTEL IOMMU (VT-d)
 M:	David Woodhouse <dwmw2@infradead.org>
 M:	Lu Baolu <baolu.lu@linux.intel.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 IOMMU DRIVERS
 M:	Joerg Roedel <joro@8bytes.org>
 M:	Will Deacon <will@kernel.org>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 MEDIATEK IOMMU DRIVER
 M:	Yong Wu <yong.wu@mediatek.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
···
 F:	sound/soc/codecs/tfa989x.c

 NXP-NCI NFC DRIVER
-R:	Charles Gorand <charles.gorand@effinnov.com>
 L:	linux-nfc@lists.01.org (subscribers-only)
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/net/nfc/nxp,nci.yaml
 F:	drivers/nfc/nxp-nci
···
 PIN CONTROLLER - INTEL
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
 M:	Andy Shevchenko <andy@kernel.org>
-S:	Maintained
+S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
 F:	drivers/pinctrl/intel/
···
 QCOM AUDIO (ASoC) DRIVERS
 M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
-M:	Banajit Goswami <bgoswami@codeaurora.org>
+M:	Banajit Goswami <bgoswami@quicinc.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/codecs/lpass-va-macro.c
···
 QUALCOMM IOMMU
 M:	Rob Clark <robdclark@gmail.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
 M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
···
 SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <lgirdwood@gmail.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
 M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
-M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
+R:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 M:	Daniel Baluta <daniel.baluta@nxp.com>
 L:	sound-open-firmware@alsa-project.org (moderated for non-subscribers)
 S:	Supported
···
 SWIOTLB SUBSYSTEM
 M:	Christoph Hellwig <hch@infradead.org>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 W:	http://git.infradead.org/users/hch/dma-mapping.git
···
 M:	Juergen Gross <jgross@suse.com>
 M:	Stefano Stabellini <sstabellini@kernel.org>
 L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 F:	arch/x86/xen/*swiotlb*
···
  * Copyright 2021 Google LLC.
  */

-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */

 / {
 	/* BOARD-SPECIFIC TOP LEVEL NODES */
···
  * Copyright 2020 Google LLC.
  */

-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */

 &ap_sar_sensor {
 	semtech,cs0-ground;
···
 /*
  * Verify a frameinfo structure. The return address should be a valid text
  * address. The frame pointer may be null if its the last frame, otherwise
- * the frame pointer should point to a location in the stack after the the
+ * the frame pointer should point to a location in the stack after the
  * top of the next frame up.
  */
 static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)
···
 # If you really need to reference something from prom_init.o add
 # it to the list below:

-grep "^CONFIG_KASAN=y$" .config >/dev/null
+grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
 if [ $? -eq 0 ]
 then
 	MEM_FUNCS="__memcpy __memset"
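The change above matters when the build points `KCONFIG_CONFIG` at a non-default config file, where a hard-coded `.config` would silently test the wrong file. A minimal standalone shell sketch of the same pattern (the temp-file setup is illustrative, not from the kernel script):

```shell
#!/bin/sh
# Sketch: test the config file named by $KCONFIG_CONFIG rather than
# hard-coding .config. Here a temp file stands in for a custom config.
KCONFIG_CONFIG=$(mktemp)
printf 'CONFIG_KASAN=y\n' > "$KCONFIG_CONFIG"

grep "^CONFIG_KASAN=y$" "$KCONFIG_CONFIG" >/dev/null
if [ $? -eq 0 ]
then
	echo "KASAN enabled"
fi
rm -f "$KCONFIG_CONFIG"
```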
+32-1
arch/powerpc/mm/mem.c
···
 	vm_unmap_aliases();
 }

+/*
+ * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
+ * updating.
+ */
+static void update_end_of_memory_vars(u64 start, u64 size)
+{
+	unsigned long end_pfn = PFN_UP(start + size);
+
+	if (end_pfn > max_pfn) {
+		max_pfn = end_pfn;
+		max_low_pfn = end_pfn;
+		high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
+	}
+}
+
+int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+		    struct mhp_params *params)
+{
+	int ret;
+
+	ret = __add_pages(nid, start_pfn, nr_pages, params);
+	if (ret)
+		return ret;
+
+	/* update max_pfn, max_low_pfn and high_memory */
+	update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+				  nr_pages << PAGE_SHIFT);
+
+	return ret;
+}
+
 int __ref arch_add_memory(int nid, u64 start, u64 size,
 			  struct mhp_params *params)
 {
···
 	rc = arch_create_linear_mapping(nid, start, size, params);
 	if (rc)
 		return rc;
-	rc = __add_pages(nid, start_pfn, nr_pages, params);
+	rc = add_pages(nid, start_pfn, nr_pages, params);
 	if (rc)
 		arch_remove_linear_mapping(start, size);
 	return rc;
···
 		     NULL) != pnv_get_random_long_early)
 		return 0;

-	for_each_compatible_node(dn, NULL, "ibm,power-rng") {
-		if (rng_create(dn))
-			continue;
-		/* Create devices for hwrng driver */
-		of_platform_device_create(dn, NULL, NULL);
-	}
+	for_each_compatible_node(dn, NULL, "ibm,power-rng")
+		rng_create(dn);

 	if (!ppc_md.get_random_seed)
 		return 0;
···

 static int __init pnv_rng_late_init(void)
 {
+	struct device_node *dn;
 	unsigned long v;
+
 	/* In case it wasn't called during init for some other reason. */
 	if (ppc_md.get_random_seed == pnv_get_random_long_early)
 		pnv_get_random_long_early(&v);
+
+	if (ppc_md.get_random_seed == powernv_get_random_long) {
+		for_each_compatible_node(dn, NULL, "ibm,power-rng")
+			of_platform_device_create(dn, NULL, NULL);
+	}
+
 	return 0;
 }
 machine_subsys_initcall(powernv, pnv_rng_late_init);
···
 config KEXEC_FILE
 	bool "kexec file based system call"
 	select KEXEC_CORE
-	select BUILD_BIN2C
 	depends on CRYPTO
 	depends on CRYPTO_SHA256
 	depends on CRYPTO_SHA256_S390
-217
arch/s390/crypto/arch_random.c
···
  *
  * Copyright IBM Corp. 2017, 2020
  * Author(s): Harald Freudenberger
- *
- * The s390_arch_random_generate() function may be called from random.c
- * in interrupt context. So this implementation does the best to be very
- * fast. There is a buffer of random data which is asynchronously checked
- * and filled by a workqueue thread.
- * If there are enough bytes in the buffer the s390_arch_random_generate()
- * just delivers these bytes. Otherwise false is returned until the
- * worker thread refills the buffer.
- * The worker fills the rng buffer by pulling fresh entropy from the
- * high quality (but slow) true hardware random generator. This entropy
- * is then spread over the buffer with an pseudo random generator PRNG.
- * As the arch_get_random_seed_long() fetches 8 bytes and the calling
- * function add_interrupt_randomness() counts this as 1 bit entropy the
- * distribution needs to make sure there is in fact 1 bit entropy contained
- * in 8 bytes of the buffer. The current values pull 32 byte entropy
- * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
- * will contain 1 bit of entropy.
- * The worker thread is rescheduled based on the charge level of the
- * buffer but at least with 500 ms delay to avoid too much CPU consumption.
- * So the max. amount of rng data delivered via arch_get_random_seed is
- * limited to 4k bytes per second.
  */

 #include <linux/kernel.h>
 #include <linux/atomic.h>
 #include <linux/random.h>
-#include <linux/slab.h>
 #include <linux/static_key.h>
-#include <linux/workqueue.h>
-#include <linux/moduleparam.h>
 #include <asm/cpacf.h>

 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);

 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
 EXPORT_SYMBOL(s390_arch_random_counter);
-
-#define ARCH_REFILL_TICKS		(HZ/2)
-#define ARCH_PRNG_SEED_SIZE		32
-#define ARCH_RNG_BUF_SIZE		2048
-
-static DEFINE_SPINLOCK(arch_rng_lock);
-static u8 *arch_rng_buf;
-static unsigned int arch_rng_buf_idx;
-
-static void arch_rng_refill_buffer(struct work_struct *);
-static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
-
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
-{
-	/* max hunk is ARCH_RNG_BUF_SIZE */
-	if (nbytes > ARCH_RNG_BUF_SIZE)
-		return false;
-
-	/* lock rng buffer */
-	if (!spin_trylock(&arch_rng_lock))
-		return false;
-
-	/* try to resolve the requested amount of bytes from the buffer */
-	arch_rng_buf_idx -= nbytes;
-	if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
-		memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
-		atomic64_add(nbytes, &s390_arch_random_counter);
-		spin_unlock(&arch_rng_lock);
-		return true;
-	}
-
-	/* not enough bytes in rng buffer, refill is done asynchronously */
-	spin_unlock(&arch_rng_lock);
-
-	return false;
-}
-EXPORT_SYMBOL(s390_arch_random_generate);
-
-static void arch_rng_refill_buffer(struct work_struct *unused)
-{
-	unsigned int delay = ARCH_REFILL_TICKS;
-
-	spin_lock(&arch_rng_lock);
-	if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
-		/* buffer is exhausted and needs refill */
-		u8 seed[ARCH_PRNG_SEED_SIZE];
-		u8 prng_wa[240];
-		/* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
-		cpacf_trng(NULL, 0, seed, sizeof(seed));
-		/* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
-		memset(prng_wa, 0, sizeof(prng_wa));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-			   &prng_wa, NULL, 0, seed, sizeof(seed));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
-			   &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
-		arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
-	}
-	delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
-	spin_unlock(&arch_rng_lock);
-
-	/* kick next check */
-	queue_delayed_work(system_long_wq, &arch_rng_work, delay);
-}
-
-/*
- * Here follows the implementation of s390_arch_get_random_long().
- *
- * The random longs to be pulled by arch_get_random_long() are
- * prepared in an 4K buffer which is filled from the NIST 800-90
- * compliant s390 drbg. By default the random long buffer is refilled
- * 256 times before the drbg itself needs a reseed. The reseed of the
- * drbg is done with 32 bytes fetched from the high quality (but slow)
- * trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256
- * bits of entropy are spread over 256 * 4KB = 1MB serving 131072
- * arch_get_random_long() invocations before reseeded.
- *
- * How often the 4K random long buffer is refilled with the drbg
- * before the drbg is reseeded can be adjusted. There is a module
- * parameter 's390_arch_rnd_long_drbg_reseed' accessible via
- * /sys/module/arch_random/parameters/rndlong_drbg_reseed
- * or as kernel command line parameter
- * arch_random.rndlong_drbg_reseed=<value>
- * This parameter tells how often the drbg fills the 4K buffer before
- * it is re-seeded by fresh entropy from the trng.
- * A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
- * KB with 32 bytes of fresh entropy pulled from the trng. So a value
- * of 16 would result in 256 bits entropy per 64 KB.
- * A value of 256 results in 1MB of drbg output before a reseed of the
- * drbg is done. So this would spread the 256 bits of entropy among 1MB.
- * Setting this parameter to 0 forces the reseed to take place every
- * time the 4K buffer is depleted, so the entropy rises to 256 bits
- * entropy per 4K or 0.5 bit entropy per arch_get_random_long(). With
- * setting this parameter to negative values all this effort is
- * disabled, arch_get_random long() returns false and thus indicating
- * that the arch_get_random_long() feature is disabled at all.
- */
-
-static unsigned long rndlong_buf[512];
-static DEFINE_SPINLOCK(rndlong_lock);
-static int rndlong_buf_index;
-
-static int rndlong_drbg_reseed = 256;
-module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600);
-MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed");
-
-static inline void refill_rndlong_buf(void)
-{
-	static u8 prng_ws[240];
-	static int drbg_counter;
-
-	if (--drbg_counter < 0) {
-		/* need to re-seed the drbg */
-		u8 seed[32];
-
-		/* fetch seed from trng */
-		cpacf_trng(NULL, 0, seed, sizeof(seed));
-		/* seed drbg */
-		memset(prng_ws, 0, sizeof(prng_ws));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-			   &prng_ws, NULL, 0, seed, sizeof(seed));
-		/* re-init counter for drbg */
-		drbg_counter = rndlong_drbg_reseed;
-	}
-
-	/* fill the arch_get_random_long buffer from drbg */
-	cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws,
-		   (u8 *) rndlong_buf, sizeof(rndlong_buf),
-		   NULL, 0);
-}
-
-bool s390_arch_get_random_long(unsigned long *v)
-{
-	bool rc = false;
-	unsigned long flags;
-
-	/* arch_get_random_long() disabled ? */
-	if (rndlong_drbg_reseed < 0)
-		return false;
-
-	/* try to lock the random long lock */
-	if (!spin_trylock_irqsave(&rndlong_lock, flags))
-		return false;
-
-	if (--rndlong_buf_index >= 0) {
-		/* deliver next long value from the buffer */
-		*v = rndlong_buf[rndlong_buf_index];
-		rc = true;
-		goto out;
-	}
-
-	/* buffer is depleted and needs refill */
-	if (in_interrupt()) {
-		/* delay refill in interrupt context to next caller */
-		rndlong_buf_index = 0;
-		goto out;
-	}
-
-	/* refill random long buffer */
-	refill_rndlong_buf();
-	rndlong_buf_index = ARRAY_SIZE(rndlong_buf);
-
-	/* and provide one random long */
-	*v = rndlong_buf[--rndlong_buf_index];
-	rc = true;
-
-out:
-	spin_unlock_irqrestore(&rndlong_lock, flags);
-	return rc;
-}
-EXPORT_SYMBOL(s390_arch_get_random_long);
-
-static int __init s390_arch_random_init(void)
-{
-	/* all the needed PRNO subfunctions available ? */
-	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
-	    cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
-
-		/* alloc arch random working buffer */
-		arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
-		if (!arch_rng_buf)
-			return -ENOMEM;
-
-		/* kick worker queue job to fill the random buffer */
-		queue_delayed_work(system_long_wq,
-				   &arch_rng_work, ARCH_REFILL_TICKS);
-
-		/* enable arch random to the outside world */
-		static_branch_enable(&s390_arch_random_available);
-	}
-
-	return 0;
-}
-arch_initcall(s390_arch_random_init);
+7-7
arch/s390/include/asm/archrandom.h
···

 #include <linux/static_key.h>
 #include <linux/atomic.h>
+#include <asm/cpacf.h>

 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
 extern atomic64_t s390_arch_random_counter;

-bool s390_arch_get_random_long(unsigned long *v);
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
-
 static inline bool __must_check arch_get_random_long(unsigned long *v)
 {
-	if (static_branch_likely(&s390_arch_random_available))
-		return s390_arch_get_random_long(v);
 	return false;
 }
···
 static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
+		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+		return true;
 	}
 	return false;
 }
···
 static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
+		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+		return true;
 	}
 	return false;
 }
+3-3
arch/s390/include/asm/qdio.h
···
  * @sb_count: number of storage blocks
  * @sba: storage block element addresses
  * @dcount: size of storage block elements
- * @user0: user defineable value
- * @res4: reserved paramater
- * @user1: user defineable value
+ * @user0: user definable value
+ * @res4: reserved parameter
+ * @user1: user definable value
  */
 struct qaob {
 	u64 res0[6];
···

 /* Refer to drivers/acpi/cppc_acpi.c for the description of functions */

+bool cpc_supported_by_cpu(void)
+{
+	switch (boot_cpu_data.x86_vendor) {
+	case X86_VENDOR_AMD:
+	case X86_VENDOR_HYGON:
+		return boot_cpu_has(X86_FEATURE_CPPC);
+	}
+	return false;
+}
+
 bool cpc_ffh_supported(void)
 {
 	return true;
+3-1
arch/x86/kernel/head64.c
···

 /* Don't add a printk in there. printk relies on the PDA which is not initialized
    yet. */
-static void __init clear_bss(void)
+void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }

 static unsigned long get_cmd_line_ptr(void)
···
 extern void early_xen_iret_patch(void);

 /* First C function to be called on Xen boot */
-asmlinkage __visible void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 {
 	struct physdev_set_iopl set_iopl;
 	unsigned long initrd_start = 0;
 	int rc;

-	if (!xen_start_info)
+	if (!si)
 		return;
+
+	clear_bss();
+
+	xen_start_info = si;

 	__text_gen_insn(&early_xen_iret_patch,
 			JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
···
 	  CRC32c and CRC32 CRC algorithms implemented using mips crypto
 	  instructions, when available.
 
+config CRYPTO_CRC32_S390
+	tristate "CRC-32 algorithms"
+	depends on S390
+	select CRYPTO_HASH
+	select CRC32
+	help
+	  Select this option if you want to use hardware accelerated
+	  implementations of CRC algorithms. With this option, you
+	  can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
+	  and CRC-32C (Castagnoli).
+
+	  It is available with IBM z13 or later.
 
 config CRYPTO_XXHASH
 	tristate "xxHash hash algorithm"
···
 	  Extensions version 1 (AVX1), or Advanced Vector Extensions
 	  version 2 (AVX2) instructions, when available.
 
+config CRYPTO_SHA512_S390
+	tristate "SHA384 and SHA512 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA512 secure hash standard.
+
+	  It is available as of z10.
+
 config CRYPTO_SHA1_OCTEON
 	tristate "SHA1 digest algorithm (OCTEON)"
 	depends on CPU_CAVIUM_OCTEON
···
 	help
 	  SHA-1 secure hash standard (DFIPS 180-4) implemented
 	  using powerpc SPE SIMD instruction set.
+
+config CRYPTO_SHA1_S390
+	tristate "SHA1 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
+
+	  It is available as of z990.
 
 config CRYPTO_SHA256
 	tristate "SHA224 and SHA256 digest algorithm"
···
 	  SHA-256 secure hash standard (DFIPS 180-2) implemented
 	  using sparc64 crypto instructions, when available.
 
+config CRYPTO_SHA256_S390
+	tristate "SHA256 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA256 secure hash standard (DFIPS 180-2).
+
+	  It is available as of z9.
+
 config CRYPTO_SHA512
 	tristate "SHA384 and SHA512 digest algorithms"
 	select CRYPTO_HASH
···
 	  References:
 	  http://keccak.noekeon.org/
+
+config CRYPTO_SHA3_256_S390
+	tristate "SHA3_224 and SHA3_256 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA3_256 secure hash standard.
+
+	  It is available as of z14.
+
+config CRYPTO_SHA3_512_S390
+	tristate "SHA3_384 and SHA3_512 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA3_512 secure hash standard.
+
+	  It is available as of z14.
 
 config CRYPTO_SM3
 	tristate
···
 	help
 	  This is the x86_64 CLMUL-NI accelerated implementation of
 	  GHASH, the hash function used in GCM (Galois/Counter mode).
+
+config CRYPTO_GHASH_S390
+	tristate "GHASH hash function"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of GHASH,
+	  the hash function used in GCM (Galois/Counter mode).
+
+	  It is available as of z196.
 
 comment "Ciphers"
 
···
 	  timining attacks. Nevertheless it might be not as secure as other
 	  architecture specific assembler implementations that work on 1KB
 	  tables or 256 bytes S-boxes.
+
+config CRYPTO_AES_S390
+	tristate "AES cipher algorithms"
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_SKCIPHER
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  AES cipher algorithms (FIPS-197).
+
+	  As of z9 the ECB and CBC modes are hardware accelerated
+	  for 128 bit keys.
+	  As of z10 the ECB and CBC modes are hardware accelerated
+	  for all AES key sizes.
+	  As of z196 the CTR mode is hardware accelerated for all AES
+	  key sizes and XTS mode is hardware accelerated for 256 and
+	  512 bit keys.
 
 config CRYPTO_ANUBIS
 	tristate "Anubis cipher algorithm"
···
 	  algorithm are provided; regular processing one input block and
 	  one that processes three blocks parallel.
 
+config CRYPTO_DES_S390
+	tristate "DES and Triple DES cipher algorithms"
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_DES
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
+
+	  As of z990 the ECB and CBC mode are hardware accelerated.
+	  As of z196 the CTR mode is hardware accelerated.
+
 config CRYPTO_FCRYPT
 	tristate "FCrypt cipher algorithm"
 	select CRYPTO_ALGAPI
···
 	depends on CPU_MIPS32_R2
 	select CRYPTO_SKCIPHER
 	select CRYPTO_ARCH_HAVE_LIB_CHACHA
+
+config CRYPTO_CHACHA_S390
+	tristate "ChaCha20 stream cipher"
+	depends on S390
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_CHACHA_GENERIC
+	select CRYPTO_ARCH_HAVE_LIB_CHACHA
+	help
+	  This is the s390 SIMD implementation of the ChaCha20 stream
+	  cipher (RFC 7539).
+
+	  It is available as of z13.
 
 config CRYPTO_SEED
 	tristate "SEED cipher algorithm"
···
 bool osc_sb_native_usb4_support_confirmed;
 EXPORT_SYMBOL_GPL(osc_sb_native_usb4_support_confirmed);
 
-bool osc_sb_cppc_not_supported;
+bool osc_sb_cppc2_support_acked;
 
 static u8 sb_uuid_str[] = "0811B06E-4A27-44F9-8D60-3CBBC22E7B48";
 static void acpi_bus_osc_negotiate_platform_control(void)
···
 		return;
 	}
 
-#ifdef CONFIG_ACPI_CPPC_LIB
-	osc_sb_cppc_not_supported = !(capbuf_ret[OSC_SUPPORT_DWORD] &
-			(OSC_SB_CPC_SUPPORT | OSC_SB_CPCV2_SUPPORT));
-#endif
-
 	/*
 	 * Now run _OSC again with query flag clear and with the caps
 	 * supported by both the OS and the platform.
···
 
 	capbuf_ret = context.ret.pointer;
 	if (context.ret.length > OSC_SUPPORT_DWORD) {
+#ifdef CONFIG_ACPI_CPPC_LIB
+		osc_sb_cppc2_support_acked = capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_CPCV2_SUPPORT;
+#endif
+
 		osc_sb_apei_support_acked =
 			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
 		osc_pc_lpi_support_confirmed =
drivers/acpi/cppc_acpi.c | +18 -2
···
 }
 
 /**
+ * cpc_supported_by_cpu() - check if CPPC is supported by CPU
+ *
+ * Check if the architectural support for CPPC is present even
+ * if the _OSC hasn't prescribed it
+ *
+ * Return: true for supported, false for not supported
+ */
+bool __weak cpc_supported_by_cpu(void)
+{
+	return false;
+}
+
+/**
  * pcc_data_alloc() - Allocate the pcc_data memory for pcc subspace
  *
  * Check and allocate the cppc_pcc_data memory.
···
 	acpi_status status;
 	int ret = -ENODATA;
 
-	if (osc_sb_cppc_not_supported)
-		return -ENODEV;
+	if (!osc_sb_cppc2_support_acked) {
+		pr_debug("CPPC v2 _OSC not acked\n");
+		if (!cpc_supported_by_cpu())
+			return -ENODEV;
+	}
 
 	/* Parse the ACPI _CPC table for this CPU. */
 	status = acpi_evaluate_object_typed(handle, "_CPC", NULL, &output,
···
 	/* Ensure that all references to the link object have been dropped. */
 	device_link_synchronize_removal();
 
-	pm_runtime_release_supplier(link, true);
+	pm_runtime_release_supplier(link);
+	/*
+	 * If supplier_preactivated is set, the link has been dropped between
+	 * the pm_runtime_get_suppliers() and pm_runtime_put_suppliers() calls
+	 * in __driver_probe_device().  In that case, drop the supplier's
+	 * PM-runtime usage counter to remove the reference taken by
+	 * pm_runtime_get_suppliers().
+	 */
+	if (link->supplier_preactivated)
+		pm_runtime_put_noidle(link->supplier);
+
+	pm_request_idle(link->supplier);
 
 	put_device(link->consumer);
 	put_device(link->supplier);
drivers/base/power/runtime.c | +10 -24
···
 /**
  * pm_runtime_release_supplier - Drop references to device link's supplier.
  * @link: Target device link.
- * @check_idle: Whether or not to check if the supplier device is idle.
  *
- * Drop all runtime PM references associated with @link to its supplier device
- * and if @check_idle is set, check if that device is idle (and so it can be
- * suspended).
+ * Drop all runtime PM references associated with @link to its supplier device.
  */
-void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
+void pm_runtime_release_supplier(struct device_link *link)
 {
 	struct device *supplier = link->supplier;
···
 	while (refcount_dec_not_one(&link->rpm_active) &&
 	       atomic_read(&supplier->power.usage_count) > 0)
 		pm_runtime_put_noidle(supplier);
-
-	if (check_idle)
-		pm_request_idle(supplier);
 }
 
 static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
···
 	struct device_link *link;
 
 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
-				device_links_read_lock_held())
-		pm_runtime_release_supplier(link, try_to_suspend);
+				device_links_read_lock_held()) {
+		pm_runtime_release_supplier(link);
+		if (try_to_suspend)
+			pm_request_idle(link->supplier);
+	}
 }
 
 static void rpm_put_suppliers(struct device *dev)
···
 		if (link->flags & DL_FLAG_PM_RUNTIME) {
 			link->supplier_preactivated = true;
 			pm_runtime_get_sync(link->supplier);
-			refcount_inc(&link->rpm_active);
 		}
 
 	device_links_read_unlock(idx);
···
 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
 				device_links_read_lock_held())
 		if (link->supplier_preactivated) {
-			bool put;
-
 			link->supplier_preactivated = false;
-
-			spin_lock_irq(&dev->power.lock);
-
-			put = pm_runtime_status_suspended(dev) &&
-			      refcount_dec_not_one(&link->rpm_active);
-
-			spin_unlock_irq(&dev->power.lock);
-
-			if (put)
-				pm_runtime_put(link->supplier);
+			pm_runtime_put(link->supplier);
 		}
 
 	device_links_read_unlock(idx);
···
 		return;
 
 	pm_runtime_drop_link_count(link->consumer);
-	pm_runtime_release_supplier(link, true);
+	pm_runtime_release_supplier(link);
+	pm_request_idle(link->supplier);
 }
 
 static bool pm_runtime_need_not_resume(struct device *dev)
drivers/block/xen-blkfront.c | +37 -17
···
 module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444);
 MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
 
+static bool __read_mostly xen_blkif_trusted = true;
+module_param_named(trusted, xen_blkif_trusted, bool, 0644);
+MODULE_PARM_DESC(trusted, "Is the backend trusted");
+
 #define BLK_RING_SIZE(info)	\
 	__CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages)
···
 	unsigned int feature_discard:1;
 	unsigned int feature_secdiscard:1;
 	unsigned int feature_persistent:1;
+	unsigned int bounce:1;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
 	/* Number of 4KB segments handled */
···
 		if (!gnt_list_entry)
 			goto out_of_memory;
 
-		if (info->feature_persistent) {
-			granted_page = alloc_page(GFP_NOIO);
+		if (info->bounce) {
+			granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
 			if (!granted_page) {
 				kfree(gnt_list_entry);
 				goto out_of_memory;
···
 	list_for_each_entry_safe(gnt_list_entry, n,
 				 &rinfo->grants, node) {
 		list_del(&gnt_list_entry->node);
-		if (info->feature_persistent)
+		if (info->bounce)
 			__free_page(gnt_list_entry->page);
 		kfree(gnt_list_entry);
 		i--;
···
 		/* Assign a gref to this page */
 		gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
 		BUG_ON(gnt_list_entry->gref == -ENOSPC);
-		if (info->feature_persistent)
+		if (info->bounce)
 			grant_foreign_access(gnt_list_entry, info);
 		else {
 			/* Grant access to the GFN passed by the caller */
···
 	/* Assign a gref to this page */
 	gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
 	BUG_ON(gnt_list_entry->gref == -ENOSPC);
-	if (!info->feature_persistent) {
+	if (!info->bounce) {
 		struct page *indirect_page;
 
 		/* Fetch a pre-allocated page to use for indirect grefs */
···
 		.grant_idx = 0,
 		.segments = NULL,
 		.rinfo = rinfo,
-		.need_copy = rq_data_dir(req) && info->feature_persistent,
+		.need_copy = rq_data_dir(req) && info->bounce,
 	};
 
 	/*
···
 {
 	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
 			      info->feature_fua ? true : false);
-	pr_info("blkfront: %s: %s %s %s %s %s\n",
+	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
 		info->gd->disk_name, flush_info(info),
 		"persistent grants:", info->feature_persistent ?
 		"enabled;" : "disabled;", "indirect descriptors:",
-		info->max_indirect_segments ? "enabled;" : "disabled;");
+		info->max_indirect_segments ? "enabled;" : "disabled;",
+		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
 }
 
 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
···
 	if (!list_empty(&rinfo->indirect_pages)) {
 		struct page *indirect_page, *n;
 
-		BUG_ON(info->feature_persistent);
+		BUG_ON(info->bounce);
 		list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) {
 			list_del(&indirect_page->lru);
 			__free_page(indirect_page);
···
 					  NULL);
 			rinfo->persistent_gnts_c--;
 		}
-		if (info->feature_persistent)
+		if (info->bounce)
 			__free_page(persistent_gnt->page);
 		kfree(persistent_gnt);
 	}
···
 		for (j = 0; j < segs; j++) {
 			persistent_gnt = rinfo->shadow[i].grants_used[j];
 			gnttab_end_foreign_access(persistent_gnt->gref, NULL);
-			if (info->feature_persistent)
+			if (info->bounce)
 				__free_page(persistent_gnt->page);
 			kfree(persistent_gnt);
 		}
···
 	data.s = s;
 	num_sg = s->num_sg;
 
-	if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
+	if (bret->operation == BLKIF_OP_READ && info->bounce) {
 		for_each_sg(s->sg, sg, num_sg, i) {
 			BUG_ON(sg->offset + sg->length > PAGE_SIZE);
···
 		 * Add the used indirect page back to the list of
 		 * available pages for indirect grefs.
 		 */
-		if (!info->feature_persistent) {
+		if (!info->bounce) {
 			indirect_page = s->indirect_grants[i]->page;
 			list_add(&indirect_page->lru, &rinfo->indirect_pages);
 		}
···
 
 	if (!info)
 		return -ENODEV;
+
+	/* Check if backend is trusted. */
+	info->bounce = !xen_blkif_trusted ||
+		       !xenbus_read_unsigned(dev->nodename, "trusted", 1);
 
 	max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
 					     "max-ring-page-order", 0);
···
 	if (err)
 		goto out_of_memory;
 
-	if (!info->feature_persistent && info->max_indirect_segments) {
+	if (!info->bounce && info->max_indirect_segments) {
 		/*
-		 * We are using indirect descriptors but not persistent
-		 * grants, we need to allocate a set of pages that can be
+		 * We are using indirect descriptors but don't have a bounce
+		 * buffer, we need to allocate a set of pages that can be
 		 * used for mapping indirect grefs
 		 */
 		int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info);
 
 		BUG_ON(!list_empty(&rinfo->indirect_pages));
 		for (i = 0; i < num; i++) {
-			struct page *indirect_page = alloc_page(GFP_KERNEL);
+			struct page *indirect_page = alloc_page(GFP_KERNEL |
+								__GFP_ZERO);
 			if (!indirect_page)
 				goto out_of_memory;
 			list_add(&indirect_page->lru, &rinfo->indirect_pages);
···
 	info->feature_persistent =
 		!!xenbus_read_unsigned(info->xbdev->otherend,
 				       "feature-persistent", 0);
+	if (info->feature_persistent)
+		info->bounce = true;
 
 	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
 					"feature-max-indirect-segments", 0);
···
 {
 	struct blkfront_info *info;
 	bool need_schedule_work = false;
+
+	/*
+	 * Note that when using bounce buffers but not persistent grants
+	 * there's no need to run blkfront_delay_work because grants are
+	 * revoked in blkif_completion or else an error is reported and the
+	 * connection is closed.
+	 */
 
 	mutex_lock(&blkfront_mutex);
···
 	if (slew_done_gpio_np)
 		slew_done_gpio = read_gpio(slew_done_gpio_np);
 
+	of_node_put(volt_gpio_np);
+	of_node_put(freq_gpio_np);
+	of_node_put(slew_done_gpio_np);
+
 	/* If we use the frequency GPIOs, calculate the min/max speeds based
 	 * on the bus frequencies
 	 */
drivers/cpufreq/qcom-cpufreq-hw.c | +6
···
 	struct platform_device *pdev = cpufreq_get_driver_data();
 	int ret;
 
+	if (data->throttle_irq <= 0)
+		return 0;
+
 	ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus);
 	if (ret)
 		dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n",
···
 
 static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
 {
+	if (data->throttle_irq <= 0)
+		return;
+
 	free_irq(data->throttle_irq, data);
 }
drivers/cpufreq/qoriq-cpufreq.c | +1
···
 
 	np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist);
 	if (np) {
+		of_node_put(np);
 		dev_info(&pdev->dev, "Disabling due to erratum A-008083");
 		return -ENODEV;
 	}
drivers/crypto/Kconfig | -115
···
 	  Select this option if you want to use the paes cipher
 	  for example to use protected key encrypted devices.
 
-config CRYPTO_SHA1_S390
-	tristate "SHA1 digest algorithm"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
-
-	  It is available as of z990.
-
-config CRYPTO_SHA256_S390
-	tristate "SHA256 digest algorithm"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  SHA256 secure hash standard (DFIPS 180-2).
-
-	  It is available as of z9.
-
-config CRYPTO_SHA512_S390
-	tristate "SHA384 and SHA512 digest algorithm"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  SHA512 secure hash standard.
-
-	  It is available as of z10.
-
-config CRYPTO_SHA3_256_S390
-	tristate "SHA3_224 and SHA3_256 digest algorithm"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  SHA3_256 secure hash standard.
-
-	  It is available as of z14.
-
-config CRYPTO_SHA3_512_S390
-	tristate "SHA3_384 and SHA3_512 digest algorithm"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  SHA3_512 secure hash standard.
-
-	  It is available as of z14.
-
-config CRYPTO_DES_S390
-	tristate "DES and Triple DES cipher algorithms"
-	depends on S390
-	select CRYPTO_ALGAPI
-	select CRYPTO_SKCIPHER
-	select CRYPTO_LIB_DES
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
-
-	  As of z990 the ECB and CBC mode are hardware accelerated.
-	  As of z196 the CTR mode is hardware accelerated.
-
-config CRYPTO_AES_S390
-	tristate "AES cipher algorithms"
-	depends on S390
-	select CRYPTO_ALGAPI
-	select CRYPTO_SKCIPHER
-	help
-	  This is the s390 hardware accelerated implementation of the
-	  AES cipher algorithms (FIPS-197).
-
-	  As of z9 the ECB and CBC modes are hardware accelerated
-	  for 128 bit keys.
-	  As of z10 the ECB and CBC modes are hardware accelerated
-	  for all AES key sizes.
-	  As of z196 the CTR mode is hardware accelerated for all AES
-	  key sizes and XTS mode is hardware accelerated for 256 and
-	  512 bit keys.
-
-config CRYPTO_CHACHA_S390
-	tristate "ChaCha20 stream cipher"
-	depends on S390
-	select CRYPTO_SKCIPHER
-	select CRYPTO_LIB_CHACHA_GENERIC
-	select CRYPTO_ARCH_HAVE_LIB_CHACHA
-	help
-	  This is the s390 SIMD implementation of the ChaCha20 stream
-	  cipher (RFC 7539).
-
-	  It is available as of z13.
-
 config S390_PRNG
 	tristate "Pseudo random number generator device driver"
 	depends on S390
···
 	  pseudo-random-number device through the char device /dev/prandom.
 
 	  It is available as of z9.
-
-config CRYPTO_GHASH_S390
-	tristate "GHASH hash function"
-	depends on S390
-	select CRYPTO_HASH
-	help
-	  This is the s390 hardware accelerated implementation of GHASH,
-	  the hash function used in GCM (Galois/Counter mode).
-
-	  It is available as of z196.
-
-config CRYPTO_CRC32_S390
-	tristate "CRC-32 algorithms"
-	depends on S390
-	select CRYPTO_HASH
-	select CRC32
-	help
-	  Select this option if you want to use hardware accelerated
-	  implementations of CRC algorithms. With this option, you
-	  can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
-	  and CRC-32C (Castagnoli).
-
-	  It is available with IBM z13 or later.
 
 config CRYPTO_DEV_NIAGARA2
 	tristate "Niagara2 Stream Processing Unit driver"
drivers/crypto/ccp/sp-platform.c | +2 -10
···
 	struct sp_platform *sp_platform = sp->dev_specific;
 	struct device *dev = sp->dev;
 	struct platform_device *pdev = to_platform_device(dev);
-	unsigned int i, count;
 	int ret;
 
-	for (i = 0, count = 0; i < pdev->num_resources; i++) {
-		struct resource *res = &pdev->resource[i];
-
-		if (resource_type(res) == IORESOURCE_IRQ)
-			count++;
-	}
-
-	sp_platform->irq_count = count;
+	sp_platform->irq_count = platform_irq_count(pdev);
 
 	ret = platform_get_irq(pdev, 0);
 	if (ret < 0) {
···
 	}
 
 	sp->psp_irq = ret;
-	if (count == 1) {
+	if (sp_platform->irq_count == 1) {
 		sp->ccp_irq = ret;
 	} else {
 		ret = platform_get_irq(pdev, 1);
drivers/cxl/core/hdm.c | +1 -1
···
 	else
 		cxld->target_type = CXL_DECODER_ACCELERATOR;
 
-	if (is_cxl_endpoint(to_cxl_port(cxld->dev.parent)))
+	if (is_endpoint_decoder(&cxld->dev))
 		return 0;
 
 	target_list.value =
drivers/cxl/core/mbox.c | +4 -2
···
 		return -EBUSY;
 
 	/* Check the input buffer is the expected size */
-	if (info->size_in != send_cmd->in.size)
+	if ((info->size_in != CXL_VARIABLE_PAYLOAD) &&
+	    (info->size_in != send_cmd->in.size))
 		return -ENOMEM;
 
 	/* Check the output buffer is at least large enough */
-	if (send_cmd->out.size < info->size_out)
+	if ((info->size_out != CXL_VARIABLE_PAYLOAD) &&
+	    (send_cmd->out.size < info->size_out))
 		return -ENOMEM;
 
 	*mem_cmd = (struct cxl_mem_command) {
···
- // SPDX-License-Identifier: GPL-2.0-only
+// SPDX-License-Identifier: GPL-2.0-only
 /*
  * linux/drivers/devfreq/governor_passive.c
  *
···
 #include <linux/slab.h>
 #include <linux/device.h>
 #include <linux/devfreq.h>
+#include <linux/units.h>
 #include "governor.h"
-
-#define HZ_PER_KHZ	1000
 
 static struct devfreq_cpu_data *
 get_parent_cpu_data(struct devfreq_passive_data *p_data,
···
 		return parent_cpu_data;
 
 	return NULL;
+}
+
+static void delete_parent_cpu_data(struct devfreq_passive_data *p_data)
+{
+	struct devfreq_cpu_data *parent_cpu_data, *tmp;
+
+	list_for_each_entry_safe(parent_cpu_data, tmp, &p_data->cpu_data_list, node) {
+		list_del(&parent_cpu_data->node);
+
+		if (parent_cpu_data->opp_table)
+			dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
+
+		kfree(parent_cpu_data);
+	}
 }
 
 static unsigned long get_target_freq_by_required_opp(struct device *p_dev,
···
 		goto out;
 
 	/* Use interpolation if required opps is not available */
-	for (i = 0; i < parent_devfreq->profile->max_state; i++)
-		if (parent_devfreq->profile->freq_table[i] == *freq)
+	for (i = 0; i < parent_devfreq->max_state; i++)
+		if (parent_devfreq->freq_table[i] == *freq)
 			break;
 
-	if (i == parent_devfreq->profile->max_state)
+	if (i == parent_devfreq->max_state)
 		return -EINVAL;
 
-	if (i < devfreq->profile->max_state) {
-		child_freq = devfreq->profile->freq_table[i];
+	if (i < devfreq->max_state) {
+		child_freq = devfreq->freq_table[i];
 	} else {
-		count = devfreq->profile->max_state;
-		child_freq = devfreq->profile->freq_table[count - 1];
+		count = devfreq->max_state;
+		child_freq = devfreq->freq_table[count - 1];
 	}
 
 out:
···
 {
 	struct devfreq_passive_data *p_data
 			= (struct devfreq_passive_data *)devfreq->data;
-	struct devfreq_cpu_data *parent_cpu_data;
-	int cpu, ret = 0;
+	int ret;
 
 	if (p_data->nb.notifier_call) {
 		ret = cpufreq_unregister_notifier(&p_data->nb,
···
 			return ret;
 	}
 
-	for_each_possible_cpu(cpu) {
-		struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-		if (!policy) {
-			ret = -EINVAL;
-			continue;
-		}
+	delete_parent_cpu_data(p_data);
 
-		parent_cpu_data = get_parent_cpu_data(p_data, policy);
-		if (!parent_cpu_data) {
-			cpufreq_cpu_put(policy);
-			continue;
-		}
-
-		list_del(&parent_cpu_data->node);
-		if (parent_cpu_data->opp_table)
-			dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
-		kfree(parent_cpu_data);
-		cpufreq_cpu_put(policy);
-	}
-
-	return ret;
+	return 0;
 }
 
 static int cpufreq_passive_register_notifier(struct devfreq *devfreq)
···
 err_put_policy:
 	cpufreq_cpu_put(policy);
 err:
-	WARN_ON(cpufreq_passive_unregister_notifier(devfreq));
 
 	return ret;
 }
···
 	if (!p_data)
 		return -EINVAL;
 
-	if (!p_data->this)
-		p_data->this = devfreq;
+	p_data->this = devfreq;
 
 	switch (event) {
 	case DEVFREQ_GOV_START:
drivers/dma/at_xdmac.c | +5
···
 	for (i = 0; i < init_nr_desc_per_channel; i++) {
 		desc = at_xdmac_alloc_desc(chan, GFP_KERNEL);
 		if (!desc) {
+			if (i == 0) {
+				dev_warn(chan2dev(chan),
+					 "can't allocate any descriptors\n");
+				return -EIO;
+			}
 			dev_warn(chan2dev(chan),
 				 "only %d descriptors have been allocated\n", i);
 			break;
drivers/dma/dmatest.c | +3 -10
···
 		/*
 		 * src and dst buffers are freed by ourselves below
 		 */
-		if (params->polled) {
+		if (params->polled)
 			flags = DMA_CTRL_ACK;
-		} else {
-			if (dma_has_cap(DMA_INTERRUPT, dev->cap_mask)) {
-				flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
-			} else {
-				pr_err("Channel does not support interrupt!\n");
-				goto err_pq_array;
-			}
-		}
+		else
+			flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
 
 		ktime = ktime_get();
 		while (!(kthread_should_stop() ||
···
 	runtime = ktime_to_us(ktime);
 
 	ret = 0;
-err_pq_array:
 	kfree(dma_pq);
 err_srcs_array:
 	kfree(srcs);
drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c | +5 -3
···
 		      BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT;
 		axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
 	} else {
-		val = BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT |
-		      BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT;
+		val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG);
+		val |= BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT |
+		       BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT;
 		axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val);
 	}
···
 {
 	u32 val;
 
-	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
 	if (chan->chip->dw->hdata->reg_map_8_channels) {
+		val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
 		val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT);
 		val |= (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
 		axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
 	} else {
+		val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG);
 		val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT);
 		val |= (BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT);
 		axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val);
···
 	dev_dbg(dev, "IDXD reset complete\n");
 
 	if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM) && sva) {
-		if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA))
+		if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) {
 			dev_warn(dev, "Unable to turn on user SVA feature.\n");
-		else
+		} else {
 			set_bit(IDXD_FLAG_USER_PASID_ENABLED, &idxd->flags);
 
-		if (idxd_enable_system_pasid(idxd))
-			dev_warn(dev, "No in-kernel DMA with PASID.\n");
-		else
-			set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
+			if (idxd_enable_system_pasid(idxd))
+				dev_warn(dev, "No in-kernel DMA with PASID.\n");
+			else
+				set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
+		}
 	} else if (!sva) {
 		dev_warn(dev, "User forced SVA off via module param.\n");
 	}
drivers/dma/imx-sdma.c | +2 -2
···
 	 * SDMA stops cyclic channel when DMA request triggers a channel and no SDMA
 	 * owned buffer is available (i.e. BD_DONE was set too late).
 	 */
-	if (!is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) {
+	if (sdmac->desc && !is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) {
 		dev_warn(sdmac->sdma->dev, "restart cyclic channel %d\n", sdmac->channel);
 		sdma_enable_channel(sdmac->sdma, sdmac->channel);
 	}
···
 #if IS_ENABLED(CONFIG_SOC_IMX6Q)
 MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin");
 #endif
-#if IS_ENABLED(CONFIG_SOC_IMX7D)
+#if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M)
 MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin");
 #endif
 MODULE_LICENSE("GPL");
drivers/dma/lgm/lgm-dma.c | +2 -1
···
 	d->core_clk = devm_clk_get_optional(dev, NULL);
 	if (IS_ERR(d->core_clk))
 		return PTR_ERR(d->core_clk);
-	clk_prepare_enable(d->core_clk);
 
 	d->rst = devm_reset_control_get_optional(dev, NULL);
 	if (IS_ERR(d->rst))
 		return PTR_ERR(d->rst);
+
+	clk_prepare_enable(d->core_clk);
 	reset_control_deassert(d->rst);
 
 	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
drivers/dma/pl330.c (+1, -1)

···

     /* If the DMAC pool is empty, alloc new */
     if (!desc) {
-        DEFINE_SPINLOCK(lock);
+        static DEFINE_SPINLOCK(lock);
         LIST_HEAD(pool);

         if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
drivers/dma/qcom/bam_dma.c (+11, -28)

···
     return 0;
 }

-static int bam_pm_runtime_get_sync(struct device *dev)
-{
-    if (pm_runtime_enabled(dev))
-        return pm_runtime_get_sync(dev);
-
-    return 0;
-}
-
 /**
  * bam_free_chan - Frees dma resources associated with specific channel
  * @chan: specified channel
···
     unsigned long flags;
     int ret;

-    ret = bam_pm_runtime_get_sync(bdev->dev);
+    ret = pm_runtime_get_sync(bdev->dev);
     if (ret < 0)
         return;
···
     unsigned long flag;
     int ret;

-    ret = bam_pm_runtime_get_sync(bdev->dev);
+    ret = pm_runtime_get_sync(bdev->dev);
     if (ret < 0)
         return ret;
···
     unsigned long flag;
     int ret;

-    ret = bam_pm_runtime_get_sync(bdev->dev);
+    ret = pm_runtime_get_sync(bdev->dev);
     if (ret < 0)
         return ret;
···
     if (srcs & P_IRQ)
         tasklet_schedule(&bdev->task);

-    ret = bam_pm_runtime_get_sync(bdev->dev);
+    ret = pm_runtime_get_sync(bdev->dev);
     if (ret < 0)
         return IRQ_NONE;
···
     if (!vd)
         return;

-    ret = bam_pm_runtime_get_sync(bdev->dev);
+    ret = pm_runtime_get_sync(bdev->dev);
     if (ret < 0)
         return;
···
     if (ret)
         goto err_unregister_dma;

-    if (!bdev->bamclk) {
-        pm_runtime_disable(&pdev->dev);
-        return 0;
-    }
-
     pm_runtime_irq_safe(&pdev->dev);
     pm_runtime_set_autosuspend_delay(&pdev->dev, BAM_DMA_AUTOSUSPEND_DELAY);
     pm_runtime_use_autosuspend(&pdev->dev);
···
 {
     struct bam_device *bdev = dev_get_drvdata(dev);

-    if (bdev->bamclk) {
-        pm_runtime_force_suspend(dev);
-        clk_unprepare(bdev->bamclk);
-    }
+    pm_runtime_force_suspend(dev);
+    clk_unprepare(bdev->bamclk);

     return 0;
 }
···
     struct bam_device *bdev = dev_get_drvdata(dev);
     int ret;

-    if (bdev->bamclk) {
-        ret = clk_prepare(bdev->bamclk);
-        if (ret)
-            return ret;
+    ret = clk_prepare(bdev->bamclk);
+    if (ret)
+        return ret;

-        pm_runtime_force_resume(dev);
-    }
+    pm_runtime_force_resume(dev);

     return 0;
 }
···
  * @max_resources: Maximum acceptable number of items, configured by the caller
  *                 depending on the underlying resources that it is querying.
  * @loop_idx: The iterator loop index in the current multi-part reply.
+ * @rx_len: Size in bytes of the currently processed message; it can be used by
+ *          the user of the iterator to verify a reply size.
  * @priv: Optional pointer to some additional state-related private data setup
  *        by the caller during the iterations.
  */
···
     unsigned int num_remaining;
     unsigned int max_resources;
     unsigned int loop_idx;
+    size_t rx_len;
     void *priv;
 };
drivers/firmware/sysfb.c (+50, -8)

···
 #include <linux/screen_info.h>
 #include <linux/sysfb.h>

+static struct platform_device *pd;
+static DEFINE_MUTEX(disable_lock);
+static bool disabled;
+
+static bool sysfb_unregister(void)
+{
+    if (IS_ERR_OR_NULL(pd))
+        return false;
+
+    platform_device_unregister(pd);
+    pd = NULL;
+
+    return true;
+}
+
+/**
+ * sysfb_disable() - disable the Generic System Framebuffers support
+ *
+ * This disables the registration of system framebuffer devices that match the
+ * generic drivers that make use of the system framebuffer set up by firmware.
+ *
+ * It also unregisters a device if this was already registered by sysfb_init().
+ *
+ * Context: The function can sleep. A @disable_lock mutex is acquired to serialize
+ *          against sysfb_init(), that registers a system framebuffer device.
+ */
+void sysfb_disable(void)
+{
+    mutex_lock(&disable_lock);
+    sysfb_unregister();
+    disabled = true;
+    mutex_unlock(&disable_lock);
+}
+EXPORT_SYMBOL_GPL(sysfb_disable);
+
 static __init int sysfb_init(void)
 {
     struct screen_info *si = &screen_info;
     struct simplefb_platform_data mode;
-    struct platform_device *pd;
     const char *name;
     bool compatible;
-    int ret;
+    int ret = 0;
+
+    mutex_lock(&disable_lock);
+    if (disabled)
+        goto unlock_mutex;

     /* try to create a simple-framebuffer device */
     compatible = sysfb_parse_mode(si, &mode);
     if (compatible) {
-        ret = sysfb_create_simplefb(si, &mode);
-        if (!ret)
-            return 0;
+        pd = sysfb_create_simplefb(si, &mode);
+        if (!IS_ERR(pd))
+            goto unlock_mutex;
     }

     /* if the FB is incompatible, create a legacy framebuffer device */
···
         name = "platform-framebuffer";

     pd = platform_device_alloc(name, 0);
-    if (!pd)
-        return -ENOMEM;
+    if (!pd) {
+        ret = -ENOMEM;
+        goto unlock_mutex;
+    }

     sysfb_apply_efi_quirks(pd);
···
     if (ret)
         goto err;

-    return 0;
+    goto unlock_mutex;
 err:
     platform_device_put(pd);
+unlock_mutex:
+    mutex_unlock(&disable_lock);
     return ret;
 }
···
      */
     amdgpu_unregister_gpu_instance(tmp_adev);

-    drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true);
+    drm_fb_helper_set_suspend_unlocked(adev_to_drm(tmp_adev)->fb_helper, true);

     /* disable ras on ALL IPs */
     if (!need_emergency_restart &&
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c (+1)

···
     if (!amdgpu_device_has_dc_support(adev)) {
         if (!adev->enable_virtual_display)
             /* Disable vblank IRQs aggressively for power-saving */
+            /* XXX: can this be enabled for DC? */
             adev_to_drm(adev)->vblank_disable_immediate = true;

     r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c (-3)

···
         }
     }

-    /* Disable vblank IRQs aggressively for power-saving. */
-    adev_to_drm(adev)->vblank_disable_immediate = true;
-
     /* loops over all connectors on the board */
     for (i = 0; i < link_cnt; i++) {
         struct dc_link *link = NULL;
drivers/gpu/drm/drm_aperture.c (+15, -11)

···
                                       const struct drm_driver *req_driver)
 {
     resource_size_t base, size;
-    int bar, ret = 0;
+    int bar, ret;
+
+    /*
+     * WARNING: Apparently we must kick fbdev drivers before vgacon,
+     * otherwise the vga fbdev driver falls over.
+     */
+#if IS_REACHABLE(CONFIG_FB)
+    ret = remove_conflicting_pci_framebuffers(pdev, req_driver->name);
+    if (ret)
+        return ret;
+#endif
+    ret = vga_remove_vgacon(pdev);
+    if (ret)
+        return ret;

     for (bar = 0; bar < PCI_STD_NUM_BARS; ++bar) {
         if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
···
         drm_aperture_detach_drivers(base, size);
     }

-    /*
-     * WARNING: Apparently we must kick fbdev drivers before vgacon,
-     * otherwise the vga fbdev driver falls over.
-     */
-#if IS_REACHABLE(CONFIG_FB)
-    ret = remove_conflicting_pci_framebuffers(pdev, req_driver->name);
-#endif
-    if (ret == 0)
-        ret = vga_remove_vgacon(pdev);
-    return ret;
+    return 0;
 }
 EXPORT_SYMBOL(drm_aperture_remove_conflicting_pci_framebuffers);
drivers/gpu/drm/i915/gem/i915_gem_context.c (+3, -2)

···
     case I915_CONTEXT_PARAM_PERSISTENCE:
         if (args->size)
             ret = -EINVAL;
-        ret = proto_context_set_persistence(fpriv->dev_priv, pc,
-                                            args->value);
+        else
+            ret = proto_context_set_persistence(fpriv->dev_priv, pc,
+                                                args->value);
         break;

     case I915_CONTEXT_PARAM_PROTECTED_CONTENT:
drivers/gpu/drm/i915/gem/i915_gem_domain.c (+3, -3)

···
     if (obj->cache_dirty)
         return false;

-    if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
-        return true;
-
     if (IS_DGFX(i915))
         return false;
+
+    if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
+        return true;

     /* Currently in use by HW (display engine)? Keep flushed. */
     return i915_gem_object_is_framebuffer(obj);
drivers/gpu/drm/i915/i915_driver.c (+15, -19)

···
 static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 {
     struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+    struct pci_dev *root_pdev;
     int ret;

     if (i915_inject_probe_failure(dev_priv))
···
     intel_bw_init_hw(dev_priv);

+    /*
+     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
+     * This should be totally removed when we handle the pci states properly
+     * on runtime PM and on s2idle cases.
+     */
+    root_pdev = pcie_find_root_port(pdev);
+    if (root_pdev)
+        pci_d3cold_disable(root_pdev);
+
     return 0;

 err_msi:
···
 static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
 {
     struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+    struct pci_dev *root_pdev;

     i915_perf_fini(dev_priv);

     if (pdev->msi_enabled)
         pci_disable_msi(pdev);
+
+    root_pdev = pcie_find_root_port(pdev);
+    if (root_pdev)
+        pci_d3cold_enable(root_pdev);
 }
···
         goto out;
     }

-    /*
-     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-     * This should be totally removed when we handle the pci states properly
-     * on runtime PM and on s2idle cases.
-     */
-    if (suspend_to_idle(dev_priv))
-        pci_d3cold_disable(pdev);
-
     pci_disable_device(pdev);
     /*
      * During hibernation on some platforms the BIOS may try to access
···
         return -EIO;

     pci_set_master(pdev);
-
-    pci_d3cold_enable(pdev);

     disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
···
 {
     struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
     struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-    struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
     int ret;

     if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
···
         drm_err(&dev_priv->drm,
                 "Unclaimed access detected prior to suspending\n");

-    /*
-     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-     * This should be totally removed when we handle the pci states properly
-     * on runtime PM and on s2idle cases.
-     */
-    pci_d3cold_disable(pdev);
     rpm->suspended = true;

     /*
···
 {
     struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
     struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-    struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
     int ret;

     if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
···

     intel_opregion_notify_adapter(dev_priv, PCI_D0);
     rpm->suspended = false;
-    pci_d3cold_enable(pdev);
     if (intel_uncore_unclaimed_mmio(&dev_priv->uncore))
         drm_dbg(&dev_priv->drm,
                 "Unclaimed access during suspend, bios?\n");
···
      * This only affects the READ_IOUT and READ_TEMPERATURE2 registers.
      * READ_IOUT will return the sum of currents of all phases of a rail,
      * and READ_TEMPERATURE2 will return the maximum temperature detected
-     * for the the phases of the rail.
+     * for the phases of the rail.
      */
     for (i = 0; i < info->pages; i++) {
         /*
···
     u32 sq_psn;
     u32 qkey;
     u32 dest_qp_num;
+    u8 timeout;

     /* Relevant to qps created from kernel space only (ULPs) */
     u8 prev_wqe_size;
···
 DEFINE_SPINLOCK(device_domain_lock);
 static LIST_HEAD(device_domain_list);

-/*
- * Iterate over elements in device_domain_list and call the specified
- * callback @fn against each element.
- */
-int for_each_device_domain(int (*fn)(struct device_domain_info *info,
-                                     void *data), void *data)
-{
-    int ret = 0;
-    unsigned long flags;
-    struct device_domain_info *info;
-
-    spin_lock_irqsave(&device_domain_lock, flags);
-    list_for_each_entry(info, &device_domain_list, global) {
-        ret = fn(info, data);
-        if (ret) {
-            spin_unlock_irqrestore(&device_domain_lock, flags);
-            return ret;
-        }
-    }
-    spin_unlock_irqrestore(&device_domain_lock, flags);
-
-    return 0;
-}
-
 const struct iommu_ops intel_iommu_ops;

 static bool translation_pre_enabled(struct intel_iommu *iommu)
drivers/iommu/intel/pasid.c (+3, -66)

···
 /*
  * Per device pasid table management:
  */
-static inline void
-device_attach_pasid_table(struct device_domain_info *info,
-                          struct pasid_table *pasid_table)
-{
-    info->pasid_table = pasid_table;
-    list_add(&info->table, &pasid_table->dev);
-}
-
-static inline void
-device_detach_pasid_table(struct device_domain_info *info,
-                          struct pasid_table *pasid_table)
-{
-    info->pasid_table = NULL;
-    list_del(&info->table);
-}
-
-struct pasid_table_opaque {
-    struct pasid_table **pasid_table;
-    int segment;
-    int bus;
-    int devfn;
-};
-
-static int search_pasid_table(struct device_domain_info *info, void *opaque)
-{
-    struct pasid_table_opaque *data = opaque;
-
-    if (info->iommu->segment == data->segment &&
-        info->bus == data->bus &&
-        info->devfn == data->devfn &&
-        info->pasid_table) {
-        *data->pasid_table = info->pasid_table;
-        return 1;
-    }
-
-    return 0;
-}
-
-static int get_alias_pasid_table(struct pci_dev *pdev, u16 alias, void *opaque)
-{
-    struct pasid_table_opaque *data = opaque;
-
-    data->segment = pci_domain_nr(pdev->bus);
-    data->bus = PCI_BUS_NUM(alias);
-    data->devfn = alias & 0xff;
-
-    return for_each_device_domain(&search_pasid_table, data);
-}

 /*
  * Allocate a pasid table for @dev. It should be called in a
···
 {
     struct device_domain_info *info;
     struct pasid_table *pasid_table;
-    struct pasid_table_opaque data;
     struct page *pages;
     u32 max_pasid = 0;
-    int ret, order;
-    int size;
+    int order, size;

     might_sleep();
     info = dev_iommu_priv_get(dev);
     if (WARN_ON(!info || !dev_is_pci(dev) || info->pasid_table))
         return -EINVAL;

-    /* DMA alias device already has a pasid table, use it: */
-    data.pasid_table = &pasid_table;
-    ret = pci_for_each_dma_alias(to_pci_dev(dev),
-                                 &get_alias_pasid_table, &data);
-    if (ret)
-        goto attach_out;
-
     pasid_table = kzalloc(sizeof(*pasid_table), GFP_KERNEL);
     if (!pasid_table)
         return -ENOMEM;
-    INIT_LIST_HEAD(&pasid_table->dev);

     if (info->pasid_supported)
         max_pasid = min_t(u32, pci_max_pasids(to_pci_dev(dev)),
···
     pasid_table->table = page_address(pages);
     pasid_table->order = order;
     pasid_table->max_pasid = 1 << (order + PAGE_SHIFT + 3);
-
-attach_out:
-    device_attach_pasid_table(info, pasid_table);
+    info->pasid_table = pasid_table;

     return 0;
 }
···
         return;

     pasid_table = info->pasid_table;
-    device_detach_pasid_table(info, pasid_table);
-
-    if (!list_empty(&pasid_table->dev))
-        return;
+    info->pasid_table = NULL;

     /* Free scalable mode PASID directory tables: */
     dir = pasid_table->table;
drivers/iommu/intel/pasid.h (-1)

···
     void *table;                /* pasid table pointer */
     int order;                  /* page order of pasid table */
     u32 max_pasid;              /* max pasid */
-    struct list_head dev;       /* device list */
 };

 /* Get PRESENT bit of a PASID directory entry. */
drivers/irqchip/Kconfig (+1, -1)

···

 config XILINX_INTC
     bool "Xilinx Interrupt Controller IP"
-    depends on OF
+    depends on OF_ADDRESS
     select IRQ_DOMAIN
     help
       Support for the Xilinx Interrupt Controller IP core.
drivers/irqchip/irq-apple-aic.c (+1, -1)

···
 #define AIC_TMR_EL02_PHYS  AIC_TMR_GUEST_PHYS
 #define AIC_TMR_EL02_VIRT  AIC_TMR_GUEST_VIRT

-DEFINE_STATIC_KEY_TRUE(use_fast_ipi);
+static DEFINE_STATIC_KEY_TRUE(use_fast_ipi);

 struct aic_info {
     int version;
drivers/irqchip/irq-gic-v3.c (+31, -10)

···
     vgic_set_kvm_info(&gic_v3_kvm_info);
 }

+static void gic_request_region(resource_size_t base, resource_size_t size,
+                               const char *name)
+{
+    if (!request_mem_region(base, size, name))
+        pr_warn_once(FW_BUG "%s region %pa has overlapping address\n",
+                     name, &base);
+}
+
+static void __iomem *gic_of_iomap(struct device_node *node, int idx,
+                                  const char *name, struct resource *res)
+{
+    void __iomem *base;
+    int ret;
+
+    ret = of_address_to_resource(node, idx, res);
+    if (ret)
+        return IOMEM_ERR_PTR(ret);
+
+    gic_request_region(res->start, resource_size(res), name);
+    base = of_iomap(node, idx);
+
+    return base ?: IOMEM_ERR_PTR(-ENOMEM);
+}
+
 static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 {
     void __iomem *dist_base;
     struct redist_region *rdist_regs;
+    struct resource res;
     u64 redist_stride;
     u32 nr_redist_regions;
     int err, i;

-    dist_base = of_io_request_and_map(node, 0, "GICD");
+    dist_base = gic_of_iomap(node, 0, "GICD", &res);
     if (IS_ERR(dist_base)) {
         pr_err("%pOF: unable to map gic dist registers\n", node);
         return PTR_ERR(dist_base);
···
     }

     for (i = 0; i < nr_redist_regions; i++) {
-        struct resource res;
-        int ret;
-
-        ret = of_address_to_resource(node, 1 + i, &res);
-        rdist_regs[i].redist_base = of_io_request_and_map(node, 1 + i, "GICR");
-        if (ret || IS_ERR(rdist_regs[i].redist_base)) {
+        rdist_regs[i].redist_base = gic_of_iomap(node, 1 + i, "GICR", &res);
+        if (IS_ERR(rdist_regs[i].redist_base)) {
             pr_err("%pOF: couldn't map region %d\n", node, i);
             err = -ENODEV;
             goto out_unmap_rdist;
···
         pr_err("Couldn't map GICR region @%llx\n", redist->base_address);
         return -ENOMEM;
     }
-    request_mem_region(redist->base_address, redist->length, "GICR");
+    gic_request_region(redist->base_address, redist->length, "GICR");

     gic_acpi_register_redist(redist->base_address, redist_base);
     return 0;
···
     redist_base = ioremap(gicc->gicr_base_address, size);
     if (!redist_base)
         return -ENOMEM;
-    request_mem_region(gicc->gicr_base_address, size, "GICR");
+    gic_request_region(gicc->gicr_base_address, size, "GICR");

     gic_acpi_register_redist(gicc->gicr_base_address, redist_base);
     return 0;
···
         pr_err("Unable to map GICD registers\n");
         return -ENOMEM;
     }
-    request_mem_region(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE, "GICD");
+    gic_request_region(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE, "GICD");

     err = gic_validate_dist_version(acpi_data.dist_base);
     if (err) {
···
 static int validate_raid_redundancy(struct raid_set *rs)
 {
     unsigned int i, rebuild_cnt = 0;
-    unsigned int rebuilds_per_group = 0, copies;
+    unsigned int rebuilds_per_group = 0, copies, raid_disks;
     unsigned int group_size, last_group_start;

-    for (i = 0; i < rs->md.raid_disks; i++)
-        if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
-            !rs->dev[i].rdev.sb_page)
+    for (i = 0; i < rs->raid_disks; i++)
+        if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) &&
+            ((!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+              !rs->dev[i].rdev.sb_page)))
             rebuild_cnt++;

     switch (rs->md.level) {
···
          *          A    A    B    B    C
          *          C    D    D    E    E
          */
+        raid_disks = min(rs->raid_disks, rs->md.raid_disks);
         if (__is_raid10_near(rs->md.new_layout)) {
-            for (i = 0; i < rs->md.raid_disks; i++) {
+            for (i = 0; i < raid_disks; i++) {
                 if (!(i % copies))
                     rebuilds_per_group = 0;
                 if ((!rs->dev[i].rdev.sb_page ||
···
          * results in the need to treat the last (potentially larger)
          * set differently.
          */
-        group_size = (rs->md.raid_disks / copies);
-        last_group_start = (rs->md.raid_disks / group_size) - 1;
+        group_size = (raid_disks / copies);
+        last_group_start = (raid_disks / group_size) - 1;
         last_group_start *= group_size;
-        for (i = 0; i < rs->md.raid_disks; i++) {
+        for (i = 0; i < raid_disks; i++) {
             if (!(i % copies) && !(i > last_group_start))
                 rebuilds_per_group = 0;
             if ((!rs->dev[i].rdev.sb_page ||
···
 {
     int i;

-    for (i = 0; i < rs->md.raid_disks; i++) {
+    for (i = 0; i < rs->raid_disks; i++) {
         struct md_rdev *rdev = &rs->dev[i].rdev;

         if (!test_bit(Journal, &rdev->flags) &&
···
     unsigned int i;
     int r = 0;

-    for (i = 0; !r && i < rs->md.raid_disks; i++)
-        if (rs->dev[i].data_dev)
-            r = fn(ti,
-                   rs->dev[i].data_dev,
-                   0, /* No offset on data devs */
-                   rs->md.dev_sectors,
-                   data);
+    for (i = 0; !r && i < rs->raid_disks; i++) {
+        if (rs->dev[i].data_dev) {
+            r = fn(ti, rs->dev[i].data_dev,
+                   0, /* No offset on data devs */
+                   rs->md.dev_sectors, data);
+        }
+    }

     return r;
 }
drivers/md/raid5.c (+5, -1)

···
     int err = 0;
     int number = rdev->raid_disk;
     struct md_rdev __rcu **rdevp;
-    struct disk_info *p = conf->disks + number;
+    struct disk_info *p;
     struct md_rdev *tmp;

     print_raid5_conf(conf);
···
         log_exit(conf);
         return 0;
     }
+    if (unlikely(number >= conf->pool_size))
+        return 0;
+    p = conf->disks + number;
     if (rdev == rcu_access_pointer(p->rdev))
         rdevp = &p->rdev;
     else if (rdev == rcu_access_pointer(p->replacement))
···
      */
     if (rdev->saved_raid_disk >= 0 &&
         rdev->saved_raid_disk >= first &&
+        rdev->saved_raid_disk <= last &&
         conf->disks[rdev->saved_raid_disk].rdev == NULL)
         first = rdev->saved_raid_disk;
···
     select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
     select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
     select CRYPTO_POLY1305_MIPS if MIPS
+    select CRYPTO_CHACHA_S390 if S390
     help
       WireGuard is a secure, fast, and easy to use replacement for IPSec
       that uses modern cryptography and clever networking tricks. It's
drivers/net/bonding/bond_3ad.c (+2, -1)

···
             temp_aggregator->num_of_ports--;
             if (__agg_active_ports(temp_aggregator) == 0) {
                 select_new_active_agg = temp_aggregator->is_active;
-                ad_clear_agg(temp_aggregator);
+                if (temp_aggregator->num_of_ports == 0)
+                    ad_clear_agg(temp_aggregator);
                 if (select_new_active_agg) {
                     slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
                     /* select new active aggregator */
···
          * register. It increments once per SYS clock tick,
          * which is 20 or 40 MHz.
          *
-         * Observation shows that if the lowest byte (which is
-         * transferred first on the SPI bus) of that register
-         * is 0x00 or 0x80 the calculated CRC doesn't always
-         * match the transferred one.
+         * Observation on the mcp2518fd shows that if the
+         * lowest byte (which is transferred first on the SPI
+         * bus) of that register is 0x00 or 0x80 the
+         * calculated CRC doesn't always match the transferred
+         * one. On the mcp2517fd this problem is not limited
+         * to the first byte being 0x00 or 0x80.
          *
          * If the highest bit in the lowest byte is flipped
          * the transferred CRC matches the calculated one. We
-         * assume for now the CRC calculation in the chip
-         * works on wrong data and the transferred data is
-         * correct.
+         * assume for now the CRC operates on the correct
+         * data.
          */
         if (reg == MCP251XFD_REG_TBC &&
-            (buf_rx->data[0] == 0x0 || buf_rx->data[0] == 0x80)) {
+            ((buf_rx->data[0] & 0xf8) == 0x0 ||
+             (buf_rx->data[0] & 0xf8) == 0x80)) {
             /* Flip highest bit in lowest byte of le32 */
             buf_rx->data[0] ^= 0x80;
···
                                          val_len);
             if (!err) {
                 /* If CRC is now correct, assume
-                 * transferred data was OK, flip bit
-                 * back to original value.
+                 * flipped data is OK.
                  */
-                buf_rx->data[0] ^= 0x80;
                 goto out;
             }
         }
drivers/net/can/usb/gs_usb.c (+21, -2)

···

     struct usb_anchor tx_submitted;
     atomic_t active_tx_urbs;
+    void *rxbuf[GS_MAX_RX_URBS];
+    dma_addr_t rxbuf_dma[GS_MAX_RX_URBS];
 };

 /* usb interface struct */
···
     for (i = 0; i < GS_MAX_RX_URBS; i++) {
         struct urb *urb;
         u8 *buf;
+        dma_addr_t buf_dma;

         /* alloc rx urb */
         urb = usb_alloc_urb(0, GFP_KERNEL);
···
         buf = usb_alloc_coherent(dev->udev,
                                  dev->parent->hf_size_rx,
                                  GFP_KERNEL,
-                                 &urb->transfer_dma);
+                                 &buf_dma);
         if (!buf) {
             netdev_err(netdev,
                        "No memory left for USB buffer\n");
             usb_free_urb(urb);
             return -ENOMEM;
         }
+
+        urb->transfer_dma = buf_dma;

         /* fill, anchor, and submit rx urb */
         usb_fill_bulk_urb(urb,
···
                        "usb_submit failed (err=%d)\n", rc);

             usb_unanchor_urb(urb);
+            usb_free_coherent(dev->udev,
+                              sizeof(struct gs_host_frame),
+                              buf,
+                              buf_dma);
             usb_free_urb(urb);
             break;
         }
+
+        dev->rxbuf[i] = buf;
+        dev->rxbuf_dma[i] = buf_dma;

         /* Drop reference,
          * USB core will take care of freeing it
···
     int rc;
     struct gs_can *dev = netdev_priv(netdev);
     struct gs_usb *parent = dev->parent;
+    unsigned int i;

     netif_stop_queue(netdev);

     /* Stop polling */
     parent->active_channels--;
-    if (!parent->active_channels)
+    if (!parent->active_channels) {
         usb_kill_anchored_urbs(&parent->rx_submitted);
+        for (i = 0; i < GS_MAX_RX_URBS; i++)
+            usb_free_coherent(dev->udev,
+                              sizeof(struct gs_host_frame),
+                              dev->rxbuf[i],
+                              dev->rxbuf_dma[i]);
+    }

     /* Stop sending URBs */
     usb_kill_anchored_urbs(&dev->tx_submitted);
···
     if (duplex == DUPLEX_FULL)
         reg |= DUPLX_MODE;

+    if (tx_pause)
+        reg |= TXFLOW_CNTL;
+    if (rx_pause)
+        reg |= RXFLOW_CNTL;
+
     core_writel(priv, reg, offset);
 }
drivers/net/dsa/hirschmann/hellcreek_ptp.c (+1)

···
     const char *label, *state;
     int ret = -EINVAL;

+    of_node_get(hellcreek->dev->of_node);
     leds = of_find_node_by_name(hellcreek->dev->of_node, "leds");
     if (!leds) {
         dev_err(hellcreek->dev, "No LEDs specified in device tree!\n");
···
             release_sub_crqs(adapter, 0);
             rc = init_sub_crqs(adapter);
         } else {
+            /* no need to reinitialize completely, but we do
+             * need to clean up transmits that were in flight
+             * when we processed the reset.  Failure to do so
+             * will confound the upper layer, usually TCP, by
+             * creating the illusion of transmits that are
+             * awaiting completion.
+             */
+            clean_tx_pools(adapter);
+
             rc = reset_sub_crq_queues(adapter);
         }
     } else {
drivers/net/ethernet/intel/i40e/i40e.h (+16)

···
 #include <net/tc_act/tc_mirred.h>
 #include <net/udp_tunnel.h>
 #include <net/xdp_sock.h>
+#include <linux/bitfield.h>
 #include "i40e_type.h"
 #include "i40e_prototype.h"
 #include <linux/net/intel/i40e_client.h>
···
               (u32)(val >> 32));
     i40e_write_rx_ctl(&pf->hw, I40E_PRTQF_FD_INSET(addr, 0),
               (u32)(val & 0xFFFFFFFFULL));
+}
+
+/**
+ * i40e_get_pf_count - get PCI PF count.
+ * @hw: pointer to a hw.
+ *
+ * Reports the function number of the highest PCI physical
+ * function plus 1 as it is loaded from the NVM.
+ *
+ * Return: PCI PF count.
+ **/
+static inline u32 i40e_get_pf_count(struct i40e_hw *hw)
+{
+    return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK,
+                     rd32(hw, I40E_GLGEN_PCIFCNCNT));
 }

 /* needed by i40e_ethtool.c */
drivers/net/ethernet/intel/i40e/i40e_main.c (+73)

···
 }

 /**
+ * i40e_compute_pci_to_hw_id - compute index from PCI function.
+ * @vsi: ptr to the VSI to read from.
+ * @hw: ptr to the hardware info.
+ **/
+static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw)
+{
+    int pf_count = i40e_get_pf_count(hw);
+
+    if (vsi->type == I40E_VSI_SRIOV)
+        return (hw->port * BIT(7)) / pf_count + vsi->vf_id;
+
+    return hw->port + BIT(7);
+}
+
+/**
+ * i40e_stat_update64 - read and update a 64 bit stat from the chip.
+ * @hw: ptr to the hardware info.
+ * @hireg: the high 32 bit reg to read.
+ * @loreg: the low 32 bit reg to read.
+ * @offset_loaded: has the initial offset been loaded yet.
+ * @offset: ptr to current offset value.
+ * @stat: ptr to the stat.
+ *
+ * Since the device stats are not reset at PFReset, they will not
+ * be zeroed when the driver starts. We'll save the first values read
+ * and use them as offsets to be subtracted from the raw values in order
+ * to report stats that count from zero.
+ **/
+static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg,
+                               bool offset_loaded, u64 *offset, u64 *stat)
+{
+    u64 new_data;
+
+    new_data = rd64(hw, loreg);
+
+    if (!offset_loaded || new_data < *offset)
+        *offset = new_data;
+    *stat = new_data - *offset;
+}
+
+/**
  * i40e_stat_update48 - read and update a 48 bit stat from the chip
  * @hw: ptr to the hardware info
  * @hireg: the high 32 bit reg to read
···
 }

 /**
+ * i40e_stats_update_rx_discards - update rx_discards.
+ * @vsi: ptr to the VSI to be updated.
+ * @hw: ptr to the hardware info.
+ * @stat_idx: VSI's stat_counter_idx.
+ * @offset_loaded: ptr to the VSI's stat_offsets_loaded.
+ * @stat_offset: ptr to stat_offset to store first read of specific register.
+ * @stat: ptr to VSI's stat to be updated.
+ **/
+static void
+i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
+                              int stat_idx, bool offset_loaded,
+                              struct i40e_eth_stats *stat_offset,
+                              struct i40e_eth_stats *stat)
+{
+    u64 rx_rdpc, rx_rxerr;
+
+    i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
+                       &stat_offset->rx_discards, &rx_rdpc);
+    i40e_stat_update64(hw,
+                       I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
+                       I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
+                       offset_loaded, &stat_offset->rx_discards_other,
+                       &rx_rxerr);
+
+    stat->rx_discards = rx_rdpc + rx_rxerr;
+}
+
+/**
  * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters.
  * @vsi: the VSI to be updated
  **/
···
                        I40E_GLV_BPTCL(stat_idx),
                        vsi->stat_offsets_loaded,
                        &oes->tx_broadcast, &es->tx_broadcast);
+
+    i40e_stats_update_rx_discards(vsi, hw, stat_idx,
+                                  vsi->stat_offsets_loaded, oes, es);
+
     vsi->stat_offsets_loaded = true;
 }
···
 		return -EOPNOTSUPP;
 	}
 
-	if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
-	    act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Offload not supported when conform action is not pipe or ok");
-		return -EOPNOTSUPP;
-	}
-
 	if (act->police.notexceed.act_id == FLOW_ACTION_ACCEPT &&
 	    !flow_action_is_last_entry(action, act)) {
 		NL_SET_ERR_MSG_MOD(extack,
···
 	flow_action_for_each(i, act, flow_action) {
 		switch (act->id) {
 		case FLOW_ACTION_POLICE:
+			if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "Offload not supported when conform action is not continue");
+				return -EOPNOTSUPP;
+			}
+
 			err = mlx5e_policer_validate(flow_action, act, extack);
 			if (err)
 				return err;
···
 	/* Reset PHY, otherwise MII_LPA will provide outdated information.
 	 * This issue is reproducible only with some link partner PHYs
 	 */
-	if (phydev->state == PHY_NOLINK && phydev->drv->soft_reset)
-		phydev->drv->soft_reset(phydev);
+	if (phydev->state == PHY_NOLINK) {
+		phy_init_hw(phydev);
+		phy_start_aneg(phydev);
+	}
 }
 
 static struct phy_driver asix_driver[] = {
···
 #include <linux/io.h>
 #include <linux/uaccess.h>
 #include <linux/atomic.h>
+#include <linux/suspend.h>
 #include <net/netlink.h>
 #include <net/genetlink.h>
 #include <net/sock.h>
···
 	struct phy_device *phydev = phy_dat;
 	struct phy_driver *drv = phydev->drv;
 	irqreturn_t ret;
+
+	/* Wakeup interrupts may occur during a system sleep transition.
+	 * Postpone handling until the PHY has resumed.
+	 */
+	if (IS_ENABLED(CONFIG_PM_SLEEP) && phydev->irq_suspended) {
+		struct net_device *netdev = phydev->attached_dev;
+
+		if (netdev) {
+			struct device *parent = netdev->dev.parent;
+
+			if (netdev->wol_enabled)
+				pm_system_wakeup();
+			else if (device_may_wakeup(&netdev->dev))
+				pm_wakeup_dev_event(&netdev->dev, 0, true);
+			else if (parent && device_may_wakeup(parent))
+				pm_wakeup_dev_event(parent, 0, true);
+		}
+
+		phydev->irq_rerun = 1;
+		disable_irq_nosync(irq);
+		return IRQ_HANDLED;
+	}
 
 	mutex_lock(&phydev->lock);
 	ret = drv->handle_interrupt(phydev);
drivers/net/phy/phy_device.c | +23
···
 	if (phydev->mac_managed_pm)
 		return 0;
 
+	/* Wakeup interrupts may occur during the system sleep transition when
+	 * the PHY is inaccessible. Set flag to postpone handling until the PHY
+	 * has resumed. Wait for concurrent interrupt handler to complete.
+	 */
+	if (phy_interrupt_is_valid(phydev)) {
+		phydev->irq_suspended = 1;
+		synchronize_irq(phydev->irq);
+	}
+
 	/* We must stop the state machine manually, otherwise it stops out of
 	 * control, possibly with the phydev->lock held. Upon resume, netdev
 	 * may call phy routines that try to grab the same lock, and that may
···
 	if (ret < 0)
 		return ret;
 no_resume:
+	if (phy_interrupt_is_valid(phydev)) {
+		phydev->irq_suspended = 0;
+		synchronize_irq(phydev->irq);
+
+		/* Rerun interrupts which were postponed by phy_interrupt()
+		 * because they occurred during the system sleep transition.
+		 */
+		if (phydev->irq_rerun) {
+			phydev->irq_rerun = 0;
+			enable_irq(phydev->irq);
+			irq_wake_thread(phydev->irq, phydev);
+		}
+	}
+
 	if (phydev->attached_dev && phydev->adjust_link)
 		phy_start_machine(phydev);
 
···6666MODULE_PARM_DESC(max_queues,6767 "Maximum number of queues per virtual interface");68686969+static bool __read_mostly xennet_trusted = true;7070+module_param_named(trusted, xennet_trusted, bool, 0644);7171+MODULE_PARM_DESC(trusted, "Is the backend trusted");7272+6973#define XENNET_TIMEOUT (5 * HZ)70747175static const struct ethtool_ops xennet_ethtool_ops;···177173 /* Is device behaving sane? */178174 bool broken;179175176176+ /* Should skbs be bounced into a zeroed buffer? */177177+ bool bounce;178178+180179 atomic_t rx_gso_checksum_fixup;181180};182181···278271 if (unlikely(!skb))279272 return NULL;280273281281- page = page_pool_dev_alloc_pages(queue->page_pool);274274+ page = page_pool_alloc_pages(queue->page_pool,275275+ GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO);282276 if (unlikely(!page)) {283277 kfree_skb(skb);284278 return NULL;···673665 return nxmit;674666}675667668668+struct sk_buff *bounce_skb(const struct sk_buff *skb)669669+{670670+ unsigned int headerlen = skb_headroom(skb);671671+ /* Align size to allocate full pages and avoid contiguous data leaks */672672+ unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len,673673+ XEN_PAGE_SIZE);674674+ struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO);675675+676676+ if (!n)677677+ return NULL;678678+679679+ if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) {680680+ WARN_ONCE(1, "misaligned skb allocated\n");681681+ kfree_skb(n);682682+ return NULL;683683+ }684684+685685+ /* Set the data pointer */686686+ skb_reserve(n, headerlen);687687+ /* Set the tail pointer and length */688688+ skb_put(n, skb->len);689689+690690+ BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len));691691+692692+ skb_copy_header(n, skb);693693+ return n;694694+}676695677696#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)678697···753718754719 /* The first req should be at least ETH_HLEN size or the packet will be755720 * dropped by netback.721721+ *722722+ * If the backend is not trusted bounce 
all data to zeroed pages to723723+ * avoid exposing contiguous data on the granted page not belonging to724724+ * the skb.756725 */757757- if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {758758- nskb = skb_copy(skb, GFP_ATOMIC);726726+ if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) {727727+ nskb = bounce_skb(skb);759728 if (!nskb)760729 goto drop;761730 dev_consume_skb_any(skb);···10921053 }10931054 }10941055 rcu_read_unlock();10951095-next:10561056+10961057 __skb_queue_tail(list, skb);10581058+10591059+next:10971060 if (!(rx->flags & XEN_NETRXF_more_data))10981061 break;10991062···2255221422562215 info->netdev->irq = 0;2257221622172217+ /* Check if backend is trusted. */22182218+ info->bounce = !xennet_trusted ||22192219+ !xenbus_read_unsigned(dev->nodename, "trusted", 1);22202220+22582221 /* Check if backend supports multiple queues */22592222 max_queues = xenbus_read_unsigned(info->xbdev->otherend,22602223 "multi-queue-max-queues", 1);···24262381 return err;24272382 if (np->netback_has_xdp_headroom)24282383 pr_info("backend supports XDP headroom\n");23842384+ if (np->bounce)23852385+ dev_info(&np->xbdev->dev,23862386+ "bouncing transmitted data to zeroed pages\n");2429238724302388 /* talk_to_netback() sets the correct number of queues */24312389 num_queues = dev->real_num_tx_queues;
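`bounce_skb()` above rounds the allocation up to a whole number of `XEN_PAGE_SIZE` pages and zeroes it, so the pages granted to an untrusted backend never carry stray bytes from neighbouring allocations. The rounding is the standard power-of-two `ALIGN()`; a minimal sketch, assuming the usual 4 KiB Xen page size:

```c
#include <assert.h>
#include <stddef.h>

#define XEN_PAGE_SIZE 4096UL

/* Power-of-two round-up, as the kernel's ALIGN() macro does. */
static size_t align_up(size_t size, size_t a)
{
	return (size + a - 1) & ~(a - 1);
}

/* Size the bounce buffer as bounce_skb() does: end offset of the linear
 * area plus the paged data length, rounded to full pages so no partial
 * page is ever granted. */
static size_t bounce_size(size_t end_offset, size_t data_len)
{
	return align_up(end_offset + data_len, XEN_PAGE_SIZE);
}
```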
drivers/nfc/nfcmrvl/i2c.c | +3 -3
···
 	pdata->irq_polarity = IRQF_TRIGGER_RISING;
 
 	ret = irq_of_parse_and_map(node, 0);
-	if (ret < 0) {
-		pr_err("Unable to get irq, error: %d\n", ret);
-		return ret;
+	if (!ret) {
+		pr_err("Unable to get irq\n");
+		return -EINVAL;
 	}
 	pdata->irq = ret;
drivers/nfc/nfcmrvl/spi.c | +3 -3
···
 	}
 
 	ret = irq_of_parse_and_map(node, 0);
-	if (ret < 0) {
-		pr_err("Unable to get irq, error: %d\n", ret);
-		return ret;
+	if (!ret) {
+		pr_err("Unable to get irq\n");
+		return -EINVAL;
 	}
 	pdata->irq = ret;
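Both nfcmrvl fixes above correct the same mistake: `irq_of_parse_and_map()` returns an unsigned virq number, with 0 on failure; it never returns a negative errno, so the old `ret < 0` check could not fire. The corrected pattern, with a hypothetical stand-in for the lookup so it runs in user space:

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for irq_of_parse_and_map(): returns a virq number,
 * or 0 on failure -- never a negative errno. */
static unsigned int fake_parse_and_map(int present)
{
	return present ? 42u : 0u;
}

/* The corrected check from the hunks above: treat 0 as failure and
 * translate it to -EINVAL ourselves. */
static int get_irq(int present, unsigned int *irq)
{
	unsigned int ret = fake_parse_and_map(present);

	if (!ret)
		return -EINVAL;
	*irq = ret;
	return 0;
}
```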
drivers/nfc/nxp-nci/i2c.c | +9 -2
···
 	skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN);
 
 	r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len);
-	if (r != frame_len) {
+	if (r < 0) {
+		goto fw_read_exit_free_skb;
+	} else if (r != frame_len) {
 		nfc_err(&client->dev,
 			"Invalid frame length: %u (expected %zu)\n",
 			r, frame_len);
···
 
 	skb_put_data(*skb, (void *)&header, NCI_CTRL_HDR_SIZE);
 
+	if (!header.plen)
+		return 0;
+
 	r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen);
-	if (r != header.plen) {
+	if (r < 0) {
+		goto nci_read_exit_free_skb;
+	} else if (r != header.plen) {
 		nfc_err(&client->dev,
 			"Invalid frame payload length: %u (expected %u)\n",
 			r, header.plen);
drivers/nvdimm/bus.c | +2 -2
···
 	ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1;
 
 	/* make sure we are in the region */
-	if (ctx->phys < nd_region->ndr_start
-	    || (ctx->phys + ctx->cleared) > ndr_end)
+	if (ctx->phys < nd_region->ndr_start ||
+	    (ctx->phys + ctx->cleared - 1) > ndr_end)
 		return 0;
 
 	sector = (ctx->phys - nd_region->ndr_start) / 512;
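The nvdimm change is an off-by-one fix: `ndr_end` is the region's last valid byte (inclusive), so the last byte touched by the clear, `phys + cleared - 1`, is what must be compared against it; the old code compared the first byte *past* the cleared range, rejecting a clear that exactly reaches the region end. Sketched:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Inclusive-bounds containment test, as in the corrected hunk above:
 * ndr_start/ndr_size describe the region, ndr_end its last valid byte. */
static bool range_in_region(uint64_t phys, uint64_t cleared,
			    uint64_t ndr_start, uint64_t ndr_size)
{
	uint64_t ndr_end = ndr_start + ndr_size - 1;

	return phys >= ndr_start && (phys + cleared - 1) <= ndr_end;
}
```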
drivers/nvme/host/core.c | +2
···
 	nvme_stop_failfast_work(ctrl);
 	flush_work(&ctrl->async_event_work);
 	cancel_work_sync(&ctrl->fw_act_work);
+	if (ctrl->ops->stop_ctrl)
+		ctrl->ops->stop_ctrl(ctrl);
 }
 EXPORT_SYMBOL_GPL(nvme_stop_ctrl);
···
 	tristate "Panasonic Laptop Extras"
 	depends on INPUT && ACPI
 	depends on BACKLIGHT_CLASS_DEVICE
+	depends on ACPI_VIDEO=n || ACPI_VIDEO
+	depends on SERIO_I8042 || SERIO_I8042 = n
 	select INPUT_SPARSEKMAP
 	help
 	  This driver adds support for access to backlight control and hotkeys
···119119 * - v0.1 start from toshiba_acpi driver written by John Belmonte120120 */121121122122-#include <linux/kernel.h>123123-#include <linux/module.h>124124-#include <linux/init.h>125125-#include <linux/types.h>122122+#include <linux/acpi.h>126123#include <linux/backlight.h>127124#include <linux/ctype.h>128128-#include <linux/seq_file.h>129129-#include <linux/uaccess.h>130130-#include <linux/slab.h>131131-#include <linux/acpi.h>125125+#include <linux/i8042.h>126126+#include <linux/init.h>132127#include <linux/input.h>133128#include <linux/input/sparse-keymap.h>129129+#include <linux/kernel.h>130130+#include <linux/module.h>134131#include <linux/platform_device.h>135135-132132+#include <linux/seq_file.h>133133+#include <linux/serio.h>134134+#include <linux/slab.h>135135+#include <linux/types.h>136136+#include <linux/uaccess.h>137137+#include <acpi/video.h>136138137139MODULE_AUTHOR("Hiroshi Miura <miura@da-cha.org>");138140MODULE_AUTHOR("David Bronaugh <dbronaugh@linuxboxen.org>");···242240 struct backlight_device *backlight;243241 struct platform_device *platform;244242};243243+244244+/*245245+ * On some Panasonic models the volume up / down / mute keys send duplicate246246+ * keypress events over the PS/2 kbd interface, filter these out.247247+ */248248+static bool panasonic_i8042_filter(unsigned char data, unsigned char str,249249+ struct serio *port)250250+{251251+ static bool extended;252252+253253+ if (str & I8042_STR_AUXDATA)254254+ return false;255255+256256+ if (data == 0xe0) {257257+ extended = true;258258+ return true;259259+ } else if (extended) {260260+ extended = false;261261+262262+ switch (data & 0x7f) {263263+ case 0x20: /* e0 20 / e0 a0, Volume Mute press / release */264264+ case 0x2e: /* e0 2e / e0 ae, Volume Down press / release */265265+ case 0x30: /* e0 30 / e0 b0, Volume Up press / release */266266+ return true;267267+ default:268268+ /*269269+ * Report the previously filtered e0 before continuing270270+ * with the next non-filtered 
byte.271271+ */272272+ serio_interrupt(port, 0xe0, 0);273273+ return false;274274+ }275275+ }276276+277277+ return false;278278+}245279246280/* method access functions */247281static int acpi_pcc_write_sset(struct pcc_acpi *pcc, int func, int val)···800762 struct input_dev *hotk_input_dev = pcc->input_dev;801763 int rc;802764 unsigned long long result;765765+ unsigned int key;766766+ unsigned int updown;803767804768 rc = acpi_evaluate_integer(pcc->handle, METHOD_HKEY_QUERY,805769 NULL, &result);···810770 return;811771 }812772773773+ key = result & 0xf;774774+ updown = result & 0x80; /* 0x80 == key down; 0x00 = key up */775775+813776 /* hack: some firmware sends no key down for sleep / hibernate */814814- if ((result & 0xf) == 0x7 || (result & 0xf) == 0xa) {815815- if (result & 0x80)777777+ if (key == 7 || key == 10) {778778+ if (updown)816779 sleep_keydown_seen = 1;817780 if (!sleep_keydown_seen)818781 sparse_keymap_report_event(hotk_input_dev,819819- result & 0xf, 0x80, false);782782+ key, 0x80, false);820783 }821784822822- if ((result & 0xf) == 0x7 || (result & 0xf) == 0x9 || (result & 0xf) == 0xa) {823823- if (!sparse_keymap_report_event(hotk_input_dev,824824- result & 0xf, result & 0x80, false))825825- pr_err("Unknown hotkey event: 0x%04llx\n", result);826826- }785785+ /*786786+ * Don't report brightness key-presses if they are also reported787787+ * by the ACPI video bus.788788+ */789789+ if ((key == 1 || key == 2) && acpi_video_handles_brightness_key_presses())790790+ return;791791+792792+ if (!sparse_keymap_report_event(hotk_input_dev, key, updown, false))793793+ pr_err("Unknown hotkey event: 0x%04llx\n", result);827794}828795829796static void acpi_pcc_hotkey_notify(struct acpi_device *device, u32 event)···1044997 pcc->platform = NULL;1045998 }104699910001000+ i8042_install_filter(panasonic_i8042_filter);10471001 return 0;1048100210491003out_platform:···1067101910681020 if (!device || !pcc)10691021 return -EINVAL;10221022+10231023+ 
i8042_remove_filter(panasonic_i8042_filter);1070102410711025 if (pcc->platform) {10721026 device_remove_file(&pcc->platform->dev, &dev_attr_cdpower);
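`panasonic_i8042_filter()` above is a tiny two-state machine: an `0xe0` prefix byte is held back, and if the next byte is one of the volume scancodes the whole two-byte sequence is swallowed; otherwise the held `0xe0` must be replayed before the new byte passes through. A user-space re-statement (the `serio_interrupt()` replay side effect is modelled as an output flag):

```c
#include <assert.h>
#include <stdbool.h>

/* Returns true if the byte should be swallowed; *replay_e0 is set when
 * a previously held 0xe0 prefix must be re-injected ahead of this byte. */
static bool filter_byte(unsigned char data, bool *extended, bool *replay_e0)
{
	*replay_e0 = false;

	if (data == 0xe0) {
		*extended = true;
		return true;		/* hold the prefix back */
	}
	if (*extended) {
		*extended = false;
		switch (data & 0x7f) {
		case 0x20:		/* Volume Mute press / release */
		case 0x2e:		/* Volume Down press / release */
		case 0x30:		/* Volume Up press / release */
			return true;	/* drop the whole e0 xx pair */
		default:
			*replay_e0 = true; /* put the e0 back first */
			return false;
		}
	}
	return false;
}
```

Masking with `0x7f` makes each case match both the press scancode and its release counterpart (`data | 0x80`), exactly as the comments in the hunk describe.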
drivers/platform/x86/thinkpad_acpi.c | +24 -27
···45294529 iounmap(addr);45304530cleanup_resource:45314531 release_resource(res);45324532+ kfree(res);45324533}4533453445344535static struct acpi_s2idle_dev_ops thinkpad_acpi_s2idle_dev_ops = {···1030010299#define DYTC_DISABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 0)1030110300#define DYTC_ENABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 1)10302103011030310303-enum dytc_profile_funcmode {1030410304- DYTC_FUNCMODE_NONE = 0,1030510305- DYTC_FUNCMODE_MMC,1030610306- DYTC_FUNCMODE_PSC,1030710307-};1030810308-1030910309-static enum dytc_profile_funcmode dytc_profile_available;1031010302static enum platform_profile_option dytc_current_profile;1031110303static atomic_t dytc_ignore_event = ATOMIC_INIT(0);1031210304static DEFINE_MUTEX(dytc_mutex);1030510305+static int dytc_capabilities;1031310306static bool dytc_mmc_get_available;10314103071031510308static int convert_dytc_to_profile(int dytcmode, enum platform_profile_option *profile)1031610309{1031710317- if (dytc_profile_available == DYTC_FUNCMODE_MMC) {1031010310+ if (dytc_capabilities & BIT(DYTC_FC_MMC)) {1031810311 switch (dytcmode) {1031910312 case DYTC_MODE_MMC_LOWPOWER:1032010313 *profile = PLATFORM_PROFILE_LOW_POWER;···1032510330 }1032610331 return 0;1032710332 }1032810328- if (dytc_profile_available == DYTC_FUNCMODE_PSC) {1033310333+ if (dytc_capabilities & BIT(DYTC_FC_PSC)) {1032910334 switch (dytcmode) {1033010335 case DYTC_MODE_PSC_LOWPOWER:1033110336 *profile = PLATFORM_PROFILE_LOW_POWER;···1034710352{1034810353 switch (profile) {1034910354 case PLATFORM_PROFILE_LOW_POWER:1035010350- if (dytc_profile_available == DYTC_FUNCMODE_MMC)1035510355+ if (dytc_capabilities & BIT(DYTC_FC_MMC))1035110356 *perfmode = DYTC_MODE_MMC_LOWPOWER;1035210352- else if (dytc_profile_available == DYTC_FUNCMODE_PSC)1035710357+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))1035310358 *perfmode = DYTC_MODE_PSC_LOWPOWER;1035410359 break;1035510360 case PLATFORM_PROFILE_BALANCED:1035610356- 
if (dytc_profile_available == DYTC_FUNCMODE_MMC)1036110361+ if (dytc_capabilities & BIT(DYTC_FC_MMC))1035710362 *perfmode = DYTC_MODE_MMC_BALANCE;1035810358- else if (dytc_profile_available == DYTC_FUNCMODE_PSC)1036310363+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))1035910364 *perfmode = DYTC_MODE_PSC_BALANCE;1036010365 break;1036110366 case PLATFORM_PROFILE_PERFORMANCE:1036210362- if (dytc_profile_available == DYTC_FUNCMODE_MMC)1036710367+ if (dytc_capabilities & BIT(DYTC_FC_MMC))1036310368 *perfmode = DYTC_MODE_MMC_PERFORM;1036410364- else if (dytc_profile_available == DYTC_FUNCMODE_PSC)1036910369+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))1036510370 *perfmode = DYTC_MODE_PSC_PERFORM;1036610371 break;1036710372 default: /* Unknown profile */···1044010445 if (err)1044110446 goto unlock;10442104471044310443- if (dytc_profile_available == DYTC_FUNCMODE_MMC) {1044810448+ if (dytc_capabilities & BIT(DYTC_FC_MMC)) {1044410449 if (profile == PLATFORM_PROFILE_BALANCED) {1044510450 /*1044610451 * To get back to balanced mode we need to issue a reset command.···1045910464 goto unlock;1046010465 }1046110466 }1046210462- if (dytc_profile_available == DYTC_FUNCMODE_PSC) {1046710467+ if (dytc_capabilities & BIT(DYTC_FC_PSC)) {1046310468 err = dytc_command(DYTC_SET_COMMAND(DYTC_FUNCTION_PSC, perfmode, 1), &output);1046410469 if (err)1046510470 goto unlock;···1047810483 int perfmode;10479104841048010485 mutex_lock(&dytc_mutex);1048110481- if (dytc_profile_available == DYTC_FUNCMODE_MMC) {1048610486+ if (dytc_capabilities & BIT(DYTC_FC_MMC)) {1048210487 if (dytc_mmc_get_available)1048310488 err = dytc_command(DYTC_CMD_MMC_GET, &output);1048410489 else1048510490 err = dytc_cql_command(DYTC_CMD_GET, &output);1048610486- } else if (dytc_profile_available == DYTC_FUNCMODE_PSC)1049110491+ } else if (dytc_capabilities & BIT(DYTC_FC_PSC))1048710492 err = dytc_command(DYTC_CMD_GET, &output);10488104931048910494 mutex_unlock(&dytc_mutex);···1051210517 
set_bit(PLATFORM_PROFILE_BALANCED, dytc_profile.choices);1051310518 set_bit(PLATFORM_PROFILE_PERFORMANCE, dytc_profile.choices);10514105191051510515- dytc_profile_available = DYTC_FUNCMODE_NONE;1051610520 err = dytc_command(DYTC_CMD_QUERY, &output);1051710521 if (err)1051810522 return err;···1052410530 return -ENODEV;10525105311052610532 /* Check what capabilities are supported */1052710527- err = dytc_command(DYTC_CMD_FUNC_CAP, &output);1053310533+ err = dytc_command(DYTC_CMD_FUNC_CAP, &dytc_capabilities);1052810534 if (err)1052910535 return err;10530105361053110531- if (output & BIT(DYTC_FC_MMC)) { /* MMC MODE */1053210532- dytc_profile_available = DYTC_FUNCMODE_MMC;1053310533-1053710537+ if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */1053810538+ pr_debug("MMC is supported\n");1053410539 /*1053510540 * Check if MMC_GET functionality available1053610541 * Version > 6 and return success from MMC_GET command···1054010547 if (!err && ((output & DYTC_ERR_MASK) == DYTC_ERR_SUCCESS))1054110548 dytc_mmc_get_available = true;1054210549 }1054310543- } else if (output & BIT(DYTC_FC_PSC)) { /* PSC MODE */1054410544- dytc_profile_available = DYTC_FUNCMODE_PSC;1055010550+ } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */1055110551+ /* Support for this only works on AMD platforms */1055210552+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {1055310553+ dbg_printk(TPACPI_DBG_INIT, "PSC not support on Intel platforms\n");1055410554+ return -ENODEV;1055510555+ }1055610556+ pr_debug("PSC is supported\n");1054510557 } else {1054610558 dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");1054710559 return -ENODEV;···10572105741057310575static void dytc_profile_exit(void)1057410576{1057510575- dytc_profile_available = DYTC_FUNCMODE_NONE;1057610577 platform_profile_remove();1057710578}1057810579
drivers/s390/char/sclp.c | +1 -1
···
 /* List of queued requests. */
 static LIST_HEAD(sclp_req_queue);
 
-/* Data for read and and init requests. */
+/* Data for read and init requests. */
 static struct sclp_req sclp_read_req;
 static struct sclp_req sclp_init_req;
 static void *sclp_read_sccb;
drivers/s390/virtio/virtio_ccw.c | +8 -1
···
 		vcdev->err = -EIO;
 	}
 	virtio_ccw_check_activity(vcdev, activity);
-	/* Interrupts are disabled here */
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
+	/*
+	 * Paired with virtio_ccw_synchronize_cbs() and interrupts are
+	 * disabled here.
+	 */
 	read_lock(&vcdev->irq_lock);
+#endif
 	for_each_set_bit(i, indicators(vcdev),
 			 sizeof(*indicators(vcdev)) * BITS_PER_BYTE) {
 		/* The bit clear must happen before the vring kick. */
···
 		vq = virtio_ccw_vq_by_ind(vcdev, i);
 		vring_interrupt(0, vq);
 	}
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
 	read_unlock(&vcdev->irq_lock);
+#endif
 	if (test_bit(0, indicators2(vcdev))) {
 		virtio_config_changed(&vcdev->vdev);
 		clear_bit(0, indicators2(vcdev));
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | +7
···
 	struct hisi_hba *hisi_hba = shost_priv(shost);
 	struct device *dev = hisi_hba->dev;
 	int ret = sas_slave_configure(sdev);
+	unsigned int max_sectors;
 
 	if (ret)
 		return ret;
···
 			pm_runtime_disable(dev);
 		}
 	}
+
+	/* Set according to IOMMU IOVA caching limit */
+	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+
+	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
 
 	return 0;
 }
···
 	if (charcount != 256 && charcount != 512)
 		return -EINVAL;
 
+	/* font bigger than screen resolution ? */
+	if (w > FBCON_SWAP(info->var.rotate, info->var.xres, info->var.yres) ||
+	    h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
+		return -EINVAL;
+
 	/* Make sure drawing engine can handle the font */
 	if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
 	    !(info->pixmap.blit_y & (1 << (font->height - 1))))
···
 		fbcon_modechanged(info);
 }
 EXPORT_SYMBOL(fbcon_update_vcs);
+
+/* let fbcon check if it supports a new screen resolution */
+int fbcon_modechange_possible(struct fb_info *info, struct fb_var_screeninfo *var)
+{
+	struct fbcon_ops *ops = info->fbcon_par;
+	struct vc_data *vc;
+	unsigned int i;
+
+	WARN_CONSOLE_UNLOCKED();
+
+	if (!ops)
+		return 0;
+
+	/* prevent setting a screen size which is smaller than font size */
+	for (i = first_fb_vc; i <= last_fb_vc; i++) {
+		vc = vc_cons[i].d;
+		if (!vc || vc->vc_mode != KD_TEXT ||
+		    fbcon_info_from_console(i) != info)
+			continue;
+
+		if (vc->vc_font.width > FBCON_SWAP(var->rotate, var->xres, var->yres) ||
+		    vc->vc_font.height > FBCON_SWAP(var->rotate, var->yres, var->xres))
+			return -EINVAL;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fbcon_modechange_possible);
 
 int fbcon_mode_deleted(struct fb_info *info,
 		       struct fb_videomode *mode)
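The fbcon checks above compare font dimensions against the screen resolution through `FBCON_SWAP()`, which exchanges the roles of `xres` and `yres` when the console is rotated by 90 or 270 degrees. A sketch of that comparison, with `FBCON_SWAP()` re-stated locally as a function (the `FB_ROTATE_*` values follow the fb.h convention; this is an illustration, not the kernel macro):

```c
#include <assert.h>
#include <stdbool.h>

#define FB_ROTATE_UR  0	/* upright */
#define FB_ROTATE_CW  1	/* 90 degrees */
#define FB_ROTATE_UD  2	/* 180 degrees */
#define FB_ROTATE_CCW 3	/* 270 degrees */

/* For 90/270 degree rotation the horizontal and vertical resolutions
 * swap roles, as FBCON_SWAP() arranges in the hunks above. */
static unsigned int fbcon_swap(int rotate, unsigned int r, unsigned int v)
{
	return (rotate == FB_ROTATE_CW || rotate == FB_ROTATE_CCW) ? v : r;
}

/* The new sanity check: reject a font larger than the (possibly
 * rotated) screen resolution. */
static bool font_fits(int rotate, unsigned int xres, unsigned int yres,
		      unsigned int w, unsigned int h)
{
	return w <= fbcon_swap(rotate, xres, yres) &&
	       h <= fbcon_swap(rotate, yres, xres);
}
```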
drivers/video/fbdev/core/fbmem.c | +26 -2
···
 #include <linux/kernel.h>
 #include <linux/major.h>
 #include <linux/slab.h>
+#include <linux/sysfb.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
 #include <linux/vt.h>
···
 
 		while (n && (n * (logo->width + 8) - 8 > xres))
 			--n;
-		image.dx = (xres - n * (logo->width + 8) - 8) / 2;
+		image.dx = (xres - (n * (logo->width + 8) - 8)) / 2;
 		image.dy = y ?: (yres - logo->height) / 2;
 	} else {
 		image.dx = 0;
···
 	if (ret)
 		return ret;
 
+	/* verify that virtual resolution >= physical resolution */
+	if (var->xres_virtual < var->xres ||
+	    var->yres_virtual < var->yres) {
+		pr_warn("WARNING: fbcon: Driver '%s' missed to adjust virtual screen size (%ux%u vs. %ux%u)\n",
+			info->fix.id,
+			var->xres_virtual, var->yres_virtual,
+			var->xres, var->yres);
+		return -EINVAL;
+	}
+
 	if ((var->activate & FB_ACTIVATE_MASK) != FB_ACTIVATE_NOW)
 		return 0;
···
 			return -EFAULT;
 		console_lock();
 		lock_fb_info(info);
-		ret = fb_set_var(info, &var);
+		ret = fbcon_modechange_possible(info, &var);
+		if (!ret)
+			ret = fb_set_var(info, &var);
 		if (!ret)
 			fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL);
 		unlock_fb_info(info);
···
 		a->ranges[0].size = ~0;
 		do_free = true;
 	}
+
+	/*
+	 * If a driver asked to unregister a platform device registered by
+	 * sysfb, then can be assumed that this is a driver for a display
+	 * that is set up by the system firmware and has a generic driver.
+	 *
+	 * Drivers for devices that don't have a generic driver will never
+	 * ask for this, so let's assume that a real driver for the display
+	 * was already probed and prevent sysfb to register devices later.
+	 */
+	sysfb_disable();
 
 	mutex_lock(&registration_lock);
 	do_remove_conflicting_framebuffers(a, name, primary);
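The one-character logo fix in fbmem.c above is operator precedence: a row of `n` logos is `n * (width + 8) - 8` pixels wide (an 8 px gap between logos, none after the last), and that whole row width must be subtracted from `xres` before halving. The corrected expression:

```c
#include <assert.h>

/* Centered x position for a row of n logos, each `width` px wide with
 * an 8 px gap between them -- the corrected expression from the hunk. */
static unsigned int logo_dx(unsigned int xres, unsigned int n,
			    unsigned int width)
{
	return (xres - (n * (width + 8) - 8)) / 2;
}
```

Without the inner parentheses the old code subtracted an extra 8 pixels, shifting the logo row off-center to the left.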
drivers/virtio/Kconfig | +13
···
 
 if VIRTIO_MENU
 
+config VIRTIO_HARDEN_NOTIFICATION
+	bool "Harden virtio notification"
+	help
+	  Enable this to harden the device notifications and suppress
+	  those that happen at a time where notifications are illegal.
+
+	  Experimental: Note that several drivers still have bugs that
+	  may cause crashes or hangs when correct handling of
+	  notifications is enforced; depending on the subset of
+	  drivers and devices you use, this may or may not work.
+
+	  If unsure, say N.
+
 config VIRTIO_PCI
 	tristate "PCI driver for virtio devices"
 	depends on PCI
drivers/virtio/virtio.c | +2
···
  * */
 void virtio_reset_device(struct virtio_device *dev)
 {
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
 	/*
 	 * The below virtio_synchronize_cbs() guarantees that any
 	 * interrupt for this line arriving after
···
 	 */
 	virtio_break_device(dev);
 	virtio_synchronize_cbs(dev);
+#endif
 
 	dev->config->reset(dev);
 }
···
 
 	check_offsets();
 
-	mdev->pci_dev = pci_dev;
-
 	/* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */
 	if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f)
 		return -ENODEV;
drivers/virtio/virtio_ring.c | +60 -29
···111111 /* Number we've added since last sync. */112112 unsigned int num_added;113113114114- /* Last used index we've seen. */114114+ /* Last used index we've seen.115115+ * for split ring, it just contains last used index116116+ * for packed ring:117117+ * bits up to VRING_PACKED_EVENT_F_WRAP_CTR include the last used index.118118+ * bits from VRING_PACKED_EVENT_F_WRAP_CTR include the used wrap counter.119119+ */115120 u16 last_used_idx;116121117122 /* Hint for event idx: already triggered no need to disable. */···158153159154 /* Driver ring wrap counter. */160155 bool avail_wrap_counter;161161-162162- /* Device ring wrap counter. */163163- bool used_wrap_counter;164156165157 /* Avail used flags. */166158 u16 avail_used_flags;···935933 for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {936934 queue = vring_alloc_queue(vdev, vring_size(num, vring_align),937935 &dma_addr,938938- GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);936936+ GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);939937 if (queue)940938 break;941939 if (!may_reduce_num)···975973/*976974 * Packed ring specific functions - *_packed().977975 */976976+static inline bool packed_used_wrap_counter(u16 last_used_idx)977977+{978978+ return !!(last_used_idx & (1 << VRING_PACKED_EVENT_F_WRAP_CTR));979979+}980980+981981+static inline u16 packed_last_used(u16 last_used_idx)982982+{983983+ return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));984984+}978985979986static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,980987 struct vring_desc_extra *extra)···1417140614181407static inline bool more_used_packed(const struct vring_virtqueue *vq)14191408{14201420- return is_used_desc_packed(vq, vq->last_used_idx,14211421- vq->packed.used_wrap_counter);14091409+ u16 last_used;14101410+ u16 last_used_idx;14111411+ bool used_wrap_counter;14121412+14131413+ last_used_idx = READ_ONCE(vq->last_used_idx);14141414+ last_used = packed_last_used(last_used_idx);14151415+ used_wrap_counter = 
packed_used_wrap_counter(last_used_idx);
+	return is_used_desc_packed(vq, last_used, used_wrap_counter);
 }
 
 static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
···
 					  void **ctx)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 last_used, id;
+	u16 last_used, id, last_used_idx;
+	bool used_wrap_counter;
 	void *ret;
 
 	START_USE(vq);
···
 	/* Only get used elements after they have been exposed by host. */
 	virtio_rmb(vq->weak_barriers);
 
-	last_used = vq->last_used_idx;
+	last_used_idx = READ_ONCE(vq->last_used_idx);
+	used_wrap_counter = packed_used_wrap_counter(last_used_idx);
+	last_used = packed_last_used(last_used_idx);
 	id = le16_to_cpu(vq->packed.vring.desc[last_used].id);
 	*len = le32_to_cpu(vq->packed.vring.desc[last_used].len);
···
 	ret = vq->packed.desc_state[id].data;
 	detach_buf_packed(vq, id, ctx);
 
-	vq->last_used_idx += vq->packed.desc_state[id].num;
-	if (unlikely(vq->last_used_idx >= vq->packed.vring.num)) {
-		vq->last_used_idx -= vq->packed.vring.num;
-		vq->packed.used_wrap_counter ^= 1;
+	last_used += vq->packed.desc_state[id].num;
+	if (unlikely(last_used >= vq->packed.vring.num)) {
+		last_used -= vq->packed.vring.num;
+		used_wrap_counter ^= 1;
 	}
+
+	last_used = (last_used | (used_wrap_counter << VRING_PACKED_EVENT_F_WRAP_CTR));
+	WRITE_ONCE(vq->last_used_idx, last_used);
 
 	/*
 	 * If we expect an interrupt for the next entry, tell host
···
 	if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC)
 		virtio_store_mb(vq->weak_barriers,
 				&vq->packed.vring.driver->off_wrap,
-				cpu_to_le16(vq->last_used_idx |
-					(vq->packed.used_wrap_counter <<
-					 VRING_PACKED_EVENT_F_WRAP_CTR)));
+				cpu_to_le16(vq->last_used_idx));
 
 	LAST_ADD_TIME_INVALID(vq);
···
 	if (vq->event) {
 		vq->packed.vring.driver->off_wrap =
-			cpu_to_le16(vq->last_used_idx |
-				(vq->packed.used_wrap_counter <<
-				 VRING_PACKED_EVENT_F_WRAP_CTR));
+			cpu_to_le16(vq->last_used_idx);
 		/*
 		 * We need to update event offset and event wrap
 		 * counter first before updating event flags.
···
 	}
 
 	END_USE(vq);
-	return vq->last_used_idx | ((u16)vq->packed.used_wrap_counter <<
-			VRING_PACKED_EVENT_F_WRAP_CTR);
+	return vq->last_used_idx;
 }
 
 static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap)
···
 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 used_idx, wrap_counter;
+	u16 used_idx, wrap_counter, last_used_idx;
 	u16 bufs;
 
 	START_USE(vq);
···
 	if (vq->event) {
 		/* TODO: tune this threshold */
 		bufs = (vq->packed.vring.num - vq->vq.num_free) * 3 / 4;
-		wrap_counter = vq->packed.used_wrap_counter;
+		last_used_idx = READ_ONCE(vq->last_used_idx);
+		wrap_counter = packed_used_wrap_counter(last_used_idx);
 
-		used_idx = vq->last_used_idx + bufs;
+		used_idx = packed_last_used(last_used_idx) + bufs;
 		if (used_idx >= vq->packed.vring.num) {
 			used_idx -= vq->packed.vring.num;
 			wrap_counter ^= 1;
···
 	 */
 	virtio_mb(vq->weak_barriers);
 
-	if (is_used_desc_packed(vq,
-				vq->last_used_idx,
-				vq->packed.used_wrap_counter)) {
+	last_used_idx = READ_ONCE(vq->last_used_idx);
+	wrap_counter = packed_used_wrap_counter(last_used_idx);
+	used_idx = packed_last_used(last_used_idx);
+	if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
 		END_USE(vq);
 		return false;
 	}
···
 	vq->we_own_ring = true;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
 	vq->broken = true;
-	vq->last_used_idx = 0;
+#else
+	vq->broken = false;
+#endif
+	vq->last_used_idx = 0 | (1 << VRING_PACKED_EVENT_F_WRAP_CTR);
 	vq->event_triggered = false;
 	vq->num_added = 0;
 	vq->packed_ring = true;
···
 
 	vq->packed.next_avail_idx = 0;
 	vq->packed.avail_wrap_counter = 1;
-	vq->packed.used_wrap_counter = 1;
 	vq->packed.event_flags_shadow = 0;
 	vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL;
···
 	}
 
 	if (unlikely(vq->broken)) {
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
 		dev_warn_once(&vq->vq.vdev->dev,
 			      "virtio vring IRQ raised before DRIVER_OK");
 		return IRQ_NONE;
+#else
+		return IRQ_HANDLED;
+#endif
 	}
 
 	/* Just a hint for performance: so it's ok that this can be racy! */
···
 	vq->we_own_ring = false;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION
 	vq->broken = true;
+#else
+	vq->broken = false;
+#endif
 	vq->last_used_idx = 0;
 	vq->event_triggered = false;
 	vq->num_added = 0;
···
 {
 	wait_var_event_timeout(&candidate->flags,
 			       !fscache_is_acquire_pending(candidate), 20 * HZ);
-	if (!fscache_is_acquire_pending(candidate)) {
+	if (fscache_is_acquire_pending(candidate)) {
 		pr_notice("Potential volume collision new=%08x old=%08x",
 			  candidate->debug_id, collidee_debug_id);
 		fscache_stat(&fscache_n_volumes_collision);
···
 	hlist_bl_add_head(&candidate->hash_link, h);
 	hlist_bl_unlock(h);
 
-	if (test_bit(FSCACHE_VOLUME_ACQUIRE_PENDING, &candidate->flags))
+	if (fscache_is_acquire_pending(candidate))
 		fscache_wait_on_volume_collision(candidate, collidee_debug_id);
 	return true;
 
+16 -8
fs/io_uring.c
···
 		.unbound_nonreg_file	= 1,
 		.pollout		= 1,
 		.needs_async_setup	= 1,
+		.ioprio			= 1,
 		.async_size		= sizeof(struct io_async_msghdr),
 	},
 	[IORING_OP_RECVMSG] = {
···
 		.pollin			= 1,
 		.buffer_select		= 1,
 		.needs_async_setup	= 1,
+		.ioprio			= 1,
 		.async_size		= sizeof(struct io_async_msghdr),
 	},
 	[IORING_OP_TIMEOUT] = {
···
 		.unbound_nonreg_file	= 1,
 		.pollout		= 1,
 		.audit_skip		= 1,
+		.ioprio			= 1,
 	},
 	[IORING_OP_RECV] = {
 		.needs_file		= 1,
···
 		.pollin			= 1,
 		.buffer_select		= 1,
 		.audit_skip		= 1,
+		.ioprio			= 1,
 	},
 	[IORING_OP_OPENAT2] = {
 	},
···
 		if (unlikely(ret < 0))
 			return ret;
 	} else {
+		rw = req->async_data;
+		s = &rw->s;
+
 		/*
 		 * Safe and required to re-import if we're using provided
 		 * buffers, as we dropped the selected one before retry.
 		 */
-		if (req->flags & REQ_F_BUFFER_SELECT) {
+		if (io_do_buffer_select(req)) {
 			ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
 			if (unlikely(ret < 0))
 				return ret;
 		}
 
-		rw = req->async_data;
-		s = &rw->s;
 		/*
 		 * We come here from an earlier attempt, restore our state to
 		 * match in case it doesn't. It's cheap enough that we don't
···
 {
 	struct io_uring_cmd *ioucmd = &req->uring_cmd;
 
-	if (sqe->rw_flags)
+	if (sqe->rw_flags || sqe->__pad1)
 		return -EINVAL;
 	ioucmd->cmd = sqe->cmd;
 	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
···
 {
 	struct io_sr_msg *sr = &req->sr_msg;
 
-	if (unlikely(sqe->file_index))
+	if (unlikely(sqe->file_index || sqe->addr2))
 		return -EINVAL;
 
 	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
 	sr->len = READ_ONCE(sqe->len);
-	sr->flags = READ_ONCE(sqe->addr2);
+	sr->flags = READ_ONCE(sqe->ioprio);
 	if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
 		return -EINVAL;
 	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
···
 {
 	struct io_sr_msg *sr = &req->sr_msg;
 
-	if (unlikely(sqe->file_index))
+	if (unlikely(sqe->file_index || sqe->addr2))
 		return -EINVAL;
 
 	sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
 	sr->len = READ_ONCE(sqe->len);
-	sr->flags = READ_ONCE(sqe->addr2);
+	sr->flags = READ_ONCE(sqe->ioprio);
 	if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
 		return -EINVAL;
 	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
···
 	unsigned int done;
 	struct file *file;
 	int ret, fd;
+
+	if (!req->ctx->file_data)
+		return -ENXIO;
 
 	for (done = 0; done < req->rsrc_update.nr_args; done++) {
 		if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
+31 -17
fs/ksmbd/smb2pdu.c
···
 		goto out;
 	}
 
+	ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags));
 	if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH)
 		writethrough = true;
···
 	}
 	data_buf = (char *)(((char *)&req->hdr.ProtocolId) +
 			    le16_to_cpu(req->DataOffset));
-
-	ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags));
-	if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH)
-		writethrough = true;
 
 	ksmbd_debug(SMB, "filename %pd, offset %lld, len %zu\n",
 		    fp->filp->f_path.dentry, offset, length);
···
 	{
 		struct file_zero_data_information *zero_data;
 		struct ksmbd_file *fp;
-		loff_t off, len;
+		loff_t off, len, bfz;
 
 		if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
 			ksmbd_debug(SMB,
···
 		zero_data =
 			(struct file_zero_data_information *)&req->Buffer[0];
 
-		fp = ksmbd_lookup_fd_fast(work, id);
-		if (!fp) {
-			ret = -ENOENT;
+		off = le64_to_cpu(zero_data->FileOffset);
+		bfz = le64_to_cpu(zero_data->BeyondFinalZero);
+		if (off > bfz) {
+			ret = -EINVAL;
 			goto out;
 		}
 
-		off = le64_to_cpu(zero_data->FileOffset);
-		len = le64_to_cpu(zero_data->BeyondFinalZero) - off;
+		len = bfz - off;
+		if (len) {
+			fp = ksmbd_lookup_fd_fast(work, id);
+			if (!fp) {
+				ret = -ENOENT;
+				goto out;
+			}
 
-		ret = ksmbd_vfs_zero_data(work, fp, off, len);
-		ksmbd_fd_put(work, fp);
-		if (ret < 0)
-			goto out;
+			ret = ksmbd_vfs_zero_data(work, fp, off, len);
+			ksmbd_fd_put(work, fp);
+			if (ret < 0)
+				goto out;
+		}
 		break;
 	}
 	case FSCTL_QUERY_ALLOCATED_RANGES:
···
 		src_off = le64_to_cpu(dup_ext->SourceFileOffset);
 		dst_off = le64_to_cpu(dup_ext->TargetFileOffset);
 		length = le64_to_cpu(dup_ext->ByteCount);
-		cloned = vfs_clone_file_range(fp_in->filp, src_off, fp_out->filp,
-					      dst_off, length, 0);
+		/*
+		 * XXX: It is not clear if FSCTL_DUPLICATE_EXTENTS_TO_FILE
+		 * should fall back to vfs_copy_file_range().  This could be
+		 * beneficial when re-exporting nfs/smb mount, but note that
+		 * this can result in partial copy that returns an error status.
+		 * If/when FSCTL_DUPLICATE_EXTENTS_TO_FILE_EX is implemented,
+		 * fall back to vfs_copy_file_range(), should be avoided when
+		 * the flag DUPLICATE_EXTENTS_DATA_EX_SOURCE_ATOMIC is set.
+		 */
+		cloned = vfs_clone_file_range(fp_in->filp, src_off,
+					      fp_out->filp, dst_off, length, 0);
 		if (cloned == -EXDEV || cloned == -EOPNOTSUPP) {
 			ret = -EOPNOTSUPP;
 			goto dup_ext_out;
 		} else if (cloned != length) {
 			cloned = vfs_copy_file_range(fp_in->filp, src_off,
-						     fp_out->filp, dst_off, length, 0);
+						     fp_out->filp, dst_off,
+						     length, 0);
 			if (cloned != length) {
 				if (cloned < 0)
 					ret = cloned;
-10
fs/ksmbd/transport_rdma.c
···
  *
  * Author(s): Long Li <longli@microsoft.com>,
  *	      Hyunchul Lee <hyc.lee@gmail.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
- * the GNU General Public License for more details.
  */
 
 #define SUBMOD_NAME	"smb_direct"
+1 -1
fs/ksmbd/transport_tcp.c
···
 			break;
 		}
 		ret = kernel_accept(iface->ksmbd_socket, &client_sk,
-				    O_NONBLOCK);
+				    SOCK_NONBLOCK);
 		mutex_unlock(&iface->sock_release_lock);
 		if (ret) {
 			if (ret == -EAGAIN)
+9 -3
fs/ksmbd/vfs.c
···
 				     FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
 				     off, len);
 
-	return vfs_fallocate(fp->filp, FALLOC_FL_ZERO_RANGE, off, len);
+	return vfs_fallocate(fp->filp,
+			     FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE,
+			     off, len);
 }
 
 int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
···
 	*out_count = 0;
 	end = start + length;
 	while (start < end && *out_count < in_count) {
-		extent_start = f->f_op->llseek(f, start, SEEK_DATA);
+		extent_start = vfs_llseek(f, start, SEEK_DATA);
 		if (extent_start < 0) {
 			if (extent_start != -ENXIO)
 				ret = (int)extent_start;
···
 		if (extent_start >= end)
 			break;
 
-		extent_end = f->f_op->llseek(f, extent_start, SEEK_HOLE);
+		extent_end = vfs_llseek(f, extent_start, SEEK_HOLE);
 		if (extent_end < 0) {
 			if (extent_end != -ENXIO)
 				ret = (int)extent_end;
···
 
 	ret = vfs_copy_file_range(src_fp->filp, src_off,
 				  dst_fp->filp, dst_off, len, 0);
+	if (ret == -EOPNOTSUPP || ret == -EXDEV)
+		ret = generic_copy_file_range(src_fp->filp, src_off,
+					      dst_fp->filp, dst_off,
+					      len, 0);
 	if (ret < 0)
 		return ret;
+13 -6
fs/nfs/nfs4proc.c
···
 	}
 
 	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
 	locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
-	if (page == NULL || locations == NULL)
-		goto out;
+	if (!locations)
+		goto out_free;
+	locations->fattr = nfs_alloc_fattr();
+	if (!locations->fattr)
+		goto out_free_2;
 
 	status = nfs4_proc_get_locations(server, fhandle, locations, page,
 					 cred);
 	if (status)
-		goto out;
+		goto out_free_3;
 
 	for (i = 0; i < locations->nlocations; i++)
 		test_fs_location_for_trunking(&locations->locations[i], clp,
 					      server);
-out:
-	if (page)
-		__free_page(page);
+out_free_3:
+	kfree(locations->fattr);
+out_free_2:
 	kfree(locations);
+out_free:
+	__free_page(page);
 	return status;
 }
···
 	return 0;
 }
 
-static int fanotify_events_supported(struct path *path, __u64 mask)
+static int fanotify_events_supported(struct fsnotify_group *group,
+				     struct path *path, __u64 mask,
+				     unsigned int flags)
 {
+	unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
+	/* Strict validation of events in non-dir inode mask with v5.17+ APIs */
+	bool strict_dir_events = FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID) ||
+				 (mask & FAN_RENAME);
+
 	/*
 	 * Some filesystems such as 'proc' acquire unusual locks when opening
 	 * files. For them fanotify permission events have high chances of
···
 	if (mask & FANOTIFY_PERM_EVENTS &&
 	    path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM)
 		return -EINVAL;
+
+	/*
+	 * We shouldn't have allowed setting dirent events and the directory
+	 * flags FAN_ONDIR and FAN_EVENT_ON_CHILD in mask of non-dir inode,
+	 * but because we always allowed it, error only when using new APIs.
+	 */
+	if (strict_dir_events && mark_type == FAN_MARK_INODE &&
+	    !d_is_dir(path->dentry) && (mask & FANOTIFY_DIRONLY_EVENT_BITS))
+		return -ENOTDIR;
+
 	return 0;
 }
···
 		goto fput_and_out;
 
 	if (flags & FAN_MARK_ADD) {
-		ret = fanotify_events_supported(&path, mask);
+		ret = fanotify_events_supported(group, &path, mask, flags);
 		if (ret)
 			goto path_put_and_out;
 	}
···
 		inode = path.dentry->d_inode;
 	else
 		mnt = path.mnt;
-
-	/*
-	 * FAN_RENAME is not allowed on non-dir (for now).
-	 * We shouldn't have allowed setting any dirent events in mask of
-	 * non-dir, but because we always allowed it, error only if group
-	 * was initialized with the new flag FAN_REPORT_TARGET_FID.
-	 */
-	ret = -ENOTDIR;
-	if (inode && !S_ISDIR(inode->i_mode) &&
-	    ((mask & FAN_RENAME) ||
-	     ((mask & FANOTIFY_DIRENT_EVENTS) &&
-	      FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID))))
-		goto path_put_and_out;
 
 	/* Mask out FAN_EVENT_ON_CHILD flag for sb/mount/non-dir marks */
 	if (mnt || !S_ISDIR(inode->i_mode)) {
+44 -33
fs/read_write.c
···
 }
 EXPORT_SYMBOL(generic_copy_file_range);
 
-static ssize_t do_copy_file_range(struct file *file_in, loff_t pos_in,
-				  struct file *file_out, loff_t pos_out,
-				  size_t len, unsigned int flags)
-{
-	/*
-	 * Although we now allow filesystems to handle cross sb copy, passing
-	 * a file of the wrong filesystem type to filesystem driver can result
-	 * in an attempt to dereference the wrong type of ->private_data, so
-	 * avoid doing that until we really have a good reason.  NFS defines
-	 * several different file_system_type structures, but they all end up
-	 * using the same ->copy_file_range() function pointer.
-	 */
-	if (file_out->f_op->copy_file_range &&
-	    file_out->f_op->copy_file_range == file_in->f_op->copy_file_range)
-		return file_out->f_op->copy_file_range(file_in, pos_in,
-						       file_out, pos_out,
-						       len, flags);
-
-	return generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
-				       flags);
-}
-
 /*
  * Performs necessary checks before doing a file copy
  *
···
 	ret = generic_file_rw_checks(file_in, file_out);
 	if (ret)
 		return ret;
+
+	/*
+	 * We allow some filesystems to handle cross sb copy, but passing
+	 * a file of the wrong filesystem type to filesystem driver can result
+	 * in an attempt to dereference the wrong type of ->private_data, so
+	 * avoid doing that until we really have a good reason.
+	 *
+	 * nfs and cifs define several different file_system_type structures
+	 * and several different sets of file_operations, but they all end up
+	 * using the same ->copy_file_range() function pointer.
+	 */
+	if (file_out->f_op->copy_file_range) {
+		if (file_in->f_op->copy_file_range !=
+		    file_out->f_op->copy_file_range)
+			return -EXDEV;
+	} else if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb) {
+		return -EXDEV;
+	}
 
 	/* Don't touch certain kinds of inodes */
 	if (IS_IMMUTABLE(inode_out))
···
 	file_start_write(file_out);
 
 	/*
-	 * Try cloning first, this is supported by more file systems, and
-	 * more efficient if both clone and copy are supported (e.g. NFS).
+	 * Cloning is supported by more file systems, so we implement copy on
+	 * same sb using clone, but for filesystems where both clone and copy
+	 * are supported (e.g. nfs,cifs), we only call the copy method.
 	 */
+	if (file_out->f_op->copy_file_range) {
+		ret = file_out->f_op->copy_file_range(file_in, pos_in,
+						      file_out, pos_out,
+						      len, flags);
+		goto done;
+	}
+
 	if (file_in->f_op->remap_file_range &&
 	    file_inode(file_in)->i_sb == file_inode(file_out)->i_sb) {
-		loff_t cloned;
-
-		cloned = file_in->f_op->remap_file_range(file_in, pos_in,
+		ret = file_in->f_op->remap_file_range(file_in, pos_in,
 				file_out, pos_out,
 				min_t(loff_t, MAX_RW_COUNT, len),
 				REMAP_FILE_CAN_SHORTEN);
-		if (cloned > 0) {
-			ret = cloned;
+		if (ret > 0)
 			goto done;
-		}
 	}
 
-	ret = do_copy_file_range(file_in, pos_in, file_out, pos_out, len,
-				 flags);
-	WARN_ON_ONCE(ret == -EOPNOTSUPP);
+	/*
+	 * We can get here for same sb copy of filesystems that do not implement
+	 * ->copy_file_range() in case filesystem does not support clone or in
+	 * case filesystem supports clone but rejected the clone request (e.g.
+	 * because it was not block aligned).
+	 *
+	 * In both cases, fall back to kernel copy so we are able to maintain a
+	 * consistent story about which filesystems support copy_file_range()
+	 * and which filesystems do not, that will allow userspace tools to
+	 * make consistent decisions w.r.t using copy_file_range().
+	 */
+	ret = generic_copy_file_range(file_in, pos_in, file_out, pos_out, len,
+				      flags);
+
done:
 	if (ret > 0) {
 		fsnotify_access(file_in);
+9 -29
fs/xfs/libxfs/xfs_attr.c
···
 STATIC int xfs_attr_leaf_get(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp);
-STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args, struct xfs_buf *bp);
+STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args);
 
 /*
  * Internal routines when attribute list is more than one block.
···
 	 * It won't fit in the shortform, transform to a leaf block.  GROT:
 	 * another possible req'mt for a double-split btree op.
 	 */
-	error = xfs_attr_shortform_to_leaf(args, &attr->xattri_leaf_bp);
+	error = xfs_attr_shortform_to_leaf(args);
 	if (error)
 		return error;
 
-	/*
-	 * Prevent the leaf buffer from being unlocked so that a concurrent AIL
-	 * push cannot grab the half-baked leaf buffer and run into problems
-	 * with the write verifier.
-	 */
-	xfs_trans_bhold(args->trans, attr->xattri_leaf_bp);
 	attr->xattri_dela_state = XFS_DAS_LEAF_ADD;
 out:
 	trace_xfs_attr_sf_addname_return(attr->xattri_dela_state, args->dp);
···
 
 	/*
 	 * Use the leaf buffer we may already hold locked as a result of
-	 * a sf-to-leaf conversion.  The held buffer is no longer valid
-	 * after this call, regardless of the result.
+	 * a sf-to-leaf conversion.
 	 */
-	error = xfs_attr_leaf_try_add(args, attr->xattri_leaf_bp);
-	attr->xattri_leaf_bp = NULL;
+	error = xfs_attr_leaf_try_add(args);
 
 	if (error == -ENOSPC) {
 		error = xfs_attr3_leaf_to_node(args);
···
 {
 	struct xfs_da_args	*args = attr->xattri_da_args;
 	int			error;
-
-	ASSERT(!attr->xattri_leaf_bp);
 
 	error = xfs_attr_node_addname_find_attr(attr);
 	if (error)
···
  */
 STATIC int
 xfs_attr_leaf_try_add(
-	struct xfs_da_args	*args,
-	struct xfs_buf		*bp)
+	struct xfs_da_args	*args)
 {
+	struct xfs_buf		*bp;
 	int			error;
 
-	/*
-	 * If the caller provided a buffer to us, it is locked and held in
-	 * the transaction because it just did a shortform to leaf conversion.
-	 * Hence we don't need to read it again. Otherwise read in the leaf
-	 * buffer.
-	 */
-	if (bp) {
-		xfs_trans_bhold_release(args->trans, bp);
-	} else {
-		error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
-		if (error)
-			return error;
-	}
+	error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
+	if (error)
+		return error;
 
 	/*
 	 * Look up the xattr name to set the insertion point for the new xattr.
-5
fs/xfs/libxfs/xfs_attr.h
···
 	 */
 	struct xfs_attri_log_nameval	*xattri_nameval;
 
-	/*
-	 * Used by xfs_attr_set to hold a leaf buffer across a transaction roll
-	 */
-	struct xfs_buf			*xattri_leaf_bp;
-
 	/* Used to keep track of current state of delayed operation */
 	enum xfs_delattr_state		xattri_dela_state;
+19 -16
fs/xfs/libxfs/xfs_attr_leaf.c
···
 	return NULL;
 }
 
+/*
+ * Validate an attribute leaf block.
+ *
+ * Empty leaf blocks can occur under the following circumstances:
+ *
+ * 1. setxattr adds a new extended attribute to a file;
+ * 2. The file has zero existing attributes;
+ * 3. The attribute is too large to fit in the attribute fork;
+ * 4. The attribute is small enough to fit in a leaf block;
+ * 5. A log flush occurs after committing the transaction that creates
+ *    the (empty) leaf block; and
+ * 6. The filesystem goes down after the log flush but before the new
+ *    attribute can be committed to the leaf block.
+ *
+ * Hence we need to ensure that we don't fail the validation purely
+ * because the leaf is empty.
+ */
 static xfs_failaddr_t
 xfs_attr3_leaf_verify(
 	struct xfs_buf		*bp)
···
 	fa = xfs_da3_blkinfo_verify(bp, bp->b_addr);
 	if (fa)
 		return fa;
-
-	/*
-	 * Empty leaf blocks should never occur;  they imply the existence of a
-	 * software bug that needs fixing. xfs_repair also flags them as a
-	 * corruption that needs fixing, so we should never let these go to
-	 * disk.
-	 */
-	if (ichdr.count == 0)
-		return __this_address;
 
 	/*
 	 * firstused is the block offset of the first name info structure.
···
 	return -ENOATTR;
 }
 
-/*
- * Convert from using the shortform to the leaf.  On success, return the
- * buffer so that we can keep it locked until we're totally done with it.
- */
+/* Convert from using the shortform to the leaf format. */
 int
 xfs_attr_shortform_to_leaf(
-	struct xfs_da_args		*args,
-	struct xfs_buf			**leaf_bp)
+	struct xfs_da_args		*args)
 {
 	struct xfs_inode		*dp;
 	struct xfs_attr_shortform	*sf;
···
 		sfe = xfs_attr_sf_nextentry(sfe);
 	}
 	error = 0;
-	*leaf_bp = bp;
 out:
 	kmem_free(tmpbuffer);
 	return error;
···
 	struct xfs_trans_res		tres;
 	struct xfs_attri_log_format	*attrp;
 	struct xfs_attri_log_nameval	*nv = attrip->attri_nameval;
-	int				error, ret = 0;
+	int				error;
 	int				total;
 	int				local;
 	struct xfs_attrd_log_item	*done_item = NULL;
···
 	xfs_ilock(ip, XFS_ILOCK_EXCL);
 	xfs_trans_ijoin(tp, ip, 0);
 
-	ret = xfs_xattri_finish_update(attr, done_item);
-	if (ret == -EAGAIN) {
-		/* There's more work to do, so add it to this transaction */
+	error = xfs_xattri_finish_update(attr, done_item);
+	if (error == -EAGAIN) {
+		/*
+		 * There's more work to do, so add the intent item to this
+		 * transaction so that we can continue it later.
+		 */
 		xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_ATTR, &attr->xattri_list);
-	} else
-		error = ret;
+		error = xfs_defer_ops_capture_and_commit(tp, capture_list);
+		if (error)
+			goto out_unlock;
 
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+		xfs_irele(ip);
+		return 0;
+	}
 	if (error) {
 		xfs_trans_cancel(tp);
 		goto out_unlock;
 	}
 
 	error = xfs_defer_ops_capture_and_commit(tp, capture_list);
-
out_unlock:
-	if (attr->xattri_leaf_bp)
-		xfs_buf_relse(attr->xattri_leaf_bp);
-
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 	xfs_irele(ip);
out:
-	if (ret != -EAGAIN)
-		xfs_attr_free_item(attr);
+	xfs_attr_free_item(attr);
 	return error;
 }
···
 	for_each_online_cpu(cpu) {
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
 		if (!llist_empty(&gc->list))
-			queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+			mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
 	}
 }
···
 xfs_inodegc_worker(
 	struct work_struct	*work)
 {
-	struct xfs_inodegc	*gc = container_of(work, struct xfs_inodegc,
-							work);
+	struct xfs_inodegc	*gc = container_of(to_delayed_work(work),
+						struct xfs_inodegc, work);
 	struct llist_node	*node = llist_del_all(&gc->list);
 	struct xfs_inode	*ip, *n;
 
···
 }
 
 /*
+ * Expedite all pending inodegc work to run immediately. This does not wait for
+ * completion of the work.
+ */
+void
+xfs_inodegc_push(
+	struct xfs_mount	*mp)
+{
+	if (!xfs_is_inodegc_enabled(mp))
+		return;
+	trace_xfs_inodegc_push(mp, __return_address);
+	xfs_inodegc_queue_all(mp);
+}
+
+/*
  * Force all currently queued inode inactivation work to run immediately and
  * wait for the work to finish.
  */
···
 xfs_inodegc_flush(
 	struct xfs_mount	*mp)
 {
-	if (!xfs_is_inodegc_enabled(mp))
-		return;
-
+	xfs_inodegc_push(mp);
 	trace_xfs_inodegc_flush(mp, __return_address);
-
-	xfs_inodegc_queue_all(mp);
 	flush_workqueue(mp->m_inodegc_wq);
 }
···
 	struct xfs_inodegc	*gc;
 	int			items;
 	unsigned int		shrinker_hits;
+	unsigned long		queue_delay = 1;
 
 	trace_xfs_inode_set_need_inactive(ip);
 	spin_lock(&ip->i_flags_lock);
···
 	items = READ_ONCE(gc->items);
 	WRITE_ONCE(gc->items, items + 1);
 	shrinker_hits = READ_ONCE(gc->shrinker_hits);
-	put_cpu_ptr(gc);
 
-	if (!xfs_is_inodegc_enabled(mp))
+	/*
+	 * We queue the work while holding the current CPU so that the work
+	 * is scheduled to run on this CPU.
+	 */
+	if (!xfs_is_inodegc_enabled(mp)) {
+		put_cpu_ptr(gc);
 		return;
-
-	if (xfs_inodegc_want_queue_work(ip, items)) {
-		trace_xfs_inodegc_queue(mp, __return_address);
-		queue_work(mp->m_inodegc_wq, &gc->work);
 	}
+
+	if (xfs_inodegc_want_queue_work(ip, items))
+		queue_delay = 0;
+
+	trace_xfs_inodegc_queue(mp, __return_address);
+	mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay);
+	put_cpu_ptr(gc);
 
 	if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) {
 		trace_xfs_inodegc_throttle(mp, __return_address);
-		flush_work(&gc->work);
+		flush_delayed_work(&gc->work);
 	}
 }
···
 	unsigned int		count = 0;
 
 	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
-	cancel_work_sync(&dead_gc->work);
+	cancel_delayed_work_sync(&dead_gc->work);
 
 	if (llist_empty(&dead_gc->list))
 		return;
···
 	llist_add_batch(first, last, &gc->list);
 	count += READ_ONCE(gc->items);
 	WRITE_ONCE(gc->items, count);
-	put_cpu_ptr(gc);
 
 	if (xfs_is_inodegc_enabled(mp)) {
 		trace_xfs_inodegc_queue(mp, __return_address);
-		queue_work(mp->m_inodegc_wq, &gc->work);
+		mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0);
 	}
+	put_cpu_ptr(gc);
 }
 
 /*
···
 			unsigned int	h = READ_ONCE(gc->shrinker_hits);
 
 			WRITE_ONCE(gc->shrinker_hits, h + 1);
-			queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+			mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
 			no_items = false;
 		}
 	}
···
 }
 
 /*
+ * You can't set both SHARED and EXCL for the same lock,
+ * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_MMAPLOCK_SHARED,
+ * XFS_MMAPLOCK_EXCL, XFS_ILOCK_SHARED, XFS_ILOCK_EXCL are valid values
+ * to set in lock_flags.
+ */
+static inline void
+xfs_lock_flags_assert(
+	uint		lock_flags)
+{
+	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
+		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
+	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
+		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
+	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
+		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
+	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	ASSERT(lock_flags != 0);
+}
+
+/*
  * In addition to i_rwsem in the VFS inode, the xfs inode contains 2
  * multi-reader locks: invalidate_lock and the i_lock.  This routine allows
  * various combinations of the locks to be obtained.
···
 {
 	trace_xfs_ilock(ip, lock_flags, _RET_IP_);
 
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL) {
 		down_write_nested(&VFS_I(ip)->i_rwsem,
···
 {
 	trace_xfs_ilock_nowait(ip, lock_flags, _RET_IP_);
 
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL) {
 		if (!down_write_trylock(&VFS_I(ip)->i_rwsem))
···
 	xfs_inode_t		*ip,
 	uint			lock_flags)
 {
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
-	ASSERT(lock_flags != 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL)
 		up_write(&VFS_I(ip)->i_rwsem);
···
 	}
 
 	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
-		return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem,
-				(lock_flags & XFS_IOLOCK_SHARED));
+		return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
+				(lock_flags & XFS_MMAPLOCK_SHARED));
 	}
 
 	if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
+7 -2
fs/xfs/xfs_log.c
···20922092 xlog_in_core_t *iclog, *next_iclog;20932093 int i;2094209420952095- xlog_cil_destroy(log);20962096-20972095 /*20982096 * Cycle all the iclogbuf locks to make sure all log IO completion20992097 * is done before we tear down these buffers.···21022104 up(&iclog->ic_sema);21032105 iclog = iclog->ic_next;21042106 }21072107+21082108+ /*21092109+ * Destroy the CIL after waiting for iclog IO completion because an21102110+ * iclog EIO error will try to shut down the log, which accesses the21112111+ * CIL to wake up the waiters.21122112+ */21132113+ xlog_cil_destroy(log);2105211421062115 iclog = log->l_iclog;21072116 for (i = 0; i < log->l_iclog_bufs; i++) {
+1 -1
fs/xfs/xfs_mount.h
···6161 */6262struct xfs_inodegc {6363 struct llist_head list;6464- struct work_struct work;6464+ struct delayed_work work;65656666 /* approximate count of inodes in the list */6767 unsigned int items;
+6 -3
fs/xfs/xfs_qm_syscalls.c
···454454 struct xfs_dquot *dqp;455455 int error;456456457457- /* Flush inodegc work at the start of a quota reporting scan. */457457+ /*458458+ * Expedite pending inodegc work at the start of a quota reporting459459+ * scan but don't block waiting for it to complete.460460+ */458461 if (id == 0)459459- xfs_inodegc_flush(mp);462462+ xfs_inodegc_push(mp);460463461464 /*462465 * Try to get the dquot. We don't want it allocated on disk, so don't···501498502499 /* Flush inodegc work at the start of a quota reporting scan. */503500 if (*id == 0)504504- xfs_inodegc_flush(mp);501501+ xfs_inodegc_push(mp);505502506503 error = xfs_qm_dqget_next(mp, *id, type, &dqp);507504 if (error)
+6 -3
fs/xfs/xfs_super.c
···797797 xfs_extlen_t lsize;798798 int64_t ffree;799799800800- /* Wait for whatever inactivations are in progress. */801801- xfs_inodegc_flush(mp);800800+ /*801801+ * Expedite background inodegc but don't wait. We do not want to block802802+ * here waiting hours for a billion extent file to be truncated.803803+ */804804+ xfs_inodegc_push(mp);802805803806 statp->f_type = XFS_SUPER_MAGIC;804807 statp->f_namelen = MAXNAMELEN - 1;···10771074 gc = per_cpu_ptr(mp->m_inodegc, cpu);10781075 init_llist_head(&gc->list);10791076 gc->items = 0;10801080- INIT_WORK(&gc->work, xfs_inodegc_worker);10771077+ INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker);10811078 }10821079 return 0;10831080}
···148148 * reevaluate operable frequencies. Devfreq users may use149149 * devfreq.nb to the corresponding register notifier call chain.150150 * @work: delayed work for load monitoring.151151+ * @freq_table: current frequency table used by the devfreq driver.152152+ * @max_state: count of entry present in the frequency table.151153 * @previous_freq: previously configured frequency value.152154 * @last_status: devfreq user device info, performance statistics153155 * @data: Private data of the governor. The devfreq framework does not···186184 struct opp_table *opp_table;187185 struct notifier_block nb;188186 struct delayed_work work;187187+188188+ unsigned long *freq_table;189189+ unsigned int max_state;189190190191 unsigned long previous_freq;191192 struct devfreq_dev_status last_status;
+1 -1
include/linux/dim.h
···2121 * We consider 10% difference as significant.2222 */2323#define IS_SIGNIFICANT_DIFF(val, ref) \2424- (((100UL * abs((val) - (ref))) / (ref)) > 10)2424+ ((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10))25252626/*2727 * Calculate the gap between two values.
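The guard added above prevents a divide-by-zero when the reference value is 0: the short-circuiting `&&` makes the division unreachable for `ref == 0`, and such a sample is simply treated as not significant. A userspace sketch of the fixed macro:

```c
#include <stdlib.h>

/* A difference is "significant" when it exceeds 10% of the reference
 * value; a zero reference is never significant (the && short-circuits
 * before the division can fault).
 */
#define IS_SIGNIFICANT_DIFF(val, ref) \
	((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10))
```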
+4
include/linux/fanotify.h
···111111 FANOTIFY_PERM_EVENTS | \112112 FAN_Q_OVERFLOW | FAN_ONDIR)113113114114+/* Events and flags relevant only for directories */115115+#define FANOTIFY_DIRONLY_EVENT_BITS (FANOTIFY_DIRENT_EVENTS | \116116+ FAN_EVENT_ON_CHILD | FAN_ONDIR)117117+114118#define ALL_FANOTIFY_EVENT_BITS (FANOTIFY_OUTGOING_EVENTS | \115119 FANOTIFY_EVENT_FLAGS)116120
···130130#define FSCACHE_COOKIE_DO_PREP_TO_WRITE 12 /* T if cookie needs write preparation */131131#define FSCACHE_COOKIE_HAVE_DATA 13 /* T if this cookie has data stored */132132#define FSCACHE_COOKIE_IS_HASHED 14 /* T if this cookie is hashed */133133+#define FSCACHE_COOKIE_DO_INVALIDATE 15 /* T if cookie needs invalidation */133134134135 enum fscache_cookie_state state;135136 u8 advice; /* FSCACHE_ADV_* */
-3
include/linux/intel-iommu.h
···612612struct device_domain_info {613613 struct list_head link; /* link to domain siblings */614614 struct list_head global; /* link to global list */615615- struct list_head table; /* link to pasid table */616615 u32 segment; /* PCI segment number */617616 u8 bus; /* PCI bus number */618617 u8 devfn; /* PCI devfn number */···728729void *alloc_pgtable_page(int node);729730void free_pgtable_page(void *vaddr);730731struct intel_iommu *domain_get_iommu(struct dmar_domain *domain);731731-int for_each_device_domain(int (*fn)(struct device_domain_info *info,732732- void *data), void *data);733732void iommu_flush_write_buffer(struct intel_iommu *iommu);734733int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev);735734struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn);
-1
include/linux/lockref.h
···3838extern int lockref_put_return(struct lockref *);3939extern int lockref_get_not_zero(struct lockref *);4040extern int lockref_put_not_zero(struct lockref *);4141-extern int lockref_get_or_lock(struct lockref *);4241extern int lockref_put_or_lock(struct lockref *);43424443extern void lockref_mark_dead(struct lockref *);
···572572 * @mdix_ctrl: User setting of crossover573573 * @pma_extable: Cached value of PMA/PMD Extended Abilities Register574574 * @interrupts: Flag interrupts have been enabled575575+ * @irq_suspended: Flag indicating PHY is suspended and therefore interrupt576576+ * handling shall be postponed until PHY has resumed577577+ * @irq_rerun: Flag indicating interrupts occurred while PHY was suspended,578578+ * requiring a rerun of the interrupt handler after resume575579 * @interface: enum phy_interface_t value576580 * @skb: Netlink message for cable diagnostics577581 * @nest: Netlink nest used for cable diagnostics···630626631627 /* Interrupts are enabled */632628 unsigned interrupts:1;629629+ unsigned irq_suspended:1;630630+ unsigned irq_rerun:1;633631634632 enum phy_state state;635633
···257257258258 WARN_ON(status & VIRTIO_CONFIG_S_DRIVER_OK);259259260260+#ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION260261 /*261262 * The virtio_synchronize_cbs() makes sure vring_interrupt()262263 * will see the driver specific setup if it sees vq->broken···265264 */266265 virtio_synchronize_cbs(dev);267266 __virtio_unbreak_device(dev);267267+#endif268268 /*269269 * The transport should ensure the visibility of vq->broken270270 * before setting DRIVER_OK. See the comments for the transport
···13381338/**13391339 * struct nft_traceinfo - nft tracing information and state13401340 *13411341+ * @trace: other struct members are initialised13421342+ * @nf_trace: copy of skb->nf_trace before rule evaluation13431343+ * @type: event type (enum nft_trace_types)13441344+ * @skbid: hash of skb to be used as trace id13451345+ * @packet_dumped: packet headers sent in a previous traceinfo message13411346 * @pkt: pktinfo currently processed13421347 * @basechain: base chain currently processed13431348 * @chain: chain currently processed13441349 * @rule: rule that was evaluated13451350 * @verdict: verdict given by rule13461346- * @type: event type (enum nft_trace_types)13471347- * @packet_dumped: packet headers sent in a previous traceinfo message13481348- * @trace: other struct members are initialised13491351 */13501352struct nft_traceinfo {13531353+ bool trace;13541354+ bool nf_trace;13551355+ bool packet_dumped;13561356+ enum nft_trace_types type:8;13571357+ u32 skbid;13511358 const struct nft_pktinfo *pkt;13521359 const struct nft_base_chain *basechain;13531360 const struct nft_chain *chain;13541361 const struct nft_rule_dp *rule;13551362 const struct nft_verdict *verdict;13561356- enum nft_trace_types type;13571357- bool packet_dumped;13581358- bool trace;13591363};1360136413611365void nft_trace_init(struct nft_traceinfo *info, const struct nft_pktinfo *pkt,
···2222 union {2323 __u64 off; /* offset into file */2424 __u64 addr2;2525- __u32 cmd_op;2525+ struct {2626+ __u32 cmd_op;2727+ __u32 __pad1;2828+ };2629 };2730 union {2831 __u64 addr; /* pointer to buffer or iovecs */···247244#define IORING_ASYNC_CANCEL_ANY (1U << 2)248245249246/*250250- * send/sendmsg and recv/recvmsg flags (sqe->addr2)247247+ * send/sendmsg and recv/recvmsg flags (sqe->ioprio)251248 *252249 * IORING_RECVSEND_POLL_FIRST If set, instead of first attempting to send253250 * or receive and arm poll if that yields an
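Wrapping the 32-bit `cmd_op` in an anonymous struct with an explicit `__pad1` names the four bytes that were previously implicit padding inside the 64-bit union, so their contents are part of the ABI rather than unspecified. A minimal sketch of the layout idea (the union name is illustrative, not the real `io_uring_sqe` member):

```c
#include <stdint.h>
#include <stddef.h>

/* The anonymous struct pins cmd_op to the low four bytes and names the
 * remaining four explicitly, so the union spans exactly eight
 * well-defined bytes regardless of which member is active.
 */
union sqe_off {
	uint64_t off;
	uint64_t addr2;
	struct {
		uint32_t cmd_op;
		uint32_t __pad1;
	};
};
```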
+5 -4
include/uapi/linux/mptcp.h
···22#ifndef _UAPI_MPTCP_H33#define _UAPI_MPTCP_H4455+#ifndef __KERNEL__66+#include <netinet/in.h> /* for sockaddr_in and sockaddr_in6 */77+#include <sys/socket.h> /* for struct sockaddr */88+#endif99+510#include <linux/const.h>611#include <linux/types.h>712#include <linux/in.h> /* for sockaddr_in */813#include <linux/in6.h> /* for sockaddr_in6 */914#include <linux/socket.h> /* for sockaddr_storage and sa_family */1010-1111-#ifndef __KERNEL__1212-#include <sys/socket.h> /* for struct sockaddr */1313-#endif14151516#define MPTCP_SUBFLOW_FLAG_MCAP_REM _BITUL(0)1617#define MPTCP_SUBFLOW_FLAG_MCAP_LOC _BITUL(1)
···15621562 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);15631563}1564156415651565+static void reg_bounds_sync(struct bpf_reg_state *reg)15661566+{15671567+ /* We might have learned new bounds from the var_off. */15681568+ __update_reg_bounds(reg);15691569+ /* We might have learned something about the sign bit. */15701570+ __reg_deduce_bounds(reg);15711571+ /* We might have learned some bits from the bounds. */15721572+ __reg_bound_offset(reg);15731573+ /* Intersecting with the old var_off might have improved our bounds15741574+ * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),15751575+ * then new var_off is (0; 0x7f...fc) which improves our umax.15761576+ */15771577+ __update_reg_bounds(reg);15781578+}15791579+15651580static bool __reg32_bound_s64(s32 a)15661581{15671582 return a >= 0 && a <= S32_MAX;···16181603 * so they do not impact tnum bounds calculation.16191604 */16201605 __mark_reg64_unbounded(reg);16211621- __update_reg_bounds(reg);16221606 }16231623-16241624- /* Intersecting with the old var_off might have improved our bounds16251625- * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),16261626- * then new var_off is (0; 0x7f...fc) which improves our umax.16271627- */16281628- __reg_deduce_bounds(reg);16291629- __reg_bound_offset(reg);16301630- __update_reg_bounds(reg);16071607+ reg_bounds_sync(reg);16311608}1632160916331610static bool __reg64_bound_s32(s64 a)···16351628static void __reg_combine_64_into_32(struct bpf_reg_state *reg)16361629{16371630 __mark_reg32_unbounded(reg);16381638-16391631 if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {16401632 reg->s32_min_value = (s32)reg->smin_value;16411633 reg->s32_max_value = (s32)reg->smax_value;···16431637 reg->u32_min_value = (u32)reg->umin_value;16441638 reg->u32_max_value = (u32)reg->umax_value;16451639 }16461646-16471647- /* Intersecting with the old var_off might have improved our bounds16481648- * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc),16491649- * then new var_off is (0; 0x7f...fc) which improves our umax.16501650- */16511651- __reg_deduce_bounds(reg);16521652- __reg_bound_offset(reg);16531653- __update_reg_bounds(reg);16401640+ reg_bounds_sync(reg);16541641}1655164216561643/* Mark a register as having a completely unknown (scalar) value. */···69426943 ret_reg->s32_max_value = meta->msize_max_value;69436944 ret_reg->smin_value = -MAX_ERRNO;69446945 ret_reg->s32_min_value = -MAX_ERRNO;69456945- __reg_deduce_bounds(ret_reg);69466946- __reg_bound_offset(ret_reg);69476947- __update_reg_bounds(ret_reg);69466946+ reg_bounds_sync(ret_reg);69486947}6949694869506949static int···8199820282008203 if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))82018204 return -EINVAL;82028202-82038203- __update_reg_bounds(dst_reg);82048204- __reg_deduce_bounds(dst_reg);82058205- __reg_bound_offset(dst_reg);82068206-82058205+ reg_bounds_sync(dst_reg);82078206 if (sanitize_check_bounds(env, insn, dst_reg) < 0)82088207 return -EACCES;82098208 if (sanitize_needed(opcode)) {···89378944 /* ALU32 ops are zero extended into 64bit register */89388945 if (alu32)89398946 zext_32_to_64(dst_reg);89408940-89418941- __update_reg_bounds(dst_reg);89428942- __reg_deduce_bounds(dst_reg);89438943- __reg_bound_offset(dst_reg);89478947+ reg_bounds_sync(dst_reg);89448948 return 0;89458949}89468950···91269136 insn->dst_reg);91279137 }91289138 zext_32_to_64(dst_reg);91299129-91309130- __update_reg_bounds(dst_reg);91319131- __reg_deduce_bounds(dst_reg);91329132- __reg_bound_offset(dst_reg);91399139+ reg_bounds_sync(dst_reg);91339140 }91349141 } else {91359142 /* case: R = imm···95649577 return;9565957895669579 switch (opcode) {95809580+ /* JEQ/JNE comparison doesn't change the register equivalence.95819581+ *95829582+ * r1 = r2;95839583+ * if (r1 == 42) goto label;95849584+ * ...95859585+ * label: // here both r1 and r2 are known to be 42.95869586+ *95879587+ * Hence when marking register as 
known preserve it's ID.95889588+ */95679589 case BPF_JEQ:95689568- case BPF_JNE:95699569- {95709570- struct bpf_reg_state *reg =95719571- opcode == BPF_JEQ ? true_reg : false_reg;95729572-95739573- /* JEQ/JNE comparison doesn't change the register equivalence.95749574- * r1 = r2;95759575- * if (r1 == 42) goto label;95769576- * ...95779577- * label: // here both r1 and r2 are known to be 42.95789578- *95799579- * Hence when marking register as known preserve it's ID.95809580- */95819581- if (is_jmp32)95829582- __mark_reg32_known(reg, val32);95839583- else95849584- ___mark_reg_known(reg, val);95909590+ if (is_jmp32) {95919591+ __mark_reg32_known(true_reg, val32);95929592+ true_32off = tnum_subreg(true_reg->var_off);95939593+ } else {95949594+ ___mark_reg_known(true_reg, val);95959595+ true_64off = true_reg->var_off;95969596+ }95859597 break;95869586- }95989598+ case BPF_JNE:95999599+ if (is_jmp32) {96009600+ __mark_reg32_known(false_reg, val32);96019601+ false_32off = tnum_subreg(false_reg->var_off);96029602+ } else {96039603+ ___mark_reg_known(false_reg, val);96049604+ false_64off = false_reg->var_off;96059605+ }96069606+ break;95879607 case BPF_JSET:95889608 if (is_jmp32) {95899609 false_32off = tnum_and(false_32off, tnum_const(~val32));···97299735 dst_reg->smax_value);97309736 src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,97319737 dst_reg->var_off);97329732- /* We might have learned new bounds from the var_off. */97339733- __update_reg_bounds(src_reg);97349734- __update_reg_bounds(dst_reg);97359735- /* We might have learned something about the sign bit. */97369736- __reg_deduce_bounds(src_reg);97379737- __reg_deduce_bounds(dst_reg);97389738- /* We might have learned some bits from the bounds. */97399739- __reg_bound_offset(src_reg);97409740- __reg_bound_offset(dst_reg);97419741- /* Intersecting with the old var_off might have improved our bounds97429742- * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc),97439743- * then new var_off is (0; 0x7f...fc) which improves our umax.97449744- */97459745- __update_reg_bounds(src_reg);97469746- __update_reg_bounds(dst_reg);97389738+ reg_bounds_sync(src_reg);97399739+ reg_bounds_sync(dst_reg);97479740}9748974197499742static void reg_combine_min_max(struct bpf_reg_state *true_src,
···20292029 bool autoreap = false;20302030 u64 utime, stime;2031203120322032- BUG_ON(sig == -1);20322032+ WARN_ON_ONCE(sig == -1);2033203320342034- /* do_notify_parent_cldstop should have been called instead. */20352035- BUG_ON(task_is_stopped_or_traced(tsk));20342034+ /* do_notify_parent_cldstop should have been called instead. */20352035+ WARN_ON_ONCE(task_is_stopped_or_traced(tsk));2036203620372037- BUG_ON(!tsk->ptrace &&20372037+ WARN_ON_ONCE(!tsk->ptrace &&20382038 (tsk->group_leader != tsk || !thread_group_empty(tsk)));2039203920402040 /* Wake up all pidfd waiters */
-1
kernel/time/tick-sched.c
···526526 cpumask_copy(tick_nohz_full_mask, cpumask);527527 tick_nohz_full_running = true;528528}529529-EXPORT_SYMBOL_GPL(tick_nohz_full_setup);530529531530static int tick_nohz_cpu_down(unsigned int cpu)532531{
+2 -1
lib/idr.c
···491491 struct ida_bitmap *bitmap;492492 unsigned long flags;493493494494- BUG_ON((int)id < 0);494494+ if ((int)id < 0)495495+ return;495496496497 xas_lock_irqsave(&xas, flags);497498 bitmap = xas_load(&xas);
-25
lib/lockref.c
···111111EXPORT_SYMBOL(lockref_put_not_zero);112112113113/**114114- * lockref_get_or_lock - Increments count unless the count is 0 or dead115115- * @lockref: pointer to lockref structure116116- * Return: 1 if count updated successfully or 0 if count was zero117117- * and we got the lock instead.118118- */119119-int lockref_get_or_lock(struct lockref *lockref)120120-{121121- CMPXCHG_LOOP(122122- new.count++;123123- if (old.count <= 0)124124- break;125125- ,126126- return 1;127127- );128128-129129- spin_lock(&lockref->lock);130130- if (lockref->count <= 0)131131- return 0;132132- lockref->count++;133133- spin_unlock(&lockref->lock);134134- return 1;135135-}136136-EXPORT_SYMBOL(lockref_get_or_lock);137137-138138-/**139114 * lockref_put_return - Decrement reference count if possible140115 * @lockref: pointer to lockref structure141116 *
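The removed `lockref_get_or_lock()` used the same `CMPXCHG_LOOP` pattern as the surviving helpers: attempt a lockless compare-and-swap on the count first, and fall back to the spinlock only when the count is non-positive or the CAS keeps losing. A userspace sketch of that fast path with C11 atomics (the kernel packs the spinlock and count into a single word; this models only the count):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Increment *count only while it is positive, retrying if another
 * thread changes it between the load and the compare-exchange.
 * Returns false when the caller must fall back to taking the lock.
 */
static bool get_not_zero(atomic_int *count)
{
	int old = atomic_load(count);

	while (old > 0) {
		/* On failure the CAS stores the freshly observed value
		 * back into `old`, so the loop retries against it.
		 */
		if (atomic_compare_exchange_weak(count, &old, old + 1))
			return true;
	}
	return false;
}
```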
+4 -1
lib/sbitmap.c
···528528529529 sbitmap_deferred_clear(map);530530 if (map->word == (1UL << (map_depth - 1)) - 1)531531- continue;531531+ goto next;532532533533 nr = find_first_zero_bit(&map->word, map_depth);534534 if (nr + nr_tags <= map_depth) {···539539 get_mask = ((1UL << map_tags) - 1) << nr;540540 do {541541 val = READ_ONCE(map->word);542542+ if ((val & ~get_mask) != val)543543+ goto next;542544 ret = atomic_long_cmpxchg(ptr, val, get_mask | val);543545 } while (ret != val);544546 get_mask = (get_mask & ~ret) >> nr;···551549 return get_mask;552550 }553551 }552552+next:554553 /* Jump to next index. */555554 if (++index >= sb->map_nr)556555 index = 0;
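The added `(val & ~get_mask) != val` check closes a race: bits inside the chosen batch mask may be taken by another CPU between `find_first_zero_bit()` and the `cmpxchg`, in which case the word must be skipped (`goto next`) instead of retried forever. A sketch of that claim-or-skip pattern:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Atomically set every bit in `mask` within *word, but only if all of
 * them are currently clear; bail out (rather than spin) if another
 * thread grabbed part of the range after we picked it.
 */
static bool claim_bits(atomic_ulong *word, unsigned long mask)
{
	unsigned long val = atomic_load(word);

	for (;;) {
		if (val & mask)		/* part of the range is taken: skip */
			return false;
		/* CAS failure reloads `val`; the recheck above runs again */
		if (atomic_compare_exchange_weak(word, &val, val | mask))
			return true;
	}
}
```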
···10121012 return okfn(net, sk, skb);1013101310141014 ops = nf_hook_entries_get_hook_ops(e);10151015- for (i = 0; i < e->num_hook_entries &&10161016- ops[i]->priority <= NF_BR_PRI_BRNF; i++)10171017- ;10151015+ for (i = 0; i < e->num_hook_entries; i++) {10161016+ /* These hooks have already been called */10171017+ if (ops[i]->priority < NF_BR_PRI_BRNF)10181018+ continue;10191019+10201020+ /* These hooks have not been called yet, run them. */10211021+ if (ops[i]->priority > NF_BR_PRI_BRNF)10221022+ break;10231023+10241024+ /* take a closer look at NF_BR_PRI_BRNF. */10251025+ if (ops[i]->hook == br_nf_pre_routing) {10261026+ /* This hook diverted the skb to this function,10271027+ * hooks after this have not been run yet.10281028+ */10291029+ i++;10301030+ break;10311031+ }10321032+ }1018103310191034 nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev,10201035 sk, net, okfn);
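The rewritten loop above no longer assumes `br_nf_pre_routing` is the only hook at its priority: hooks at lower priority already ran, hooks at higher priority have not, and among equal-priority hooks everything up to and including the diverting hook is done. A sketch of that resume-point scan over a priority-sorted array (types and names are illustrative):

```c
typedef int (*hook_fn)(void);

struct hook_entry {
	int priority;
	hook_fn fn;
};

static int hook_a(void) { return 0; }
static int hook_b(void) { return 0; }

/* Hooks are sorted by ascending priority.  Hooks below `resume_pri`
 * already ran; among hooks at `resume_pri`, the diverting hook
 * `current_fn` ran last.  Return the index of the first hook that
 * still has to run.
 */
static int resume_index(const struct hook_entry *ops, int n,
			int resume_pri, hook_fn current_fn)
{
	int i;

	for (i = 0; i < n; i++) {
		if (ops[i].priority < resume_pri)
			continue;	/* already called */
		if (ops[i].priority > resume_pri)
			break;		/* not called yet */
		if (ops[i].fn == current_fn) {
			i++;		/* hooks after the diverter are next */
			break;
		}
	}
	return i;
}
```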
···77#include <linux/module.h>88#include <linux/static_key.h>99#include <linux/hash.h>1010-#include <linux/jhash.h>1010+#include <linux/siphash.h>1111#include <linux/if_vlan.h>1212#include <linux/init.h>1313#include <linux/skbuff.h>···24242525DEFINE_STATIC_KEY_FALSE(nft_trace_enabled);2626EXPORT_SYMBOL_GPL(nft_trace_enabled);2727-2828-static int trace_fill_id(struct sk_buff *nlskb, struct sk_buff *skb)2929-{3030- __be32 id;3131-3232- /* using skb address as ID results in a limited number of3333- * values (and quick reuse).3434- *3535- * So we attempt to use as many skb members that will not3636- * change while skb is with netfilter.3737- */3838- id = (__be32)jhash_2words(hash32_ptr(skb), skb_get_hash(skb),3939- skb->skb_iif);4040-4141- return nla_put_be32(nlskb, NFTA_TRACE_ID, id);4242-}43274428static int trace_fill_header(struct sk_buff *nlskb, u16 type,4529 const struct sk_buff *skb,···170186 struct nlmsghdr *nlh;171187 struct sk_buff *skb;172188 unsigned int size;189189+ u32 mark = 0;173190 u16 event;174191175192 if (!nfnetlink_has_listeners(nft_net(pkt), NFNLGRP_NFTRACE))···214229 if (nla_put_be32(skb, NFTA_TRACE_TYPE, htonl(info->type)))215230 goto nla_put_failure;216231217217- if (trace_fill_id(skb, pkt->skb))232232+ if (nla_put_u32(skb, NFTA_TRACE_ID, info->skbid))218233 goto nla_put_failure;219234220235 if (nla_put_string(skb, NFTA_TRACE_CHAIN, info->chain->name))···234249 case NFT_TRACETYPE_RULE:235250 if (nft_verdict_dump(skb, NFTA_TRACE_VERDICT, info->verdict))236251 goto nla_put_failure;252252+253253+ /* pkt->skb undefined iff NF_STOLEN, disable dump */254254+ if (info->verdict->code == NF_STOLEN)255255+ info->packet_dumped = true;256256+ else257257+ mark = pkt->skb->mark;258258+237259 break;238260 case NFT_TRACETYPE_POLICY:261261+ mark = pkt->skb->mark;262262+239263 if (nla_put_be32(skb, NFTA_TRACE_POLICY,240264 htonl(info->basechain->policy)))241265 goto nla_put_failure;242266 break;243267 }244268245245- if (pkt->skb->mark &&246246- nla_put_be32(skb, 
NFTA_TRACE_MARK, htonl(pkt->skb->mark)))269269+ if (mark && nla_put_be32(skb, NFTA_TRACE_MARK, htonl(mark)))247270 goto nla_put_failure;248271249272 if (!info->packet_dumped) {···276283 const struct nft_verdict *verdict,277284 const struct nft_chain *chain)278285{286286+ static siphash_key_t trace_key __read_mostly;287287+ struct sk_buff *skb = pkt->skb;288288+279289 info->basechain = nft_base_chain(chain);280290 info->trace = true;291291+ info->nf_trace = pkt->skb->nf_trace;281292 info->packet_dumped = false;282293 info->pkt = pkt;283294 info->verdict = verdict;295295+296296+ net_get_random_once(&trace_key, sizeof(trace_key));297297+298298+ info->skbid = (u32)siphash_3u32(hash32_ptr(skb),299299+ skb_get_hash(skb),300300+ skb->skb_iif,301301+ &trace_key);284302}
+2
net/netfilter/nft_set_hash.c
···143143 /* Another cpu may race to insert the element with the same key */144144 if (prev) {145145 nft_set_elem_destroy(set, he, true);146146+ atomic_dec(&set->nelems);146147 he = prev;147148 }148149···153152154153err2:155154 nft_set_elem_destroy(set, he, true);155155+ atomic_dec(&set->nelems);156156err1:157157 return false;158158}
+33 -15
net/netfilter/nft_set_pipapo.c
···21252125}2126212621272127/**21282128+ * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array21292129+ * @set: nftables API set representation21302130+ * @m: matching data pointing to key mapping array21312131+ */21322132+static void nft_set_pipapo_match_destroy(const struct nft_set *set,21332133+ struct nft_pipapo_match *m)21342134+{21352135+ struct nft_pipapo_field *f;21362136+ int i, r;21372137+21382138+ for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)21392139+ ;21402140+21412141+ for (r = 0; r < f->rules; r++) {21422142+ struct nft_pipapo_elem *e;21432143+21442144+ if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)21452145+ continue;21462146+21472147+ e = f->mt[r].e;21482148+21492149+ nft_set_elem_destroy(set, e, true);21502150+ }21512151+}21522152+21532153+/**21282154 * nft_pipapo_destroy() - Free private data for set and all committed elements21292155 * @set: nftables API set representation21302156 */···21582132{21592133 struct nft_pipapo *priv = nft_set_priv(set);21602134 struct nft_pipapo_match *m;21612161- struct nft_pipapo_field *f;21622162- int i, r, cpu;21352135+ int cpu;2163213621642137 m = rcu_dereference_protected(priv->match, true);21652138 if (m) {21662139 rcu_barrier();2167214021682168- for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)21692169- ;21702170-21712171- for (r = 0; r < f->rules; r++) {21722172- struct nft_pipapo_elem *e;21732173-21742174- if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)21752175- continue;21762176-21772177- e = f->mt[r].e;21782178-21792179- nft_set_elem_destroy(set, e, true);21802180- }21412141+ nft_set_pipapo_match_destroy(set, m);2181214221822143#ifdef NFT_PIPAPO_ALIGN21832144 free_percpu(m->scratch_aligned);···21782165 }2179216621802167 if (priv->clone) {21682168+ m = priv->clone;21692169+21702170+ if (priv->dirty)21712171+ nft_set_pipapo_match_destroy(set, m);21722172+21812173#ifdef NFT_PIPAPO_ALIGN21822174 free_percpu(priv->clone->scratch_aligned);21832175#endif
···588588}589589590590static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,591591- const struct tc_action_ops *ops)591591+ const struct tc_action_ops *ops,592592+ struct netlink_ext_ack *extack)592593{593594 struct nlattr *nest;594595 int n_i = 0;···605604 if (nla_put_string(skb, TCA_KIND, ops->kind))606605 goto nla_put_failure;607606607607+ ret = 0;608608 mutex_lock(&idrinfo->lock);609609 idr_for_each_entry_ul(idr, p, tmp, id) {610610 if (IS_ERR(p))611611 continue;612612 ret = tcf_idr_release_unsafe(p);613613- if (ret == ACT_P_DELETED) {613613+ if (ret == ACT_P_DELETED)614614 module_put(ops->owner);615615- n_i++;616616- } else if (ret < 0) {617617- mutex_unlock(&idrinfo->lock);618618- goto nla_put_failure;619619- }615615+ else if (ret < 0)616616+ break;617617+ n_i++;620618 }621619 mutex_unlock(&idrinfo->lock);620620+ if (ret < 0) {621621+ if (n_i)622622+ NL_SET_ERR_MSG(extack, "Unable to flush all TC actions");623623+ else624624+ goto nla_put_failure;625625+ }622626623627 ret = nla_put_u32(skb, TCA_FCNT, n_i);624628 if (ret)···644638 struct tcf_idrinfo *idrinfo = tn->idrinfo;645639646640 if (type == RTM_DELACTION) {647647- return tcf_del_walker(idrinfo, skb, ops);641641+ return tcf_del_walker(idrinfo, skb, ops, extack);648642 } else if (type == RTM_GETACTION) {649643 return tcf_dump_walker(idrinfo, skb, cb);650644 } else {
+1 -1
net/sched/act_police.c
···442442 act_id = FLOW_ACTION_JUMP;443443 *extval = tc_act & TC_ACT_EXT_VAL_MASK;444444 } else if (tc_act == TC_ACT_UNSPEC) {445445- NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform/exceed action is \"continue\"");445445+ act_id = FLOW_ACTION_CONTINUE;446446 } else {447447 NL_SET_ERR_MSG_MOD(extack, "Unsupported conform/exceed action offload");448448 }
+6 -10
net/socket.c
···21492149int __sys_recvfrom(int fd, void __user *ubuf, size_t size, unsigned int flags,21502150 struct sockaddr __user *addr, int __user *addr_len)21512151{21522152+ struct sockaddr_storage address;21532153+ struct msghdr msg = {21542154+ /* Save some cycles and don't copy the address if not needed */21552155+ .msg_name = addr ? (struct sockaddr *)&address : NULL,21562156+ };21522157 struct socket *sock;21532158 struct iovec iov;21542154- struct msghdr msg;21552155- struct sockaddr_storage address;21562159 int err, err2;21572160 int fput_needed;21582161···21662163 if (!sock)21672164 goto out;2168216521692169- msg.msg_control = NULL;21702170- msg.msg_controllen = 0;21712171- /* Save some cycles and don't copy the address if not needed */21722172- msg.msg_name = addr ? (struct sockaddr *)&address : NULL;21732173- /* We assume all kernel code knows the size of sockaddr_storage */21742174- msg.msg_namelen = 0;21752175- msg.msg_iocb = NULL;21762176- msg.msg_flags = 0;21772166 if (sock->file->f_flags & O_NONBLOCK)21782167 flags |= MSG_DONTWAIT;21792168 err = sock_recvmsg(sock, &msg, flags);···23702375 return -EFAULT;2371237623722377 kmsg->msg_control_is_user = true;23782378+ kmsg->msg_get_inq = 0;23732379 kmsg->msg_control_user = msg.msg_control;23742380 kmsg->msg_controllen = msg.msg_controllen;23752381 kmsg->msg_flags = msg.msg_flags;
+1 -1
net/sunrpc/xdr.c
···984984 p = page_address(*xdr->page_ptr);985985 xdr->p = p + frag2bytes;986986 space_left = xdr->buf->buflen - xdr->buf->len;987987- if (space_left - nbytes >= PAGE_SIZE)987987+ if (space_left - frag1bytes >= PAGE_SIZE)988988 xdr->end = p + PAGE_SIZE;989989 else990990 xdr->end = p + space_left - frag1bytes;
+22 -19
net/tipc/node.c
···472472 bool preliminary)473473{474474 struct tipc_net *tn = net_generic(net, tipc_net_id);475475+ struct tipc_link *l, *snd_l = tipc_bc_sndlink(net);475476 struct tipc_node *n, *temp_node;476476- struct tipc_link *l;477477 unsigned long intv;478478 int bearer_id;479479 int i;···488488 goto exit;489489 /* A preliminary node becomes "real" now, refresh its data */490490 tipc_node_write_lock(n);491491+ if (!tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,492492+ tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),493493+ n->capabilities, &n->bc_entry.inputq1,494494+ &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {495495+ pr_warn("Broadcast rcv link refresh failed, no memory\n");496496+ tipc_node_write_unlock_fast(n);497497+ tipc_node_put(n);498498+ n = NULL;499499+ goto exit;500500+ }491501 n->preliminary = false;492502 n->addr = addr;493503 hlist_del_rcu(&n->hash);···577567 n->signature = INVALID_NODE_SIG;578568 n->active_links[0] = INVALID_BEARER_ID;579569 n->active_links[1] = INVALID_BEARER_ID;580580- n->bc_entry.link = NULL;570570+ if (!preliminary &&571571+ !tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX,572572+ tipc_link_min_win(snd_l), tipc_link_max_win(snd_l),573573+ n->capabilities, &n->bc_entry.inputq1,574574+ &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {575575+ pr_warn("Broadcast rcv link creation failed, no memory\n");576576+ kfree(n);577577+ n = NULL;578578+ goto exit;579579+ }581580 tipc_node_get(n);582581 timer_setup(&n->timer, tipc_node_timeout, 0);583582 /* Start a slow timer anyway, crypto needs it */···11741155 bool *respond, bool *dupl_addr)11751156{11761157 struct tipc_node *n;11771177- struct tipc_link *l, *snd_l;11581158+ struct tipc_link *l;11781159 struct tipc_link_entry *le;11791160 bool addr_match = false;11801161 bool sign_match = false;···11941175 return;1195117611961177 tipc_node_write_lock(n);11971197- if (unlikely(!n->bc_entry.link)) {11981198- snd_l = tipc_bc_sndlink(net);11991199- if 
(!tipc_link_bc_create(net, tipc_own_addr(net),12001200- addr, peer_id, U16_MAX,12011201- tipc_link_min_win(snd_l),12021202- tipc_link_max_win(snd_l),12031203- n->capabilities,12041204- &n->bc_entry.inputq1,12051205- &n->bc_entry.namedq, snd_l,12061206- &n->bc_entry.link)) {12071207- pr_warn("Broadcast rcv link creation failed, no mem\n");12081208- tipc_node_write_unlock_fast(n);12091209- tipc_node_put(n);12101210- return;12111211- }12121212- }1213117812141179 le = &n->links[b->identity];12151180
+1
net/tipc/socket.c
···502502 sock_init_data(sock, sk);503503 tipc_set_sk_state(sk, TIPC_OPEN);504504 if (tipc_sk_insert(tsk)) {505505+ sk_free(sk);505506 pr_warn("Socket create failed; port number exhausted\n");506507 return -EINVAL;507508 }
+4 -4
net/tls/tls_sw.c
···267267 }268268 darg->async = false;269269270270- if (ret == -EBADMSG)271271- TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);272272-273270 return ret;274271}275272···15761579 }1577158015781581 err = decrypt_internal(sk, skb, dest, NULL, darg);15791579- if (err < 0)15821582+ if (err < 0) {15831583+ if (err == -EBADMSG)15841584+ TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);15801585 return err;15861586+ }15811587 if (darg->async)15821588 goto decrypt_next;15831589
+1
net/xdp/xsk_buff_pool.c
···332332 for (i = 0; i < dma_map->dma_pages_cnt; i++) {333333 dma = &dma_map->dma_pages[i];334334 if (*dma) {335335+ *dma &= ~XSK_NEXT_PG_CONTIG_MASK;335336 dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,336337 DMA_BIDIRECTIONAL, attrs);337338 *dma = 0;
+7
samples/fprobe/fprobe_example.c
···25252626static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";2727module_param_string(symbol, symbol, sizeof(symbol), 0644);2828+MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern.");2929+2830static char nosymbol[MAX_SYMBOL_LEN] = "";2931module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);3232+MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern.");3333+3034static bool stackdump = true;3135module_param(stackdump, bool, 0644);3636+MODULE_PARM_DESC(stackdump, "Enable stackdump.");3737+3238static bool use_trace = false;3339module_param(use_trace, bool, 0644);4040+MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging.");34413542static void show_backtrace(void)3643{
···
     if ext != '.ko':
         sys.exit('{}: module path must end with .ko'.format(ko))
     mod = base + '.mod'
-    # The first line of *.mod lists the objects that compose the module.
+    # Read from *.mod, to get a list of objects that compose the module.
     with open(mod) as m:
-        for obj in m.readline().split():
-            yield to_cmdfile(obj)
+        for mod_line in m:
+            yield to_cmdfile(mod_line.rstrip())


 def process_line(root_directory, command_prefix, file_path):
···

 	/*
 	 * connected STDI
+	 * TDM support is assuming it is probed via Audio-Graph-Card style here.
+	 * Default is SDTIx1 if it was probed via Simple-Audio-Card for now.
 	 */
 	sdti_num = of_graph_get_endpoint_count(np);
-	if (WARN_ON((sdti_num > 3) || (sdti_num < 1)))
-		return;
+	if ((sdti_num >= SDTx_MAX) || (sdti_num < 1))
+		sdti_num = 1;

 	AK4613_CONFIG_SDTI_set(priv, sdti_num);
 }
···
 		snd_soc_kcontrol_component(kcontrol);
 	struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component);

+	if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode)
+		return 0;
+
 	switch (ucontrol->value.integer.value[0]) {
 	case 0:
 		/* Set IN1 to normal mode */
···
 		break;
 	}

-	return 0;
+	return 1;
 }

 static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
+10-4
sound/soc/codecs/madera.c
···
 end:
 	snd_soc_dapm_mutex_unlock(dapm);

-	return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+	ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+	if (ret < 0) {
+		dev_err(madera->dev, "Failed to update demux power state: %d\n", ret);
+		return ret;
+	}
+
+	return change;
 }
 EXPORT_SYMBOL_GPL(madera_out1_demux_put);
···
 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
 	const int adsp_num = e->shift_l;
 	const unsigned int item = ucontrol->value.enumerated.item[0];
-	int ret;
+	int ret = 0;

 	if (item >= e->items)
 		return -EINVAL;
···
 			"Cannot change '%s' while in use by active audio paths\n",
 			kcontrol->id.name);
 		ret = -EBUSY;
-	} else {
+	} else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) {
 		/* Volatile register so defer until the codec is powered up */
 		priv->adsp_rate_cache[adsp_num] = e->values[item];
-		ret = 0;
+		ret = 1;
 	}

 	mutex_unlock(&priv->rate_lock);
···
 	struct snd_soc_dapm_update *update = NULL;
 	u32 port_id = w->shift;

+	if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0])
+		return 0;
+
 	wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0];
+
+	/* Remove channel from any list it's in before adding it to a new one */
+	list_del_init(&wcd->rx_chs[port_id].list);

 	switch (wcd->rx_port_value[port_id]) {
 	case 0:
-		list_del_init(&wcd->rx_chs[port_id].list);
+		/* Channel already removed from lists. Nothing to do here */
 		break;
 	case 1:
 		list_add_tail(&wcd->rx_chs[port_id].list,
···
 	priv->spkvdd_en_gpio = gpiod_get(codec_dev, "wlf,spkvdd-ena", GPIOD_OUT_LOW);
 	put_device(codec_dev);

-	if (IS_ERR(priv->spkvdd_en_gpio))
-		return dev_err_probe(dev, PTR_ERR(priv->spkvdd_en_gpio), "getting spkvdd-GPIO\n");
+	if (IS_ERR(priv->spkvdd_en_gpio)) {
+		ret = PTR_ERR(priv->spkvdd_en_gpio);
+		/*
+		 * The spkvdd gpio-lookup is registered by: drivers/mfd/arizona-spi.c,
+		 * so -ENOENT means that arizona-spi hasn't probed yet.
+		 */
+		if (ret == -ENOENT)
+			ret = -EPROBE_DEFER;
+
+		return dev_err_probe(dev, ret, "getting spkvdd-GPIO\n");
+	}

 	/* override platform name, if required */
 	byt_wm5102_card.dev = dev;
+29-22
sound/soc/intel/boards/sof_sdw.c
···
 	.late_probe = sof_sdw_card_late_probe,
 };

+static void mc_dailink_exit_loop(struct snd_soc_card *card)
+{
+	struct snd_soc_dai_link *link;
+	int ret;
+	int i, j;
+
+	for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
+		if (!codec_info_list[i].exit)
+			continue;
+		/*
+		 * We don't need to call .exit function if there is no matched
+		 * dai link found.
+		 */
+		for_each_card_prelinks(card, j, link) {
+			if (!strcmp(link->codecs[0].dai_name,
+				    codec_info_list[i].dai_name)) {
+				ret = codec_info_list[i].exit(card, link);
+				if (ret)
+					dev_warn(card->dev,
+						 "codec exit failed %d\n",
+						 ret);
+				break;
+			}
+		}
+	}
+}
+
 static int mc_probe(struct platform_device *pdev)
 {
 	struct snd_soc_card *card = &card_sof_sdw;
···
 	ret = devm_snd_soc_register_card(&pdev->dev, card);
 	if (ret) {
 		dev_err(card->dev, "snd_soc_register_card failed %d\n", ret);
+		mc_dailink_exit_loop(card);
 		return ret;
 	}
···
 static int mc_remove(struct platform_device *pdev)
 {
 	struct snd_soc_card *card = platform_get_drvdata(pdev);
-	struct snd_soc_dai_link *link;
-	int ret;
-	int i, j;

-	for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
-		if (!codec_info_list[i].exit)
-			continue;
-		/*
-		 * We don't need to call .exit function if there is no matched
-		 * dai link found.
-		 */
-		for_each_card_prelinks(card, j, link) {
-			if (!strcmp(link->codecs[0].dai_name,
-				    codec_info_list[i].dai_name)) {
-				ret = codec_info_list[i].exit(card, link);
-				if (ret)
-					dev_warn(&pdev->dev,
-						 "codec exit failed %d\n",
-						 ret);
-				break;
-			}
-		}
-	}
+	mc_dailink_exit_loop(card);

 	return 0;
 }
+6
sound/soc/qcom/qdsp6/q6apm-dai.c
···
 	cfg.num_channels = runtime->channels;
 	cfg.bit_width = prtd->bits_per_sample;

+	if (prtd->state) {
+		/* clear the previous setup if any */
+		q6apm_graph_stop(prtd->graph);
+		q6apm_unmap_memory_regions(prtd->graph, substream->stream);
+	}
+
 	prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
 	prtd->pos = 0;
 	/* rate and channels are sent to audio driver */
+129-31
sound/soc/rockchip/rockchip_i2s.c
···
 #include <linux/of_gpio.h>
 #include <linux/of_device.h>
 #include <linux/clk.h>
+#include <linux/pinctrl/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <linux/spinlock.h>
···
 	const struct rk_i2s_pins *pins;
 	unsigned int bclk_ratio;
 	spinlock_t lock; /* tx/rx lock */
+	struct pinctrl *pinctrl;
+	struct pinctrl_state *bclk_on;
+	struct pinctrl_state *bclk_off;
 };

+static int i2s_pinctrl_select_bclk_on(struct rk_i2s_dev *i2s)
+{
+	int ret = 0;
+
+	if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_on))
+		ret = pinctrl_select_state(i2s->pinctrl,
+					   i2s->bclk_on);
+
+	if (ret)
+		dev_err(i2s->dev, "bclk enable failed %d\n", ret);
+
+	return ret;
+}
+
+static int i2s_pinctrl_select_bclk_off(struct rk_i2s_dev *i2s)
+{
+	int ret = 0;
+
+	if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_off))
+		ret = pinctrl_select_state(i2s->pinctrl,
+					   i2s->bclk_off);
+
+	if (ret)
+		dev_err(i2s->dev, "bclk disable failed %d\n", ret);
+
+	return ret;
+}

 static int i2s_runtime_suspend(struct device *dev)
 {
···
 	return snd_soc_dai_get_drvdata(dai);
 }

-static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
 {
 	unsigned int val = 0;
 	int retry = 10;
+	int ret = 0;

 	spin_lock(&i2s->lock);
 	if (on) {
-		regmap_update_bits(i2s->regmap, I2S_DMACR,
-				   I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+		ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+					 I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+		if (ret < 0)
+			goto end;

-		regmap_update_bits(i2s->regmap, I2S_XFER,
-				   I2S_XFER_TXS_START | I2S_XFER_RXS_START,
-				   I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+		ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+					 I2S_XFER_TXS_START | I2S_XFER_RXS_START,
+					 I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+		if (ret < 0)
+			goto end;

 		i2s->tx_start = true;
 	} else {
 		i2s->tx_start = false;

-		regmap_update_bits(i2s->regmap, I2S_DMACR,
-				   I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+		ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+					 I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+		if (ret < 0)
+			goto end;

 		if (!i2s->rx_start) {
-			regmap_update_bits(i2s->regmap, I2S_XFER,
-					   I2S_XFER_TXS_START |
-					   I2S_XFER_RXS_START,
-					   I2S_XFER_TXS_STOP |
-					   I2S_XFER_RXS_STOP);
+			ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+						 I2S_XFER_TXS_START |
+						 I2S_XFER_RXS_START,
+						 I2S_XFER_TXS_STOP |
+						 I2S_XFER_RXS_STOP);
+			if (ret < 0)
+				goto end;

 			udelay(150);
-			regmap_update_bits(i2s->regmap, I2S_CLR,
-					   I2S_CLR_TXC | I2S_CLR_RXC,
-					   I2S_CLR_TXC | I2S_CLR_RXC);
+			ret = regmap_update_bits(i2s->regmap, I2S_CLR,
+						 I2S_CLR_TXC | I2S_CLR_RXC,
+						 I2S_CLR_TXC | I2S_CLR_RXC);
+			if (ret < 0)
+				goto end;

 			regmap_read(i2s->regmap, I2S_CLR, &val);
···
 		}
 	}
 	}
+end:
 	spin_unlock(&i2s->lock);
+	if (ret < 0)
+		dev_err(i2s->dev, "lrclk update failed\n");
+
+	return ret;
 }

-static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
 {
 	unsigned int val = 0;
 	int retry = 10;
+	int ret = 0;

 	spin_lock(&i2s->lock);
 	if (on) {
-		regmap_update_bits(i2s->regmap, I2S_DMACR,
+		ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
 				   I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_ENABLE);
+		if (ret < 0)
+			goto end;

-		regmap_update_bits(i2s->regmap, I2S_XFER,
+		ret = regmap_update_bits(i2s->regmap, I2S_XFER,
 				   I2S_XFER_TXS_START | I2S_XFER_RXS_START,
 				   I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+		if (ret < 0)
+			goto end;

 		i2s->rx_start = true;
 	} else {
 		i2s->rx_start = false;

-		regmap_update_bits(i2s->regmap, I2S_DMACR,
+		ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
 				   I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_DISABLE);
+		if (ret < 0)
+			goto end;

 		if (!i2s->tx_start) {
-			regmap_update_bits(i2s->regmap, I2S_XFER,
+			ret = regmap_update_bits(i2s->regmap, I2S_XFER,
 					   I2S_XFER_TXS_START |
 					   I2S_XFER_RXS_START,
 					   I2S_XFER_TXS_STOP |
 					   I2S_XFER_RXS_STOP);
-
+			if (ret < 0)
+				goto end;
 			udelay(150);
-			regmap_update_bits(i2s->regmap, I2S_CLR,
+			ret = regmap_update_bits(i2s->regmap, I2S_CLR,
 					   I2S_CLR_TXC | I2S_CLR_RXC,
 					   I2S_CLR_TXC | I2S_CLR_RXC);
-
+			if (ret < 0)
+				goto end;
 			regmap_read(i2s->regmap, I2S_CLR, &val);
-
 			/* Should wait for clear operation to finish */
 			while (val) {
 				regmap_read(i2s->regmap, I2S_CLR, &val);
···
 		}
 	}
 	}
+end:
 	spin_unlock(&i2s->lock);
+	if (ret < 0)
+		dev_err(i2s->dev, "lrclk update failed\n");
+
+	return ret;
 }

 static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
···
 	case SNDRV_PCM_TRIGGER_RESUME:
 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
 		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-			rockchip_snd_rxctrl(i2s, 1);
+			ret = rockchip_snd_rxctrl(i2s, 1);
 		else
-			rockchip_snd_txctrl(i2s, 1);
+			ret = rockchip_snd_txctrl(i2s, 1);
+		/* Do not turn on bclk if lrclk open fails. */
+		if (ret < 0)
+			return ret;
+		i2s_pinctrl_select_bclk_on(i2s);
 		break;
 	case SNDRV_PCM_TRIGGER_SUSPEND:
 	case SNDRV_PCM_TRIGGER_STOP:
 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
-		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-			rockchip_snd_rxctrl(i2s, 0);
-		else
-			rockchip_snd_txctrl(i2s, 0);
+		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+			if (!i2s->tx_start)
+				i2s_pinctrl_select_bclk_off(i2s);
+			ret = rockchip_snd_rxctrl(i2s, 0);
+		} else {
+			if (!i2s->rx_start)
+				i2s_pinctrl_select_bclk_off(i2s);
+			ret = rockchip_snd_txctrl(i2s, 0);
+		}
 		break;
 	default:
 		ret = -EINVAL;
···
 	}

 	i2s->bclk_ratio = 64;
+	i2s->pinctrl = devm_pinctrl_get(&pdev->dev);
+	if (IS_ERR(i2s->pinctrl))
+		dev_err(&pdev->dev, "failed to find i2s pinctrl\n");
+
+	i2s->bclk_on = pinctrl_lookup_state(i2s->pinctrl,
+					    "bclk_on");
+	if (IS_ERR_OR_NULL(i2s->bclk_on))
+		dev_err(&pdev->dev, "failed to find i2s default state\n");
+	else
+		dev_dbg(&pdev->dev, "find i2s bclk state\n");
+
+	i2s->bclk_off = pinctrl_lookup_state(i2s->pinctrl,
+					     "bclk_off");
+	if (IS_ERR_OR_NULL(i2s->bclk_off))
+		dev_err(&pdev->dev, "failed to find i2s gpio state\n");
+	else
+		dev_dbg(&pdev->dev, "find i2s bclk_off state\n");
+
+	i2s_pinctrl_select_bclk_off(i2s);
+
+	i2s->playback_dma_data.addr = res->start + I2S_TXDR;
+	i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	i2s->playback_dma_data.maxburst = 4;
+
+	i2s->capture_dma_data.addr = res->start + I2S_RXDR;
+	i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	i2s->capture_dma_data.maxburst = 4;

 	dev_set_drvdata(&pdev->dev, i2s);
+5
sound/soc/soc-dapm.c
···
 snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm,
 			 const struct snd_soc_dapm_widget *widget);

+static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg);
+
 /* dapm power sequences - make this per codec in the future */
 static int dapm_up_seq[] = {
 	[snd_soc_dapm_pre] = 1,
···

 			snd_soc_dapm_add_path(widget->dapm, data->widget,
 					      widget, NULL, NULL);
+		} else if (e->reg != SND_SOC_NOPM) {
+			data->value = soc_dapm_read(widget->dapm, e->reg) &
+				      (e->mask << e->shift_l);
 		}
 		break;
 	default:
+2-2
sound/soc/soc-ops.c
···
 		return -EINVAL;
 	if (mc->platform_max && tmp > mc->platform_max)
 		return -EINVAL;
-	if (tmp > mc->max - mc->min + 1)
+	if (tmp > mc->max - mc->min)
 		return -EINVAL;

 	if (invert)
···
 		return -EINVAL;
 	if (mc->platform_max && tmp > mc->platform_max)
 		return -EINVAL;
-	if (tmp > mc->max - mc->min + 1)
+	if (tmp > mc->max - mc->min)
 		return -EINVAL;

 	if (invert)
+9-1
sound/soc/sof/intel/hda-dsp.c
···
  * Power Management.
  */

-static int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
+int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
 {
+	struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata;
+	const struct sof_intel_dsp_desc *chip = hda->desc;
 	unsigned int cpa;
 	u32 adspcs;
 	int ret;
+
+	/* restrict core_mask to host managed cores mask */
+	core_mask &= chip->host_managed_cores_mask;
+	/* return if core_mask is not valid */
+	if (!core_mask)
+		return 0;

 	/* update bits */
 	snd_sof_dsp_update_bits(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS,
+7-6
sound/soc/sof/intel/hda-loader.c
···
 }

 /*
- * first boot sequence has some extra steps. core 0 waits for power
- * status on core 1, so power up core 1 also momentarily, keep it in
- * reset/stall and then turn it off
+ * first boot sequence has some extra steps.
+ * power on all host managed cores and only unstall/run the boot core to boot the
+ * DSP then turn off all non boot cores (if any) is powered on.
  */
 static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
 {
···
 	int ret;

 	/* step 1: power up corex */
-	ret = hda_dsp_enable_core(sdev, chip->host_managed_cores_mask);
+	ret = hda_dsp_core_power_up(sdev, chip->host_managed_cores_mask);
 	if (ret < 0) {
 		if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
 			dev_err(sdev->dev, "error: dsp core 0/1 power up failed\n");
···
 	snd_sof_dsp_write(sdev, HDA_DSP_BAR, chip->ipc_req, ipc_hdr);

 	/* step 3: unset core 0 reset state & unstall/run core 0 */
-	ret = hda_dsp_core_run(sdev, BIT(0));
+	ret = hda_dsp_core_run(sdev, chip->init_core_mask);
 	if (ret < 0) {
 		if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
 			dev_err(sdev->dev,
···
 	struct snd_dma_buffer dmab;
 	int ret, ret1, i;

-	if (hda->imrboot_supported && !sdev->first_boot) {
+	if (sdev->system_suspend_target < SOF_SUSPEND_S4 &&
+	    hda->imrboot_supported && !sdev->first_boot) {
 		dev_dbg(sdev->dev, "IMR restore supported, booting from IMR directly\n");
 		hda->boot_iteration = 0;
 		ret = hda_dsp_boot_imr(sdev);
+1-73
sound/soc/sof/intel/hda-pcm.c
···
 		goto found;
 	}

-	switch (sof_hda_position_quirk) {
-	case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
-		/*
-		 * This legacy code, inherited from the Skylake driver,
-		 * mixes DPIB registers and DPIB DDR updates and
-		 * does not seem to follow any known hardware recommendations.
-		 * It's not clear e.g. why there is a different flow
-		 * for capture and playback, the only information that matters is
-		 * what traffic class is used, and on all SOF-enabled platforms
-		 * only VC0 is supported so the work-around was likely not necessary
-		 * and quite possibly wrong.
-		 */
-
-		/* DPIB/posbuf position mode:
-		 * For Playback, Use DPIB register from HDA space which
-		 * reflects the actual data transferred.
-		 * For Capture, Use the position buffer for pointer, as DPIB
-		 * is not accurate enough, its update may be completed
-		 * earlier than the data written to DDR.
-		 */
-		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
-			pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-					       AZX_REG_VS_SDXDPIB_XBASE +
-					       (AZX_REG_VS_SDXDPIB_XINTERVAL *
-						hstream->index));
-		} else {
-			/*
-			 * For capture stream, we need more workaround to fix the
-			 * position incorrect issue:
-			 *
-			 * 1. Wait at least 20us before reading position buffer after
-			 *    the interrupt generated(IOC), to make sure position update
-			 *    happens on frame boundary i.e. 20.833uSec for 48KHz.
-			 * 2. Perform a dummy Read to DPIB register to flush DMA
-			 *    position value.
-			 * 3. Read the DMA Position from posbuf. Now the readback
-			 *    value should be >= period boundary.
-			 */
-			usleep_range(20, 21);
-			snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-					 AZX_REG_VS_SDXDPIB_XBASE +
-					 (AZX_REG_VS_SDXDPIB_XINTERVAL *
-					  hstream->index));
-			pos = snd_hdac_stream_get_pos_posbuf(hstream);
-		}
-		break;
-	case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
-		/*
-		 * In case VC1 traffic is disabled this is the recommended option
-		 */
-		pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-				       AZX_REG_VS_SDXDPIB_XBASE +
-				       (AZX_REG_VS_SDXDPIB_XINTERVAL *
-					hstream->index));
-		break;
-	case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
-		/*
-		 * This is the recommended option when VC1 is enabled.
-		 * While this isn't needed for SOF platforms it's added for
-		 * consistency and debug.
-		 */
-		pos = snd_hdac_stream_get_pos_posbuf(hstream);
-		break;
-	default:
-		dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
-			     sof_hda_position_quirk);
-		pos = 0;
-		break;
-	}
-
-	if (pos >= hstream->bufsize)
-		pos = 0;
-
+	pos = hda_dsp_stream_get_position(hstream, substream->stream, true);
 found:
 	pos = bytes_to_frames(substream->runtime, pos);
+90-4
sound/soc/sof/intel/hda-stream.c
···
 }

 static void
-hda_dsp_set_bytes_transferred(struct hdac_stream *hstream, u64 buffer_size)
+hda_dsp_compr_bytes_transferred(struct hdac_stream *hstream, int direction)
 {
+	u64 buffer_size = hstream->bufsize;
 	u64 prev_pos, pos, num_bytes;

 	div64_u64_rem(hstream->curr_pos, buffer_size, &prev_pos);
-	pos = snd_hdac_stream_get_pos_posbuf(hstream);
+	pos = hda_dsp_stream_get_position(hstream, direction, false);

 	if (pos < prev_pos)
 		num_bytes = (buffer_size - prev_pos) + pos;
···
 		if (s->substream && sof_hda->no_ipc_position) {
 			snd_sof_pcm_period_elapsed(s->substream);
 		} else if (s->cstream) {
-			hda_dsp_set_bytes_transferred(s,
-						      s->cstream->runtime->buffer_size);
+			hda_dsp_compr_bytes_transferred(s, s->cstream->direction);
 			snd_compr_fragment_elapsed(s->cstream);
 		}
 	}
···
 						    hext_stream);
 		devm_kfree(sdev->dev, hda_stream);
 	}
+}
+
+snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream,
+					      int direction, bool can_sleep)
+{
+	struct hdac_ext_stream *hext_stream = stream_to_hdac_ext_stream(hstream);
+	struct sof_intel_hda_stream *hda_stream = hstream_to_sof_hda_stream(hext_stream);
+	struct snd_sof_dev *sdev = hda_stream->sdev;
+	snd_pcm_uframes_t pos;
+
+	switch (sof_hda_position_quirk) {
+	case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
+		/*
+		 * This legacy code, inherited from the Skylake driver,
+		 * mixes DPIB registers and DPIB DDR updates and
+		 * does not seem to follow any known hardware recommendations.
+		 * It's not clear e.g. why there is a different flow
+		 * for capture and playback, the only information that matters is
+		 * what traffic class is used, and on all SOF-enabled platforms
+		 * only VC0 is supported so the work-around was likely not necessary
+		 * and quite possibly wrong.
+		 */
+
+		/* DPIB/posbuf position mode:
+		 * For Playback, Use DPIB register from HDA space which
+		 * reflects the actual data transferred.
+		 * For Capture, Use the position buffer for pointer, as DPIB
+		 * is not accurate enough, its update may be completed
+		 * earlier than the data written to DDR.
+		 */
+		if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
+			pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+					       AZX_REG_VS_SDXDPIB_XBASE +
+					       (AZX_REG_VS_SDXDPIB_XINTERVAL *
+						hstream->index));
+		} else {
+			/*
+			 * For capture stream, we need more workaround to fix the
+			 * position incorrect issue:
+			 *
+			 * 1. Wait at least 20us before reading position buffer after
+			 *    the interrupt generated(IOC), to make sure position update
+			 *    happens on frame boundary i.e. 20.833uSec for 48KHz.
+			 * 2. Perform a dummy Read to DPIB register to flush DMA
+			 *    position value.
+			 * 3. Read the DMA Position from posbuf. Now the readback
+			 *    value should be >= period boundary.
+			 */
+			if (can_sleep)
+				usleep_range(20, 21);
+
+			snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+					 AZX_REG_VS_SDXDPIB_XBASE +
+					 (AZX_REG_VS_SDXDPIB_XINTERVAL *
+					  hstream->index));
+			pos = snd_hdac_stream_get_pos_posbuf(hstream);
+		}
+		break;
+	case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
+		/*
+		 * In case VC1 traffic is disabled this is the recommended option
+		 */
+		pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+				       AZX_REG_VS_SDXDPIB_XBASE +
+				       (AZX_REG_VS_SDXDPIB_XINTERVAL *
+					hstream->index));
+		break;
+	case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
+		/*
+		 * This is the recommended option when VC1 is enabled.
+		 * While this isn't needed for SOF platforms it's added for
+		 * consistency and debug.
+		 */
+		pos = snd_hdac_stream_get_pos_posbuf(hstream);
+		break;
+	default:
+		dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
+			     sof_hda_position_quirk);
+		pos = 0;
+		break;
+	}
+
+	if (pos >= hstream->bufsize)
+		pos = 0;
+
+	return pos;
 }
···
 	struct sof_ipc_ctrl_data *cdata;
 	int ret;

+	if (scontrol->max_size < (sizeof(*cdata) + sizeof(struct sof_abi_hdr))) {
+		dev_err(sdev->dev, "%s: insufficient size for a bytes control: %zu.\n",
+			__func__, scontrol->max_size);
+		return -EINVAL;
+	}
+
+	if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) {
+		dev_err(sdev->dev,
+			"%s: bytes data size %zu exceeds max %zu.\n", __func__,
+			scontrol->priv_size, scontrol->max_size - sizeof(*cdata));
+		return -EINVAL;
+	}
+
 	scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL);
 	if (!scontrol->ipc_control_data)
 		return -ENOMEM;
-
-	if (scontrol->max_size < sizeof(*cdata) ||
-	    scontrol->max_size < sizeof(struct sof_abi_hdr)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	/* init the get/put bytes data */
-	if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) {
-		dev_err(sdev->dev, "err: bytes data size %zu exceeds max %zu.\n",
-			scontrol->priv_size, scontrol->max_size - sizeof(*cdata));
-		ret = -EINVAL;
-		goto err;
-	}

 	scontrol->size = sizeof(struct sof_ipc_ctrl_data) + scontrol->priv_size;
+1-1
sound/soc/sof/mediatek/mt8186/mt8186.c
···
 					   PLATFORM_DEVID_NONE,
 					   pdev, sizeof(*pdev));
 	if (IS_ERR(priv->ipc_dev)) {
-		ret = IS_ERR(priv->ipc_dev);
+		ret = PTR_ERR(priv->ipc_dev);
 		dev_err(sdev->dev, "failed to create mtk-adsp-ipc device\n");
 		goto err_adsp_off;
 	}
+20-1
sound/soc/sof/pm.c
···
 	u32 target_dsp_state;

 	switch (sdev->system_suspend_target) {
+	case SOF_SUSPEND_S5:
+	case SOF_SUSPEND_S4:
+		/* DSP should be in D3 if the system is suspending to S3+ */
 	case SOF_SUSPEND_S3:
 		/* DSP should be in D3 if the system is suspending to S3 */
 		target_dsp_state = SOF_DSP_PM_D3;
···
 		return 0;

 #if defined(CONFIG_ACPI)
-	if (acpi_target_system_state() == ACPI_STATE_S0)
+	switch (acpi_target_system_state()) {
+	case ACPI_STATE_S0:
 		sdev->system_suspend_target = SOF_SUSPEND_S0IX;
+		break;
+	case ACPI_STATE_S1:
+	case ACPI_STATE_S2:
+	case ACPI_STATE_S3:
+		sdev->system_suspend_target = SOF_SUSPEND_S3;
+		break;
+	case ACPI_STATE_S4:
+		sdev->system_suspend_target = SOF_SUSPEND_S4;
+		break;
+	case ACPI_STATE_S5:
+		sdev->system_suspend_target = SOF_SUSPEND_S5;
+		break;
+	default:
+		break;
+	}
 #endif

 	return 0;
···

 	sample_type = evsel->core.attr.sample_type;

+	if (sample_type & ~OFFCPU_SAMPLE_TYPES) {
+		pr_err("not supported sample type: %llx\n",
+		       (unsigned long long)sample_type);
+		return -1;
+	}
+
 	if (sample_type & (PERF_SAMPLE_ID | PERF_SAMPLE_IDENTIFIER)) {
 		if (evsel->core.id)
 			sid = evsel->core.id[0];
···
 	}
 	if (sample_type & PERF_SAMPLE_CGROUP)
 		data.array[n++] = key.cgroup_id;
-	/* TODO: handle more sample types */

 	size = n * sizeof(u64);
 	data.hdr.size = size;
+14-6
tools/perf/util/bpf_skel/off_cpu.bpf.c
···
 	__uint(max_entries, 1);
 } cgroup_filter SEC(".maps");

+/* new kernel task_struct definition */
+struct task_struct___new {
+	long __state;
+} __attribute__((preserve_access_index));
+
 /* old kernel task_struct definition */
 struct task_struct___old {
 	long state;
···
  */
 static inline int get_task_state(struct task_struct *t)
 {
-	if (bpf_core_field_exists(t->__state))
-		return BPF_CORE_READ(t, __state);
+	/* recast pointer to capture new type for compiler */
+	struct task_struct___new *t_new = (void *)t;

-	/* recast pointer to capture task_struct___old type for compiler */
-	struct task_struct___old *t_old = (void *)t;
+	if (bpf_core_field_exists(t_new->__state)) {
+		return BPF_CORE_READ(t_new, __state);
+	} else {
+		/* recast pointer to capture old type for compiler */
+		struct task_struct___old *t_old = (void *)t;

-	/* now use old "state" name of the field */
-	return BPF_CORE_READ(t_old, state);
+		return BPF_CORE_READ(t_old, state);
+	}
 }

 static inline __u64 get_cgroup_id(struct task_struct *t)
+9
tools/perf/util/evsel.c
···
 #include "util.h"
 #include "hashmap.h"
 #include "pmu-hybrid.h"
+#include "off_cpu.h"
 #include "../perf-sys.h"
 #include "util/parse-branch-options.h"
 #include <internal/xyarray.h>
···
 	}
 }

+static bool evsel__is_offcpu_event(struct evsel *evsel)
+{
+	return evsel__is_bpf_output(evsel) && !strcmp(evsel->name, OFFCPU_EVENT);
+}
+
 /*
  * The enable_on_exec/disabled value strategy:
  *
···
 	 */
 	if (evsel__is_dummy_event(evsel))
 		evsel__reset_sample_bit(evsel, BRANCH_STACK);
+
+	if (evsel__is_offcpu_event(evsel))
+		evsel->core.attr.sample_type &= OFFCPU_SAMPLE_TYPES;
 }

 int evsel__set_filter(struct evsel *evsel, const char *filter)
···
 	# FDB entry was installed.
 	bridge link set dev $br_port1 flood off

+	ip link set $host1_if promisc on
 	tc qdisc add dev $host1_if ingress
 	tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \
 		flower dst_mac $mac action drop
···
 	tc -j -s filter show dev $host1_if ingress \
 		| jq -e ".[] | select(.options.handle == 101) \
 		| select(.options.actions[0].stats.packets == 1)" &> /dev/null
-	check_fail $? "Packet reached second host when should not"
+	check_fail $? "Packet reached first host when should not"

 	$MZ $host1_if -c 1 -p 64 -a $mac -t ip -q
 	sleep 1
···

 	tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower
 	tc qdisc del dev $host1_if ingress
+	ip link set $host1_if promisc off

 	bridge link set dev $br_port1 flood on
···

 	# Add an ACL on `host2_if` which will tell us whether the packet
 	# was flooded to it or not.
+	ip link set $host2_if promisc on
 	tc qdisc add dev $host2_if ingress
 	tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \
 		flower dst_mac $mac action drop
···

 	tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower
 	tc qdisc del dev $host2_if ingress
+	ip link set $host2_if promisc off

 	return $err
 }
···
 	ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24
 	ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
 	ip -netns "${PEER_NS}" link set dev veth1 up
-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 }

 run_one() {
+1-1
tools/testing/selftests/net/udpgro_bench.sh
···
 	ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
 	ip -netns "${PEER_NS}" link set dev veth1 up

-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -r &
 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -t ${rx_args} -r &
+1-1
tools/testing/selftests/net/udpgro_frglist.sh
···
 	ip netns exec "${PEER_NS}" ethtool -K veth1 rx-gro-list on


-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 	tc -n "${PEER_NS}" qdisc add dev veth1 clsact
 	tc -n "${PEER_NS}" filter add dev veth1 ingress prio 4 protocol ipv6 bpf object-file ../bpf/nat6to4.o section schedcls/ingress6/nat_6 direct-action
 	tc -n "${PEER_NS}" filter add dev veth1 egress prio 4 protocol ip bpf object-file ../bpf/nat6to4.o section schedcls/egress4/snat4 direct-action
+1-1
tools/testing/selftests/net/udpgro_fwd.sh
···
 		ip -n $BASE$ns addr add dev veth$ns $BM_NET_V4$ns/24
 		ip -n $BASE$ns addr add dev veth$ns $BM_NET_V6$ns/64 nodad
 	done
-	ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
+	ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
}

 create_vxlan_endpoint() {