···
 .. kernel-doc:: drivers/firmware/edd.c
    :internal:

+Generic System Framebuffers Interface
+-------------------------------------
+
+.. kernel-doc:: drivers/firmware/sysfb.c
+   :export:
+
 Intel Stratix10 SoC Service Layer
 ---------------------------------
 Some features of the Intel Stratix10 SoC require a level of privilege
+36
Documentation/process/maintainer-netdev.rst
···
 netdev FAQ
 ==========

+tl;dr
+-----
+
+ - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
+ - for fixes the ``Fixes:`` tag is required, regardless of the tree
+ - don't post large series (> 15 patches), break them up
+ - don't repost your patches within one 24h period
+ - reverse xmas tree
+
 What is netdev?
 ---------------
 It is a mailing list for all network-related Linux stuff. This
···
 version that should be applied. If there is any doubt, the maintainer
 will reply and ask what should be done.

+How do I divide my work into patches?
+-------------------------------------
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. Larger series takes longer
+to review as reviewers will defer looking at it until they find a large
+chunk of time. A small series can be reviewed in a short time, so Maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the mailing
+list traffic.
+
 I made changes to only a few patches in a patch series should I resend only those changed?
 ------------------------------------------------------------------------------------------
 No, please resend the entire patch series and make sure you do number your
···
   /* foobar blah blah blah
    * another line of text
    */
+
+What is "reverse xmas tree"?
+----------------------------
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+  struct scatterlist *sg;
+  struct sk_buff *skb;
+  int err, i;
+
+If there are dependencies between the variables preventing the ordering
+move the initialization out of line.

 I am working in existing code which uses non-standard formatting. Which formatting should I use?
 ------------------------------------------------------------------------------------------------
+107-24
MAINTAINERS
···
 ARM/QUALCOMM SUPPORT
 M:	Andy Gross <agross@kernel.org>
 M:	Bjorn Andersson <bjorn.andersson@linaro.org>
+R:	Konrad Dybcio <konrad.dybcio@somainline.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
···
 F:	Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
 F:	drivers/iio/accel/bma400*

-BPF (Safe dynamic programs and tools)
+BPF [GENERAL] (Safe Dynamic Programs and Tools)
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Andrii Nakryiko <andrii@kernel.org>
-R:	Martin KaFai Lau <kafai@fb.com>
-R:	Song Liu <songliubraving@fb.com>
+R:	Martin KaFai Lau <martin.lau@linux.dev>
+R:	Song Liu <song@kernel.org>
 R:	Yonghong Song <yhs@fb.com>
 R:	John Fastabend <john.fastabend@gmail.com>
 R:	KP Singh <kpsingh@kernel.org>
-L:	netdev@vger.kernel.org
+R:	Stanislav Fomichev <sdf@google.com>
+R:	Hao Luo <haoluo@google.com>
+R:	Jiri Olsa <jolsa@kernel.org>
 L:	bpf@vger.kernel.org
 S:	Supported
 W:	https://bpf.io/
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
-N:	bpf
-K:	bpf

 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/arm/net/
···
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Zi Shen Lim <zlim.lnx@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/arm64/net/
···
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:	Johan Almbladh <johan.almbladh@anyfinetworks.com>
 M:	Paul Burton <paulburton@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/mips/net/

 BPF JIT for NFP NICs
 M:	Jakub Kicinski <kuba@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	drivers/net/ethernet/netronome/nfp/bpf/
···
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Michael Ellerman <mpe@ellerman.id.au>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/powerpc/net/
···
 BPF JIT for RISC-V (32-bit)
 M:	Luke Nelson <luke.r.nels@gmail.com>
 M:	Xi Wang <xi.wang@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···

 BPF JIT for RISC-V (64-bit)
 M:	Björn Töpel <bjorn@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···
 M:	Ilya Leoshkevich <iii@linux.ibm.com>
 M:	Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/s390/net/
···

 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/sparc/net/

 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/x86/net/bpf_jit_comp32.c
···
 BPF JIT for X86 64-BIT
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c

-BPF LSM (Security Audit and Enforcement using BPF)
+BPF [CORE]
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/verifier.c
+F:	kernel/bpf/tnum.c
+F:	kernel/bpf/core.c
+F:	kernel/bpf/syscall.c
+F:	kernel/bpf/dispatcher.c
+F:	kernel/bpf/trampoline.c
+F:	include/linux/bpf*
+F:	include/linux/filter.h
+
+BPF [BTF]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/btf.c
+F:	include/linux/btf*
+
+BPF [TRACING]
+M:	Song Liu <song@kernel.org>
+R:	Jiri Olsa <jolsa@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/trace/bpf_trace.c
+F:	kernel/bpf/stackmap.c
+
+BPF [NETWORKING] (tc BPF, sock_addr)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	net/core/filter.c
+F:	net/sched/act_bpf.c
+F:	net/sched/cls_bpf.c
+
+BPF [NETWORKING] (struct_ops, reuseport)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/bpf_struct*
+
+BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
 M:	KP Singh <kpsingh@kernel.org>
 R:	Florent Revest <revest@chromium.org>
 R:	Brendan Jackman <jackmanb@chromium.org>
···
 F:	kernel/bpf/bpf_lsm.c
 F:	security/bpf/

-BPF L7 FRAMEWORK
+BPF [STORAGE & CGROUPS]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/cgroup.c
+F:	kernel/bpf/*storage.c
+F:	kernel/bpf/bpf_lru*
+
+BPF [RINGBUF]
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/ringbuf.c
+
+BPF [ITERATOR]
+M:	Yonghong Song <yhs@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/*iter.c
+
+BPF [L7 FRAMEWORK] (sockmap)
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Jakub Sitnicki <jakub@cloudflare.com>
 L:	netdev@vger.kernel.org
···
 F:	net/ipv4/udp_bpf.c
 F:	net/unix/unix_bpf.c

-BPFTOOL
+BPF [LIBRARY] (libbpf)
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/lib/bpf/
+
+BPF [TOOLING] (bpftool)
 M:	Quentin Monnet <quentin@isovalent.com>
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/disasm.*
 F:	tools/bpf/bpftool/
+
+BPF [SELFTESTS] (Test Runners & Infrastructure)
+M:	Andrii Nakryiko <andrii@kernel.org>
+R:	Mykola Lysenko <mykolal@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/testing/selftests/bpf/
+
+BPF [MISC]
+L:	bpf@vger.kernel.org
+S:	Odd Fixes
+K:	(?:\b|_)bpf(?:\b|_)

 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
 F:	Documentation/devicetree/bindings/clock/
 F:	drivers/clk/
+F:	include/dt-bindings/clock/
 F:	include/linux/clk-pr*
 F:	include/linux/clk/
 F:	include/linux/of_clk.h
···
 M:	Cezary Rojewski <cezary.rojewski@intel.com>
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <liam.r.girdwood@linux.intel.com>
-M:	Jie Yang <yang.jie@linux.intel.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
+M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/intel/
···
 PIN CONTROLLER - INTEL
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
 M:	Andy Shevchenko <andy@kernel.org>
-S:	Maintained
+S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
 F:	drivers/pinctrl/intel/
···

 QCOM AUDIO (ASoC) DRIVERS
 M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
-M:	Banajit Goswami <bgoswami@codeaurora.org>
+M:	Banajit Goswami <bgoswami@quicinc.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/codecs/lpass-va-macro.c
···

 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
 M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
···
 SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <lgirdwood@gmail.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
 M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
-M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
+R:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 M:	Daniel Baluta <daniel.baluta@nxp.com>
 L:	sound-open-firmware@alsa-project.org (moderated for non-subscribers)
 S:	Supported
···
  * Copyright 2021 Google LLC.
  */

-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */

 / {
 	/* BOARD-SPECIFIC TOP LEVEL NODES */
···
  * Copyright 2020 Google LLC.
  */

-#include "sc7180-trogdor.dtsi"
+/* This file must be included after sc7180-trogdor.dtsi */

 &ap_sar_sensor {
 	semtech,cs0-ground;
···
 /*
  * Verify a frameinfo structure. The return address should be a valid text
  * address. The frame pointer may be null if its the last frame, otherwise
- * the frame pointer should point to a location in the stack after the the
+ * the frame pointer should point to a location in the stack after the
  * top of the next frame up.
  */
 static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)
···
 # If you really need to reference something from prom_init.o add
 # it to the list below:

-grep "^CONFIG_KASAN=y$" .config >/dev/null
+grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
 if [ $? -eq 0 ]
 then
 	MEM_FUNCS="__memcpy __memset"
+32-1
arch/powerpc/mm/mem.c
···
 	vm_unmap_aliases();
 }

+/*
+ * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
+ * updating.
+ */
+static void update_end_of_memory_vars(u64 start, u64 size)
+{
+	unsigned long end_pfn = PFN_UP(start + size);
+
+	if (end_pfn > max_pfn) {
+		max_pfn = end_pfn;
+		max_low_pfn = end_pfn;
+		high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
+	}
+}
+
+int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+		    struct mhp_params *params)
+{
+	int ret;
+
+	ret = __add_pages(nid, start_pfn, nr_pages, params);
+	if (ret)
+		return ret;
+
+	/* update max_pfn, max_low_pfn and high_memory */
+	update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+				  nr_pages << PAGE_SHIFT);
+
+	return ret;
+}
+
 int __ref arch_add_memory(int nid, u64 start, u64 size,
 			  struct mhp_params *params)
 {
···
 	rc = arch_create_linear_mapping(nid, start, size, params);
 	if (rc)
 		return rc;
-	rc = __add_pages(nid, start_pfn, nr_pages, params);
+	rc = add_pages(nid, start_pfn, nr_pages, params);
 	if (rc)
 		arch_remove_linear_mapping(start, size);
 	return rc;
···
 config KEXEC_FILE
 	bool "kexec file based system call"
 	select KEXEC_CORE
-	select BUILD_BIN2C
 	depends on CRYPTO
 	depends on CRYPTO_SHA256
 	depends on CRYPTO_SHA256_S390
-217
arch/s390/crypto/arch_random.c
···
  *
  * Copyright IBM Corp. 2017, 2020
  * Author(s): Harald Freudenberger
- *
- * The s390_arch_random_generate() function may be called from random.c
- * in interrupt context. So this implementation does the best to be very
- * fast. There is a buffer of random data which is asynchronously checked
- * and filled by a workqueue thread.
- * If there are enough bytes in the buffer the s390_arch_random_generate()
- * just delivers these bytes. Otherwise false is returned until the
- * worker thread refills the buffer.
- * The worker fills the rng buffer by pulling fresh entropy from the
- * high quality (but slow) true hardware random generator. This entropy
- * is then spread over the buffer with an pseudo random generator PRNG.
- * As the arch_get_random_seed_long() fetches 8 bytes and the calling
- * function add_interrupt_randomness() counts this as 1 bit entropy the
- * distribution needs to make sure there is in fact 1 bit entropy contained
- * in 8 bytes of the buffer. The current values pull 32 byte entropy
- * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
- * will contain 1 bit of entropy.
- * The worker thread is rescheduled based on the charge level of the
- * buffer but at least with 500 ms delay to avoid too much CPU consumption.
- * So the max. amount of rng data delivered via arch_get_random_seed is
- * limited to 4k bytes per second.
  */

 #include <linux/kernel.h>
 #include <linux/atomic.h>
 #include <linux/random.h>
-#include <linux/slab.h>
 #include <linux/static_key.h>
-#include <linux/workqueue.h>
-#include <linux/moduleparam.h>
 #include <asm/cpacf.h>

 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);

 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
 EXPORT_SYMBOL(s390_arch_random_counter);
-
-#define ARCH_REFILL_TICKS		(HZ/2)
-#define ARCH_PRNG_SEED_SIZE		32
-#define ARCH_RNG_BUF_SIZE		2048
-
-static DEFINE_SPINLOCK(arch_rng_lock);
-static u8 *arch_rng_buf;
-static unsigned int arch_rng_buf_idx;
-
-static void arch_rng_refill_buffer(struct work_struct *);
-static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
-
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
-{
-	/* max hunk is ARCH_RNG_BUF_SIZE */
-	if (nbytes > ARCH_RNG_BUF_SIZE)
-		return false;
-
-	/* lock rng buffer */
-	if (!spin_trylock(&arch_rng_lock))
-		return false;
-
-	/* try to resolve the requested amount of bytes from the buffer */
-	arch_rng_buf_idx -= nbytes;
-	if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
-		memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
-		atomic64_add(nbytes, &s390_arch_random_counter);
-		spin_unlock(&arch_rng_lock);
-		return true;
-	}
-
-	/* not enough bytes in rng buffer, refill is done asynchronously */
-	spin_unlock(&arch_rng_lock);
-
-	return false;
-}
-EXPORT_SYMBOL(s390_arch_random_generate);
-
-static void arch_rng_refill_buffer(struct work_struct *unused)
-{
-	unsigned int delay = ARCH_REFILL_TICKS;
-
-	spin_lock(&arch_rng_lock);
-	if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
-		/* buffer is exhausted and needs refill */
-		u8 seed[ARCH_PRNG_SEED_SIZE];
-		u8 prng_wa[240];
-		/* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
-		cpacf_trng(NULL, 0, seed, sizeof(seed));
-		/* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
-		memset(prng_wa, 0, sizeof(prng_wa));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-			   &prng_wa, NULL, 0, seed, sizeof(seed));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
-			   &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
-		arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
-	}
-	delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
-	spin_unlock(&arch_rng_lock);
-
-	/* kick next check */
-	queue_delayed_work(system_long_wq, &arch_rng_work, delay);
-}
-
-/*
- * Here follows the implementation of s390_arch_get_random_long().
- *
- * The random longs to be pulled by arch_get_random_long() are
- * prepared in an 4K buffer which is filled from the NIST 800-90
- * compliant s390 drbg. By default the random long buffer is refilled
- * 256 times before the drbg itself needs a reseed. The reseed of the
- * drbg is done with 32 bytes fetched from the high quality (but slow)
- * trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256
- * bits of entropy are spread over 256 * 4KB = 1MB serving 131072
- * arch_get_random_long() invocations before reseeded.
- *
- * How often the 4K random long buffer is refilled with the drbg
- * before the drbg is reseeded can be adjusted. There is a module
- * parameter 's390_arch_rnd_long_drbg_reseed' accessible via
- * /sys/module/arch_random/parameters/rndlong_drbg_reseed
- * or as kernel command line parameter
- * arch_random.rndlong_drbg_reseed=<value>
- * This parameter tells how often the drbg fills the 4K buffer before
- * it is re-seeded by fresh entropy from the trng.
- * A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
- * KB with 32 bytes of fresh entropy pulled from the trng. So a value
- * of 16 would result in 256 bits entropy per 64 KB.
- * A value of 256 results in 1MB of drbg output before a reseed of the
- * drbg is done. So this would spread the 256 bits of entropy among 1MB.
- * Setting this parameter to 0 forces the reseed to take place every
- * time the 4K buffer is depleted, so the entropy rises to 256 bits
- * entropy per 4K or 0.5 bit entropy per arch_get_random_long(). With
- * setting this parameter to negative values all this effort is
- * disabled, arch_get_random long() returns false and thus indicating
- * that the arch_get_random_long() feature is disabled at all.
- */
-
-static unsigned long rndlong_buf[512];
-static DEFINE_SPINLOCK(rndlong_lock);
-static int rndlong_buf_index;
-
-static int rndlong_drbg_reseed = 256;
-module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600);
-MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed");
-
-static inline void refill_rndlong_buf(void)
-{
-	static u8 prng_ws[240];
-	static int drbg_counter;
-
-	if (--drbg_counter < 0) {
-		/* need to re-seed the drbg */
-		u8 seed[32];
-
-		/* fetch seed from trng */
-		cpacf_trng(NULL, 0, seed, sizeof(seed));
-		/* seed drbg */
-		memset(prng_ws, 0, sizeof(prng_ws));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-			   &prng_ws, NULL, 0, seed, sizeof(seed));
-		/* re-init counter for drbg */
-		drbg_counter = rndlong_drbg_reseed;
-	}
-
-	/* fill the arch_get_random_long buffer from drbg */
-	cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws,
-		   (u8 *) rndlong_buf, sizeof(rndlong_buf),
-		   NULL, 0);
-}
-
-bool s390_arch_get_random_long(unsigned long *v)
-{
-	bool rc = false;
-	unsigned long flags;
-
-	/* arch_get_random_long() disabled ? */
-	if (rndlong_drbg_reseed < 0)
-		return false;
-
-	/* try to lock the random long lock */
-	if (!spin_trylock_irqsave(&rndlong_lock, flags))
-		return false;
-
-	if (--rndlong_buf_index >= 0) {
-		/* deliver next long value from the buffer */
-		*v = rndlong_buf[rndlong_buf_index];
-		rc = true;
-		goto out;
-	}
-
-	/* buffer is depleted and needs refill */
-	if (in_interrupt()) {
-		/* delay refill in interrupt context to next caller */
-		rndlong_buf_index = 0;
-		goto out;
-	}
-
-	/* refill random long buffer */
-	refill_rndlong_buf();
-	rndlong_buf_index = ARRAY_SIZE(rndlong_buf);
-
-	/* and provide one random long */
-	*v = rndlong_buf[--rndlong_buf_index];
-	rc = true;
-
-out:
-	spin_unlock_irqrestore(&rndlong_lock, flags);
-	return rc;
-}
-EXPORT_SYMBOL(s390_arch_get_random_long);
-
-static int __init s390_arch_random_init(void)
-{
-	/* all the needed PRNO subfunctions available ? */
-	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
-	    cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
-
-		/* alloc arch random working buffer */
-		arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
-		if (!arch_rng_buf)
-			return -ENOMEM;
-
-		/* kick worker queue job to fill the random buffer */
-		queue_delayed_work(system_long_wq,
-				   &arch_rng_work, ARCH_REFILL_TICKS);
-
-		/* enable arch random to the outside world */
-		static_branch_enable(&s390_arch_random_available);
-	}
-
-	return 0;
-}
-arch_initcall(s390_arch_random_init);
+7-7
arch/s390/include/asm/archrandom.h
···
 #include <linux/static_key.h>
 #include <linux/atomic.h>
+#include <asm/cpacf.h>

 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
 extern atomic64_t s390_arch_random_counter;

-bool s390_arch_get_random_long(unsigned long *v);
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
-
 static inline bool __must_check arch_get_random_long(unsigned long *v)
 {
-	if (static_branch_likely(&s390_arch_random_available))
-		return s390_arch_get_random_long(v);
 	return false;
 }
···
 static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
+		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+		return true;
 	}
 	return false;
 }
···
 static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
+		cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
+		atomic64_add(sizeof(*v), &s390_arch_random_counter);
+		return true;
 	}
 	return false;
 }
+3-3
arch/s390/include/asm/qdio.h
···
  * @sb_count: number of storage blocks
  * @sba: storage block element addresses
  * @dcount: size of storage block elements
- * @user0: user defineable value
- * @res4: reserved paramater
- * @user1: user defineable value
+ * @user0: user definable value
+ * @res4: reserved parameter
+ * @user1: user definable value
  */
 struct qaob {
 	u64 res0[6];
···
 	  CRC32c and CRC32 CRC algorithms implemented using mips crypto
 	  instructions, when available.

+config CRYPTO_CRC32_S390
+	tristate "CRC-32 algorithms"
+	depends on S390
+	select CRYPTO_HASH
+	select CRC32
+	help
+	  Select this option if you want to use hardware accelerated
+	  implementations of CRC algorithms. With this option, you
+	  can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
+	  and CRC-32C (Castagnoli).
+
+	  It is available with IBM z13 or later.

 config CRYPTO_XXHASH
 	tristate "xxHash hash algorithm"
···
 	  Extensions version 1 (AVX1), or Advanced Vector Extensions
 	  version 2 (AVX2) instructions, when available.

+config CRYPTO_SHA512_S390
+	tristate "SHA384 and SHA512 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA512 secure hash standard.
+
+	  It is available as of z10.
+
 config CRYPTO_SHA1_OCTEON
 	tristate "SHA1 digest algorithm (OCTEON)"
 	depends on CPU_CAVIUM_OCTEON
···
 	help
 	  SHA-1 secure hash standard (DFIPS 180-4) implemented
 	  using powerpc SPE SIMD instruction set.
+
+config CRYPTO_SHA1_S390
+	tristate "SHA1 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
+
+	  It is available as of z990.

 config CRYPTO_SHA256
 	tristate "SHA224 and SHA256 digest algorithm"
···
 	  SHA-256 secure hash standard (DFIPS 180-2) implemented
 	  using sparc64 crypto instructions, when available.

+config CRYPTO_SHA256_S390
+	tristate "SHA256 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA256 secure hash standard (DFIPS 180-2).
+
+	  It is available as of z9.
+
 config CRYPTO_SHA512
 	tristate "SHA384 and SHA512 digest algorithms"
 	select CRYPTO_HASH
···

 	  References:
 	  http://keccak.noekeon.org/
+
+config CRYPTO_SHA3_256_S390
+	tristate "SHA3_224 and SHA3_256 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA3_256 secure hash standard.
+
+	  It is available as of z14.
+
+config CRYPTO_SHA3_512_S390
+	tristate "SHA3_384 and SHA3_512 digest algorithm"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  SHA3_512 secure hash standard.
+
+	  It is available as of z14.

 config CRYPTO_SM3
 	tristate
···
 	help
 	  This is the x86_64 CLMUL-NI accelerated implementation of
 	  GHASH, the hash function used in GCM (Galois/Counter mode).
+
+config CRYPTO_GHASH_S390
+	tristate "GHASH hash function"
+	depends on S390
+	select CRYPTO_HASH
+	help
+	  This is the s390 hardware accelerated implementation of GHASH,
+	  the hash function used in GCM (Galois/Counter mode).
+
+	  It is available as of z196.

 comment "Ciphers"

···
 	  timining attacks. Nevertheless it might be not as secure as other
 	  architecture specific assembler implementations that work on 1KB
 	  tables or 256 bytes S-boxes.
+
+config CRYPTO_AES_S390
+	tristate "AES cipher algorithms"
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_SKCIPHER
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  AES cipher algorithms (FIPS-197).
+
+	  As of z9 the ECB and CBC modes are hardware accelerated
+	  for 128 bit keys.
+	  As of z10 the ECB and CBC modes are hardware accelerated
+	  for all AES key sizes.
+	  As of z196 the CTR mode is hardware accelerated for all AES
+	  key sizes and XTS mode is hardware accelerated for 256 and
+	  512 bit keys.

 config CRYPTO_ANUBIS
 	tristate "Anubis cipher algorithm"
···
 	  algorithm are provided; regular processing one input block and
 	  one that processes three blocks parallel.

+config CRYPTO_DES_S390
+	tristate "DES and Triple DES cipher algorithms"
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_DES
+	help
+	  This is the s390 hardware accelerated implementation of the
+	  DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
+
+	  As of z990 the ECB and CBC mode are hardware accelerated.
+	  As of z196 the CTR mode is hardware accelerated.
+
 config CRYPTO_FCRYPT
 	tristate "FCrypt cipher algorithm"
 	select CRYPTO_ALGAPI
···
 	depends on CPU_MIPS32_R2
 	select CRYPTO_SKCIPHER
 	select CRYPTO_ARCH_HAVE_LIB_CHACHA
+
+config CRYPTO_CHACHA_S390
+	tristate "ChaCha20 stream cipher"
+	depends on S390
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_CHACHA_GENERIC
+	select CRYPTO_ARCH_HAVE_LIB_CHACHA
+	help
+	  This is the s390 SIMD implementation of the ChaCha20 stream
+	  cipher (RFC 7539).
+
+	  It is available as of z13.

 config CRYPTO_SEED
 	tristate "SEED cipher algorithm"
···
 module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444);
 MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");

+static bool __read_mostly xen_blkif_trusted = true;
+module_param_named(trusted, xen_blkif_trusted, bool, 0644);
+MODULE_PARM_DESC(trusted, "Is the backend trusted");
+
 #define BLK_RING_SIZE(info)	\
 	__CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages)
···
 	unsigned int feature_discard:1;
 	unsigned int feature_secdiscard:1;
 	unsigned int feature_persistent:1;
+	unsigned int bounce:1;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
 	/* Number of 4KB segments handled */
···
 		if (!gnt_list_entry)
 			goto out_of_memory;

-		if (info->feature_persistent) {
-			granted_page = alloc_page(GFP_NOIO);
+		if (info->bounce) {
+			granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
 			if (!granted_page) {
 				kfree(gnt_list_entry);
 				goto out_of_memory;
···
 	list_for_each_entry_safe(gnt_list_entry, n,
 				 &rinfo->grants, node) {
 		list_del(&gnt_list_entry->node);
-		if (info->feature_persistent)
+		if (info->bounce)
 			__free_page(gnt_list_entry->page);
 		kfree(gnt_list_entry);
 		i--;
···
 	/* Assign a gref to this page */
 	gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
 	BUG_ON(gnt_list_entry->gref == -ENOSPC);
-	if (info->feature_persistent)
+	if (info->bounce)
 		grant_foreign_access(gnt_list_entry, info);
 	else {
 		/* Grant access to the GFN passed by the caller */
···
 	/* Assign a gref to this page */
 	gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
 	BUG_ON(gnt_list_entry->gref == -ENOSPC);
-	if (!info->feature_persistent) {
+	if (!info->bounce) {
 		struct page *indirect_page;

 		/* Fetch a pre-allocated page to use for indirect grefs */
···
 		.grant_idx = 0,
 		.segments = NULL,
 		.rinfo = rinfo,
-		.need_copy = rq_data_dir(req) && info->feature_persistent,
+		.need_copy = rq_data_dir(req) && info->bounce,
 	};

 	/*
···
 {
 	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
 			      info->feature_fua ? true : false);
-	pr_info("blkfront: %s: %s %s %s %s %s\n",
+	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
 		info->gd->disk_name, flush_info(info),
 		"persistent grants:", info->feature_persistent ?
 		"enabled;" : "disabled;", "indirect descriptors:",
-		info->max_indirect_segments ? "enabled;" : "disabled;");
+		info->max_indirect_segments ? "enabled;" : "disabled;",
+		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
 }

 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
···
 	if (!list_empty(&rinfo->indirect_pages)) {
 		struct page *indirect_page, *n;

-		BUG_ON(info->feature_persistent);
+		BUG_ON(info->bounce);
 		list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) {
 			list_del(&indirect_page->lru);
 			__free_page(indirect_page);
···
 					     NULL);
 			rinfo->persistent_gnts_c--;
 		}
-		if (info->feature_persistent)
+		if (info->bounce)
 			__free_page(persistent_gnt->page);
 		kfree(persistent_gnt);
 	}
···
 		for (j = 0; j < segs; j++) {
 			persistent_gnt = rinfo->shadow[i].grants_used[j];
 			gnttab_end_foreign_access(persistent_gnt->gref, NULL);
-			if (info->feature_persistent)
+			if (info->bounce)
 				__free_page(persistent_gnt->page);
 			kfree(persistent_gnt);
 		}
···
 	data.s = s;
 	num_sg = s->num_sg;

-	if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
+	if (bret->operation == BLKIF_OP_READ && info->bounce) {
 		for_each_sg(s->sg, sg, num_sg, i) {
 			BUG_ON(sg->offset + sg->length > PAGE_SIZE);
···
 			 * Add the used indirect page back to the list of
 			 * available pages for indirect grefs.
 			 */
-			if (!info->feature_persistent) {
+			if (!info->bounce) {
 				indirect_page = s->indirect_grants[i]->page;
 				list_add(&indirect_page->lru, &rinfo->indirect_pages);
 			}
···

 	if (!info)
 		return -ENODEV;
+
+	/* Check if backend is trusted. */
+	info->bounce = !xen_blkif_trusted ||
+		       !xenbus_read_unsigned(dev->nodename, "trusted", 1);

 	max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
 					      "max-ring-page-order", 0);
···
 	if (err)
 		goto out_of_memory;

-	if (!info->feature_persistent && info->max_indirect_segments) {
+	if (!info->bounce && info->max_indirect_segments) {
 		/*
-		 * We are using indirect descriptors but not persistent
-		 * grants, we need to allocate a set of pages that can be
+		 * We are using indirect descriptors but don't have a bounce
+		 * buffer, we need to allocate a set of pages that can be
 		 * used for mapping indirect grefs
 		 */
 		int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info);

 		BUG_ON(!list_empty(&rinfo->indirect_pages));
 		for (i = 0; i < num; i++) {
-			struct page *indirect_page = alloc_page(GFP_KERNEL);
+			struct page *indirect_page = alloc_page(GFP_KERNEL |
+								__GFP_ZERO);
 			if (!indirect_page)
 				goto out_of_memory;
 			list_add(&indirect_page->lru, &rinfo->indirect_pages);
···
 	info->feature_persistent =
 		!!xenbus_read_unsigned(info->xbdev->otherend,
 				       "feature-persistent", 0);
+	if (info->feature_persistent)
+		info->bounce = true;

 	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
 						 "feature-max-indirect-segments", 0);
···
 {
struct blkfront_info *info;25612548 bool need_schedule_work = false;25492549+25502550+ /*25512551+ * Note that when using bounce buffers but not persistent grants25522552+ * there's no need to run blkfront_delay_work because grants are25532553+ * revoked in blkif_completion or else an error is reported and the25542554+ * connection is closed.25552555+ */2562255625632557 mutex_lock(&blkfront_mutex);25642558
···
     if (slew_done_gpio_np)
         slew_done_gpio = read_gpio(slew_done_gpio_np);
 
+    of_node_put(volt_gpio_np);
+    of_node_put(freq_gpio_np);
+    of_node_put(slew_done_gpio_np);
+
     /* If we use the frequency GPIOs, calculate the min/max speeds based
      * on the bus frequencies
      */
+6
drivers/cpufreq/qcom-cpufreq-hw.c
···
     struct platform_device *pdev = cpufreq_get_driver_data();
     int ret;
 
+    if (data->throttle_irq <= 0)
+        return 0;
+
     ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus);
     if (ret)
         dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n",
···
 
static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
{
+    if (data->throttle_irq <= 0)
+        return;
+
     free_irq(data->throttle_irq, data);
}
 
+1
drivers/cpufreq/qoriq-cpufreq.c
···
 
     np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist);
     if (np) {
+        of_node_put(np);
         dev_info(&pdev->dev, "Disabling due to erratum A-008083");
         return -ENODEV;
     }
-115
drivers/crypto/Kconfig
···
       Select this option if you want to use the paes cipher
       for example to use protected key encrypted devices.
 
-config CRYPTO_SHA1_S390
-    tristate "SHA1 digest algorithm"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of the
-      SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
-
-      It is available as of z990.
-
-config CRYPTO_SHA256_S390
-    tristate "SHA256 digest algorithm"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of the
-      SHA256 secure hash standard (DFIPS 180-2).
-
-      It is available as of z9.
-
-config CRYPTO_SHA512_S390
-    tristate "SHA384 and SHA512 digest algorithm"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of the
-      SHA512 secure hash standard.
-
-      It is available as of z10.
-
-config CRYPTO_SHA3_256_S390
-    tristate "SHA3_224 and SHA3_256 digest algorithm"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of the
-      SHA3_256 secure hash standard.
-
-      It is available as of z14.
-
-config CRYPTO_SHA3_512_S390
-    tristate "SHA3_384 and SHA3_512 digest algorithm"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of the
-      SHA3_512 secure hash standard.
-
-      It is available as of z14.
-
-config CRYPTO_DES_S390
-    tristate "DES and Triple DES cipher algorithms"
-    depends on S390
-    select CRYPTO_ALGAPI
-    select CRYPTO_SKCIPHER
-    select CRYPTO_LIB_DES
-    help
-      This is the s390 hardware accelerated implementation of the
-      DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
-
-      As of z990 the ECB and CBC mode are hardware accelerated.
-      As of z196 the CTR mode is hardware accelerated.
-
-config CRYPTO_AES_S390
-    tristate "AES cipher algorithms"
-    depends on S390
-    select CRYPTO_ALGAPI
-    select CRYPTO_SKCIPHER
-    help
-      This is the s390 hardware accelerated implementation of the
-      AES cipher algorithms (FIPS-197).
-
-      As of z9 the ECB and CBC modes are hardware accelerated
-      for 128 bit keys.
-      As of z10 the ECB and CBC modes are hardware accelerated
-      for all AES key sizes.
-      As of z196 the CTR mode is hardware accelerated for all AES
-      key sizes and XTS mode is hardware accelerated for 256 and
-      512 bit keys.
-
-config CRYPTO_CHACHA_S390
-    tristate "ChaCha20 stream cipher"
-    depends on S390
-    select CRYPTO_SKCIPHER
-    select CRYPTO_LIB_CHACHA_GENERIC
-    select CRYPTO_ARCH_HAVE_LIB_CHACHA
-    help
-      This is the s390 SIMD implementation of the ChaCha20 stream
-      cipher (RFC 7539).
-
-      It is available as of z13.
-
config S390_PRNG
     tristate "Pseudo random number generator device driver"
     depends on S390
···
       pseudo-random-number device through the char device /dev/prandom.
 
       It is available as of z9.
-
-config CRYPTO_GHASH_S390
-    tristate "GHASH hash function"
-    depends on S390
-    select CRYPTO_HASH
-    help
-      This is the s390 hardware accelerated implementation of GHASH,
-      the hash function used in GCM (Galois/Counter mode).
-
-      It is available as of z196.
-
-config CRYPTO_CRC32_S390
-    tristate "CRC-32 algorithms"
-    depends on S390
-    select CRYPTO_HASH
-    select CRC32
-    help
-      Select this option if you want to use hardware accelerated
-      implementations of CRC algorithms.  With this option, you
-      can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
-      and CRC-32C (Castagnoli).
-
-      It is available with IBM z13 or later.
 
config CRYPTO_DEV_NIAGARA2
     tristate "Niagara2 Stream Processing Unit driver"
+37-39
drivers/devfreq/devfreq.c
···
                      unsigned long *min_freq,
                      unsigned long *max_freq)
{
-    unsigned long *freq_table = devfreq->profile->freq_table;
+    unsigned long *freq_table = devfreq->freq_table;
     s32 qos_min_freq, qos_max_freq;
 
     lockdep_assert_held(&devfreq->lock);
···
      * The devfreq drivers can initialize this in either ascending or
      * descending order and devfreq core supports both.
      */
-    if (freq_table[0] < freq_table[devfreq->profile->max_state - 1]) {
+    if (freq_table[0] < freq_table[devfreq->max_state - 1]) {
         *min_freq = freq_table[0];
-        *max_freq = freq_table[devfreq->profile->max_state - 1];
+        *max_freq = freq_table[devfreq->max_state - 1];
     } else {
-        *min_freq = freq_table[devfreq->profile->max_state - 1];
+        *min_freq = freq_table[devfreq->max_state - 1];
         *max_freq = freq_table[0];
     }
 
···
{
     int lev;
 
-    for (lev = 0; lev < devfreq->profile->max_state; lev++)
-        if (freq == devfreq->profile->freq_table[lev])
+    for (lev = 0; lev < devfreq->max_state; lev++)
+        if (freq == devfreq->freq_table[lev])
             return lev;
 
     return -EINVAL;
···
 
static int set_freq_table(struct devfreq *devfreq)
{
-    struct devfreq_dev_profile *profile = devfreq->profile;
     struct dev_pm_opp *opp;
     unsigned long freq;
     int i, count;
···
     if (count <= 0)
         return -EINVAL;
 
-    profile->max_state = count;
-    profile->freq_table = devm_kcalloc(devfreq->dev.parent,
-                                       profile->max_state,
-                                       sizeof(*profile->freq_table),
-                                       GFP_KERNEL);
-    if (!profile->freq_table) {
-        profile->max_state = 0;
+    devfreq->max_state = count;
+    devfreq->freq_table = devm_kcalloc(devfreq->dev.parent,
+                                       devfreq->max_state,
+                                       sizeof(*devfreq->freq_table),
+                                       GFP_KERNEL);
+    if (!devfreq->freq_table)
         return -ENOMEM;
-    }
 
-    for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
+    for (i = 0, freq = 0; i < devfreq->max_state; i++, freq++) {
         opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq);
         if (IS_ERR(opp)) {
-            devm_kfree(devfreq->dev.parent, profile->freq_table);
-            profile->max_state = 0;
+            devm_kfree(devfreq->dev.parent, devfreq->freq_table);
             return PTR_ERR(opp);
         }
         dev_pm_opp_put(opp);
-        profile->freq_table[i] = freq;
+        devfreq->freq_table[i] = freq;
     }
 
     return 0;
···
 
     if (lev != prev_lev) {
         devfreq->stats.trans_table[
-            (prev_lev * devfreq->profile->max_state) + lev]++;
+            (prev_lev * devfreq->max_state) + lev]++;
         devfreq->stats.total_trans++;
     }
 
···
         if (err < 0)
             goto err_dev;
         mutex_lock(&devfreq->lock);
+    } else {
+        devfreq->freq_table = devfreq->profile->freq_table;
+        devfreq->max_state = devfreq->profile->max_state;
     }
 
     devfreq->scaling_min_freq = find_available_min_freq(devfreq);
···
 
     devfreq->stats.trans_table = devm_kzalloc(&devfreq->dev,
             array3_size(sizeof(unsigned int),
-                        devfreq->profile->max_state,
-                        devfreq->profile->max_state),
+                        devfreq->max_state,
+                        devfreq->max_state),
             GFP_KERNEL);
     if (!devfreq->stats.trans_table) {
         mutex_unlock(&devfreq->lock);
···
     }
 
     devfreq->stats.time_in_state = devm_kcalloc(&devfreq->dev,
-            devfreq->profile->max_state,
+            devfreq->max_state,
             sizeof(*devfreq->stats.time_in_state),
             GFP_KERNEL);
     if (!devfreq->stats.time_in_state) {
···
     err = devfreq->governor->event_handler(devfreq, DEVFREQ_GOV_START,
                                            NULL);
     if (err) {
-        dev_err(dev, "%s: Unable to start governor for the device\n",
-            __func__);
+        dev_err_probe(dev, err,
+                      "%s: Unable to start governor for the device\n",
+                      __func__);
         goto err_init;
     }
     create_sysfs_files(devfreq, devfreq->governor);
···
 
     mutex_lock(&df->lock);
 
-    for (i = 0; i < df->profile->max_state; i++)
+    for (i = 0; i < df->max_state; i++)
         count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
-                "%lu ", df->profile->freq_table[i]);
+                "%lu ", df->freq_table[i]);
 
     mutex_unlock(&df->lock);
     /* Truncate the trailing space */
···
 
     if (!df->profile)
         return -EINVAL;
-    max_state = df->profile->max_state;
+    max_state = df->max_state;
 
     if (max_state == 0)
         return sprintf(buf, "Not Supported.\n");
···
     len += sprintf(buf + len, " :");
     for (i = 0; i < max_state; i++)
         len += sprintf(buf + len, "%10lu",
-                df->profile->freq_table[i]);
+                df->freq_table[i]);
 
     len += sprintf(buf + len, " time(ms)\n");
 
     for (i = 0; i < max_state; i++) {
-        if (df->profile->freq_table[i]
-                    == df->previous_freq) {
+        if (df->freq_table[i] == df->previous_freq)
             len += sprintf(buf + len, "*");
-        } else {
+        else
             len += sprintf(buf + len, " ");
-        }
-        len += sprintf(buf + len, "%10lu:",
-                df->profile->freq_table[i]);
+
+        len += sprintf(buf + len, "%10lu:", df->freq_table[i]);
         for (j = 0; j < max_state; j++)
             len += sprintf(buf + len, "%10u",
                 df->stats.trans_table[(i * max_state) + j]);
···
     if (!df->profile)
         return -EINVAL;
 
-    if (df->profile->max_state == 0)
+    if (df->max_state == 0)
         return count;
 
     err = kstrtoint(buf, 10, &value);
···
         return -EINVAL;
 
     mutex_lock(&df->lock);
-    memset(df->stats.time_in_state, 0, (df->profile->max_state *
+    memset(df->stats.time_in_state, 0, (df->max_state *
                     sizeof(*df->stats.time_in_state)));
     memset(df->stats.trans_table, 0, array3_size(sizeof(unsigned int),
-                    df->profile->max_state,
-                    df->profile->max_state));
+                    df->max_state,
+                    df->max_state));
     df->stats.total_trans = 0;
     df->stats.last_update = get_jiffies_64();
     mutex_unlock(&df->lock);
···
 * @max_resources: Maximum acceptable number of items, configured by the caller
 *                 depending on the underlying resources that it is querying.
 * @loop_idx: The iterator loop index in the current multi-part reply.
+ * @rx_len: Size in bytes of the currently processed message; it can be used by
+ *          the user of the iterator to verify a reply size.
 * @priv: Optional pointer to some additional state-related private data setup
 *        by the caller during the iterations.
 */
···
     unsigned int num_remaining;
     unsigned int max_resources;
     unsigned int loop_idx;
+    size_t rx_len;
     void *priv;
};
 
+50-8
drivers/firmware/sysfb.c
···
#include <linux/screen_info.h>
#include <linux/sysfb.h>
 
+static struct platform_device *pd;
+static DEFINE_MUTEX(disable_lock);
+static bool disabled;
+
+static bool sysfb_unregister(void)
+{
+    if (IS_ERR_OR_NULL(pd))
+        return false;
+
+    platform_device_unregister(pd);
+    pd = NULL;
+
+    return true;
+}
+
+/**
+ * sysfb_disable() - disable the Generic System Framebuffers support
+ *
+ * This disables the registration of system framebuffer devices that match the
+ * generic drivers that make use of the system framebuffer set up by firmware.
+ *
+ * It also unregisters a device if this was already registered by sysfb_init().
+ *
+ * Context: The function can sleep. A @disable_lock mutex is acquired to serialize
+ * against sysfb_init(), that registers a system framebuffer device.
+ */
+void sysfb_disable(void)
+{
+    mutex_lock(&disable_lock);
+    sysfb_unregister();
+    disabled = true;
+    mutex_unlock(&disable_lock);
+}
+EXPORT_SYMBOL_GPL(sysfb_disable);
+
static __init int sysfb_init(void)
{
     struct screen_info *si = &screen_info;
     struct simplefb_platform_data mode;
-    struct platform_device *pd;
     const char *name;
     bool compatible;
-    int ret;
+    int ret = 0;
+
+    mutex_lock(&disable_lock);
+    if (disabled)
+        goto unlock_mutex;
 
     /* try to create a simple-framebuffer device */
     compatible = sysfb_parse_mode(si, &mode);
     if (compatible) {
-        ret = sysfb_create_simplefb(si, &mode);
-        if (!ret)
-            return 0;
+        pd = sysfb_create_simplefb(si, &mode);
+        if (!IS_ERR(pd))
+            goto unlock_mutex;
     }
 
     /* if the FB is incompatible, create a legacy framebuffer device */
···
         name = "platform-framebuffer";
 
     pd = platform_device_alloc(name, 0);
-    if (!pd)
-        return -ENOMEM;
+    if (!pd) {
+        ret = -ENOMEM;
+        goto unlock_mutex;
+    }
 
     sysfb_apply_efi_quirks(pd);
 
···
     if (ret)
         goto err;
 
-    return 0;
+    goto unlock_mutex;
err:
     platform_device_put(pd);
+unlock_mutex:
+    mutex_unlock(&disable_lock);
     return ret;
}
 
···
      */
     amdgpu_unregister_gpu_instance(tmp_adev);
 
-    drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true);
+    drm_fb_helper_set_suspend_unlocked(adev_to_drm(tmp_adev)->fb_helper, true);
 
     /* disable ras on ALL IPs */
     if (!need_emergency_restart &&
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
···
     if (!amdgpu_device_has_dc_support(adev)) {
         if (!adev->enable_virtual_display)
             /* Disable vblank IRQs aggressively for power-saving */
+            /* XXX: can this be enabled for DC? */
             adev_to_drm(adev)->vblank_disable_immediate = true;
 
         r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
-3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
         }
     }
 
-    /* Disable vblank IRQs aggressively for power-saving. */
-    adev_to_drm(adev)->vblank_disable_immediate = true;
-
     /* loops over all connectors on the board */
     for (i = 0; i < link_cnt; i++) {
         struct dc_link *link = NULL;
+3-2
drivers/gpu/drm/i915/gem/i915_gem_context.c
···
     case I915_CONTEXT_PARAM_PERSISTENCE:
         if (args->size)
             ret = -EINVAL;
-        ret = proto_context_set_persistence(fpriv->dev_priv, pc,
-                                            args->value);
+        else
+            ret = proto_context_set_persistence(fpriv->dev_priv, pc,
+                                                args->value);
         break;
 
     case I915_CONTEXT_PARAM_PROTECTED_CONTENT:
+3-3
drivers/gpu/drm/i915/gem/i915_gem_domain.c
···
     if (obj->cache_dirty)
         return false;
 
-    if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
-        return true;
-
     if (IS_DGFX(i915))
         return false;
+
+    if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE))
+        return true;
 
     /* Currently in use by HW (display engine)? Keep flushed. */
     return i915_gem_object_is_framebuffer(obj);
+15-19
drivers/gpu/drm/i915/i915_driver.c
···
static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
{
     struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+    struct pci_dev *root_pdev;
     int ret;
 
     if (i915_inject_probe_failure(dev_priv))
···
 
     intel_bw_init_hw(dev_priv);
 
+    /*
+     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
+     * This should be totally removed when we handle the pci states properly
+     * on runtime PM and on s2idle cases.
+     */
+    root_pdev = pcie_find_root_port(pdev);
+    if (root_pdev)
+        pci_d3cold_disable(root_pdev);
+
     return 0;
 
err_msi:
···
static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
{
     struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
+    struct pci_dev *root_pdev;
 
     i915_perf_fini(dev_priv);
 
     if (pdev->msi_enabled)
         pci_disable_msi(pdev);
+
+    root_pdev = pcie_find_root_port(pdev);
+    if (root_pdev)
+        pci_d3cold_enable(root_pdev);
}
 
/**
···
         goto out;
     }
 
-    /*
-     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-     * This should be totally removed when we handle the pci states properly
-     * on runtime PM and on s2idle cases.
-     */
-    if (suspend_to_idle(dev_priv))
-        pci_d3cold_disable(pdev);
-
     pci_disable_device(pdev);
     /*
      * During hibernation on some platforms the BIOS may try to access
···
         return -EIO;
 
     pci_set_master(pdev);
-
-    pci_d3cold_enable(pdev);
 
     disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
···
{
     struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
     struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-    struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
     int ret;
 
     if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
···
         drm_err(&dev_priv->drm,
             "Unclaimed access detected prior to suspending\n");
 
-    /*
-     * FIXME: Temporary hammer to avoid freezing the machine on our DGFX
-     * This should be totally removed when we handle the pci states properly
-     * on runtime PM and on s2idle cases.
-     */
-    pci_d3cold_disable(pdev);
     rpm->suspended = true;
 
     /*
···
{
     struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
     struct intel_runtime_pm *rpm = &dev_priv->runtime_pm;
-    struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
     int ret;
 
     if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv)))
···
 
     intel_opregion_notify_adapter(dev_priv, PCI_D0);
     rpm->suspended = false;
-    pci_d3cold_enable(pdev);
     if (intel_uncore_unclaimed_mmio(&dev_priv->uncore))
         drm_dbg(&dev_priv->drm,
             "Unclaimed access during suspend, bios?\n");
···
      * This only affects the READ_IOUT and READ_TEMPERATURE2 registers.
      * READ_IOUT will return the sum of currents of all phases of a rail,
      * and READ_TEMPERATURE2 will return the maximum temperature detected
-     * for the the phases of the rail.
+     * for the phases of the rail.
      */
     for (i = 0; i < info->pages; i++) {
         /*
···
static int validate_raid_redundancy(struct raid_set *rs)
{
     unsigned int i, rebuild_cnt = 0;
-    unsigned int rebuilds_per_group = 0, copies;
+    unsigned int rebuilds_per_group = 0, copies, raid_disks;
     unsigned int group_size, last_group_start;
 
-    for (i = 0; i < rs->md.raid_disks; i++)
-        if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
-            !rs->dev[i].rdev.sb_page)
+    for (i = 0; i < rs->raid_disks; i++)
+        if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) &&
+            ((!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+              !rs->dev[i].rdev.sb_page)))
             rebuild_cnt++;
 
     switch (rs->md.level) {
···
          *          A    A    B    B    C
          *          C    D    D    E    E
          */
+        raid_disks = min(rs->raid_disks, rs->md.raid_disks);
         if (__is_raid10_near(rs->md.new_layout)) {
-            for (i = 0; i < rs->md.raid_disks; i++) {
+            for (i = 0; i < raid_disks; i++) {
                 if (!(i % copies))
                     rebuilds_per_group = 0;
                 if ((!rs->dev[i].rdev.sb_page ||
···
          * results in the need to treat the last (potentially larger)
          * set differently.
          */
-        group_size = (rs->md.raid_disks / copies);
-        last_group_start = (rs->md.raid_disks / group_size) - 1;
+        group_size = (raid_disks / copies);
+        last_group_start = (raid_disks / group_size) - 1;
         last_group_start *= group_size;
-        for (i = 0; i < rs->md.raid_disks; i++) {
+        for (i = 0; i < raid_disks; i++) {
             if (!(i % copies) && !(i > last_group_start))
                 rebuilds_per_group = 0;
             if ((!rs->dev[i].rdev.sb_page ||
···
{
     int i;
 
-    for (i = 0; i < rs->md.raid_disks; i++) {
+    for (i = 0; i < rs->raid_disks; i++) {
         struct md_rdev *rdev = &rs->dev[i].rdev;
 
         if (!test_bit(Journal, &rdev->flags) &&
···
     unsigned int i;
     int r = 0;
 
-    for (i = 0; !r && i < rs->md.raid_disks; i++)
-        if (rs->dev[i].data_dev)
-            r = fn(ti,
-                   rs->dev[i].data_dev,
-                   0, /* No offset on data devs */
-                   rs->md.dev_sectors,
-                   data);
+    for (i = 0; !r && i < rs->raid_disks; i++) {
+        if (rs->dev[i].data_dev) {
+            r = fn(ti, rs->dev[i].data_dev,
+                   0, /* No offset on data devs */
+                   rs->md.dev_sectors, data);
+        }
+    }
 
     return r;
}
+5-1
drivers/md/raid5.c
···
     int err = 0;
     int number = rdev->raid_disk;
     struct md_rdev __rcu **rdevp;
-    struct disk_info *p = conf->disks + number;
+    struct disk_info *p;
     struct md_rdev *tmp;
 
     print_raid5_conf(conf);
···
         log_exit(conf);
         return 0;
     }
+    if (unlikely(number >= conf->pool_size))
+        return 0;
+    p = conf->disks + number;
     if (rdev == rcu_access_pointer(p->rdev))
         rdevp = &p->rdev;
     else if (rdev == rcu_access_pointer(p->replacement))
···
      */
     if (rdev->saved_raid_disk >= 0 &&
         rdev->saved_raid_disk >= first &&
+        rdev->saved_raid_disk <= last &&
         conf->disks[rdev->saved_raid_disk].rdev == NULL)
         first = rdev->saved_raid_disk;
 
+1
drivers/net/Kconfig
···
     select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
     select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
     select CRYPTO_POLY1305_MIPS if MIPS
+    select CRYPTO_CHACHA_S390 if S390
     help
       WireGuard is a secure, fast, and easy to use replacement for IPSec
       that uses modern cryptography and clever networking tricks. It's
···
          * register. It increments once per SYS clock tick,
          * which is 20 or 40 MHz.
          *
-         * Observation shows that if the lowest byte (which is
-         * transferred first on the SPI bus) of that register
-         * is 0x00 or 0x80 the calculated CRC doesn't always
-         * match the transferred one.
+         * Observation on the mcp2518fd shows that if the
+         * lowest byte (which is transferred first on the SPI
+         * bus) of that register is 0x00 or 0x80 the
+         * calculated CRC doesn't always match the transferred
+         * one. On the mcp2517fd this problem is not limited
+         * to the first byte being 0x00 or 0x80.
          *
          * If the highest bit in the lowest byte is flipped
          * the transferred CRC matches the calculated one. We
-         * assume for now the CRC calculation in the chip
-         * works on wrong data and the transferred data is
-         * correct.
+         * assume for now the CRC operates on the correct
+         * data.
          */
         if (reg == MCP251XFD_REG_TBC &&
-            (buf_rx->data[0] == 0x0 || buf_rx->data[0] == 0x80)) {
+            ((buf_rx->data[0] & 0xf8) == 0x0 ||
+             (buf_rx->data[0] & 0xf8) == 0x80)) {
             /* Flip highest bit in lowest byte of le32 */
             buf_rx->data[0] ^= 0x80;
 
···
                            val_len);
             if (!err) {
                 /* If CRC is now correct, assume
-                 * transferred data was OK, flip bit
-                 * back to original value.
+                 * flipped data is OK.
                  */
-                buf_rx->data[0] ^= 0x80;
                 goto out;
             }
         }
+21-2
drivers/net/can/usb/gs_usb.c
···
 
     struct usb_anchor tx_submitted;
     atomic_t active_tx_urbs;
+    void *rxbuf[GS_MAX_RX_URBS];
+    dma_addr_t rxbuf_dma[GS_MAX_RX_URBS];
};
 
/* usb interface struct */
···
     for (i = 0; i < GS_MAX_RX_URBS; i++) {
         struct urb *urb;
         u8 *buf;
+        dma_addr_t buf_dma;
 
         /* alloc rx urb */
         urb = usb_alloc_urb(0, GFP_KERNEL);
···
         buf = usb_alloc_coherent(dev->udev,
                                  dev->parent->hf_size_rx,
                                  GFP_KERNEL,
-                                 &urb->transfer_dma);
+                                 &buf_dma);
         if (!buf) {
             netdev_err(netdev,
                        "No memory left for USB buffer\n");
             usb_free_urb(urb);
             return -ENOMEM;
         }
+
+        urb->transfer_dma = buf_dma;
 
         /* fill, anchor, and submit rx urb */
         usb_fill_bulk_urb(urb,
···
                        "usb_submit failed (err=%d)\n", rc);
 
             usb_unanchor_urb(urb);
+            usb_free_coherent(dev->udev,
+                              sizeof(struct gs_host_frame),
+                              buf,
+                              buf_dma);
             usb_free_urb(urb);
             break;
         }
+
+        dev->rxbuf[i] = buf;
+        dev->rxbuf_dma[i] = buf_dma;
 
         /* Drop reference,
          * USB core will take care of freeing it
···
     int rc;
     struct gs_can *dev = netdev_priv(netdev);
     struct gs_usb *parent = dev->parent;
+    unsigned int i;
 
     netif_stop_queue(netdev);
 
     /* Stop polling */
     parent->active_channels--;
-    if (!parent->active_channels)
+    if (!parent->active_channels) {
         usb_kill_anchored_urbs(&parent->rx_submitted);
+        for (i = 0; i < GS_MAX_RX_URBS; i++)
+            usb_free_coherent(dev->udev,
+                              sizeof(struct gs_host_frame),
+                              dev->rxbuf[i],
+                              dev->rxbuf_dma[i]);
+    }
 
     /* Stop sending URBs */
     usb_kill_anchored_urbs(&dev->tx_submitted);
···
         release_sub_crqs(adapter, 0);
         rc = init_sub_crqs(adapter);
     } else {
+        /* no need to reinitialize completely, but we do
+         * need to clean up transmits that were in flight
+         * when we processed the reset.  Failure to do so
+         * will confound the upper layer, usually TCP, by
+         * creating the illusion of transmits that are
+         * awaiting completion.
+         */
+        clean_tx_pools(adapter);
+
         rc = reset_sub_crq_queues(adapter);
     }
} else {
+16
drivers/net/ethernet/intel/i40e/i40e.h
···
#include <net/tc_act/tc_mirred.h>
#include <net/udp_tunnel.h>
#include <net/xdp_sock.h>
+#include <linux/bitfield.h>
#include "i40e_type.h"
#include "i40e_prototype.h"
#include <linux/net/intel/i40e_client.h>
···
               (u32)(val >> 32));
     i40e_write_rx_ctl(&pf->hw, I40E_PRTQF_FD_INSET(addr, 0),
               (u32)(val & 0xFFFFFFFFULL));
+}
+
+/**
+ * i40e_get_pf_count - get PCI PF count.
+ * @hw: pointer to a hw.
+ *
+ * Reports the function number of the highest PCI physical
+ * function plus 1 as it is loaded from the NVM.
+ *
+ * Return: PCI PF count.
+ **/
+static inline u32 i40e_get_pf_count(struct i40e_hw *hw)
+{
+    return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK,
+                     rd32(hw, I40E_GLGEN_PCIFCNCNT));
}
 
/* needed by i40e_ethtool.c */
+73
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 }
 
 /**
+ * i40e_compute_pci_to_hw_id - compute index from PCI function.
+ * @vsi: ptr to the VSI to read from.
+ * @hw: ptr to the hardware info.
+ **/
+static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw)
+{
+	int pf_count = i40e_get_pf_count(hw);
+
+	if (vsi->type == I40E_VSI_SRIOV)
+		return (hw->port * BIT(7)) / pf_count + vsi->vf_id;
+
+	return hw->port + BIT(7);
+}
+
+/**
+ * i40e_stat_update64 - read and update a 64 bit stat from the chip.
+ * @hw: ptr to the hardware info.
+ * @hireg: the high 32 bit reg to read.
+ * @loreg: the low 32 bit reg to read.
+ * @offset_loaded: has the initial offset been loaded yet.
+ * @offset: ptr to current offset value.
+ * @stat: ptr to the stat.
+ *
+ * Since the device stats are not reset at PFReset, they will not
+ * be zeroed when the driver starts. We'll save the first values read
+ * and use them as offsets to be subtracted from the raw values in order
+ * to report stats that count from zero.
+ **/
+static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg,
+			       bool offset_loaded, u64 *offset, u64 *stat)
+{
+	u64 new_data;
+
+	new_data = rd64(hw, loreg);
+
+	if (!offset_loaded || new_data < *offset)
+		*offset = new_data;
+	*stat = new_data - *offset;
+}
+
+/**
  * i40e_stat_update48 - read and update a 48 bit stat from the chip
  * @hw: ptr to the hardware info
  * @hireg: the high 32 bit reg to read
···
 }
 
 /**
+ * i40e_stats_update_rx_discards - update rx_discards.
+ * @vsi: ptr to the VSI to be updated.
+ * @hw: ptr to the hardware info.
+ * @stat_idx: VSI's stat_counter_idx.
+ * @offset_loaded: ptr to the VSI's stat_offsets_loaded.
+ * @stat_offset: ptr to stat_offset to store first read of specific register.
+ * @stat: ptr to VSI's stat to be updated.
+ **/
+static void
+i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
+			      int stat_idx, bool offset_loaded,
+			      struct i40e_eth_stats *stat_offset,
+			      struct i40e_eth_stats *stat)
+{
+	u64 rx_rdpc, rx_rxerr;
+
+	i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
+			   &stat_offset->rx_discards, &rx_rdpc);
+	i40e_stat_update64(hw,
+			   I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
+			   I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
+			   offset_loaded, &stat_offset->rx_discards_other,
+			   &rx_rxerr);
+
+	stat->rx_discards = rx_rdpc + rx_rxerr;
+}
+
+/**
  * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters.
  * @vsi: the VSI to be updated
  **/
···
 			   I40E_GLV_BPTCL(stat_idx),
 			   vsi->stat_offsets_loaded,
 			   &oes->tx_broadcast, &es->tx_broadcast);
+
+	i40e_stats_update_rx_discards(vsi, hw, stat_idx,
+				      vsi->stat_offsets_loaded, oes, es);
+
 	vsi->stat_offsets_loaded = true;
 }
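The kdoc above explains the offset trick in `i40e_stat_update64()`: hardware counters survive a PF reset, so the driver baselines against the first raw reading and reports deltas, re-baselining whenever the raw value falls below the stored offset. The core of that logic is self-contained and can be exercised directly:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the i40e_stat_update64() core: remember the first raw
 * counter reading as an offset and report the delta from it. If the
 * raw value drops below the offset (counter reset or wrap), take the
 * new reading as a fresh baseline. */
static void stat_update64(uint64_t new_data, bool offset_loaded,
			  uint64_t *offset, uint64_t *stat)
{
	if (!offset_loaded || new_data < *offset)
		*offset = new_data;
	*stat = new_data - *offset;
}
```

The same pattern underlies the driver's 32- and 48-bit variants; only the register read width differs.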
···
 		/* VFs only use TC 0 */
 		vfres->vsi_res[0].qset_handle
 				  = le16_to_cpu(vsi->info.qs_handle[0]);
+		if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) {
+			i40e_del_mac_filter(vsi, vf->default_lan_addr.addr);
+			eth_zero_addr(vf->default_lan_addr.addr);
+		}
 		ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
 				vf->default_lan_addr.addr);
 	}
+6-7
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
 		return -EOPNOTSUPP;
 	}
 
-	if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
-	    act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Offload not supported when conform action is not pipe or ok");
-		return -EOPNOTSUPP;
-	}
-
 	if (act->police.notexceed.act_id == FLOW_ACTION_ACCEPT &&
 	    !flow_action_is_last_entry(action, act)) {
 		NL_SET_ERR_MSG_MOD(extack,
···
 	flow_action_for_each(i, act, flow_action) {
 		switch (act->id) {
 		case FLOW_ACTION_POLICE:
+			if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "Offload not supported when conform action is not continue");
+				return -EOPNOTSUPP;
+			}
+
 			err = mlx5e_policer_validate(flow_action, act, extack);
 			if (err)
 				return err;
···
 	struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev);
 	int i;
 
+	pin -= pctl->desc->pin_base;
+
 	for (i = 0; i < num_configs; i++) {
 		enum pin_config_param param;
 		unsigned long flags;
+1-1
drivers/s390/char/sclp.c
···
 /* List of queued requests. */
 static LIST_HEAD(sclp_req_queue);
 
-/* Data for read and and init requests. */
+/* Data for read and init requests. */
 static struct sclp_req sclp_read_req;
 static struct sclp_req sclp_init_req;
 static void *sclp_read_sccb;
+7
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
···
 	struct hisi_hba *hisi_hba = shost_priv(shost);
 	struct device *dev = hisi_hba->dev;
 	int ret = sas_slave_configure(sdev);
+	unsigned int max_sectors;
 
 	if (ret)
 		return ret;
···
 			pm_runtime_disable(dev);
 		}
 	}
+
+	/* Set according to IOMMU IOVA caching limit */
+	max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
+			    (PAGE_SIZE * 32) >> SECTOR_SHIFT);
+
+	blk_queue_max_hw_sectors(sdev->request_queue, max_sectors);
 
 	return 0;
 }
···
 #include <linux/kernel.h>
 #include <linux/major.h>
 #include <linux/slab.h>
+#include <linux/sysfb.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
 #include <linux/vt.h>
···
 		a->ranges[0].size = ~0;
 		do_free = true;
 	}
+
+	/*
+	 * If a driver asked to unregister a platform device registered by
+	 * sysfb, then it can be assumed that this is a driver for a display
+	 * that is set up by the system firmware and has a generic driver.
+	 *
+	 * Drivers for devices that don't have a generic driver will never
+	 * ask for this, so let's assume that a real driver for the display
+	 * was already probed and prevent sysfb from registering devices later.
+	 */
+	sysfb_disable();
 
 	mutex_lock(&registration_lock);
 	do_remove_conflicting_framebuffers(a, name, primary);
···
 STATIC int xfs_attr_leaf_get(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args);
 STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp);
-STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args, struct xfs_buf *bp);
+STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args);
 
 /*
  * Internal routines when attribute list is more than one block.
···
 	 * It won't fit in the shortform, transform to a leaf block. GROT:
 	 * another possible req'mt for a double-split btree op.
 	 */
-	error = xfs_attr_shortform_to_leaf(args, &attr->xattri_leaf_bp);
+	error = xfs_attr_shortform_to_leaf(args);
 	if (error)
 		return error;
 
-	/*
-	 * Prevent the leaf buffer from being unlocked so that a concurrent AIL
-	 * push cannot grab the half-baked leaf buffer and run into problems
-	 * with the write verifier.
-	 */
-	xfs_trans_bhold(args->trans, attr->xattri_leaf_bp);
 	attr->xattri_dela_state = XFS_DAS_LEAF_ADD;
 out:
 	trace_xfs_attr_sf_addname_return(attr->xattri_dela_state, args->dp);
···
 
 	/*
 	 * Use the leaf buffer we may already hold locked as a result of
-	 * a sf-to-leaf conversion. The held buffer is no longer valid
-	 * after this call, regardless of the result.
+	 * a sf-to-leaf conversion.
 	 */
-	error = xfs_attr_leaf_try_add(args, attr->xattri_leaf_bp);
-	attr->xattri_leaf_bp = NULL;
+	error = xfs_attr_leaf_try_add(args);
 
 	if (error == -ENOSPC) {
 		error = xfs_attr3_leaf_to_node(args);
···
 {
 	struct xfs_da_args	*args = attr->xattri_da_args;
 	int			error;
-
-	ASSERT(!attr->xattri_leaf_bp);
 
 	error = xfs_attr_node_addname_find_attr(attr);
 	if (error)
···
  */
 STATIC int
 xfs_attr_leaf_try_add(
-	struct xfs_da_args	*args,
-	struct xfs_buf		*bp)
+	struct xfs_da_args	*args)
 {
+	struct xfs_buf		*bp;
 	int			error;
 
-	/*
-	 * If the caller provided a buffer to us, it is locked and held in
-	 * the transaction because it just did a shortform to leaf conversion.
-	 * Hence we don't need to read it again. Otherwise read in the leaf
-	 * buffer.
-	 */
-	if (bp) {
-		xfs_trans_bhold_release(args->trans, bp);
-	} else {
-		error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
-		if (error)
-			return error;
-	}
+	error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
+	if (error)
+		return error;
 
 	/*
 	 * Look up the xattr name to set the insertion point for the new xattr.
-5
fs/xfs/libxfs/xfs_attr.h
···
 	 */
 	struct xfs_attri_log_nameval	*xattri_nameval;
 
-	/*
-	 * Used by xfs_attr_set to hold a leaf buffer across a transaction roll
-	 */
-	struct xfs_buf			*xattri_leaf_bp;
-
 	/* Used to keep track of current state of delayed operation */
 	enum xfs_delattr_state		xattri_dela_state;
+19-16
fs/xfs/libxfs/xfs_attr_leaf.c
···
 	return NULL;
 }
 
+/*
+ * Validate an attribute leaf block.
+ *
+ * Empty leaf blocks can occur under the following circumstances:
+ *
+ * 1. setxattr adds a new extended attribute to a file;
+ * 2. The file has zero existing attributes;
+ * 3. The attribute is too large to fit in the attribute fork;
+ * 4. The attribute is small enough to fit in a leaf block;
+ * 5. A log flush occurs after committing the transaction that creates
+ *    the (empty) leaf block; and
+ * 6. The filesystem goes down after the log flush but before the new
+ *    attribute can be committed to the leaf block.
+ *
+ * Hence we need to ensure that we don't fail the validation purely
+ * because the leaf is empty.
+ */
 static xfs_failaddr_t
 xfs_attr3_leaf_verify(
 	struct xfs_buf			*bp)
···
 	fa = xfs_da3_blkinfo_verify(bp, bp->b_addr);
 	if (fa)
 		return fa;
-
-	/*
-	 * Empty leaf blocks should never occur; they imply the existence of a
-	 * software bug that needs fixing. xfs_repair also flags them as a
-	 * corruption that needs fixing, so we should never let these go to
-	 * disk.
-	 */
-	if (ichdr.count == 0)
-		return __this_address;
 
 	/*
 	 * firstused is the block offset of the first name info structure.
···
 	return -ENOATTR;
 }
 
-/*
- * Convert from using the shortform to the leaf. On success, return the
- * buffer so that we can keep it locked until we're totally done with it.
- */
+/* Convert from using the shortform to the leaf format. */
 int
 xfs_attr_shortform_to_leaf(
-	struct xfs_da_args		*args,
-	struct xfs_buf			**leaf_bp)
+	struct xfs_da_args		*args)
 {
 	struct xfs_inode		*dp;
 	struct xfs_attr_shortform	*sf;
···
 		sfe = xfs_attr_sf_nextentry(sfe);
 	}
 	error = 0;
-	*leaf_bp = bp;
 out:
 	kmem_free(tmpbuffer);
 	return error;
···
 	struct xfs_trans_res		tres;
 	struct xfs_attri_log_format	*attrp;
 	struct xfs_attri_log_nameval	*nv = attrip->attri_nameval;
-	int				error, ret = 0;
+	int				error;
 	int				total;
 	int				local;
 	struct xfs_attrd_log_item	*done_item = NULL;
···
 	xfs_ilock(ip, XFS_ILOCK_EXCL);
 	xfs_trans_ijoin(tp, ip, 0);
 
-	ret = xfs_xattri_finish_update(attr, done_item);
-	if (ret == -EAGAIN) {
-		/* There's more work to do, so add it to this transaction */
+	error = xfs_xattri_finish_update(attr, done_item);
+	if (error == -EAGAIN) {
+		/*
+		 * There's more work to do, so add the intent item to this
+		 * transaction so that we can continue it later.
+		 */
 		xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_ATTR, &attr->xattri_list);
-	} else
-		error = ret;
+		error = xfs_defer_ops_capture_and_commit(tp, capture_list);
+		if (error)
+			goto out_unlock;
 
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+		xfs_irele(ip);
+		return 0;
+	}
 	if (error) {
 		xfs_trans_cancel(tp);
 		goto out_unlock;
 	}
 
 	error = xfs_defer_ops_capture_and_commit(tp, capture_list);
-
 out_unlock:
-	if (attr->xattri_leaf_bp)
-		xfs_buf_relse(attr->xattri_leaf_bp);
-
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 	xfs_irele(ip);
 out:
-	if (ret != -EAGAIN)
-		xfs_attr_free_item(attr);
+	xfs_attr_free_item(attr);
 	return error;
 }
···
 	for_each_online_cpu(cpu) {
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
 		if (!llist_empty(&gc->list))
-			queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+			mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
 	}
 }
···
 xfs_inodegc_worker(
 	struct work_struct	*work)
 {
-	struct xfs_inodegc	*gc = container_of(work, struct xfs_inodegc,
-							work);
+	struct xfs_inodegc	*gc = container_of(to_delayed_work(work),
+						struct xfs_inodegc, work);
 	struct llist_node	*node = llist_del_all(&gc->list);
 	struct xfs_inode	*ip, *n;
···
 }
 
 /*
+ * Expedite all pending inodegc work to run immediately. This does not wait for
+ * completion of the work.
+ */
+void
+xfs_inodegc_push(
+	struct xfs_mount	*mp)
+{
+	if (!xfs_is_inodegc_enabled(mp))
+		return;
+	trace_xfs_inodegc_push(mp, __return_address);
+	xfs_inodegc_queue_all(mp);
+}
+
+/*
  * Force all currently queued inode inactivation work to run immediately and
  * wait for the work to finish.
  */
···
 xfs_inodegc_flush(
 	struct xfs_mount	*mp)
 {
-	if (!xfs_is_inodegc_enabled(mp))
-		return;
-
+	xfs_inodegc_push(mp);
 	trace_xfs_inodegc_flush(mp, __return_address);
-
-	xfs_inodegc_queue_all(mp);
 	flush_workqueue(mp->m_inodegc_wq);
 }
···
 	struct xfs_inodegc	*gc;
 	int			items;
 	unsigned int		shrinker_hits;
+	unsigned long		queue_delay = 1;
 
 	trace_xfs_inode_set_need_inactive(ip);
 	spin_lock(&ip->i_flags_lock);
···
 	items = READ_ONCE(gc->items);
 	WRITE_ONCE(gc->items, items + 1);
 	shrinker_hits = READ_ONCE(gc->shrinker_hits);
-	put_cpu_ptr(gc);
 
-	if (!xfs_is_inodegc_enabled(mp))
+	/*
+	 * We queue the work while holding the current CPU so that the work
+	 * is scheduled to run on this CPU.
+	 */
+	if (!xfs_is_inodegc_enabled(mp)) {
+		put_cpu_ptr(gc);
 		return;
-
-	if (xfs_inodegc_want_queue_work(ip, items)) {
-		trace_xfs_inodegc_queue(mp, __return_address);
-		queue_work(mp->m_inodegc_wq, &gc->work);
 	}
+
+	if (xfs_inodegc_want_queue_work(ip, items))
+		queue_delay = 0;
+
+	trace_xfs_inodegc_queue(mp, __return_address);
+	mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay);
+	put_cpu_ptr(gc);
 
 	if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) {
 		trace_xfs_inodegc_throttle(mp, __return_address);
-		flush_work(&gc->work);
+		flush_delayed_work(&gc->work);
 	}
 }
···
 	unsigned int		count = 0;
 
 	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
-	cancel_work_sync(&dead_gc->work);
+	cancel_delayed_work_sync(&dead_gc->work);
 
 	if (llist_empty(&dead_gc->list))
 		return;
···
 	llist_add_batch(first, last, &gc->list);
 	count += READ_ONCE(gc->items);
 	WRITE_ONCE(gc->items, count);
-	put_cpu_ptr(gc);
 
 	if (xfs_is_inodegc_enabled(mp)) {
 		trace_xfs_inodegc_queue(mp, __return_address);
-		queue_work(mp->m_inodegc_wq, &gc->work);
+		mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0);
 	}
+	put_cpu_ptr(gc);
 }
 
 /*
···
 		unsigned int	h = READ_ONCE(gc->shrinker_hits);
 
 		WRITE_ONCE(gc->shrinker_hits, h + 1);
-		queue_work_on(cpu, mp->m_inodegc_wq, &gc->work);
+		mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
 		no_items = false;
 	}
 }
···
 }
 
 /*
+ * You can't set both SHARED and EXCL for the same lock,
+ * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_MMAPLOCK_SHARED,
+ * XFS_MMAPLOCK_EXCL, XFS_ILOCK_SHARED, XFS_ILOCK_EXCL are valid values
+ * to set in lock_flags.
+ */
+static inline void
+xfs_lock_flags_assert(
+	uint		lock_flags)
+{
+	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
+		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
+	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
+		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
+	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
+		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
+	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	ASSERT(lock_flags != 0);
+}
+
+/*
  * In addition to i_rwsem in the VFS inode, the xfs inode contains 2
  * multi-reader locks: invalidate_lock and the i_lock. This routine allows
  * various combinations of the locks to be obtained.
···
 {
 	trace_xfs_ilock(ip, lock_flags, _RET_IP_);
 
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL) {
 		down_write_nested(&VFS_I(ip)->i_rwsem,
···
 {
 	trace_xfs_ilock_nowait(ip, lock_flags, _RET_IP_);
 
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL) {
 		if (!down_write_trylock(&VFS_I(ip)->i_rwsem))
···
 	xfs_inode_t		*ip,
 	uint			lock_flags)
 {
-	/*
-	 * You can't set both SHARED and EXCL for the same lock,
-	 * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED,
-	 * and XFS_ILOCK_EXCL are valid values to set in lock_flags.
-	 */
-	ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) !=
-		(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) !=
-		(XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL));
-	ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
-		(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
-	ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0);
-	ASSERT(lock_flags != 0);
+	xfs_lock_flags_assert(lock_flags);
 
 	if (lock_flags & XFS_IOLOCK_EXCL)
 		up_write(&VFS_I(ip)->i_rwsem);
···
 	}
 
 	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
-		return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem,
-				(lock_flags & XFS_IOLOCK_SHARED));
+		return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
+				(lock_flags & XFS_MMAPLOCK_SHARED));
 	}
 
 	if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
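The consolidated `xfs_lock_flags_assert()` encodes one rule three times over: each lock class may be requested shared or exclusive but never both, and no bits outside the known mask may be set. A standalone sketch of that validation, with illustrative flag values (`IOLOCK_SHARED` etc. are assumed stand-ins, not the real `XFS_*` definitions from `xfs_inode.h`):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag layout only. */
#define IOLOCK_SHARED	(1u << 0)
#define IOLOCK_EXCL	(1u << 1)
#define MMAPLOCK_SHARED	(1u << 2)
#define MMAPLOCK_EXCL	(1u << 3)
#define ILOCK_SHARED	(1u << 4)
#define ILOCK_EXCL	(1u << 5)
#define LOCK_MASK	0x3fu

/* Same shape as xfs_lock_flags_assert(): for each class, SHARED and
 * EXCL are mutually exclusive; unknown bits and the empty set are
 * rejected. */
static bool lock_flags_valid(unsigned int flags)
{
	if ((flags & (IOLOCK_SHARED | IOLOCK_EXCL)) ==
	    (IOLOCK_SHARED | IOLOCK_EXCL))
		return false;
	if ((flags & (MMAPLOCK_SHARED | MMAPLOCK_EXCL)) ==
	    (MMAPLOCK_SHARED | MMAPLOCK_EXCL))
		return false;
	if ((flags & (ILOCK_SHARED | ILOCK_EXCL)) ==
	    (ILOCK_SHARED | ILOCK_EXCL))
		return false;
	if (flags & ~LOCK_MASK)
		return false;
	return flags != 0;
}
```

Factoring the checks into one helper also let the patch fix the older comments, which wrongly omitted the MMAPLOCK flags from the list of valid values.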
+7-2
fs/xfs/xfs_log.c
···
 	xlog_in_core_t	*iclog, *next_iclog;
 	int		i;
 
-	xlog_cil_destroy(log);
-
 	/*
 	 * Cycle all the iclogbuf locks to make sure all log IO completion
 	 * is done before we tear down these buffers.
···
 		up(&iclog->ic_sema);
 		iclog = iclog->ic_next;
 	}
+
+	/*
+	 * Destroy the CIL after waiting for iclog IO completion because an
+	 * iclog EIO error will try to shut down the log, which accesses the
+	 * CIL to wake up the waiters.
+	 */
+	xlog_cil_destroy(log);
 
 	iclog = log->l_iclog;
 	for (i = 0; i < log->l_iclog_bufs; i++) {
+1-1
fs/xfs/xfs_mount.h
···
  */
 struct xfs_inodegc {
 	struct llist_head	list;
-	struct work_struct	work;
+	struct delayed_work	work;
 
 	/* approximate count of inodes in the list */
 	unsigned int		items;
+6-3
fs/xfs/xfs_qm_syscalls.c
···
 	struct xfs_dquot	*dqp;
 	int			error;
 
-	/* Flush inodegc work at the start of a quota reporting scan. */
+	/*
+	 * Expedite pending inodegc work at the start of a quota reporting
+	 * scan but don't block waiting for it to complete.
+	 */
 	if (id == 0)
-		xfs_inodegc_flush(mp);
+		xfs_inodegc_push(mp);
 
 	/*
 	 * Try to get the dquot. We don't want it allocated on disk, so don't
···
 
 	/* Flush inodegc work at the start of a quota reporting scan. */
 	if (*id == 0)
-		xfs_inodegc_flush(mp);
+		xfs_inodegc_push(mp);
 
 	error = xfs_qm_dqget_next(mp, *id, type, &dqp);
 	if (error)
+6-3
fs/xfs/xfs_super.c
···
 	xfs_extlen_t		lsize;
 	int64_t			ffree;
 
-	/* Wait for whatever inactivations are in progress. */
-	xfs_inodegc_flush(mp);
+	/*
+	 * Expedite background inodegc but don't wait. We do not want to block
+	 * here waiting hours for a billion extent file to be truncated.
+	 */
+	xfs_inodegc_push(mp);
 
 	statp->f_type = XFS_SUPER_MAGIC;
 	statp->f_namelen = MAXNAMELEN - 1;
···
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
 		init_llist_head(&gc->list);
 		gc->items = 0;
-		INIT_WORK(&gc->work, xfs_inodegc_worker);
+		INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker);
 	}
 	return 0;
 }
···
  *		reevaluate operable frequencies. Devfreq users may use
  *		devfreq.nb to the corresponding register notifier call chain.
  * @work:	delayed work for load monitoring.
+ * @freq_table:	current frequency table used by the devfreq driver.
+ * @max_state:	count of entries present in the frequency table.
  * @previous_freq:	previously configured frequency value.
  * @last_status:	devfreq user device info, performance statistics
  * @data:	Private data of the governor. The devfreq framework does not
···
 	struct opp_table *opp_table;
 	struct notifier_block nb;
 	struct delayed_work work;
+
+	unsigned long *freq_table;
+	unsigned int max_state;
 
 	unsigned long previous_freq;
 	struct devfreq_dev_status last_status;
-1
include/linux/lockref.h
···
 extern int lockref_put_return(struct lockref *);
 extern int lockref_get_not_zero(struct lockref *);
 extern int lockref_put_not_zero(struct lockref *);
-extern int lockref_get_or_lock(struct lockref *);
 extern int lockref_put_or_lock(struct lockref *);
 
 extern void lockref_mark_dead(struct lockref *);
···
 #define IORING_ASYNC_CANCEL_ANY	(1U << 2)
 
 /*
- * send/sendmsg and recv/recvmsg flags (sqe->addr2)
+ * send/sendmsg and recv/recvmsg flags (sqe->ioprio)
  *
  * IORING_RECVSEND_POLL_FIRST	If set, instead of first attempting to send
  *				or receive and arm poll if that yields an
+48-67
kernel/bpf/verifier.c
···
 	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
 }
 
+static void reg_bounds_sync(struct bpf_reg_state *reg)
+{
+	/* We might have learned new bounds from the var_off. */
+	__update_reg_bounds(reg);
+	/* We might have learned something about the sign bit. */
+	__reg_deduce_bounds(reg);
+	/* We might have learned some bits from the bounds. */
+	__reg_bound_offset(reg);
+	/* Intersecting with the old var_off might have improved our bounds
+	 * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+	 * then new var_off is (0; 0x7f...fc) which improves our umax.
+	 */
+	__update_reg_bounds(reg);
+}
+
 static bool __reg32_bound_s64(s32 a)
 {
 	return a >= 0 && a <= S32_MAX;
···
 		 * so they do not impact tnum bounds calculation.
 		 */
 		__mark_reg64_unbounded(reg);
-		__update_reg_bounds(reg);
 	}
-
-	/* Intersecting with the old var_off might have improved our bounds
-	 * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
-	 */
-	__reg_deduce_bounds(reg);
-	__reg_bound_offset(reg);
-	__update_reg_bounds(reg);
+	reg_bounds_sync(reg);
 }
 
 static bool __reg64_bound_s32(s64 a)
···
 static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
 {
 	__mark_reg32_unbounded(reg);
-
 	if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
 		reg->s32_min_value = (s32)reg->smin_value;
 		reg->s32_max_value = (s32)reg->smax_value;
···
 		reg->u32_min_value = (u32)reg->umin_value;
 		reg->u32_max_value = (u32)reg->umax_value;
 	}
-
-	/* Intersecting with the old var_off might have improved our bounds
-	 * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
-	 */
-	__reg_deduce_bounds(reg);
-	__reg_bound_offset(reg);
-	__update_reg_bounds(reg);
+	reg_bounds_sync(reg);
 }
 
 /* Mark a register as having a completely unknown (scalar) value. */
···
 	ret_reg->s32_max_value = meta->msize_max_value;
 	ret_reg->smin_value = -MAX_ERRNO;
 	ret_reg->s32_min_value = -MAX_ERRNO;
-	__reg_deduce_bounds(ret_reg);
-	__reg_bound_offset(ret_reg);
-	__update_reg_bounds(ret_reg);
+	reg_bounds_sync(ret_reg);
 }
 
 static int
···
 
 	if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
 		return -EINVAL;
-
-	__update_reg_bounds(dst_reg);
-	__reg_deduce_bounds(dst_reg);
-	__reg_bound_offset(dst_reg);
-
+	reg_bounds_sync(dst_reg);
 	if (sanitize_check_bounds(env, insn, dst_reg) < 0)
 		return -EACCES;
 	if (sanitize_needed(opcode)) {
···
 	/* ALU32 ops are zero extended into 64bit register */
 	if (alu32)
 		zext_32_to_64(dst_reg);
-
-	__update_reg_bounds(dst_reg);
-	__reg_deduce_bounds(dst_reg);
-	__reg_bound_offset(dst_reg);
+	reg_bounds_sync(dst_reg);
 	return 0;
 }
···
 					insn->dst_reg);
 		}
 		zext_32_to_64(dst_reg);
-
-		__update_reg_bounds(dst_reg);
-		__reg_deduce_bounds(dst_reg);
-		__reg_bound_offset(dst_reg);
+		reg_bounds_sync(dst_reg);
 	}
 } else {
 	/* case: R = imm
···
 		return;
 
 	switch (opcode) {
+	/* JEQ/JNE comparison doesn't change the register equivalence.
+	 *
+	 * r1 = r2;
+	 * if (r1 == 42) goto label;
+	 * ...
+	 * label: // here both r1 and r2 are known to be 42.
+	 *
+	 * Hence when marking register as known preserve its ID.
+	 */
 	case BPF_JEQ:
-	case BPF_JNE:
-	{
-		struct bpf_reg_state *reg =
-			opcode == BPF_JEQ ? true_reg : false_reg;
-
-		/* JEQ/JNE comparison doesn't change the register equivalence.
-		 * r1 = r2;
-		 * if (r1 == 42) goto label;
-		 * ...
-		 * label: // here both r1 and r2 are known to be 42.
-		 *
-		 * Hence when marking register as known preserve it's ID.
-		 */
-		if (is_jmp32)
-			__mark_reg32_known(reg, val32);
-		else
-			___mark_reg_known(reg, val);
+		if (is_jmp32) {
+			__mark_reg32_known(true_reg, val32);
+			true_32off = tnum_subreg(true_reg->var_off);
+		} else {
+			___mark_reg_known(true_reg, val);
+			true_64off = true_reg->var_off;
+		}
 		break;
-	}
+	case BPF_JNE:
+		if (is_jmp32) {
+			__mark_reg32_known(false_reg, val32);
+			false_32off = tnum_subreg(false_reg->var_off);
+		} else {
+			___mark_reg_known(false_reg, val);
+			false_64off = false_reg->var_off;
+		}
+		break;
 	case BPF_JSET:
 		if (is_jmp32) {
 			false_32off = tnum_and(false_32off, tnum_const(~val32));
···
 				 dst_reg->smax_value);
 	src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
 							     dst_reg->var_off);
-	/* We might have learned new bounds from the var_off. */
-	__update_reg_bounds(src_reg);
-	__update_reg_bounds(dst_reg);
-	/* We might have learned something about the sign bit. */
-	__reg_deduce_bounds(src_reg);
-	__reg_deduce_bounds(dst_reg);
-	/* We might have learned some bits from the bounds. */
-	__reg_bound_offset(src_reg);
-	__reg_bound_offset(dst_reg);
-	/* Intersecting with the old var_off might have improved our bounds
-	 * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
-	 * then new var_off is (0; 0x7f...fc) which improves our umax.
-	 */
-	__update_reg_bounds(src_reg);
-	__update_reg_bounds(dst_reg);
+	reg_bounds_sync(src_reg);
+	reg_bounds_sync(dst_reg);
 }
 
 static void reg_combine_min_max(struct bpf_reg_state *true_src,
+4-4
kernel/signal.c
···
 	bool autoreap = false;
 	u64 utime, stime;
 
-	BUG_ON(sig == -1);
+	WARN_ON_ONCE(sig == -1);
 
-	/* do_notify_parent_cldstop should have been called instead.  */
-	BUG_ON(task_is_stopped_or_traced(tsk));
+	/* do_notify_parent_cldstop should have been called instead.  */
+	WARN_ON_ONCE(task_is_stopped_or_traced(tsk));
 
-	BUG_ON(!tsk->ptrace &&
+	WARN_ON_ONCE(!tsk->ptrace &&
 	       (tsk->group_leader != tsk || !thread_group_empty(tsk)));
 
 	/* Wake up all pidfd waiters */
-25
lib/lockref.c
···
 EXPORT_SYMBOL(lockref_put_not_zero);
 
 /**
- * lockref_get_or_lock - Increments count unless the count is 0 or dead
- * @lockref: pointer to lockref structure
- * Return: 1 if count updated successfully or 0 if count was zero
- * and we got the lock instead.
- */
-int lockref_get_or_lock(struct lockref *lockref)
-{
-	CMPXCHG_LOOP(
-		new.count++;
-		if (old.count <= 0)
-			break;
-	,
-		return 1;
-	);
-
-	spin_lock(&lockref->lock);
-	if (lockref->count <= 0)
-		return 0;
-	lockref->count++;
-	spin_unlock(&lockref->lock);
-	return 1;
-}
-EXPORT_SYMBOL(lockref_get_or_lock);
-
-/**
  * lockref_put_return - Decrement reference count if possible
  * @lockref: pointer to lockref structure
  *
+4-1
lib/sbitmap.c
···
         sbitmap_deferred_clear(map);
         if (map->word == (1UL << (map_depth - 1)) - 1)
-            continue;
+            goto next;

         nr = find_first_zero_bit(&map->word, map_depth);
         if (nr + nr_tags <= map_depth) {
···
             get_mask = ((1UL << map_tags) - 1) << nr;
             do {
                 val = READ_ONCE(map->word);
+                if ((val & ~get_mask) != val)
+                    goto next;
                 ret = atomic_long_cmpxchg(ptr, val, get_mask | val);
             } while (ret != val);
             get_mask = (get_mask & ~ret) >> nr;
···
             return get_mask;
         }
     }
+next:
     /* Jump to next index. */
     if (++index >= sb->map_nr)
         index = 0;
···
 }

 /**
+ * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array
+ * @set: nftables API set representation
+ * @m: matching data pointing to key mapping array
+ */
+static void nft_set_pipapo_match_destroy(const struct nft_set *set,
+                     struct nft_pipapo_match *m)
+{
+    struct nft_pipapo_field *f;
+    int i, r;
+
+    for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
+        ;
+
+    for (r = 0; r < f->rules; r++) {
+        struct nft_pipapo_elem *e;
+
+        if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
+            continue;
+
+        e = f->mt[r].e;
+
+        nft_set_elem_destroy(set, e, true);
+    }
+}
+
+/**
  * nft_pipapo_destroy() - Free private data for set and all committed elements
  * @set: nftables API set representation
  */
···
 {
     struct nft_pipapo *priv = nft_set_priv(set);
     struct nft_pipapo_match *m;
-    struct nft_pipapo_field *f;
-    int i, r, cpu;
+    int cpu;

     m = rcu_dereference_protected(priv->match, true);
     if (m) {
         rcu_barrier();

-        for (i = 0, f = m->f; i < m->field_count - 1; i++, f++)
-            ;
-
-        for (r = 0; r < f->rules; r++) {
-            struct nft_pipapo_elem *e;
-
-            if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
-                continue;
-
-            e = f->mt[r].e;
-
-            nft_set_elem_destroy(set, e, true);
-        }
+        nft_set_pipapo_match_destroy(set, m);

 #ifdef NFT_PIPAPO_ALIGN
         free_percpu(m->scratch_aligned);
···
     }

     if (priv->clone) {
+        m = priv->clone;
+
+        if (priv->dirty)
+            nft_set_pipapo_match_destroy(set, m);
+
 #ifdef NFT_PIPAPO_ALIGN
         free_percpu(priv->clone->scratch_aligned);
 #endif
···
         act_id = FLOW_ACTION_JUMP;
         *extval = tc_act & TC_ACT_EXT_VAL_MASK;
     } else if (tc_act == TC_ACT_UNSPEC) {
-        NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform/exceed action is \"continue\"");
+        act_id = FLOW_ACTION_CONTINUE;
     } else {
         NL_SET_ERR_MSG_MOD(extack, "Unsupported conform/exceed action offload");
     }
+1-1
net/sunrpc/xdr.c
···
     p = page_address(*xdr->page_ptr);
     xdr->p = p + frag2bytes;
     space_left = xdr->buf->buflen - xdr->buf->len;
-    if (space_left - nbytes >= PAGE_SIZE)
+    if (space_left - frag1bytes >= PAGE_SIZE)
         xdr->end = p + PAGE_SIZE;
     else
         xdr->end = p + space_left - frag1bytes;
+4-4
net/tls/tls_sw.c
···
     }
     darg->async = false;

-    if (ret == -EBADMSG)
-        TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);
-
     return ret;
 }
···
     }

     err = decrypt_internal(sk, skb, dest, NULL, darg);
-    if (err < 0)
+    if (err < 0) {
+        if (err == -EBADMSG)
+            TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);
         return err;
+    }
     if (darg->async)
         goto decrypt_next;
     /* If opportunistic TLS 1.3 ZC failed retry without ZC */
+1
net/xdp/xsk_buff_pool.c
···
     for (i = 0; i < dma_map->dma_pages_cnt; i++) {
         dma = &dma_map->dma_pages[i];
         if (*dma) {
+            *dma &= ~XSK_NEXT_PG_CONTIG_MASK;
             dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
                          DMA_BIDIRECTIONAL, attrs);
             *dma = 0;
+7
samples/fprobe/fprobe_example.c
···

 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";
 module_param_string(symbol, symbol, sizeof(symbol), 0644);
+MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern.");
+
 static char nosymbol[MAX_SYMBOL_LEN] = "";
 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);
+MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern.");
+
 static bool stackdump = true;
 module_param(stackdump, bool, 0644);
+MODULE_PARM_DESC(stackdump, "Enable stackdump.");
+
 static bool use_trace = false;
 module_param(use_trace, bool, 0644);
+MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging.");

 static void show_backtrace(void)
 {
···

     /*
      * connected STDI
+     * TDM support here assumes the device was probed via Audio-Graph-Card.
+     * For now, default to SDTIx1 if it was probed via Simple-Audio-Card.
      */
     sdti_num = of_graph_get_endpoint_count(np);
-    if (WARN_ON((sdti_num > 3) || (sdti_num < 1)))
-        return;
+    if ((sdti_num >= SDTx_MAX) || (sdti_num < 1))
+        sdti_num = 1;

     AK4613_CONFIG_SDTI_set(priv, sdti_num);
 }
···
         snd_soc_kcontrol_component(kcontrol);
     struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component);

+    if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode)
+        return 0;
+
     switch (ucontrol->value.integer.value[0]) {
     case 0:
         /* Set IN1 to normal mode */
···
         break;
     }

-    return 0;
+    return 1;
 }

 static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
+10-4
sound/soc/codecs/madera.c
···
 end:
     snd_soc_dapm_mutex_unlock(dapm);

-    return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+    ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+    if (ret < 0) {
+        dev_err(madera->dev, "Failed to update demux power state: %d\n", ret);
+        return ret;
+    }
+
+    return change;
 }
 EXPORT_SYMBOL_GPL(madera_out1_demux_put);
···
     struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
     const int adsp_num = e->shift_l;
     const unsigned int item = ucontrol->value.enumerated.item[0];
-    int ret;
+    int ret = 0;

     if (item >= e->items)
         return -EINVAL;
···
              "Cannot change '%s' while in use by active audio paths\n",
              kcontrol->id.name);
         ret = -EBUSY;
-    } else {
+    } else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) {
         /* Volatile register so defer until the codec is powered up */
         priv->adsp_rate_cache[adsp_num] = e->values[item];
-        ret = 0;
+        ret = 1;
     }

     mutex_unlock(&priv->rate_lock);
···
     struct snd_soc_dapm_update *update = NULL;
     u32 port_id = w->shift;

+    if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0])
+        return 0;
+
     wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0];
+
+    /* Remove channel from any list it's in before adding it to a new one */
+    list_del_init(&wcd->rx_chs[port_id].list);

     switch (wcd->rx_port_value[port_id]) {
     case 0:
-        list_del_init(&wcd->rx_chs[port_id].list);
+        /* Channel already removed from lists. Nothing to do here */
         break;
     case 1:
         list_add_tail(&wcd->rx_chs[port_id].list,
···
     priv->spkvdd_en_gpio = gpiod_get(codec_dev, "wlf,spkvdd-ena", GPIOD_OUT_LOW);
     put_device(codec_dev);

-    if (IS_ERR(priv->spkvdd_en_gpio))
-        return dev_err_probe(dev, PTR_ERR(priv->spkvdd_en_gpio), "getting spkvdd-GPIO\n");
+    if (IS_ERR(priv->spkvdd_en_gpio)) {
+        ret = PTR_ERR(priv->spkvdd_en_gpio);
+        /*
+         * The spkvdd gpio-lookup is registered by: drivers/mfd/arizona-spi.c,
+         * so -ENOENT means that arizona-spi hasn't probed yet.
+         */
+        if (ret == -ENOENT)
+            ret = -EPROBE_DEFER;
+
+        return dev_err_probe(dev, ret, "getting spkvdd-GPIO\n");
+    }

     /* override platform name, if required */
     byt_wm5102_card.dev = dev;
+29-22
sound/soc/intel/boards/sof_sdw.c
···
     .late_probe = sof_sdw_card_late_probe,
 };

+static void mc_dailink_exit_loop(struct snd_soc_card *card)
+{
+    struct snd_soc_dai_link *link;
+    int ret;
+    int i, j;
+
+    for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
+        if (!codec_info_list[i].exit)
+            continue;
+        /*
+         * We don't need to call .exit function if there is no matched
+         * dai link found.
+         */
+        for_each_card_prelinks(card, j, link) {
+            if (!strcmp(link->codecs[0].dai_name,
+                    codec_info_list[i].dai_name)) {
+                ret = codec_info_list[i].exit(card, link);
+                if (ret)
+                    dev_warn(card->dev,
+                         "codec exit failed %d\n",
+                         ret);
+                break;
+            }
+        }
+    }
+}
+
 static int mc_probe(struct platform_device *pdev)
 {
     struct snd_soc_card *card = &card_sof_sdw;
···
     ret = devm_snd_soc_register_card(&pdev->dev, card);
     if (ret) {
         dev_err(card->dev, "snd_soc_register_card failed %d\n", ret);
+        mc_dailink_exit_loop(card);
         return ret;
     }
···
 static int mc_remove(struct platform_device *pdev)
 {
     struct snd_soc_card *card = platform_get_drvdata(pdev);
-    struct snd_soc_dai_link *link;
-    int ret;
-    int i, j;

-    for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) {
-        if (!codec_info_list[i].exit)
-            continue;
-        /*
-         * We don't need to call .exit function if there is no matched
-         * dai link found.
-         */
-        for_each_card_prelinks(card, j, link) {
-            if (!strcmp(link->codecs[0].dai_name,
-                    codec_info_list[i].dai_name)) {
-                ret = codec_info_list[i].exit(card, link);
-                if (ret)
-                    dev_warn(&pdev->dev,
-                         "codec exit failed %d\n",
-                         ret);
-                break;
-            }
-        }
-    }
+    mc_dailink_exit_loop(card);

     return 0;
 }
+6
sound/soc/qcom/qdsp6/q6apm-dai.c
···
     cfg.num_channels = runtime->channels;
     cfg.bit_width = prtd->bits_per_sample;

+    if (prtd->state) {
+        /* clear the previous setup if any */
+        q6apm_graph_stop(prtd->graph);
+        q6apm_unmap_memory_regions(prtd->graph, substream->stream);
+    }
+
     prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
     prtd->pos = 0;
     /* rate and channels are sent to audio driver */
+129-31
sound/soc/rockchip/rockchip_i2s.c
···
 #include <linux/of_gpio.h>
 #include <linux/of_device.h>
 #include <linux/clk.h>
+#include <linux/pinctrl/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <linux/spinlock.h>
···
     const struct rk_i2s_pins *pins;
     unsigned int bclk_ratio;
     spinlock_t lock; /* tx/rx lock */
+    struct pinctrl *pinctrl;
+    struct pinctrl_state *bclk_on;
+    struct pinctrl_state *bclk_off;
 };
+
+static int i2s_pinctrl_select_bclk_on(struct rk_i2s_dev *i2s)
+{
+    int ret = 0;
+
+    if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_on))
+        ret = pinctrl_select_state(i2s->pinctrl,
+                       i2s->bclk_on);
+
+    if (ret)
+        dev_err(i2s->dev, "bclk enable failed %d\n", ret);
+
+    return ret;
+}
+
+static int i2s_pinctrl_select_bclk_off(struct rk_i2s_dev *i2s)
+{
+
+    int ret = 0;
+
+    if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_off))
+        ret = pinctrl_select_state(i2s->pinctrl,
+                       i2s->bclk_off);
+
+    if (ret)
+        dev_err(i2s->dev, "bclk disable failed %d\n", ret);
+
+    return ret;
+}

 static int i2s_runtime_suspend(struct device *dev)
 {
···
     return snd_soc_dai_get_drvdata(dai);
 }

-static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on)
 {
     unsigned int val = 0;
     int retry = 10;
+    int ret = 0;

     spin_lock(&i2s->lock);
     if (on) {
-        regmap_update_bits(i2s->regmap, I2S_DMACR,
-                   I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+        ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+                     I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE);
+        if (ret < 0)
+            goto end;

-        regmap_update_bits(i2s->regmap, I2S_XFER,
-                   I2S_XFER_TXS_START | I2S_XFER_RXS_START,
-                   I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+        ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+                     I2S_XFER_TXS_START | I2S_XFER_RXS_START,
+                     I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+        if (ret < 0)
+            goto end;

         i2s->tx_start = true;
     } else {
         i2s->tx_start = false;

-        regmap_update_bits(i2s->regmap, I2S_DMACR,
-                   I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+        ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
+                     I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE);
+        if (ret < 0)
+            goto end;

         if (!i2s->rx_start) {
-            regmap_update_bits(i2s->regmap, I2S_XFER,
-                       I2S_XFER_TXS_START |
-                       I2S_XFER_RXS_START,
-                       I2S_XFER_TXS_STOP |
-                       I2S_XFER_RXS_STOP);
+            ret = regmap_update_bits(i2s->regmap, I2S_XFER,
+                         I2S_XFER_TXS_START |
+                         I2S_XFER_RXS_START,
+                         I2S_XFER_TXS_STOP |
+                         I2S_XFER_RXS_STOP);
+            if (ret < 0)
+                goto end;

             udelay(150);
-            regmap_update_bits(i2s->regmap, I2S_CLR,
-                       I2S_CLR_TXC | I2S_CLR_RXC,
-                       I2S_CLR_TXC | I2S_CLR_RXC);
+            ret = regmap_update_bits(i2s->regmap, I2S_CLR,
+                         I2S_CLR_TXC | I2S_CLR_RXC,
+                         I2S_CLR_TXC | I2S_CLR_RXC);
+            if (ret < 0)
+                goto end;

             regmap_read(i2s->regmap, I2S_CLR, &val);
···
             }
         }
     }
+end:
     spin_unlock(&i2s->lock);
+    if (ret < 0)
+        dev_err(i2s->dev, "lrclk update failed\n");
+
+    return ret;
 }

-static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
+static int rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on)
 {
     unsigned int val = 0;
     int retry = 10;
+    int ret = 0;

     spin_lock(&i2s->lock);
     if (on) {
-        regmap_update_bits(i2s->regmap, I2S_DMACR,
+        ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
                    I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_ENABLE);
+        if (ret < 0)
+            goto end;

-        regmap_update_bits(i2s->regmap, I2S_XFER,
+        ret = regmap_update_bits(i2s->regmap, I2S_XFER,
                    I2S_XFER_TXS_START | I2S_XFER_RXS_START,
                    I2S_XFER_TXS_START | I2S_XFER_RXS_START);
+        if (ret < 0)
+            goto end;

         i2s->rx_start = true;
     } else {
         i2s->rx_start = false;

-        regmap_update_bits(i2s->regmap, I2S_DMACR,
+        ret = regmap_update_bits(i2s->regmap, I2S_DMACR,
                    I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_DISABLE);
+        if (ret < 0)
+            goto end;

         if (!i2s->tx_start) {
-            regmap_update_bits(i2s->regmap, I2S_XFER,
+            ret = regmap_update_bits(i2s->regmap, I2S_XFER,
                        I2S_XFER_TXS_START |
                        I2S_XFER_RXS_START,
                        I2S_XFER_TXS_STOP |
                        I2S_XFER_RXS_STOP);
-
+            if (ret < 0)
+                goto end;
             udelay(150);
-            regmap_update_bits(i2s->regmap, I2S_CLR,
+            ret = regmap_update_bits(i2s->regmap, I2S_CLR,
                        I2S_CLR_TXC | I2S_CLR_RXC,
                        I2S_CLR_TXC | I2S_CLR_RXC);
-
+            if (ret < 0)
+                goto end;
             regmap_read(i2s->regmap, I2S_CLR, &val);
-
             /* Should wait for clear operation to finish */
             while (val) {
                 regmap_read(i2s->regmap, I2S_CLR, &val);
···
             }
         }
     }
+end:
     spin_unlock(&i2s->lock);
+    if (ret < 0)
+        dev_err(i2s->dev, "lrclk update failed\n");
+
+    return ret;
 }

 static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
···
     case SNDRV_PCM_TRIGGER_RESUME:
     case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
         if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-            rockchip_snd_rxctrl(i2s, 1);
+            ret = rockchip_snd_rxctrl(i2s, 1);
         else
-            rockchip_snd_txctrl(i2s, 1);
+            ret = rockchip_snd_txctrl(i2s, 1);
+        /* Do not turn on bclk if lrclk open fails. */
+        if (ret < 0)
+            return ret;
+        i2s_pinctrl_select_bclk_on(i2s);
         break;
     case SNDRV_PCM_TRIGGER_SUSPEND:
     case SNDRV_PCM_TRIGGER_STOP:
     case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
-        if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
-            rockchip_snd_rxctrl(i2s, 0);
-        else
-            rockchip_snd_txctrl(i2s, 0);
+        if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+            if (!i2s->tx_start)
+                i2s_pinctrl_select_bclk_off(i2s);
+            ret = rockchip_snd_rxctrl(i2s, 0);
+        } else {
+            if (!i2s->rx_start)
+                i2s_pinctrl_select_bclk_off(i2s);
+            ret = rockchip_snd_txctrl(i2s, 0);
+        }
         break;
     default:
         ret = -EINVAL;
···
     }

     i2s->bclk_ratio = 64;
+    i2s->pinctrl = devm_pinctrl_get(&pdev->dev);
+    if (IS_ERR(i2s->pinctrl))
+        dev_err(&pdev->dev, "failed to find i2s pinctrl\n");
+
+    i2s->bclk_on = pinctrl_lookup_state(i2s->pinctrl,
+                        "bclk_on");
+    if (IS_ERR_OR_NULL(i2s->bclk_on))
+        dev_err(&pdev->dev, "failed to find i2s default state\n");
+    else
+        dev_dbg(&pdev->dev, "find i2s bclk state\n");
+
+    i2s->bclk_off = pinctrl_lookup_state(i2s->pinctrl,
+                         "bclk_off");
+    if (IS_ERR_OR_NULL(i2s->bclk_off))
+        dev_err(&pdev->dev, "failed to find i2s gpio state\n");
+    else
+        dev_dbg(&pdev->dev, "find i2s bclk_off state\n");
+
+    i2s_pinctrl_select_bclk_off(i2s);
+
+    i2s->playback_dma_data.addr = res->start + I2S_TXDR;
+    i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+    i2s->playback_dma_data.maxburst = 4;
+
+    i2s->capture_dma_data.addr = res->start + I2S_RXDR;
+    i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+    i2s->capture_dma_data.maxburst = 4;

     dev_set_drvdata(&pdev->dev, i2s);
+5
sound/soc/soc-dapm.c
···
 snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm,
                   const struct snd_soc_dapm_widget *widget);

+static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg);
+
 /* dapm power sequences - make this per codec in the future */
 static int dapm_up_seq[] = {
     [snd_soc_dapm_pre] = 1,
···

             snd_soc_dapm_add_path(widget->dapm, data->widget,
                           widget, NULL, NULL);
+        } else if (e->reg != SND_SOC_NOPM) {
+            data->value = soc_dapm_read(widget->dapm, e->reg) &
+                      (e->mask << e->shift_l);
         }
         break;
     default:
+2-2
sound/soc/soc-ops.c
···
         return -EINVAL;
     if (mc->platform_max && tmp > mc->platform_max)
         return -EINVAL;
-    if (tmp > mc->max - mc->min + 1)
+    if (tmp > mc->max - mc->min)
         return -EINVAL;

     if (invert)
···
         return -EINVAL;
     if (mc->platform_max && tmp > mc->platform_max)
         return -EINVAL;
-    if (tmp > mc->max - mc->min + 1)
+    if (tmp > mc->max - mc->min)
         return -EINVAL;

     if (invert)
+9-1
sound/soc/sof/intel/hda-dsp.c
···
  * Power Management.
  */

-static int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
+int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask)
 {
+    struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata;
+    const struct sof_intel_dsp_desc *chip = hda->desc;
     unsigned int cpa;
     u32 adspcs;
     int ret;
+
+    /* restrict core_mask to host managed cores mask */
+    core_mask &= chip->host_managed_cores_mask;
+    /* return if core_mask is not valid */
+    if (!core_mask)
+        return 0;

     /* update bits */
     snd_sof_dsp_update_bits(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS,
+7-6
sound/soc/sof/intel/hda-loader.c
···
 }

 /*
- * first boot sequence has some extra steps. core 0 waits for power
- * status on core 1, so power up core 1 also momentarily, keep it in
- * reset/stall and then turn it off
+ * first boot sequence has some extra steps.
+ * power on all host-managed cores, unstall/run only the boot core to boot
+ * the DSP, then turn off any non-boot cores that were powered on.
  */
 static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
 {
···
     int ret;

     /* step 1: power up corex */
-    ret = hda_dsp_enable_core(sdev, chip->host_managed_cores_mask);
+    ret = hda_dsp_core_power_up(sdev, chip->host_managed_cores_mask);
     if (ret < 0) {
         if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
             dev_err(sdev->dev, "error: dsp core 0/1 power up failed\n");
···
     snd_sof_dsp_write(sdev, HDA_DSP_BAR, chip->ipc_req, ipc_hdr);

     /* step 3: unset core 0 reset state & unstall/run core 0 */
-    ret = hda_dsp_core_run(sdev, BIT(0));
+    ret = hda_dsp_core_run(sdev, chip->init_core_mask);
     if (ret < 0) {
         if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
             dev_err(sdev->dev,
···
     struct snd_dma_buffer dmab;
     int ret, ret1, i;

-    if (hda->imrboot_supported && !sdev->first_boot) {
+    if (sdev->system_suspend_target < SOF_SUSPEND_S4 &&
+        hda->imrboot_supported && !sdev->first_boot) {
         dev_dbg(sdev->dev, "IMR restore supported, booting from IMR directly\n");
         hda->boot_iteration = 0;
         ret = hda_dsp_boot_imr(sdev);
+1-73
sound/soc/sof/intel/hda-pcm.c
···
         goto found;
     }

-    switch (sof_hda_position_quirk) {
-    case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
-        /*
-         * This legacy code, inherited from the Skylake driver,
-         * mixes DPIB registers and DPIB DDR updates and
-         * does not seem to follow any known hardware recommendations.
-         * It's not clear e.g. why there is a different flow
-         * for capture and playback, the only information that matters is
-         * what traffic class is used, and on all SOF-enabled platforms
-         * only VC0 is supported so the work-around was likely not necessary
-         * and quite possibly wrong.
-         */
-
-        /* DPIB/posbuf position mode:
-         * For Playback, Use DPIB register from HDA space which
-         * reflects the actual data transferred.
-         * For Capture, Use the position buffer for pointer, as DPIB
-         * is not accurate enough, its update may be completed
-         * earlier than the data written to DDR.
-         */
-        if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
-            pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                           AZX_REG_VS_SDXDPIB_XBASE +
-                           (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                        hstream->index));
-        } else {
-            /*
-             * For capture stream, we need more workaround to fix the
-             * position incorrect issue:
-             *
-             * 1. Wait at least 20us before reading position buffer after
-             * the interrupt generated(IOC), to make sure position update
-             * happens on frame boundary i.e. 20.833uSec for 48KHz.
-             * 2. Perform a dummy Read to DPIB register to flush DMA
-             * position value.
-             * 3. Read the DMA Position from posbuf. Now the readback
-             * value should be >= period boundary.
-             */
-            usleep_range(20, 21);
-            snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                     AZX_REG_VS_SDXDPIB_XBASE +
-                     (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                      hstream->index));
-            pos = snd_hdac_stream_get_pos_posbuf(hstream);
-        }
-        break;
-    case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
-        /*
-         * In case VC1 traffic is disabled this is the recommended option
-         */
-        pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
-                       AZX_REG_VS_SDXDPIB_XBASE +
-                       (AZX_REG_VS_SDXDPIB_XINTERVAL *
-                    hstream->index));
-        break;
-    case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
-        /*
-         * This is the recommended option when VC1 is enabled.
-         * While this isn't needed for SOF platforms it's added for
-         * consistency and debug.
-         */
-        pos = snd_hdac_stream_get_pos_posbuf(hstream);
-        break;
-    default:
-        dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
-                 sof_hda_position_quirk);
-        pos = 0;
-        break;
-    }
-
-    if (pos >= hstream->bufsize)
-        pos = 0;
-
+    pos = hda_dsp_stream_get_position(hstream, substream->stream, true);
 found:
     pos = bytes_to_frames(substream->runtime, pos);
+90-4
sound/soc/sof/intel/hda-stream.c
···
 }

 static void
-hda_dsp_set_bytes_transferred(struct hdac_stream *hstream, u64 buffer_size)
+hda_dsp_compr_bytes_transferred(struct hdac_stream *hstream, int direction)
 {
+    u64 buffer_size = hstream->bufsize;
     u64 prev_pos, pos, num_bytes;

     div64_u64_rem(hstream->curr_pos, buffer_size, &prev_pos);
-    pos = snd_hdac_stream_get_pos_posbuf(hstream);
+    pos = hda_dsp_stream_get_position(hstream, direction, false);

     if (pos < prev_pos)
         num_bytes = (buffer_size - prev_pos) + pos;
···
         if (s->substream && sof_hda->no_ipc_position) {
             snd_sof_pcm_period_elapsed(s->substream);
         } else if (s->cstream) {
-            hda_dsp_set_bytes_transferred(s,
-                              s->cstream->runtime->buffer_size);
+            hda_dsp_compr_bytes_transferred(s, s->cstream->direction);
             snd_compr_fragment_elapsed(s->cstream);
         }
     }
···
                           hext_stream);
         devm_kfree(sdev->dev, hda_stream);
     }
+}
+
+snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream,
+                          int direction, bool can_sleep)
+{
+    struct hdac_ext_stream *hext_stream = stream_to_hdac_ext_stream(hstream);
+    struct sof_intel_hda_stream *hda_stream = hstream_to_sof_hda_stream(hext_stream);
+    struct snd_sof_dev *sdev = hda_stream->sdev;
+    snd_pcm_uframes_t pos;
+
+    switch (sof_hda_position_quirk) {
+    case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
+        /*
+         * This legacy code, inherited from the Skylake driver,
+         * mixes DPIB registers and DPIB DDR updates and
+         * does not seem to follow any known hardware recommendations.
+         * It's not clear e.g. why there is a different flow
+         * for capture and playback, the only information that matters is
+         * what traffic class is used, and on all SOF-enabled platforms
+         * only VC0 is supported so the work-around was likely not necessary
+         * and quite possibly wrong.
+         */
+
+        /* DPIB/posbuf position mode:
+         * For Playback, Use DPIB register from HDA space which
+         * reflects the actual data transferred.
+         * For Capture, Use the position buffer for pointer, as DPIB
+         * is not accurate enough, its update may be completed
+         * earlier than the data written to DDR.
+         */
+        if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
+            pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                           AZX_REG_VS_SDXDPIB_XBASE +
+                           (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                        hstream->index));
+        } else {
+            /*
+             * For capture stream, we need more workaround to fix the
+             * position incorrect issue:
+             *
+             * 1. Wait at least 20us before reading position buffer after
+             * the interrupt generated(IOC), to make sure position update
+             * happens on frame boundary i.e. 20.833uSec for 48KHz.
+             * 2. Perform a dummy Read to DPIB register to flush DMA
+             * position value.
+             * 3. Read the DMA Position from posbuf. Now the readback
+             * value should be >= period boundary.
+             */
+            if (can_sleep)
+                usleep_range(20, 21);
+
+            snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                     AZX_REG_VS_SDXDPIB_XBASE +
+                     (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                      hstream->index));
+            pos = snd_hdac_stream_get_pos_posbuf(hstream);
+        }
+        break;
+    case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
+        /*
+         * In case VC1 traffic is disabled this is the recommended option
+         */
+        pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+                       AZX_REG_VS_SDXDPIB_XBASE +
+                       (AZX_REG_VS_SDXDPIB_XINTERVAL *
+                    hstream->index));
+        break;
+    case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
+        /*
+         * This is the recommended option when VC1 is enabled.
+         * While this isn't needed for SOF platforms it's added for
+         * consistency and debug.
+         */
+        pos = snd_hdac_stream_get_pos_posbuf(hstream);
+        break;
+    default:
+        dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
+                 sof_hda_position_quirk);
+        pos = 0;
+        break;
+    }
+
+    if (pos >= hstream->bufsize)
+        pos = 0;
+
+    return pos;
 }
···
     struct sof_ipc_ctrl_data *cdata;
     int ret;

+    if (scontrol->max_size < (sizeof(*cdata) + sizeof(struct sof_abi_hdr))) {
+        dev_err(sdev->dev, "%s: insufficient size for a bytes control: %zu.\n",
+            __func__, scontrol->max_size);
+        return -EINVAL;
+    }
+
+    if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) {
+        dev_err(sdev->dev,
+            "%s: bytes data size %zu exceeds max %zu.\n", __func__,
+            scontrol->priv_size, scontrol->max_size - sizeof(*cdata));
+        return -EINVAL;
+    }
+
     scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL);
     if (!scontrol->ipc_control_data)
         return -ENOMEM;
-
-    if (scontrol->max_size < sizeof(*cdata) ||
-        scontrol->max_size < sizeof(struct sof_abi_hdr)) {
-        ret = -EINVAL;
-        goto err;
-    }
-
-    /* init the get/put bytes data */
-    if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) {
-        dev_err(sdev->dev, "err: bytes data size %zu exceeds max %zu.\n",
-            scontrol->priv_size, scontrol->max_size - sizeof(*cdata));
-        ret = -EINVAL;
-        goto err;
-    }

     scontrol->size = sizeof(struct sof_ipc_ctrl_data) + scontrol->priv_size;
+1-1
sound/soc/sof/mediatek/mt8186/mt8186.c
···
                        PLATFORM_DEVID_NONE,
                        pdev, sizeof(*pdev));
     if (IS_ERR(priv->ipc_dev)) {
-        ret = IS_ERR(priv->ipc_dev);
+        ret = PTR_ERR(priv->ipc_dev);
         dev_err(sdev->dev, "failed to create mtk-adsp-ipc device\n");
         goto err_adsp_off;
     }
+20-1
sound/soc/sof/pm.c
···
     u32 target_dsp_state;

     switch (sdev->system_suspend_target) {
+    case SOF_SUSPEND_S5:
+    case SOF_SUSPEND_S4:
+        /* DSP should be in D3 if the system is suspending to S3+ */
     case SOF_SUSPEND_S3:
         /* DSP should be in D3 if the system is suspending to S3 */
         target_dsp_state = SOF_DSP_PM_D3;
···
         return 0;

 #if defined(CONFIG_ACPI)
-    if (acpi_target_system_state() == ACPI_STATE_S0)
+    switch (acpi_target_system_state()) {
+    case ACPI_STATE_S0:
         sdev->system_suspend_target = SOF_SUSPEND_S0IX;
+        break;
+    case ACPI_STATE_S1:
+    case ACPI_STATE_S2:
+    case ACPI_STATE_S3:
+        sdev->system_suspend_target = SOF_SUSPEND_S3;
+        break;
+    case ACPI_STATE_S4:
+        sdev->system_suspend_target = SOF_SUSPEND_S4;
+        break;
+    case ACPI_STATE_S5:
+        sdev->system_suspend_target = SOF_SUSPEND_S5;
+        break;
+    default:
+        break;
+    }
 #endif

     return 0;
···
 
 	sample_type = evsel->core.attr.sample_type;
 
+	if (sample_type & ~OFFCPU_SAMPLE_TYPES) {
+		pr_err("not supported sample type: %llx\n",
+		       (unsigned long long)sample_type);
+		return -1;
+	}
+
 	if (sample_type & (PERF_SAMPLE_ID | PERF_SAMPLE_IDENTIFIER)) {
 		if (evsel->core.id)
 			sid = evsel->core.id[0];
···
 	}
 	if (sample_type & PERF_SAMPLE_CGROUP)
 		data.array[n++] = key.cgroup_id;
-	/* TODO: handle more sample types */
 
 	size = n * sizeof(u64);
 	data.hdr.size = size;
+14-6
tools/perf/util/bpf_skel/off_cpu.bpf.c
···
 	__uint(max_entries, 1);
 } cgroup_filter SEC(".maps");
 
+/* new kernel task_struct definition */
+struct task_struct___new {
+	long __state;
+} __attribute__((preserve_access_index));
+
 /* old kernel task_struct definition */
 struct task_struct___old {
 	long state;
···
  */
 static inline int get_task_state(struct task_struct *t)
 {
-	if (bpf_core_field_exists(t->__state))
-		return BPF_CORE_READ(t, __state);
+	/* recast pointer to capture new type for compiler */
+	struct task_struct___new *t_new = (void *)t;
 
-	/* recast pointer to capture task_struct___old type for compiler */
-	struct task_struct___old *t_old = (void *)t;
+	if (bpf_core_field_exists(t_new->__state)) {
+		return BPF_CORE_READ(t_new, __state);
+	} else {
+		/* recast pointer to capture old type for compiler */
+		struct task_struct___old *t_old = (void *)t;
 
-	/* now use old "state" name of the field */
-	return BPF_CORE_READ(t_old, state);
+		return BPF_CORE_READ(t_old, state);
+	}
 }
 
 static inline __u64 get_cgroup_id(struct task_struct *t)
+9
tools/perf/util/evsel.c
···
 #include "util.h"
 #include "hashmap.h"
 #include "pmu-hybrid.h"
+#include "off_cpu.h"
 #include "../perf-sys.h"
 #include "util/parse-branch-options.h"
 #include <internal/xyarray.h>
···
 	}
 }
 
+static bool evsel__is_offcpu_event(struct evsel *evsel)
+{
+	return evsel__is_bpf_output(evsel) && !strcmp(evsel->name, OFFCPU_EVENT);
+}
+
 /*
  * The enable_on_exec/disabled value strategy:
  *
···
 	 */
 	if (evsel__is_dummy_event(evsel))
 		evsel__reset_sample_bit(evsel, BRANCH_STACK);
+
+	if (evsel__is_offcpu_event(evsel))
+		evsel->core.attr.sample_type &= OFFCPU_SAMPLE_TYPES;
 }
 
 int evsel__set_filter(struct evsel *evsel, const char *filter)
···
 	# FDB entry was installed.
 	bridge link set dev $br_port1 flood off
 
+	ip link set $host1_if promisc on
 	tc qdisc add dev $host1_if ingress
 	tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \
 		flower dst_mac $mac action drop
···
 	tc -j -s filter show dev $host1_if ingress \
 		| jq -e ".[] | select(.options.handle == 101) \
 		| select(.options.actions[0].stats.packets == 1)" &> /dev/null
-	check_fail $? "Packet reached second host when should not"
+	check_fail $? "Packet reached first host when should not"
 
 	$MZ $host1_if -c 1 -p 64 -a $mac -t ip -q
 	sleep 1
···
 
 	tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower
 	tc qdisc del dev $host1_if ingress
+	ip link set $host1_if promisc off
 
 	bridge link set dev $br_port1 flood on
 
···
 
 	# Add an ACL on `host2_if` which will tell us whether the packet
 	# was flooded to it or not.
+	ip link set $host2_if promisc on
 	tc qdisc add dev $host2_if ingress
 	tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \
 		flower dst_mac $mac action drop
···
 
 	tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower
 	tc qdisc del dev $host2_if ingress
+	ip link set $host2_if promisc off
 
 	return $err
 }
···
 	ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24
 	ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
 	ip -netns "${PEER_NS}" link set dev veth1 up
-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 }
 
 run_one() {
+1-1
tools/testing/selftests/net/udpgro_bench.sh
···
 	ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
 	ip -netns "${PEER_NS}" link set dev veth1 up
 
-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -r &
 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -t ${rx_args} -r &
 
+1-1
tools/testing/selftests/net/udpgro_frglist.sh
···
 	ip netns exec "${PEER_NS}" ethtool -K veth1 rx-gro-list on
 
 
-	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy
+	ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp
 	tc -n "${PEER_NS}" qdisc add dev veth1 clsact
 	tc -n "${PEER_NS}" filter add dev veth1 ingress prio 4 protocol ipv6 bpf object-file ../bpf/nat6to4.o section schedcls/ingress6/nat_6 direct-action
 	tc -n "${PEER_NS}" filter add dev veth1 egress prio 4 protocol ip bpf object-file ../bpf/nat6to4.o section schedcls/egress4/snat4 direct-action
+1-1
tools/testing/selftests/net/udpgro_fwd.sh
···
 		ip -n $BASE$ns addr add dev veth$ns $BM_NET_V4$ns/24
 		ip -n $BASE$ns addr add dev veth$ns $BM_NET_V6$ns/64 nodad
 	done
-	ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null
+	ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
 }
 
 create_vxlan_endpoint() {
+3-3
tools/testing/selftests/net/veth.sh
···
 	ip netns exec $NS_SRC ethtool -L veth$SRC rx 1 tx 2 2>/dev/null
 	printf "%-60s" "bad setting: XDP with RX nr less than TX"
 	ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \
-		section xdp_dummy 2>/dev/null &&\
+		section xdp 2>/dev/null &&\
 	echo "fail - set operation successful ?!?" || echo " ok "
 
 	# the following tests will run with multiple channels active
 	ip netns exec $NS_SRC ethtool -L veth$SRC rx 2
 	ip netns exec $NS_DST ethtool -L veth$DST rx 2
 	ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \
-		section xdp_dummy 2>/dev/null
+		section xdp 2>/dev/null
 	printf "%-60s" "bad setting: reducing RX nr below peer TX with XDP set"
 	ip netns exec $NS_DST ethtool -L veth$DST rx 1 2>/dev/null &&\
 		echo "fail - set operation successful ?!?" || echo " ok "
···
 	chk_channels "setting invalid channels nr" $DST 2 2
 fi
 
-ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null
+ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null
chk_gro_flag "with xdp attached - gro flag" $DST on
chk_gro_flag " - peer gro flag" $SRC off
chk_tso_flag " - tso flag" $SRC off