···
 space-efficient. If this option is not present, large padding is
 used - that is for compatibility with older kernels.

+allow_discards
+	Allow block discard requests (a.k.a. TRIM) for the integrity device.
+	Discards are only allowed to devices using internal hash.

-The journal mode (D/J), buffer_sectors, journal_watermark, commit_time can
-be changed when reloading the target (load an inactive table and swap the
-tables with suspend and resume). The other arguments should not be changed
-when reloading the target because the layout of disk data depend on them
-and the reloaded target would be non-functional.
+The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
+allow_discards can be changed when reloading the target (load an inactive
+table and swap the tables with suspend and resume). The other arguments
+should not be changed when reloading the target because the layout of disk
+data depends on them and the reloaded target would be non-functional.


 The layout of the formatted block device:
···
     - running
     - ICE OS Default Package
     - The name of the DDP package that is active in the device. The DDP
-      package is loaded by the driver during initialization. Each varation
-      of DDP package shall have a unique name.
+      package is loaded by the driver during initialization. Each
+      variation of the DDP package has a unique name.
   * - ``fw.app``
     - running
     - 1.3.1.0
==============================
Running nested guests with KVM
==============================

A nested guest is a guest that runs inside another guest (the outer
guest can be KVM-based or a different hypervisor). The straightforward
example is a KVM guest that in turn runs a KVM guest (the rest of
this document is built on this example)::

      .----------------.  .----------------.
      |                |  |                |
      |       L2       |  |       L2       |
      | (Nested Guest) |  | (Nested Guest) |
      |                |  |                |
      '----------------'  '----------------'
      .------------------------------------.
      |                                    |
      |       L1 (Guest Hypervisor)        |
      |          KVM (/dev/kvm)            |
      |                                    |
      '------------------------------------'
.------------------------------------------------------.
|                  L0 (Host Hypervisor)                 |
|                     KVM (/dev/kvm)                    |
|------------------------------------------------------|
|       Hardware (with virtualization extensions)       |
'------------------------------------------------------'

Terminology:

- L0 – level-0; the bare metal host, running KVM

- L1 – level-1 guest; a VM running on L0; also called the "guest
  hypervisor", as it itself is capable of running KVM.

- L2 – level-2 guest; a VM running on L1, this is the "nested guest"

.. note:: The above diagram is modelled after the x86 architecture;
          s390x, ppc64 and other architectures are likely to have
          a different design for nesting.

          For example, s390x always has an LPAR (LogicalPARtition)
          hypervisor running on bare metal, adding another layer and
          resulting in at least four levels in a nested setup — L0 (bare
          metal, running the LPAR hypervisor), L1 (host hypervisor), L2
          (guest hypervisor), L3 (nested guest).

          This document will stick with the three-level terminology (L0,
          L1, and L2) for all architectures; and will largely focus on
          x86.


Use Cases
---------

There are several scenarios where nested KVM can be useful, to name a
few:

- As a developer, you want to test your software on different operating
  systems (OSes). Instead of renting multiple VMs from a Cloud
  Provider, using nested KVM lets you rent a large enough "guest
  hypervisor" (level-1 guest). This in turn allows you to create
  multiple nested guests (level-2 guests), running different OSes, on
  which you can develop and test your software.

- Live migration of "guest hypervisors" and their nested guests, for
  load balancing, disaster recovery, etc.

- VM image creation tools (e.g. ``virt-install``, etc) often run
  their own VM, and users expect these to work inside a VM.

- Some OSes use virtualization internally for security (e.g. to let
  applications run safely in isolation).


Enabling "nested" (x86)
-----------------------

From Linux kernel v4.19 onwards, the ``nested`` KVM parameter is enabled
by default for Intel and AMD. (Though your Linux distribution might
override this default.)

In case you are running a Linux kernel older than v4.19, to enable
nesting, set the ``nested`` KVM module parameter to ``Y`` or ``1``. To
persist this setting across reboots, you can add it in a config file, as
shown below:

1. On the bare metal host (L0), list the kernel modules and ensure that
   the KVM modules are loaded::

    $ lsmod | grep -i kvm
    kvm_intel             133627  0
    kvm                   435079  1 kvm_intel

2. Show information for the ``kvm_intel`` module::

    $ modinfo kvm_intel | grep -i nested
    parm:           nested:bool

3. For the nested KVM configuration to persist across reboots, place the
   below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it
   doesn't exist)::

    $ cat /etc/modprobe.d/kvm_intel.conf
    options kvm-intel nested=y

4. Unload and re-load the KVM Intel module::

    $ sudo rmmod kvm-intel
    $ sudo modprobe kvm-intel

5. Verify that the ``nested`` parameter for KVM is enabled::

    $ cat /sys/module/kvm_intel/parameters/nested
    Y

For AMD hosts, the process is the same as above, except that the module
name is ``kvm-amd``.


Additional nested-related kernel parameters (x86)
-------------------------------------------------

If your hardware is sufficiently advanced (Intel Haswell processor or
higher, which has newer hardware virt extensions), the following
additional features will also be enabled by default: "Shadow VMCS
(Virtual Machine Control Structure)", APIC Virtualization on your bare
metal host (L0). Parameters for Intel hosts::

    $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
    Y

    $ cat /sys/module/kvm_intel/parameters/enable_apicv
    Y

    $ cat /sys/module/kvm_intel/parameters/ept
    Y

.. note:: If you suspect your L2 (i.e. nested guest) is running slower,
          ensure the above are enabled (particularly
          ``enable_shadow_vmcs`` and ``ept``).


Starting a nested guest (x86)
-----------------------------

Once your bare metal host (L0) is configured for nesting, you should be
able to start an L1 guest with::

    $ qemu-kvm -cpu host [...]

The above will pass through the host CPU's capabilities as-is to the
guest; or, for better live migration compatibility, use a named CPU
model supported by QEMU, e.g.::

    $ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on

then the guest hypervisor will subsequently be capable of running a
nested guest with accelerated KVM.


Enabling "nested" (s390x)
-------------------------

1. On the host hypervisor (L0), enable the ``nested`` parameter on
   s390x::

    $ rmmod kvm
    $ modprobe kvm nested=1

.. note:: On s390x, the kernel parameter ``hpage`` is mutually exclusive
          with the ``nested`` parameter — i.e. to be able to enable
          ``nested``, the ``hpage`` parameter *must* be disabled.

2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
   feature — with QEMU, this can be done by using "host passthrough"
   (via the command-line ``-cpu host``).

3. Now the KVM module can be loaded in the L1 (guest hypervisor)::

    $ modprobe kvm


Live migration with nested KVM
------------------------------

Migrating an L1 guest, with a *live* nested guest in it, to another
bare metal host, works as of Linux kernel 5.3 and QEMU 4.2.0 for
Intel x86 systems, and even on older versions for s390x.

On AMD systems, once an L1 guest has started an L2 guest, the L1 guest
should no longer be migrated or saved (refer to QEMU documentation on
"savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate
or save-and-load an L1 guest while an L2 guest is running will result in
undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a
kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1
guest can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not
actually running L2 guests, is expected to function normally even on AMD
systems but may fail once guests are started.

Migrating an L2 guest is always expected to succeed, so all the following
scenarios should work even on AMD systems:

- Migrating a nested guest (L2) to another L1 guest on the *same* bare
  metal host.

- Migrating a nested guest (L2) to another L1 guest on a *different*
  bare metal host.

- Migrating a nested guest (L2) to a bare metal host.


Reporting bugs from nested setups
---------------------------------

Debugging "nested" problems can involve sifting through log files across
L0, L1 and L2; this can result in tedious back-and-forth between the bug
reporter and the bug fixer.

- Mention that you are in a "nested" setup. If you are running any kind
  of "nesting" at all, say so. Unfortunately, this needs to be called
  out because when reporting bugs, people tend to forget to even
  *mention* that they're using nested virtualization.

- Ensure you are actually running KVM on KVM. Sometimes people do not
  have KVM enabled for their guest hypervisor (L1), which results in
  them running with pure emulation (what QEMU calls "TCG") while
  thinking they're running nested KVM — that is, confusing "nested Virt"
  (which could also mean QEMU on KVM) with "nested KVM" (KVM on KVM).

Information to collect (generic)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is not an exhaustive list, but a very good starting point:

  - Kernel, libvirt, and QEMU version from L0

  - Kernel, libvirt and QEMU version from L1

  - QEMU command-line of L1 -- when using libvirt, you'll find it here:
    ``/var/log/libvirt/qemu/instance.log``

  - QEMU command-line of L2 -- as above, when using libvirt, get the
    complete libvirt-generated QEMU command-line

  - ``cat /proc/cpuinfo`` from L0

  - ``cat /proc/cpuinfo`` from L1

  - ``lscpu`` from L0

  - ``lscpu`` from L1

  - Full ``dmesg`` output from L0

  - Full ``dmesg`` output from L1

x86-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Both the below commands, ``x86info`` and ``dmidecode``, should be
available on most Linux distributions with the same name:

  - Output of: ``x86info -a`` from L0

  - Output of: ``x86info -a`` from L1

  - Output of: ``dmidecode`` from L0

  - Output of: ``dmidecode`` from L1

s390x-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Along with the earlier mentioned generic details, the below is
also recommended:

  - ``/proc/sysinfo`` from L1; this will also include the info from L0
MAINTAINERS (+4, -14)

···
 S:	Maintained
 W:	http://btrfs.wiki.kernel.org/
 Q:	http://patchwork.kernel.org/project/linux-btrfs/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
 F:	Documentation/filesystems/btrfs.rst
 F:	fs/btrfs/
 F:	include/linux/btrfs*
···
 CEPH COMMON CODE (LIBCEPH)
 M:	Ilya Dryomov <idryomov@gmail.com>
 M:	Jeff Layton <jlayton@kernel.org>
-M:	Sage Weil <sage@redhat.com>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 T:	git git://github.com/ceph/ceph-client.git
 F:	include/linux/ceph/
 F:	include/linux/crush/
···

 CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
 M:	Jeff Layton <jlayton@kernel.org>
-M:	Sage Weil <sage@redhat.com>
 M:	Ilya Dryomov <idryomov@gmail.com>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 T:	git git://github.com/ceph/ceph-client.git
 F:	Documentation/filesystems/ceph.rst
 F:	fs/ceph/
···
 DYNAMIC INTERRUPT MODERATION
 M:	Tal Gilboa <talgi@mellanox.com>
 S:	Maintained
+F:	Documentation/networking/net_dim.rst
 F:	include/linux/dim.h
 F:	lib/dim/
-F:	Documentation/networking/net_dim.rst

 DZ DECSTATION DZ11 SERIAL DRIVER
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
···
 GENERIC PHY FRAMEWORK
 M:	Kishon Vijay Abraham I <kishon@ti.com>
+M:	Vinod Koul <vkoul@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git
 F:	Documentation/devicetree/bindings/phy/
 F:	drivers/phy/
 F:	include/linux/phy/
···
 L:	platform-driver-x86@vger.kernel.org
 S:	Orphan
 F:	drivers/platform/x86/tc1100-wmi.c
-
-HP100:	Driver for HP 10/100 Mbit/s Voice Grade Network Adapter Series
-M:	Jaroslav Kysela <perex@perex.cz>
-S:	Obsolete
-F:	drivers/staging/hp/hp100.*

 HPET:	High Precision Event Timers driver
 M:	Clemens Ladisch <clemens@ladisch.de>
···
 RADOS BLOCK DEVICE (RBD)
 M:	Ilya Dryomov <idryomov@gmail.com>
-M:	Sage Weil <sage@redhat.com>
 R:	Dongsheng Yang <dongsheng.yang@easystack.cn>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 T:	git git://github.com/ceph/ceph-client.git
 F:	Documentation/ABI/testing/sysfs-bus-rbd
 F:	drivers/block/rbd.c
Makefile (+12, -5)

···
 VERSION = 5
 PATCHLEVEL = 7
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
 NAME = Kleptomaniac Octopus

 # *DOCUMENTATION*
···
 KBUILD_CFLAGS += -Os
 endif

-ifdef CONFIG_CC_DISABLE_WARN_MAYBE_UNINITIALIZED
-KBUILD_CFLAGS += -Wno-maybe-uninitialized
-endif
-
 # Tell gcc to never replace conditional load with a non-conditional one
 KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
 KBUILD_CFLAGS += $(call cc-option,-fno-allow-store-data-races)
···

 # disable stringop warnings in gcc 8+
 KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
+
+# We'll want to enable this eventually, but it's not going away for 5.7 at least
+KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds)
+KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
+KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow)
+
+# Another good warning that we'll want to enable eventually
+KBUILD_CFLAGS += $(call cc-disable-warning, restrict)
+
+# Enabled with W=2, disabled by default as noisy
+KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized)

 # disable invalid "can't wrap" optimizations for signed / pointers
 KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
arch/arm/crypto/chacha-glue.c (+11, -3)

···
 		return;
 	}

-	kernel_neon_begin();
-	chacha_doneon(state, dst, src, bytes, nrounds);
-	kernel_neon_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_neon_begin();
+		chacha_doneon(state, dst, src, todo, nrounds);
+		kernel_neon_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
arch/arm/crypto/nhpoly1305-neon-glue.c (+1, -1)

···
 		return crypto_nhpoly1305_update(desc, src, srclen);

 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);

 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
arch/arm/crypto/poly1305-glue.c (+11, -4)

···
 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);

 		if (static_branch_likely(&have_neon) && do_neon) {
-			kernel_neon_begin();
-			poly1305_blocks_neon(&dctx->h, src, len, 1);
-			kernel_neon_end();
+			do {
+				unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+				kernel_neon_begin();
+				poly1305_blocks_neon(&dctx->h, src, todo, 1);
+				kernel_neon_end();
+
+				len -= todo;
+				src += todo;
+			} while (len);
 		} else {
 			poly1305_blocks_arm(&dctx->h, src, len, 1);
+			src += len;
 		}
-		src += len;
 		nbytes %= POLY1305_BLOCK_SIZE;
 	}
arch/arm/include/asm/futex.h (+7, -2)

···
 	preempt_enable();
 #endif

-	if (!ret)
-		*oval = oldval;
+	/*
+	 * Store unconditionally. If ret != 0 the extra store is the least
+	 * of the worries but GCC cannot figure out that __futex_atomic_op()
+	 * is either setting ret to -EFAULT or storing the old value in
+	 * oldval which results in an uninitialized warning at the call site.
+	 */
+	*oval = oldval;

 	return ret;
 }
arch/arm64/crypto/chacha-neon-glue.c (+11, -3)

···
 	    !crypto_simd_usable())
 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);

-	kernel_neon_begin();
-	chacha_doneon(state, dst, src, bytes, nrounds);
-	kernel_neon_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_neon_begin();
+		chacha_doneon(state, dst, src, todo, nrounds);
+		kernel_neon_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
arch/arm64/crypto/nhpoly1305-neon-glue.c (+1, -1)

···
 		return crypto_nhpoly1305_update(desc, src, srclen);

 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);

 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
arch/arm64/crypto/poly1305-glue.c (+11, -4)

···
 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);

 		if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
-			kernel_neon_begin();
-			poly1305_blocks_neon(&dctx->h, src, len, 1);
-			kernel_neon_end();
+			do {
+				unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+				kernel_neon_begin();
+				poly1305_blocks_neon(&dctx->h, src, todo, 1);
+				kernel_neon_end();
+
+				len -= todo;
+				src += todo;
+			} while (len);
 		} else {
 			poly1305_blocks(&dctx->h, src, len, 1);
+			src += len;
 		}
-		src += len;
 		nbytes %= POLY1305_BLOCK_SIZE;
 	}
···
 	}

 	memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
+
+	if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) {
+		int i;
+
+		for (i = 0; i < 16; i++)
+			*vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i);
+	}
 out:
 	return err;
 }
arch/arm64/kvm/hyp/entry.S (+23)

···

 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
+#define CPU_SP_EL0_OFFSET	(CPU_XREG_OFFSET(30) + 8)

 	.text
 	.pushsection	.hyp.text, "ax"
···
 	ldp	x29, lr,  [\ctxt, #CPU_XREG_OFFSET(29)]
 .endm

+.macro save_sp_el0 ctxt, tmp
+	mrs	\tmp,	sp_el0
+	str	\tmp,	[\ctxt, #CPU_SP_EL0_OFFSET]
+.endm
+
+.macro restore_sp_el0 ctxt, tmp
+	ldr	\tmp,	[\ctxt, #CPU_SP_EL0_OFFSET]
+	msr	sp_el0, \tmp
+.endm
+
 /*
  * u64 __guest_enter(struct kvm_vcpu *vcpu,
  *		     struct kvm_cpu_context *host_ctxt);
···
 	// Store the host regs
 	save_callee_saved_regs x1
+
+	// Save the host's sp_el0
+	save_sp_el0	x1, x2

 	// Now the host state is stored if we have a pending RAS SError it must
 	// affect the host. If any asynchronous exception is pending we defer
···
 	// as it may cause Pointer Authentication key signing mismatch errors
 	// when this feature is enabled for kernel code.
 	ptrauth_switch_to_guest x29, x0, x1, x2
+
+	// Restore the guest's sp_el0
+	restore_sp_el0 x29, x0

 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x29, #CPU_XREG_OFFSET(0)]
···
 	// Store the guest regs x18-x29, lr
 	save_callee_saved_regs x1

+	// Store the guest's sp_el0
+	save_sp_el0 x1, x2
+
 	get_host_ctxt	x2, x3

 	// Macro ptrauth_switch_to_guest format:
···
 	// as it may cause Pointer Authentication key signing mismatch errors
 	// when this feature is enabled for kernel code.
 	ptrauth_switch_to_host x1, x2, x3, x4, x5
+
+	// Restore the host's sp_el0
+	restore_sp_el0 x2, x3

 	// Now restore the host regs
 	restore_callee_saved_regs x2
arch/arm64/kvm/hyp/hyp-entry.S (-1)

···
 .macro invalid_vector	label, target = __hyp_panic
 	.align	2
 SYM_CODE_START(\label)
-\label:
 	b \target
 SYM_CODE_END(\label)
 .endm
arch/arm64/kvm/hyp/sysreg-sr.c (+3, -14)

···
 /*
  * Non-VHE: Both host and guest must save everything.
  *
- * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate,
- * which are handled as part of the el2 return state) on every switch.
+ * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
+ * pstate, which are handled as part of the el2 return state) on every
+ * switch (sp_el0 is being dealt with in the assembly code).
  * tpidr_el0 and tpidrro_el0 only need to be switched when going
  * to host userspace or a different VCPU.  EL1 registers only need to be
  * switched when potentially going to run a different VCPU.  The latter two
···
 static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1);
-
-	/*
-	 * The host arm64 Linux uses sp_el0 to point to 'current' and it must
-	 * therefore be saved/restored on every entry/exit to/from the guest.
-	 */
-	ctxt->gp_regs.regs.sp = read_sysreg(sp_el0);
 }

 static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
···
 static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
 {
 	write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1);
-
-	/*
-	 * The host arm64 Linux uses sp_el0 to point to 'current' and it must
-	 * therefore be saved/restored on every entry/exit to/from the guest.
-	 */
-	write_sysreg(ctxt->gp_regs.regs.sp, sp_el0);
 }

 static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
···
 	case KVM_CAP_IOEVENTFD:
 	case KVM_CAP_DEVICE_CTRL:
 	case KVM_CAP_IMMEDIATE_EXIT:
+	case KVM_CAP_SET_GUEST_DEBUG:
 		r = 1;
 		break;
 	case KVM_CAP_PPC_GUEST_DEBUG_SSTEP:
arch/riscv/Kconfig (+1, -1)

···
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
-	select ARCH_HAS_STRICT_KERNEL_RWX
+	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
 	select SPARSEMEM_STATIC if 32BIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
···
  * Copyright (C) 2017 SiFive
  */

+#include <linux/bitmap.h>
 #include <linux/of.h>
 #include <asm/processor.h>
 #include <asm/hwcap.h>
···
 #include <asm/switch_to.h>

 unsigned long elf_hwcap __read_mostly;
+
+/* Host ISA bitmap */
+static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
+
 #ifdef CONFIG_FPU
 bool has_fpu __read_mostly;
 #endif
+
+/**
+ * riscv_isa_extension_base() - Get base extension word
+ *
+ * @isa_bitmap: ISA bitmap to use
+ * Return: base extension word as unsigned long value
+ *
+ * NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
+ */
+unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap)
+{
+	if (!isa_bitmap)
+		return riscv_isa[0];
+	return isa_bitmap[0];
+}
+EXPORT_SYMBOL_GPL(riscv_isa_extension_base);
+
+/**
+ * __riscv_isa_extension_available() - Check whether given extension
+ * is available or not
+ *
+ * @isa_bitmap: ISA bitmap to use
+ * @bit: bit position of the desired extension
+ * Return: true or false
+ *
+ * NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
+ */
+bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit)
+{
+	const unsigned long *bmap = (isa_bitmap) ? isa_bitmap : riscv_isa;
+
+	if (bit >= RISCV_ISA_EXT_MAX)
+		return false;
+
+	return test_bit(bit, bmap) ? true : false;
+}
+EXPORT_SYMBOL_GPL(__riscv_isa_extension_available);

 void riscv_fill_hwcap(void)
 {
 	struct device_node *node;
 	const char *isa;
-	size_t i;
+	char print_str[BITS_PER_LONG + 1];
+	size_t i, j, isa_len;
 	static unsigned long isa2hwcap[256] = {0};

 	isa2hwcap['i'] = isa2hwcap['I'] = COMPAT_HWCAP_ISA_I;
···

 	elf_hwcap = 0;

+	bitmap_zero(riscv_isa, RISCV_ISA_EXT_MAX);
+
 	for_each_of_cpu_node(node) {
 		unsigned long this_hwcap = 0;
+		unsigned long this_isa = 0;

 		if (riscv_of_processor_hartid(node) < 0)
 			continue;
···
 			continue;
 		}

-		for (i = 0; i < strlen(isa); ++i)
+		i = 0;
+		isa_len = strlen(isa);
+#if IS_ENABLED(CONFIG_32BIT)
+		if (!strncmp(isa, "rv32", 4))
+			i += 4;
+#elif IS_ENABLED(CONFIG_64BIT)
+		if (!strncmp(isa, "rv64", 4))
+			i += 4;
+#endif
+		for (; i < isa_len; ++i) {
 			this_hwcap |= isa2hwcap[(unsigned char)(isa[i])];
+			/*
+			 * TODO: X, Y and Z extension parsing for Host ISA
+			 * bitmap will be added in-future.
+			 */
+			if ('a' <= isa[i] && isa[i] < 'x')
+				this_isa |= (1UL << (isa[i] - 'a'));
+		}

 		/*
 		 * All "okay" hart should have same isa. Set HWCAP based on
···
 			elf_hwcap &= this_hwcap;
 		else
 			elf_hwcap = this_hwcap;
+
+		if (riscv_isa[0])
+			riscv_isa[0] &= this_isa;
+		else
+			riscv_isa[0] = this_isa;
 	}

 	/* We don't support systems with F but without D, so mask those out
···
 		elf_hwcap &= ~COMPAT_HWCAP_ISA_F;
 	}

-	pr_info("elf_hwcap is 0x%lx\n", elf_hwcap);
+	memset(print_str, 0, sizeof(print_str));
+	for (i = 0, j = 0; i < BITS_PER_LONG; i++)
+		if (riscv_isa[0] & BIT_MASK(i))
+			print_str[j++] = (char)('a' + i);
+	pr_info("riscv: ISA extensions %s\n", print_str);
+
+	memset(print_str, 0, sizeof(print_str));
+	for (i = 0, j = 0; i < BITS_PER_LONG; i++)
+		if (elf_hwcap & BIT_MASK(i))
+			print_str[j++] = (char)('a' + i);
+	pr_info("riscv: ELF capabilities %s\n", print_str);

 #ifdef CONFIG_FPU
 	if (elf_hwcap & (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D))
arch/riscv/kernel/sbi.c (+10, -7)

···
 {
 	sbi_ecall(SBI_EXT_0_1_SHUTDOWN, 0, 0, 0, 0, 0, 0, 0);
 }
-EXPORT_SYMBOL(sbi_set_timer);
+EXPORT_SYMBOL(sbi_shutdown);

 /**
  * sbi_clear_ipi() - Clear any pending IPIs for the calling hart.
···
 {
 	sbi_ecall(SBI_EXT_0_1_CLEAR_IPI, 0, 0, 0, 0, 0, 0, 0);
 }
-EXPORT_SYMBOL(sbi_shutdown);
+EXPORT_SYMBOL(sbi_clear_ipi);

 /**
  * sbi_set_timer_v01() - Program the timer for next timer event.
···

 	return result;
 }
+
+static void sbi_set_power_off(void)
+{
+	pm_power_off = sbi_shutdown;
+}
 #else
 static void __sbi_set_timer_v01(uint64_t stime_value)
 {
···

 	return 0;
 }
+
+static void sbi_set_power_off(void) {}
 #endif /* CONFIG_RISCV_SBI_V01 */

 static void __sbi_set_timer_v02(uint64_t stime_value)
···
 	return __sbi_base_ecall(SBI_EXT_BASE_GET_IMP_VERSION);
 }

-static void sbi_power_off(void)
-{
-	sbi_shutdown();
-}
-
 int __init sbi_init(void)
 {
 	int ret;

-	pm_power_off = sbi_power_off;
+	sbi_set_power_off();
 	ret = sbi_get_spec_version();
 	if (ret > 0)
 		sbi_spec_version = ret;
···
 #include <linux/stacktrace.h>
 #include <linux/ftrace.h>

+register unsigned long sp_in_global __asm__("sp");
+
 #ifdef CONFIG_FRAME_POINTER

 struct stackframe {
 	unsigned long fp;
 	unsigned long ra;
 };
-
-register unsigned long sp_in_global __asm__("sp");

 void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
 			     bool (*fn)(unsigned long, void *), void *arg)
arch/riscv/kernel/vdso/Makefile (+4, -4)

···
 vdso-syms += flush_icache

 # Files to link into the vdso
-obj-vdso = $(patsubst %, %.o, $(vdso-syms))
+obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o

 # Build rules
 targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o
···
 	$(call if_changed,vdsold)

 # We also create a special relocatable object that should mirror the symbol
-# table and layout of the linked DSO. With ld -R we can then refer to
-# these symbols in the kernel code rather than hand-coded addresses.
+# table and layout of the linked DSO. With ld --just-symbols we can then
+# refer to these symbols in the kernel code rather than hand-coded addresses.

 SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
 	-Wl,--build-id -Wl,--hash-style=both
 $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
 	$(call if_changed,vdsold)

-LDFLAGS_vdso-syms.o := -r -R
+LDFLAGS_vdso-syms.o := -r --just-symbols
 $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
 	$(call if_changed,ld)
arch/riscv/kernel/vdso/note.S (+12, new file)

+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
+ * Here we can supply some information useful to userland.
+ */
+
+#include <linux/elfnote.h>
+#include <linux/version.h>
+
+ELFNOTE_START(Linux, 0, "a")
+	.long LINUX_VERSION_CODE
+ELFNOTE_END
···
 	case KVM_CAP_S390_AIS:
 	case KVM_CAP_S390_AIS_MIGRATION:
 	case KVM_CAP_S390_VCPU_RESETS:
+	case KVM_CAP_SET_GUEST_DEBUG:
 		r = 1;
 		break;
 	case KVM_CAP_S390_HPAGE_1M:
arch/s390/kvm/priv.c (+3, -1)

···
 	 * available for the guest are AQIC and TAPQ with the t bit set
 	 * since we do not set IC.3 (FIII) we currently will only intercept
 	 * the AQIC function code.
+	 * Note: running nested under z/VM can result in intercepts for other
+	 * function codes, e.g. PQAP(QCI). We do not support this and bail out.
 	 */
 	reg0 = vcpu->run->s.regs.gprs[0];
 	fc = (reg0 >> 24) & 0xff;
-	if (WARN_ON_ONCE(fc != 0x03))
+	if (fc != 0x03)
 		return -EOPNOTSUPP;

 	/* PQAP instruction is allowed for guest kernel only */
arch/s390/lib/uaccess.c (+4)

···
 {
 	mm_segment_t old_fs;
 	unsigned long asce, cr;
+	unsigned long flags;

 	old_fs = current->thread.mm_segment;
 	if (old_fs & 1)
 		return old_fs;
+	/* protect against a concurrent page table upgrade */
+	local_irq_save(flags);
 	current->thread.mm_segment |= 1;
 	asce = S390_lowcore.kernel_asce;
 	if (likely(old_fs == USER_DS)) {
···
 		__ctl_load(asce, 7, 7);
 		set_cpu_flag(CIF_ASCE_SECONDARY);
 	}
+	local_irq_restore(flags);
 	return old_fs;
 }
 EXPORT_SYMBOL(enable_sacf_uaccess);
+14-2
arch/s390/mm/pgalloc.c
···
 {
 	struct mm_struct *mm = arg;

-	if (current->active_mm == mm)
-		set_user_asce(mm);
+	/* we must change all active ASCEs to avoid the creation of new TLBs */
+	if (current->active_mm == mm) {
+		S390_lowcore.user_asce = mm->context.asce;
+		if (current->thread.mm_segment == USER_DS) {
+			__ctl_load(S390_lowcore.user_asce, 1, 1);
+			/* Mark user-ASCE present in CR1 */
+			clear_cpu_flag(CIF_ASCE_PRIMARY);
+		}
+		if (current->thread.mm_segment == USER_DS_SACF) {
+			__ctl_load(S390_lowcore.user_asce, 7, 7);
+			/* enable_sacf_uaccess does all or nothing */
+			WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
+		}
+	}
 	__tlb_flush_local();
 }
+4-6
arch/x86/crypto/blake2s-glue.c
···
 			      const u32 inc)
 {
 	/* SIMD disables preemption, so relax after processing each page. */
-	BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
+	BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);

 	if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
 		blake2s_compress_generic(state, block, nblocks, inc);
 		return;
 	}

-	for (;;) {
+	do {
 		const size_t blocks = min_t(size_t, nblocks,
-					    PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
+					    SZ_4K / BLAKE2S_BLOCK_SIZE);

 		kernel_fpu_begin();
 		if (IS_ENABLED(CONFIG_AS_AVX512) &&
···
 		kernel_fpu_end();

 		nblocks -= blocks;
-		if (!nblocks)
-			break;
 		block += blocks * BLAKE2S_BLOCK_SIZE;
-	}
+	} while (nblocks);
 }
 EXPORT_SYMBOL(blake2s_compress_arch);
+11-3
arch/x86/crypto/chacha_glue.c
···
 	    bytes <= CHACHA_BLOCK_SIZE)
 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);

-	kernel_fpu_begin();
-	chacha_dosimd(state, dst, src, bytes, nrounds);
-	kernel_fpu_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_fpu_begin();
+		chacha_dosimd(state, dst, src, todo, nrounds);
+		kernel_fpu_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
+1-1
arch/x86/crypto/nhpoly1305-avx2-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);

 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);

 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
+1-1
arch/x86/crypto/nhpoly1305-sse2-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);

 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);

 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
+6-7
arch/x86/crypto/poly1305_glue.c
···
 	struct poly1305_arch_internal *state = ctx;

 	/* SIMD disables preemption, so relax after processing each page. */
-	BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
-		     PAGE_SIZE % POLY1305_BLOCK_SIZE);
+	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
+		     SZ_4K % POLY1305_BLOCK_SIZE);

 	if (!static_branch_likely(&poly1305_use_avx) ||
 	    (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
···
 		return;
 	}

-	for (;;) {
-		const size_t bytes = min_t(size_t, len, PAGE_SIZE);
+	do {
+		const size_t bytes = min_t(size_t, len, SZ_4K);

 		kernel_fpu_begin();
 		if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
···
 		else
 			poly1305_blocks_avx(ctx, inp, bytes, padbit);
 		kernel_fpu_end();
+
 		len -= bytes;
-		if (!len)
-			break;
 		inp += bytes;
-	}
+	} while (len);
 }

 static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
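The crypto glue fixes above all apply the same pattern: bound how much data is processed per `kernel_fpu_begin()`/`kernel_fpu_end()` section by splitting the input into fixed 4 KiB chunks (`SZ_4K`, deliberately independent of `PAGE_SIZE`), so preemption is never disabled for too long. A minimal userspace C sketch of that chunking loop, with a stand-in `process_chunk()` instead of the SIMD compression function (all names here are hypothetical, not the kernel's):

```c
#include <stddef.h>

/*
 * Hypothetical sketch of the bounded-chunk processing pattern used by
 * the SIMD glue fixes: handle at most CHUNK bytes per iteration. In
 * the kernel, each iteration would be bracketed by kernel_fpu_begin()
 * and kernel_fpu_end() so preemption is only held off briefly.
 */
#define CHUNK 4096  /* mirrors SZ_4K: fixed, independent of the page size */

static size_t processed;  /* stand-in for the real SIMD work's effect */

static void process_chunk(const unsigned char *p, size_t n)
{
	(void)p;
	processed += n;
}

static void process_all(const unsigned char *p, size_t len)
{
	/* do/while mirrors the kernel loops: callers pass len > 0 */
	do {
		size_t todo = len < CHUNK ? len : CHUNK;  /* min_t() */

		process_chunk(p, todo);  /* FPU section in the kernel */
		len -= todo;
		p += todo;
	} while (len);
}
```

The `do { ... } while` form also removes the mid-loop `if (!nblocks) break;` exit the old code needed, which is exactly the restructuring the blake2s and poly1305 hunks perform.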
+21-19
arch/x86/entry/calling.h
···
 #define SIZEOF_PTREGS	21*8

 .macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
-	/*
-	 * Push registers and sanitize registers of values that a
-	 * speculation attack might otherwise want to exploit. The
-	 * lower registers are likely clobbered well before they
-	 * could be put to use in a speculative execution gadget.
-	 * Interleave XOR with PUSH for better uop scheduling:
-	 */
 	.if \save_ret
 	pushq	%rsi		/* pt_regs->si */
 	movq	8(%rsp), %rsi	/* temporarily store the return address in %rsi */
···
 	pushq	%rsi		/* pt_regs->si */
 	.endif
 	pushq	\rdx		/* pt_regs->dx */
-	xorl	%edx, %edx	/* nospec dx */
 	pushq	%rcx		/* pt_regs->cx */
-	xorl	%ecx, %ecx	/* nospec cx */
 	pushq	\rax		/* pt_regs->ax */
 	pushq	%r8		/* pt_regs->r8 */
-	xorl	%r8d, %r8d	/* nospec r8 */
 	pushq	%r9		/* pt_regs->r9 */
-	xorl	%r9d, %r9d	/* nospec r9 */
 	pushq	%r10		/* pt_regs->r10 */
-	xorl	%r10d, %r10d	/* nospec r10 */
 	pushq	%r11		/* pt_regs->r11 */
-	xorl	%r11d, %r11d	/* nospec r11*/
 	pushq	%rbx		/* pt_regs->rbx */
-	xorl	%ebx, %ebx	/* nospec rbx*/
 	pushq	%rbp		/* pt_regs->rbp */
-	xorl	%ebp, %ebp	/* nospec rbp*/
 	pushq	%r12		/* pt_regs->r12 */
-	xorl	%r12d, %r12d	/* nospec r12*/
 	pushq	%r13		/* pt_regs->r13 */
-	xorl	%r13d, %r13d	/* nospec r13*/
 	pushq	%r14		/* pt_regs->r14 */
-	xorl	%r14d, %r14d	/* nospec r14*/
 	pushq	%r15		/* pt_regs->r15 */
-	xorl	%r15d, %r15d	/* nospec r15*/
 	UNWIND_HINT_REGS
+
 	.if \save_ret
 	pushq	%rsi		/* return address on top of stack */
 	.endif
+
+	/*
+	 * Sanitize registers of values that a speculation attack might
+	 * otherwise want to exploit. The lower registers are likely clobbered
+	 * well before they could be put to use in a speculative execution
+	 * gadget.
+	 */
+	xorl	%edx,  %edx	/* nospec dx  */
+	xorl	%ecx,  %ecx	/* nospec cx  */
+	xorl	%r8d,  %r8d	/* nospec r8  */
+	xorl	%r9d,  %r9d	/* nospec r9  */
+	xorl	%r10d, %r10d	/* nospec r10 */
+	xorl	%r11d, %r11d	/* nospec r11 */
+	xorl	%ebx,  %ebx	/* nospec rbx */
+	xorl	%ebp,  %ebp	/* nospec rbp */
+	xorl	%r12d, %r12d	/* nospec r12 */
+	xorl	%r13d, %r13d	/* nospec r13 */
+	xorl	%r14d, %r14d	/* nospec r14 */
+	xorl	%r15d, %r15d	/* nospec r15 */
+
 .endm

 .macro POP_REGS pop_rdi=1 skip_r11rcx=0
+7-7
arch/x86/entry/entry_64.S
···
 	 */
 syscall_return_via_sysret:
 	/* rcx and r11 are already restored (see code above) */
-	UNWIND_HINT_EMPTY
 	POP_REGS pop_rdi=0 skip_r11rcx=1

 	/*
···
 	 */
 	movq	%rsp, %rdi
 	movq	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
+	UNWIND_HINT_EMPTY

 	pushq	RSP-RDI(%rdi)	/* RSP */
 	pushq	(%rdi)		/* RDI */
···
  * %rdi: prev task
  * %rsi: next task
  */
-SYM_CODE_START(__switch_to_asm)
-	UNWIND_HINT_FUNC
+SYM_FUNC_START(__switch_to_asm)
 	/*
 	 * Save callee-saved registers
 	 * This must match the order in inactive_task_frame
···
 	popq	%rbp

 	jmp	__switch_to
-SYM_CODE_END(__switch_to_asm)
+SYM_FUNC_END(__switch_to_asm)

 /*
  * A newly forked process directly context switches into this address.
···
  * +----------------------------------------------------+
  */
 SYM_CODE_START(interrupt_entry)
-	UNWIND_HINT_FUNC
+	UNWIND_HINT_IRET_REGS offset=16
 	ASM_CLAC
 	cld
···
 	pushq	5*8(%rdi)	/* regs->eflags */
 	pushq	4*8(%rdi)	/* regs->cs */
 	pushq	3*8(%rdi)	/* regs->ip */
+	UNWIND_HINT_IRET_REGS
 	pushq	2*8(%rdi)	/* regs->orig_ax */
 	pushq	8(%rdi)		/* return address */
-	UNWIND_HINT_FUNC

 	movq	(%rdi), %rdi
 	jmp	2f
···
 	 */
 	movq	%rsp, %rdi
 	movq	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
+	UNWIND_HINT_EMPTY

 	/* Copy the IRET frame to the trampoline stack. */
 	pushq	6*8(%rdi)	/* SS */
···

 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rax
 	leaq	-PTREGS_SIZE(%rax), %rsp
-	UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE
+	UNWIND_HINT_REGS

 	call	do_exit
 SYM_CODE_END(rewind_stack_do_exit)
+10-2
arch/x86/hyperv/hv_init.c
···
 	struct page *pg;

 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	pg = alloc_page(GFP_KERNEL);
+	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
+	pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
 	if (unlikely(!pg))
 		return -ENOMEM;
 	*input_arg = page_address(pg);
···
 static int hv_suspend(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;

 	/*
 	 * Reset the hypercall page as it is going to be invalidated
···
 	hypercall_msr.enable = 0;
 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);

-	return 0;
+	ret = hv_cpu_die(0);
+	return ret;
 }

 static void hv_resume(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;
+
+	ret = hv_cpu_init(0);
+	WARN_ON(ret);

 	/* Re-enable the hypercall page */
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
···
 	hv_hypercall_pg_saved = NULL;
 }

+/* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
 static struct syscore_ops hv_syscore_ops = {
 	.suspend	= hv_suspend,
 	.resume		= hv_resume,
+3-2
arch/x86/include/asm/ftrace.h
···
 {
 	/*
 	 * Compare the symbol name with the system call name. Skip the
-	 * "__x64_sys", "__ia32_sys" or simple "sys" prefix.
+	 * "__x64_sys", "__ia32_sys", "__do_sys" or simple "sys" prefix.
 	 */
 	return !strcmp(sym + 3, name + 3) ||
 		(!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) ||
-		(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3));
+		(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)) ||
+		(!strncmp(sym, "__do_sys", 8) && !strcmp(sym + 8, name + 3));
 }

 #ifndef COMPILE_OFFSETS
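The comparison above works by skipping the arch prefix on the symbol and the `"sys"` prefix on the name, then comparing the remaining `"_foo"` tails. A self-contained userspace sketch of the same logic (the wrapper function name `syscall_match` is ours, not the kernel's):

```c
#include <string.h>

/*
 * Userspace sketch of the symbol-matching logic from the ftrace.h fix:
 * match a kernel symbol against a "sys_foo" syscall name, accepting the
 * "__x64_sys", "__ia32_sys", "__do_sys" or plain "sys" prefixes.
 *
 * name always starts with "sys_", so name + 3 is the "_foo" tail; the
 * offsets 3, 9, 10 and 8 each skip a prefix up to (but not including)
 * the same "_foo" tail in sym.
 */
static int syscall_match(const char *sym, const char *name)
{
	return !strcmp(sym + 3, name + 3) ||
	       (!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) ||
	       (!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)) ||
	       (!strncmp(sym, "__do_sys", 8) && !strcmp(sym + 8, name + 3));
}
```

The new `"__do_sys"` arm is what the patch adds: inlined syscall bodies keep the `__do_sys_` prefix, and without this arm they could not be matched to their syscall name.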
+2-2
arch/x86/include/asm/kvm_host.h
···
 static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 {
 	/* We can only post Fixed and LowPrio IRQs */
-	return (irq->delivery_mode == dest_Fixed ||
-		irq->delivery_mode == dest_LowestPrio);
+	return (irq->delivery_mode == APIC_DM_FIXED ||
+		irq->delivery_mode == APIC_DM_LOWEST);
 }

 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
···
 	 * According to Intel, MFENCE can do the serialization here.
 	 */
 	asm volatile("mfence" : : : "memory");
-
-	printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
 	return;
 }
···
 };
 static DEFINE_PER_CPU(struct clock_event_device, lapic_events);

-static u32 hsx_deadline_rev(void)
+static __init u32 hsx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x02: return 0x3a; /* EP */
···
 	return ~0U;
 }

-static u32 bdx_deadline_rev(void)
+static __init u32 bdx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x02: return 0x00000011;
···
 	return ~0U;
 }

-static u32 skx_deadline_rev(void)
+static __init u32 skx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x03: return 0x01000136;
···
 	return ~0U;
 }

-static const struct x86_cpu_id deadline_match[] = {
+static const struct x86_cpu_id deadline_match[] __initconst = {
 	X86_MATCH_INTEL_FAM6_MODEL( HASWELL_X,		&hsx_deadline_rev),
 	X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_X,	0x0b000020),
 	X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_D,	&bdx_deadline_rev),
···
 	{},
 };

-static void apic_check_deadline_errata(void)
+static __init bool apic_validate_deadline_timer(void)
 {
 	const struct x86_cpu_id *m;
 	u32 rev;

-	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) ||
-	    boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		return;
+	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
+		return false;
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return true;

 	m = x86_match_cpu(deadline_match);
 	if (!m)
-		return;
+		return true;

 	/*
 	 * Function pointers will have the MSB set due to address layout,
···
 		rev = (u32)m->driver_data;

 	if (boot_cpu_data.microcode >= rev)
-		return;
+		return true;

 	setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
 	pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
 	       "please update microcode to version: 0x%x (or later)\n", rev);
+	return false;
 }

 /*
···
 {
 	unsigned int new_apicid;

-	apic_check_deadline_errata();
+	if (apic_validate_deadline_timer())
+		pr_debug("TSC deadline timer available\n");

 	if (x2apic_mode) {
 		boot_cpu_physical_apicid = read_apic_id();
+2-1
arch/x86/kernel/dumpstack_64.c
···
 	 */
 	if (visit_mask) {
 		if (*visit_mask & (1UL << info->type)) {
-			printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
+			if (task == current)
+				printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
 			goto unknown;
 		}
 		*visit_mask |= 1UL << info->type;
+3
arch/x86/kernel/unwind_frame.c
···
 	if (IS_ENABLED(CONFIG_X86_32))
 		goto the_end;

+	if (state->task != current)
+		goto the_end;
+
 	if (state->regs) {
 		printk_deferred_once(KERN_WARNING
 			"WARNING: kernel stack regs at %p in %s:%d has bad 'bp' value %p\n",
+74-39
arch/x86/kernel/unwind_orc.c
···
 #include <asm/orc_lookup.h>

 #define orc_warn(fmt, ...) \
-	printk_deferred_once(KERN_WARNING pr_fmt("WARNING: " fmt), ##__VA_ARGS__)
+	printk_deferred_once(KERN_WARNING "WARNING: " fmt, ##__VA_ARGS__)
+
+#define orc_warn_current(args...)					\
+({									\
+	if (state->task == current)					\
+		orc_warn(args);						\
+})

 extern int __start_orc_unwind_ip[];
 extern int __stop_orc_unwind_ip[];
 extern struct orc_entry __start_orc_unwind[];
 extern struct orc_entry __stop_orc_unwind[];

-static DEFINE_MUTEX(sort_mutex);
-int *cur_orc_ip_table = __start_orc_unwind_ip;
-struct orc_entry *cur_orc_table = __start_orc_unwind;
-
-unsigned int lookup_num_blocks;
-bool orc_init;
+static bool orc_init __ro_after_init;
+static unsigned int lookup_num_blocks __ro_after_init;

 static inline unsigned long orc_ip(const int *ip)
 {
···
 {
 	static struct orc_entry *orc;

-	if (!orc_init)
-		return NULL;
-
 	if (ip == 0)
 		return &null_orc_entry;
···
 }

 #ifdef CONFIG_MODULES
+
+static DEFINE_MUTEX(sort_mutex);
+static int *cur_orc_ip_table = __start_orc_unwind_ip;
+static struct orc_entry *cur_orc_table = __start_orc_unwind;

 static void orc_sort_swap(void *_a, void *_b, int size)
 {
···
 	return true;
 }

+/*
+ * If state->regs is non-NULL, and points to a full pt_regs, just get the reg
+ * value from state->regs.
+ *
+ * Otherwise, if state->regs just points to IRET regs, and the previous frame
+ * had full regs, it's safe to get the value from the previous regs. This can
+ * happen when early/late IRQ entry code gets interrupted by an NMI.
+ */
+static bool get_reg(struct unwind_state *state, unsigned int reg_off,
+		    unsigned long *val)
+{
+	unsigned int reg = reg_off/8;
+
+	if (!state->regs)
+		return false;
+
+	if (state->full_regs) {
+		*val = ((unsigned long *)state->regs)[reg];
+		return true;
+	}
+
+	if (state->prev_regs) {
+		*val = ((unsigned long *)state->prev_regs)[reg];
+		return true;
+	}
+
+	return false;
+}
+
 bool unwind_next_frame(struct unwind_state *state)
 {
-	unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp;
+	unsigned long ip_p, sp, tmp, orig_ip = state->ip, prev_sp = state->sp;
 	enum stack_type prev_type = state->stack_info.type;
 	struct orc_entry *orc;
 	bool indirect = false;
···
 		break;

 	case ORC_REG_R10:
-		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg R10 at ip %pB\n",
-				 (void *)state->ip);
+		if (!get_reg(state, offsetof(struct pt_regs, r10), &sp)) {
+			orc_warn_current("missing R10 value at %pB\n",
+					 (void *)state->ip);
 			goto err;
 		}
-		sp = state->regs->r10;
 		break;

 	case ORC_REG_R13:
-		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg R13 at ip %pB\n",
-				 (void *)state->ip);
+		if (!get_reg(state, offsetof(struct pt_regs, r13), &sp)) {
+			orc_warn_current("missing R13 value at %pB\n",
+					 (void *)state->ip);
 			goto err;
 		}
-		sp = state->regs->r13;
 		break;

 	case ORC_REG_DI:
-		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg DI at ip %pB\n",
-				 (void *)state->ip);
+		if (!get_reg(state, offsetof(struct pt_regs, di), &sp)) {
+			orc_warn_current("missing RDI value at %pB\n",
+					 (void *)state->ip);
 			goto err;
 		}
-		sp = state->regs->di;
 		break;

 	case ORC_REG_DX:
-		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg DX at ip %pB\n",
-				 (void *)state->ip);
+		if (!get_reg(state, offsetof(struct pt_regs, dx), &sp)) {
+			orc_warn_current("missing DX value at %pB\n",
+					 (void *)state->ip);
 			goto err;
 		}
-		sp = state->regs->dx;
 		break;

 	default:
-		orc_warn("unknown SP base reg %d for ip %pB\n",
+		orc_warn("unknown SP base reg %d at %pB\n",
 			 orc->sp_reg, (void *)state->ip);
 		goto err;
 	}
···

 		state->sp = sp;
 		state->regs = NULL;
+		state->prev_regs = NULL;
 		state->signal = false;
 		break;

 	case ORC_TYPE_REGS:
 		if (!deref_stack_regs(state, sp, &state->ip, &state->sp)) {
-			orc_warn("can't dereference registers at %p for ip %pB\n",
-				 (void *)sp, (void *)orig_ip);
+			orc_warn_current("can't access registers at %pB\n",
+					 (void *)orig_ip);
 			goto err;
 		}

 		state->regs = (struct pt_regs *)sp;
+		state->prev_regs = NULL;
 		state->full_regs = true;
 		state->signal = true;
 		break;

 	case ORC_TYPE_REGS_IRET:
 		if (!deref_stack_iret_regs(state, sp, &state->ip, &state->sp)) {
-			orc_warn("can't dereference iret registers at %p for ip %pB\n",
-				 (void *)sp, (void *)orig_ip);
+			orc_warn_current("can't access iret registers at %pB\n",
+					 (void *)orig_ip);
 			goto err;
 		}

+		if (state->full_regs)
+			state->prev_regs = state->regs;
 		state->regs = (void *)sp - IRET_FRAME_OFFSET;
 		state->full_regs = false;
 		state->signal = true;
 		break;

 	default:
-		orc_warn("unknown .orc_unwind entry type %d for ip %pB\n",
+		orc_warn("unknown .orc_unwind entry type %d at %pB\n",
 			 orc->type, (void *)orig_ip);
-		break;
+		goto err;
 	}

 	/* Find BP: */
 	switch (orc->bp_reg) {
 	case ORC_REG_UNDEFINED:
-		if (state->regs && state->full_regs)
-			state->bp = state->regs->bp;
+		if (get_reg(state, offsetof(struct pt_regs, bp), &tmp))
+			state->bp = tmp;
 		break;

 	case ORC_REG_PREV_SP:
···
 	if (state->stack_info.type == prev_type &&
 	    on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) &&
 	    state->sp <= prev_sp) {
-		orc_warn("stack going in the wrong direction? ip=%pB\n",
-			 (void *)orig_ip);
+		orc_warn_current("stack going in the wrong direction? at %pB\n",
+				 (void *)orig_ip);
 		goto err;
 	}
···
 void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		    struct pt_regs *regs, unsigned long *first_frame)
 {
+	if (!orc_init)
+		goto done;
+
 	memset(state, 0, sizeof(*state));
 	state->task = task;
···
 	/* Otherwise, skip ahead to the user-specified starting frame: */
 	while (!unwind_done(state) &&
 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
-	       state->sp <= (unsigned long)first_frame))
+	       state->sp < (unsigned long)first_frame))
 		unwind_next_frame(state);

 	return;
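The new `get_reg()` helper in the ORC unwinder diff above encodes a two-level lookup: prefer the current frame's registers when they are a full `pt_regs`, otherwise fall back to the previous frame's full registers if only partial IRET registers are available. A self-contained userspace sketch of that fallback logic, with simplified hypothetical types in place of `struct unwind_state` and `struct pt_regs`:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of the get_reg() fallback pattern: a register value is taken
 * from the current frame's regs if they are complete, else from the
 * previous frame's saved full regs, else the lookup fails. All type
 * and field names here are illustrative, not the kernel's.
 */
struct regs_sketch {
	unsigned long r10;
	unsigned long r13;
};

struct state_sketch {
	struct regs_sketch *regs;       /* current frame's regs, may be partial */
	struct regs_sketch *prev_regs;  /* previous frame's full regs, if any */
	bool full_regs;                 /* are state->regs complete? */
};

static bool get_reg(struct state_sketch *st, size_t reg_off,
		    unsigned long *val)
{
	if (!st->regs)
		return false;

	if (st->full_regs) {
		*val = *(unsigned long *)((char *)st->regs + reg_off);
		return true;
	}

	if (st->prev_regs) {
		*val = *(unsigned long *)((char *)st->prev_regs + reg_off);
		return true;
	}

	return false;
}
```

The point of the fallback is the NMI-during-IRQ-entry case the diff's comment describes: the innermost frame only has IRET registers, but the value needed for unwinding still exists in the previous frame's full register dump.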
+5-5
arch/x86/kvm/ioapic.c
···
 	}

 	/*
-	 * AMD SVM AVIC accelerate EOI write and do not trap,
-	 * in-kernel IOAPIC will not be able to receive the EOI.
-	 * In this case, we do lazy update of the pending EOI when
-	 * trying to set IOAPIC irq.
+	 * AMD SVM AVIC accelerate EOI write iff the interrupt is edge
+	 * triggered, in which case the in-kernel IOAPIC will not be able
+	 * to receive the EOI. In this case, we do a lazy update of the
+	 * pending EOI when trying to set IOAPIC irq.
 	 */
-	if (kvm_apicv_activated(ioapic->kvm))
+	if (edge && kvm_apicv_activated(ioapic->kvm))
 		ioapic_lazy_update_eoi(ioapic, irq);

 	/*
···
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE

+	/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
+	or $1, %_ASM_AX
+
 	pop %_ASM_AX
 .Lvmexit_skip_rsb:
 #endif
+6-15
arch/x86/kvm/x86.c
···
 	__reserved_bits;				\
 })

-static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
-{
-	u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);
-
-	if (kvm_cpu_cap_has(X86_FEATURE_LA57))
-		reserved_bits &= ~X86_CR4_LA57;
-
-	if (kvm_cpu_cap_has(X86_FEATURE_UMIP))
-		reserved_bits &= ~X86_CR4_UMIP;
-
-	return reserved_bits;
-}
-
 static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	if (cr4 & cr4_reserved_bits)
···
 	case KVM_CAP_GET_MSR_FEATURES:
 	case KVM_CAP_MSR_PLATFORM_INFO:
 	case KVM_CAP_EXCEPTION_PAYLOAD:
+	case KVM_CAP_SET_GUEST_DEBUG:
 		r = 1;
 		break;
 	case KVM_CAP_SYNC_REGS:
···
 	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
 		supported_xss = 0;

-	cr4_reserved_bits = kvm_host_cr4_reserved_bits(&boot_cpu_data);
+#define __kvm_cpu_cap_has(UNUSED_, f) kvm_cpu_cap_has(f)
+	cr4_reserved_bits = __cr4_reserved_bits(__kvm_cpu_cap_has, UNUSED_);
+#undef __kvm_cpu_cap_has

 	if (kvm_has_tsc_control) {
 		/*
···

 	WARN_ON(!irqs_disabled());

-	if (kvm_host_cr4_reserved_bits(c) != cr4_reserved_bits)
+	if (__cr4_reserved_bits(cpu_has, c) !=
+	    __cr4_reserved_bits(cpu_has, &boot_cpu_data))
 		return -EIO;

 	return ops->check_processor_compatibility();
+8-4
arch/x86/mm/pat/set_memory.c
···
 	unsigned long	pfn;
 	unsigned int	flags;
 	unsigned int	force_split		: 1,
-			force_static_prot	: 1;
+			force_static_prot	: 1,
+			force_flush_all		: 1;
 	struct page	**pages;
 };
···
 		return;
 	}

-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
+	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
 		flush_tlb_all();
+	else
+		on_each_cpu(__cpa_flush_tlb, cpa, 1);

 	if (!cache)
 		return;
···
 	alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 	alias_cpa.curpage = 0;

+	cpa->force_flush_all = 1;
+
 	ret = __change_page_attr_set_clr(&alias_cpa, 0);
 	if (ret)
 		return ret;
···
 	alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 	alias_cpa.curpage = 0;

+	cpa->force_flush_all = 1;
 	/*
 	 * The high mapping range is imprecise, so ignore the
 	 * return value.
+4-2
block/bfq-iosched.c
···
 #include <linux/ioprio.h>
 #include <linux/sbitmap.h>
 #include <linux/delay.h>
+#include <linux/backing-dev.h>

 #include "blk.h"
 #include "blk-mq.h"
···
 	ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
 	switch (ioprio_class) {
 	default:
-		dev_err(bfqq->bfqd->queue->backing_dev_info->dev,
-			"bfq: bad prio class %d\n", ioprio_class);
+		pr_err("bdi %s: bfq: bad prio class %d\n",
+			bdi_dev_name(bfqq->bfqd->queue->backing_dev_info),
+			ioprio_class);
 		/* fall through */
 	case IOPRIO_CLASS_NONE:
 		/*
+1-1
block/blk-cgroup.c
···
 {
 	/* some drivers (floppy) instantiate a queue w/o disk registered */
 	if (blkg->q->backing_dev_info->dev)
-		return dev_name(blkg->q->backing_dev_info->dev);
+		return bdi_dev_name(blkg->q->backing_dev_info);
 	return NULL;
 }
+71-46
block/blk-iocost.c
···
 	 */
 	atomic64_t			vtime;
 	atomic64_t			done_vtime;
-	atomic64_t			abs_vdebt;
+	u64				abs_vdebt;
 	u64				last_vtime;

 	/*
···
 	struct iocg_wake_ctx ctx = { .iocg = iocg };
 	u64 margin_ns = (u64)(ioc->period_us *
 			      WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC;
-	u64 abs_vdebt, vdebt, vshortage, expires, oexpires;
+	u64 vdebt, vshortage, expires, oexpires;
 	s64 vbudget;
 	u32 hw_inuse;
···
 	vbudget = now->vnow - atomic64_read(&iocg->vtime);

 	/* pay off debt */
-	abs_vdebt = atomic64_read(&iocg->abs_vdebt);
-	vdebt = abs_cost_to_cost(abs_vdebt, hw_inuse);
+	vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
 	if (vdebt && vbudget > 0) {
 		u64 delta = min_t(u64, vbudget, vdebt);
 		u64 abs_delta = min(cost_to_abs_cost(delta, hw_inuse),
-				    abs_vdebt);
+				    iocg->abs_vdebt);

 		atomic64_add(delta, &iocg->vtime);
 		atomic64_add(delta, &iocg->done_vtime);
-		atomic64_sub(abs_delta, &iocg->abs_vdebt);
-		if (WARN_ON_ONCE(atomic64_read(&iocg->abs_vdebt) < 0))
-			atomic64_set(&iocg->abs_vdebt, 0);
+		iocg->abs_vdebt -= abs_delta;
 	}

 	/*
···
 	u64 expires, oexpires;
 	u32 hw_inuse;

+	lockdep_assert_held(&iocg->waitq.lock);
+
 	/* debt-adjust vtime */
 	current_hweight(iocg, NULL, &hw_inuse);
-	vtime += abs_cost_to_cost(atomic64_read(&iocg->abs_vdebt), hw_inuse);
+	vtime += abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);

-	/* clear or maintain depending on the overage */
-	if (time_before_eq64(vtime, now->vnow)) {
+	/*
+	 * Clear or maintain depending on the overage. Non-zero vdebt is what
+	 * guarantees that @iocg is online and future iocg_kick_delay() will
+	 * clear use_delay. Don't leave it on when there's no vdebt.
+	 */
+	if (!iocg->abs_vdebt || time_before_eq64(vtime, now->vnow)) {
 		blkcg_clear_delay(blkg);
 		return false;
 	}
···
 {
 	struct ioc_gq *iocg = container_of(timer, struct ioc_gq, delay_timer);
 	struct ioc_now now;
+	unsigned long flags;

+	spin_lock_irqsave(&iocg->waitq.lock, flags);
 	ioc_now(iocg->ioc, &now);
 	iocg_kick_delay(iocg, &now, 0);
+	spin_unlock_irqrestore(&iocg->waitq.lock, flags);

 	return HRTIMER_NORESTART;
 }
···
 	 * should have woken up in the last period and expire idle iocgs.
 	 */
 	list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
-		if (!waitqueue_active(&iocg->waitq) &&
-		    !atomic64_read(&iocg->abs_vdebt) && !iocg_is_idle(iocg))
+		if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
+		    !iocg_is_idle(iocg))
 			continue;

 		spin_lock(&iocg->waitq.lock);

-		if (waitqueue_active(&iocg->waitq) ||
-		    atomic64_read(&iocg->abs_vdebt)) {
+		if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt) {
 			/* might be oversleeping vtime / hweight changes, kick */
 			iocg_kick_waitq(iocg, &now);
 			iocg_kick_delay(iocg, &now, 0);
···
 	 * tests are racy but the races aren't systemic - we only miss once
 	 * in a while which is fine.
 	 */
-	if (!waitqueue_active(&iocg->waitq) &&
-	    !atomic64_read(&iocg->abs_vdebt) &&
+	if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
 	    time_before_eq64(vtime + cost, now.vnow)) {
 		iocg_commit_bio(iocg, bio, cost);
 		return;
 	}

 	/*
-	 * We're over budget. If @bio has to be issued regardless,
-	 * remember the abs_cost instead of advancing vtime.
-	 * iocg_kick_waitq() will pay off the debt before waking more IOs.
+	 * We activated above but w/o any synchronization. Deactivation is
+	 * synchronized with waitq.lock and we won't get deactivated as long
+	 * as we're waiting or has debt, so we're good if we're activated
+	 * here. In the unlikely case that we aren't, just issue the IO.
+	 */
+	spin_lock_irq(&iocg->waitq.lock);
+
+	if (unlikely(list_empty(&iocg->active_list))) {
+		spin_unlock_irq(&iocg->waitq.lock);
+		iocg_commit_bio(iocg, bio, cost);
+		return;
+	}
+
+	/*
+	 * We're over budget. If @bio has to be issued regardless, remember
+	 * the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay
+	 * off the debt before waking more IOs.
+	 *
 	 * This way, the debt is continuously paid off each period with the
-	 * actual budget available to the cgroup. If we just wound vtime,
-	 * we would incorrectly use the current hw_inuse for the entire
-	 * amount which, for example, can lead to the cgroup staying
-	 * blocked for a long time even with substantially raised hw_inuse.
+	 * actual budget available to the cgroup. If we just wound vtime, we
+	 * would incorrectly use the current hw_inuse for the entire amount
+	 * which, for example, can lead to the cgroup staying blocked for a
+	 * long time even with substantially raised hw_inuse.
+	 *
+	 * An iocg with vdebt should stay online so that the timer can keep
+	 * deducting its vdebt and [de]activate use_delay mechanism
+	 * accordingly. We don't want to race against the timer trying to
+	 * clear them and leave @iocg inactive w/ dangling use_delay heavily
+	 * penalizing the cgroup and its descendants.
 	 */
 	if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) {
-		atomic64_add(abs_cost, &iocg->abs_vdebt);
+		iocg->abs_vdebt += abs_cost;
 		if (iocg_kick_delay(iocg, &now, cost))
 			blkcg_schedule_throttle(rqos->q,
 					(bio->bi_opf & REQ_SWAP) == REQ_SWAP);
+		spin_unlock_irq(&iocg->waitq.lock);
 		return;
 	}
···
 	 * All waiters are on iocg->waitq and the wait states are
 	 * synchronized using waitq.lock.
 	 */
-	spin_lock_irq(&iocg->waitq.lock);
-
-	/*
-	 * We activated above but w/o any synchronization. Deactivation is
-	 * synchronized with waitq.lock and we won't get deactivated as
-	 * long as we're waiting, so we're good if we're activated here.
-	 * In the unlikely case that we are deactivated, just issue the IO.
-	 */
-	if (unlikely(list_empty(&iocg->active_list))) {
-		spin_unlock_irq(&iocg->waitq.lock);
-		iocg_commit_bio(iocg, bio, cost);
-		return;
-	}
-
 	init_waitqueue_func_entry(&wait.wait, iocg_wake_fn);
 	wait.wait.private = current;
 	wait.bio = bio;
···
 	struct ioc_now now;
 	u32 hw_inuse;
 	u64 abs_cost, cost;
+	unsigned long flags;

 	/* bypass if disabled or for root cgroup */
 	if (!ioc->enabled || !iocg->level)
···
 	iocg->cursor = bio_end;

 	/*
-	 * Charge if there's enough vtime budget and the existing request
-	 * has cost assigned. Otherwise, account it as debt. See debt
-	 * handling in ioc_rqos_throttle() for details.
+	 * Charge if there's enough vtime budget and the existing request has
+	 * cost assigned.
 	 */
 	if (rq->bio && rq->bio->bi_iocost_cost &&
-	    time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow))
+	    time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) {
 		iocg_commit_bio(iocg, bio, cost);
-	else
-		atomic64_add(abs_cost, &iocg->abs_vdebt);
+		return;
+	}
+
+	/*
+	 * Otherwise, account it as debt if @iocg is online, which it should
+	 * be for the vast majority of cases. See debt handling in
+	 * ioc_rqos_throttle() for details.
+	 */
+	spin_lock_irqsave(&iocg->waitq.lock, flags);
+	if (likely(!list_empty(&iocg->active_list))) {
+		iocg->abs_vdebt += abs_cost;
+		iocg_kick_delay(iocg, &now, cost);
+	} else {
+		iocg_commit_bio(iocg, bio, cost);
+	}
+	spin_unlock_irqrestore(&iocg->waitq.lock, flags);
 }

 static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
···
 	iocg->ioc = ioc;
 	atomic64_set(&iocg->vtime, now.vnow);
 	atomic64_set(&iocg->done_vtime, now.vnow);
-	atomic64_set(&iocg->abs_vdebt, 0);
 	atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period));
 	INIT_LIST_HEAD(&iocg->active_list);
 	iocg->hweight_active = HWEIGHT_WHOLE;
+1 -1
block/partitions/core.c
···
 
     if (!disk_part_scan_enabled(disk))
         return 0;
-    if (bdev->bd_part_count || bdev->bd_openers > 1)
+    if (bdev->bd_part_count)
         return -EBUSY;
     res = invalidate_partition(disk, 0);
     if (res)
···
     return fw_devlink_flags;
 }
 
+static bool fw_devlink_is_permissive(void)
+{
+    return fw_devlink_flags == DL_FLAG_SYNC_STATE_ONLY;
+}
+
 /**
  * device_add - add device to device hierarchy.
  * @dev: device.
···
     if (fw_devlink_flags && is_fwnode_dev &&
         fwnode_has_op(dev->fwnode, add_links)) {
         fw_ret = fwnode_call_int_op(dev->fwnode, add_links, dev);
-        if (fw_ret == -ENODEV)
+        if (fw_ret == -ENODEV && !fw_devlink_is_permissive())
             device_link_wait_for_mandatory_supplier(dev);
         else if (fw_ret)
             device_link_wait_for_optional_supplier(dev);
+8 -12
drivers/base/dd.c
···
 }
 DEFINE_SHOW_ATTRIBUTE(deferred_devs);
 
-#ifdef CONFIG_MODULES
-/*
- * In the case of modules, set the default probe timeout to
- * 30 seconds to give userland some time to load needed modules
- */
-int driver_deferred_probe_timeout = 30;
-#else
-/* In the case of !modules, no probe timeout needed */
-int driver_deferred_probe_timeout = -1;
-#endif
+int driver_deferred_probe_timeout;
 EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
+static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue);
 
 static int __init deferred_probe_timeout_setup(char *str)
 {
···
         return -ENODEV;
     }
 
-    if (!driver_deferred_probe_timeout) {
-        dev_WARN(dev, "deferred probe timeout, ignoring dependency");
+    if (!driver_deferred_probe_timeout && initcalls_done) {
+        dev_warn(dev, "deferred probe timeout, ignoring dependency");
         return -ETIMEDOUT;
     }
 
···
 
     list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
         dev_info(private->device, "deferred probe pending");
+    wake_up(&probe_timeout_waitqueue);
 }
 static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
 
···
  */
 void wait_for_device_probe(void)
 {
+    /* wait for probe timeout */
+    wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout);
+
     /* wait for the deferred probe workqueue to finish */
     flush_work(&deferred_probe_work);
 
+2
drivers/base/platform.c
···
  */
 static void setup_pdev_dma_masks(struct platform_device *pdev)
 {
+    pdev->dev.dma_parms = &pdev->dma_parms;
+
     if (!pdev->dev.coherent_dma_mask)
         pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
     if (!pdev->dev.dma_mask) {
+78 -8
drivers/block/virtio_blk.c
···
 } ____cacheline_aligned_in_smp;
 
 struct virtio_blk {
+    /*
+     * This mutex must be held by anything that may run after
+     * virtblk_remove() sets vblk->vdev to NULL.
+     *
+     * blk-mq, virtqueue processing, and sysfs attribute code paths are
+     * shut down before vblk->vdev is set to NULL and therefore do not need
+     * to hold this mutex.
+     */
+    struct mutex vdev_mutex;
     struct virtio_device *vdev;
 
     /* The disk structure for the kernel. */
···
     /* Process context for config space updates */
     struct work_struct config_work;
+
+    /*
+     * Tracks references from block_device_operations open/release and
+     * virtio_driver probe/remove so this object can be freed once no
+     * longer in use.
+     */
+    refcount_t refs;
 
     /* What host tells us, plus 2 for header & tailer. */
     unsigned int sg_elems;
···
     return err;
 }
 
+static void virtblk_get(struct virtio_blk *vblk)
+{
+    refcount_inc(&vblk->refs);
+}
+
+static void virtblk_put(struct virtio_blk *vblk)
+{
+    if (refcount_dec_and_test(&vblk->refs)) {
+        ida_simple_remove(&vd_index_ida, vblk->index);
+        mutex_destroy(&vblk->vdev_mutex);
+        kfree(vblk);
+    }
+}
+
+static int virtblk_open(struct block_device *bd, fmode_t mode)
+{
+    struct virtio_blk *vblk = bd->bd_disk->private_data;
+    int ret = 0;
+
+    mutex_lock(&vblk->vdev_mutex);
+
+    if (vblk->vdev)
+        virtblk_get(vblk);
+    else
+        ret = -ENXIO;
+
+    mutex_unlock(&vblk->vdev_mutex);
+    return ret;
+}
+
+static void virtblk_release(struct gendisk *disk, fmode_t mode)
+{
+    struct virtio_blk *vblk = disk->private_data;
+
+    virtblk_put(vblk);
+}
+
 /* We provide getgeo only to please some old bootloader/partitioning tools */
 static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
 {
     struct virtio_blk *vblk = bd->bd_disk->private_data;
+    int ret = 0;
+
+    mutex_lock(&vblk->vdev_mutex);
+
+    if (!vblk->vdev) {
+        ret = -ENXIO;
+        goto out;
+    }
 
     /* see if the host passed in geometry config */
     if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
···
         geo->sectors = 1 << 5;
         geo->cylinders = get_capacity(bd->bd_disk) >> 11;
     }
-    return 0;
+out:
+    mutex_unlock(&vblk->vdev_mutex);
+    return ret;
 }
 
 static const struct block_device_operations virtblk_fops = {
     .owner = THIS_MODULE,
+    .open = virtblk_open,
+    .release = virtblk_release,
     .getgeo = virtblk_getgeo,
 };
 
···
         goto out_free_index;
     }
 
+    /* This reference is dropped in virtblk_remove(). */
+    refcount_set(&vblk->refs, 1);
+    mutex_init(&vblk->vdev_mutex);
+
     vblk->vdev = vdev;
     vblk->sg_elems = sg_elems;
 
···
 static void virtblk_remove(struct virtio_device *vdev)
 {
     struct virtio_blk *vblk = vdev->priv;
-    int index = vblk->index;
-    int refc;
 
     /* Make sure no work handler is accessing the device. */
     flush_work(&vblk->config_work);
···
 
     blk_mq_free_tag_set(&vblk->tag_set);
 
+    mutex_lock(&vblk->vdev_mutex);
+
     /* Stop all the virtqueues. */
     vdev->config->reset(vdev);
 
-    refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
+    /* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
+    vblk->vdev = NULL;
+
     put_disk(vblk->disk);
     vdev->config->del_vqs(vdev);
     kfree(vblk->vqs);
-    kfree(vblk);
 
-    /* Only free device id if we don't have any users */
-    if (refc == 1)
-        ida_simple_remove(&vd_index_ida, index);
+    mutex_unlock(&vblk->vdev_mutex);
+
+    virtblk_put(vblk);
 }
 
 #ifdef CONFIG_PM_SLEEP
+3 -4
drivers/bus/mhi/core/init.c
···
     if (!mhi_cntrl)
         return -EINVAL;
 
-    if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
-        return -EINVAL;
-
-    if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
+    if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
+        !mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
+        !mhi_cntrl->write_reg)
         return -EINVAL;
 
     ret = parse_config(mhi_cntrl, config);
···
 
     update_turbo_state();
     if (global.turbo_disabled) {
-        pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
+        pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");
         mutex_unlock(&intel_pstate_limits_lock);
         mutex_unlock(&intel_pstate_driver_lock);
         return -EPERM;
+7 -3
drivers/crypto/caam/caamalg.c
···
     struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
     struct aead_edesc *edesc;
     int ecode = 0;
+    bool has_bklog;
 
     dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
     edesc = rctx->edesc;
+    has_bklog = edesc->bklog;
 
     if (err)
         ecode = caam_jr_strstatus(jrdev, err);
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         aead_request_complete(req, ecode);
     else
         crypto_finalize_aead_request(jrp->engine, req, ecode);
···
     struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
     int ivsize = crypto_skcipher_ivsize(skcipher);
     int ecode = 0;
+    bool has_bklog;
 
     dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
     edesc = rctx->edesc;
+    has_bklog = edesc->bklog;
     if (err)
         ecode = caam_jr_strstatus(jrdev, err);
 
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         skcipher_request_complete(req, ecode);
     else
         crypto_finalize_skcipher_request(jrp->engine, req, ecode);
···
 
     if (ivsize || mapped_dst_nents > 1)
         sg_to_sec4_set_last(edesc->sec4_sg + dst_sg_idx +
-                    mapped_dst_nents);
+                    mapped_dst_nents - 1 + !!ivsize);
 
     if (sec4_sg_bytes) {
         edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
+6 -2
drivers/crypto/caam/caamhash.c
···
     struct caam_hash_state *state = ahash_request_ctx(req);
     struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
     int ecode = 0;
+    bool has_bklog;
 
     dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
     edesc = state->edesc;
+    has_bklog = edesc->bklog;
 
     if (err)
         ecode = caam_jr_strstatus(jrdev, err);
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         req->base.complete(&req->base, ecode);
     else
         crypto_finalize_hash_request(jrp->engine, req, ecode);
···
     struct caam_hash_state *state = ahash_request_ctx(req);
     int digestsize = crypto_ahash_digestsize(ahash);
     int ecode = 0;
+    bool has_bklog;
 
     dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
     edesc = state->edesc;
+    has_bklog = edesc->bklog;
     if (err)
         ecode = caam_jr_strstatus(jrdev, err);
 
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         req->base.complete(&req->base, ecode);
     else
         crypto_finalize_hash_request(jrp->engine, req, ecode);
+6 -2
drivers/crypto/caam/caampkc.c
···
     struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
     struct rsa_edesc *edesc;
     int ecode = 0;
+    bool has_bklog;
 
     if (err)
         ecode = caam_jr_strstatus(dev, err);
 
     edesc = req_ctx->edesc;
+    has_bklog = edesc->bklog;
 
     rsa_pub_unmap(dev, edesc, req);
     rsa_io_unmap(dev, edesc, req);
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         akcipher_request_complete(req, ecode);
     else
         crypto_finalize_akcipher_request(jrp->engine, req, ecode);
···
     struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
     struct rsa_edesc *edesc;
     int ecode = 0;
+    bool has_bklog;
 
     if (err)
         ecode = caam_jr_strstatus(dev, err);
 
     edesc = req_ctx->edesc;
+    has_bklog = edesc->bklog;
 
     switch (key->priv_form) {
     case FORM1:
···
      * If no backlog flag, the completion of the request is done
      * by CAAM, not crypto engine.
      */
-    if (!edesc->bklog)
+    if (!has_bklog)
         akcipher_request_complete(req, ecode);
     else
         crypto_finalize_akcipher_request(jrp->engine, req, ecode);
+46 -37
drivers/crypto/chelsio/chcr_ktls.c
···
     return 0;
 }
 
-/*
- * chcr_write_cpl_set_tcb_ulp: update tcb values.
- * TCB is responsible to create tcp headers, so all the related values
- * should be correctly updated.
- * @tx_info - driver specific tls info.
- * @q - tx queue on which packet is going out.
- * @tid - TCB identifier.
- * @pos - current index where should we start writing.
- * @word - TCB word.
- * @mask - TCB word related mask.
- * @val - TCB word related value.
- * @reply - set 1 if looking for TP response.
- * return - next position to write.
- */
-static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
-                    struct sge_eth_txq *q, u32 tid,
-                    void *pos, u16 word, u64 mask,
+static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
+                    u32 tid, void *pos, u16 word, u64 mask,
                     u64 val, u32 reply)
 {
     struct cpl_set_tcb_field_core *cpl;
     struct ulptx_idata *idata;
     struct ulp_txpkt *txpkt;
-    void *save_pos = NULL;
-    u8 buf[48] = {0};
-    int left;
 
-    left = (void *)q->q.stat - pos;
-    if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
-        if (!left) {
-            pos = q->q.desc;
-        } else {
-            save_pos = pos;
-            pos = buf;
-        }
-    }
     /* ULP_TXPKT */
     txpkt = pos;
     txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0));
···
     idata = (struct ulptx_idata *)(cpl + 1);
     idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP));
     idata->len = htonl(0);
+    pos = idata + 1;
 
-    if (save_pos) {
-        pos = chcr_copy_to_txd(buf, &q->q, save_pos,
-                       CHCR_SET_TCB_FIELD_LEN);
-    } else {
-        /* check again if we are at the end of the queue */
-        if (left == CHCR_SET_TCB_FIELD_LEN)
+    return pos;
+}
+
+
+/*
+ * chcr_write_cpl_set_tcb_ulp: update tcb values.
+ * TCB is responsible to create tcp headers, so all the related values
+ * should be correctly updated.
+ * @tx_info - driver specific tls info.
+ * @q - tx queue on which packet is going out.
+ * @tid - TCB identifier.
+ * @pos - current index where should we start writing.
+ * @word - TCB word.
+ * @mask - TCB word related mask.
+ * @val - TCB word related value.
+ * @reply - set 1 if looking for TP response.
+ * return - next position to write.
+ */
+static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
+                    struct sge_eth_txq *q, u32 tid,
+                    void *pos, u16 word, u64 mask,
+                    u64 val, u32 reply)
+{
+    int left = (void *)q->q.stat - pos;
+
+    if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
+        if (!left) {
             pos = q->q.desc;
-        else
-            pos = idata + 1;
+        } else {
+            u8 buf[48] = {0};
+
+            __chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word,
+                             mask, val, reply);
+
+            return chcr_copy_to_txd(buf, &q->q, pos,
+                        CHCR_SET_TCB_FIELD_LEN);
+        }
     }
+
+    pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word,
+                       mask, val, reply);
+
+    /* check again if we are at the end of the queue */
+    if (left == CHCR_SET_TCB_FIELD_LEN)
+        pos = q->q.desc;
 
     return pos;
 }
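The refactor above isolates a common ring-buffer technique: when a fixed-size record would straddle the end of the hardware descriptor ring, build it in a small local buffer first and then copy it out in two pieces. A generic sketch of that wrap handling, with a hypothetical byte ring and `wrap_write` as an illustrative name (not the driver's API):

```c
#include <string.h>
#include <stddef.h>

#define RING_SIZE 16

/*
 * Write a fixed-size record at offset pos, bouncing through a local
 * staging buffer when it would straddle the end of the ring - the same
 * idea as building the CPL into buf[] and letting chcr_copy_to_txd()
 * split the copy. Returns the next write offset.
 */
static size_t wrap_write(unsigned char *ring, size_t pos,
			 const unsigned char *rec, size_t len)
{
	size_t left = RING_SIZE - pos;

	if (left >= len) {
		/* record fits contiguously before the end of the ring */
		memcpy(ring + pos, rec, len);
		pos += len;
		return pos == RING_SIZE ? 0 : pos;
	}

	/* split copy: tail of the ring first, then wrap to the start */
	memcpy(ring + pos, rec, left);
	memcpy(ring, rec + left, len - left);
	return len - left;
}
```

The driver's version goes one step further by formatting into the staging buffer only when a wrap is detected, so the common contiguous case writes descriptors in place.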
+4 -3
drivers/dma-buf/dma-buf.c
···
 
         return ret;
 
-    case DMA_BUF_SET_NAME:
+    case DMA_BUF_SET_NAME_A:
+    case DMA_BUF_SET_NAME_B:
         return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
     default:
···
  * calls attach() of dma_buf_ops to allow device-specific attach functionality
  * @dmabuf:    [in]    buffer to attach device to.
  * @dev:    [in]    device to be attached.
- * @importer_ops    [in]    importer operations for the attachment
- * @importer_priv    [in]    importer private pointer for the attachment
+ * @importer_ops:    [in]    importer operations for the attachment
+ * @importer_priv:    [in]    importer private pointer for the attachment
  *
  * Returns struct dma_buf_attachment pointer for this attachment. Attachments
  * must be cleaned up by calling dma_buf_detach().
+2 -1
drivers/dma/Kconfig
···
 
 config HISI_DMA
     tristate "HiSilicon DMA Engine support"
-    depends on ARM64 || (COMPILE_TEST && PCI_MSI)
+    depends on ARM64 || COMPILE_TEST
+    depends on PCI_MSI
     select DMA_ENGINE
     select DMA_VIRTUAL_CHANNELS
     help
+26 -34
drivers/dma/dmaengine.c
···
     struct dma_chan_dev *chan_dev;
 
     chan_dev = container_of(dev, typeof(*chan_dev), device);
-    if (atomic_dec_and_test(chan_dev->idr_ref)) {
-        ida_free(&dma_ida, chan_dev->dev_id);
-        kfree(chan_dev->idr_ref);
-    }
     kfree(chan_dev);
 }
 
···
 }
 
 static int __dma_async_device_channel_register(struct dma_device *device,
-                           struct dma_chan *chan,
-                           int chan_id)
+                           struct dma_chan *chan)
 {
     int rc = 0;
-    int chancnt = device->chancnt;
-    atomic_t *idr_ref;
-    struct dma_chan *tchan;
-
-    tchan = list_first_entry_or_null(&device->channels,
-                     struct dma_chan, device_node);
-    if (!tchan)
-        return -ENODEV;
-
-    if (tchan->dev) {
-        idr_ref = tchan->dev->idr_ref;
-    } else {
-        idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL);
-        if (!idr_ref)
-            return -ENOMEM;
-        atomic_set(idr_ref, 0);
-    }
 
     chan->local = alloc_percpu(typeof(*chan->local));
     if (!chan->local)
···
      * When the chan_id is a negative value, we are dynamically adding
      * the channel. Otherwise we are static enumerating.
      */
-    chan->chan_id = chan_id < 0 ? chancnt : chan_id;
+    mutex_lock(&device->chan_mutex);
+    chan->chan_id = ida_alloc(&device->chan_ida, GFP_KERNEL);
+    mutex_unlock(&device->chan_mutex);
+    if (chan->chan_id < 0) {
+        pr_err("%s: unable to alloc ida for chan: %d\n",
+               __func__, chan->chan_id);
+        goto err_out;
+    }
+
     chan->dev->device.class = &dma_devclass;
     chan->dev->device.parent = device->dev;
     chan->dev->chan = chan;
-    chan->dev->idr_ref = idr_ref;
     chan->dev->dev_id = device->dev_id;
-    atomic_inc(idr_ref);
     dev_set_name(&chan->dev->device, "dma%dchan%d",
              device->dev_id, chan->chan_id);
-
     rc = device_register(&chan->dev->device);
     if (rc)
-        goto err_out;
+        goto err_out_ida;
     chan->client_count = 0;
-    device->chancnt = chan->chan_id + 1;
+    device->chancnt++;
 
     return 0;
 
+ err_out_ida:
+    mutex_lock(&device->chan_mutex);
+    ida_free(&device->chan_ida, chan->chan_id);
+    mutex_unlock(&device->chan_mutex);
 err_out:
     free_percpu(chan->local);
     kfree(chan->dev);
-    if (atomic_dec_return(idr_ref) == 0)
-        kfree(idr_ref);
     return rc;
 }
 
···
 {
     int rc;
 
-    rc = __dma_async_device_channel_register(device, chan, -1);
+    rc = __dma_async_device_channel_register(device, chan);
     if (rc < 0)
         return rc;
 
···
     device->chancnt--;
     chan->dev->chan = NULL;
     mutex_unlock(&dma_list_mutex);
+    mutex_lock(&device->chan_mutex);
+    ida_free(&device->chan_ida, chan->chan_id);
+    mutex_unlock(&device->chan_mutex);
     device_unregister(&chan->dev->device);
     free_percpu(chan->local);
 }
···
  */
 int dma_async_device_register(struct dma_device *device)
 {
-    int rc, i = 0;
+    int rc;
     struct dma_chan* chan;
 
     if (!device)
···
     if (rc != 0)
         return rc;
 
+    mutex_init(&device->chan_mutex);
+    ida_init(&device->chan_ida);
+
     /* represent channels in sysfs. Probably want devs too */
     list_for_each_entry(chan, &device->channels, device_node) {
-        rc = __dma_async_device_channel_register(device, chan, i++);
+        rc = __dma_async_device_channel_register(device, chan);
         if (rc < 0)
             goto err_out;
     }
···
      */
     dma_cap_set(DMA_PRIVATE, device->cap_mask);
     dma_channel_rebalance();
+    ida_free(&dma_ida, device->dev_id);
     dma_device_put(device);
     mutex_unlock(&dma_list_mutex);
 }
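The dmaengine change above swaps a shared "next index" counter for a per-device `ida`, so channel ids are recycled after `dma_async_device_channel_unregister()` instead of growing forever. The contract `ida_alloc()`/`ida_free()` provide, always hand out the lowest free id, can be sketched with a toy fixed-capacity bitmap (a simplification: the kernel IDA is sparse, unbounded, and internally locked; `mini_ida_*` are illustrative names):

```c
#include <assert.h>

#define IDA_MAX 64

struct mini_ida {
	unsigned long long bits;   /* bit n set => id n is in use */
};

/* Allocate the lowest free id, like ida_alloc(); -1 when exhausted. */
static int mini_ida_alloc(struct mini_ida *ida)
{
	for (int id = 0; id < IDA_MAX; id++) {
		if (!(ida->bits & (1ULL << id))) {
			ida->bits |= 1ULL << id;
			return id;
		}
	}
	return -1;
}

/* Return an id to the pool, like ida_free(). */
static void mini_ida_free(struct mini_ida *ida, int id)
{
	ida->bits &= ~(1ULL << id);
}
```

With the old `chancnt`-based scheme, unregistering channel 1 and registering a new one would have produced a fresh, ever-increasing id; with IDA semantics the freed id 1 is handed out again.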
···
 static void tegra_dma_synchronize(struct dma_chan *dc)
 {
     struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
+    int err;
+
+    err = pm_runtime_get_sync(tdc->tdma->dev);
+    if (err < 0) {
+        dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);
+        return;
+    }
 
     /*
      * CPU, which handles interrupt, could be busy in
···
     wait_event(tdc->wq, tegra_dma_eoc_interrupt_deasserted(tdc));
 
     tasklet_kill(&tdc->tasklet);
+
+    pm_runtime_put(tdc->tdma->dev);
 }
 
 static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,
+1
drivers/dma/ti/k3-psil.c
···
         soc_ep_map = &j721e_ep_map;
     } else {
         pr_err("PSIL: No compatible machine found for map\n");
+        mutex_unlock(&ep_map_mutex);
         return ERR_PTR(-ENOTSUPP);
     }
     pr_debug("%s: Using map for %s\n", __func__, soc_ep_map->name);
+10 -10
drivers/dma/xilinx/xilinx_dma.c
···
         return ret;
 
     spin_lock_irqsave(&chan->lock, flags);
-
-    desc = list_last_entry(&chan->active_list,
-                   struct xilinx_dma_tx_descriptor, node);
-    /*
-     * VDMA and simple mode do not support residue reporting, so the
-     * residue field will always be 0.
-     */
-    if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
-        residue = xilinx_dma_get_residue(chan, desc);
-
+    if (!list_empty(&chan->active_list)) {
+        desc = list_last_entry(&chan->active_list,
+                       struct xilinx_dma_tx_descriptor, node);
+        /*
+         * VDMA and simple mode do not support residue reporting, so the
+         * residue field will always be 0.
+         */
+        if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
+            residue = xilinx_dma_get_residue(chan, desc);
+    }
     spin_unlock_irqrestore(&chan->lock, flags);
 
     dma_set_residue(txstate, residue);
+1 -1
drivers/firmware/efi/tpm.c
···
 int efi_tpm_final_log_size;
 EXPORT_SYMBOL(efi_tpm_final_log_size);
 
-static int tpm2_calc_event_log_size(void *data, int count, void *size_info)
+static int __init tpm2_calc_event_log_size(void *data, int count, void *size_info)
 {
     struct tcg_pcr_event2_head *header;
     int event_size, size = 0;
···
         hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
+void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)
+{
+    /* cursor lock is per MPCC tree, so only need to lock one pipe per stream */
+    if (!pipe || pipe->top_pipe)
+        return;
+
+    dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc,
+            pipe->stream_res.opp->inst, lock);
+}
+
 static bool wait_for_reset_trigger_to_occur(
         struct dc_context *dc_ctx,
         struct timing_generator *tg)
···
     .block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
                     mm ## block ## id ## _ ## reg_name
 
+#define VUPDATE_SRII(reg_name, block, id)\
+    .reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \
+                    mm ## reg_name ## _ ## block ## id
+
 /* NBIO */
 #define NBIO_BASE_INNER(seg) \
     NBIO_BASE__INST0_SEG ## seg
···
     return out;
 }
 
-
-bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
-        bool fast_validate)
+/*
+ * This must be noinline to ensure anything that deals with FP registers
+ * is contained within this call; previously our compiling with hard-float
+ * would result in fp instructions being emitted outside of the boundaries
+ * of the DC_FP_START/END macros, which makes sense as the compiler has no
+ * idea about what is wrapped and what is not
+ *
+ * This is largely just a workaround to avoid breakage introduced with 5.6,
+ * ideally all fp-using code should be moved into its own file, only that
+ * should be compiled with hard-float, and all code exported from there
+ * should be strictly wrapped with DC_FP_START/END
+ */
+static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc,
+        struct dc_state *context, bool fast_validate)
 {
     bool voltage_supported = false;
     bool full_pstate_supported = false;
     bool dummy_pstate_supported = false;
     double p_state_latency_us;
 
-    DC_FP_START();
     p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us;
     context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support =
         dc->debug.disable_dram_clock_change_vactive_support;
 
     if (fast_validate) {
-        voltage_supported = dcn20_validate_bandwidth_internal(dc, context, true);
-
-        DC_FP_END();
-        return voltage_supported;
+        return dcn20_validate_bandwidth_internal(dc, context, true);
     }
 
     // Best case, we support full UCLK switch latency
···
 
 restore_dml_state:
     context->bw_ctx.dml.soc.dram_clock_change_latency_us = p_state_latency_us;
+    return voltage_supported;
+}
 
+bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
+        bool fast_validate)
+{
+    bool voltage_supported = false;
+    DC_FP_START();
+    voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate);
     DC_FP_END();
     return voltage_supported;
 }
···
         struct mpcc_blnd_cfg *blnd_cfg,
         int mpcc_id);
 
+    /*
+     * Lock cursor updates for the specified OPP.
+     * OPP defines the set of MPCC that are locked together for cursor.
+     *
+     * Parameters:
+     * [in]    mpc        - MPC context.
+     * [in]    opp_id    - The OPP to lock cursor updates on
+     * [in]    lock    - lock/unlock the OPP
+     *
+     * Return:  void
+     */
+    void (*cursor_lock)(
+            struct mpc *mpc,
+            int opp_id,
+            bool lock);
+
     struct mpcc* (*get_mpcc_for_dpp)(
             struct mpc_tree *tree,
             int dpp_id);
···
 
     ret = request_firmware_direct(&fw, (const char *)fw_name,
                       drm_dev->dev);
-    if (ret < 0)
+    if (ret < 0) {
+        *revoked_ksv_cnt = 0;
+        *revoked_ksv_list = NULL;
+        ret = 0;
         goto exit;
+    }
 
     if (fw->size && fw->data)
         ret = drm_hdcp_srm_update(fw->data, fw->size, revoked_ksv_list,
···
 
     ret = drm_hdcp_request_srm(drm_dev, &revoked_ksv_list,
                    &revoked_ksv_cnt);
+    if (ret)
+        return ret;
 
     /* revoked_ksv_cnt will be zero when above function failed */
     for (i = 0; i < revoked_ksv_cnt; i++)
+20 -4
drivers/gpu/drm/i915/gem/i915_gem_tiling.c
···
                 int tiling_mode, unsigned int stride)
 {
     struct i915_ggtt *ggtt = &to_i915(obj->base.dev)->ggtt;
-    struct i915_vma *vma;
+    struct i915_vma *vma, *vn;
+    LIST_HEAD(unbind);
     int ret = 0;
 
     if (tiling_mode == I915_TILING_NONE)
         return 0;
 
     mutex_lock(&ggtt->vm.mutex);
+
+    spin_lock(&obj->vma.lock);
     for_each_ggtt_vma(vma, obj) {
+        GEM_BUG_ON(vma->vm != &ggtt->vm);
+
         if (i915_vma_fence_prepare(vma, tiling_mode, stride))
             continue;
 
-        ret = __i915_vma_unbind(vma);
-        if (ret)
-            break;
+        list_move(&vma->vm_link, &unbind);
     }
+    spin_unlock(&obj->vma.lock);
+
+    list_for_each_entry_safe(vma, vn, &unbind, vm_link) {
+        ret = __i915_vma_unbind(vma);
+        if (ret) {
+            /* Restore the remaining vma on an error */
+            list_splice(&unbind, &ggtt->vm.bound_list);
+            break;
+        }
+    }
+
     mutex_unlock(&ggtt->vm.mutex);
 
     return ret;
···
     }
     mutex_unlock(&obj->mm.lock);
 
+    spin_lock(&obj->vma.lock);
     for_each_ggtt_vma(vma, obj) {
         vma->fence_size =
             i915_gem_fence_size(i915, vma->size, tiling, stride);
···
         if (vma->fence)
             vma->fence->dirty = true;
     }
+    spin_unlock(&obj->vma.lock);
 
     obj->tiling_and_stride = tiling | stride;
     i915_gem_object_unlock(obj);
+8 -4
drivers/gpu/drm/i915/gem/selftests/huge_pages.c
···
         unsigned int page_size = BIT(first);
 
         obj = i915_gem_object_create_internal(dev_priv, page_size);
-        if (IS_ERR(obj))
-            return PTR_ERR(obj);
+        if (IS_ERR(obj)) {
+            err = PTR_ERR(obj);
+            goto out_vm;
+        }
 
         vma = i915_vma_instance(obj, vm, NULL);
         if (IS_ERR(vma)) {
···
     }
 
     obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE);
-    if (IS_ERR(obj))
-        return PTR_ERR(obj);
+    if (IS_ERR(obj)) {
+        err = PTR_ERR(obj);
+        goto out_vm;
+    }
 
     vma = i915_vma_instance(obj, vm, NULL);
     if (IS_ERR(vma)) {
+2
drivers/gpu/drm/i915/gt/intel_timeline.c
···
 
     rcu_read_lock();
     cl = rcu_dereference(from->hwsp_cacheline);
+    if (i915_request_completed(from)) /* confirm cacheline is valid */
+        goto unlock;
     if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
         goto unlock; /* seqno wrapped and completed! */
     if (unlikely(i915_request_completed(from)))
···
         return ret;
 
     ret = qxl_release_reserve_list(release, true);
-    if (ret)
+    if (ret) {
+        qxl_release_free(qdev, release);
         return ret;
-
+    }
     cmd = (struct qxl_surface_cmd *)qxl_release_map(qdev, release);
     cmd->type = QXL_SURFACE_CMD_CREATE;
     cmd->flags = QXL_SURF_FLAG_KEEP_DATA;
···
     /* no need to add a release to the fence for this surface bo,
        since it is only released when we ask to destroy the surface
        and it would never signal otherwise */
-    qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
     qxl_release_fence_buffer_objects(release);
+    qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
 
     surf->hw_surf_alloc = true;
     spin_lock(&qdev->surf_id_idr_lock);
···
     cmd->surface_id = id;
     qxl_release_unmap(qdev, release, &cmd->release_info);
 
-    qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
-
     qxl_release_fence_buffer_objects(release);
+    qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
 
     return 0;
 }
···11551155config HID_MCP222111561156 tristate "Microchip MCP2221 HID USB-to-I2C/SMbus host support"11571157 depends on USB_HID && I2C11581158+ depends on GPIOLIB11581159 ---help---11591160 Provides I2C and SMBUS host adapter functionality over USB-HID11601161 through MCP2221 device.
+1
drivers/hid/hid-alps.c
···802802 break;803803 case HID_DEVICE_ID_ALPS_U1_DUAL:804804 case HID_DEVICE_ID_ALPS_U1:805805+ case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY:805806 data->dev_type = U1;806807 break;807808 default:
···872872}873873874874static const struct hid_device_id lg_g15_devices[] = {875875+ /* The G11 is a G15 without the LCD, treat it as a G15 */876876+ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,877877+ USB_DEVICE_ID_LOGITECH_G11),878878+ .driver_data = LG_G15 },875879 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,876880 USB_DEVICE_ID_LOGITECH_G15_LCD),877881 .driver_data = LG_G15 },
···978978979979 return drv->resume(dev);980980}981981+#else982982+#define vmbus_suspend NULL983983+#define vmbus_resume NULL981984#endif /* CONFIG_PM_SLEEP */982985983986/*···1000997}10019981002999/*10031003- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than10041004- * SET_SYSTEM_SLEEP_PM_OPS: see the comment before vmbus_bus_pm.10001000+ * Note: we must use the "noirq" ops: see the comment before vmbus_bus_pm.10011001+ *10021002+ * suspend_noirq/resume_noirq are set to NULL to support Suspend-to-Idle: we10031003+ * shouldn't suspend the vmbus devices upon Suspend-to-Idle, otherwise there10041004+ * is no way to wake up a Generation-2 VM.10051005+ *10061006+ * The other 4 ops are for hibernation.10051007 */10081008+10061009static const struct dev_pm_ops vmbus_pm = {10071007- SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_suspend, vmbus_resume)10101010+ .suspend_noirq = NULL,10111011+ .resume_noirq = NULL,10121012+ .freeze_noirq = vmbus_suspend,10131013+ .thaw_noirq = vmbus_resume,10141014+ .poweroff_noirq = vmbus_suspend,10151015+ .restore_noirq = vmbus_resume,10081016};1009101710101018/* The one and only one */···2295228122962282 return 0;22972283}22842284+#else22852285+#define vmbus_bus_suspend NULL22862286+#define vmbus_bus_resume NULL22982287#endif /* CONFIG_PM_SLEEP */2299228823002289static const struct acpi_device_id vmbus_acpi_device_ids[] = {···23082291MODULE_DEVICE_TABLE(acpi, vmbus_acpi_device_ids);2309229223102293/*23112311- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than23122312- * SET_SYSTEM_SLEEP_PM_OPS, otherwise NIC SR-IOV can not work, because the23132313- * "pci_dev_pm_ops" uses the "noirq" callbacks: in the resume path, the23142314- * pci "noirq" restore callback runs before "non-noirq" callbacks (see22942294+ * Note: we must use the "no_irq" ops, otherwise hibernation can not work with22952295+ * PCI device assignment, because "pci_dev_pm_ops" uses the "noirq" ops: in22962296+ * the resume path, the pci "noirq" restore op runs 
before "non-noirq" op (see23152297 * resume_target_kernel() -> dpm_resume_start(), and hibernation_restore() ->23162298 * dpm_resume_end()). This means vmbus_bus_resume() and the pci-hyperv's23172317- * resume callback must also run via the "noirq" callbacks.22992299+ * resume callback must also run via the "noirq" ops.23002300+ *23012301+ * Set suspend_noirq/resume_noirq to NULL for Suspend-to-Idle: see the comment23022302+ * earlier in this file before vmbus_pm.23182303 */23042304+23192305static const struct dev_pm_ops vmbus_bus_pm = {23202320- SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_bus_suspend, vmbus_bus_resume)23062306+ .suspend_noirq = NULL,23072307+ .resume_noirq = NULL,23082308+ .freeze_noirq = vmbus_bus_suspend,23092309+ .thaw_noirq = vmbus_bus_resume,23102310+ .poweroff_noirq = vmbus_bus_suspend,23112311+ .restore_noirq = vmbus_bus_resume23212312};2322231323232314static struct acpi_driver vmbus_acpi_driver = {
···360360 value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);361361 i2c_slave_event(iproc_i2c->slave,362362 I2C_SLAVE_WRITE_RECEIVED, &value);363363+ if (rx_status == I2C_SLAVE_RX_END)364364+ i2c_slave_event(iproc_i2c->slave,365365+ I2C_SLAVE_STOP, &value);363366 }364367 } else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) {365368 /* Master read other than start */
+12-24
drivers/i2c/busses/i2c-tegra.c
···996996 do {997997 u32 status = i2c_readl(i2c_dev, I2C_INT_STATUS);998998999999- if (status)999999+ if (status) {10001000 tegra_i2c_isr(i2c_dev->irq, i2c_dev);1001100110021002- if (completion_done(complete)) {10031003- s64 delta = ktime_ms_delta(ktimeout, ktime);10021002+ if (completion_done(complete)) {10031003+ s64 delta = ktime_ms_delta(ktimeout, ktime);1004100410051005- return msecs_to_jiffies(delta) ?: 1;10051005+ return msecs_to_jiffies(delta) ?: 1;10061006+ }10061007 }1007100810081009 ktime = ktime_get();···10301029 disable_irq(i2c_dev->irq);1031103010321031 /*10331033- * Under some rare circumstances (like running KASAN +10341034- * NFS root) CPU, which handles interrupt, may stuck in10351035- * uninterruptible state for a significant time. In this10361036- * case we will get timeout if I2C transfer is running on10371037- * a sibling CPU, despite of IRQ being raised.10381038- *10391039- * In order to handle this rare condition, the IRQ status10401040- * needs to be checked after timeout.10321032+ * There is a chance that completion may happen after IRQ10331033+ * synchronization, which is done by disable_irq().10411034 */10421042- if (ret == 0)10431043- ret = tegra_i2c_poll_completion_timeout(i2c_dev,10441044- complete, 0);10351035+ if (ret == 0 && completion_done(complete)) {10361036+ dev_warn(i2c_dev->dev,10371037+ "completion done after timeout\n");10381038+ ret = 1;10391039+ }10451040 }1046104110471042 return ret;···12151218 if (dma) {12161219 time_left = tegra_i2c_wait_completion_timeout(12171220 i2c_dev, &i2c_dev->dma_complete, xfer_time);12181218-12191219- /*12201220- * Synchronize DMA first, since dmaengine_terminate_sync()12211221- * performs synchronization after the transfer's termination12221222- * and we want to get a completion if transfer succeeded.12231223- */12241224- dmaengine_synchronize(i2c_dev->msg_read ?12251225- i2c_dev->rx_dma_chan :12261226- i2c_dev->tx_dma_chan);1227122112281222 dmaengine_terminate_sync(i2c_dev->msg_read 
?12291223 i2c_dev->rx_dma_chan :
···360360 * uverbs_uobject_fd_release(), and the caller is expected to ensure361361 * that release is never done while a call to lookup is possible.362362 */363363- if (f->f_op != fd_type->fops) {363363+ if (f->f_op != fd_type->fops || uobject->ufile != ufile) {364364 fput(f);365365 return ERR_PTR(-EBADF);366366 }···474474 filp = anon_inode_getfile(fd_type->name, fd_type->fops, NULL,475475 fd_type->flags);476476 if (IS_ERR(filp)) {477477+ uverbs_uobject_put(uobj);477478 uobj = ERR_CAST(filp);478478- goto err_uobj;479479+ goto err_fd;479480 }480481 uobj->object = filp;481482482483 uobj->id = new_fd;483484 return uobj;484485485485-err_uobj:486486- uverbs_uobject_put(uobj);487486err_fd:488487 put_unused_fd(new_fd);489488 return uobj;···678679 enum rdma_lookup_mode mode)679680{680681 assert_uverbs_usecnt(uobj, mode);681681- uobj->uapi_object->type_class->lookup_put(uobj, mode);682682 /*683683 * In order to unlock an object, either decrease its usecnt for684684 * read access or zero it in case of exclusive access. See···694696 break;695697 }696698699699+ uobj->uapi_object->type_class->lookup_put(uobj, mode);697700 /* Pairs with the kref obtained by type->lookup_get */698701 uverbs_uobject_put(uobj);699702}
+4
drivers/infiniband/core/uverbs_main.c
···820820 ret = mmget_not_zero(mm);821821 if (!ret) {822822 list_del_init(&priv->list);823823+ if (priv->entry) {824824+ rdma_user_mmap_entry_put(priv->entry);825825+ priv->entry = NULL;826826+ }823827 mm = NULL;824828 continue;825829 }
···14991499 int i;1500150015011501 for (i = 0; i < ARRAY_SIZE(pdefault_rules->rules_create_list); i++) {15021502+ union ib_flow_spec ib_spec = {};15021503 int ret;15031503- union ib_flow_spec ib_spec;15041504+15041505 switch (pdefault_rules->rules_create_list[i]) {15051506 case 0:15061507 /* no rule */
···29362936{29372937 for (; *str; ++str) {29382938 if (strncmp(str, "legacy", 6) == 0) {29392939- amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY;29392939+ amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;29402940 break;29412941 }29422942 if (strncmp(str, "vapic", 5) == 0) {
+7-2
drivers/iommu/amd_iommu_types.h
···468468 iommu core code */469469 spinlock_t lock; /* mostly used to lock the page table*/470470 u16 id; /* the domain id written to the device table */471471- int mode; /* paging mode (0-6 levels) */472472- u64 *pt_root; /* page table root pointer */471471+ atomic64_t pt_root; /* pgtable root and pgtable mode */473472 int glx; /* Number of levels for GCR3 table */474473 u64 *gcr3_tbl; /* Guest CR3 table */475474 unsigned long flags; /* flags to find out type of domain */476475 unsigned dev_cnt; /* devices assigned to this domain */477476 unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */477477+};478478+479479+/* For decoded pt_root */480480+struct domain_pgtable {481481+ int mode;482482+ u64 *root;478483};479484480485/*
···585585586586 /* Do we need to select a new pgpath? */587587 pgpath = READ_ONCE(m->current_pgpath);588588- queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);589589- if (!pgpath || !queue_io)588588+ if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))590589 pgpath = choose_pgpath(m, bio->bi_iter.bi_size);590590+591591+ /* MPATHF_QUEUE_IO might have been cleared by choose_pgpath. */592592+ queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);591593592594 if ((pgpath && queue_io) ||593595 (!pgpath && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))) {
+1-1
drivers/md/dm-verity-fec.c
···435435 fio->level++;436436437437 if (type == DM_VERITY_BLOCK_TYPE_METADATA)438438- block += v->data_blocks;438438+ block = block - v->hash_start + v->data_blocks;439439440440 /*441441 * For RS(M, N), the continuous FEC data is divided into blocks of N
···878878 * Issued High Priority Interrupt, and check for card status879879 * until out-of prg-state.880880 */881881-int mmc_interrupt_hpi(struct mmc_card *card)881881+static int mmc_interrupt_hpi(struct mmc_card *card)882882{883883 int err;884884 u32 status;
+10-11
drivers/mmc/host/cqhci.c
···55#include <linux/delay.h>66#include <linux/highmem.h>77#include <linux/io.h>88+#include <linux/iopoll.h>89#include <linux/module.h>910#include <linux/dma-mapping.h>1011#include <linux/slab.h>···350349/* CQHCI is idle and should halt immediately, so set a small timeout */351350#define CQHCI_OFF_TIMEOUT 100352351352352+static u32 cqhci_read_ctl(struct cqhci_host *cq_host)353353+{354354+ return cqhci_readl(cq_host, CQHCI_CTL);355355+}356356+353357static void cqhci_off(struct mmc_host *mmc)354358{355359 struct cqhci_host *cq_host = mmc->cqe_private;356356- ktime_t timeout;357357- bool timed_out;358360 u32 reg;361361+ int err;359362360363 if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt)361364 return;···369364370365 cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL);371366372372- timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT);373373- while (1) {374374- timed_out = ktime_compare(ktime_get(), timeout) > 0;375375- reg = cqhci_readl(cq_host, CQHCI_CTL);376376- if ((reg & CQHCI_HALT) || timed_out)377377- break;378378- }379379-380380- if (timed_out)367367+ err = readx_poll_timeout(cqhci_read_ctl, cq_host, reg,368368+ reg & CQHCI_HALT, 0, CQHCI_OFF_TIMEOUT);369369+ if (err < 0)381370 pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc));382371 else383372 pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
···235235{236236 /* Wait for 5ms after set 1.8V signal enable bit */237237 usleep_range(5000, 5500);238238+239239+ /*240240+ * For some reason the controller's Host Control2 register reports241241+ * the bit representing 1.8V signaling as 0 when read after it was242242+ * written as 1. Subsequent read reports 1.243243+ *244244+ * Since this may cause some issues, do an empty read of the Host245245+ * Control2 register here to circumvent this.246246+ */247247+ sdhci_readw(host, SDHCI_HOST_CONTROL2);238248}239249240250static const struct sdhci_ops sdhci_xenon_ops = {
···2424 bool "PTP support for Marvell 88E6xxx"2525 default n2626 depends on NET_DSA_MV88E6XXX_GLOBAL22727+ depends on PTP_1588_CLOCK2728 imply NETWORK_PHY_TIMESTAMPING2828- imply PTP_1588_CLOCK2929 help3030 Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch3131 chips that support it.
···2020config NET_DSA_SJA1105_PTP2121 bool "Support for the PTP clock on the NXP SJA1105 Ethernet switch"2222 depends on NET_DSA_SJA11052323+ depends on PTP_1588_CLOCK2324 help2425 This enables support for timestamping and PTP clock manipulations in2526 the SJA1105 DSA driver.
+18-8
drivers/net/dsa/sja1105/sja1105_ptp.c
···16161717/* PTPSYNCTS has no interrupt or update mechanism, because the intended1818 * hardware use case is for the timestamp to be collected synchronously,1919- * immediately after the CAS_MASTER SJA1105 switch has triggered a CASSYNC2020- * pulse on the PTP_CLK pin. When used as a generic extts source, it needs2121- * polling and a comparison with the old value. The polling interval is just2222- * the Nyquist rate of a canonical PPS input (e.g. from a GPS module).2323- * Anything of higher frequency than 1 Hz will be lost, since there is no2424- * timestamp FIFO.1919+ * immediately after the CAS_MASTER SJA1105 switch has performed a CASSYNC2020+ * one-shot toggle (no return to level) on the PTP_CLK pin. When used as a2121+ * generic extts source, the PTPSYNCTS register needs polling and a comparison2222+ * with the old value. The polling interval is configured as the Nyquist rate2323+ * of a signal with 50% duty cycle and 1Hz frequency, which is sadly all that2424+ * this hardware can do (but may be enough for some setups). Anything of higher2525+ * frequency than 1 Hz will be lost, since there is no timestamp FIFO.2526 */2626-#define SJA1105_EXTTS_INTERVAL (HZ / 2)2727+#define SJA1105_EXTTS_INTERVAL (HZ / 4)27282829/* This range is actually +/- SJA1105_MAX_ADJ_PPB2930 * divided by 1000 (ppb -> ppm) and with a 16-bit···755754 return -EOPNOTSUPP;756755757756 /* Reject requests with unsupported flags */758758- if (extts->flags)757757+ if (extts->flags & ~(PTP_ENABLE_FEATURE |758758+ PTP_RISING_EDGE |759759+ PTP_FALLING_EDGE |760760+ PTP_STRICT_FLAGS))761761+ return -EOPNOTSUPP;762762+763763+ /* We can only enable time stamping on both edges, sadly. */764764+ if ((extts->flags & PTP_STRICT_FLAGS) &&765765+ (extts->flags & PTP_ENABLE_FEATURE) &&766766+ (extts->flags & PTP_EXTTS_EDGES) != PTP_EXTTS_EDGES)759767 return -EOPNOTSUPP;760768761769 rc = sja1105_change_ptp_clk_pin_func(priv, PTP_PF_EXTTS);
···3535config MACB_USE_HWSTAMP3636 bool "Use IEEE 1588 hwstamp"3737 depends on MACB3838+ depends on PTP_1588_CLOCK3839 default y3939- imply PTP_1588_CLOCK4040 ---help---4141 Enable IEEE 1588 Precision Time Protocol (PTP) support for MACB.4242
+12-12
drivers/net/ethernet/cadence/macb_main.c
···334334 int status;335335336336 status = pm_runtime_get_sync(&bp->pdev->dev);337337- if (status < 0)337337+ if (status < 0) {338338+ pm_runtime_put_noidle(&bp->pdev->dev);338339 goto mdio_pm_exit;340340+ }339341340342 status = macb_mdio_wait_for_idle(bp);341343 if (status < 0)···388386 int status;389387390388 status = pm_runtime_get_sync(&bp->pdev->dev);391391- if (status < 0)389389+ if (status < 0) {390390+ pm_runtime_put_noidle(&bp->pdev->dev);392391 goto mdio_pm_exit;392392+ }393393394394 status = macb_mdio_wait_for_idle(bp);395395 if (status < 0)···38203816 int ret;3821381738223818 ret = pm_runtime_get_sync(&lp->pdev->dev);38233823- if (ret < 0)38193819+ if (ret < 0) {38203820+ pm_runtime_put_noidle(&lp->pdev->dev);38243821 return ret;38223822+ }3825382338263824 /* Clear internal statistics */38273825 ctl = macb_readl(lp, NCR);···4178417241794173static int fu540_c000_init(struct platform_device *pdev)41804174{41814181- struct resource *res;41824182-41834183- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);41844184- if (!res)41854185- return -ENODEV;41864186-41874187- mgmt->reg = ioremap(res->start, resource_size(res));41884188- if (!mgmt->reg)41894189- return -ENOMEM;41754175+ mgmt->reg = devm_platform_ioremap_resource(pdev, 1);41764176+ if (IS_ERR(mgmt->reg))41774177+ return PTR_ERR(mgmt->reg);4190417841914179 return macb_init(pdev);41924180}
+1-1
drivers/net/ethernet/cavium/Kconfig
···5454config CAVIUM_PTP5555 tristate "Cavium PTP coprocessor as PTP clock"5656 depends on 64BIT && PCI5757- imply PTP_1588_CLOCK5757+ depends on PTP_1588_CLOCK5858 ---help---5959 This driver adds support for the Precision Time Protocol Clocks and6060 Timestamping coprocessor (PTP) found on Cavium processors.
+37-3
drivers/net/ethernet/chelsio/cxgb4/sge.c
···22072207 if (unlikely(skip_eotx_wr)) {22082208 start = (u64 *)wr;22092209 eosw_txq->state = next_state;22102210+ eosw_txq->cred -= wrlen16;22112211+ eosw_txq->ncompl++;22122212+ eosw_txq->last_compl = 0;22102213 goto write_wr_headers;22112214 }22122215···23682365 return cxgb4_eth_xmit(skb, dev);23692366}2370236723682368+static void eosw_txq_flush_pending_skbs(struct sge_eosw_txq *eosw_txq)23692369+{23702370+ int pktcount = eosw_txq->pidx - eosw_txq->last_pidx;23712371+ int pidx = eosw_txq->pidx;23722372+ struct sk_buff *skb;23732373+23742374+ if (!pktcount)23752375+ return;23762376+23772377+ if (pktcount < 0)23782378+ pktcount += eosw_txq->ndesc;23792379+23802380+ while (pktcount--) {23812381+ pidx--;23822382+ if (pidx < 0)23832383+ pidx += eosw_txq->ndesc;23842384+23852385+ skb = eosw_txq->desc[pidx].skb;23862386+ if (skb) {23872387+ dev_consume_skb_any(skb);23882388+ eosw_txq->desc[pidx].skb = NULL;23892389+ eosw_txq->inuse--;23902390+ }23912391+ }23922392+23932393+ eosw_txq->pidx = eosw_txq->last_pidx + 1;23942394+}23952395+23712396/**23722397 * cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc.23732398 * @dev - netdevice···24712440 FW_FLOWC_MNEM_EOSTATE_CLOSING :24722441 FW_FLOWC_MNEM_EOSTATE_ESTABLISHED);2473244224742474- eosw_txq->cred -= len16;24752475- eosw_txq->ncompl++;24762476- eosw_txq->last_compl = 0;24432443+ /* Free up any pending skbs to ensure there's room for24442444+ * termination FLOWC.24452445+ */24462446+ if (tc == FW_SCHED_CLS_NONE)24472447+ eosw_txq_flush_pending_skbs(eosw_txq);2477244824782449 ret = eosw_txq_enqueue(eosw_txq, skb);24792450 if (ret) {···27282695 * is ever running at a time ...27292696 */27302697static void service_ofldq(struct sge_uld_txq *q)26982698+ __must_hold(&q->sendq.lock)27312699{27322700 u64 *pos, *before, *end;27332701 int credits;
···10311031{10321032 int i, j;1033103310341034- /* Loop through all the mac tables entries. There are 1024 rows of 410351035- * entries.10361036- */10371037- for (i = 0; i < 1024; i++) {10341034+ /* Loop through all the mac tables entries. */10351035+ for (i = 0; i < ocelot->num_mact_rows; i++) {10381036 for (j = 0; j < 4; j++) {10391037 struct ocelot_mact_entry entry;10401038 bool is_static;···1451145314521454void ocelot_set_ageing_time(struct ocelot *ocelot, unsigned int msecs)14531455{14541454- ocelot_write(ocelot, ANA_AUTOAGE_AGE_PERIOD(msecs / 2),14551455- ANA_AUTOAGE);14561456+ unsigned int age_period = ANA_AUTOAGE_AGE_PERIOD(msecs / 2000);14571457+14581458+ /* Setting AGE_PERIOD to zero effectively disables automatic aging,14591459+ * which is clearly not what our intention is. So avoid that.14601460+ */14611461+ if (!age_period)14621462+ age_period = 1;14631463+14641464+ ocelot_rmw(ocelot, age_period, ANA_AUTOAGE_AGE_PERIOD_M, ANA_AUTOAGE);14561465}14571466EXPORT_SYMBOL(ocelot_set_ageing_time);14581467
···283283 if (!nfp_nsp_has_hwinfo_lookup(nsp)) {284284 nfp_warn(pf->cpp, "NSP doesn't support PF MAC generation\n");285285 eth_hw_addr_random(nn->dp.netdev);286286+ nfp_nsp_close(nsp);286287 return;287288 }288289
···40604060/**40614061 * stmmac_interrupt - main ISR40624062 * @irq: interrupt number.40634063- * @dev_id: to pass the net device pointer.40634063+ * @dev_id: to pass the net device pointer (must be valid).40644064 * Description: this is the main driver interrupt service routine.40654065 * It can call:40664066 * o DMA service routine (to manage incoming frame reception and transmission···4083408340844084 if (priv->irq_wake)40854085 pm_wakeup_event(priv->device, 0);40864086-40874087- if (unlikely(!dev)) {40884088- netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);40894089- return IRQ_NONE;40904090- }4091408640924087 /* Check if adapter is up */40934088 if (test_bit(STMMAC_DOWN, &priv->state))···49864991 priv->plat->bsp_priv);4987499249884993 if (ret < 0)49894989- return ret;49944994+ goto error_serdes_powerup;49904995 }4991499649924997#ifdef CONFIG_DEBUG_FS···4995500049965001 return ret;4997500250035003+error_serdes_powerup:50045004+ unregister_netdev(ndev);49985005error_netdev_register:49995006 phylink_destroy(priv->phylink);50005007error_phy_setup:
+1-2
drivers/net/ethernet/ti/Kconfig
···9090config TI_CPTS_MOD9191 tristate9292 depends on TI_CPTS9393+ depends on PTP_1588_CLOCK9394 default y if TI_CPSW=y || TI_KEYSTONE_NETCP=y || TI_CPSW_SWITCHDEV=y9494- select NET_PTP_CLASSIFY9595- imply PTP_1588_CLOCK9695 default m97969897config TI_K3_AM65_CPSW_NUSS
···1041104110421042 complete(&gsi->completion);10431043}10441044+10441045/* Inter-EE interrupt handler */10451046static void gsi_isr_glob_ee(struct gsi *gsi)10461047{···14941493 struct completion *completion = &gsi->completion;14951494 u32 val;1496149514961496+ /* First zero the result code field */14971497+ val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);14981498+ val &= ~GENERIC_EE_RESULT_FMASK;14991499+ iowrite32(val, gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);15001500+15011501+ /* Now issue the command */14971502 val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK);14981503 val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK);14991504 val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK);···1805179818061799 /* Worst case we need an event for every outstanding TRE */18071800 if (data->channel.tre_count > data->channel.event_count) {18081808- dev_warn(gsi->dev, "channel %u limited to %u TREs\n",18091809- data->channel_id, data->channel.tre_count);18101801 tre_count = data->channel.event_count;18021802+ dev_warn(gsi->dev, "channel %u limited to %u TREs\n",18031803+ data->channel_id, tre_count);18111804 } else {18121805 tre_count = data->channel.tre_count;18131806 }
···12831283 */12841284int ipa_endpoint_stop(struct ipa_endpoint *endpoint)12851285{12861286- u32 retries = endpoint->toward_ipa ? 0 : IPA_ENDPOINT_STOP_RX_RETRIES;12861286+ u32 retries = IPA_ENDPOINT_STOP_RX_RETRIES;12871287 int ret;1288128812891289 do {···12911291 struct gsi *gsi = &ipa->gsi;1292129212931293 ret = gsi_channel_stop(gsi, endpoint->channel_id);12941294- if (ret != -EAGAIN)12941294+ if (ret != -EAGAIN || endpoint->toward_ipa)12951295 break;12961296-12971297- if (endpoint->toward_ipa)12981298- continue;1299129613001297 /* For IPA v3.5.1, send a DMA read task and check again */13011298 if (ipa->version == IPA_VERSION_3_5_1) {
+4-2
drivers/net/macsec.c
···13051305 struct crypto_aead *tfm;13061306 int ret;1307130713081308- tfm = crypto_alloc_aead("gcm(aes)", 0, 0);13081308+ /* Pick a sync gcm(aes) cipher to ensure order is preserved. */13091309+ tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);1309131013101311 if (IS_ERR(tfm))13111312 return tfm;···26412640 if (ret)26422641 goto rollback;2643264226442644- rtnl_unlock();26452643 /* Force features update, since they are different for SW MACSec and26462644 * HW offloading cases.26472645 */26482646 netdev_update_features(dev);26472647+26482648+ rtnl_unlock();26492649 return 0;2650265026512651rollback:
···816816 .compatible = "qcom,msm8998-qusb2-phy",817817 .data = &msm8998_phy_cfg,818818 }, {819819+ /*820820+ * Deprecated. Only here to support legacy device821821+ * trees that didn't include "qcom,qusb2-v2-phy"822822+ */823823+ .compatible = "qcom,sdm845-qusb2-phy",824824+ .data = &qusb2_v2_phy_cfg,825825+ }, {819826 .compatible = "qcom,qusb2-v2-phy",820827 .data = &qusb2_v2_phy_cfg,821828 },
+21-11
drivers/phy/qualcomm/phy-qcom-usb-hs-28nm.c
···160160 ret = regulator_bulk_enable(VREG_NUM, priv->vregs);161161 if (ret)162162 return ret;163163- ret = clk_bulk_prepare_enable(priv->num_clks, priv->clks);164164- if (ret)165165- goto err_disable_regulator;163163+166164 qcom_snps_hsphy_disable_hv_interrupts(priv);167165 qcom_snps_hsphy_exit_retention(priv);168166169167 return 0;170170-171171-err_disable_regulator:172172- regulator_bulk_disable(VREG_NUM, priv->vregs);173173-174174- return ret;175168}176169177170static int qcom_snps_hsphy_power_off(struct phy *phy)···173180174181 qcom_snps_hsphy_enter_retention(priv);175182 qcom_snps_hsphy_enable_hv_interrupts(priv);176176- clk_bulk_disable_unprepare(priv->num_clks, priv->clks);177183 regulator_bulk_disable(VREG_NUM, priv->vregs);178184179185 return 0;···258266 struct hsphy_priv *priv = phy_get_drvdata(phy);259267 int ret;260268261261- ret = qcom_snps_hsphy_reset(priv);269269+ ret = clk_bulk_prepare_enable(priv->num_clks, priv->clks);262270 if (ret)263271 return ret;272272+273273+ ret = qcom_snps_hsphy_reset(priv);274274+ if (ret)275275+ goto disable_clocks;264276265277 qcom_snps_hsphy_init_sequence(priv);266278267279 ret = qcom_snps_hsphy_por_reset(priv);268280 if (ret)269269- return ret;281281+ goto disable_clocks;282282+283283+ return 0;284284+285285+disable_clocks:286286+ clk_bulk_disable_unprepare(priv->num_clks, priv->clks);287287+ return ret;288288+}289289+290290+static int qcom_snps_hsphy_exit(struct phy *phy)291291+{292292+ struct hsphy_priv *priv = phy_get_drvdata(phy);293293+294294+ clk_bulk_disable_unprepare(priv->num_clks, priv->clks);270295271296 return 0;272297}273298274299static const struct phy_ops qcom_snps_hsphy_ops = {275300 .init = qcom_snps_hsphy_init,301301+ .exit = qcom_snps_hsphy_exit,276302 .power_on = qcom_snps_hsphy_power_on,277303 .power_off = qcom_snps_hsphy_power_off,278304 .set_mode = qcom_snps_hsphy_set_mode,
+46-34
drivers/platform/chrome/cros_ec_sensorhub.c
···5252 int sensor_type[MOTIONSENSE_TYPE_MAX] = { 0 };5353 struct cros_ec_command *msg = sensorhub->msg;5454 struct cros_ec_dev *ec = sensorhub->ec;5555- int ret, i, sensor_num;5555+ int ret, i;5656 char *name;57575858- sensor_num = cros_ec_get_sensor_count(ec);5959- if (sensor_num < 0) {6060- dev_err(dev,6161- "Unable to retrieve sensor information (err:%d)\n",6262- sensor_num);6363- return sensor_num;6464- }6565-6666- sensorhub->sensor_num = sensor_num;6767- if (sensor_num == 0) {6868- dev_err(dev, "Zero sensors reported.\n");6969- return -EINVAL;7070- }71587259 msg->version = 1;7360 msg->insize = sizeof(struct ec_response_motion_sense);7461 msg->outsize = sizeof(struct ec_params_motion_sense);75627676- for (i = 0; i < sensor_num; i++) {6363+ for (i = 0; i < sensorhub->sensor_num; i++) {7764 sensorhub->params->cmd = MOTIONSENSE_CMD_INFO;7865 sensorhub->params->info.sensor_num = i;7966···127140 struct cros_ec_dev *ec = dev_get_drvdata(dev->parent);128141 struct cros_ec_sensorhub *data;129142 struct cros_ec_command *msg;130130- int ret;131131- int i;143143+ int ret, i, sensor_num;132144133145 msg = devm_kzalloc(dev, sizeof(struct cros_ec_command) +134146 max((u16)sizeof(struct ec_params_motion_sense),···152166 dev_set_drvdata(dev, data);153167154168 /* Check whether this EC is a sensor hub. 
*/155155- if (cros_ec_check_features(data->ec, EC_FEATURE_MOTION_SENSE)) {169169+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE)) {170170+ sensor_num = cros_ec_get_sensor_count(ec);171171+ if (sensor_num < 0) {172172+ dev_err(dev,173173+ "Unable to retrieve sensor information (err:%d)\n",174174+ sensor_num);175175+ return sensor_num;176176+ }177177+ if (sensor_num == 0) {178178+ dev_err(dev, "Zero sensors reported.\n");179179+ return -EINVAL;180180+ }181181+ data->sensor_num = sensor_num;182182+183183+ /*184184+ * Prepare the ring handler before enumerating the185185+ * sensors.186186+ */187187+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) {188188+ ret = cros_ec_sensorhub_ring_allocate(data);189189+ if (ret)190190+ return ret;191191+ }192192+193193+ /* Enumerate the sensors. */156194 ret = cros_ec_sensorhub_register(dev, data);157195 if (ret)158196 return ret;197197+198198+ /*199199+ * When the EC does not have a FIFO, the sensors will query200200+ * their data themselves via sysfs or a software trigger.201201+ */202202+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) {203203+ ret = cros_ec_sensorhub_ring_add(data);204204+ if (ret)205205+ return ret;206206+ /*207207+ * The msg and its data are not under the control of the208208+ * ring handler.209209+ */210210+ return devm_add_action_or_reset(dev,211211+ cros_ec_sensorhub_ring_remove,212212+ data);213213+ }214214+159215 } else {160216 /*161217 * If the device has sensors but does not claim to
cros_ec_sensorhub_ring_remove,229229- data);230230- }231187232188 return 0;233189}
+47-28
drivers/platform/chrome/cros_ec_sensorhub_ring.c
···957957}958958959959/**960960+ * cros_ec_sensorhub_ring_allocate() - Prepare the FIFO functionality if the EC961961+ * supports it.962962+ *963963+ * @sensorhub : Sensor Hub object.964964+ *965965+ * Return: 0 on success.966966+ */967967+int cros_ec_sensorhub_ring_allocate(struct cros_ec_sensorhub *sensorhub)968968+{969969+ int fifo_info_length =970970+ sizeof(struct ec_response_motion_sense_fifo_info) +971971+ sizeof(u16) * sensorhub->sensor_num;972972+973973+ /* Allocate the array for lost events. */974974+ sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length,975975+ GFP_KERNEL);976976+ if (!sensorhub->fifo_info)977977+ return -ENOMEM;978978+979979+ /*980980+ * Allocate the callback area based on the number of sensors.981981+ * Add one for the sensor ring.982982+ */983983+ sensorhub->push_data = devm_kcalloc(sensorhub->dev,984984+ sensorhub->sensor_num,985985+ sizeof(*sensorhub->push_data),986986+ GFP_KERNEL);987987+ if (!sensorhub->push_data)988988+ return -ENOMEM;989989+990990+ sensorhub->tight_timestamps = cros_ec_check_features(991991+ sensorhub->ec,992992+ EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS);993993+994994+ if (sensorhub->tight_timestamps) {995995+ sensorhub->batch_state = devm_kcalloc(sensorhub->dev,996996+ sensorhub->sensor_num,997997+ sizeof(*sensorhub->batch_state),998998+ GFP_KERNEL);999999+ if (!sensorhub->batch_state)10001000+ return -ENOMEM;10011001+ }10021002+10031003+ return 0;10041004+}10051005+10061006+/**9601007 * cros_ec_sensorhub_ring_add() - Add the FIFO functionality if the EC9611008 * supports it.9621009 *···1018971 int fifo_info_length =1019972 sizeof(struct ec_response_motion_sense_fifo_info) +1020973 sizeof(u16) * sensorhub->sensor_num;10211021-10221022- /* Allocate the array for lost events. 
*/10231023- sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length,10241024- GFP_KERNEL);10251025- if (!sensorhub->fifo_info)10261026- return -ENOMEM;10279741028975 /* Retrieve FIFO information */1029976 sensorhub->msg->version = 2;···1039998 if (!sensorhub->ring)1040999 return -ENOMEM;1041100010421042- /*10431043- * Allocate the callback area based on the number of sensors.10441044- */10451045- sensorhub->push_data = devm_kcalloc(10461046- sensorhub->dev, sensorhub->sensor_num,10471047- sizeof(*sensorhub->push_data),10481048- GFP_KERNEL);10491049- if (!sensorhub->push_data)10501050- return -ENOMEM;10511051-10521001 sensorhub->fifo_timestamp[CROS_EC_SENSOR_LAST_TS] =10531002 cros_ec_get_time_ns();10541054-10551055- sensorhub->tight_timestamps = cros_ec_check_features(10561056- ec, EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS);10571057-10581058- if (sensorhub->tight_timestamps) {10591059- sensorhub->batch_state = devm_kcalloc(sensorhub->dev,10601060- sensorhub->sensor_num,10611061- sizeof(*sensorhub->batch_state),10621062- GFP_KERNEL);10631063- if (!sensorhub->batch_state)10641064- return -ENOMEM;10651065- }1066100310671004 /* Register the notifier that will act as a top half interrupt. */10681005 sensorhub->notifier.notifier_call = cros_ec_sensorhub_event;
drivers/platform/x86/asus-nb-wmi.c (+24)
···
 	.detect_quirks = asus_nb_wmi_quirks,
 };
 
+static const struct dmi_system_id asus_nb_wmi_blacklist[] __initconst = {
+	{
+		/*
+		 * asus-nb-wm adds no functionality. The T100TA has a detachable
+		 * USB kbd, so no hotkeys and it has no WMI rfkill; and loading
+		 * asus-nb-wm causes the camera LED to turn and _stay_ on.
+		 */
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),
+		},
+	},
+	{
+		/* The Asus T200TA has the same issue as the T100TA */
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T200TA"),
+		},
+	},
+	{} /* Terminating entry */
+};
 
 static int __init asus_nb_wmi_init(void)
 {
+	if (dmi_check_system(asus_nb_wmi_blacklist))
+		return -ENODEV;
+
 	return asus_wmi_register_driver(&asus_nb_wmi_driver);
 }
drivers/platform/x86/intel-uncore-frequency.c (+1, -1)
···
 /* Storage for uncore data for all instances */
 static struct uncore_data *uncore_instances;
 /* Root of the all uncore sysfs kobjs */
-struct kobject *uncore_root_kobj;
+static struct kobject *uncore_root_kobj;
 /* Stores the CPU mask of the target CPUs to use during uncore read/write */
 static cpumask_t uncore_cpu_mask;
 /* CPU online callback register instance */
drivers/platform/x86/intel_pmc_core.c (+5, -19)
···
 };
 
 static const struct pmc_bit_map icl_pfear_map[] = {
-	/* Ice Lake generation onwards only */
+	/* Ice Lake and Jasper Lake generation onwards only */
 	{"RES_65",		BIT(0)},
 	{"RES_66",		BIT(1)},
 	{"RES_67",		BIT(2)},
···
 };
 
 static const struct pmc_bit_map tgl_pfear_map[] = {
-	/* Tiger Lake, Elkhart Lake and Jasper Lake generation onwards only */
+	/* Tiger Lake and Elkhart Lake generation onwards only */
 	{"PSF9",		BIT(0)},
 	{"RES_66",		BIT(1)},
 	{"RES_67",		BIT(2)},
···
 	kfree(lpm_regs);
 }
 
-#if IS_ENABLED(CONFIG_DEBUG_FS)
 static bool slps0_dbg_latch;
 
 static inline u8 pmc_core_reg_read_byte(struct pmc_dev *pmcdev, int offset)
···
 				    &pmc_core_substate_l_sts_regs_fops);
 	}
 }
-#else
-static inline void pmc_core_dbgfs_register(struct pmc_dev *pmcdev)
-{
-}
-
-static inline void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)
-{
-}
-#endif /* CONFIG_DEBUG_FS */
 
 static const struct x86_cpu_id intel_pmc_core_ids[] = {
 	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &spt_reg_map),
···
 	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &tgl_reg_map),
 	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &tgl_reg_map),
 	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, &tgl_reg_map),
-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &tgl_reg_map),
+	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &icl_reg_map),
 	{}
 };
···
 	return 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
-
 static bool warn_on_s0ix_failures;
 module_param(warn_on_s0ix_failures, bool, 0644);
 MODULE_PARM_DESC(warn_on_s0ix_failures, "Check and warn for S0ix failures");
 
-static int pmc_core_suspend(struct device *dev)
+static __maybe_unused int pmc_core_suspend(struct device *dev)
 {
 	struct pmc_dev *pmcdev = dev_get_drvdata(dev);
···
 	return false;
 }
 
-static int pmc_core_resume(struct device *dev)
+static __maybe_unused int pmc_core_resume(struct device *dev)
 {
 	struct pmc_dev *pmcdev = dev_get_drvdata(dev);
 	const struct pmc_bit_map **maps = pmcdev->map->lpm_sts;
···
 
 	return 0;
 }
-
-#endif
 
 static const struct dev_pm_ops pmc_core_pm_ops = {
 	SET_LATE_SYSTEM_SLEEP_PM_OPS(pmc_core_suspend, pmc_core_resume)
···
 	if (!battery_info.batteries[battery].start_support)
 		return -ENODEV;
 	/* valid values are [0, 99] */
-	if (value < 0 || value > 99)
+	if (value > 99)
 		return -EINVAL;
 	if (value > battery_info.batteries[battery].charge_stop)
 		return -EINVAL;
drivers/platform/x86/xiaomi-wmi.c (+2, -2)
···
 	unsigned int key_code;
 };
 
-int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context)
+static int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context)
 {
 	struct xiaomi_wmi *data;
 
···
 	return input_register_device(data->input_dev);
 }
 
-void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy)
+static void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy)
 {
 	struct xiaomi_wmi *data;
 
drivers/regulator/core.c (+11, -14)
···
 
 static int __init regulator_init_complete(void)
 {
-	int delay = driver_deferred_probe_timeout;
-
-	if (delay < 0)
-		delay = 0;
 	/*
 	 * Since DT doesn't provide an idiomatic mechanism for
 	 * enabling full constraints and since it's much more natural
···
 		has_full_constraints = true;
 
 	/*
-	 * If driver_deferred_probe_timeout is set, we punt
-	 * completion for that many seconds since systems like
-	 * distros will load many drivers from userspace so consumers
-	 * might not always be ready yet, this is particularly an
-	 * issue with laptops where this might bounce the display off
-	 * then on. Ideally we'd get a notification from userspace
-	 * when this happens but we don't so just wait a bit and hope
-	 * we waited long enough. It'd be better if we'd only do
-	 * this on systems that need it.
+	 * We punt completion for an arbitrary amount of time since
+	 * systems like distros will load many drivers from userspace
+	 * so consumers might not always be ready yet, this is
+	 * particularly an issue with laptops where this might bounce
+	 * the display off then on. Ideally we'd get a notification
+	 * from userspace when this happens but we don't so just wait
+	 * a bit and hope we waited long enough. It'd be better if
+	 * we'd only do this on systems that need it, and a kernel
+	 * command line option might be useful.
 	 */
-	schedule_delayed_work(&regulator_init_complete_work, delay * HZ);
+	schedule_delayed_work(&regulator_init_complete_work,
+			      msecs_to_jiffies(30000));
 
 	return 0;
 }
drivers/s390/net/qeth_core_main.c (+5, -5)
···
 	unsigned int i;
 
 		/* Quiesce the NAPI instances: */
-		qeth_for_each_output_queue(card, queue, i) {
+		qeth_for_each_output_queue(card, queue, i)
 			napi_disable(&queue->napi);
-			del_timer_sync(&queue->timer);
-		}
 
 		/* Stop .ndo_start_xmit, might still access queue->napi. */
 		netif_tx_disable(dev);
 
-		/* Queues may get re-allocated, so remove the NAPIs here. */
-		qeth_for_each_output_queue(card, queue, i)
+		qeth_for_each_output_queue(card, queue, i) {
+			del_timer_sync(&queue->timer);
+			/* Queues may get re-allocated, so remove the NAPIs. */
 			netif_napi_del(&queue->napi);
+		}
 	} else {
 		netif_tx_disable(dev);
 	}
···
 	}
 	qla2x00_wait_for_hba_ready(base_vha);
 
+	/*
+	 * if UNLOADING flag is already set, then continue unload,
+	 * where it was set first.
+	 */
+	if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))
+		return;
+
 	if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) ||
 	    IS_QLA28XX(ha)) {
 		if (ha->flags.fw_started)
···
 	}
 
 	qla2x00_wait_for_sess_deletion(base_vha);
-
-	/*
-	 * if UNLOAD flag is already set, then continue unload,
-	 * where it was set first.
-	 */
-	if (test_bit(UNLOADING, &base_vha->dpc_flags))
-		return;
-
-	set_bit(UNLOADING, &base_vha->dpc_flags);
 
 	qla_nvme_delete(base_vha);
···
 {
 	struct qla_work_evt *e;
 	uint8_t bail;
+
+	if (test_bit(UNLOADING, &vha->dpc_flags))
+		return NULL;
 
 	QLA_VHA_MARK_BUSY(vha, bail);
 	if (bail)
···
 	struct pci_dev *pdev = ha->pdev;
 	scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
 
-	/*
-	 * if UNLOAD flag is already set, then continue unload,
-	 * where it was set first.
-	 */
-	if (test_bit(UNLOADING, &base_vha->dpc_flags))
-		return;
-
 	ql_log(ql_log_warn, base_vha, 0x015b,
 	       "Disabling adapter.\n");
···
 		return;
 	}
 
-	qla2x00_wait_for_sess_deletion(base_vha);
+	/*
+	 * if UNLOADING flag is already set, then continue unload,
+	 * where it was set first.
+	 */
+	if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))
+		return;
 
-	set_bit(UNLOADING, &base_vha->dpc_flags);
+	qla2x00_wait_for_sess_deletion(base_vha);
 
 	qla2x00_delete_all_vps(ha, base_vha);
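The qla2xxx change above folds a separate `test_bit()` + `set_bit()` pair into one atomic `test_and_set_bit()` at the top of the unload path, so exactly one caller wins the race and proceeds with teardown. Outside the kernel the same idempotent-teardown idiom can be sketched with C11 atomics (the names below are illustrative, not from the driver):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag unloading = ATOMIC_FLAG_INIT;
static int teardown_runs;

/* Returns true if this caller performed the teardown, false if the
 * flag was already set and the caller must back off. */
static bool do_unload(void)
{
	/* atomic_flag_test_and_set() returns the previous value, so
	 * only the first caller observes "clear" and proceeds. */
	if (atomic_flag_test_and_set(&unloading))
		return false;

	teardown_runs++;	/* stands in for the real teardown work */
	return true;
}
```

The key property is that the test and the set are one indivisible operation; with the old two-step pattern, two threads could both see the bit clear and both run the teardown.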
···
 
 Please send any patches to:
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Wolfram Sang <wsa@the-dreams.de>
 Linux Driver Project Developer List <driverdev-devel@linuxdriverproject.org>
···
 		return ret;
 
 	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
+	if (ret)
+		return ret;
+
 	if (val & ROUTER_CS_26_ONS)
 		return -EOPNOTSUPP;
drivers/tty/hvc/Kconfig (+1, -1)
···
 
 config HVC_RISCV_SBI
 	bool "RISC-V SBI console support"
-	depends on RISCV_SBI
+	depends on RISCV_SBI_V01
 	select HVC_DRIVER
 	help
 	  This enables support for console output via RISC-V SBI calls, which
drivers/tty/serial/Kconfig (+1, -1)
···
 
 config SERIAL_EARLYCON_RISCV_SBI
 	bool "Early console using RISC-V SBI"
-	depends on RISCV_SBI
+	depends on RISCV_SBI_V01
 	select SERIAL_CORE
 	select SERIAL_CORE_CONSOLE
 	select SERIAL_EARLYCON
drivers/tty/serial/bcm63xx_uart.c (+1, -3)
···
 	if (IS_ERR(clk) && pdev->dev.of_node)
 		clk = of_clk_get(pdev->dev.of_node, 0);
 
-	if (IS_ERR(clk)) {
-		clk_put(clk);
+	if (IS_ERR(clk))
 		return -ENODEV;
-	}
 
 	port->iotype = UPIO_MEM;
 	port->irq = res_irq->start;
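The bcm63xx fix drops a `clk_put()` on a pointer that `IS_ERR()` has already flagged: an error-encoded pointer is not a real object and must never be released. The kernel encodes a negative errno in the top few kilobytes of the address space; a simplified userspace sketch of that `ERR_PTR()` scheme (illustrative, not the kernel's exact implementation):

```c
#include <stdint.h>
#include <errno.h>

#define MAX_ERRNO	4095

/* Pack a negative errno into a pointer value near the top of the
 * address space, mimicking the kernel's ERR_PTR() helpers. */
static inline void *ERR_PTR(long error)
{
	return (void *)(uintptr_t)error;
}

/* True when the pointer is in the reserved error range. */
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Recover the errno from an error-encoded pointer. */
static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}
```

Because `ERR_PTR(-ENODEV)` is just `-ENODEV` reinterpreted as an address, passing it to any release function that dereferences it is undefined behavior, which is why the error branch must simply return.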
···
 
 	if (usb_endpoint_out(epaddr)) {
 		ep = dev->ep_out[epnum];
-		if (reset_hardware)
+		if (reset_hardware && epnum != 0)
 			dev->ep_out[epnum] = NULL;
 	} else {
 		ep = dev->ep_in[epnum];
-		if (reset_hardware)
+		if (reset_hardware && epnum != 0)
 			dev->ep_in[epnum] = NULL;
 	}
 	if (ep) {
drivers/usb/serial/garmin_gps.c (+2, -2)
···
 	   send it directly to the tty port */
 	if (garmin_data_p->flags & FLAGS_QUEUING) {
 		pkt_add(garmin_data_p, data, data_length);
-	} else if (bulk_data ||
-		   getLayerId(data) == GARMIN_LAYERID_APPL) {
+	} else if (bulk_data || (data_length >= sizeof(u32) &&
+			getLayerId(data) == GARMIN_LAYERID_APPL)) {
 
 		spin_lock_irqsave(&garmin_data_p->lock, flags);
 		garmin_data_p->flags |= APP_RESP_SEEN;
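The garmin_gps fix only inspects the packet-layer ID once the buffer is known to hold at least a full 32-bit header; without the length guard, a short URB would make `getLayerId()` read past the end of the data. The same guard in isolation (hypothetical helper, not the driver's code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Read the leading 32-bit field of a packet, but only when the buffer
 * is long enough to contain it; returns 0 on success, -1 on a short
 * buffer. */
static int packet_layer_id(const uint8_t *data, size_t len, uint32_t *id)
{
	if (len < sizeof(uint32_t))
		return -1;	/* too short: reading would overrun */

	memcpy(id, data, sizeof(*id));	/* memcpy avoids alignment traps */
	return 0;
}
```

The length check has to come first in the `&&` chain for the same reason the patch parenthesizes it before `getLayerId()`: short-circuit evaluation is what keeps the out-of-bounds read from happening.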
···
  * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org>
  */
 
+/* Reported-by: Julian Groß <julian.g@posteo.de> */
+UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
+		"LaCie",
+		"2Big Quadra USB3",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_REPORT_OPCODES),
+
 /*
  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
  * commands in UAS mode. Observed with the 1.28 firmware; are there others?
drivers/usb/typec/mux/intel_pmc_mux.c (+6, -2)
···
 	req.mode_data |= (state->mode - TYPEC_STATE_MODAL) <<
 			 PMC_USB_ALTMODE_DP_MODE_SHIFT;
 
+	if (data->status & DP_STATUS_HPD_STATE)
+		req.mode_data |= PMC_USB_DP_HPD_LVL <<
+				 PMC_USB_ALTMODE_DP_MODE_SHIFT;
+
 	return pmc_usb_command(port, (void *)&req, sizeof(req));
 }
···
 	struct typec_mux_desc mux_desc = { };
 	int ret;
 
-	ret = fwnode_property_read_u8(fwnode, "usb2-port", &port->usb2_port);
+	ret = fwnode_property_read_u8(fwnode, "usb2-port-number", &port->usb2_port);
 	if (ret)
 		return ret;
 
-	ret = fwnode_property_read_u8(fwnode, "usb3-port", &port->usb3_port);
+	ret = fwnode_property_read_u8(fwnode, "usb3-port-number", &port->usb3_port);
 	if (ret)
 		return ret;
···
 			break;
 		}
 
-		vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
-		added = true;
-
-		/* Deliver to monitoring devices all correctly transmitted
-		 * packets.
+		/* Deliver to monitoring devices all packets that we
+		 * will transmit.
 		 */
 		virtio_transport_deliver_tap_pkt(pkt);
+
+		vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
+		added = true;
 
 		pkt->off += payload_len;
 		total_len += payload_len;
···
 		 * to send it with the next available buffer.
 		 */
 		if (pkt->off < pkt->len) {
+			/* We are queueing the same virtio_vsock_pkt to handle
+			 * the remaining bytes, and we want to deliver it
+			 * to monitoring devices in the next iteration.
+			 */
+			pkt->tap_delivered = false;
+
 			spin_lock_bh(&vsock->send_pkt_list_lock);
 			list_add(&pkt->list, &vsock->send_pkt_list);
 			spin_unlock_bh(&vsock->send_pkt_list_lock);
···
 
 		mutex_unlock(&vq->mutex);
 	}
+
+	/* Some packets may have been queued before the device was started,
+	 * let's kick the send worker to send them.
+	 */
+	vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
 
 	mutex_unlock(&vsock->dev.mutex);
 	return 0;
···
 	path = btrfs_alloc_path();
 	if (!path) {
 		ret = -ENOMEM;
-		goto out;
+		goto out_put_group;
 	}
 
 	/*
···
 		ret = btrfs_orphan_add(trans, BTRFS_I(inode));
 		if (ret) {
 			btrfs_add_delayed_iput(inode);
-			goto out;
+			goto out_put_group;
 		}
 		clear_nlink(inode);
 		/* One for the block groups ref */
···
 
 	ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
 	if (ret < 0)
-		goto out;
+		goto out_put_group;
 	if (ret > 0)
 		btrfs_release_path(path);
 	if (ret == 0) {
 		ret = btrfs_del_item(trans, tree_root, path);
 		if (ret)
-			goto out;
+			goto out_put_group;
 		btrfs_release_path(path);
 	}
···
 
 	ret = remove_block_group_free_space(trans, block_group);
 	if (ret)
-		goto out;
+		goto out_put_group;
 
-	btrfs_put_block_group(block_group);
+	/* Once for the block groups rbtree */
 	btrfs_put_block_group(block_group);
 
 	ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
···
 		/* once for the tree */
 		free_extent_map(em);
 	}
+
+out_put_group:
+	/* Once for the lookup reference */
+	btrfs_put_block_group(block_group);
 out:
 	if (remove_rsv)
 		btrfs_delayed_refs_rsv_release(fs_info, 1);
···
 		if (ret)
 			goto err;
 	mutex_unlock(&fs_info->unused_bg_unpin_mutex);
+	if (prev_trans)
+		btrfs_put_transaction(prev_trans);
 
 	return true;
 
 err:
 	mutex_unlock(&fs_info->unused_bg_unpin_mutex);
+	if (prev_trans)
+		btrfs_put_transaction(prev_trans);
 	btrfs_dec_block_group_ro(bg);
 	return false;
 }
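The btrfs block-group change above re-routes every early error through an `out_put_group` label so the lookup reference taken at the top of the function is dropped exactly once on all paths. That goto-based unwind idiom, reduced to an illustrative toy object (not btrfs code):

```c
#include <stdbool.h>

struct obj {
	int refs;
};

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; }

/* Every failure after obj_get() unwinds through out_put, so the
 * lookup reference is released exactly once on every path. */
static int remove_obj(struct obj *o, bool fail_early, bool fail_late)
{
	int ret = 0;

	obj_get(o);		/* lookup reference */

	if (fail_early) {
		ret = -1;
		goto out_put;
	}
	if (fail_late) {
		ret = -2;
		goto out_put;
	}

out_put:
	obj_put(o);		/* once for the lookup reference */
	return ret;
}

/* Helper for exercising the paths: refcount after one call. */
static int refs_after(bool fail_early, bool fail_late)
{
	struct obj o = { .refs = 1 };

	remove_obj(&o, fail_early, fail_late);
	return o.refs;
}
```

The bug class the patch fixes is exactly the one this shape prevents: a `goto out` that skips the `put`, leaking a reference on some error paths while the success path drops it.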
···
 	}
 
 got_it:
-	btrfs_record_root_in_trans(h, root);
-
 	if (!current->journal_info)
 		current->journal_info = h;
+
+	/*
+	 * btrfs_record_root_in_trans() needs to alloc new extents, and may
+	 * call btrfs_join_transaction() while we're also starting a
+	 * transaction.
+	 *
+	 * Thus it need to be called after current->journal_info initialized,
+	 * or we can deadlock.
+	 */
+	btrfs_record_root_in_trans(h, root);
+
 	return h;
 
join_fail:
fs/btrfs/tree-log.c (+40, -3)
···
 	const u64 ino = btrfs_ino(inode);
 	struct btrfs_path *dst_path = NULL;
 	bool dropped_extents = false;
+	u64 truncate_offset = i_size;
+	struct extent_buffer *leaf;
+	int slot;
 	int ins_nr = 0;
 	int start_slot;
 	int ret;
···
 	if (ret < 0)
 		goto out;
 
+	/*
+	 * We must check if there is a prealloc extent that starts before the
+	 * i_size and crosses the i_size boundary. This is to ensure later we
+	 * truncate down to the end of that extent and not to the i_size, as
+	 * otherwise we end up losing part of the prealloc extent after a log
+	 * replay and with an implicit hole if there is another prealloc extent
+	 * that starts at an offset beyond i_size.
+	 */
+	ret = btrfs_previous_item(root, path, ino, BTRFS_EXTENT_DATA_KEY);
+	if (ret < 0)
+		goto out;
+
+	if (ret == 0) {
+		struct btrfs_file_extent_item *ei;
+
+		leaf = path->nodes[0];
+		slot = path->slots[0];
+		ei = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
+
+		if (btrfs_file_extent_type(leaf, ei) ==
+		    BTRFS_FILE_EXTENT_PREALLOC) {
+			u64 extent_end;
+
+			btrfs_item_key_to_cpu(leaf, &key, slot);
+			extent_end = key.offset +
+				btrfs_file_extent_num_bytes(leaf, ei);
+
+			if (extent_end > i_size)
+				truncate_offset = extent_end;
+		}
+	} else {
+		ret = 0;
+	}
+
 	while (true) {
-		struct extent_buffer *leaf = path->nodes[0];
-		int slot = path->slots[0];
+		leaf = path->nodes[0];
+		slot = path->slots[0];
 
 		if (slot >= btrfs_header_nritems(leaf)) {
 			if (ins_nr > 0) {
···
 			ret = btrfs_truncate_inode_items(trans,
 							 root->log_root,
 							 &inode->vfs_inode,
-							 i_size,
+							 truncate_offset,
 							 BTRFS_EXTENT_DATA_KEY);
 		} while (ret == -EAGAIN);
 		if (ret)
fs/ceph/caps.c (+2, -1)
···
 
 	ret = try_get_cap_refs(inode, need, want, 0, flags, got);
 	/* three special error codes */
-	if (ret == -EAGAIN || ret == -EFBIG || ret == -EAGAIN)
+	if (ret == -EAGAIN || ret == -EFBIG || ret == -ESTALE)
 		ret = 0;
 	return ret;
 }
···
 			WARN_ON(1);
 			tsession = NULL;
 			target = -1;
+			mutex_lock(&session->s_mutex);
 		}
 		goto retry;
 
···
 	if (displaced)
 		put_files_struct(displaced);
 	if (!dump_interrupted()) {
+		/*
+		 * umh disabled with CONFIG_STATIC_USERMODEHELPER_PATH="" would
+		 * have this set to NULL.
+		 */
+		if (!cprm.file) {
+			pr_info("Core dump to |%s disabled\n", cn.corename);
+			goto close_fail;
+		}
 		file_start_write(cprm.file);
 		core_dumped = binfmt->core_dump(&cprm);
 		file_end_write(cprm.file);
fs/eventpoll.c (+33, -28)
···
 {
 	struct eventpoll *ep = epi->ep;
 
+	/* Fast preliminary check */
+	if (epi->next != EP_UNACTIVE_PTR)
+		return false;
+
 	/* Check that the same epi has not been just chained from another CPU */
 	if (cmpxchg(&epi->next, EP_UNACTIVE_PTR, NULL) != EP_UNACTIVE_PTR)
 		return false;
···
 	 * chained in ep->ovflist and requeued later on.
 	 */
 	if (READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR) {
-		if (epi->next == EP_UNACTIVE_PTR &&
-		    chain_epi_lockless(epi))
+		if (chain_epi_lockless(epi))
 			ep_pm_stay_awake_rcu(epi);
-		goto out_unlock;
-	}
-
-	/* If this file is already in the ready list we exit soon */
-	if (!ep_is_linked(epi) &&
-	    list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) {
-		ep_pm_stay_awake_rcu(epi);
+	} else if (!ep_is_linked(epi)) {
+		/* In the usual case, add event to ready list. */
+		if (list_add_tail_lockless(&epi->rdllink, &ep->rdllist))
+			ep_pm_stay_awake_rcu(epi);
 	}
 
 	/*
···
 {
 	int res = 0, eavail, timed_out = 0;
 	u64 slack = 0;
-	bool waiter = false;
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
···
 	 */
 	ep_reset_busy_poll_napi_id(ep);
 
-	/*
-	 * We don't have any available event to return to the caller. We need
-	 * to sleep here, and we will be woken by ep_poll_callback() when events
-	 * become available.
-	 */
-	if (!waiter) {
-		waiter = true;
-		init_waitqueue_entry(&wait, current);
-
+	do {
+		/*
+		 * Internally init_wait() uses autoremove_wake_function(),
+		 * thus wait entry is removed from the wait queue on each
+		 * wakeup. Why it is important? In case of several waiters
+		 * each new wakeup will hit the next waiter, giving it the
+		 * chance to harvest new event. Otherwise wakeup can be
+		 * lost. This is also good performance-wise, because on
+		 * normal wakeup path no need to call __remove_wait_queue()
+		 * explicitly, thus ep->lock is not taken, which halts the
+		 * event delivery.
+		 */
+		init_wait(&wait);
 		write_lock_irq(&ep->lock);
 		__add_wait_queue_exclusive(&ep->wq, &wait);
 		write_unlock_irq(&ep->lock);
-	}
 
-	for (;;) {
 		/*
 		 * We don't want to sleep if the ep_poll_callback() sends us
 		 * a wakeup in between. That's why we set the task state
···
 			timed_out = 1;
 			break;
 		}
-	}
+
+		/* We were woken up, thus go and try to harvest some events */
+		eavail = 1;
+
+	} while (0);
 
 	__set_current_state(TASK_RUNNING);
+
+	if (!list_empty_careful(&wait.entry)) {
+		write_lock_irq(&ep->lock);
+		__remove_wait_queue(&ep->wq, &wait);
+		write_unlock_irq(&ep->lock);
+	}
 
 send_events:
 	/*
···
 	if (!res && eavail &&
 	    !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
 		goto fetch_events;
-
-	if (waiter) {
-		write_lock_irq(&ep->lock);
-		__remove_wait_queue(&ep->wq, &wait);
-		write_unlock_irq(&ep->lock);
-	}
 
 	return res;
 }
fs/io_uring.c (+53, -70)
···
 	REQ_F_OVERFLOW_BIT,
 	REQ_F_POLLED_BIT,
 	REQ_F_BUFFER_SELECTED_BIT,
+	REQ_F_NO_FILE_TABLE_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
···
 	REQ_F_POLLED		= BIT(REQ_F_POLLED_BIT),
 	/* buffer already selected */
 	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
+	/* doesn't need file table for this request */
+	REQ_F_NO_FILE_TABLE	= BIT(REQ_F_NO_FILE_TABLE_BIT),
 };
 
 struct async_poll {
···
 	unsigned		needs_mm : 1;
 	/* needs req->file assigned */
 	unsigned		needs_file : 1;
-	/* needs req->file assigned IFF fd is >= 0 */
-	unsigned		fd_non_neg : 1;
 	/* hash wq insertion if file is a regular file */
 	unsigned		hash_reg_file : 1;
 	/* unbound wq insertion if file is a non-regular file */
···
 		.needs_file		= 1,
 	},
 	[IORING_OP_OPENAT] = {
-		.needs_file		= 1,
-		.fd_non_neg		= 1,
 		.file_table		= 1,
 		.needs_fs		= 1,
 	},
···
 	},
 	[IORING_OP_STATX] = {
 		.needs_mm		= 1,
-		.needs_file		= 1,
-		.fd_non_neg		= 1,
 		.needs_fs		= 1,
+		.file_table		= 1,
 	},
 	[IORING_OP_READ] = {
 		.needs_mm		= 1,
···
 		.buffer_select		= 1,
 	},
 	[IORING_OP_OPENAT2] = {
-		.needs_file		= 1,
-		.fd_non_neg		= 1,
 		.file_table		= 1,
 		.needs_fs		= 1,
 	},
···
 	struct io_kiocb *req;
 
 	req = ctx->fallback_req;
-	if (!test_and_set_bit_lock(0, (unsigned long *) ctx->fallback_req))
+	if (!test_and_set_bit_lock(0, (unsigned long *) &ctx->fallback_req))
 		return req;
 
 	return NULL;
···
 	if (likely(!io_is_fallback_req(req)))
 		kmem_cache_free(req_cachep, req);
 	else
-		clear_bit_unlock(0, (unsigned long *) req->ctx->fallback_req);
+		clear_bit_unlock(0, (unsigned long *) &req->ctx->fallback_req);
 }
 
 struct req_batch {
···
  * any file. For now, just ensure that anything potentially problematic is done
  * inline.
  */
-static bool io_file_supports_async(struct file *file)
+static bool io_file_supports_async(struct file *file, int rw)
 {
 	umode_t mode = file_inode(file)->i_mode;
 
···
 	if (S_ISREG(mode) && file->f_op != &io_uring_fops)
 		return true;
 
-	return false;
+	if (!(file->f_mode & FMODE_NOWAIT))
+		return false;
+
+	if (rw == READ)
+		return file->f_op->read_iter != NULL;
+
+	return file->f_op->write_iter != NULL;
 }
 
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
···
 	 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so
 	 * we know to async punt it even if it was opened O_NONBLOCK
 	 */
-	if (force_nonblock && !io_file_supports_async(req->file))
+	if (force_nonblock && !io_file_supports_async(req->file, READ))
 		goto copy_iov;
 
 	iov_count = iov_iter_count(&iter);
···
 		if (ret)
 			goto out_free;
 		/* any defer here is final, must blocking retry */
-		if (!(req->flags & REQ_F_NOWAIT))
+		if (!(req->flags & REQ_F_NOWAIT) &&
+		    !file_can_poll(req->file))
 			req->flags |= REQ_F_MUST_PUNT;
 		return -EAGAIN;
 	}
···
 	 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so
 	 * we know to async punt it even if it was opened O_NONBLOCK
 	 */
-	if (force_nonblock && !io_file_supports_async(req->file))
+	if (force_nonblock && !io_file_supports_async(req->file, WRITE))
 		goto copy_iov;
 
 	/* file path doesn't support NOWAIT for non-direct_IO */
···
 		if (ret)
 			goto out_free;
 		/* any defer here is final, must blocking retry */
-		req->flags |= REQ_F_MUST_PUNT;
+		if (!file_can_poll(req->file))
+			req->flags |= REQ_F_MUST_PUNT;
 		return -EAGAIN;
 	}
 }
···
 	return 0;
 }
 
-static bool io_splice_punt(struct file *file)
-{
-	if (get_pipe_info(file))
-		return false;
-	if (!io_file_supports_async(file))
-		return true;
-	return !(file->f_flags & O_NONBLOCK);
-}
-
 static int io_splice(struct io_kiocb *req, bool force_nonblock)
 {
 	struct io_splice *sp = &req->splice;
···
 	loff_t *poff_in, *poff_out;
 	long ret;
 
-	if (force_nonblock) {
-		if (io_splice_punt(in) || io_splice_punt(out))
-			return -EAGAIN;
-		flags |= SPLICE_F_NONBLOCK;
-	}
+	if (force_nonblock)
+		return -EAGAIN;
 
 	poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
 	poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
···
 	struct kstat stat;
 	int ret;
 
-	if (force_nonblock)
+	if (force_nonblock) {
+		/* only need file table for an actual valid fd */
+		if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)
+			req->flags |= REQ_F_NO_FILE_TABLE;
 		return -EAGAIN;
+	}
 
 	if (vfs_stat_set_lookup_flags(&lookup_flags, ctx->how.flags))
 		return -EINVAL;
···
 	if (io_req_cancelled(req))
 		return;
 	__io_sync_file_range(req);
-	io_put_req(req); /* put submission ref */
+	io_steal_work(req, workptr);
 }
 
 static int io_sync_file_range(struct io_kiocb *req, bool force_nonblock)
···
 	int ret;
 
 	/* Still need defer if there is pending req in defer list. */
-	if (!req_need_defer(req) && list_empty(&ctx->defer_list))
+	if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
 		return 0;
 
 	if (!req->io && io_alloc_async_ctx(req))
···
 	io_steal_work(req, workptr);
 }
 
-static int io_req_needs_file(struct io_kiocb *req, int fd)
-{
-	if (!io_op_defs[req->opcode].needs_file)
-		return 0;
-	if ((fd == -1 || fd == AT_FDCWD) && io_op_defs[req->opcode].fd_non_neg)
-		return 0;
-	return 1;
-}
-
 static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
 					      int index)
 {
···
 }
 
 static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
-			   int fd, unsigned int flags)
+			   int fd)
 {
 	bool fixed;
 
-	if (!io_req_needs_file(req, fd))
-		return 0;
-
-	fixed = (flags & IOSQE_FIXED_FILE);
+	fixed = (req->flags & REQ_F_FIXED_FILE) != 0;
 	if (unlikely(!fixed && req->needs_fixed_file))
 		return -EBADF;
 
···
 	int ret = -EBADF;
 	struct io_ring_ctx *ctx = req->ctx;
 
-	if (req->work.files)
+	if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))
 		return 0;
 	if (!ctx->ring_file)
 		return -EBADF;
···
 		       struct io_submit_state *state, bool async)
 {
 	unsigned int sqe_flags;
-	int id, fd;
+	int id;
 
 	/*
 	 * All io need record the previous position, if LINK vs DARIN,
···
 					IOSQE_ASYNC | IOSQE_FIXED_FILE |
 					IOSQE_BUFFER_SELECT | IOSQE_IO_LINK);
 
-	fd = READ_ONCE(sqe->fd);
-	return io_req_set_file(state, req, fd, sqe_flags);
+	if (!io_op_defs[req->opcode].needs_file)
+		return 0;
+
+	return io_req_set_file(state, req, READ_ONCE(sqe->fd));
 }
 
 static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
···
 	 * it could cause shutdown to hang.
 	 */
 	while (ctx->sqo_thread && !wq_has_sleeper(&ctx->sqo_wait))
-		cpu_relax();
+		cond_resched();
 
 	io_kill_timeouts(ctx);
 	io_poll_remove_all(ctx);
···
 static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 				  struct files_struct *files)
 {
-	struct io_kiocb *req;
-	DEFINE_WAIT(wait);
-
 	while (!list_empty_careful(&ctx->inflight_list)) {
-		struct io_kiocb *cancel_req = NULL;
+		struct io_kiocb *cancel_req = NULL, *req;
+		DEFINE_WAIT(wait);
 
 		spin_lock_irq(&ctx->inflight_lock);
 		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
···
 			 */
 			if (refcount_sub_and_test(2, &cancel_req->refs)) {
 				io_put_req(cancel_req);
+				finish_wait(&ctx->inflight_wait, &wait);
 				continue;
 			}
 		}
···
 		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
 		io_put_req(cancel_req);
 		schedule();
+		finish_wait(&ctx->inflight_wait, &wait);
 	}
-	finish_wait(&ctx->inflight_wait, &wait);
 }
 
 static int io_uring_flush(struct file *file, void *data)
···
 	return ret;
 }
 
-static int io_uring_create(unsigned entries, struct io_uring_params *p)
+static int io_uring_create(unsigned entries, struct io_uring_params *p,
+			   struct io_uring_params __user *params)
 {
 	struct user_struct *user = NULL;
 	struct io_ring_ctx *ctx;
···
 	p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
 	p->cq_off.cqes = offsetof(struct io_rings, cqes);
 
+	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
+			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
+			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL;
+
+	if (copy_to_user(params, p, sizeof(*p))) {
+		ret = -EFAULT;
+		goto err;
+	}
 	/*
 	 * Install ring fd as the very last thing, so we don't risk someone
 	 * having closed it before we finish setup
···
 	if (ret < 0)
 		goto err;
 
-	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
-			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
-			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL;
 	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
 	return ret;
 err:
···
 static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 {
 	struct io_uring_params p;
-	long ret;
 	int i;
 
 	if (copy_from_user(&p, params, sizeof(p)))
···
 			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ))
 		return -EINVAL;
 
-	ret = io_uring_create(entries, &p);
-	if (ret < 0)
-		return ret;
-
-	if (copy_to_user(params, &p, sizeof(p)))
-		return -EFAULT;
-
-	return ret;
+	return io_uring_create(entries, &p, params);
 }
 
 SYSCALL_DEFINE2(io_uring_setup, u32, entries,
+8
fs/ioctl.c
···5555static int ioctl_fibmap(struct file *filp, int __user *p)5656{5757 struct inode *inode = file_inode(filp);5858+ struct super_block *sb = inode->i_sb;5859 int error, ur_block;5960 sector_t block;6061···71707271 block = ur_block;7372 error = bmap(inode, &block);7373+7474+ if (block > INT_MAX) {7575+ error = -ERANGE;7676+ pr_warn_ratelimited("[%s/%d] FS: %s File: %pD4 would truncate fibmap result\n",7777+ current->comm, task_pid_nr(current),7878+ sb->s_id, filp);7979+ }74807581 if (error)7682 ur_block = 0;
···329329330330/**331331 * struct dma_buf_attach_ops - importer operations for an attachment332332- * @move_notify: [optional] notification that the DMA-buf is moving333332 *334333 * Attachment operations implemented by the importer.335334 */336335struct dma_buf_attach_ops {337336 /**338338- * @move_notify337337+ * @move_notify: [optional] notification that the DMA-buf is moving339338 *340339 * If this callback is provided the framework can avoid pinning the341340 * backing store while mappings exists.
+6-6
include/linux/dmaengine.h
···8383/**8484 * Interleaved Transfer Request8585 * ----------------------------8686- * A chunk is collection of contiguous bytes to be transfered.8686+ * A chunk is a collection of contiguous bytes to be transferred.8787 * The gap(in bytes) between two chunks is called inter-chunk-gap(ICG).8888- * ICGs may or maynot change between chunks.8888+ * ICGs may or may not change between chunks.8989 * A FRAME is the smallest series of contiguous {chunk,icg} pairs,9090 * that when repeated an integral number of times, specifies the transfer.9191 * A transfer template is specification of a Frame, the number of times···341341 * @chan: driver channel device342342 * @device: sysfs device343343 * @dev_id: parent dma_device dev_id344344- * @idr_ref: reference count to gate release of dma_device dev_id345344 */346345struct dma_chan_dev {347346 struct dma_chan *chan;348347 struct device device;349348 int dev_id;350350- atomic_t *idr_ref;351349};352350353351/**···833835 int dev_id;834836 struct device *dev;835837 struct module *owner;838838+ struct ida chan_ida;839839+ struct mutex chan_mutex; /* to protect chan_ida */836840837841 u32 src_addr_widths;838842 u32 dst_addr_widths;···10691069 * dmaengine_synchronize() needs to be called before it is safe to free10701070 * any memory that is accessed by previously submitted descriptors or before10711071 * freeing any resources accessed from within the completion callback of any10721072- * perviously submitted descriptors.10721072+ * previously submitted descriptors.10731073 *10741074 * This function can be called from atomic context as well as from within a10751075 * complete callback of a descriptor submitted on the same channel.···10911091 *10921092 * Synchronizes to the DMA channel termination to the current context. 
When this10931093 * function returns it is guaranteed that all transfers for previously issued10941094- * descriptors have stopped and and it is safe to free the memory assoicated10941094+ * descriptors have stopped and it is safe to free the memory associated10951095 * with them. Furthermore it is guaranteed that all complete callback functions10961096 * for a previously submitted descriptor have finished running and it is safe to10971097 * free resources accessed from within the complete callbacks.
···5353 * @MHI_CHAIN: Linked transfer5454 */5555enum mhi_flags {5656- MHI_EOB,5757- MHI_EOT,5858- MHI_CHAIN,5656+ MHI_EOB = BIT(0),5757+ MHI_EOT = BIT(1),5858+ MHI_CHAIN = BIT(2),5959};60606161/**···335335 * @syserr_worker: System error worker336336 * @state_event: State change event337337 * @status_cb: CB function to notify power states of the device (required)338338- * @link_status: CB function to query link status of the device (required)339338 * @wake_get: CB function to assert device wake (optional)340339 * @wake_put: CB function to de-assert device wake (optional)341340 * @wake_toggle: CB function to assert and de-assert device wake (optional)342341 * @runtime_get: CB function to controller runtime resume (required)343343- * @runtimet_put: CB function to decrement pm usage (required)342342+ * @runtime_put: CB function to decrement pm usage (required)344343 * @map_single: CB function to create TRE buffer345344 * @unmap_single: CB function to destroy TRE buffer345345+ * @read_reg: Read a MHI register via the physical link (required)346346+ * @write_reg: Write a MHI register via the physical link (required)346347 * @buffer_len: Bounce buffer length347348 * @bounce_buf: Use of bounce buffer348349 * @fbc_download: MHI host needs to do complete image transfer (optional)···418417419418 void (*status_cb)(struct mhi_controller *mhi_cntrl,420419 enum mhi_callback cb);421421- int (*link_status)(struct mhi_controller *mhi_cntrl);422420 void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);423421 void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);424422 void (*wake_toggle)(struct mhi_controller *mhi_cntrl);···427427 struct mhi_buf_info *buf);428428 void (*unmap_single)(struct mhi_controller *mhi_cntrl,429429 struct mhi_buf_info *buf);430430+ int (*read_reg)(struct mhi_controller *mhi_cntrl, void __iomem *addr,431431+ u32 *out);432432+ void (*write_reg)(struct mhi_controller *mhi_cntrl, void __iomem *addr,433433+ u32 val);430434431435 size_t 
buffer_len;432436 bool bounce_buf;
···7171#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)7272 struct dentry *cl_debugfs; /* debugfs directory */7373#endif7474- struct rpc_xprt_iter cl_xpi;7474+ /* cl_work is only needed after cl_xpi is no longer used,7575+ * and they are of similar size7676+ */7777+ union {7878+ struct rpc_xprt_iter cl_xpi;7979+ struct work_struct cl_work;8080+ };7581 const struct cred *cl_cred;7682};7783···242236 (task->tk_msg.rpc_proc->p_decode != NULL);243237}244238239239+static inline void rpc_task_close_connection(struct rpc_task *task)240240+{241241+ if (task->tk_xprt)242242+ xprt_force_disconnect(task->tk_xprt);243243+}245244#endif /* _LINUX_SUNRPC_CLNT_H */
···6666 int read;6767 int flags;6868 /* Data points here */6969- unsigned long data[0];6969+ unsigned long data[];7070};71717272/* Values for .flags field of tty_buffer */
+24-2
include/linux/virtio_net.h
···33#define _LINUX_VIRTIO_NET_H4455#include <linux/if_vlan.h>66+#include <uapi/linux/tcp.h>77+#include <uapi/linux/udp.h>68#include <uapi/linux/virtio_net.h>79810static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,···3028 bool little_endian)3129{3230 unsigned int gso_type = 0;3131+ unsigned int thlen = 0;3232+ unsigned int ip_proto;33333434 if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {3535 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {3636 case VIRTIO_NET_HDR_GSO_TCPV4:3737 gso_type = SKB_GSO_TCPV4;3838+ ip_proto = IPPROTO_TCP;3939+ thlen = sizeof(struct tcphdr);3840 break;3941 case VIRTIO_NET_HDR_GSO_TCPV6:4042 gso_type = SKB_GSO_TCPV6;4343+ ip_proto = IPPROTO_TCP;4444+ thlen = sizeof(struct tcphdr);4145 break;4246 case VIRTIO_NET_HDR_GSO_UDP:4347 gso_type = SKB_GSO_UDP;4848+ ip_proto = IPPROTO_UDP;4949+ thlen = sizeof(struct udphdr);4450 break;4551 default:4652 return -EINVAL;···67576858 if (!skb_partial_csum_set(skb, start, off))6959 return -EINVAL;6060+6161+ if (skb_transport_offset(skb) + thlen > skb_headlen(skb))6262+ return -EINVAL;7063 } else {7164 /* gso packets without NEEDS_CSUM do not set transport_offset.7265 * probe and drop if does not match one of the above types.7366 */7467 if (gso_type && skb->network_header) {6868+ struct flow_keys_basic keys;6969+7570 if (!skb->protocol)7671 virtio_net_hdr_set_proto(skb, hdr);7772retry:7878- skb_probe_transport_header(skb);7979- if (!skb_transport_header_was_set(skb)) {7373+ if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,7474+ NULL, 0, 0, 0,7575+ 0)) {8076 /* UFO does not specify ipv4 or 6: try both */8177 if (gso_type & SKB_GSO_UDP &&8278 skb->protocol == htons(ETH_P_IP)) {···9175 }9276 return -EINVAL;9377 }7878+7979+ if (keys.control.thoff + thlen > skb_headlen(skb) ||8080+ keys.basic.ip_proto != ip_proto)8181+ return -EINVAL;8282+8383+ skb_set_transport_header(skb, keys.control.thoff);9484 }9585 }9686
···437437 return atomic_read(&net->ipv4.rt_genid);438438}439439440440+#if IS_ENABLED(CONFIG_IPV6)441441+static inline int rt_genid_ipv6(const struct net *net)442442+{443443+ return atomic_read(&net->ipv6.fib6_sernum);444444+}445445+#endif446446+440447static inline void rt_genid_bump_ipv4(struct net *net)441448{442449 atomic_inc(&net->ipv4.rt_genid);
+1
include/net/sch_generic.h
···407407 struct mutex lock;408408 struct list_head chain_list;409409 u32 index; /* block index for shared blocks */410410+ u32 classid; /* which class this block belongs to */410411 refcount_t refcnt;411412 struct net *net;412413 struct Qdisc *q;
+1
include/soc/mscc/ocelot.h
···502502 unsigned int num_stats;503503504504 int shared_queue_sz;505505+ int num_mact_rows;505506506507 struct net_device *hw_bridge_dev;507508 u16 bridge_mask;
+1-1
include/trace/events/gpu_mem.h
···2424 *2525 * @pid: Put 0 for global total, while positive pid for process total.2626 *2727- * @size: Virtual size of the allocation in bytes.2727+ * @size: Size of the allocation in bytes.2828 *2929 */3030TRACE_EVENT(gpu_mem_total,
···39394040#define DMA_BUF_BASE 'b'4141#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)4242+4343+/* 32/64bitness of this uapi was botched in android, there's no difference4444+ * between them in actual uapi, they're just different numbers.4545+ */4246#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)4747+#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32)4848+#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64)43494450#endif
+1-1
include/uapi/linux/fiemap.h
···3434 __u32 fm_mapped_extents;/* number of extents that were mapped (out) */3535 __u32 fm_extent_count; /* size of fm_extents array (in) */3636 __u32 fm_reserved;3737- struct fiemap_extent fm_extents[]; /* array of mapped extents (out) */3737+ struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */3838};39394040#define FIEMAP_MAX_OFFSET (~0ULL)
+2-2
include/uapi/linux/hyperv.h
···119119120120struct hv_fcopy_hdr {121121 __u32 operation;122122- uuid_le service_id0; /* currently unused */123123- uuid_le service_id1; /* currently unused */122122+ __u8 service_id0[16]; /* currently unused */123123+ __u8 service_id1[16]; /* currently unused */124124} __attribute__((packed));125125126126#define OVER_WRITE 0x1
+3-3
include/uapi/linux/if_arcnet.h
···6060 __u8 proto; /* protocol ID field - varies */6161 __u8 split_flag; /* for use with split packets */6262 __be16 sequence; /* sequence number */6363- __u8 payload[]; /* space remaining in packet (504 bytes)*/6363+ __u8 payload[0]; /* space remaining in packet (504 bytes)*/6464};6565#define RFC1201_HDR_SIZE 46666···6969 */7070struct arc_rfc1051 {7171 __u8 proto; /* ARC_P_RFC1051_ARP/RFC1051_IP */7272- __u8 payload[]; /* 507 bytes */7272+ __u8 payload[0]; /* 507 bytes */7373};7474#define RFC1051_HDR_SIZE 17575···8080struct arc_eth_encap {8181 __u8 proto; /* Always ARC_P_ETHER */8282 struct ethhdr eth; /* standard ethernet header (yuck!) */8383- __u8 payload[]; /* 493 bytes */8383+ __u8 payload[0]; /* 493 bytes */8484};8585#define ETH_ENCAP_HDR_SIZE 148686
···3939config CC_HAS_ASM_INLINE4040 def_bool $(success,echo 'void foo(void) { asm inline (""); }' | $(CC) -x c - -c -o /dev/null)41414242-config CC_HAS_WARN_MAYBE_UNINITIALIZED4343- def_bool $(cc-option,-Wmaybe-uninitialized)4444- help4545- GCC >= 4.7 supports this option.4646-4747-config CC_DISABLE_WARN_MAYBE_UNINITIALIZED4848- bool4949- depends on CC_HAS_WARN_MAYBE_UNINITIALIZED5050- default CC_IS_GCC && GCC_VERSION < 40900 # unreliable for GCC < 4.95151- help5252- GCC's -Wmaybe-uninitialized is not reliable by definition.5353- Lots of false positive warnings are produced in some cases.5454-5555- If this option is enabled, -Wno-maybe-uninitialzed is passed5656- to the compiler to suppress maybe-uninitialized warnings.5757-5842config CONSTRUCTORS5943 bool6044 depends on !UML···12411257config CC_OPTIMIZE_FOR_PERFORMANCE_O312421258 bool "Optimize more for performance (-O3)"12431259 depends on ARC12441244- imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives12451260 help12461261 Choosing this option will pass "-O3" to your compiler to optimize12471262 the kernel yet more for performance.1248126312491264config CC_OPTIMIZE_FOR_SIZE12501265 bool "Optimize for size (-Os)"12511251- imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives12521266 help12531267 Choosing this option will pass "-Os" to your compiler resulting12541268 in a smaller kernel.
+1-1
init/initramfs.c
···542542}543543544544#ifdef CONFIG_KEXEC_CORE545545-static bool kexec_free_initrd(void)545545+static bool __init kexec_free_initrd(void)546546{547547 unsigned long crashk_start = (unsigned long)__va(crashk_res.start);548548 unsigned long crashk_end = (unsigned long)__va(crashk_res.end);
+52-17
init/main.c
···257257258258early_param("loglevel", loglevel);259259260260+#ifdef CONFIG_BLK_DEV_INITRD261261+static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)262262+{263263+ u32 size, csum;264264+ char *data;265265+ u32 *hdr;266266+267267+ if (!initrd_end)268268+ return NULL;269269+270270+ data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN;271271+ if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN))272272+ return NULL;273273+274274+ hdr = (u32 *)(data - 8);275275+ size = hdr[0];276276+ csum = hdr[1];277277+278278+ data = ((void *)hdr) - size;279279+ if ((unsigned long)data < initrd_start) {280280+ pr_err("bootconfig size %d is greater than initrd size %ld\n",281281+ size, initrd_end - initrd_start);282282+ return NULL;283283+ }284284+285285+ /* Remove bootconfig from initramfs/initrd */286286+ initrd_end = (unsigned long)data;287287+ if (_size)288288+ *_size = size;289289+ if (_csum)290290+ *_csum = csum;291291+292292+ return data;293293+}294294+#else295295+static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)296296+{297297+ return NULL;298298+}299299+#endif300300+260301#ifdef CONFIG_BOOT_CONFIG261302262303char xbc_namebuf[XBC_KEYLEN_MAX] __initdata;···398357 int pos;399358 u32 size, csum;400359 char *data, *copy;401401- u32 *hdr;402360 int ret;361361+362362+ data = get_boot_config_from_initrd(&size, &csum);363363+ if (!data)364364+ goto not_found;403365404366 strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);405367 parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,···411367 if (!bootconfig_found)412368 return;413369414414- if (!initrd_end)415415- goto not_found;416416-417417- data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN;418418- if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN))419419- goto not_found;420420-421421- hdr = (u32 *)(data - 8);422422- size = hdr[0];423423- csum = hdr[1];424424-425370 if (size >= XBC_DATA_MAX) {426371 pr_err("bootconfig size %d greater than max size %d\n",427372 size, 
XBC_DATA_MAX);428373 return;429374 }430430-431431- data = ((void *)hdr) - size;432432- if ((unsigned long)data < initrd_start)433433- goto not_found;434375435376 if (boot_config_checksum((unsigned char *)data, size) != csum) {436377 pr_err("bootconfig checksum failed\n");···449420not_found:450421 pr_err("'bootconfig' found on command line, but no bootconfig found\n");451422}423423+452424#else453453-#define setup_boot_config(cmdline) do { } while (0)425425+426426+static void __init setup_boot_config(const char *cmdline)427427+{428428+ /* Remove bootconfig data from initrd */429429+ get_boot_config_from_initrd(NULL, NULL);430430+}454431455432static int __init warn_bootconfig(char *str)456433{
+26-8
ipc/mqueue.c
···142142143143 struct sigevent notify;144144 struct pid *notify_owner;145145+ u32 notify_self_exec_id;145146 struct user_namespace *notify_user_ns;146147 struct user_struct *user; /* user who created, for accounting */147148 struct sock *notify_sock;···774773 * synchronously. */775774 if (info->notify_owner &&776775 info->attr.mq_curmsgs == 1) {777777- struct kernel_siginfo sig_i;778776 switch (info->notify.sigev_notify) {779777 case SIGEV_NONE:780778 break;781781- case SIGEV_SIGNAL:782782- /* sends signal */779779+ case SIGEV_SIGNAL: {780780+ struct kernel_siginfo sig_i;781781+ struct task_struct *task;782782+783783+ /* do_mq_notify() accepts sigev_signo == 0, why?? */784784+ if (!info->notify.sigev_signo)785785+ break;783786784787 clear_siginfo(&sig_i);785788 sig_i.si_signo = info->notify.sigev_signo;786789 sig_i.si_errno = 0;787790 sig_i.si_code = SI_MESGQ;788791 sig_i.si_value = info->notify.sigev_value;789789- /* map current pid/uid into info->owner's namespaces */790792 rcu_read_lock();793793+ /* map current pid/uid into info->owner's namespaces */791794 sig_i.si_pid = task_tgid_nr_ns(current,792795 ns_of_pid(info->notify_owner));793793- sig_i.si_uid = from_kuid_munged(info->notify_user_ns, current_uid());796796+ sig_i.si_uid = from_kuid_munged(info->notify_user_ns,797797+ current_uid());798798+ /*799799+ * We can't use kill_pid_info(), this signal should800800+ * bypass check_kill_permission(). 
It is from kernel801801+ * but si_fromuser() can't know this.802802+ * We do check the self_exec_id, to avoid sending803803+ * signals to programs that don't expect them.804804+ */805805+ task = pid_task(info->notify_owner, PIDTYPE_TGID);806806+ if (task && task->self_exec_id ==807807+ info->notify_self_exec_id) {808808+ do_send_sig_info(info->notify.sigev_signo,809809+ &sig_i, task, PIDTYPE_TGID);810810+ }794811 rcu_read_unlock();795795-796796- kill_pid_info(info->notify.sigev_signo,797797- &sig_i, info->notify_owner);798812 break;813813+ }799814 case SIGEV_THREAD:800815 set_cookie(info->notify_cookie, NOTIFY_WOKENUP);801816 netlink_sendskb(info->notify_sock, info->notify_cookie);···14001383 info->notify.sigev_signo = notification->sigev_signo;14011384 info->notify.sigev_value = notification->sigev_value;14021385 info->notify.sigev_notify = SIGEV_SIGNAL;13861386+ info->notify_self_exec_id = current->self_exec_id;14031387 break;14041388 }14051389
+2-2
kernel/kcov.c
···740740 * kcov_remote_handle() with KCOV_SUBSYSTEM_COMMON as the subsystem id and an741741 * arbitrary 4-byte non-zero number as the instance id). This common handle742742 * then gets saved into the task_struct of the process that issued the743743- * KCOV_REMOTE_ENABLE ioctl. When this proccess issues system calls that spawn744744- * kernel threads, the common handle must be retrived via kcov_common_handle()743743+ * KCOV_REMOTE_ENABLE ioctl. When this process issues system calls that spawn744744+ * kernel threads, the common handle must be retrieved via kcov_common_handle()745745 * and passed to the spawned threads via custom annotations. Those kernel746746 * threads must in turn be annotated with kcov_remote_start(common_handle) and747747 * kcov_remote_stop(). All of the threads that are spawned by the same process
···466466config PROFILE_ALL_BRANCHES467467 bool "Profile all if conditionals" if !FORTIFY_SOURCE468468 select TRACE_BRANCH_PROFILING469469- imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives470469 help471470 This tracer profiles all branch conditions. Every if ()472471 taken in the kernel is recorded whether it hit or miss.
+24-6
kernel/trace/preemptirq_delay_test.c
···113113114114 for (i = 0; i < s; i++)115115 (testfuncs[i])(i);116116+117117+ set_current_state(TASK_INTERRUPTIBLE);118118+ while (!kthread_should_stop()) {119119+ schedule();120120+ set_current_state(TASK_INTERRUPTIBLE);121121+ }122122+123123+ __set_current_state(TASK_RUNNING);124124+116125 return 0;117126}118127119119-static struct task_struct *preemptirq_start_test(void)128128+static int preemptirq_run_test(void)120129{130130+ struct task_struct *task;131131+121132 char task_name[50];122133123134 snprintf(task_name, sizeof(task_name), "%s_test", test_mode);124124- return kthread_run(preemptirq_delay_run, NULL, task_name);135135+ task = kthread_run(preemptirq_delay_run, NULL, task_name);136136+ if (IS_ERR(task))137137+ return PTR_ERR(task);138138+ if (task)139139+ kthread_stop(task);140140+ return 0;125141}126142127143128144static ssize_t trigger_store(struct kobject *kobj, struct kobj_attribute *attr,129145 const char *buf, size_t count)130146{131131- preemptirq_start_test();147147+ ssize_t ret;148148+149149+ ret = preemptirq_run_test();150150+ if (ret)151151+ return ret;132152 return count;133153}134154···168148169149static int __init preemptirq_delay_init(void)170150{171171- struct task_struct *test_task;172151 int retval;173152174174- test_task = preemptirq_start_test();175175- retval = PTR_ERR_OR_ZERO(test_task);153153+ retval = preemptirq_run_test();176154 if (retval != 0)177155 return retval;178156
+15-1
kernel/trace/trace.c
···947947EXPORT_SYMBOL_GPL(__trace_bputs);948948949949#ifdef CONFIG_TRACER_SNAPSHOT950950-void tracing_snapshot_instance_cond(struct trace_array *tr, void *cond_data)950950+static void tracing_snapshot_instance_cond(struct trace_array *tr,951951+ void *cond_data)951952{952953 struct tracer *tracer = tr->current_trace;953954 unsigned long flags;···85268525 */85278526 allocate_snapshot = false;85288527#endif85288528+85298529+ /*85308530+ * Because of some magic with the way alloc_percpu() works on85318531+ * x86_64, we need to synchronize the pgd of all the tables,85328532+ * otherwise the trace events that happen in x86_64 page fault85338533+ * handlers can't cope with the chance that85348534+ * alloc_percpu()'d memory might be touched in the page fault trace85358535+ * event. Oh, and we need to audit all other alloc_percpu() and vmalloc()85368536+ * calls in tracing, because something might get triggered within a85378537+ * page fault trace event!85388538+ */85398539+ vmalloc_sync_mappings();85408540+85298541 return 0;85308542}85318543
+10-14
kernel/trace/trace_boot.c
···9595 struct xbc_node *anode;9696 char buf[MAX_BUF_LEN];9797 const char *val;9898- int ret;9999-100100- kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);101101-102102- ret = kprobe_event_gen_cmd_start(&cmd, event, NULL);103103- if (ret)104104- return ret;9898+ int ret = 0;10599106100 xbc_node_for_each_array_value(node, "probes", anode, val) {107107- ret = kprobe_event_add_field(&cmd, val);108108- if (ret)109109- return ret;110110- }101101+ kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);111102112112- ret = kprobe_event_gen_cmd_end(&cmd);113113- if (ret)114114- pr_err("Failed to add probe: %s\n", buf);103103+ ret = kprobe_event_gen_cmd_start(&cmd, event, val);104104+ if (ret)105105+ break;106106+107107+ ret = kprobe_event_gen_cmd_end(&cmd);108108+ if (ret)109109+ pr_err("Failed to add probe: %s\n", buf);110110+ }115111116112 return ret;117113}
+7-1
kernel/trace/trace_kprobe.c
···453453454454static bool within_notrace_func(struct trace_kprobe *tk)455455{456456- unsigned long addr = addr = trace_kprobe_address(tk);456456+ unsigned long addr = trace_kprobe_address(tk);457457 char symname[KSYM_NAME_LEN], *p;458458459459 if (!__within_notrace_func(addr))···940940 * complete command or only the first part of it; in the latter case,941941 * kprobe_event_add_fields() can be used to add more fields following this.942942 *943943+ * Unlike synth_event_gen_cmd_start(), @loc must be specified. This944944+ * returns -EINVAL if @loc == NULL.945945+ *943946 * Return: 0 if successful, error otherwise.944947 */945948int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd, bool kretprobe,···954951 int ret;955952956953 if (cmd->type != DYNEVENT_TYPE_KPROBE)954954+ return -EINVAL;955955+956956+ if (!loc)957957 return -EINVAL;958958959959 if (kretprobe)
+5
kernel/umh.c
···544544 * Runs a user-space application. The application is started545545 * asynchronously if wait is not set, and runs as a child of system workqueues.546546 * (ie. it runs with full root capabilities and optimized affinity).547547+ *548548+ * Note: successful return value does not guarantee the helper was called at549549+ * all. You can't rely on sub_info->{init,cleanup} being called even for550550+ * UMH_WAIT_* wait modes as STATIC_USERMODEHELPER_PATH="" turns all helpers551551+ * into a successful no-op.547552 */548553int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait)549554{
+7-10
lib/Kconfig.ubsan
···6060 Enabling this option will get kernel image size increased6161 significantly.62626363-config UBSAN_NO_ALIGNMENT6464- bool "Disable checking of pointers alignment"6565- default y if HAVE_EFFICIENT_UNALIGNED_ACCESS6666- help6767- This option disables the check of unaligned memory accesses.6868- This option should be used when building allmodconfig.6969- Disabling this option on architectures that support unaligned7070- accesses may produce a lot of false positives.7171-7263config UBSAN_ALIGNMENT7373- def_bool !UBSAN_NO_ALIGNMENT6464+ bool "Enable checks for pointers alignment"6565+ default !HAVE_EFFICIENT_UNALIGNED_ACCESS6666+ depends on !X86 || !COMPILE_TEST6767+ help6868+ This option enables the check of unaligned memory accesses.6969+ Enabling this option on architectures that support unaligned7070+ accesses may produce a lot of false positives.74717572config TEST_UBSAN7673 tristate "Module for testing for undefined behavior detection"
···2121EXPORT_SYMBOL_GPL(noop_backing_dev_info);22222323static struct class *bdi_class;2424-const char *bdi_unknown_name = "(unknown)";2424+static const char *bdi_unknown_name = "(unknown)";25252626/*2727 * bdi_lock protects bdi_tree and updates to bdi_list. bdi_list has RCU···938938 if (bdi->dev) /* The driver needs to use separate queues per device */939939 return 0;940940941941- dev = device_create_vargs(bdi_class, NULL, MKDEV(0, 0), bdi, fmt, args);941941+ vsnprintf(bdi->dev_name, sizeof(bdi->dev_name), fmt, args);942942+ dev = device_create(bdi_class, NULL, MKDEV(0, 0), bdi, bdi->dev_name);942943 if (IS_ERR(dev))943944 return PTR_ERR(dev);944945···10431042 kref_put(&bdi->refcnt, release_bdi);10441043}10451044EXPORT_SYMBOL(bdi_put);10451045+10461046+const char *bdi_dev_name(struct backing_dev_info *bdi)10471047+{10481048+ if (!bdi || !bdi->dev)10491049+ return bdi_unknown_name;10501050+ return bdi->dev_name;10511051+}10521052+EXPORT_SYMBOL_GPL(bdi_dev_name);1046105310471054static wait_queue_head_t congestion_wqh[2] = {10481055 __WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]),
+9-6
mm/memcontrol.c
···49904990 unsigned int size;49914991 int node;49924992 int __maybe_unused i;49934993+ long error = -ENOMEM;4993499449944995 size = sizeof(struct mem_cgroup);49954996 size += nr_node_ids * sizeof(struct mem_cgroup_per_node *);4996499749974998 memcg = kzalloc(size, GFP_KERNEL);49984999 if (!memcg)49994999- return NULL;50005000+ return ERR_PTR(error);5000500150015002 memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,50025003 1, MEM_CGROUP_ID_MAX,50035004 GFP_KERNEL);50045004- if (memcg->id.id < 0)50055005+ if (memcg->id.id < 0) {50065006+ error = memcg->id.id;50055007 goto fail;50085008+ }5006500950075010 memcg->vmstats_local = alloc_percpu(struct memcg_vmstats_percpu);50085011 if (!memcg->vmstats_local)···50495046fail:50505047 mem_cgroup_id_remove(memcg);50515048 __mem_cgroup_free(memcg);50525052- return NULL;50495049+ return ERR_PTR(error);50535050}5054505150555052static struct cgroup_subsys_state * __ref···50605057 long error = -ENOMEM;5061505850625059 memcg = mem_cgroup_alloc();50635063- if (!memcg)50645064- return ERR_PTR(error);50605060+ if (IS_ERR(memcg))50615061+ return ERR_CAST(memcg);5065506250665063 WRITE_ONCE(memcg->high, PAGE_COUNTER_MAX);50675064 memcg->soft_limit = PAGE_COUNTER_MAX;···51115108fail:51125109 mem_cgroup_id_remove(memcg);51135110 mem_cgroup_free(memcg);51145114- return ERR_PTR(-ENOMEM);51115111+ return ERR_PTR(error);51155112}5116511351175114static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
+9
mm/page_alloc.c
···16071607 if (!__pageblock_pfn_to_page(block_start_pfn,16081608 block_end_pfn, zone))16091609 return;16101610+ cond_resched();16101611 }1611161216121613 /* We confirm that there is no hole */···24002399 unsigned long max_boost;2401240024022401 if (!watermark_boost_factor)24022402+ return;24032403+ /*24042404+ * Don't bother in zones that are unlikely to produce results.24052405+ * On small machines, including kdump capture kernels running24062406+ * in a small area, boosting the watermark can cause an out of24072407+ * memory situation immediately.24082408+ */24092409+ if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))24032410 return;2404241124052412 max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
+10-4
mm/percpu.c
···8080#include <linux/workqueue.h>8181#include <linux/kmemleak.h>8282#include <linux/sched.h>8383+#include <linux/sched/mm.h>83848485#include <asm/cacheflush.h>8586#include <asm/sections.h>···15581557static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,15591558 gfp_t gfp)15601559{15611561- /* whitelisted flags that can be passed to the backing allocators */15621562- gfp_t pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);15631563- bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;15641564- bool do_warn = !(gfp & __GFP_NOWARN);15601560+ gfp_t pcpu_gfp;15611561+ bool is_atomic;15621562+ bool do_warn;15651563 static int warn_limit = 10;15661564 struct pcpu_chunk *chunk, *next;15671565 const char *err;···15681568 unsigned long flags;15691569 void __percpu *ptr;15701570 size_t bits, bit_align;15711571+15721572+ gfp = current_gfp_context(gfp);15731573+ /* whitelisted flags that can be passed to the backing allocators */15741574+ pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);15751575+ is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;15761576+ do_warn = !(gfp & __GFP_NOWARN);1571157715721578 /*15731579 * There is now a minimum allocation size of PCPU_MIN_ALLOC_SIZE,
+30-15
mm/slub.c
···
 	metadata_access_disable();
 }

+/*
+ * See comment in calculate_sizes().
+ */
+static inline bool freeptr_outside_object(struct kmem_cache *s)
+{
+	return s->offset >= s->inuse;
+}
+
+/*
+ * Return offset of the end of info block which is inuse + free pointer if
+ * not overlapping with object.
+ */
+static inline unsigned int get_info_end(struct kmem_cache *s)
+{
+	if (freeptr_outside_object(s))
+		return s->inuse + sizeof(void *);
+	else
+		return s->inuse;
+}
+
 static struct track *get_track(struct kmem_cache *s, void *object,
 			       enum track_item alloc)
 {
 	struct track *p;

-	if (s->offset)
-		p = object + s->offset + sizeof(void *);
-	else
-		p = object + s->inuse;
+	p = object + get_info_end(s);

 	return p + alloc;
 }
···
 	print_section(KERN_ERR, "Redzone ", p + s->object_size,
 		s->inuse - s->object_size);

-	if (s->offset)
-		off = s->offset + sizeof(void *);
-	else
-		off = s->inuse;
+	off = get_info_end(s);

 	if (s->flags & SLAB_STORE_USER)
 		off += 2 * sizeof(struct track);
···
  * 	object address
  * 	Bytes of the object to be managed.
  * 	If the freepointer may overlay the object then the free
- * 	pointer is the first word of the object.
+ *	pointer is at the middle of the object.
  *
  * 	Poisoning uses 0x6b (POISON_FREE) and the last byte is
  * 	0xa5 (POISON_END)
···

 static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
 {
-	unsigned long off = s->inuse;	/* The end of info */
-
-	if (s->offset)
-		/* Freepointer is placed after the object. */
-		off += sizeof(void *);
+	unsigned long off = get_info_end(s);	/* The end of info */

 	if (s->flags & SLAB_STORE_USER)
 		/* We also have user information there */
···
 		check_pad_bytes(s, page, p);
 	}

-	if (!s->offset && val == SLUB_RED_ACTIVE)
+	if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE)
 		/*
 		 * Object and freepointer overlap. Cannot check
 		 * freepointer while object is allocated.
···
 	 *
 	 * This is the case if we do RCU, have a constructor or
 	 * destructor or are poisoning the objects.
+	 *
+	 * The assumption that s->offset >= s->inuse means free
+	 * pointer is outside of the object is used in the
+	 * freeptr_outside_object() function. If that is no
+	 * longer true, the function needs to be modified.
 	 */
 	s->offset = size;
 	size += sizeof(void *);
-1
mm/vmscan.c
···
  * @dst:	The temp list to put pages on to.
  * @nr_scanned:	The number of pages that were scanned.
  * @sc:		The scan_control struct for this reclaim session
- * @mode:	One of the LRU isolation modes
  * @lru:	LRU list id for isolating
  *
  * returns how many pages were moved onto *@dst.
+10-10
net/atm/common.c
···

 	set_bit(ATM_VF_CLOSE, &vcc->flags);
 	clear_bit(ATM_VF_READY, &vcc->flags);
-	if (vcc->dev) {
-		if (vcc->dev->ops->close)
-			vcc->dev->ops->close(vcc);
-		if (vcc->push)
-			vcc->push(vcc, NULL); /* atmarpd has no push */
-		module_put(vcc->owner);
+	if (vcc->dev && vcc->dev->ops->close)
+		vcc->dev->ops->close(vcc);
+	if (vcc->push)
+		vcc->push(vcc, NULL); /* atmarpd has no push */
+	module_put(vcc->owner);

-		while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
-			atm_return(vcc, skb->truesize);
-			kfree_skb(skb);
-		}
+	while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
+		atm_return(vcc, skb->truesize);
+		kfree_skb(skb);
+	}

+	if (vcc->dev && vcc->dev->ops->owner) {
 		module_put(vcc->dev->ops->owner);
 		atm_dev_put(vcc->dev);
 	}
···
 					  v - 1, rtm_cmd);
 			v_change_start = 0;
 		}
+		cond_resched();
 	}
 	/* v_change_start is set only if the last/whole range changed */
 	if (v_change_start)
+10-2
net/core/devlink.c
···
 		end_offset = nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]);
 		end_offset += nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]);
 		dump = false;
+
+		if (start_offset == end_offset) {
+			err = 0;
+			goto nla_put_failure;
+		}
 	}

 	err = devlink_nl_region_read_snapshot_fill(skb, devlink,
···
 {
 	enum devlink_health_reporter_state prev_health_state;
 	struct devlink *devlink = reporter->devlink;
+	unsigned long recover_ts_threshold;

 	/* write a log message of the current error */
 	WARN_ON(!msg);
···
 	devlink_recover_notify(reporter, DEVLINK_CMD_HEALTH_REPORTER_RECOVER);

 	/* abort if the previous error wasn't recovered */
+	recover_ts_threshold = reporter->last_recovery_ts +
+			       msecs_to_jiffies(reporter->graceful_period);
 	if (reporter->auto_recover &&
 	    (prev_health_state != DEVLINK_HEALTH_REPORTER_STATE_HEALTHY ||
-	     jiffies - reporter->last_recovery_ts <
-	     msecs_to_jiffies(reporter->graceful_period))) {
+	     (reporter->last_recovery_ts && reporter->recovery_count &&
+	      time_is_after_jiffies(recover_ts_threshold)))) {
 		trace_devlink_health_recover_aborted(devlink,
 						     reporter->ops->name,
 						     reporter->health_state,
···
 	}
 }

-/* On 32bit arches, an skb frag is limited to 2^15 */
 #define SKB_FRAG_PAGE_ORDER	get_order(32768)
 DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);

···
 				 */
 				break;
 #endif
-			case TCPOPT_MPTCP:
-				mptcp_parse_option(skb, ptr, opsize, opt_rx);
-				break;
-
 			case TCPOPT_FASTOPEN:
 				tcp_parse_fastopen_option(
 					opsize - TCPOLEN_FASTOPEN_BASE,
···

 		tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
 		tcp_initialize_rcv_mss(sk);
-
-		if (sk_is_mptcp(sk))
-			mptcp_rcv_synsent(sk);

 		/* Remember, tcp_poll() does not lock socket!
 		 * Change state from SYN-SENT only after copied_seq
+25
net/ipv6/route.c
···
 	}
 	ip6_rt_copy_init(pcpu_rt, res);
 	pcpu_rt->rt6i_flags |= RTF_PCPU;
+
+	if (f6i->nh)
+		pcpu_rt->sernum = rt_genid_ipv6(dev_net(dev));
+
 	return pcpu_rt;
+}
+
+static bool rt6_is_valid(const struct rt6_info *rt6)
+{
+	return rt6->sernum == rt_genid_ipv6(dev_net(rt6->dst.dev));
 }

 /* It should be called with rcu_read_lock() acquired */
···
 	struct rt6_info *pcpu_rt;

 	pcpu_rt = this_cpu_read(*res->nh->rt6i_pcpu);
+
+	if (pcpu_rt && pcpu_rt->sernum && !rt6_is_valid(pcpu_rt)) {
+		struct rt6_info *prev, **p;
+
+		p = this_cpu_ptr(res->nh->rt6i_pcpu);
+		prev = xchg(p, NULL);
+		if (prev) {
+			dst_dev_put(&prev->dst);
+			dst_release(&prev->dst);
+		}
+
+		pcpu_rt = NULL;
+	}

 	return pcpu_rt;
 }
···
 	struct rt6_info *rt;

 	rt = container_of(dst, struct rt6_info, dst);
+
+	if (rt->sernum)
+		return rt6_is_valid(rt) ? dst : NULL;

 	rcu_read_lock();
+8-2
net/ipv6/seg6.c
···

 bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len)
 {
-	int trailing;
 	unsigned int tlv_offset;
+	int max_last_entry;
+	int trailing;

 	if (srh->type != IPV6_SRCRT_TYPE_4)
 		return false;
···
 	if (((srh->hdrlen + 1) << 3) != len)
 		return false;

-	if (srh->segments_left > srh->first_segment)
+	max_last_entry = (srh->hdrlen / 2) - 1;
+
+	if (srh->first_segment > max_last_entry)
+		return false;
+
+	if (srh->segments_left > srh->first_segment + 1)
 		return false;

 	tlv_offset = sizeof(*srh) + ((srh->first_segment + 1) << 4);
+41-54
net/mptcp/options.c
···
 	return (flags & MPTCP_CAP_FLAG_MASK) == MPTCP_CAP_HMAC_SHA256;
 }

-void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr,
-			int opsize, struct tcp_options_received *opt_rx)
+static void mptcp_parse_option(const struct sk_buff *skb,
+			       const unsigned char *ptr, int opsize,
+			       struct mptcp_options_received *mp_opt)
 {
-	struct mptcp_options_received *mp_opt = &opt_rx->mptcp;
 	u8 subtype = *ptr >> 4;
 	int expected_opsize;
 	u8 version;
···
 }

 void mptcp_get_options(const struct sk_buff *skb,
-		       struct tcp_options_received *opt_rx)
+		       struct mptcp_options_received *mp_opt)
 {
-	const unsigned char *ptr;
 	const struct tcphdr *th = tcp_hdr(skb);
-	int length = (th->doff * 4) - sizeof(struct tcphdr);
+	const unsigned char *ptr;
+	int length;

+	/* initialize option status */
+	mp_opt->mp_capable = 0;
+	mp_opt->mp_join = 0;
+	mp_opt->add_addr = 0;
+	mp_opt->rm_addr = 0;
+	mp_opt->dss = 0;
+
+	length = (th->doff * 4) - sizeof(struct tcphdr);
 	ptr = (const unsigned char *)(th + 1);

 	while (length > 0) {
···
 			if (opsize > length)
 				return;	/* don't parse partial options */
 			if (opcode == TCPOPT_MPTCP)
-				mptcp_parse_option(skb, ptr, opsize, opt_rx);
+				mptcp_parse_option(skb, ptr, opsize, mp_opt);
 			ptr += opsize - 2;
 			length -= opsize;
 		}
···
 		return true;
 	}
 	return false;
-}
-
-void mptcp_rcv_synsent(struct sock *sk)
-{
-	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
-	struct tcp_sock *tp = tcp_sk(sk);
-
-	if (subflow->request_mptcp && tp->rx_opt.mptcp.mp_capable) {
-		subflow->mp_capable = 1;
-		subflow->can_ack = 1;
-		subflow->remote_key = tp->rx_opt.mptcp.sndr_key;
-		pr_debug("subflow=%p, remote_key=%llu", subflow,
-			 subflow->remote_key);
-	} else if (subflow->request_join && tp->rx_opt.mptcp.mp_join) {
-		subflow->mp_join = 1;
-		subflow->thmac = tp->rx_opt.mptcp.thmac;
-		subflow->remote_nonce = tp->rx_opt.mptcp.nonce;
-		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", subflow,
-			 subflow->thmac, subflow->remote_nonce);
-	} else if (subflow->request_mptcp) {
-		tcp_sk(sk)->is_mptcp = 0;
-	}
 }

 /* MP_JOIN client subflow must wait for 4th ack before sending any data:
···
 	if (TCP_SKB_CB(skb)->seq != subflow->ssn_offset + 1)
 		return subflow->mp_capable;

-	if (mp_opt->use_ack) {
+	if (mp_opt->dss && mp_opt->use_ack) {
 		/* subflows are fully established as soon as we get any
 		 * additional ack.
 		 */
 		subflow->fully_established = 1;
 		goto fully_established;
 	}
-
-	WARN_ON_ONCE(subflow->can_ack);

 	/* If the first established packet does not contain MP_CAPABLE + data
 	 * then fallback to TCP
···
 		return false;
 	}

+	if (unlikely(!READ_ONCE(msk->pm.server_side)))
+		pr_warn_once("bogus mpc option on established client sk");
 	subflow->fully_established = 1;
 	subflow->remote_key = mp_opt->sndr_key;
 	subflow->can_ack = 1;
···
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
-	struct mptcp_options_received *mp_opt;
+	struct mptcp_options_received mp_opt;
 	struct mptcp_ext *mpext;

-	mp_opt = &opt_rx->mptcp;
-	if (!check_fully_established(msk, sk, subflow, skb, mp_opt))
+	mptcp_get_options(skb, &mp_opt);
+	if (!check_fully_established(msk, sk, subflow, skb, &mp_opt))
 		return;

-	if (mp_opt->add_addr && add_addr_hmac_valid(msk, mp_opt)) {
+	if (mp_opt.add_addr && add_addr_hmac_valid(msk, &mp_opt)) {
 		struct mptcp_addr_info addr;

-		addr.port = htons(mp_opt->port);
-		addr.id = mp_opt->addr_id;
-		if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) {
+		addr.port = htons(mp_opt.port);
+		addr.id = mp_opt.addr_id;
+		if (mp_opt.family == MPTCP_ADDR_IPVERSION_4) {
 			addr.family = AF_INET;
-			addr.addr = mp_opt->addr;
+			addr.addr = mp_opt.addr;
 		}
 #if IS_ENABLED(CONFIG_MPTCP_IPV6)
-		else if (mp_opt->family == MPTCP_ADDR_IPVERSION_6) {
+		else if (mp_opt.family == MPTCP_ADDR_IPVERSION_6) {
 			addr.family = AF_INET6;
-			addr.addr6 = mp_opt->addr6;
+			addr.addr6 = mp_opt.addr6;
 		}
 #endif
-		if (!mp_opt->echo)
+		if (!mp_opt.echo)
 			mptcp_pm_add_addr_received(msk, &addr);
-		mp_opt->add_addr = 0;
+		mp_opt.add_addr = 0;
 	}

-	if (!mp_opt->dss)
+	if (!mp_opt.dss)
 		return;

 	/* we can't wait for recvmsg() to update the ack_seq, otherwise
 	 * monodirectional flows will stuck
 	 */
-	if (mp_opt->use_ack)
-		update_una(msk, mp_opt);
+	if (mp_opt.use_ack)
+		update_una(msk, &mp_opt);

 	mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
 	if (!mpext)
···

 	memset(mpext, 0, sizeof(*mpext));

-	if (mp_opt->use_map) {
-		if (mp_opt->mpc_map) {
+	if (mp_opt.use_map) {
+		if (mp_opt.mpc_map) {
 			/* this is an MP_CAPABLE carrying MPTCP data
 			 * we know this map the first chunk of data
 			 */
···
 			mpext->subflow_seq = 1;
 			mpext->dsn64 = 1;
 			mpext->mpc_map = 1;
+			mpext->data_fin = 0;
 		} else {
-			mpext->data_seq = mp_opt->data_seq;
-			mpext->subflow_seq = mp_opt->subflow_seq;
-			mpext->dsn64 = mp_opt->dsn64;
-			mpext->data_fin = mp_opt->data_fin;
+			mpext->data_seq = mp_opt.data_seq;
+			mpext->subflow_seq = mp_opt.subflow_seq;
+			mpext->dsn64 = mp_opt.dsn64;
+			mpext->data_fin = mp_opt.data_fin;
 		}
-		mpext->data_len = mp_opt->data_len;
+		mpext->data_len = mp_opt.data_len;
 		mpext->use_map = 1;
 	}
 }
+9-8
net/mptcp/protocol.c
···

 static int mptcp_disconnect(struct sock *sk, int flags)
 {
-	lock_sock(sk);
-	__mptcp_clear_xmit(sk);
-	release_sock(sk);
-	mptcp_cancel_work(sk);
-	return tcp_disconnect(sk, flags);
+	/* Should never be called.
+	 * inet_stream_connect() calls ->disconnect, but that
+	 * refers to the subflow socket, not the mptcp one.
+	 */
+	WARN_ON_ONCE(1);
+	return 0;
 }

 #if IS_ENABLED(CONFIG_MPTCP_IPV6)
···
 #endif

 struct sock *mptcp_sk_clone(const struct sock *sk,
-			    const struct tcp_options_received *opt_rx,
+			    const struct mptcp_options_received *mp_opt,
 			    struct request_sock *req)
 {
 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
···

 	msk->write_seq = subflow_req->idsn + 1;
 	atomic64_set(&msk->snd_una, msk->write_seq);
-	if (opt_rx->mptcp.mp_capable) {
+	if (mp_opt->mp_capable) {
 		msk->can_ack = true;
-		msk->remote_key = opt_rx->mptcp.sndr_key;
+		msk->remote_key = mp_opt->sndr_key;
 		mptcp_crypto_key_sha(msk->remote_key, NULL, &ack_seq);
 		ack_seq++;
 		msk->ack_seq = ack_seq;
···
 	if (ctl->divisor &&
 	    (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536))
 		return -EINVAL;
+
+	/* slot->allot is a short, make sure quantum is not too big. */
+	if (ctl->quantum) {
+		unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum);
+
+		if (scaled <= 0 || scaled > SHRT_MAX)
+			return -EINVAL;
+	}
+
 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
 					ctl_v1->Wlog))
 		return -EINVAL;
···
  * Note, TRACE_EVENT() itself is simply defined as:
  *
  * #define TRACE_EVENT(name, proto, args, tstruct, assign, printk)  \
- *	DEFINE_EVENT_CLASS(name, proto, args, tstruct, assign, printk); \
+ *	DECLARE_EVENT_CLASS(name, proto, args, tstruct, assign, printk); \
  *	DEFINE_EVENT(name, name, proto, args)
  *
  * The DEFINE_EVENT() also can be declared with conditions and reg functions:
+1-1
scripts/decodecode
···
 faultline=`cat $T.dis | head -1 | cut -d":" -f2-`
 faultline=`echo "$faultline" | sed -e 's/\[/\\\[/g; s/\]/\\\]/g'`

-cat $T.oo | sed -e "${faultlinenum}s/^\(.*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/"
+cat $T.oo | sed -e "${faultlinenum}s/^\([^:]*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/"
 echo
 cat $T.aa
 cleanup
···

 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
 	case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */
-	case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
+	case USB_ID(0x16d0, 0x06b2): /* NuPrime DAC-10 */
 	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
 	case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
 	case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */
+6-3
tools/bootconfig/main.c
···
 	ret = delete_xbc(path);
 	if (ret < 0) {
 		pr_err("Failed to delete previous boot config: %d\n", ret);
+		free(data);
 		return ret;
 	}

···
 	fd = open(path, O_RDWR | O_APPEND);
 	if (fd < 0) {
 		pr_err("Failed to open %s: %d\n", path, fd);
+		free(data);
 		return fd;
 	}
 	/* TODO: Ensure the @path is initramfs/initrd image */
 	ret = write(fd, data, size + 8);
 	if (ret < 0) {
 		pr_err("Failed to apply a boot config: %d\n", ret);
-		return ret;
+		goto out;
 	}
 	/* Write a magic word of the bootconfig */
 	ret = write(fd, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN);
 	if (ret < 0) {
 		pr_err("Failed to apply a boot config magic: %d\n", ret);
-		return ret;
+		goto out;
 	}
+out:
 	close(fd);
 	free(data);

-	return 0;
+	return ret;
 }

 int usage(void)
+6-1
tools/cgroup/iocost_monitor.py
···
         else:
             self.inflight_pct = 0

-        self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000
+        # vdebt used to be an atomic64_t and is now u64, support both
+        try:
+            self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000
+        except:
+            self.debt_ms = iocg.abs_vdebt.value_() / VTIME_PER_USEC / 1000
+
         self.use_delay = blkg.use_delay.counter.value_()
         self.delay_ms = blkg.delay_nsec.counter.value_() / 1_000_000

···
 #define _GNU_SOURCE
 #include <poll.h>
 #include <unistd.h>
+#include <assert.h>
 #include <signal.h>
 #include <pthread.h>
 #include <sys/epoll.h>
···
 	}
 	close(ctx.efd[0]);
 	close(ctx.sfd[0]);
+}
+
+enum {
+	EPOLL60_EVENTS_NR = 10,
+};
+
+struct epoll60_ctx {
+	volatile int stopped;
+	int ready;
+	int waiters;
+	int epfd;
+	int evfd[EPOLL60_EVENTS_NR];
+};
+
+static void *epoll60_wait_thread(void *ctx_)
+{
+	struct epoll60_ctx *ctx = ctx_;
+	struct epoll_event e;
+	sigset_t sigmask;
+	uint64_t v;
+	int ret;
+
+	/* Block SIGUSR1 */
+	sigemptyset(&sigmask);
+	sigaddset(&sigmask, SIGUSR1);
+	sigprocmask(SIG_SETMASK, &sigmask, NULL);
+
+	/* Prepare empty mask for epoll_pwait() */
+	sigemptyset(&sigmask);
+
+	while (!ctx->stopped) {
+		/* Mark we are ready */
+		__atomic_fetch_add(&ctx->ready, 1, __ATOMIC_ACQUIRE);
+
+		/* Start when all are ready */
+		while (__atomic_load_n(&ctx->ready, __ATOMIC_ACQUIRE) &&
+		       !ctx->stopped);
+
+		/* Account this waiter */
+		__atomic_fetch_add(&ctx->waiters, 1, __ATOMIC_ACQUIRE);
+
+		ret = epoll_pwait(ctx->epfd, &e, 1, 2000, &sigmask);
+		if (ret != 1) {
+			/* We expect only signal delivery on stop */
+			assert(ret < 0 && errno == EINTR && "Lost wakeup!\n");
+			assert(ctx->stopped);
+			break;
+		}
+
+		ret = read(e.data.fd, &v, sizeof(v));
+		/* Since we are on ET mode, thus each thread gets its own fd. */
+		assert(ret == sizeof(v));
+
+		__atomic_fetch_sub(&ctx->waiters, 1, __ATOMIC_RELEASE);
+	}
+
+	return NULL;
+}
+
+static inline unsigned long long msecs(void)
+{
+	struct timespec ts;
+	unsigned long long msecs;
+
+	clock_gettime(CLOCK_REALTIME, &ts);
+	msecs = ts.tv_sec * 1000ull;
+	msecs += ts.tv_nsec / 1000000ull;
+
+	return msecs;
+}
+
+static inline int count_waiters(struct epoll60_ctx *ctx)
+{
+	return __atomic_load_n(&ctx->waiters, __ATOMIC_ACQUIRE);
+}
+
+TEST(epoll60)
+{
+	struct epoll60_ctx ctx = { 0 };
+	pthread_t waiters[ARRAY_SIZE(ctx.evfd)];
+	struct epoll_event e;
+	int i, n, ret;
+
+	signal(SIGUSR1, signal_handler);
+
+	ctx.epfd = epoll_create1(0);
+	ASSERT_GE(ctx.epfd, 0);
+
+	/* Create event fds */
+	for (i = 0; i < ARRAY_SIZE(ctx.evfd); i++) {
+		ctx.evfd[i] = eventfd(0, EFD_NONBLOCK);
+		ASSERT_GE(ctx.evfd[i], 0);
+
+		e.events = EPOLLIN | EPOLLET;
+		e.data.fd = ctx.evfd[i];
+		ASSERT_EQ(epoll_ctl(ctx.epfd, EPOLL_CTL_ADD, ctx.evfd[i], &e), 0);
+	}
+
+	/* Create waiter threads */
+	for (i = 0; i < ARRAY_SIZE(waiters); i++)
+		ASSERT_EQ(pthread_create(&waiters[i], NULL,
+					 epoll60_wait_thread, &ctx), 0);
+
+	for (i = 0; i < 300; i++) {
+		uint64_t v = 1, ms;
+
+		/* Wait for all to be ready */
+		while (__atomic_load_n(&ctx.ready, __ATOMIC_ACQUIRE) !=
+		       ARRAY_SIZE(ctx.evfd))
+			;
+
+		/* Steady, go */
+		__atomic_fetch_sub(&ctx.ready, ARRAY_SIZE(ctx.evfd),
+				   __ATOMIC_ACQUIRE);
+
+		/* Wait all have gone to kernel */
+		while (count_waiters(&ctx) != ARRAY_SIZE(ctx.evfd))
+			;
+
+		/* 1ms should be enough to schedule away */
+		usleep(1000);
+
+		/* Quickly signal all handles at once */
+		for (n = 0; n < ARRAY_SIZE(ctx.evfd); n++) {
+			ret = write(ctx.evfd[n], &v, sizeof(v));
+			ASSERT_EQ(ret, sizeof(v));
+		}
+
+		/* Busy loop for 1s and wait for all waiters to wake up */
+		ms = msecs();
+		while (count_waiters(&ctx) && msecs() < ms + 1000)
+			;
+
+		ASSERT_EQ(count_waiters(&ctx), 0);
+	}
+	ctx.stopped = 1;
+	/* Stop waiters */
+	for (i = 0; i < ARRAY_SIZE(waiters); i++)
+		ret = pthread_kill(waiters[i], SIGUSR1);
+	for (i = 0; i < ARRAY_SIZE(waiters); i++)
+		pthread_join(waiters[i], NULL);
+
+	for (i = 0; i < ARRAY_SIZE(waiters); i++)
+		close(ctx.evfd[i]);
+	close(ctx.epfd);
 }

 TEST_HARNESS_MAIN
+30-2
tools/testing/selftests/ftrace/ftracetest
···
 echo "		-vv           Alias of -v -v (Show all results in stdout)"
 echo "		-vvv          Alias of -v -v -v (Show all commands immediately)"
 echo "		--fail-unsupported Treat UNSUPPORTED as a failure"
+echo "		--fail-unresolved Treat UNRESOLVED as a failure"
 echo "		-d|--debug Debug mode (trace all shell commands)"
 echo "		-l|--logdir <dir> Save logs on the <dir>"
 echo "		            If <dir> is -, all logs output in console only"
···
 # kselftest skip code is 4
 err_skip=4

+# cgroup RT scheduling prevents chrt commands from succeeding, which
+# induces failures in test wakeup tests.  Disable for the duration of
+# the tests.
+
+readonly sched_rt_runtime=/proc/sys/kernel/sched_rt_runtime_us
+
+sched_rt_runtime_orig=$(cat $sched_rt_runtime)
+
+setup() {
+  echo -1 > $sched_rt_runtime
+}
+
+cleanup() {
+  echo $sched_rt_runtime_orig > $sched_rt_runtime
+}
+
 errexit() { # message
   echo "Error: $1" 1>&2
+  cleanup
   exit $err_ret
 }
···
 if [ `id -u` -ne 0 ]; then
   errexit "this must be run by root user"
 fi
+
+setup

 # Utilities
 absdir() { # file_path
···
   ;;
   --fail-unsupported)
     UNSUPPORTED_RESULT=1
+    shift 1
+  ;;
+  --fail-unresolved)
+    UNRESOLVED_RESULT=1
     shift 1
   ;;
   --logdir|-l)
···
 DEBUG=0
 VERBOSE=0
 UNSUPPORTED_RESULT=0
+UNRESOLVED_RESULT=0
 STOP_FAILURE=0
 # Parse command-line options
 parse_opts $*
···

 INSTANCE=
 CASENO=0
+
 testcase() { # testfile
   CASENO=$((CASENO+1))
   desc=`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
···
     $UNRESOLVED)
       prlog "	[${color_blue}UNRESOLVED${color_reset}]"
       UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
-      return 1 # this is a kind of bug.. something happened.
+      return $UNRESOLVED_RESULT # depends on use case
     ;;
     $UNTESTED)
       prlog "	[${color_blue}UNTESTED${color_reset}]"
···
       return $UNSUPPORTED_RESULT # depends on use case
     ;;
     $XFAIL)
-      prlog "	[${color_red}XFAIL${color_reset}]"
+      prlog "	[${color_green}XFAIL${color_reset}]"
       XFAILED_CASES="$XFAILED_CASES $CASENO"
       return 0
     ;;
···
 prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
 prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
 prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`
+
+cleanup

 # if no error, return 0
 exit $TOTAL_RESULT
···
   exit_unsupported
 fi

-if [ ! -f set_ftrace_filter ]; then
-    echo "set_ftrace_filter not found? Is function tracer not set?"
-    exit_unsupported
-fi
+check_filter_file set_ftrace_filter

 do_function_fork=1
···
   exit_unsupported
 fi

-if [ ! -f set_ftrace_filter ]; then
-    echo "set_ftrace_filter not found? Is function tracer not set?"
-    exit_unsupported
-fi
+check_filter_file set_ftrace_filter

 do_function_fork=1
···
 #

 # The triggers are set within the set_ftrace_filter file
-if [ ! -f set_ftrace_filter ]; then
-    echo "set_ftrace_filter not found? Is dynamic ftrace not set?"
-    exit_unsupported
-fi
+check_filter_file set_ftrace_filter

 do_reset() {
     reset_ftrace_filter
···
 #

 # The triggers are set within the set_ftrace_filter file
-if [ ! -f set_ftrace_filter ]; then
-    echo "set_ftrace_filter not found? Is dynamic ftrace not set?"
-    exit_unsupported
-fi
+check_filter_file set_ftrace_filter

 fail() { # mesg
     echo $1
···
 #

 # The triggers are set within the set_ftrace_filter file
-if [ ! -f set_ftrace_filter ]; then
-    echo "set_ftrace_filter not found? Is dynamic ftrace not set?"
-    exit_unsupported
-fi
+check_filter_file set_ftrace_filter

 fail() { # mesg
     echo $1
+6
tools/testing/selftests/ftrace/test.d/functions
···
+check_filter_file() { # check filter file introduced by dynamic ftrace
+    if [ ! -f "$1" ]; then
+        echo "$1 not found? Is dynamic ftrace not set?"
+        exit_unsupported
+    fi
+}
+
 clear_trace() { # reset trace output
     echo > trace
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# kselftest_deps.sh
+#
+# Checks for kselftest build dependencies on the build system.
+# Copyright (c) 2020 Shuah Khan <skhan@linuxfoundation.org>
+#
+#
+
+usage()
+{
+
+echo -e "Usage: $0 -[p] <compiler> [test_name]\n"
+echo -e "\tkselftest_deps.sh [-p] gcc"
+echo -e "\tkselftest_deps.sh [-p] gcc vm"
+echo -e "\tkselftest_deps.sh [-p] aarch64-linux-gnu-gcc"
+echo -e "\tkselftest_deps.sh [-p] aarch64-linux-gnu-gcc vm\n"
+echo "- Should be run in selftests directory in the kernel repo."
+echo "- Checks if Kselftests can be built/cross-built on a system."
+echo "- Parses all test/sub-test Makefile to find library dependencies."
+echo "- Runs compile test on a trivial C file with LDLIBS specified"
+echo "  in the test Makefiles to identify missing library dependencies."
+echo "- Prints suggested target list for a system filtering out tests"
+echo "  failed the build dependency check from the TARGETS in Selftests"
+echo "  main Makefile when optional -p is specified."
+echo "- Prints pass/fail dependency check for each tests/sub-test."
+echo "- Prints pass/fail targets and libraries."
+echo "- Default: runs dependency checks on all tests."
+echo "- Optional test name can be specified to check dependencies for it."
+exit 1
+
+}
+
+# Start main()
+main()
+{
+
+base_dir=`pwd`
+# Make sure we're in the selftests top-level directory.
+if [ $(basename "$base_dir") != "selftests" ]; then
+	echo -e "\tPlease run $0 in"
+	echo -e "\ttools/testing/selftests directory ..."
+	exit 1
+fi
+
+print_targets=0
+
+while getopts "p" arg; do
+	case $arg in
+	p)
+		print_targets=1
+	shift;;
+	esac
+done
+
+if [ $# -eq 0 ]
+then
+	usage
+fi
+
+# Compiler
+CC=$1
+
+tmp_file=$(mktemp).c
+trap "rm -f $tmp_file.o $tmp_file $tmp_file.bin" EXIT
+#echo $tmp_file
+
+pass=$(mktemp).out
+trap "rm -f $pass" EXIT
+#echo $pass
+
+fail=$(mktemp).out
+trap "rm -f $fail" EXIT
+#echo $fail
+
+# Generate tmp source fire for compile test
+cat << "EOF" > $tmp_file
+int main()
+{
+}
+EOF
+
+# Save results
+total_cnt=0
+fail_trgts=()
+fail_libs=()
+fail_cnt=0
+pass_trgts=()
+pass_libs=()
+pass_cnt=0
+
+# Get all TARGETS from selftests Makefile
+targets=$(egrep "^TARGETS +|^TARGETS =" Makefile | cut -d "=" -f2)
+
+# Single test case
+if [ $# -eq 2 ]
+then
+	test=$2/Makefile
+
+	l1_test $test
+	l2_test $test
+	l3_test $test
+
+	print_results $1 $2
+	exit $?
+fi
+
+# Level 1: LDLIBS set static.
+#
+# Find all LDLIBS set statically for all executables built by a Makefile
+# and filter out VAR_LDLIBS to discard the following:
+#	gpio/Makefile:LDLIBS += $(VAR_LDLIBS)
+# Append space at the end of the list to append more tests.
+
+l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \
+		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
+
+# Level 2: LDLIBS set dynamically.
+#
+# Level 2
+# Some tests have multiple valid LDLIBS lines for individual sub-tests
+# that need dependency checks. Find them and append them to the tests
+# e.g: vm/Makefile:$(OUTPUT)/userfaultfd: LDLIBS += -lpthread
+# Filter out VAR_LDLIBS to discard the following:
+#	memfd/Makefile:$(OUTPUT)/fuse_mnt: LDLIBS += $(VAR_LDLIBS)
+# Append space at the end of the list to append more tests.
+
+l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \
+		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
+
+# Level 3
+# gpio, memfd and others use pkg-config to find mount and fuse libs
+# respectively and save it in VAR_LDLIBS. If pkg-config doesn't find
+# any, VAR_LDLIBS set to default.
+# Use the default value and filter out pkg-config for dependency check.
+# e.g:
+#	gpio/Makefile
+#		VAR_LDLIBS := $(shell pkg-config --libs mount) 2>/dev/null)
+#	memfd/Makefile
+#		VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null)
+
+l3_tests=$(grep -r --include=Makefile "^VAR_LDLIBS" | \
+		grep -v "pkg-config" | awk -F: '{print $1}')
+
+#echo $l1_tests
+#echo $l2_1_tests
+#echo $l3_tests
+
+all_tests
+print_results $1 $2
+
+exit $?
+}
+# end main()
+
+all_tests()
+{
+	for test in $l1_tests; do
+		l1_test $test
+	done
+
+	for test in $l2_tests; do
+		l2_test $test
+	done
+
+	for test in $l3_tests; do
+		l3_test $test
+	done
+}
+
+# Use same parsing used for l1_tests and pick libraries this time.
+l1_test()
+{
+	test_libs=$(grep --include=Makefile "^LDLIBS" $test | \
+			grep -v "VAR_LDLIBS" | \
+			sed -e 's/\:/ /' | \
+			sed -e 's/+/ /' | cut -d "=" -f 2)
+
+	check_libs $test $test_libs
+}
+
+# Use same parsing used for l2__tests and pick libraries this time.
+l2_test()
+{
+	test_libs=$(grep --include=Makefile ": LDLIBS" $test | \
+			grep -v "VAR_LDLIBS" | \
+			sed -e 's/\:/ /' | sed -e 's/+/ /' | \
+			cut -d "=" -f 2)
+
+	check_libs $test $test_libs
+}
+
+l3_test()
+{
+	test_libs=$(grep --include=Makefile "^VAR_LDLIBS" $test | \
+			grep -v "pkg-config" | sed -e 's/\:/ /' |
+			sed -e 's/+/ /' | cut -d "=" -f 2)
+
+	check_libs $test $test_libs
+}
+
+check_libs()
+{
+
+if [[ ! -z "${test_libs// }" ]]
+then
+
+	#echo $test_libs
+
+	for lib in $test_libs; do
+
+	let total_cnt+=1
+	$CC -o $tmp_file.bin $lib $tmp_file > /dev/null 2>&1
+	if [ $? -ne 0 ]; then
+		echo "FAIL: $test dependency check: $lib" >> $fail
+		let fail_cnt+=1
+		fail_libs+="$lib "
+		fail_target=$(echo "$test" | cut -d "/" -f1)
+		fail_trgts+="$fail_target "
+		targets=$(echo "$targets" | grep -v "$fail_target")
+	else
+		echo "PASS: $test dependency check passed $lib" >> $pass
+		let pass_cnt+=1
+		pass_libs+="$lib "
+		pass_trgts+="$(echo "$test" | cut -d "/" -f1) "
+	fi
+
+	done
+fi
+}
+
+print_results()
+{
+	echo -e "========================================================";
+	echo -e "Kselftest Dependency Check for [$0 $1 $2] results..."
+
+	if [ $print_targets -ne 0 ]
+	then
+	echo -e "Suggested Selftest Targets for your configuration:"
+	echo -e "$targets";
+	fi
+
+	echo -e "========================================================";
+	echo -e "Checked tests defining LDLIBS dependencies"
+	echo -e "--------------------------------------------------------";
+	echo -e "Total tests with Dependencies:"
+	echo -e "$total_cnt Pass: $pass_cnt Fail: $fail_cnt";
+
+	if [ $pass_cnt -ne 0 ]; then
+	echo -e "--------------------------------------------------------";
+	cat $pass
+	echo -e "--------------------------------------------------------";
+	echo -e "Targets passed build dependency check on system:"
+	echo -e "$(echo "$pass_trgts" | xargs -n1 | sort -u | xargs)"
+	fi
+
+	if [ $fail_cnt -ne 0 ]; then
+	echo -e "--------------------------------------------------------";
+	cat $fail
+	echo -e "--------------------------------------------------------";
+	echo -e "Targets failed build dependency check on system:"
+	echo -e "$(echo "$fail_trgts" | xargs -n1 | sort -u | xargs)"
+	echo -e "--------------------------------------------------------";
+	echo -e "Missing libraries system"
+	echo -e "$(echo "$fail_libs" | xargs -n1 | sort -u | xargs)"
+	fi
+
+	echo -e "--------------------------------------------------------";
+	echo -e "========================================================";
+}
+
+main "$@"
tools/testing/selftests/kvm/Makefile (+28, -1)
···
 top_srcdir = ../../../..
 KSFT_KHDR_INSTALL := 1
+
+# For cross-builds to work, UNAME_M has to map to ARCH and arch specific
+# directories and targets in this Makefile. "uname -m" doesn't map to
+# arch specific sub-directory names.
+#
+# The UNAME_M variable is used to run the compiles pointing to the right arch
+# directories and build the right targets for these supported architectures.
+#
+# TEST_GEN_PROGS and LIBKVM are set using the UNAME_M variable.
+# LINUX_TOOL_ARCH_INCLUDE is set using the ARCH variable.
+#
+# x86_64 targets are named to include x86_64 as a suffix and directories
+# for includes are in the x86_64 sub-directory. s390x and aarch64 follow the
+# same convention. "uname -m" doesn't result in the correct mapping for
+# s390x and aarch64.
+#
+# No change necessary for x86_64
 UNAME_M := $(shell uname -m)
+
+# Set UNAME_M for arm64 compile/install to work
+ifeq ($(ARCH),arm64)
+	UNAME_M := aarch64
+endif
+# Set UNAME_M for s390x compile/install to work
+ifeq ($(ARCH),s390)
+	UNAME_M := s390x
+endif

 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c
···
 INSTALL_HDR_PATH = $(top_srcdir)/usr
 LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
 LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
-LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
+LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
 CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
	-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
	-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
···
 $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
	$(AR) crs $@ $^

+x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS))))
 all: $(STATIC_LIBS)
 $(TEST_GEN_PROGS): $(STATIC_LIBS)
···
	exec 2>/dev/null
	printf "$orig_message_cost" > /proc/sys/net/core/message_cost
	ip0 link del dev wg0
+	ip0 link del dev wg1
	ip1 link del dev wg0
+	ip1 link del dev wg1
	ip2 link del dev wg0
+	ip2 link del dev wg1
	local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)"
	[[ -n $to_kill ]] && kill $to_kill
	pp ip netns del $netns1
···
 key1="$(pp wg genkey)"
 key2="$(pp wg genkey)"
 key3="$(pp wg genkey)"
+key4="$(pp wg genkey)"
 pub1="$(pp wg pubkey <<<"$key1")"
 pub2="$(pp wg pubkey <<<"$key2")"
 pub3="$(pp wg pubkey <<<"$key3")"
+pub4="$(pp wg pubkey <<<"$key4")"
 psk="$(pp wg genpsk)"
 [[ -n $key1 && -n $key2 && -n $psk ]]

 configure_peers() {
	ip1 addr add 192.168.241.1/24 dev wg0
-	ip1 addr add fd00::1/24 dev wg0
+	ip1 addr add fd00::1/112 dev wg0
	ip2 addr add 192.168.241.2/24 dev wg0
-	ip2 addr add fd00::2/24 dev wg0
+	ip2 addr add fd00::2/112 dev wg0
	n1 wg set wg0 \
		private-key <(echo "$key1") \
···
 n1 wg set wg0 private-key <(echo "$key3")
 n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove
 n1 ping -W 1 -c 1 192.168.241.2
+n2 wg set wg0 peer "$pub3" remove

-ip1 link del wg0
+# Test that we can route wg through wg
+ip1 addr flush dev wg0
+ip2 addr flush dev wg0
+ip1 addr add fd00::5:1/112 dev wg0
+ip2 addr add fd00::5:2/112 dev wg0
+n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips fd00::5:2/128 endpoint 127.0.0.1:2
+n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips fd00::5:1/128 endpoint 127.212.121.99:9998
+ip1 link add wg1 type wireguard
+ip2 link add wg1 type wireguard
+ip1 addr add 192.168.241.1/24 dev wg1
+ip1 addr add fd00::1/112 dev wg1
+ip2 addr add 192.168.241.2/24 dev wg1
+ip2 addr add fd00::2/112 dev wg1
+ip1 link set mtu 1340 up dev wg1
+ip2 link set mtu 1340 up dev wg1
+n1 wg set wg1 listen-port 5 private-key <(echo "$key3") peer "$pub4" allowed-ips 192.168.241.2/32,fd00::2/128 endpoint [fd00::5:2]:5
+n2 wg set wg1 listen-port 5 private-key <(echo "$key4") peer "$pub3" allowed-ips 192.168.241.1/32,fd00::1/128 endpoint [fd00::5:1]:5
+tests
+# Try to set up a routing loop between the two namespaces
+ip1 link set netns $netns0 dev wg1
+ip0 addr add 192.168.241.1/24 dev wg1
+ip0 link set up dev wg1
+n0 ping -W 1 -c 1 192.168.241.2
+n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
 ip2 link del wg0
+ip2 link del wg1
+! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
+
+ip0 link del wg1
+ip1 link del wg0

 # Test using NAT. We now change the topology to this:
 # ┌────────────────────────────────────────┐ ┌────────────────────────────────────────────────┐ ┌────────────────────────────────────────┐
···
 pp sleep 3
 n2 ping -W 1 -c 1 192.168.241.1
 n1 wg set wg0 peer "$pub2" persistent-keepalive 0
+
+# Test that onion routing works, even when it loops
+n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5
+ip1 addr add 192.168.242.1/24 dev wg0
+ip2 link add wg1 type wireguard
+ip2 addr add 192.168.242.2/24 dev wg1
+n2 wg set wg1 private-key <(echo "$key3") listen-port 5 peer "$pub1" allowed-ips 192.168.242.1/32
+ip2 link set wg1 up
+n1 ping -W 1 -c 1 192.168.242.2
+ip2 link del wg1
+n1 wg set wg0 peer "$pub3" endpoint 192.168.242.2:5
+! n1 ping -W 1 -c 1 192.168.242.2 || false # Should not crash kernel
+n1 wg set wg0 peer "$pub3" remove
+ip1 addr del 192.168.242.1/24 dev wg0

 # Do a wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs.
 ip1 -6 addr add fc00::9/96 dev vethc
···
  */
 void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
 {
+	u32 pc = *vcpu_pc(vcpu);
	bool is_thumb;

	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
	if (is_thumb && !is_wide_instr)
-		*vcpu_pc(vcpu) += 2;
+		pc += 2;
	else
-		*vcpu_pc(vcpu) += 4;
+		pc += 4;
+
+	*vcpu_pc(vcpu) = pc;
+
	kvm_adjust_itstate(vcpu);
 }
virt/kvm/arm/psci.c (+40)
···
	kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_RESET);
 }

+static void kvm_psci_narrow_to_32bit(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	/*
+	 * Zero the input registers' upper 32 bits. They will be fully
+	 * zeroed on exit, so we're fine changing them in place.
+	 */
+	for (i = 1; i < 4; i++)
+		vcpu_set_reg(vcpu, i, lower_32_bits(vcpu_get_reg(vcpu, i)));
+}
+
+static unsigned long kvm_psci_check_allowed_function(struct kvm_vcpu *vcpu, u32 fn)
+{
+	switch(fn) {
+	case PSCI_0_2_FN64_CPU_SUSPEND:
+	case PSCI_0_2_FN64_CPU_ON:
+	case PSCI_0_2_FN64_AFFINITY_INFO:
+		/* Disallow these functions for 32bit guests */
+		if (vcpu_mode_is_32bit(vcpu))
+			return PSCI_RET_NOT_SUPPORTED;
+		break;
+	}
+
+	return 0;
+}
+
 static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
 {
	struct kvm *kvm = vcpu->kvm;
	u32 psci_fn = smccc_get_function(vcpu);
	unsigned long val;
	int ret = 1;
+
+	val = kvm_psci_check_allowed_function(vcpu, psci_fn);
+	if (val)
+		goto out;

	switch (psci_fn) {
	case PSCI_0_2_FN_PSCI_VERSION:
···
		val = PSCI_RET_SUCCESS;
		break;
	case PSCI_0_2_FN_CPU_ON:
+		kvm_psci_narrow_to_32bit(vcpu);
+		fallthrough;
	case PSCI_0_2_FN64_CPU_ON:
		mutex_lock(&kvm->lock);
		val = kvm_psci_vcpu_on(vcpu);
		mutex_unlock(&kvm->lock);
		break;
	case PSCI_0_2_FN_AFFINITY_INFO:
+		kvm_psci_narrow_to_32bit(vcpu);
+		fallthrough;
	case PSCI_0_2_FN64_AFFINITY_INFO:
		val = kvm_psci_vcpu_affinity_info(vcpu);
		break;
···
		break;
	}

+out:
	smccc_set_retval(vcpu, val, 0, 0, 0);
	return ret;
 }
···
		break;
	case PSCI_1_0_FN_PSCI_FEATURES:
		feature = smccc_get_arg1(vcpu);
+		val = kvm_psci_check_allowed_function(vcpu, feature);
+		if (val)
+			break;
+
		switch(feature) {
		case PSCI_0_2_FN_PSCI_VERSION:
		case PSCI_0_2_FN_CPU_SUSPEND:
virt/kvm/arm/vgic/vgic-init.c (+16, -3)
···
		}
	}

-	if (vgic_has_its(kvm)) {
+	if (vgic_has_its(kvm))
		vgic_lpi_translation_cache_init(kvm);
+
+	/*
+	 * If we have GICv4.1 enabled, unconditionally request the v4
+	 * support so that we get HW-accelerated vSGIs. Otherwise, only
+	 * enable it if we present a virtual ITS to the guest.
+	 */
+	if (vgic_supports_direct_msis(kvm)) {
		ret = vgic_v4_init(kvm);
		if (ret)
			goto out;
···
 {
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;

+	/*
+	 * Retire all pending LPIs on this vcpu anyway as we're
+	 * going to destroy it.
+	 */
+	vgic_flush_pending_lpis(vcpu);
+
	INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
 }
···

	vgic_debug_destroy(kvm);

-	kvm_vgic_dist_destroy(kvm);
-
	kvm_for_each_vcpu(i, vcpu, kvm)
		kvm_vgic_vcpu_destroy(vcpu);
+
+	kvm_vgic_dist_destroy(kvm);
 }

 void kvm_vgic_destroy(struct kvm *kvm)
virt/kvm/arm/vgic/vgic-its.c (+9, -2)
···
	 * We "cache" the configuration table entries in our struct vgic_irq's.
	 * However we only have those structs for mapped IRQs, so we read in
	 * the respective config data from memory here upon mapping the LPI.
+	 *
+	 * Should any of these fail, behave as if we couldn't create the LPI
+	 * by dropping the refcount and returning the error.
	 */
	ret = update_lpi_config(kvm, irq, NULL, false);
-	if (ret)
+	if (ret) {
+		vgic_put_irq(kvm, irq);
		return ERR_PTR(ret);
+	}

	ret = vgic_v3_lpi_sync_pending_status(kvm, irq);
-	if (ret)
+	if (ret) {
+		vgic_put_irq(kvm, irq);
		return ERR_PTR(ret);
+	}

	return irq;
 }
virt/kvm/arm/vgic/vgic-mmio.c
···
	}
 }

+int vgic_uaccess_write_senable(struct kvm_vcpu *vcpu,
+			       gpa_t addr, unsigned int len,
+			       unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+	unsigned long flags;
+
+	for_each_set_bit(i, &val, len * 8) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		irq->enabled = true;
+		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return 0;
+}
+
+int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
+			       gpa_t addr, unsigned int len,
+			       unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+	unsigned long flags;
+
+	for_each_set_bit(i, &val, len * 8) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		irq->enabled = false;
+		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return 0;
+}
+
 unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
				     gpa_t addr, unsigned int len)
 {
···
	return value;
 }

-/* Must be called with irq->irq_lock held */
-static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
-				 bool is_uaccess)
-{
-	if (is_uaccess)
-		return;
-
-	irq->pending_latch = true;
-	vgic_irq_set_phys_active(irq, true);
-}
-
 static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
 {
	return (vgic_irq_is_sgi(irq->intid) &&
···
			      gpa_t addr, unsigned int len,
			      unsigned long val)
 {
-	bool is_uaccess = !kvm_get_running_vcpu();
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
	int i;
	unsigned long flags;
···
			continue;
		}

+		irq->pending_latch = true;
		if (irq->hw)
-			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
-		else
-			irq->pending_latch = true;
+			vgic_irq_set_phys_active(irq, true);
+
		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
		vgic_put_irq(vcpu->kvm, irq);
	}
 }

-/* Must be called with irq->irq_lock held */
-static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
-				 bool is_uaccess)
+int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
+				gpa_t addr, unsigned int len,
+				unsigned long val)
 {
-	if (is_uaccess)
-		return;
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+	unsigned long flags;

+	for_each_set_bit(i, &val, len * 8) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		irq->pending_latch = true;
+
+		/*
+		 * GICv2 SGIs are terribly broken. We can't restore
+		 * the source of the interrupt, so just pick the vcpu
+		 * itself as the source...
+		 */
+		if (is_vgic_v2_sgi(vcpu, irq))
+			irq->source |= BIT(vcpu->vcpu_id);
+
+		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return 0;
+}
+
+/* Must be called with irq->irq_lock held */
+static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
+{
	irq->pending_latch = false;

	/*
···
			      gpa_t addr, unsigned int len,
			      unsigned long val)
 {
-	bool is_uaccess = !kvm_get_running_vcpu();
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
	int i;
	unsigned long flags;
···
		}

		if (irq->hw)
-			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
+			vgic_hw_irq_cpending(vcpu, irq);
		else
			irq->pending_latch = false;
···
	}
 }

-unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
-				    gpa_t addr, unsigned int len)
+int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
+				gpa_t addr, unsigned int len,
+				unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+	unsigned long flags;
+
+	for_each_set_bit(i, &val, len * 8) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		/*
+		 * More fun with GICv2 SGIs! If we're clearing one of them
+		 * from userspace, which source vcpu to clear? Let's not
+		 * even think of it, and blow the whole set.
+		 */
+		if (is_vgic_v2_sgi(vcpu, irq))
+			irq->source = 0;
+
+		irq->pending_latch = false;
+
+		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return 0;
+}
+
+/*
+ * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
+ * is not queued on some running VCPU's LRs, because then the change to the
+ * active state can be overwritten when the VCPU's state is synced coming back
+ * from the guest.
+ *
+ * For shared interrupts as well as GICv3 private interrupts, we have to
+ * stop all the VCPUs because interrupts can be migrated while we don't hold
+ * the IRQ locks and we don't want to be chasing moving targets.
+ *
+ * For GICv2 private interrupts we don't have to do anything because
+ * userspace accesses to the VGIC state already require all VCPUs to be
+ * stopped, and only the VCPU itself can modify its private interrupts
+ * active state, which guarantees that the VCPU is not running.
+ */
+static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
+{
+	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+	    intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_halt_guest(vcpu->kvm);
+}
+
+/* See vgic_access_active_prepare */
+static void vgic_access_active_finish(struct kvm_vcpu *vcpu, u32 intid)
+{
+	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+	    intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_resume_guest(vcpu->kvm);
+}
+
+static unsigned long __vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+					     gpa_t addr, unsigned int len)
 {
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
	u32 value = 0;
···
	for (i = 0; i < len * 8; i++) {
		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);

+		/*
+		 * Even for HW interrupts, don't evaluate the HW state as
+		 * all the guest is interested in is the virtual state.
+		 */
		if (irq->active)
			value |= (1U << i);
···
	}

	return value;
+}
+
+unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+				    gpa_t addr, unsigned int len)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	u32 val;
+
+	mutex_lock(&vcpu->kvm->lock);
+	vgic_access_active_prepare(vcpu, intid);
+
+	val = __vgic_mmio_read_active(vcpu, addr, len);
+
+	vgic_access_active_finish(vcpu, intid);
+	mutex_unlock(&vcpu->kvm->lock);
+
+	return val;
+}
+
+unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
+				       gpa_t addr, unsigned int len)
+{
+	return __vgic_mmio_read_active(vcpu, addr, len);
 }

 /* Must be called with irq->irq_lock held */
···
	raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 }

-/*
- * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
- * is not queued on some running VCPU's LRs, because then the change to the
- * active state can be overwritten when the VCPU's state is synced coming back
- * from the guest.
- *
- * For shared interrupts, we have to stop all the VCPUs because interrupts can
- * be migrated while we don't hold the IRQ locks and we don't want to be
- * chasing moving targets.
- *
- * For private interrupts we don't have to do anything because userspace
- * accesses to the VGIC state already require all VCPUs to be stopped, and
- * only the VCPU itself can modify its private interrupts active state, which
- * guarantees that the VCPU is not running.
- */
-static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
-{
-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
-	    intid > VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_halt_guest(vcpu->kvm);
-}
-
-/* See vgic_change_active_prepare */
-static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
-{
-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
-	    intid > VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_resume_guest(vcpu->kvm);
-}
-
 static void __vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
				      gpa_t addr, unsigned int len,
				      unsigned long val)
···
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);

	mutex_lock(&vcpu->kvm->lock);
-	vgic_change_active_prepare(vcpu, intid);
+	vgic_access_active_prepare(vcpu, intid);

	__vgic_mmio_write_cactive(vcpu, addr, len, val);

-	vgic_change_active_finish(vcpu, intid);
+	vgic_access_active_finish(vcpu, intid);
	mutex_unlock(&vcpu->kvm->lock);
 }
···
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);

	mutex_lock(&vcpu->kvm->lock);
-	vgic_change_active_prepare(vcpu, intid);
+	vgic_access_active_prepare(vcpu, intid);

	__vgic_mmio_write_sactive(vcpu, addr, len, val);

-	vgic_change_active_finish(vcpu, intid);
+	vgic_access_active_finish(vcpu, intid);
	mutex_unlock(&vcpu->kvm->lock);
 }
virt/kvm/arm/vgic/vgic-mmio.h (+19)
···
			     gpa_t addr, unsigned int len,
			     unsigned long val);

+int vgic_uaccess_write_senable(struct kvm_vcpu *vcpu,
+			       gpa_t addr, unsigned int len,
+			       unsigned long val);
+
+int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
+			       gpa_t addr, unsigned int len,
+			       unsigned long val);
+
 unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
				     gpa_t addr, unsigned int len);
···
			     gpa_t addr, unsigned int len,
			     unsigned long val);

+int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
+				gpa_t addr, unsigned int len,
+				unsigned long val);
+
+int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
+				gpa_t addr, unsigned int len,
+				unsigned long val);
+
 unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+				    gpa_t addr, unsigned int len);
+
+unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
				    gpa_t addr, unsigned int len);

 void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,