···
 modprobe
 ========
 
-This gives the full path of the modprobe command which the kernel will
-use to load modules. This can be used to debug module loading
-requests::
+The full path to the usermode helper for autoloading kernel modules,
+by default "/sbin/modprobe". This binary is executed when the kernel
+requests a module. For example, if userspace passes an unknown
+filesystem type to mount(), then the kernel will automatically request
+the corresponding filesystem module by executing this usermode helper.
+This usermode helper should insert the needed module into the kernel.
+
+This sysctl only affects module autoloading. It has no effect on the
+ability to explicitly insert modules.
+
+This sysctl can be used to debug module loading requests::
 
     echo '#! /bin/sh' > /tmp/modprobe
     echo 'echo "$@" >> /tmp/modprobe.log' >> /tmp/modprobe
···
     chmod a+x /tmp/modprobe
     echo /tmp/modprobe > /proc/sys/kernel/modprobe
 
-This only applies when the *kernel* is requesting that the module be
-loaded; it won't have any effect if the module is being loaded
-explicitly using ``modprobe`` from userspace.
+Alternatively, if this sysctl is set to the empty string, then module
+autoloading is completely disabled. The kernel will not try to
+execute a usermode helper at all, nor will it call the
+kernel_module_request LSM hook.
 
+If CONFIG_STATIC_USERMODEHELPER=y is set in the kernel configuration,
+then the configured static usermode helper overrides this sysctl,
+except that the empty string is still accepted to completely disable
+module autoloading as described above.
 
 modules_disabled
 ================
···
 2) Toggle with non-default value will be set back to -1 by kernel after
    successful IPC object allocation. If an IPC object allocation syscall
    fails, it is undefined if the value remains unmodified or is reset to -1.
-
-modprobe:
-=========
-
-The path to the usermode helper for autoloading kernel modules, by
-default "/sbin/modprobe". This binary is executed when the kernel
-requests a module. For example, if userspace passes an unknown
-filesystem type to mount(), then the kernel will automatically request
-the corresponding filesystem module by executing this usermode helper.
-This usermode helper should insert the needed module into the kernel.
-
-This sysctl only affects module autoloading. It has no effect on the
-ability to explicitly insert modules.
-
-If this sysctl is set to the empty string, then module autoloading is
-completely disabled. The kernel will not try to execute a usermode
-helper at all, nor will it call the kernel_module_request LSM hook.
-
-If CONFIG_STATIC_USERMODEHELPER=y is set in the kernel configuration,
-then the configured static usermode helper overrides this sysctl,
-except that the empty string is still accepted to completely disable
-module autoloading as described above.
 
 nmi_watchdog
 ============
Documentation/core-api/timekeeping.rst | +3 -3

···
 
   Use ktime_get() or ktime_get_ts64() instead.
 
-.. c:function:: struct timeval do_gettimeofday( void )
-                struct timespec getnstimeofday( void )
-                struct timespec64 getnstimeofday64( void )
+.. c:function:: void do_gettimeofday( struct timeval * )
+                void getnstimeofday( struct timespec * )
+                void getnstimeofday64( struct timespec64 * )
                 void ktime_get_real_ts( struct timespec * )
 
   ktime_get_real_ts64() is a direct replacement, but consider using
···
-Analog Device ADV7123 Video DAC
--------------------------------
+Analog Devices ADV7123 Video DAC
+--------------------------------
 
 The ADV7123 is a digital-to-analog converter that outputs VGA signals from a
 parallel video input.
···
-Analog Device ADV7511(W)/13/33/35 HDMI Encoders
------------------------------------------------
+Analog Devices ADV7511(W)/13/33/35 HDMI Encoders
+------------------------------------------------
 
 The ADV7511, ADV7511W, ADV7513, ADV7533 and ADV7535 are HDMI audio and video
 transmitters compatible with HDMI 1.4 and DVI 1.0. They support color space
···
-* Analog Device AD5755 IIO Multi-Channel DAC Linux Driver
+* Analog Devices AD5755 IIO Multi-Channel DAC Linux Driver
 
 Required properties:
  - compatible: Has to contain one of the following:
···
           bits of a vendor specific ID.
       - items:
           - pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
+          - const: ethernet-phy-ieee802.3-c22
+      - items:
+          - pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
           - const: ethernet-phy-ieee802.3-c45
 
   reg:
Documentation/devicetree/bindings/net/fsl-fec.txt | +2

···
 - fsl,err006687-workaround-present: If present indicates that the system has
   the hardware workaround for ERR006687 applied and does not need a software
   workaround.
+- gpr: phandle of SoC general purpose register mode. Required for wake on LAN
+  on some SoCs
 -interrupt-names: names of the interrupts listed in interrupts property in
   the same order. The defaults if not specified are
   __Number of interrupts__  __Default__
···
 
 Optional properties for compatible string qcom,wcn399x-bt:
 
- - max-speed: see Documentation/devicetree/bindings/serial/slave-device.txt
+ - max-speed: see Documentation/devicetree/bindings/serial/serial.yaml
 - firmware-name: specify the name of nvm firmware to load
 - clocks: clock provided to the controller
···
 the node is not important. The content of the node is defined in dwc3.txt.
 
 Phy documentation is provided in the following places:
-Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt - USB3 QMP PHY
-Documentation/devicetree/bindings/phy/qcom-qusb2-phy.txt - USB2 QUSB2 PHY
+Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt    - USB3 QMP PHY
+Documentation/devicetree/bindings/phy/qcom,qusb2-phy.yaml - USB2 QUSB2 PHY
 
 Example device nodes:
···
 the node is not important. The content of the node is defined in dwc3.txt.
 
 Phy documentation is provided in the following places:
-Documentation/devicetree/bindings/phy/phy-rockchip-inno-usb2.txt - USB2.0 PHY
+Documentation/devicetree/bindings/phy/phy-rockchip-inno-usb2.yaml - USB2.0 PHY
 Documentation/devicetree/bindings/phy/phy-rockchip-typec.txt - Type-C PHY
 
 Example device nodes:
···
 tcp_challenge_ack_limit - INTEGER
         Limits number of Challenge ACK sent per second, as recommended
         in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
-        Default: 100
+        Default: 1000
 
 tcp_rx_skb_cache - BOOLEAN
         Controls a per TCP socket cache of one skb, that might help
···
+======================================================
 Net DIM - Generic Network Dynamic Interrupt Moderation
 ======================================================
 
-Author:
-        Tal Gilboa <talgi@mellanox.com>
+:Author: Tal Gilboa <talgi@mellanox.com>
 
+.. contents:: :depth: 2
 
-Contents
-=========
-
-- Assumptions
-- Introduction
-- The Net DIM Algorithm
-- Registering a Network Device to DIM
-- Example
-
-Part 0: Assumptions
-======================
+Assumptions
+===========
 
 This document assumes the reader has basic knowledge in network drivers
 and in general interrupt moderation.
 
 
-Part I: Introduction
-======================
+Introduction
+============
 
 Dynamic Interrupt Moderation (DIM) (in networking) refers to changing the
 interrupt moderation configuration of a channel in order to optimize packet
···
 increase bandwidth over reducing interrupt rate.
 
 
-Part II: The Net DIM Algorithm
-===============================
+Net DIM Algorithm
+=================
 
 Each iteration of the Net DIM algorithm follows these steps:
-1. Calculates new data sample.
-2. Compares it to previous sample.
-3. Makes a decision - suggests interrupt moderation configuration fields.
-4. Applies a schedule work function, which applies suggested configuration.
+
+#. Calculates new data sample.
+#. Compares it to previous sample.
+#. Makes a decision - suggests interrupt moderation configuration fields.
+#. Applies a schedule work function, which applies suggested configuration.
 
 The first two steps are straightforward, both the new and the previous data are
 supplied by the driver registered to Net DIM. The previous data is the new data
···
 under some conditions.
 
 
-Part III: Registering a Network Device to DIM
-==============================================
+Registering a Network Device to DIM
+===================================
 
-Net DIM API exposes the main function net_dim(struct dim *dim,
-struct dim_sample end_sample). This function is the entry point to the Net
+Net DIM API exposes the main function net_dim().
+This function is the entry point to the Net
 DIM algorithm and has to be called every time the driver would like to check if
 it should change interrupt moderation parameters. The driver should provide two
-data structures: struct dim and struct dim_sample. Struct dim
+data structures: :c:type:`struct dim <dim>` and
+:c:type:`struct dim_sample <dim_sample>`. :c:type:`struct dim <dim>`
 describes the state of DIM for a specific object (RX queue, TX queue,
 other queues, etc.). This includes the current selected profile, previous data
 samples, the callback function provided by the driver and more.
-Struct dim_sample describes a data sample, which will be compared to the
-data sample stored in struct dim in order to decide on the algorithm's next
+:c:type:`struct dim_sample <dim_sample>` describes a data sample,
+which will be compared to the data sample stored in :c:type:`struct dim <dim>`
+in order to decide on the algorithm's next
 step. The sample should include bytes, packets and interrupts, measured by
 the driver.
···
 interrupt. Since Net DIM has a built-in moderation and it might decide to skip
 iterations under certain conditions, there is no need to moderate the net_dim()
 calls as well. As mentioned above, the driver needs to provide an object of type
-struct dim to the net_dim() function call. It is advised for each entity
-using Net DIM to hold a struct dim as part of its data structure and use it
-as the main Net DIM API object. The struct dim_sample should hold the latest
+:c:type:`struct dim <dim>` to the net_dim() function call. It is advised for
+each entity using Net DIM to hold a :c:type:`struct dim <dim>` as part of its
+data structure and use it as the main Net DIM API object.
+The :c:type:`struct dim_sample <dim_sample>` should hold the latest
 bytes, packets and interrupts count. No need to perform any calculations, just
 include the raw data.
···
 the proper state in order to move to the next iteration.
 
 
-Part IV: Example
-=================
+Example
+=======
 
 The following code demonstrates how to register a driver to Net DIM. The actual
 usage is not complete but it should make the outline of the usage clear.
 
-my_driver.c:
+.. code-block:: c
 
-#include <linux/dim.h>
+  #include <linux/dim.h>
 
-/* Callback for net DIM to schedule on a decision to change moderation */
-void my_driver_do_dim_work(struct work_struct *work)
-{
+  /* Callback for net DIM to schedule on a decision to change moderation */
+  void my_driver_do_dim_work(struct work_struct *work)
+  {
        /* Get struct dim from struct work_struct */
        struct dim *dim = container_of(work, struct dim,
                                       work);
···
 
        /* Signal net DIM work is done and it should move to next iteration */
        dim->state = DIM_START_MEASURE;
-}
+  }
 
-/* My driver's interrupt handler */
-int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
-{
+  /* My driver's interrupt handler */
+  int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
+  {
        ...
        /* A struct to hold current measured data */
        struct dim_sample dim_sample;
···
        /* Call net DIM */
        net_dim(&my_entity->dim, dim_sample);
        ...
-}
+  }
 
-/* My entity's initialization function (my_entity was already allocated) */
-int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
-{
+  /* My entity's initialization function (my_entity was already allocated) */
+  int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
+  {
        ...
        /* Initiate struct work_struct with my driver's callback function */
        INIT_WORK(&my_entity->dim.work, my_driver_do_dim_work);
        ...
-}
+  }
+
+Dynamic Interrupt Moderation (DIM) library API
+==============================================
+
+.. kernel-doc:: include/linux/dim.h
+   :internal:
Documentation/x86/boot.rst | +18 -3

···
 must be __BOOT_DS; interrupt must be disabled; %rsi must hold the base
 address of the struct boot_params.
 
-EFI Handover Protocol
-=====================
+EFI Handover Protocol (deprecated)
+==================================
 
 This protocol allows boot loaders to defer initialisation to the EFI
 boot stub. The boot loader is required to load the kernel/initrd(s)
 from the boot media and jump to the EFI handover protocol entry point
 which is hdr->handover_offset bytes from the beginning of
 startup_{32,64}.
+
+The boot loader MUST respect the kernel's PE/COFF metadata when it comes
+to section alignment, the memory footprint of the executable image beyond
+the size of the file itself, and any other aspect of the PE/COFF header
+that may affect correct operation of the image as a PE/COFF binary in the
+execution context provided by the EFI firmware.
 
 The function prototype for the handover entry point looks like this::
···
 
 The boot loader *must* fill out the following fields in bp::
 
- - hdr.code32_start
  - hdr.cmd_line_ptr
  - hdr.ramdisk_image (if applicable)
  - hdr.ramdisk_size (if applicable)
 
 All other fields should be zero.
+
+NOTE: The EFI Handover Protocol is deprecated in favour of the ordinary PE/COFF
+      entry point, combined with the LINUX_EFI_INITRD_MEDIA_GUID based initrd
+      loading protocol (refer to [0] for an example of the bootloader side of
+      this), which removes the need for any knowledge on the part of the EFI
+      bootloader regarding the internal representation of boot_params or any
+      requirements/limitations regarding the placement of the command line
+      and ramdisk in memory, or the placement of the kernel image itself.
+
+[0] https://github.com/u-boot/u-boot/commit/ec80b4735a593961fe701cc3a5d717d4739b0fd0
···
                @ running beyond the PoU, and so calling cache_off below from
                @ inside the PE/COFF loader allocated region is unsafe unless
                @ we explicitly clean it to the PoC.
-               adr     r0, call_cache_fn       @ region of code we will
+ ARM(           adrl    r0, call_cache_fn       )
+ THUMB(         adr     r0, call_cache_fn       )       @ region of code we will
                adr     r1, 0f                  @ run with MMU off
                bl      cache_clean_flush
                bl      cache_off
···
 #ifndef CONFIG_BROKEN_GAS_INST
 
 #ifdef __ASSEMBLY__
-#define __emit_inst(x)                  .inst (x)
+// The space separator is omitted so that __emit_inst(x) can be parsed as
+// either an assembler directive or an assembler macro argument.
+#define __emit_inst(x)                  .inst(x)
 #else
 #define __emit_inst(x)                  ".inst " __stringify((x)) "\n\t"
 #endif
arch/arm64/kernel/vdso.c | +1 -12

···
        if (ret)
                return ret;
 
-       ret = aarch32_alloc_kuser_vdso_page();
-       if (ret) {
-               unsigned long c_vvar =
-                       (unsigned long)page_to_virt(aarch32_vdso_pages[C_VVAR]);
-               unsigned long c_vdso =
-                       (unsigned long)page_to_virt(aarch32_vdso_pages[C_VDSO]);
-
-               free_page(c_vvar);
-               free_page(c_vdso);
-       }
-
-       return ret;
+       return aarch32_alloc_kuser_vdso_page();
 }
 #else
 static int __aarch32_alloc_vdso_pages(void)
···
        return -(1L << 31) <= val && val < (1L << 31);
 }
 
+static bool in_auipc_jalr_range(s64 val)
+{
+       /*
+        * auipc+jalr can reach any signed PC-relative offset in the range
+        * [-2^31 - 2^11, 2^31 - 2^11).
+        */
+       return (-(1L << 31) - (1L << 11)) <= val &&
+              val < ((1L << 31) - (1L << 11));
+}
+
 static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
 {
        /* Note that the immediate from the add is sign-extended,
···
        *rd = RV_REG_T2;
 }
 
-static void emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
-                              struct rv_jit_context *ctx)
+static int emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
+                             struct rv_jit_context *ctx)
 {
        s64 upper, lower;
 
        if (rvoff && is_21b_int(rvoff) && !force_jalr) {
                emit(rv_jal(rd, rvoff >> 1), ctx);
-               return;
+               return 0;
+       } else if (in_auipc_jalr_range(rvoff)) {
+               upper = (rvoff + (1 << 11)) >> 12;
+               lower = rvoff & 0xfff;
+               emit(rv_auipc(RV_REG_T1, upper), ctx);
+               emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
+               return 0;
        }
 
-       upper = (rvoff + (1 << 11)) >> 12;
-       lower = rvoff & 0xfff;
-       emit(rv_auipc(RV_REG_T1, upper), ctx);
-       emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
+       pr_err("bpf-jit: target offset 0x%llx is out of range\n", rvoff);
+       return -ERANGE;
 }
 
 static bool is_signed_bpf_cond(u8 cond)
···
        s64 off = 0;
        u64 ip;
        u8 rd;
+       int ret;
 
        if (addr && ctx->insns) {
                ip = (u64)(long)(ctx->insns + ctx->ninsns);
                off = addr - ip;
-               if (!is_32b_int(off)) {
-                       pr_err("bpf-jit: target call addr %pK is out of range\n",
-                              (void *)addr);
-                       return -ERANGE;
-               }
        }
 
-       emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
+       ret = emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
+       if (ret)
+               return ret;
        rd = bpf_to_rv_reg(BPF_REG_0, ctx);
        emit(rv_addi(rd, RV_REG_A0, 0), ctx);
        return 0;
···
 {
        bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
                    BPF_CLASS(insn->code) == BPF_JMP;
-       int s, e, rvoff, i = insn - ctx->prog->insnsi;
+       int s, e, rvoff, ret, i = insn - ctx->prog->insnsi;
        struct bpf_prog_aux *aux = ctx->prog->aux;
        u8 rd = -1, rs = -1, code = insn->code;
        s16 off = insn->off;
···
        /* JUMP off */
        case BPF_JMP | BPF_JA:
                rvoff = rv_offset(i, off, ctx);
-               emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+               ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+               if (ret)
+                       return ret;
                break;
 
        /* IF (dst COND src) JUMP off */
···
        case BPF_JMP | BPF_CALL:
        {
                bool fixed;
-               int ret;
                u64 addr;
 
                mark_call(ctx);
···
                        break;
 
                rvoff = epilogue_offset(ctx);
-               emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+               ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+               if (ret)
+                       return ret;
                break;
 
        /* dst = imm64 */
arch/x86/hyperv/hv_init.c | +5 -1

···
 #include <linux/mm.h>
 #include <linux/hyperv.h>
 #include <linux/slab.h>
+#include <linux/kernel.h>
 #include <linux/cpuhotplug.h>
 #include <linux/syscore_ops.h>
 #include <clocksource/hyperv_timer.h>
···
 }
 EXPORT_SYMBOL_GPL(hyperv_cleanup);
 
-void hyperv_report_panic(struct pt_regs *regs, long err)
+void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die)
 {
        static bool panic_reported;
        u64 guest_id;
+
+       if (in_die && !panic_on_oops)
+               return;
 
        /*
         * We prefer to report panic on 'die' chain as we have proper
···
        unsigned int mpb[0];
 };
 
-#define PATCH_MAX_SIZE PAGE_SIZE
+#define PATCH_MAX_SIZE (3 * PAGE_SIZE)
 
 #ifdef CONFIG_MICROCODE_AMD
 extern void __init load_ucode_amd_bsp(unsigned int family);
arch/x86/kernel/cpu/intel.c | +36 -18

···
        sld_update_msr(!(tifn & _TIF_SLD));
 }
 
-#define SPLIT_LOCK_CPU(model) {X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY}
-
 /*
- * The following processors have the split lock detection feature. But
- * since they don't have the IA32_CORE_CAPABILITIES MSR, the feature cannot
- * be enumerated. Enable it by family and model matching on these
- * processors.
+ * Bits in the IA32_CORE_CAPABILITIES are not architectural, so they should
+ * only be trusted if it is confirmed that a CPU model implements a
+ * specific feature at a particular bit position.
+ *
+ * The possible driver data field values:
+ *
+ * - 0: CPU models that are known to have the per-core split-lock detection
+ *      feature even though they do not enumerate IA32_CORE_CAPABILITIES.
+ *
+ * - 1: CPU models which may enumerate IA32_CORE_CAPABILITIES and if so use
+ *      bit 5 to enumerate the per-core split-lock detection feature.
  */
 static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
-       SPLIT_LOCK_CPU(INTEL_FAM6_ICELAKE_X),
-       SPLIT_LOCK_CPU(INTEL_FAM6_ICELAKE_L),
+       X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X,           0),
+       X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L,           0),
+       X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT,        1),
+       X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D,      1),
+       X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L,      1),
        {}
 };
 
 void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c)
 {
-       u64 ia32_core_caps = 0;
+       const struct x86_cpu_id *m;
+       u64 ia32_core_caps;
 
-       if (c->x86_vendor != X86_VENDOR_INTEL)
+       if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                return;
-       if (cpu_has(c, X86_FEATURE_CORE_CAPABILITIES)) {
-               /* Enumerate features reported in IA32_CORE_CAPABILITIES MSR. */
+
+       m = x86_match_cpu(split_lock_cpu_ids);
+       if (!m)
+               return;
+
+       switch (m->driver_data) {
+       case 0:
+               break;
+       case 1:
+               if (!cpu_has(c, X86_FEATURE_CORE_CAPABILITIES))
+                       return;
                rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
-       } else if (!boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
-               /* Enumerate split lock detection by family and model. */
-               if (x86_match_cpu(split_lock_cpu_ids))
-                       ia32_core_caps |= MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
+               if (!(ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT))
+                       return;
+               break;
+       default:
+               return;
        }
 
-       if (ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
-               split_lock_setup();
+       split_lock_setup();
 }
arch/x86/kernel/cpu/mshyperv.c | +12 -2

···
        ms_hyperv.misc_features = cpuid_edx(HYPERV_CPUID_FEATURES);
        ms_hyperv.hints = cpuid_eax(HYPERV_CPUID_ENLIGHTMENT_INFO);
 
-       pr_info("Hyper-V: features 0x%x, hints 0x%x\n",
-               ms_hyperv.features, ms_hyperv.hints);
+       pr_info("Hyper-V: features 0x%x, hints 0x%x, misc 0x%x\n",
+               ms_hyperv.features, ms_hyperv.hints, ms_hyperv.misc_features);
 
        ms_hyperv.max_vp_index = cpuid_eax(HYPERV_CPUID_IMPLEMENT_LIMITS);
        ms_hyperv.max_lp_index = cpuid_ebx(HYPERV_CPUID_IMPLEMENT_LIMITS);
···
                ms_hyperv.nested_features =
                        cpuid_eax(HYPERV_CPUID_NESTED_FEATURES);
        }
+
+       /*
+        * Hyper-V expects to get crash register data or kmsg when the
+        * crash enlightenment is available and the system crashes. Set
+        * crash_kexec_post_notifiers to true to make sure the crash
+        * enlightenment interface is called before running the kdump
+        * kernel.
+        */
+       if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
+               crash_kexec_post_notifiers = true;
 
 #ifdef CONFIG_X86_LOCAL_APIC
        if (ms_hyperv.features & HV_X64_ACCESS_FREQUENCY_MSRS &&
···
        return 0;
 }
 
+/* Restore the qos cfg state when a domain comes online */
+void rdt_domain_reconfigure_cdp(struct rdt_resource *r)
+{
+       if (!r->alloc_capable)
+               return;
+
+       if (r == &rdt_resources_all[RDT_RESOURCE_L2DATA])
+               l2_qos_cfg_update(&r->alloc_enabled);
+
+       if (r == &rdt_resources_all[RDT_RESOURCE_L3DATA])
+               l3_qos_cfg_update(&r->alloc_enabled);
+}
+
 /*
  * Enable or disable the MBA software controller
  * which helps user specify bandwidth in MBps.
···
         * If the rdtgroup is a mon group and parent directory
         * is a valid "mon_groups" directory, remove the mon group.
         */
-       if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn) {
+       if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn &&
+           rdtgrp != &rdtgroup_default) {
                if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP ||
                    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
                        ret = rdtgroup_ctrl_remove(kn, rdtgrp);
···
 EXPORT_SYMBOL_GPL(software_node_unregister_nodes);
 
 /**
+ * software_node_register_node_group - Register a group of software nodes
+ * @node_group: NULL terminated array of software node pointers to be registered
+ *
+ * Register multiple software nodes at once.
+ */
+int software_node_register_node_group(const struct software_node **node_group)
+{
+       unsigned int i;
+       int ret;
+
+       if (!node_group)
+               return 0;
+
+       for (i = 0; node_group[i]; i++) {
+               ret = software_node_register(node_group[i]);
+               if (ret) {
+                       software_node_unregister_node_group(node_group);
+                       return ret;
+               }
+       }
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(software_node_register_node_group);
+
+/**
+ * software_node_unregister_node_group - Unregister a group of software nodes
+ * @node_group: NULL terminated array of software node pointers to be unregistered
+ *
+ * Unregister multiple software nodes at once.
+ */
+void software_node_unregister_node_group(const struct software_node **node_group)
+{
+       struct swnode *swnode;
+       unsigned int i;
+
+       if (!node_group)
+               return;
+
+       for (i = 0; node_group[i]; i++) {
+               swnode = software_node_to_swnode(node_group[i]);
+               if (swnode)
+                       fwnode_remove_software_node(&swnode->fwnode);
+       }
+}
+EXPORT_SYMBOL_GPL(software_node_unregister_node_group);
+
+/**
  * software_node_register - Register static software node
  * @node: The software node to be registered
  */
drivers/block/rbd.c | +19 -14

···
 static void rbd_notify_op_lock(struct rbd_device *rbd_dev,
                                enum rbd_notify_op notify_op)
 {
-       struct page **reply_pages;
-       size_t reply_len;
-
-       __rbd_notify_op_lock(rbd_dev, notify_op, &reply_pages, &reply_len);
-       ceph_release_page_vector(reply_pages, calc_pages_for(0, reply_len));
+       __rbd_notify_op_lock(rbd_dev, notify_op, NULL, NULL);
 }
 
 static void rbd_notify_acquired_lock(struct work_struct *work)
···
        cancel_work_sync(&rbd_dev->unlock_work);
 }
 
+/*
+ * header_rwsem must not be held to avoid a deadlock with
+ * rbd_dev_refresh() when flushing notifies.
+ */
 static void rbd_unregister_watch(struct rbd_device *rbd_dev)
 {
        cancel_tasks_sync(rbd_dev);
···
 
 static void rbd_dev_image_release(struct rbd_device *rbd_dev)
 {
-       rbd_dev_unprobe(rbd_dev);
-       if (rbd_dev->opts)
+       if (!rbd_is_ro(rbd_dev))
                rbd_unregister_watch(rbd_dev);
+
+       rbd_dev_unprobe(rbd_dev);
        rbd_dev->image_format = 0;
        kfree(rbd_dev->spec->image_id);
        rbd_dev->spec->image_id = NULL;
···
  * device. If this image is the one being mapped (i.e., not a
  * parent), initiate a watch on its header object before using that
  * object to get detailed information about the rbd image.
+ *
+ * On success, returns with header_rwsem held for write if called
+ * with @depth == 0.
  */
 static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
 {
···
                }
        }
 
+       if (!depth)
+               down_write(&rbd_dev->header_rwsem);
+
        ret = rbd_dev_header_info(rbd_dev);
        if (ret) {
                if (ret == -ENOENT && !need_watch)
                        rbd_print_dne(rbd_dev, false);
-               goto err_out_watch;
+               goto err_out_probe;
        }
 
        /*
···
        return 0;
 
 err_out_probe:
-       rbd_dev_unprobe(rbd_dev);
-err_out_watch:
+       if (!depth)
+               up_write(&rbd_dev->header_rwsem);
        if (need_watch)
                rbd_unregister_watch(rbd_dev);
+       rbd_dev_unprobe(rbd_dev);
 err_out_format:
        rbd_dev->image_format = 0;
        kfree(rbd_dev->spec->image_id);
···
                goto err_out_rbd_dev;
        }
 
-       down_write(&rbd_dev->header_rwsem);
        rc = rbd_dev_image_probe(rbd_dev, 0);
-       if (rc < 0) {
-               up_write(&rbd_dev->header_rwsem);
+       if (rc < 0)
                goto err_out_rbd_dev;
-       }
 
        if (rbd_dev->opts->alloc_size > rbd_dev->layout.object_size) {
                rbd_warn(rbd_dev, "alloc_size adjusted to %u",
···
  */
 #define EFI_READ_CHUNK_SIZE     SZ_1M
 
+struct finfo {
+       efi_file_info_t info;
+       efi_char16_t filename[MAX_FILENAME_SIZE];
+};
+
 static efi_status_t efi_open_file(efi_file_protocol_t *volume,
-                                 efi_char16_t *filename_16,
+                                 struct finfo *fi,
                                  efi_file_protocol_t **handle,
                                  unsigned long *file_size)
 {
-       struct {
-               efi_file_info_t info;
-               efi_char16_t filename[MAX_FILENAME_SIZE];
-       } finfo;
        efi_guid_t info_guid = EFI_FILE_INFO_ID;
        efi_file_protocol_t *fh;
        unsigned long info_sz;
        efi_status_t status;
 
-       status = volume->open(volume, &fh, filename_16, EFI_FILE_MODE_READ, 0);
+       status = volume->open(volume, &fh, fi->filename, EFI_FILE_MODE_READ, 0);
        if (status != EFI_SUCCESS) {
                pr_efi_err("Failed to open file: ");
-               efi_char16_printk(filename_16);
+               efi_char16_printk(fi->filename);
                efi_printk("\n");
                return status;
        }
 
-       info_sz = sizeof(finfo);
-       status = fh->get_info(fh, &info_guid, &info_sz, &finfo);
+       info_sz = sizeof(struct finfo);
+       status = fh->get_info(fh, &info_guid, &info_sz, fi);
        if (status != EFI_SUCCESS) {
                pr_efi_err("Failed to get file info\n");
                fh->close(fh);
···
        }
 
        *handle = fh;
-       *file_size = finfo.info.file_size;
+       *file_size = fi->info.file_size;
        return EFI_SUCCESS;
 }
···
 
        alloc_addr = alloc_size = 0;
        do {
-               efi_char16_t filename[MAX_FILENAME_SIZE];
+               struct finfo fi;
                unsigned long size;
                void *addr;
 
                offset = find_file_option(cmdline, cmdline_len,
                                          optstr, optstr_size,
-                                         filename, ARRAY_SIZE(filename));
+                                         fi.filename, ARRAY_SIZE(fi.filename));
 
                if (!offset)
                        break;
···
                        return status;
                }
 
-               status = efi_open_file(volume, filename, &file, &size);
+               status = efi_open_file(volume, &fi, &file, &size);
                if (status != EFI_SUCCESS)
                        goto err_close_volume;
drivers/firmware/efi/libstub/x86-stub.c (+11, -7)

 /* Maximum physical address for 64-bit kernel with 4-level paging */
 #define MAXMEM_X86_64_4LEVEL (1ull << 46)

-static efi_system_table_t *sys_table;
+static efi_system_table_t *sys_table __efistub_global;
 extern const bool efi_is64;
 extern u32 image_offset;
···
     image_base = efi_table_attr(image, image_base);
     image_offset = (void *)startup_32 - image_base;

-    hdr = &((struct boot_params *)image_base)->hdr;
-
     status = efi_allocate_pages(0x4000, (unsigned long *)&boot_params, ULONG_MAX);
     if (status != EFI_SUCCESS) {
         efi_printk("Failed to allocate lowmem for boot params\n");
···
      * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same as what
      * KASLR uses.
      *
-     * Also relocate it if image_offset is zero, i.e. we weren't loaded by
-     * LoadImage, but we are not aligned correctly.
+     * Also relocate it if image_offset is zero, i.e. the kernel wasn't
+     * loaded by LoadImage, but rather by a bootloader that called the
+     * handover entry. The reason we must always relocate in this case is
+     * to handle the case of systemd-boot booting a unified kernel image,
+     * which is a PE executable that contains the bzImage and an initrd as
+     * COFF sections. The initrd section is placed after the bzImage
+     * without ensuring that there are at least init_size bytes available
+     * for the bzImage, and thus the compressed kernel's startup code may
+     * overwrite the initrd unless it is moved out of the way.
      */

     buffer_start = ALIGN(bzimage_addr - image_offset,
···
     if ((buffer_start < LOAD_PHYSICAL_ADDR)                  ||
         (IS_ENABLED(CONFIG_X86_32) && buffer_end > KERNEL_IMAGE_SIZE)    ||
         (IS_ENABLED(CONFIG_X86_64) && buffer_end > MAXMEM_X86_64_4LEVEL) ||
-        (image_offset == 0 && !IS_ALIGNED(bzimage_addr,
-                          hdr->kernel_alignment))) {
+        (image_offset == 0)) {
         status = efi_relocate_kernel(&bzimage_addr,
                          hdr->init_size, hdr->init_size,
                          hdr->pref_address,
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c (+20, -2)

  */
 static bool amdgpu_device_check_vram_lost(struct amdgpu_device *adev)
 {
-    return !!memcmp(adev->gart.ptr, adev->reset_magic,
-            AMDGPU_RESET_MAGIC_NUM);
+    if (memcmp(adev->gart.ptr, adev->reset_magic,
+            AMDGPU_RESET_MAGIC_NUM))
+        return true;
+
+    if (!adev->in_gpu_reset)
+        return false;
+
+    /*
+     * For all ASICs with baco/mode1 reset, the VRAM is
+     * always assumed to be lost.
+     */
+    switch (amdgpu_asic_reset_method(adev)) {
+    case AMD_RESET_METHOD_BACO:
+    case AMD_RESET_METHOD_MODE1:
+        return true;
+    default:
+        return false;
+    }
 }

 /**
···
 {
     int i, r;

+    amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+    amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);

     for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
         if (!adev->ip_blocks[i].status.valid)
drivers/gpu/drm/amd/amdgpu/cik.c (-2)

     int r;

     if (cik_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         r = amdgpu_dpm_baco_reset(adev);
     } else {
         r = cik_asic_pci_config_reset(adev);
drivers/gpu/drm/amd/amdgpu/nv.c

     struct smu_context *smu = &adev->smu;

     if (nv_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         ret = smu_baco_enter(smu);
         if (ret)
             return ret;
···
         if (ret)
             return ret;
     } else {
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         ret = nv_asic_mode1_reset(adev);
     }

drivers/gpu/drm/amd/amdgpu/soc15.c (-4)

     switch (soc15_asic_reset_method(adev)) {
     case AMD_RESET_METHOD_BACO:
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         return soc15_asic_baco_reset(adev);
     case AMD_RESET_METHOD_MODE2:
         return amdgpu_dpm_mode2_reset(adev);
     default:
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         return soc15_asic_mode1_reset(adev);
     }
 }
drivers/gpu/drm/amd/amdgpu/vi.c (-2)

     int r;

     if (vi_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
-        if (!adev->in_suspend)
-            amdgpu_inc_vram_lost(adev);
         r = amdgpu_dpm_baco_reset(adev);
     } else {
         r = vi_asic_pci_config_reset(adev);
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c (+4, -1)

 {
     uint32_t i;

+    /* force the trim if mclk_switching is disabled to prevent flicker */
+    bool force_trim = (low_limit == high_limit);
     for (i = 0; i < dpm_table->count; i++) {
     /*skip the trim if od is enabled*/
-        if (!hwmgr->od_enabled && (dpm_table->dpm_levels[i].value < low_limit
+        if ((!hwmgr->od_enabled || force_trim)
+            && (dpm_table->dpm_levels[i].value < low_limit
             || dpm_table->dpm_levels[i].value > high_limit))
             dpm_table->dpm_levels[i].enabled = false;
         else
drivers/gpu/drm/amd/powerplay/smu_v11_0.c (+6)

     if (ret)
         goto out;

+    if (ras && ras->supported) {
+        ret = smu_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL);
+        if (ret)
+            goto out;
+    }
+
     /* clear vbios scratch 6 and 7 for coming asic reinit */
     WREG32(adev->bios_scratch_reg_offset + 6, 0);
     WREG32(adev->bios_scratch_reg_offset + 7, 0);
drivers/gpu/drm/i915/gvt/kvmgt.c (+22, -24)

     struct work_struct release_work;
     atomic_t released;
     struct vfio_device *vfio_device;
+    struct vfio_group *vfio_group;
 };

 static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
···
         unsigned long size)
 {
     struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
+    struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
     int total_pages;
     int npage;
     int ret;
···
     for (npage = 0; npage < total_pages; npage++) {
         unsigned long cur_gfn = gfn + npage;

-        ret = vfio_unpin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1);
+        ret = vfio_group_unpin_pages(vdev->vfio_group, &cur_gfn, 1);
         drm_WARN_ON(&i915->drm, ret != 1);
     }
 }
···
 static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
         unsigned long size, struct page **page)
 {
+    struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
     unsigned long base_pfn = 0;
     int total_pages;
     int npage;
···
         unsigned long cur_gfn = gfn + npage;
         unsigned long pfn;

-        ret = vfio_pin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1,
-                     IOMMU_READ | IOMMU_WRITE, &pfn);
+        ret = vfio_group_pin_pages(vdev->vfio_group, &cur_gfn, 1,
+                       IOMMU_READ | IOMMU_WRITE, &pfn);
         if (ret != 1) {
             gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
                      cur_gfn, ret);
···
     struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
     unsigned long events;
     int ret;
+    struct vfio_group *vfio_group;

     vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
     vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
···
         goto undo_iommu;
     }

+    vfio_group = vfio_group_get_external_user_from_dev(mdev_dev(mdev));
+    if (IS_ERR_OR_NULL(vfio_group)) {
+        ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
+        gvt_vgpu_err("vfio_group_get_external_user_from_dev failed\n");
+        goto undo_register;
+    }
+    vdev->vfio_group = vfio_group;
+
     /* Take a module reference as mdev core doesn't take
      * a reference for vendor driver.
···
     return ret;

 undo_group:
+    vfio_group_put_external_user(vdev->vfio_group);
+    vdev->vfio_group = NULL;
+
+undo_register:
     vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
                  &vdev->group_notifier);
···
     kvmgt_guest_exit(info);

     intel_vgpu_release_msi_eventfd_ctx(vgpu);
+    vfio_group_put_external_user(vdev->vfio_group);

     vdev->kvm = NULL;
     vgpu->handle = 0;
···
         void *buf, unsigned long len, bool write)
 {
     struct kvmgt_guest_info *info;
-    struct kvm *kvm;
-    int idx, ret;
-    bool kthread = current->mm == NULL;

     if (!handle_valid(handle))
         return -ESRCH;

     info = (struct kvmgt_guest_info *)handle;
-    kvm = info->kvm;

-    if (kthread) {
-        if (!mmget_not_zero(kvm->mm))
-            return -EFAULT;
-        use_mm(kvm->mm);
-    }
-
-    idx = srcu_read_lock(&kvm->srcu);
-    ret = write ? kvm_write_guest(kvm, gpa, buf, len) :
-              kvm_read_guest(kvm, gpa, buf, len);
-    srcu_read_unlock(&kvm->srcu, idx);
-
-    if (kthread) {
-        unuse_mm(kvm->mm);
-        mmput(kvm->mm);
-    }
-
-    return ret;
+    return vfio_dma_rw(kvmgt_vdev(info->vgpu)->vfio_group,
+            gpa, buf, len, write);
 }

 static int kvmgt_read_gpa(unsigned long handle, unsigned long gpa,
drivers/gpu/drm/i915/i915_perf.c (+11, -54)

 }

 /**
- * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
- * @stream: An i915 perf stream
- * @file: An i915 perf stream file
- * @buf: destination buffer given by userspace
- * @count: the number of bytes userspace wants to read
- * @ppos: (inout) file seek position (unused)
- *
- * Besides wrapping &i915_perf_stream_ops->read this provides a common place to
- * ensure that if we've successfully copied any data then reporting that takes
- * precedence over any internal error status, so the data isn't lost.
- *
- * For example ret will be -ENOSPC whenever there is more buffered data than
- * can be copied to userspace, but that's only interesting if we weren't able
- * to copy some data because it implies the userspace buffer is too small to
- * receive a single record (and we never split records).
- *
- * Another case with ret == -EFAULT is more of a grey area since it would seem
- * like bad form for userspace to ask us to overrun its buffer, but the user
- * knows best:
- *
- *   http://yarchive.net/comp/linux/partial_reads_writes.html
- *
- * Returns: The number of bytes copied or a negative error code on failure.
- */
-static ssize_t i915_perf_read_locked(struct i915_perf_stream *stream,
-                     struct file *file,
-                     char __user *buf,
-                     size_t count,
-                     loff_t *ppos)
-{
-    /* Note we keep the offset (aka bytes read) separate from any
-     * error status so that the final check for whether we return
-     * the bytes read with a higher precedence than any error (see
-     * comment below) doesn't need to be handled/duplicated in
-     * stream->ops->read() implementations.
-     */
-    size_t offset = 0;
-    int ret = stream->ops->read(stream, buf, count, &offset);
-
-    return offset ?: (ret ?: -EAGAIN);
-}
-
-/**
  * i915_perf_read - handles read() FOP for i915 perf stream FDs
  * @file: An i915 perf stream file
  * @buf: destination buffer given by userspace
···
 {
     struct i915_perf_stream *stream = file->private_data;
     struct i915_perf *perf = stream->perf;
-    ssize_t ret;
+    size_t offset = 0;
+    int ret;

     /* To ensure it's handled consistently we simply treat all reads of a
      * disabled stream as an error. In particular it might otherwise lead
···
             return ret;

         mutex_lock(&perf->lock);
-        ret = i915_perf_read_locked(stream, file,
-                        buf, count, ppos);
+        ret = stream->ops->read(stream, buf, count, &offset);
         mutex_unlock(&perf->lock);
-    } while (ret == -EAGAIN);
+    } while (!offset && !ret);
     } else {
         mutex_lock(&perf->lock);
-        ret = i915_perf_read_locked(stream, file, buf, count, ppos);
+        ret = stream->ops->read(stream, buf, count, &offset);
         mutex_unlock(&perf->lock);
     }
···
      * and read() returning -EAGAIN. Clearing the oa.pollin state here
      * effectively ensures we back off until the next hrtimer callback
      * before reporting another EPOLLIN event.
+     * The exception to this is if ops->read() returned -ENOSPC which means
+     * that more OA data is available than could fit in the user provided
+     * buffer. In this case we want the next poll() call to not block.
      */
-    if (ret >= 0 || ret == -EAGAIN) {
-        /* Maybe make ->pollin per-stream state if we support multiple
-         * concurrent streams in the future.
-         */
+    if (ret != -ENOSPC)
         stream->pollin = false;
-    }

-    return ret;
+    /* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, ... */
+    return offset ?: (ret ?: -EAGAIN);
 }

 static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
drivers/hv/vmbus_drv.c

 #include <linux/kdebug.h>
 #include <linux/efi.h>
 #include <linux/random.h>
+#include <linux/kernel.h>
 #include <linux/syscore_ops.h>
 #include <clocksource/hyperv_timer.h>
 #include "hyperv_vmbus.h"
···

 static void *hv_panic_page;

+/*
+ * Boolean to control whether to report panic messages over Hyper-V.
+ *
+ * It can be set via /proc/sys/kernel/hyperv/record_panic_msg
+ */
+static int sysctl_record_panic_msg = 1;
+
+static int hyperv_report_reg(void)
+{
+    return !sysctl_record_panic_msg || !hv_panic_page;
+}
+
 static int hyperv_panic_event(struct notifier_block *nb, unsigned long val,
                   void *args)
 {
     struct pt_regs *regs;

-    regs = current_pt_regs();
+    vmbus_initiate_unload(true);

-    hyperv_report_panic(regs, val);
+    /*
+     * Hyper-V should be notified only once about a panic. If we will be
+     * doing hyperv_report_panic_msg() later with kmsg data, don't do
+     * the notification here.
+     */
+    if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE
+        && hyperv_report_reg()) {
+        regs = current_pt_regs();
+        hyperv_report_panic(regs, val, false);
+    }
     return NOTIFY_DONE;
 }

···
     struct die_args *die = (struct die_args *)args;
     struct pt_regs *regs = die->regs;

-    hyperv_report_panic(regs, val);
+    /*
+     * Hyper-V should be notified only once about a panic. If we will be
+     * doing hyperv_report_panic_msg() later with kmsg data, don't do
+     * the notification here.
+     */
+    if (hyperv_report_reg())
+        hyperv_report_panic(regs, val, true);
     return NOTIFY_DONE;
 }

···
 }

 /*
- * Boolean to control whether to report panic messages over Hyper-V.
- *
- * It can be set via /proc/sys/kernel/hyperv/record_panic_msg
- */
-static int sysctl_record_panic_msg = 1;
-
-/*
  * Callback from kmsg_dump. Grab as much as possible from the end of the kmsg
  * buffer and call into Hyper-V to transfer the data.
···
         hv_panic_page = (void *)hv_alloc_hyperv_zeroed_page();
         if (hv_panic_page) {
             ret = kmsg_dump_register(&hv_kmsg_dumper);
-            if (ret)
+            if (ret) {
                 pr_err("Hyper-V: kmsg dump register "
                     "error 0x%x\n", ret);
+                hv_free_hyperv_page(
+                    (unsigned long)hv_panic_page);
+                hv_panic_page = NULL;
+            }
         } else
             pr_err("Hyper-V: panic message page memory "
                 "allocation failed");
     }

     register_die_notifier(&hyperv_die_block);
-    atomic_notifier_chain_register(&panic_notifier_list,
-                       &hyperv_panic_block);
     }
+
+    /*
+     * Always register the panic notifier because we need to unload
+     * the VMbus channel connection to prevent any VMbus
+     * activity after the VM panics.
+     */
+    atomic_notifier_chain_register(&panic_notifier_list,
+                       &hyperv_panic_block);

     vmbus_request_offers();

···
     hv_remove_vmbus_irq();

     bus_unregister(&hv_bus);
-    hv_free_hyperv_page((unsigned long)hv_panic_page);
     unregister_sysctl_table(hv_ctl_table_hdr);
     hv_ctl_table_hdr = NULL;
     return ret;
···

     vmbus_initiate_unload(false);

-    vmbus_connection.conn_state = DISCONNECTED;
-
     /* Reset the event for the next resume. */
     reinit_completion(&vmbus_connection.ready_for_resume_event);

···
 {
     hv_stimer_global_cleanup();
     vmbus_initiate_unload(false);
-    vmbus_connection.conn_state = DISCONNECTED;
     /* Make sure conn_state is set as hv_synic_cleanup checks for it */
     mb();
     cpuhp_remove_state(hyperv_cpuhp_online);
···
      * doing the cleanup for current CPU only. This should be sufficient
      * for kdump.
      */
-    vmbus_connection.conn_state = DISCONNECTED;
     cpu = smp_processor_id();
     hv_stimer_cleanup(cpu);
     hv_synic_disable_regs(cpu);
drivers/hwmon/Kconfig (+1, -1)

       hard disk drives.

       This driver can also be built as a module. If so, the module
-      will be called satatemp.
+      will be called drivetemp.

 config SENSORS_DS620
     tristate "Dallas Semiconductor DS620"
drivers/hwmon/drivetemp.c (+6)

         return err;
     switch (attr) {
     case hwmon_temp_input:
+        if (!temp_is_valid(buf[SCT_STATUS_TEMP]))
+            return -ENODATA;
         *val = temp_from_sct(buf[SCT_STATUS_TEMP]);
         break;
     case hwmon_temp_lowest:
+        if (!temp_is_valid(buf[SCT_STATUS_TEMP_LOWEST]))
+            return -ENODATA;
         *val = temp_from_sct(buf[SCT_STATUS_TEMP_LOWEST]);
         break;
     case hwmon_temp_highest:
+        if (!temp_is_valid(buf[SCT_STATUS_TEMP_HIGHEST]))
+            return -ENODATA;
         *val = temp_from_sct(buf[SCT_STATUS_TEMP_HIGHEST]);
         break;
     default:
drivers/i2c/busses/i2c-tegra.c

     do {
         u32 status = i2c_readl(i2c_dev, I2C_INT_STATUS);

-        if (status) {
+        if (status)
             tegra_i2c_isr(i2c_dev->irq, i2c_dev);

-            if (completion_done(complete)) {
-                s64 delta = ktime_ms_delta(ktimeout, ktime);
+        if (completion_done(complete)) {
+            s64 delta = ktime_ms_delta(ktimeout, ktime);

-                return msecs_to_jiffies(delta) ?: 1;
-            }
+            return msecs_to_jiffies(delta) ?: 1;
         }

         ktime = ktime_get();
···
         disable_irq(i2c_dev->irq);

         /*
-         * There is a chance that completion may happen after IRQ
-         * synchronization, which is done by disable_irq().
+         * Under some rare circumstances (like running KASAN +
+         * NFS root) CPU, which handles interrupt, may stuck in
+         * uninterruptible state for a significant time. In this
+         * case we will get timeout if I2C transfer is running on
+         * a sibling CPU, despite of IRQ being raised.
+         *
+         * In order to handle this rare condition, the IRQ status
+         * needs to be checked after timeout.
          */
-        if (ret == 0 && completion_done(complete)) {
-            dev_warn(i2c_dev->dev,
-                 "completion done after timeout\n");
-            ret = 1;
-        }
+        if (ret == 0)
+            ret = tegra_i2c_poll_completion_timeout(i2c_dev,
+                                complete, 0);
     }

     return ret;
···
     if (dma) {
         time_left = tegra_i2c_wait_completion_timeout(
             i2c_dev, &i2c_dev->dma_complete, xfer_time);
+
+        /*
+         * Synchronize DMA first, since dmaengine_terminate_sync()
+         * performs synchronization after the transfer's termination
+         * and we want to get a completion if transfer succeeded.
+         */
+        dmaengine_synchronize(i2c_dev->msg_read ?
+                      i2c_dev->rx_dma_chan :
+                      i2c_dev->tx_dma_chan);

         dmaengine_terminate_sync(i2c_dev->msg_read ?
                      i2c_dev->rx_dma_chan :
drivers/net/dsa/mt7530.c

 };

 static int
-mt7623_trgmii_write(struct mt7530_priv *priv,  u32 reg, u32 val)
-{
-    int ret;
-
-    ret = regmap_write(priv->ethernet, TRGMII_BASE(reg), val);
-    if (ret < 0)
-        dev_err(priv->dev,
-            "failed to priv write register\n");
-    return ret;
-}
-
-static u32
-mt7623_trgmii_read(struct mt7530_priv *priv, u32 reg)
-{
-    int ret;
-    u32 val;
-
-    ret = regmap_read(priv->ethernet, TRGMII_BASE(reg), &val);
-    if (ret < 0) {
-        dev_err(priv->dev,
-            "failed to priv read register\n");
-        return ret;
-    }
-
-    return val;
-}
-
-static void
-mt7623_trgmii_rmw(struct mt7530_priv *priv, u32 reg,
-          u32 mask, u32 set)
-{
-    u32 val;
-
-    val = mt7623_trgmii_read(priv, reg);
-    val &= ~mask;
-    val |= set;
-    mt7623_trgmii_write(priv, reg, val);
-}
-
-static void
-mt7623_trgmii_set(struct mt7530_priv *priv, u32 reg, u32 val)
-{
-    mt7623_trgmii_rmw(priv, reg, 0, val);
-}
-
-static void
-mt7623_trgmii_clear(struct mt7530_priv *priv, u32 reg, u32 val)
-{
-    mt7623_trgmii_rmw(priv, reg, val, 0);
-}
-
-static int
 core_read_mmd_indirect(struct mt7530_priv *priv, int prtad, int devad)
 {
     struct mii_bus *bus = priv->bus;
···
     for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
         mt7530_rmw(priv, MT7530_TRGMII_RD(i),
                RD_TAP_MASK, RD_TAP(16));
-    else
-        if (priv->id != ID_MT7621)
-            mt7623_trgmii_set(priv, GSW_INTF_MODE,
-                      INTF_MODE_TRGMII);
-
-    return 0;
-}
-
-static int
-mt7623_pad_clk_setup(struct dsa_switch *ds)
-{
-    struct mt7530_priv *priv = ds->priv;
-    int i;
-
-    for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
-        mt7623_trgmii_write(priv, GSW_TRGMII_TD_ODT(i),
-                    TD_DM_DRVP(8) | TD_DM_DRVN(8));
-
-    mt7623_trgmii_set(priv, GSW_TRGMII_RCK_CTRL, RX_RST | RXC_DQSISEL);
-    mt7623_trgmii_clear(priv, GSW_TRGMII_RCK_CTRL, RX_RST);
-
     return 0;
 }
···
      */
     mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
            MT7530_PORT_MATRIX_MODE);
-    mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
-           VLAN_ATTR(MT7530_VLAN_TRANSPARENT));
+    mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
+           VLAN_ATTR(MT7530_VLAN_TRANSPARENT) |
+           PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));

     for (i = 0; i < MT7530_NUM_PORTS; i++) {
         if (dsa_is_user_port(ds, i) &&
···
     if (all_user_ports_removed) {
         mt7530_write(priv, MT7530_PCR_P(MT7530_CPU_PORT),
                  PCR_MATRIX(dsa_user_ports(priv->ds)));
-        mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT),
-                 PORT_SPEC_TAG);
+        mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT), PORT_SPEC_TAG
+                 | PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
     }
 }
···
     /* Set the port as a user port which is to be able to recognize VID
      * from incoming packets before fetching entry within the VLAN table.
      */
-    mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
-           VLAN_ATTR(MT7530_VLAN_USER));
+    mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
+           VLAN_ATTR(MT7530_VLAN_USER) |
+           PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
 }

 static void
···
     dn = dsa_to_port(ds, MT7530_CPU_PORT)->master->dev.of_node->parent;

     if (priv->id == ID_MT7530) {
-        priv->ethernet = syscon_node_to_regmap(dn);
-        if (IS_ERR(priv->ethernet))
-            return PTR_ERR(priv->ethernet);
-
         regulator_set_voltage(priv->core_pwr, 1000000, 1000000);
         ret = regulator_enable(priv->core_pwr);
         if (ret < 0) {
···
             mt7530_cpu_port_enable(priv, i);
         else
             mt7530_port_disable(ds, i);
+
+        /* Enable consistent egress tag */
+        mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
+               PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
     }

     /* Setup port 5 */
···
         /* Setup TX circuit incluing relevant PAD and driving */
         mt7530_pad_clk_setup(ds, state->interface);
-
-        if (priv->id == ID_MT7530) {
-            /* Setup RX circuit, relevant PAD and driving on the
-             * host which must be placed after the setup on the
-             * device side is all finished.
-             */
-            mt7623_pad_clk_setup(ds);
-        }

         priv->p6_interface = state->interface;
         break;
drivers/net/dsa/mt7530.h (+7, -10)

 /* Register for port vlan control */
 #define MT7530_PVC_P(x)            (0x2010 + ((x) * 0x100))
 #define  PORT_SPEC_TAG            BIT(5)
+#define  PVC_EG_TAG(x)            (((x) & 0x7) << 8)
+#define  PVC_EG_TAG_MASK        PVC_EG_TAG(7)
 #define  VLAN_ATTR(x)            (((x) & 0x3) << 6)
 #define  VLAN_ATTR_MASK            VLAN_ATTR(3)
+
+enum mt7530_vlan_port_eg_tag {
+    MT7530_VLAN_EG_DISABLED = 0,
+    MT7530_VLAN_EG_CONSISTENT = 1,
+};

 enum mt7530_vlan_port_attr {
     MT7530_VLAN_USER = 0,
···

 /* Registers for TRGMII on the both side */
 #define MT7530_TRGMII_RCK_CTRL        0x7a00
-#define GSW_TRGMII_RCK_CTRL        0x300
 #define  RX_RST                BIT(31)
 #define  RXC_DQSISEL            BIT(30)
 #define  DQSI1_TAP_MASK            (0x7f << 8)
···
 #define  DQSI0_TAP(x)            ((x) & 0x7f)

 #define MT7530_TRGMII_RCK_RTT        0x7a04
-#define GSW_TRGMII_RCK_RTT        0x304
 #define  DQS1_GATE            BIT(31)
 #define  DQS0_GATE            BIT(30)

 #define MT7530_TRGMII_RD(x)        (0x7a10 + (x) * 8)
-#define GSW_TRGMII_RD(x)        (0x310 + (x) * 8)
 #define  BSLIP_EN            BIT(31)
 #define  EDGE_CHK            BIT(30)
 #define  RD_TAP_MASK            0x7f
 #define  RD_TAP(x)            ((x) & 0x7f)

-#define GSW_TRGMII_TXCTRL        0x340
 #define MT7530_TRGMII_TXCTRL        0x7a40
 #define  TRAIN_TXEN            BIT(31)
 #define  TXC_INV            BIT(30)
 #define  TX_RST                BIT(28)

 #define MT7530_TRGMII_TD_ODT(i)        (0x7a54 + 8 * (i))
-#define GSW_TRGMII_TD_ODT(i)        (0x354 + 8 * (i))
 #define  TD_DM_DRVP(x)            ((x) & 0xf)
 #define  TD_DM_DRVN(x)            (((x) & 0xf) << 4)
-
-#define GSW_INTF_MODE            0x390
-#define  INTF_MODE_TRGMII        BIT(1)

 #define MT7530_TRGMII_TCK_CTRL        0x7a78
 #define  TCK_TAP(x)            (((x) & 0xf) << 8)
···
  * @ds:            The pointer to the dsa core structure
  * @bus:        The bus used for the device and built-in PHY
  * @rstc:        The pointer to reset control used by MCM
- * @ethernet:        The regmap used for access TRGMII-based registers
  * @core_pwr:        The power supplied into the core
  * @io_pwr:        The power supplied into the I/O
  * @reset:        The descriptor for GPIO line tied to its reset pin
···
     struct dsa_switch    *ds;
     struct mii_bus        *bus;
     struct reset_control    *rstc;
-    struct regmap        *ethernet;
     struct regulator    *core_pwr;
     struct regulator    *io_pwr;
     struct gpio_desc    *reset;
drivers/net/dsa/mv88e6xxx/chip.c (+3, -2)

     ops = chip->info->ops;

     mv88e6xxx_reg_lock(chip);
-    if (!mv88e6xxx_port_ppu_updates(chip, port) && ops->port_set_link)
+    if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
+         mode == MLO_AN_FIXED) && ops->port_set_link)
         err = ops->port_set_link(chip, port, LINK_FORCED_DOWN);
     mv88e6xxx_reg_unlock(chip);

···
     ops = chip->info->ops;

     mv88e6xxx_reg_lock(chip);
-    if (!mv88e6xxx_port_ppu_updates(chip, port)) {
+    if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
         /* FIXME: for an automedia port, should we force the link
          * down here - what if the link comes up due to "other" media
          * while we're bringing the port up, how is the exclusivity
drivers/net/ethernet/freescale/fec.h

     struct sk_buff *rx_skbuff[RX_RING_SIZE];
 };

+struct fec_stop_mode_gpr {
+    struct regmap *gpr;
+    u8 reg;
+    u8 bit;
+};
+
 /* The FEC buffer descriptors track the ring buffers. The rx_bd_base and
  * tx_bd_base always point to the base of the buffer descriptors. The
  * cur_rx and cur_tx point to the currently available buffer.
···
     int hwts_tx_en;
     struct delayed_work time_keep;
     struct regulator *reg_phy;
+    struct fec_stop_mode_gpr stop_gpr;

     unsigned int tx_align;
     unsigned int rx_align;
drivers/net/ethernet/marvell/mvneta.c

 {
     int ret;

-    ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvmeta:online",
+    ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvneta:online",
                       mvneta_cpu_online,
                       mvneta_cpu_down_prepare);
     if (ret < 0)
drivers/net/ethernet/mediatek/mtk_eth_soc.c (+23, -1)

     return __raw_readl(eth->base + reg);
 }

+u32 mtk_m32(struct mtk_eth *eth, u32 mask, u32 set, unsigned reg)
+{
+    u32 val;
+
+    val = mtk_r32(eth, reg);
+    val &= ~mask;
+    val |= set;
+    mtk_w32(eth, val, reg);
+    return reg;
+}
+
 static int mtk_mdio_busy_wait(struct mtk_eth *eth)
 {
     unsigned long t_start = jiffies;
···
     struct mtk_mac *mac = container_of(config, struct mtk_mac,
                        phylink_config);
     struct mtk_eth *eth = mac->hw;
-    u32 mcr_cur, mcr_new, sid;
+    u32 mcr_cur, mcr_new, sid, i;
     int val, ge_mode, err;

     /* MT76x8 has no hardware settings between for the MAC */
···
                        PHY_INTERFACE_MODE_TRGMII)
                 mtk_gmac0_rgmii_adjust(mac->hw,
                                state->speed);
+
+            /* mt7623_pad_clk_setup */
+            for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
+                mtk_w32(mac->hw,
+                    TD_DM_DRVP(8) | TD_DM_DRVN(8),
+                    TRGMII_TD_ODT(i));
+
+            /* Assert/release MT7623 RXC reset */
+            mtk_m32(mac->hw, 0, RXC_RST | RXC_DQSISEL,
+                TRGMII_RCK_CTRL);
+            mtk_m32(mac->hw, RXC_RST, 0, TRGMII_RCK_CTRL);
         }
     }

drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c

     switch (phymode) {
     case PHY_INTERFACE_MODE_RGMII:
     case PHY_INTERFACE_MODE_RGMII_ID:
+    case PHY_INTERFACE_MODE_RGMII_RXID:
+    case PHY_INTERFACE_MODE_RGMII_TXID:
         *val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
         break;
     case PHY_INTERFACE_MODE_MII:
drivers/net/phy/marvell.c

     int lpa;
     int err;

+    if (!(status & MII_M1011_PHY_STATUS_RESOLVED)) {
+        phydev->link = 0;
+        return 0;
+    }
+
+    if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
+        phydev->duplex = DUPLEX_FULL;
+    else
+        phydev->duplex = DUPLEX_HALF;
+
+    switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
+    case MII_M1011_PHY_STATUS_1000:
+        phydev->speed = SPEED_1000;
+        break;
+
+    case MII_M1011_PHY_STATUS_100:
+        phydev->speed = SPEED_100;
+        break;
+
+    default:
+        phydev->speed = SPEED_10;
+        break;
+    }
+
     if (!fiber) {
         err = genphy_read_lpa(phydev);
         if (err < 0)
···
             phydev->asym_pause = 0;
         }
     }
-    }
-
-    if (!(status & MII_M1011_PHY_STATUS_RESOLVED))
-        return 0;
-
-    if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
-        phydev->duplex = DUPLEX_FULL;
-    else
-        phydev->duplex = DUPLEX_HALF;
-
-    switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
-    case MII_M1011_PHY_STATUS_1000:
-        phydev->speed = SPEED_1000;
-        break;
-
-    case MII_M1011_PHY_STATUS_100:
-        phydev->speed = SPEED_100;
-        break;
-
-    default:
-        phydev->speed = SPEED_10;
-        break;
     }

     return 0;
drivers/net/phy/marvell10g.c (+33, -3)

 #define MV_PHY_ALASKA_NBT_QUIRK_REV    (MARVELL_PHY_ID_88X3310 | 0xa)

 enum {
+    MV_PMA_FW_VER0        = 0xc011,
+    MV_PMA_FW_VER1        = 0xc012,
     MV_PMA_BOOT        = 0xc050,
     MV_PMA_BOOT_FATAL    = BIT(0),

···
     /* Vendor2 MMD registers */
     MV_V2_PORT_CTRL        = 0xf001,
-    MV_V2_PORT_CTRL_PWRDOWN = 0x0800,
+    MV_V2_PORT_CTRL_SWRST    = BIT(15),
+    MV_V2_PORT_CTRL_PWRDOWN = BIT(11),
     MV_V2_TEMP_CTRL        = 0xf08a,
     MV_V2_TEMP_CTRL_MASK    = 0xc000,
     MV_V2_TEMP_CTRL_SAMPLE    = 0x0000,
···
 };

 struct mv3310_priv {
+    u32 firmware_ver;
+
     struct device *hwmon_dev;
     char *hwmon_name;
 };
···

 static int mv3310_power_up(struct phy_device *phydev)
 {
-    return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
-                  MV_V2_PORT_CTRL_PWRDOWN);
+    struct mv3310_priv *priv = dev_get_drvdata(&phydev->mdio.dev);
+    int ret;
+
+    ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
+                 MV_V2_PORT_CTRL_PWRDOWN);
+
+    if (priv->firmware_ver < 0x00030000)
+        return ret;
+
+    return phy_set_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
+                MV_V2_PORT_CTRL_SWRST);
 }

 static int mv3310_reset(struct phy_device *phydev, u32 unit)
···
         return -ENOMEM;

     dev_set_drvdata(&phydev->mdio.dev, priv);
+
+    ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER0);
+    if (ret < 0)
+        return ret;
+
+    priv->firmware_ver = ret << 16;
+
+    ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER1);
+    if (ret < 0)
+        return ret;
+
+    priv->firmware_ver |= ret;
+
+    phydev_info(phydev, "Firmware version %u.%u.%u.%u\n",
+            priv->firmware_ver >> 24, (priv->firmware_ver >> 16) & 255,
+            (priv->firmware_ver >> 8) & 255, priv->firmware_ver & 255);

     /* Powering down the port when not in use saves about 600mW */
     ret = mv3310_power_down(phydev);
+1-1
drivers/net/phy/mdio_bus.c
···
 
 /**
  * mdio_find_bus - Given the name of a mdiobus, find the mii_bus.
- * @mdio_bus_np: Pointer to the mii_bus.
+ * @mdio_name: The name of a mdiobus.
  *
  * Returns a reference to the mii_bus, or NULL if none found.  The
  * embedded struct device will have its reference count incremented,
···
 
 	skb_reset_network_header(skb);
 	skb_probe_transport_header(skb);
+	skb_record_rx_queue(skb, tfile->queue_index);
 
 	if (skb_xdp) {
 		struct bpf_prog *xdp_prog;
···
 	skb->protocol = eth_type_trans(skb, tun->dev);
 	skb_reset_network_header(skb);
 	skb_probe_transport_header(skb);
+	skb_record_rx_queue(skb, tfile->queue_index);
 
 	if (skb_xdp) {
 		err = do_xdp_generic(xdp_prog, skb);
···
 	    !tfile->detached)
 		rxhash = __skb_get_hash_symmetric(skb);
 
-	skb_record_rx_queue(skb, tfile->queue_index);
 	netif_receive_skb(skb);
 
 	/* No need for get_cpu_ptr() here since this function is
···
 /plugin/;
 
 /*
- * &electric_1/motor-1 and &spin_ctrl_1 are the same node:
- *   /testcase-data-2/substation@100/motor-1
+ * &electric_1/motor-1/electric and &spin_ctrl_1/electric are the same node:
+ *   /testcase-data-2/substation@100/motor-1/electric
 *
 * Thus the property "rpm_avail" in each fragment will
 * result in an attempt to update the same property twice.
 * This will result in an error and the overlay apply
 * will fail.
+ *
+ * The previous version of this test did not include the extra
+ * level of node 'electric'.  That resulted in the 'rpm_avail'
+ * property being located in the pre-existing node 'motor-1'.
+ * Modifying a property results in a WARNING that a memory leak
+ * will occur if the overlay is removed.  Since the overlay apply
+ * fails, the memory leak does actually occur, and kmemleak will
+ * further report the memory leak if CONFIG_DEBUG_KMEMLEAK is
+ * enabled.  Adding the overlay node 'electric' avoids the
+ * memory leak and thus people who use kmemleak will not
+ * have to debug this non-problem again.
 */
 
 &electric_1 {
 
 	motor-1 {
-		rpm_avail = < 100 >;
+		electric {
+			rpm_avail = < 100 >;
+		};
 	};
 };
 
 &spin_ctrl_1 {
-	rpm_avail = < 100 200 >;
+	electric {
+		rpm_avail = < 100 200 >;
+	};
 };
···
 	if (unlikely(!target_freq)) {
 		if (opp_table->required_opp_tables) {
 			ret = _set_required_opps(dev, opp_table, NULL);
+		} else if (!_get_opp_count(opp_table)) {
+			return 0;
 		} else {
 			dev_err(dev, "target frequency can't be 0\n");
 			ret = -EINVAL;
···
 		dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
 			__func__, freq);
 		ret = 0;
+		goto put_opp_table;
+	}
+
+	/*
+	 * For IO devices which require an OPP on some platforms/SoCs
+	 * while just needing to scale the clock on some others
+	 * we look for empty OPP tables with just a clock handle and
+	 * scale only the clk. This makes dev_pm_opp_set_rate()
+	 * equivalent to a clk_set_rate()
+	 */
+	if (!_get_opp_count(opp_table)) {
+		ret = _generic_set_opp_clk_only(dev, clk, freq);
 		goto put_opp_table;
 	}
 
+4-4
drivers/platform/chrome/cros_ec_sensorhub_ring.c
···
 	int id = sample->sensor_id;
 	struct iio_dev *indio_dev;
 
-	if (id > sensorhub->sensor_num)
+	if (id >= sensorhub->sensor_num)
 		return -EINVAL;
 
 	cb = sensorhub->push_data[id].push_data_cb;
···
 	if (fifo_info->count > sensorhub->fifo_size ||
 	    fifo_info->size != sensorhub->fifo_size) {
 		dev_warn(sensorhub->dev,
-			 "Mismatch EC data: count %d, size %d - expected %d",
+			 "Mismatch EC data: count %d, size %d - expected %d\n",
 			 fifo_info->count, fifo_info->size,
 			 sensorhub->fifo_size);
 		goto error;
···
 	}
 	if (number_data > fifo_info->count - i) {
 		dev_warn(sensorhub->dev,
-			 "Invalid EC data: too many entry received: %d, expected %d",
+			 "Invalid EC data: too many entry received: %d, expected %d\n",
 			 number_data, fifo_info->count - i);
 		break;
 	}
 	if (out + number_data >
 	    sensorhub->ring + fifo_info->count) {
 		dev_warn(sensorhub->dev,
-			 "Too many samples: %d (%zd data) to %d entries for expected %d entries",
+			 "Too many samples: %d (%zd data) to %d entries for expected %d entries\n",
 			 i, out - sensorhub->ring, i + number_data,
 			 fifo_info->count);
 		break;
+56-50
drivers/platform/x86/intel_cht_int33fe_typec.c
···
  *
  * Some Intel Cherry Trail based device which ship with Windows 10, have
  * this weird INT33FE ACPI device with a CRS table with 4 I2cSerialBusV2
- * resources, for 4 different chips attached to various i2c busses:
- * 1. The Whiskey Cove pmic, which is also described by the INT34D3 ACPI device
+ * resources, for 4 different chips attached to various I²C buses:
+ * 1. The Whiskey Cove PMIC, which is also described by the INT34D3 ACPI device
  * 2. Maxim MAX17047 Fuel Gauge Controller
  * 3. FUSB302 USB Type-C Controller
  * 4. PI3USB30532 USB switch
  *
  * So this driver is a stub / pseudo driver whose only purpose is to
- * instantiate i2c-clients for chips 2 - 4, so that standard i2c drivers
+ * instantiate I²C clients for chips 2 - 4, so that standard I²C drivers
  * for these chips can bind to the them.
  */
···
 #include <linux/interrupt.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
+#include <linux/property.h>
 #include <linux/regulator/consumer.h>
 #include <linux/slab.h>
 #include <linux/usb/pd.h>
 
 #include "intel_cht_int33fe_common.h"
 
-enum {
-	INT33FE_NODE_FUSB302,
-	INT33FE_NODE_MAX17047,
-	INT33FE_NODE_PI3USB30532,
-	INT33FE_NODE_DISPLAYPORT,
-	INT33FE_NODE_USB_CONNECTOR,
-	INT33FE_NODE_MAX,
-};
-
 /*
- * Grrr I severly dislike buggy BIOS-es. At least one BIOS enumerates
+ * Grrr, I severely dislike buggy BIOS-es. At least one BIOS enumerates
  * the max17047 both through the INT33FE ACPI device (it is right there
  * in the resources table) as well as through a separate MAX17047 device.
  *
- * These helpers are used to work around this by checking if an i2c-client
+ * These helpers are used to work around this by checking if an I²C client
  * for the max17047 has already been registered.
  */
 static int cht_int33fe_check_for_max17047(struct device *dev, void *data)
 {
 	struct i2c_client **max17047 = data;
 	struct acpi_device *adev;
-	const char *hid;
 
 	adev = ACPI_COMPANION(dev);
 	if (!adev)
 		return 0;
 
-	hid = acpi_device_hid(adev);
-
 	/* The MAX17047 ACPI node doesn't have an UID, so we don't check that */
-	if (strcmp(hid, "MAX17047"))
+	if (!acpi_dev_hid_uid_match(adev, "MAX17047", NULL))
 		return 0;
 
 	*max17047 = to_i2c_client(dev);
···
 
 static const char * const max17047_suppliers[] = { "bq24190-charger" };
 
-static const struct property_entry max17047_props[] = {
+static const struct property_entry max17047_properties[] = {
 	PROPERTY_ENTRY_STRING_ARRAY("supplied-from", max17047_suppliers),
 	{ }
+};
+
+static const struct software_node max17047_node = {
+	.name = "max17047",
+	.properties = max17047_properties,
 };
 
 /*
···
 	{ .node = NULL },
 };
 
-static const struct property_entry fusb302_props[] = {
+static const struct property_entry fusb302_properties[] = {
 	PROPERTY_ENTRY_STRING("linux,extcon-name", "cht_wcove_pwrsrc"),
 	PROPERTY_ENTRY_REF_ARRAY("usb-role-switch", fusb302_mux_refs),
 	{ }
+};
+
+static const struct software_node fusb302_node = {
+	.name = "fusb302",
+	.properties = fusb302_properties,
 };
 
 #define PDO_FIXED_FLAGS \
···
 	PDO_VAR(5000, 12000, 3000),
 };
 
-static const struct software_node nodes[];
+static const struct software_node pi3usb30532_node = {
+	.name = "pi3usb30532",
+};
 
-static const struct property_entry usb_connector_props[] = {
+static const struct software_node displayport_node = {
+	.name = "displayport",
+};
+
+static const struct property_entry usb_connector_properties[] = {
 	PROPERTY_ENTRY_STRING("data-role", "dual"),
 	PROPERTY_ENTRY_STRING("power-role", "dual"),
 	PROPERTY_ENTRY_STRING("try-power-role", "sink"),
 	PROPERTY_ENTRY_U32_ARRAY("source-pdos", src_pdo),
 	PROPERTY_ENTRY_U32_ARRAY("sink-pdos", snk_pdo),
 	PROPERTY_ENTRY_U32("op-sink-microwatt", 2500000),
-	PROPERTY_ENTRY_REF("orientation-switch",
-			   &nodes[INT33FE_NODE_PI3USB30532]),
-	PROPERTY_ENTRY_REF("mode-switch",
-			   &nodes[INT33FE_NODE_PI3USB30532]),
-	PROPERTY_ENTRY_REF("displayport",
-			   &nodes[INT33FE_NODE_DISPLAYPORT]),
+	PROPERTY_ENTRY_REF("orientation-switch", &pi3usb30532_node),
+	PROPERTY_ENTRY_REF("mode-switch", &pi3usb30532_node),
+	PROPERTY_ENTRY_REF("displayport", &displayport_node),
 	{ }
 };
 
-static const struct software_node nodes[] = {
-	{ "fusb302", NULL, fusb302_props },
-	{ "max17047", NULL, max17047_props },
-	{ "pi3usb30532" },
-	{ "displayport" },
-	{ "connector", &nodes[0], usb_connector_props },
-	{ }
+static const struct software_node usb_connector_node = {
+	.name = "connector",
+	.parent = &fusb302_node,
+	.properties = usb_connector_properties,
+};
+
+static const struct software_node *node_group[] = {
+	&fusb302_node,
+	&max17047_node,
+	&pi3usb30532_node,
+	&displayport_node,
+	&usb_connector_node,
+	NULL
 };
 
 static int cht_int33fe_setup_dp(struct cht_int33fe_data *data)
···
 	struct fwnode_handle *fwnode;
 	struct pci_dev *pdev;
 
-	fwnode = software_node_fwnode(&nodes[INT33FE_NODE_DISPLAYPORT]);
+	fwnode = software_node_fwnode(&displayport_node);
 	if (!fwnode)
 		return -ENODEV;
···
 
 static void cht_int33fe_remove_nodes(struct cht_int33fe_data *data)
 {
-	software_node_unregister_nodes(nodes);
+	software_node_unregister_node_group(node_group);
 
 	if (fusb302_mux_refs[0].node) {
-		fwnode_handle_put(
-			software_node_fwnode(fusb302_mux_refs[0].node));
+		fwnode_handle_put(software_node_fwnode(fusb302_mux_refs[0].node));
 		fusb302_mux_refs[0].node = NULL;
 	}
···
 	 */
 	fusb302_mux_refs[0].node = mux_ref_node;
 
-	ret = software_node_register_nodes(nodes);
+	ret = software_node_register_node_group(node_group);
 	if (ret)
 		return ret;
···
 	struct fwnode_handle *fwnode;
 	int ret;
 
-	fwnode = software_node_fwnode(&nodes[INT33FE_NODE_MAX17047]);
+	fwnode = software_node_fwnode(&max17047_node);
 	if (!fwnode)
 		return -ENODEV;
 
 	i2c_for_each_dev(&max17047, cht_int33fe_check_for_max17047);
 	if (max17047) {
-		/* Pre-existing i2c-client for the max17047, add device-props */
-		fwnode->secondary = ERR_PTR(-ENODEV);
-		max17047->dev.fwnode->secondary = fwnode;
-		/* And re-probe to get the new device-props applied. */
+		/* Pre-existing I²C client for the max17047, add device properties */
+		set_secondary_fwnode(&max17047->dev, fwnode);
+		/* And re-probe to get the new device properties applied */
 		ret = device_reprobe(&max17047->dev);
 		if (ret)
 			dev_warn(dev, "Reprobing max17047 error: %d\n", ret);
···
 	 * must be registered before the fusb302 is instantiated, otherwise
 	 * it will end up with a dummy-regulator.
 	 * Note "cht_wc_usb_typec_vbus" comes from the regulator_init_data
-	 * which is defined in i2c-cht-wc.c from where the bq24292i i2c-client
+	 * which is defined in i2c-cht-wc.c from where the bq24292i I²C client
 	 * gets instantiated. We use regulator_get_optional here so that we
 	 * don't end up getting a dummy-regulator ourselves.
 	 */
···
 	}
 	regulator_put(regulator);
 
-	/* The FUSB302 uses the irq at index 1 and is the only irq user */
+	/* The FUSB302 uses the IRQ at index 1 and is the only IRQ user */
 	fusb302_irq = acpi_dev_gpio_irq_get(ACPI_COMPANION(dev), 1);
 	if (fusb302_irq < 0) {
 		if (fusb302_irq != -EPROBE_DEFER)
···
 	if (ret)
 		return ret;
 
-	/* Work around BIOS bug, see comment on cht_int33fe_check_for_max17047 */
+	/* Work around BIOS bug, see comment on cht_int33fe_check_for_max17047() */
 	ret = cht_int33fe_register_max17047(dev, data);
 	if (ret)
 		goto out_remove_nodes;
 
-	fwnode = software_node_fwnode(&nodes[INT33FE_NODE_FUSB302]);
+	fwnode = software_node_fwnode(&fusb302_node);
 	if (!fwnode) {
 		ret = -ENODEV;
 		goto out_unregister_max17047;
···
 		goto out_unregister_max17047;
 	}
 
-	fwnode = software_node_fwnode(&nodes[INT33FE_NODE_PI3USB30532]);
+	fwnode = software_node_fwnode(&pi3usb30532_node);
 	if (!fwnode) {
 		ret = -ENODEV;
 		goto out_unregister_fusb302;
-1
drivers/s390/block/Kconfig
···
 	def_tristate y
 	prompt "Support for DASD devices"
 	depends on CCW && BLOCK
-	select IOSCHED_DEADLINE
 	help
 	  Enable this option if you want to access DASDs directly utilizing
 	  S/390s channel subsystem commands. This is necessary for running
+1
drivers/scsi/hisi_sas/Kconfig
···
 	select SCSI_SAS_LIBSAS
 	select BLK_DEV_INTEGRITY
 	depends on ATA
+	select SATA_HOST
 	help
 	  This driver supports HiSilicon's SAS HBA, including support based
 	  on platform device
···
 	atomic_inc(&root->log_batch);
 
 	/*
+	 * If the inode needs a full sync, make sure we use a full range to
+	 * avoid log tree corruption, due to hole detection racing with ordered
+	 * extent completion for adjacent ranges and races between logging and
+	 * completion of ordered extents for adjacent ranges - both races
+	 * could lead to file extent items in the log with overlapping ranges.
+	 * Do this while holding the inode lock, to avoid races with other
+	 * tasks.
+	 */
+	if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+		     &BTRFS_I(inode)->runtime_flags)) {
+		start = 0;
+		end = LLONG_MAX;
+	}
+
+	/*
 	 * Before we acquired the inode's lock, someone may have dirtied more
 	 * pages in the target range. We need to make sure that writeback for
 	 * any such pages does not start while we are logging the inode, because
+1
fs/btrfs/reflink.c
···
 			    size);
 	inode_add_bytes(dst, datal);
 	set_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &BTRFS_I(dst)->runtime_flags);
+	ret = btrfs_inode_set_file_extent_range(BTRFS_I(dst), 0, aligned_end);
 out:
 	if (!ret && !trans) {
 		/*
+19-4
fs/btrfs/relocation.c
···
 	if (!reloc_root)
 		return 0;
 
-	if (btrfs_root_last_snapshot(&reloc_root->root_item) ==
-	    root->fs_info->running_transaction->transid - 1)
+	if (btrfs_header_generation(reloc_root->commit_root) ==
+	    root->fs_info->running_transaction->transid)
 		return 0;
 	/*
 	 * if there is reloc tree and it was created in previous
···
 	int clear_rsv = 0;
 	int ret;
 
-	if (!rc || !rc->create_reloc_tree ||
-	    root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID)
+	if (!rc)
 		return 0;
 
 	/*
···
 	if (reloc_root_is_dead(root))
 		return 0;
 
+	/*
+	 * This is subtle but important.  We do not do
+	 * record_root_in_transaction for reloc roots, instead we record their
+	 * corresponding fs root, and then here we update the last trans for the
+	 * reloc root.  This means that we have to do this for the entire life
+	 * of the reloc root, regardless of which stage of the relocation we are
+	 * in.
+	 */
 	if (root->reloc_root) {
 		reloc_root = root->reloc_root;
 		reloc_root->last_trans = trans->transid;
 		return 0;
 	}
+
+	/*
+	 * We are merging reloc roots, we do not need new reloc trees.  Also
+	 * reloc trees never need their own reloc tree.
+	 */
+	if (!rc->create_reloc_tree ||
+	    root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID)
+		return 0;
 
 	if (!trans->reloc_reserved) {
 		rsv = trans->block_rsv;
+14-6
fs/btrfs/space-info.c
···
 	return 0;
 }
 
+static void remove_ticket(struct btrfs_space_info *space_info,
+			  struct reserve_ticket *ticket)
+{
+	if (!list_empty(&ticket->list)) {
+		list_del_init(&ticket->list);
+		ASSERT(space_info->reclaim_size >= ticket->bytes);
+		space_info->reclaim_size -= ticket->bytes;
+	}
+}
+
 /*
  * This is for space we already have accounted in space_info->bytes_may_use, so
  * basically when we're returning space from block_rsv's.
···
 			btrfs_space_info_update_bytes_may_use(fs_info,
 							      space_info,
 							      ticket->bytes);
-			list_del_init(&ticket->list);
-			ASSERT(space_info->reclaim_size >= ticket->bytes);
-			space_info->reclaim_size -= ticket->bytes;
+			remove_ticket(space_info, ticket);
 			ticket->bytes = 0;
 			space_info->tickets_id++;
 			wake_up(&ticket->wait);
···
 		btrfs_info(fs_info, "failing ticket with %llu bytes",
 			   ticket->bytes);
 
-		list_del_init(&ticket->list);
+		remove_ticket(space_info, ticket);
 		ticket->error = -ENOSPC;
 		wake_up(&ticket->wait);
···
 			 * despite getting an error, resulting in a space leak
 			 * (bytes_may_use counter of our space_info).
 			 */
-			list_del_init(&ticket->list);
+			remove_ticket(space_info, ticket);
 			ticket->error = -EINTR;
 			break;
···
 		 * either the async reclaim job deletes the ticket from the list
 		 * or we delete it ourselves at wait_reserve_ticket().
 		 */
-		list_del_init(&ticket->list);
+		remove_ticket(space_info, ticket);
 		if (!ret)
 			ret = -ENOSPC;
 	}
+14-79
fs/btrfs/tree-log.c
···
 static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root, struct btrfs_inode *inode,
 			   int inode_only,
-			   u64 start,
-			   u64 end,
+			   const loff_t start,
+			   const loff_t end,
 			   struct btrfs_log_ctx *ctx);
 static int link_to_fixup_dir(struct btrfs_trans_handle *trans,
 			     struct btrfs_root *root,
···
 static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root,
 			   struct btrfs_inode *inode,
-			   struct btrfs_path *path,
-			   const u64 start,
-			   const u64 end)
+			   struct btrfs_path *path)
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct btrfs_key key;
 	const u64 ino = btrfs_ino(inode);
 	const u64 i_size = i_size_read(&inode->vfs_inode);
-	u64 prev_extent_end = start;
+	u64 prev_extent_end = 0;
 	int ret;
 
 	if (!btrfs_fs_incompat(fs_info, NO_HOLES) || i_size == 0)
···
 	key.objectid = ino;
 	key.type = BTRFS_EXTENT_DATA_KEY;
-	key.offset = start;
+	key.offset = 0;
 
 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
 	if (ret < 0)
 		return ret;
 
-	if (ret > 0 && path->slots[0] > 0) {
-		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0] - 1);
-		if (key.objectid == ino && key.type == BTRFS_EXTENT_DATA_KEY)
-			path->slots[0]--;
-	}
-
 	while (true) {
 		struct extent_buffer *leaf = path->nodes[0];
-		u64 extent_end;
 
 		if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
 			ret = btrfs_next_leaf(root, path);
···
 		if (key.objectid != ino || key.type != BTRFS_EXTENT_DATA_KEY)
 			break;
 
-		extent_end = btrfs_file_extent_end(path);
-		if (extent_end <= start)
-			goto next_slot;
-
 		/* We have a hole, log it. */
 		if (prev_extent_end < key.offset) {
-			u64 hole_len;
-
-			if (key.offset >= end)
-				hole_len = end - prev_extent_end;
-			else
-				hole_len = key.offset - prev_extent_end;
+			const u64 hole_len = key.offset - prev_extent_end;
 
 			/*
 			 * Release the path to avoid deadlocks with other code
···
 			leaf = path->nodes[0];
 		}
 
-		prev_extent_end = min(extent_end, end);
-		if (extent_end >= end)
-			break;
-next_slot:
+		prev_extent_end = btrfs_file_extent_end(path);
 		path->slots[0]++;
 		cond_resched();
 	}
 
-	if (prev_extent_end < end && prev_extent_end < i_size) {
+	if (prev_extent_end < i_size) {
 		u64 hole_len;
 
 		btrfs_release_path(path);
-		hole_len = min(ALIGN(i_size, fs_info->sectorsize), end);
-		hole_len -= prev_extent_end;
+		hole_len = ALIGN(i_size - prev_extent_end, fs_info->sectorsize);
 		ret = btrfs_insert_file_extent(trans, root->log_root,
 					       ino, prev_extent_end, 0, 0,
 					       hole_len, 0, hole_len,
···
 			   const u64 logged_isize,
 			   const bool recursive_logging,
 			   const int inode_only,
-			   const u64 start,
-			   const u64 end,
 			   struct btrfs_log_ctx *ctx,
 			   bool *need_log_inode_item)
 {
···
 	int ins_nr = 0;
 	int ret;
 
-	/*
-	 * We must make sure we don't copy extent items that are entirely out of
-	 * the range [start, end - 1]. This is not just an optimization to avoid
-	 * copying but also needed to avoid a corruption where we end up with
-	 * file extent items in the log tree that have overlapping ranges - this
-	 * can happen if we race with ordered extent completion for ranges that
-	 * are outside our target range. For example we copy an extent item and
-	 * when we move to the next leaf, that extent was trimmed and a new one
-	 * covering a subrange of it, but with a higher key, was inserted - we
-	 * would then copy this other extent too, resulting in a log tree with
-	 * 2 extent items that represent overlapping ranges.
-	 *
-	 * We can copy the entire extents at the range bondaries however, even
-	 * if they cover an area outside the target range. That's ok.
-	 */
 	while (1) {
 		ret = btrfs_search_forward(root, min_key, path, trans->transid);
 		if (ret < 0)
···
 			goto next_slot;
 		}
 
-		if (min_key->type == BTRFS_EXTENT_DATA_KEY) {
-			const u64 extent_end = btrfs_file_extent_end(path);
-
-			if (extent_end <= start) {
-				if (ins_nr > 0) {
-					ret = copy_items(trans, inode, dst_path,
-							 path, ins_start_slot,
-							 ins_nr, inode_only,
-							 logged_isize);
-					if (ret < 0)
-						return ret;
-					ins_nr = 0;
-				}
-				goto next_slot;
-			}
-			if (extent_end >= end) {
-				ins_nr++;
-				if (ins_nr == 1)
-					ins_start_slot = path->slots[0];
-				break;
-			}
-		}
-
 		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
 			ins_nr++;
 			goto next_slot;
···
 static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root, struct btrfs_inode *inode,
 			   int inode_only,
-			   u64 start,
-			   u64 end,
+			   const loff_t start,
+			   const loff_t end,
 			   struct btrfs_log_ctx *ctx)
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
···
 		btrfs_free_path(path);
 		return -ENOMEM;
 	}
-
-	start = ALIGN_DOWN(start, fs_info->sectorsize);
-	end = ALIGN(end, fs_info->sectorsize);
 
 	min_key.objectid = ino;
 	min_key.type = BTRFS_INODE_ITEM_KEY;
···
 
 	err = copy_inode_items_to_log(trans, inode, &min_key, &max_key,
 				      path, dst_path, logged_isize,
-				      recursive_logging, inode_only,
-				      start, end, ctx, &need_log_inode_item);
+				      recursive_logging, inode_only, ctx,
+				      &need_log_inode_item);
 	if (err)
 		goto out_unlock;
···
 	if (max_key.type >= BTRFS_EXTENT_DATA_KEY && !fast_search) {
 		btrfs_release_path(path);
 		btrfs_release_path(dst_path);
-		err = btrfs_log_holes(trans, root, inode, path, start, end);
+		err = btrfs_log_holes(trans, root, inode, path);
 		if (err)
 			goto out_unlock;
 	}
+11
fs/buffer.c
···
 }
 EXPORT_SYMBOL(__breadahead);
 
+void __breadahead_gfp(struct block_device *bdev, sector_t block, unsigned size,
+		      gfp_t gfp)
+{
+	struct buffer_head *bh = __getblk_gfp(bdev, block, size, gfp);
+	if (likely(bh)) {
+		ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh);
+		brelse(bh);
+	}
+}
+EXPORT_SYMBOL(__breadahead_gfp);
+
 /**
  * __bread_gfp() - reads a specified block and returns the bh
  * @bdev: the block_device to read from
+2-2
fs/ceph/dir.c
···
 
 	/* If op failed, mark everyone involved for errors */
 	if (result) {
-		int pathlen;
-		u64 base;
+		int pathlen = 0;
+		u64 base = 0;
 		char *path = ceph_mdsc_build_path(req->r_dentry, &pathlen,
 						  &base, 0);
 
+2-2
fs/ceph/file.c
···
 
 	if (result) {
 		struct dentry *dentry = req->r_dentry;
-		int pathlen;
-		u64 base;
+		int pathlen = 0;
+		u64 base = 0;
 		char *path = ceph_mdsc_build_path(req->r_dentry, &pathlen,
 						  &base, 0);
 
+1-1
fs/ceph/mds_client.h
···
 
 static inline void ceph_mdsc_free_path(char *path, int len)
 {
-	if (path)
+	if (!IS_ERR_OR_NULL(path))
 		__putname(path - (PATH_MAX - 1 - len));
 }
 
+4
fs/cifs/cifssmb.c
···
 			cifs_max_pending);
 	set_credits(server, server->maxReq);
 	server->maxBuf = le16_to_cpu(rsp->MaxBufSize);
+	/* set up max_read for readpages check */
+	server->max_read = server->maxBuf;
 	/* even though we do not use raw we might as well set this
 	accurately, in case we ever find a need for it */
 	if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {
···
 	set_credits(server, server->maxReq);
 	/* probably no need to store and check maxvcs */
 	server->maxBuf = le32_to_cpu(pSMBr->MaxBufferSize);
+	/* set up max_read for readpages check */
+	server->max_read = server->maxBuf;
 	server->max_rw = le32_to_cpu(pSMBr->MaxRawSize);
 	cifs_dbg(NOISY, "Max buf = %d\n", ses->server->maxBuf);
 	server->capabilities = le32_to_cpu(pSMBr->Capabilities);
+1-1
fs/cifs/inode.c
···
 	}
 
 	/* check if server can support readpages */
-	if (cifs_sb_master_tcon(cifs_sb)->ses->server->maxBuf <
+	if (cifs_sb_master_tcon(cifs_sb)->ses->server->max_read <
 	    PAGE_SIZE + MAX_CIFS_HDR_SIZE)
 		inode->i_data.a_ops = &cifs_addr_ops_smallbuf;
 	else
+15
fs/cifs/smb2pdu.c
···
 	}
 
 	rc = SMB2_sess_establish_session(sess_data);
+#ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS
+	if (ses->server->dialect < SMB30_PROT_ID) {
+		cifs_dbg(VFS, "%s: dumping generated SMB2 session keys\n", __func__);
+		/*
+		 * The session id is opaque in terms of endianness, so we can't
+		 * print it as a long long. we dump it as we got it on the wire
+		 */
+		cifs_dbg(VFS, "Session Id %*ph\n", (int)sizeof(ses->Suid),
+			 &ses->Suid);
+		cifs_dbg(VFS, "Session Key %*ph\n",
+			 SMB2_NTLMV2_SESSKEY_SIZE, ses->auth_key.response);
+		cifs_dbg(VFS, "Signing Key %*ph\n",
+			 SMB3_SIGN_KEY_SIZE, ses->auth_key.response);
+	}
+#endif
 out:
 	kfree(ntlmssp_blob);
 	SMB2_sess_free_buffer(sess_data);
···
  * Read the bitmap for a given block_group,and validate the
  * bits for block/inode/inode tables are set in the bitmaps
  *
- * Return buffer_head on success or NULL in case of failure.
+ * Return buffer_head on success or an ERR_PTR in case of failure.
  */
 struct buffer_head *
 ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
···
 	return ERR_PTR(err);
 }
 
-/* Returns 0 on success, 1 on error */
+/* Returns 0 on success, -errno on error */
 int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group,
 			   struct buffer_head *bh)
 {
-3
fs/ext4/ext4_jbd2.c
···
 	if (inode && inode_needs_sync(inode)) {
 		sync_dirty_buffer(bh);
 		if (buffer_req(bh) && !buffer_uptodate(bh)) {
-			struct ext4_super_block *es;
-
-			es = EXT4_SB(inode->i_sb)->s_es;
 			ext4_error_inode_err(inode, where, line,
 					     bh->b_blocknr, EIO,
 					     "IO error syncing itable block");
···
  * Read the inode allocation bitmap for a given block_group, reading
  * into the specified slot in the superblock's bitmap cache.
  *
- * Return buffer_head of bitmap on success or NULL.
+ * Return buffer_head of bitmap on success, or an ERR_PTR on error.
  */
 static struct buffer_head *
 ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
···
  * block has been written back to disk.  (Yes, these values are
  * somewhat arbitrary...)
  */
-#define RECENTCY_MIN	5
+#define RECENTCY_MIN	60
 #define RECENTCY_DIRTY	300
 
 static int recently_deleted(struct super_block *sb, ext4_group_t group, int ino)
+2-2
fs/ext4/inode.c
···
 	bool keep_towrite = false;
 
 	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {
-		ext4_invalidatepage(page, 0, PAGE_SIZE);
+		inode->i_mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE);
 		unlock_page(page);
 		return -EIO;
 	}
···
 		if (end > table)
 			end = table;
 		while (b <= end)
-			sb_breadahead(sb, b++);
+			sb_breadahead_unmovable(sb, b++);
 	}
 
 	/*
+4-2
fs/ext4/mballoc.c
···
 	int free;
 
 	free = e4b->bd_info->bb_free;
-	BUG_ON(free <= 0);
+	if (WARN_ON(free <= 0))
+		return;
 
 	i = e4b->bd_info->bb_first_free;
···
 	}
 
 	mb_find_extent(e4b, i, ac->ac_g_ex.fe_len, &ex);
-	BUG_ON(ex.fe_len <= 0);
+	if (WARN_ON(ex.fe_len <= 0))
+		break;
 	if (free < ex.fe_len) {
 		ext4_grp_locked_error(sb, e4b->bd_group, 0, 0,
 				      "%d free clusters as per "
+1-3
fs/ext4/super.c
···
 {
 	va_list args;
 	struct va_format vaf;
-	struct ext4_super_block *es;
 	struct inode *inode = file_inode(file);
 	char pathname[80], *path;
···
 		return;
 
 	trace_ext4_error(inode->i_sb, function, line);
-	es = EXT4_SB(inode->i_sb)->s_es;
 	if (ext4_error_ratelimit(inode->i_sb)) {
 		path = file_path(file, pathname, sizeof(pathname));
 		if (IS_ERR(path))
···
 	/* Pre-read the descriptors into the buffer cache */
 	for (i = 0; i < db_count; i++) {
 		block = descriptor_loc(sb, logical_sb_block, i);
-		sb_breadahead(sb, block);
+		sb_breadahead_unmovable(sb, block);
 	}
 
 	for (i = 0; i < db_count; i++) {
+163-140
fs/io_uring.c
···
 		struct hrtimer		timer;
 		struct timespec64	ts;
 		enum hrtimer_mode	mode;
-		u32			seq_offset;
 };
 
 struct io_accept {
···
 	struct file			*file;
 	u64				addr;
 	int				flags;
-	unsigned			count;
+	u32				count;
 };
 
 struct io_rw {
···
 	REQ_F_FORCE_ASYNC_BIT	= IOSQE_ASYNC_BIT,
 	REQ_F_BUFFER_SELECT_BIT	= IOSQE_BUFFER_SELECT_BIT,
 
+	REQ_F_LINK_HEAD_BIT,
 	REQ_F_LINK_NEXT_BIT,
 	REQ_F_FAIL_LINK_BIT,
 	REQ_F_INFLIGHT_BIT,
···
 	/* IOSQE_BUFFER_SELECT */
 	REQ_F_BUFFER_SELECT	= BIT(REQ_F_BUFFER_SELECT_BIT),
 
+	/* head of a link */
+	REQ_F_LINK_HEAD		= BIT(REQ_F_LINK_HEAD_BIT),
 	/* already grabbed next link */
 	REQ_F_LINK_NEXT		= BIT(REQ_F_LINK_NEXT_BIT),
 	/* fail rest of links */
···
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
-	return req->sequence != ctx->cached_cq_tail + ctx->cached_sq_dropped
-				+ atomic_read(&ctx->cached_cq_overflow);
+	return req->sequence != ctx->cached_cq_tail
+				+ atomic_read(&ctx->cached_cq_overflow);
 }
 
 static inline bool req_need_defer(struct io_kiocb *req)
···
 	if (ret != -1) {
 		io_cqring_fill_event(req, -ECANCELED);
 		io_commit_cqring(ctx);
-		req->flags &= ~REQ_F_LINK;
+		req->flags &= ~REQ_F_LINK_HEAD;
 		io_put_req(req);
 		return true;
 	}
···
 
 		list_del_init(&req->link_list);
 		if (!list_empty(&nxt->link_list))
-			nxt->flags |= REQ_F_LINK;
+			nxt->flags |= REQ_F_LINK_HEAD;
 		*nxtptr = nxt;
 		break;
 	}
···
 }
 
 /*
- * Called if REQ_F_LINK is set, and we fail the head request
+ * Called if REQ_F_LINK_HEAD is set, and we fail the head request
 */
 static void io_fail_links(struct io_kiocb *req)
 {
···
 
 static void io_req_find_next(struct io_kiocb *req, struct io_kiocb **nxt)
 {
-	if (likely(!(req->flags & REQ_F_LINK)))
+	if (likely(!(req->flags & REQ_F_LINK_HEAD)))
 		return;
 
 	/*
···
 
 static inline bool io_req_multi_free(struct req_batch *rb, struct io_kiocb *req)
 {
-	if ((req->flags & REQ_F_LINK) || io_is_fallback_req(req))
+	if ((req->flags & REQ_F_LINK_HEAD) || io_is_fallback_req(req))
 		return false;
 
 	if (!(req->flags & REQ_F_FIXED_FILE) || req->io)
···
 
 	req->result = 0;
 	io_size = ret;
-	if (req->flags & REQ_F_LINK)
+	if (req->flags & REQ_F_LINK_HEAD)
 		req->result = io_size;
 
 	/*
···
 
 	req->result = 0;
 	io_size = ret;
-	if (req->flags & REQ_F_LINK)
+	if (req->flags & REQ_F_LINK_HEAD)
 		req->result = io_size;
 
 	/*
···
 		return false;
 	if (!io_file_supports_async(file))
 		return true;
-	return !(file->f_mode & O_NONBLOCK);
+	return !(file->f_flags & O_NONBLOCK);
 }
 
 static int io_splice(struct io_kiocb *req, bool force_nonblock)
···
 	return 1;
 }
 
+static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+	__acquires(&req->ctx->completion_lock)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		struct poll_table_struct pt = { ._key = poll->events };
+
+		req->result = vfs_poll(req->file, &pt) & poll->events;
+	}
+
+	spin_lock_irq(&ctx->completion_lock);
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		add_wait_queue(poll->head, &poll->wait);
+		return true;
+	}
+
+	return false;
+}
+
 static void io_async_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
 	struct async_poll *apoll = req->apoll;
 	struct io_ring_ctx *ctx = req->ctx;
+	bool canceled;
 
 	trace_io_uring_task_run(req->ctx, req->opcode, req->user_data);
 
-	WARN_ON_ONCE(!list_empty(&req->apoll->poll.wait.entry));
-
-	if (hash_hashed(&req->hash_node)) {
-		spin_lock_irq(&ctx->completion_lock);
-		hash_del(&req->hash_node);
+	if (io_poll_rewait(req, &apoll->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
+		return;
+	}
+
+	if (hash_hashed(&req->hash_node))
+		hash_del(&req->hash_node);
+
+	canceled = READ_ONCE(apoll->poll.canceled);
+	if (canceled) {
+		io_cqring_fill_event(req, -ECANCELED);
+		io_commit_cqring(ctx);
+	}
+
+	spin_unlock_irq(&ctx->completion_lock);
+
+	if (canceled) {
+		kfree(apoll);
+		io_cqring_ev_posted(ctx);
+		req_set_fail_links(req);
+		io_put_req(req);
+		return;
 	}
 
 	/* restore ->work in case we need to retry again */
···
 
 static bool io_poll_remove_one(struct io_kiocb *req)
 {
+	struct async_poll *apoll = NULL;
 	bool do_complete;
 
 	if (req->opcode == IORING_OP_POLL_ADD) {
 		do_complete = __io_poll_remove_one(req, &req->poll);
 	} else {
+		apoll = req->apoll;
 		/* non-poll requests have submit ref still */
 		do_complete = __io_poll_remove_one(req, &req->apoll->poll);
 		if (do_complete)
···
 	}
 
 	hash_del(&req->hash_node);
+
+	if (apoll) {
+		/*
+		 * restore ->work because we need to call io_req_work_drop_env.
+		 */
+		memcpy(&req->work, &apoll->work, sizeof(req->work));
+		kfree(apoll);
+	}
 
 	if (do_complete) {
 		io_cqring_fill_event(req, -ECANCELED);
···
 {
 	struct hlist_node *tmp;
 	struct io_kiocb *req;
-	int i;
+	int posted = 0, i;
 
spin_lock_irq(&ctx->completion_lock);43974348 for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {···4399435044004351 list = &ctx->cancel_hash[i];44014352 hlist_for_each_entry_safe(req, tmp, list, hash_node)44024402- io_poll_remove_one(req);43534353+ posted += io_poll_remove_one(req);44034354 }44044355 spin_unlock_irq(&ctx->completion_lock);4405435644064406- io_cqring_ev_posted(ctx);43574357+ if (posted)43584358+ io_cqring_ev_posted(ctx);44074359}4408436044094361static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr)···44734423 struct io_ring_ctx *ctx = req->ctx;44744424 struct io_poll_iocb *poll = &req->poll;4475442544764476- if (!req->result && !READ_ONCE(poll->canceled)) {44774477- struct poll_table_struct pt = { ._key = poll->events };44784478-44794479- req->result = vfs_poll(req->file, &pt) & poll->events;44804480- }44814481-44824482- spin_lock_irq(&ctx->completion_lock);44834483- if (!req->result && !READ_ONCE(poll->canceled)) {44844484- add_wait_queue(poll->head, &poll->wait);44264426+ if (io_poll_rewait(req, poll)) {44854427 spin_unlock_irq(&ctx->completion_lock);44864428 return;44874429 }44304430+44884431 hash_del(&req->hash_node);44894432 io_poll_complete(req, req->result, 0);44904433 req->flags |= REQ_F_COMP_LOCKED;···4708466547094666static int io_timeout(struct io_kiocb *req)47104667{47114711- unsigned count;47124668 struct io_ring_ctx *ctx = req->ctx;47134669 struct io_timeout_data *data;47144670 struct list_head *entry;47154671 unsigned span = 0;46724672+ u32 count = req->timeout.count;46734673+ u32 seq = req->sequence;4716467447174675 data = &req->io->timeout;47184676···47224678 * timeout event to be satisfied. 
If it isn't set, then this is47234679 * a pure timeout request, sequence isn't used.47244680 */47254725- count = req->timeout.count;47264681 if (!count) {47274682 req->flags |= REQ_F_TIMEOUT_NOSEQ;47284683 spin_lock_irq(&ctx->completion_lock);···47294686 goto add;47304687 }4731468847324732- req->sequence = ctx->cached_sq_head + count - 1;47334733- data->seq_offset = count;46894689+ req->sequence = seq + count;4734469047354691 /*47364692 * Insertion sort, ensuring the first entry in the list is always···47384696 spin_lock_irq(&ctx->completion_lock);47394697 list_for_each_prev(entry, &ctx->timeout_list) {47404698 struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);47414741- unsigned nxt_sq_head;46994699+ unsigned nxt_seq;47424700 long long tmp, tmp_nxt;47434743- u32 nxt_offset = nxt->io->timeout.seq_offset;47014701+ u32 nxt_offset = nxt->timeout.count;4744470247454703 if (nxt->flags & REQ_F_TIMEOUT_NOSEQ)47464704 continue;4747470547484706 /*47494749- * Since cached_sq_head + count - 1 can overflow, use type long47074707+ * Since seq + count can overflow, use type long47504708 * long to store it.47514709 */47524752- tmp = (long long)ctx->cached_sq_head + count - 1;47534753- nxt_sq_head = nxt->sequence - nxt_offset + 1;47544754- tmp_nxt = (long long)nxt_sq_head + nxt_offset - 1;47104710+ tmp = (long long)seq + count;47114711+ nxt_seq = nxt->sequence - nxt_offset;47124712+ tmp_nxt = (long long)nxt_seq + nxt_offset;4755471347564714 /*47574715 * cached_sq_head may overflow, and it will never overflow twice47584716 * once there is some timeout req still be valid.47594717 */47604760- if (ctx->cached_sq_head < nxt_sq_head)47184718+ if (seq < nxt_seq)47614719 tmp += UINT_MAX;4762472047634721 if (tmp > tmp_nxt)···55185476{55195477 struct io_kiocb *nxt;5520547855215521- if (!(req->flags & REQ_F_LINK))54795479+ if (!(req->flags & REQ_F_LINK_HEAD))55225480 return NULL;55235481 /* for polled retry, if flag is set, we already went through here */55245482 if (req->flags 
& REQ_F_POLLED)···56465604 io_queue_sqe(req, NULL);56475605}5648560656495649-#define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \56505650- IOSQE_IO_HARDLINK | IOSQE_ASYNC | \56515651- IOSQE_BUFFER_SELECT)56525652-56535653-static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,56075607+static int io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,56545608 struct io_submit_state *state, struct io_kiocb **link)56555609{56565610 struct io_ring_ctx *ctx = req->ctx;56575657- unsigned int sqe_flags;56585658- int ret, id, fd;56595659-56605660- sqe_flags = READ_ONCE(sqe->flags);56615661-56625662- /* enforce forwards compatibility on users */56635663- if (unlikely(sqe_flags & ~SQE_VALID_FLAGS)) {56645664- ret = -EINVAL;56655665- goto err_req;56665666- }56675667-56685668- if ((sqe_flags & IOSQE_BUFFER_SELECT) &&56695669- !io_op_defs[req->opcode].buffer_select) {56705670- ret = -EOPNOTSUPP;56715671- goto err_req;56725672- }56735673-56745674- id = READ_ONCE(sqe->personality);56755675- if (id) {56765676- req->work.creds = idr_find(&ctx->personality_idr, id);56775677- if (unlikely(!req->work.creds)) {56785678- ret = -EINVAL;56795679- goto err_req;56805680- }56815681- get_cred(req->work.creds);56825682- }56835683-56845684- /* same numerical values with corresponding REQ_F_*, safe to copy */56855685- req->flags |= sqe_flags & (IOSQE_IO_DRAIN | IOSQE_IO_HARDLINK |56865686- IOSQE_ASYNC | IOSQE_FIXED_FILE |56875687- IOSQE_BUFFER_SELECT);56885688-56895689- fd = READ_ONCE(sqe->fd);56905690- ret = io_req_set_file(state, req, fd, sqe_flags);56915691- if (unlikely(ret)) {56925692-err_req:56935693- io_cqring_add_event(req, ret);56945694- io_double_put_req(req);56955695- return false;56965696- }56115611+ int ret;5697561256985613 /*56995614 * If we already have a head request, queue this one for async···56695670 * next after the link request. 
The last one is done via56705671 * drain_next flag to persist the effect across calls.56715672 */56725672- if (sqe_flags & IOSQE_IO_DRAIN) {56735673+ if (req->flags & REQ_F_IO_DRAIN) {56735674 head->flags |= REQ_F_IO_DRAIN;56745675 ctx->drain_next = 1;56755676 }56765676- if (io_alloc_async_ctx(req)) {56775677- ret = -EAGAIN;56785678- goto err_req;56795679- }56775677+ if (io_alloc_async_ctx(req))56785678+ return -EAGAIN;5680567956815680 ret = io_req_defer_prep(req, sqe);56825681 if (ret) {56835682 /* fail even hard links since we don't submit */56845683 head->flags |= REQ_F_FAIL_LINK;56855685- goto err_req;56845684+ return ret;56865685 }56875686 trace_io_uring_link(ctx, req, head);56885687 list_add_tail(&req->link_list, &head->link_list);5689568856905689 /* last request of a link, enqueue the link */56915691- if (!(sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK))) {56905690+ if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {56925691 io_queue_link_head(head);56935692 *link = NULL;56945693 }56955694 } else {56965695 if (unlikely(ctx->drain_next)) {56975696 req->flags |= REQ_F_IO_DRAIN;56985698- req->ctx->drain_next = 0;56975697+ ctx->drain_next = 0;56995698 }57005700- if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {57015701- req->flags |= REQ_F_LINK;56995699+ if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {57005700+ req->flags |= REQ_F_LINK_HEAD;57025701 INIT_LIST_HEAD(&req->link_list);5703570257045704- if (io_alloc_async_ctx(req)) {57055705- ret = -EAGAIN;57065706- goto err_req;57075707- }57035703+ if (io_alloc_async_ctx(req))57045704+ return -EAGAIN;57055705+57085706 ret = io_req_defer_prep(req, sqe);57095707 if (ret)57105708 req->flags |= REQ_F_FAIL_LINK;···57115715 }57125716 }5713571757145714- return true;57185718+ return 0;57155719}5716572057175721/*···57855789 ctx->cached_sq_head++;57865790}5787579157885788-static void io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,57895789- const struct io_uring_sqe *sqe)57925792+#define SQE_VALID_FLAGS 
(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \57935793+ IOSQE_IO_HARDLINK | IOSQE_ASYNC | \57945794+ IOSQE_BUFFER_SELECT)57955795+57965796+static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,57975797+ const struct io_uring_sqe *sqe,57985798+ struct io_submit_state *state, bool async)57905799{58005800+ unsigned int sqe_flags;58015801+ int id, fd;58025802+57915803 /*57925804 * All io need record the previous position, if LINK vs DARIN,57935805 * it can be used to mark the position of the first IO in the57945806 * link list.57955807 */57965796- req->sequence = ctx->cached_sq_head;58085808+ req->sequence = ctx->cached_sq_head - ctx->cached_sq_dropped;57975809 req->opcode = READ_ONCE(sqe->opcode);57985810 req->user_data = READ_ONCE(sqe->user_data);57995811 req->io = NULL;···58125808 refcount_set(&req->refs, 2);58135809 req->task = NULL;58145810 req->result = 0;58115811+ req->needs_fixed_file = async;58155812 INIT_IO_WORK(&req->work, io_wq_submit_work);58135813+58145814+ if (unlikely(req->opcode >= IORING_OP_LAST))58155815+ return -EINVAL;58165816+58175817+ if (io_op_defs[req->opcode].needs_mm && !current->mm) {58185818+ if (unlikely(!mmget_not_zero(ctx->sqo_mm)))58195819+ return -EFAULT;58205820+ use_mm(ctx->sqo_mm);58215821+ }58225822+58235823+ sqe_flags = READ_ONCE(sqe->flags);58245824+ /* enforce forwards compatibility on users */58255825+ if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))58265826+ return -EINVAL;58275827+58285828+ if ((sqe_flags & IOSQE_BUFFER_SELECT) &&58295829+ !io_op_defs[req->opcode].buffer_select)58305830+ return -EOPNOTSUPP;58315831+58325832+ id = READ_ONCE(sqe->personality);58335833+ if (id) {58345834+ req->work.creds = idr_find(&ctx->personality_idr, id);58355835+ if (unlikely(!req->work.creds))58365836+ return -EINVAL;58375837+ get_cred(req->work.creds);58385838+ }58395839+58405840+ /* same numerical values with corresponding REQ_F_*, safe to copy */58415841+ req->flags |= sqe_flags & (IOSQE_IO_DRAIN | IOSQE_IO_HARDLINK 
|58425842+ IOSQE_ASYNC | IOSQE_FIXED_FILE |58435843+ IOSQE_BUFFER_SELECT | IOSQE_IO_LINK);58445844+58455845+ fd = READ_ONCE(sqe->fd);58465846+ return io_req_set_file(state, req, fd, sqe_flags);58165847}5817584858185849static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,58195819- struct file *ring_file, int ring_fd,58205820- struct mm_struct **mm, bool async)58505850+ struct file *ring_file, int ring_fd, bool async)58215851{58225852 struct io_submit_state state, *statep = NULL;58235853 struct io_kiocb *link = NULL;58245854 int i, submitted = 0;58255825- bool mm_fault = false;5826585558275856 /* if we have a backlog and couldn't flush it all, return BUSY */58285857 if (test_bit(0, &ctx->sq_check_overflow)) {···58955858 break;58965859 }5897586058985898- io_init_req(ctx, req, sqe);58615861+ err = io_init_req(ctx, req, sqe, statep, async);58995862 io_consume_sqe(ctx);59005863 /* will complete beyond this point, count as submitted */59015864 submitted++;5902586559035903- if (unlikely(req->opcode >= IORING_OP_LAST)) {59045904- err = -EINVAL;58665866+ if (unlikely(err)) {59055867fail_req:59065868 io_cqring_add_event(req, err);59075869 io_double_put_req(req);59085870 break;59095871 }5910587259115911- if (io_op_defs[req->opcode].needs_mm && !*mm) {59125912- mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);59135913- if (unlikely(mm_fault)) {59145914- err = -EFAULT;59155915- goto fail_req;59165916- }59175917- use_mm(ctx->sqo_mm);59185918- *mm = ctx->sqo_mm;59195919- }59205920-59215921- req->needs_fixed_file = async;59225873 trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,59235874 true, async);59245924- if (!io_submit_sqe(req, sqe, statep, &link))59255925- break;58755875+ err = io_submit_sqe(req, sqe, statep, &link);58765876+ if (err)58775877+ goto fail_req;59265878 }5927587959285880 if (unlikely(submitted != nr)) {···59305904 return submitted;59315905}5932590659075907+static inline void io_sq_thread_drop_mm(struct io_ring_ctx 
*ctx)59085908+{59095909+ struct mm_struct *mm = current->mm;59105910+59115911+ if (mm) {59125912+ unuse_mm(mm);59135913+ mmput(mm);59145914+ }59155915+}59165916+59335917static int io_sq_thread(void *data)59345918{59355919 struct io_ring_ctx *ctx = data;59365936- struct mm_struct *cur_mm = NULL;59375920 const struct cred *old_cred;59385921 mm_segment_t old_fs;59395922 DEFINE_WAIT(wait);···59835948 * adding ourselves to the waitqueue, as the unuse/drop59845949 * may sleep.59855950 */59865986- if (cur_mm) {59875987- unuse_mm(cur_mm);59885988- mmput(cur_mm);59895989- cur_mm = NULL;59905990- }59515951+ io_sq_thread_drop_mm(ctx);5991595259925953 /*59935954 * We're polling. If we're within the defined idle···60476016 }6048601760496018 mutex_lock(&ctx->uring_lock);60506050- ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);60196019+ ret = io_submit_sqes(ctx, to_submit, NULL, -1, true);60516020 mutex_unlock(&ctx->uring_lock);60526021 timeout = jiffies + ctx->sq_thread_idle;60536022 }···60566025 task_work_run();6057602660586027 set_fs(old_fs);60596059- if (cur_mm) {60606060- unuse_mm(cur_mm);60616061- mmput(cur_mm);60626062- }60286028+ io_sq_thread_drop_mm(ctx);60636029 revert_creds(old_cred);6064603060656031 kthread_parkme();···75377509 wake_up(&ctx->sqo_wait);75387510 submitted = to_submit;75397511 } else if (to_submit) {75407540- struct mm_struct *cur_mm;75417541-75427512 mutex_lock(&ctx->uring_lock);75437543- /* already have mm, so io_submit_sqes() won't try to grab it */75447544- cur_mm = ctx->sqo_mm;75457545- submitted = io_submit_sqes(ctx, to_submit, f.file, fd,75467546- &cur_mm, false);75137513+ submitted = io_submit_sqes(ctx, to_submit, f.file, fd, false);75477514 mutex_unlock(&ctx->uring_lock);7548751575497516 if (submitted != to_submit)
···
 	noffsets = 0;
 	for (pos = kbuf; pos; pos = next_line) {
 		struct proc_timens_offset *off = &offsets[noffsets];
+		char clock[10];
 		int err;

 		/* Find the end of line and ensure we don't look past it */
···
 			next_line = NULL;
 		}

-		err = sscanf(pos, "%u %lld %lu", &off->clockid,
+		err = sscanf(pos, "%9s %lld %lu", clock,
 			     &off->val.tv_sec, &off->val.tv_nsec);
 		if (err != 3 || off->val.tv_nsec >= NSEC_PER_SEC)
 			goto out;
+
+		clock[sizeof(clock) - 1] = 0;
+		if (strcmp(clock, "monotonic") == 0 ||
+		    strcmp(clock, __stringify(CLOCK_MONOTONIC)) == 0)
+			off->clockid = CLOCK_MONOTONIC;
+		else if (strcmp(clock, "boottime") == 0 ||
+			 strcmp(clock, __stringify(CLOCK_BOOTTIME)) == 0)
+			off->clockid = CLOCK_BOOTTIME;
+		else
+			goto out;
+
 		noffsets++;
 		if (noffsets == ARRAY_SIZE(offsets)) {
 			if (next_line)
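The hunk above replaces an unchecked `%u` clockid parse with a bounded `%9s` string conversion that is then compared against known clock names. A minimal userspace sketch of the same bounded-`sscanf` pattern (`parse_offset` is a hypothetical helper for illustration, not kernel code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * "%9s" stops after 9 bytes, so the 10-byte buffer (9 chars + NUL)
 * cannot be overflowed by an overlong clock name; an overlong token
 * also breaks the following numeric conversions, so the caller can
 * detect it from the sscanf return value.
 */
static int parse_offset(const char *line, char clock[10],
			long long *sec, unsigned long *nsec)
{
	return sscanf(line, "%9s %lld %lu", clock, sec, nsec);
}
```

Note how a too-long first token leaves its tail in the input, so the `%lld` conversion fails and `sscanf` reports fewer than three conversions.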
fs/proc/root.c (+7)

···
 	if (ns->proc_thread_self)
 		dput(ns->proc_thread_self);
 	kill_anon_super(sb);
+
+	/* Make the pid namespace safe for the next mount of proc */
+	ns->proc_self = NULL;
+	ns->proc_thread_self = NULL;
+	ns->pid_gid = GLOBAL_ROOT_GID;
+	ns->hide_pid = 0;
+
 	put_pid_ns(ns);
 }
···
 	struct xfs_kobj		m_error_meta_kobj;
 	struct xfs_error_cfg	m_error_cfg[XFS_ERR_CLASS_MAX][XFS_ERR_ERRNO_MAX];
 	struct xstats		m_stats;	/* per-fs stats */
-	struct ratelimit_state	m_flush_inodes_ratelimit;

+	/*
+	 * Workqueue item so that we can coalesce multiple inode flush attempts
+	 * into a single flush.
+	 */
+	struct work_struct	m_flush_inodes_work;
 	struct workqueue_struct *m_buf_workqueue;
 	struct workqueue_struct	*m_unwritten_workqueue;
 	struct workqueue_struct	*m_cil_workqueue;
fs/xfs/xfs_reflink.c (+1)

···
 		uirec.br_startblock = irec->br_startblock + rlen;
 		uirec.br_startoff = irec->br_startoff + rlen;
 		uirec.br_blockcount = unmap_len - rlen;
+		uirec.br_state = irec->br_state;
 		unmap_len = rlen;

 		/* If this isn't a real mapping, we're done. */
fs/xfs/xfs_super.c (+22, -18)

···
 	destroy_workqueue(mp->m_buf_workqueue);
 }

+static void
+xfs_flush_inodes_worker(
+	struct work_struct	*work)
+{
+	struct xfs_mount	*mp = container_of(work, struct xfs_mount,
+						   m_flush_inodes_work);
+	struct super_block	*sb = mp->m_super;
+
+	if (down_read_trylock(&sb->s_umount)) {
+		sync_inodes_sb(sb);
+		up_read(&sb->s_umount);
+	}
+}
+
 /*
  * Flush all dirty data to disk. Must not be called while holding an XFS_ILOCK
  * or a page lock. We use sync_inodes_sb() here to ensure we block while waiting
···
 xfs_flush_inodes(
 	struct xfs_mount	*mp)
 {
-	struct super_block	*sb = mp->m_super;
-
-	if (!__ratelimit(&mp->m_flush_inodes_ratelimit))
+	/*
+	 * If flush_work() returns true then that means we waited for a flush
+	 * which was already in progress. Don't bother running another scan.
+	 */
+	if (flush_work(&mp->m_flush_inodes_work))
 		return;

-	if (down_read_trylock(&sb->s_umount)) {
-		sync_inodes_sb(sb);
-		up_read(&sb->s_umount);
-	}
+	queue_work(mp->m_sync_workqueue, &mp->m_flush_inodes_work);
+	flush_work(&mp->m_flush_inodes_work);
 }

 /* Catch misguided souls that try to use this interface on XFS */
···
 	if (error)
 		goto out_free_names;

-	/*
-	 * Cap the number of invocations of xfs_flush_inodes to 16 for every
-	 * quarter of a second. The magic numbers here were determined by
-	 * observation neither to cause stalls in writeback when there are a
-	 * lot of IO threads and the fs is near ENOSPC, nor cause any fstest
-	 * regressions. YMMV.
-	 */
-	ratelimit_state_init(&mp->m_flush_inodes_ratelimit, HZ / 4, 16);
-	ratelimit_set_flags(&mp->m_flush_inodes_ratelimit,
-			RATELIMIT_MSG_ON_RELEASE);
-
 	error = xfs_init_mount_workqueues(mp);
 	if (error)
 		goto out_close_devices;
···
 	spin_lock_init(&mp->m_perag_lock);
 	mutex_init(&mp->m_growlock);
 	atomic_set(&mp->m_active_trans, 0);
+	INIT_WORK(&mp->m_flush_inodes_work, xfs_flush_inodes_worker);
 	INIT_DELAYED_WORK(&mp->m_reclaim_work, xfs_reclaim_worker);
 	INIT_DELAYED_WORK(&mp->m_eofblocks_work, xfs_eofblocks_worker);
 	INIT_DELAYED_WORK(&mp->m_cowblocks_work, xfs_cowblocks_worker);
···
 	 * blocking (BLK_MQ_F_BLOCKING). Must be the last member - see also
 	 * blk_mq_hw_ctx_size().
 	 */
-	struct srcu_struct	srcu[0];
+	struct srcu_struct	srcu[];
 };

 /**
include/linux/blk_types.h (+1, -1)

···
 	 * double allocations for a small number of bio_vecs. This member
 	 * MUST obviously be kept at the very end of the bio.
 	 */
-	struct bio_vec		bi_inline_vecs[0];
+	struct bio_vec		bi_inline_vecs[];
 };

 #define BIO_RESET_BYTES		offsetof(struct bio, bi_max_vecs)
···
 struct em_perf_domain {
 	struct em_cap_state *table;
 	int nr_cap_states;
-	unsigned long cpus[0];
+	unsigned long cpus[];
 };

 #ifdef CONFIG_ENERGY_MODEL
···
 	void *owner;			/* private data to retrieve at alloc time */
 	unsigned long start_addr;	/* start address of memory chunk */
 	unsigned long end_addr;		/* end address of memory chunk (inclusive) */
-	unsigned long bits[0];		/* bitmap for allocating memory chunk */
+	unsigned long bits[];		/* bitmap for allocating memory chunk */
 };

 /*
include/linux/i2c.h (-6)

···
 			unsigned short const *addr_list,
 			int (*probe)(struct i2c_adapter *adap, unsigned short addr));

-struct i2c_client *
-i2c_new_probed_device(struct i2c_adapter *adap,
-		      struct i2c_board_info *info,
-		      unsigned short const *addr_list,
-		      int (*probe)(struct i2c_adapter *adap, unsigned short addr));
-
 /* Common custom probe functions */
 int i2c_probe_func_quick_read(struct i2c_adapter *adap, unsigned short addr);
include/linux/igmp.h (+1, -1)

···
 	unsigned int		sl_max;
 	unsigned int		sl_count;
 	struct rcu_head		rcu;
-	__be32			sl_addr[0];
+	__be32			sl_addr[];
 };

 #define IP_SFLSIZE(count)	(sizeof(struct ip_sf_socklist) + \
···
  */
 struct rs_control {
 	struct rs_codec	*codec;
-	uint16_t	buffers[0];
+	uint16_t	buffers[];
 };

 /* General purpose RS codec, 8-bit data width, symbol width 1-15 bit */
include/linux/sched/topology.h (+1, -1)

···
 	 * by attaching extra space to the end of the structure,
 	 * depending on how many CPUs the kernel has booted up with)
 	 */
-	unsigned long		span[0];
+	unsigned long		span[];
 };

 static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
include/linux/skbuff.h (+1, -1)

···
 	refcount_t refcnt;
 	u8 offset[SKB_EXT_NUM]; /* in chunks of 8 bytes */
 	u8 chunks;		/* same */
-	char data[0] __aligned(8);
+	char data[] __aligned(8);
 };

 struct skb_ext *__skb_ext_alloc(void);
include/linux/swap.h (+1, -1)

···
 	 */
 	struct work_struct discard_work; /* discard worker */
 	struct swap_cluster_list discard_clusters; /* discard clusters list */
-	struct plist_node avail_lists[0]; /*
+	struct plist_node avail_lists[]; /*
					   * entries in swap_avail_heads, one
					   * entry per node.
					   * Must be last as the number of the
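The `[0]` to `[]` changes in the surrounding headers convert the old GNU zero-length-array idiom to a C99 flexible array member, which lets the compiler diagnose misuse (taking `sizeof` the member, placing it anywhere but last). A standalone sketch of the pattern under an invented struct name (`addr_list` and `addr_list_alloc` are illustrative, not kernel code):

```c
#include <stdlib.h>

/*
 * Example struct mirroring the conversions above: the flexible array
 * member must be the last member and contributes nothing to sizeof,
 * so the usual "header plus n elements" allocation keeps working.
 */
struct addr_list {
	unsigned int count;
	unsigned int addrs[];	/* C99 flexible array member */
};

/* Allocate the header plus n trailing elements in one block. */
static struct addr_list *addr_list_alloc(unsigned int n)
{
	struct addr_list *l = calloc(1, sizeof(*l) + n * sizeof(l->addrs[0]));

	if (l)
		l->count = n;
	return l;
}
```

The allocation arithmetic is identical to the `[0]` form; only the declaration changes, which is why these conversions are behavior-neutral.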
···
  *	protocol frames.
  * @control_port_over_nl80211: TRUE if userspace expects to exchange control
  *	port frames over NL80211 instead of the network interface.
+ * @control_port_no_preauth: disables pre-auth rx over the nl80211 control
+ *	port for mac80211
  * @wep_keys: static WEP keys, if not NULL points to an array of
  *	CFG80211_MAX_WEP_KEYS WEP keys
  * @wep_tx_key: key index (0..3) of the default TX static WEP key
···
  * @he_capa: HE capabilities of station
  * @he_capa_len: the length of the HE capabilities
  * @airtime_weight: airtime scheduler weight for this station
+ * @txpwr: transmit power for an associated station
  */
 struct station_parameters {
 	const u8 *supported_rates;
···
  * @txq_memory_limit: configuration internal TX queue memory limit
  * @txq_quantum: configuration of internal TX queue scheduler quantum
  *
+ * @tx_queue_len: allow setting transmit queue len for drivers not using
+ *	wake_tx_queue
+ *
  * @support_mbssid: can HW support association with nontransmitted AP
  * @support_only_he_mbssid: don't parse MBSSID elements if it is not
  *	HE AP, in order to avoid compatibility issues.
···
  *	supported by the driver for each peer
  * @tid_config_support.max_retry: maximum supported retry count for
  *	long/short retry configuration
+ *
+ * @max_data_retry_count: maximum supported per TID retry count for
+ *	configuration through the %NL80211_TID_CONFIG_ATTR_RETRY_SHORT and
+ *	%NL80211_TID_CONFIG_ATTR_RETRY_LONG attributes
  */
 struct wiphy {
 	/* assign these fields before you register the wiphy */
···
 {
 	struct nft_expr *expr;

-	if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) {
+	if (__nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) {
 		expr = nft_set_ext_expr(ext);
 		expr->ops->eval(expr, regs, pkt);
 	}
include/net/sock.h (+3, -3)

···
 }

 /**
- * skb_steal_sock
- * @skb to steal the socket from
- * @refcounted is set to true if the socket is reference-counted
+ * skb_steal_sock - steal a socket from an sk_buff
+ * @skb: sk_buff to steal the socket from
+ * @refcounted: is set to true if the socket is reference-counted
  */
 static inline struct sock *
 skb_steal_sock(struct sk_buff *skb, bool *refcounted)
···
 	__u32 fm_mapped_extents;/* number of extents that were mapped (out) */
 	__u32 fm_extent_count;	/* size of fm_extents array (in) */
 	__u32 fm_reserved;
-	struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */
+	struct fiemap_extent fm_extents[]; /* array of mapped extents (out) */
 };

 #define FIEMAP_MAX_OFFSET	(~0ULL)
include/uapi/linux/netfilter/nf_tables.h (+2)

···
  * @NFT_SET_TIMEOUT: set uses timeouts
  * @NFT_SET_EVAL: set can be updated from the evaluation path
  * @NFT_SET_OBJECT: set contains stateful objects
+ * @NFT_SET_CONCAT: set contains a concatenation
  */
 enum nft_set_flags {
 	NFT_SET_ANONYMOUS		= 0x1,
···
 	NFT_SET_TIMEOUT			= 0x10,
 	NFT_SET_EVAL			= 0x20,
 	NFT_SET_OBJECT			= 0x40,
+	NFT_SET_CONCAT			= 0x80,
 };

 /**
include/uapi/linux/netfilter/xt_IDLETIMER.h (+1)

···

 	char label[MAX_IDLETIMER_LABEL_SIZE];

+	__u8 send_nl_msg;	/* unused: for compatibility with Android */
 	__u8 timer_type;

 	/* for kernel module internal use only */
kernel/bpf/bpf_lru_list.h (+1, -1)

···
 struct bpf_lru_list {
 	struct list_head lists[NR_BPF_LRU_LIST_T];
 	unsigned int counts[NR_BPF_LRU_LIST_COUNT];
-	/* The next inacitve list rotation starts from here */
+	/* The next inactive list rotation starts from here */
 	struct list_head *next_inactive_rotation;

 	raw_spinlock_t lock ____cacheline_aligned_in_smp;
kernel/bpf/syscall.c (+7, -9)

···
 {
 	struct bpf_map *map = vma->vm_file->private_data;

-	bpf_map_inc_with_uref(map);
-
-	if (vma->vm_flags & VM_WRITE) {
+	if (vma->vm_flags & VM_MAYWRITE) {
 		mutex_lock(&map->freeze_mutex);
 		map->writecnt++;
 		mutex_unlock(&map->freeze_mutex);
···
 {
 	struct bpf_map *map = vma->vm_file->private_data;

-	if (vma->vm_flags & VM_WRITE) {
+	if (vma->vm_flags & VM_MAYWRITE) {
 		mutex_lock(&map->freeze_mutex);
 		map->writecnt--;
 		mutex_unlock(&map->freeze_mutex);
 	}
-
-	bpf_map_put_with_uref(map);
 }

 static const struct vm_operations_struct bpf_map_default_vmops = {
···
 	/* set default open/close callbacks */
 	vma->vm_ops = &bpf_map_default_vmops;
 	vma->vm_private_data = map;
+	vma->vm_flags &= ~VM_MAYEXEC;
+	if (!(vma->vm_flags & VM_WRITE))
+		/* disallow re-mapping with PROT_WRITE */
+		vma->vm_flags &= ~VM_MAYWRITE;

 	err = map->ops->map_mmap(map, vma);
 	if (err)
 		goto out;

-	bpf_map_inc_with_uref(map);
-
-	if (vma->vm_flags & VM_WRITE)
+	if (vma->vm_flags & VM_MAYWRITE)
 		map->writecnt++;
 out:
 	mutex_unlock(&map->freeze_mutex);
···
 	return ret;
 }

-/**
- *	setup_irq - setup an interrupt
- *	@irq: Interrupt line to setup
- *	@act: irqaction for the interrupt
- *
- *	Used to statically setup interrupts in the early boot process.
- */
-int setup_irq(unsigned int irq, struct irqaction *act)
-{
-	int retval;
-	struct irq_desc *desc = irq_to_desc(irq);
-
-	if (!desc || WARN_ON(irq_settings_is_per_cpu_devid(desc)))
-		return -EINVAL;
-
-	retval = irq_chip_pm_get(&desc->irq_data);
-	if (retval < 0)
-		return retval;
-
-	retval = __setup_irq(irq, desc, act);
-
-	if (retval)
-		irq_chip_pm_put(&desc->irq_data);
-
-	return retval;
-}
-EXPORT_SYMBOL_GPL(setup_irq);
-
 /*
  * Internal function to unregister an irqaction - used to free
  * regular and special interrupts that are part of the architecture.
···
 	kfree(action->secondary);
 	return action;
 }
-
-/**
- *	remove_irq - free an interrupt
- *	@irq: Interrupt line to free
- *	@act: irqaction for the interrupt
- *
- *	Used to remove interrupts statically setup by the early boot process.
- */
-void remove_irq(unsigned int irq, struct irqaction *act)
-{
-	struct irq_desc *desc = irq_to_desc(irq);
-
-	if (desc && !WARN_ON(irq_settings_is_per_cpu_devid(desc)))
-		__free_irq(desc, act->dev_id);
-}
-EXPORT_SYMBOL_GPL(remove_irq);

 /**
  * free_irq - free an interrupt allocated with request_irq
···
 static int __init housekeeping_isolcpus_setup(char *str)
 {
 	unsigned int flags = 0;
+	bool illegal = false;
+	char *par;
+	int len;

 	while (isalpha(*str)) {
 		if (!strncmp(str, "nohz,", 5)) {
···
 			continue;
 		}

-		pr_warn("isolcpus: Error, unknown flag\n");
-		return 0;
+		/*
+		 * Skip unknown sub-parameter and validate that it is not
+		 * containing an invalid character.
+		 */
+		for (par = str, len = 0; *str && *str != ','; str++, len++) {
+			if (!isalpha(*str) && *str != '_')
+				illegal = true;
+		}
+
+		if (illegal) {
+			pr_warn("isolcpus: Invalid flag %.*s\n", len, par);
+			return 0;
+		}
+
+		pr_info("isolcpus: Skipped unknown flag %.*s\n", len, par);
+		str++;
 	}

 	/* Default behaviour for isolcpus without flags */
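The change above stops treating unknown `isolcpus=` sub-parameters as fatal: well-formed but unrecognized flags are skipped for forward compatibility, while flags containing invalid characters still abort parsing. A userspace sketch of that validation loop (`parse_flags` is a hypothetical simplification, without the known-flag matching the kernel does first):

```c
#include <ctype.h>

/*
 * Walk a comma-separated flag list. Each flag may contain only
 * alphabetic characters and '_'; a flag with any other character is
 * rejected outright. Returns the number of flags seen, or -1 on an
 * illegal flag, mirroring the warn-and-bail path above.
 */
static int parse_flags(const char *str)
{
	int n = 0;

	while (isalpha((unsigned char)*str)) {
		while (*str && *str != ',') {
			if (!isalpha((unsigned char)*str) && *str != '_')
				return -1;	/* invalid character */
			str++;
		}
		n++;
		if (*str == ',')
			str++;
	}
	return n;
}
```

The character whitelist matters: skipping an arbitrary token silently would also swallow genuine typos such as `no-hz`, so those are still rejected.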
+5-9
kernel/signal.c
···
 	unsigned long flags;
 	int ret = -EINVAL;
 
+	if (!valid_signal(sig))
+		return ret;
+
 	clear_siginfo(&info);
 	info.si_signo = sig;
 	info.si_errno = errno;
 	info.si_code = SI_ASYNCIO;
 	*((sigval_t *)&info.si_pid) = addr;
-
-	if (!valid_signal(sig))
-		return ret;
 
 	rcu_read_lock();
 	p = pid_task(pid, PIDTYPE_PID);
···
 {
 	int ret;
 
-	if (pid > 0) {
-		rcu_read_lock();
-		ret = kill_pid_info(sig, info, find_vpid(pid));
-		rcu_read_unlock();
-		return ret;
-	}
+	if (pid > 0)
+		return kill_proc_info(sig, info, pid);
 
 	/* -INT_MIN is undefined.  Exclude this case to avoid a UBSAN warning */
 	if (pid == INT_MIN)
···
 				 struct event_trigger_data *data,
 				 struct trace_event_file *file)
 {
-	int ret = register_trigger(glob, ops, data, file);
+	if (tracing_alloc_snapshot_instance(file->tr) != 0)
+		return 0;
 
-	if (ret > 0 && tracing_alloc_snapshot_instance(file->tr) != 0) {
-		unregister_trigger(glob, ops, data, file);
-		ret = 0;
-	}
-
-	return ret;
+	return register_trigger(glob, ops, data, file);
 }
 
 static int
+2
lib/Kconfig.debug
···
 config DEBUG_INFO_BTF
 	bool "Generate BTF typeinfo"
 	depends on DEBUG_INFO
+	depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED
+	depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST
 	help
 	  Generate deduplicated BTF type information from DWARF debug info.
 	  Turning this on expects presence of pahole tool, which will convert
+12-1
mm/mremap.c
···
 		/* Always put back VM_ACCOUNT since we won't unmap */
 		vma->vm_flags |= VM_ACCOUNT;
 
-		vm_acct_memory(vma_pages(new_vma));
+		vm_acct_memory(new_len >> PAGE_SHIFT);
 	}
+
+	/*
+	 * VMAs can actually be merged back together in copy_vma
+	 * calling merge_vma. This can happen with anonymous vmas
+	 * which have not yet been faulted, so if we were to consider
+	 * this VMA split we'll end up adding VM_ACCOUNT on the
+	 * next VMA, which is completely unrelated if this VMA
+	 * was re-merged.
+	 */
+	if (split && new_vma == vma)
+		split = 0;
 
 	/* We always clear VM_LOCKED[ONFAULT] on the old vma */
 	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+4-2
net/core/dev.c
···
 
 int netdev_tstamp_prequeue __read_mostly = 1;
 int netdev_budget __read_mostly = 300;
-unsigned int __read_mostly netdev_budget_usecs = 2000;
+/* Must be at least 2 jiffies to guarantee 1 jiffy timeout */
+unsigned int __read_mostly netdev_budget_usecs = 2 * USEC_PER_SEC / HZ;
 int weight_p __read_mostly = 64;           /* old backlog weight */
 int dev_weight_rx_bias __read_mostly = 1;  /* bias for backlog weight */
 int dev_weight_tx_bias __read_mostly = 1;  /* bias for output_queue quota */
···
 	const struct net_device_ops *ops = dev->netdev_ops;
 	enum bpf_netdev_command query;
 	u32 prog_id, expected_id = 0;
-	struct bpf_prog *prog = NULL;
 	bpf_op_t bpf_op, bpf_chk;
+	struct bpf_prog *prog;
 	bool offload;
 	int err;
···
 	} else {
 		if (!prog_id)
 			return 0;
+		prog = NULL;
 	}
 
 	err = dev_xdp_install(dev, bpf_op, extack, flags, prog);
+1-1
net/core/filter.c
···
 		return -EOPNOTSUPP;
 	if (unlikely(dev_net(skb->dev) != sock_net(sk)))
 		return -ENETUNREACH;
-	if (unlikely(sk->sk_reuseport))
+	if (unlikely(sk_fullsock(sk) && sk->sk_reuseport))
 		return -ESOCKTNOSUPPORT;
 	if (sk_is_refcounted(sk) &&
 	    unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+1-1
net/core/net-sysfs.c
···
 	struct net_device *netdev = to_net_dev(dev);
 	struct net *net = dev_net(netdev);
 	unsigned long new;
-	int ret = -EINVAL;
+	int ret;
 
 	if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
 		return -EPERM;
+1-1
net/core/sock.c
···
 		 * as not suitable for copying when cloning.
 		 */
 		if (sk_user_data_is_nocopy(newsk))
-			RCU_INIT_POINTER(newsk->sk_user_data, NULL);
+			newsk->sk_user_data = NULL;
 
 		newsk->sk_err	   = 0;
 		newsk->sk_err_soft = 0;
+6-1
net/dsa/port.c
···
 {
 	struct dsa_switch *ds = dp->ds;
 	struct device_node *phy_np;
+	int port = dp->index;
 
 	if (!ds->ops->adjust_link) {
 		phy_np = of_parse_phandle(dp->dn, "phy-handle", 0);
-		if (of_phy_is_fixed_link(dp->dn) || phy_np)
+		if (of_phy_is_fixed_link(dp->dn) || phy_np) {
+			if (ds->ops->phylink_mac_link_down)
+				ds->ops->phylink_mac_link_down(ds, port,
+					MLO_AN_FIXED, PHY_INTERFACE_MODE_NA);
 			return dsa_port_phylink_register(dp);
+		}
 		return 0;
 	}
···
 
 	/* Detect overlaps as we descend the tree. Set the flag in these cases:
 	 *
-	 * a1. |__ _ _?  >|__ _ _   (insert start after existing start)
-	 * a2. _ _ __>|  ?_ _ __|   (insert end before existing end)
-	 * a3. _ _ ___|  ?_ _ _>|   (insert end after existing end)
-	 * a4. >|__ _ _   _ _ __|   (insert start before existing end)
+	 * a1. _ _ __>|  ?_ _ __|   (insert end before existing end)
+	 * a2. _ _ ___|  ?_ _ _>|   (insert end after existing end)
+	 * a3. _ _ ___? >|_ _ __|   (insert start before existing end)
 	 *
 	 * and clear it later on, as we eventually reach the points indicated by
 	 * '?' above, in the cases described below. We'll always meet these
 	 * later, locally, due to tree ordering, and overlaps for the intervals
 	 * that are the closest together are always evaluated last.
 	 *
-	 * b1. |__ _ _!  >|__ _ _   (insert start after existing end)
-	 * b2. _ _ __>|  !_ _ __|   (insert end before existing start)
-	 * b3. !_____>|             (insert end after existing start)
+	 * b1. _ _ __>|  !_ _ __|   (insert end before existing start)
+	 * b2. _ _ ___|  !_ _ _>|   (insert end after existing start)
+	 * b3. _ _ ___! >|_ _ __|   (insert start after existing end)
 	 *
-	 * Case a4. resolves to b1.:
+	 * Case a3. resolves to b3.:
 	 * - if the inserted start element is the leftmost, because the '0'
 	 *   element in the tree serves as end element
 	 * - otherwise, if an existing end is found. Note that end elements are
 	 *   always inserted after corresponding start elements.
 	 *
-	 * For a new, rightmost pair of elements, we'll hit cases b1. and b3.,
+	 * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
 	 * in that order.
 	 *
 	 * The flag is also cleared in two special cases:
···
 			p = &parent->rb_left;
 
 			if (nft_rbtree_interval_start(new)) {
-				overlap = nft_rbtree_interval_start(rbe) &&
-					  nft_set_elem_active(&rbe->ext,
-							      genmask);
+				if (nft_rbtree_interval_end(rbe) &&
+				    nft_set_elem_active(&rbe->ext, genmask))
+					overlap = false;
 			} else {
 				overlap = nft_rbtree_interval_end(rbe) &&
 					  nft_set_elem_active(&rbe->ext,
+3
net/netfilter/xt_IDLETIMER.c
···
 
 	pr_debug("checkentry targinfo%s\n", info->label);
 
+	if (info->send_nl_msg)
+		return -EOPNOTSUPP;
+
 	ret = idletimer_tg_helper((struct idletimer_tg_info *)info);
 	if(ret < 0)
 	{
···
 /*
- * Copyright (c) 2006 Oracle.  All rights reserved.
+ * Copyright (c) 2006, 2020 Oracle and/or its affiliates.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
···
 	if (rm->rdma.op_active)
 		rds_rdma_free_op(&rm->rdma);
 	if (rm->rdma.op_rdma_mr)
-		rds_mr_put(rm->rdma.op_rdma_mr);
+		kref_put(&rm->rdma.op_rdma_mr->r_kref, __rds_put_mr_final);
 
 	if (rm->atomic.op_active)
 		rds_atomic_free_op(&rm->atomic);
 	if (rm->atomic.op_rdma_mr)
-		rds_mr_put(rm->atomic.op_rdma_mr);
+		kref_put(&rm->atomic.op_rdma_mr->r_kref, __rds_put_mr_final);
 }
 
 void rds_message_put(struct rds_message *rm)
···
 /*
  * RDS ops use this to grab SG entries from the rm's sg pool.
  */
-struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
-					  int *ret)
+struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents)
 {
 	struct scatterlist *sg_first = (struct scatterlist *) &rm[1];
 	struct scatterlist *sg_ret;
 
-	if (WARN_ON(!ret))
-		return NULL;
-
 	if (nents <= 0) {
 		pr_warn("rds: alloc sgs failed! nents <= 0\n");
-		*ret = -EINVAL;
-		return NULL;
+		return ERR_PTR(-EINVAL);
 	}
 
 	if (rm->m_used_sgs + nents > rm->m_total_sgs) {
 		pr_warn("rds: alloc sgs failed! total %d used %d nents %d\n",
 			rm->m_total_sgs, rm->m_used_sgs, nents);
-		*ret = -ENOMEM;
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 	}
 
 	sg_ret = &sg_first[rm->m_used_sgs];
···
 	unsigned int i;
 	int num_sgs = DIV_ROUND_UP(total_len, PAGE_SIZE);
 	int extra_bytes = num_sgs * sizeof(struct scatterlist);
-	int ret;
 
 	rm = rds_message_alloc(extra_bytes, GFP_NOWAIT);
 	if (!rm)
···
 	set_bit(RDS_MSG_PAGEVEC, &rm->m_flags);
 	rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len);
 	rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
-	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
-	if (!rm->data.op_sg) {
+	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
+	if (IS_ERR(rm->data.op_sg)) {
 		rds_message_put(rm);
-		return ERR_PTR(ret);
+		return ERR_CAST(rm->data.op_sg);
 	}
 
 	for (i = 0; i < rm->data.op_nents; ++i) {
+35-30
net/rds/rdma.c
···
 /*
- * Copyright (c) 2007, 2017 Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2007, 2020 Oracle and/or its affiliates.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
···
 	if (insert) {
 		rb_link_node(&insert->r_rb_node, parent, p);
 		rb_insert_color(&insert->r_rb_node, root);
-		refcount_inc(&insert->r_refcount);
+		kref_get(&insert->r_kref);
 	}
 	return NULL;
 }
···
 	unsigned long flags;
 
 	rdsdebug("RDS: destroy mr key is %x refcnt %u\n",
-		 mr->r_key, refcount_read(&mr->r_refcount));
-
-	if (test_and_set_bit(RDS_MR_DEAD, &mr->r_state))
-		return;
+		 mr->r_key, kref_read(&mr->r_kref));
 
 	spin_lock_irqsave(&rs->rs_rdma_lock, flags);
 	if (!RB_EMPTY_NODE(&mr->r_rb_node))
···
 	mr->r_trans->free_mr(trans_private, mr->r_invalidate);
 }
 
-void __rds_put_mr_final(struct rds_mr *mr)
+void __rds_put_mr_final(struct kref *kref)
 {
+	struct rds_mr *mr = container_of(kref, struct rds_mr, r_kref);
+
 	rds_destroy_mr(mr);
 	kfree(mr);
 }
···
 		rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys);
 		RB_CLEAR_NODE(&mr->r_rb_node);
 		spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
-		rds_destroy_mr(mr);
-		rds_mr_put(mr);
+		kref_put(&mr->r_kref, __rds_put_mr_final);
 		spin_lock_irqsave(&rs->rs_rdma_lock, flags);
 	}
 	spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
···
 		goto out;
 	}
 
-	refcount_set(&mr->r_refcount, 1);
+	kref_init(&mr->r_kref);
 	RB_CLEAR_NODE(&mr->r_rb_node);
 	mr->r_trans = rs->rs_transport;
 	mr->r_sock = rs;
···
 
 	rdsdebug("RDS: get_mr key is %x\n", mr->r_key);
 	if (mr_ret) {
-		refcount_inc(&mr->r_refcount);
+		kref_get(&mr->r_kref);
 		*mr_ret = mr;
 	}
···
 out:
 	kfree(pages);
 	if (mr)
-		rds_mr_put(mr);
+		kref_put(&mr->r_kref, __rds_put_mr_final);
 	return ret;
 }
···
 	if (!mr)
 		return -EINVAL;
 
-	/*
-	 * call rds_destroy_mr() ourselves so that we're sure it's done by the time
-	 * we return.  If we let rds_mr_put() do it it might not happen until
-	 * someone else drops their ref.
-	 */
-	rds_destroy_mr(mr);
-	rds_mr_put(mr);
+	kref_put(&mr->r_kref, __rds_put_mr_final);
 	return 0;
 }
···
 		return;
 	}
 
+	/* Get a reference so that the MR won't go away before calling
+	 * sync_mr() below.
+	 */
+	kref_get(&mr->r_kref);
+
+	/* If it is going to be freed, remove it from the tree now so
+	 * that no other thread can find it and free it.
+	 */
 	if (mr->r_use_once || force) {
 		rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys);
 		RB_CLEAR_NODE(&mr->r_rb_node);
···
 	if (mr->r_trans->sync_mr)
 		mr->r_trans->sync_mr(mr->r_trans_private, DMA_FROM_DEVICE);
 
+	/* Release the reference held above. */
+	kref_put(&mr->r_kref, __rds_put_mr_final);
+
 	/* If the MR was marked as invalidate, this will
 	 * trigger an async flush. */
-	if (zot_me) {
-		rds_destroy_mr(mr);
-		rds_mr_put(mr);
-	}
+	if (zot_me)
+		kref_put(&mr->r_kref, __rds_put_mr_final);
 }
 
 void rds_rdma_free_op(struct rm_rdma_op *ro)
···
 	unsigned int i;
 
 	if (ro->op_odp_mr) {
-		rds_mr_put(ro->op_odp_mr);
+		kref_put(&ro->op_odp_mr->r_kref, __rds_put_mr_final);
 	} else {
 		for (i = 0; i < ro->op_nents; i++) {
 			struct page *page = sg_page(&ro->op_sg[i]);
···
 	op->op_odp_mr = NULL;
 
 	WARN_ON(!nr_pages);
-	op->op_sg = rds_message_alloc_sgs(rm, nr_pages, &ret);
-	if (!op->op_sg)
+	op->op_sg = rds_message_alloc_sgs(rm, nr_pages);
+	if (IS_ERR(op->op_sg)) {
+		ret = PTR_ERR(op->op_sg);
 		goto out_pages;
+	}
 
 	if (op->op_notify || op->op_recverr) {
 		/* We allocate an uninitialized notifier here, because
···
 		goto out_pages;
 	}
 	RB_CLEAR_NODE(&local_odp_mr->r_rb_node);
-	refcount_set(&local_odp_mr->r_refcount, 1);
+	kref_init(&local_odp_mr->r_kref);
 	local_odp_mr->r_trans = rs->rs_transport;
 	local_odp_mr->r_sock = rs;
 	local_odp_mr->r_trans_private =
···
 	if (!mr)
 		err = -EINVAL;	/* invalid r_key */
 	else
-		refcount_inc(&mr->r_refcount);
+		kref_get(&mr->r_kref);
 	spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
 
 	if (mr) {
···
 	rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT);
 	rm->atomic.op_active = 1;
 	rm->atomic.op_recverr = rs->rs_recverr;
-	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1, &ret);
-	if (!rm->atomic.op_sg)
+	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1);
+	if (IS_ERR(rm->atomic.op_sg)) {
+		ret = PTR_ERR(rm->atomic.op_sg);
 		goto err;
+	}
 
 	/* verify 8 byte-aligned */
 	if (args->local_addr & 0x7) {
+3-17
net/rds/rds.h
···
 
 struct rds_mr {
 	struct rb_node		r_rb_node;
-	refcount_t		r_refcount;
+	struct kref		r_kref;
 	u32			r_key;
 
 	/* A copy of the creation flags */
···
 	unsigned int		r_invalidate:1;
 	unsigned int		r_write:1;
 
-	/* This is for RDS_MR_DEAD.
-	 * It would be nice & consistent to make this part of the above
-	 * bit field here, but we need to use test_and_set_bit.
-	 */
-	unsigned long		r_state;
 	struct rds_sock		*r_sock; /* back pointer to the socket that owns us */
 	struct rds_transport	*r_trans;
 	void			*r_trans_private;
 };
-
-/* Flags for mr->r_state */
-#define RDS_MR_DEAD		0
 
 static inline rds_rdma_cookie_t rds_rdma_make_cookie(u32 r_key, u32 offset)
 {
···
 
 /* message.c */
 struct rds_message *rds_message_alloc(unsigned int nents, gfp_t gfp);
-struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
-					  int *ret);
+struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents);
 int rds_message_copy_from_user(struct rds_message *rm, struct iov_iter *from,
 			       bool zcopy);
 struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned int total_len);
···
 int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
 		    struct cmsghdr *cmsg);
 
-void __rds_put_mr_final(struct rds_mr *mr);
-static inline void rds_mr_put(struct rds_mr *mr)
-{
-	if (refcount_dec_and_test(&mr->r_refcount))
-		__rds_put_mr_final(mr);
-}
+void __rds_put_mr_final(struct kref *kref);
 
 static inline bool rds_destroy_pending(struct rds_connection *conn)
 {
+4-2
net/rds/send.c
···
 
 	/* Attach data to the rm */
 	if (payload_len) {
-		rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
-		if (!rm->data.op_sg)
+		rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
+		if (IS_ERR(rm->data.op_sg)) {
+			ret = PTR_ERR(rm->data.op_sg);
 			goto out;
+		}
 		ret = rds_message_copy_from_user(rm, &msg->msg_iter, zcopy);
 		if (ret)
 			goto out;
-9
net/rxrpc/local_object.c
···
 			goto error;
 		}
 
-		/* we want to set the don't fragment bit */
-		opt = IPV6_PMTUDISC_DO;
-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
-					(char *) &opt, sizeof(opt));
-		if (ret < 0) {
-			_debug("setsockopt failed");
-			goto error;
-		}
-
 		/* Fall through and set IPv4 options too otherwise we don't get
 		 * errors from IPv4 packets sent through the IPv6 socket.
 		 */
···
 	/* Enter fast recovery */
 	if (unlikely(retransmitted)) {
 		l->ssthresh = max_t(u16, l->window / 2, 300);
-		l->window = l->ssthresh;
+		l->window = min_t(u16, l->ssthresh, l->window);
 		return;
 	}
 	/* Enter slow start */
+2-2
net/tls/tls_main.c
···
 	TLS_NUM_PROTS,
 };
 
-static struct proto *saved_tcpv6_prot;
+static const struct proto *saved_tcpv6_prot;
 static DEFINE_MUTEX(tcpv6_prot_mutex);
-static struct proto *saved_tcpv4_prot;
+static const struct proto *saved_tcpv4_prot;
 static DEFINE_MUTEX(tcpv4_prot_mutex);
 static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
 static struct proto_ops tls_sw_proto_ops;
···
 my $warn = 0;
 
 if (! -d ".git") {
-	printf "Warning: can't check if file exists, as this is not a git tree";
+	printf "Warning: can't check if file exists, as this is not a git tree\n";
 	exit 0;
 }
+1-1
scripts/dtc/Makefile
···
 HOST_EXTRACFLAGS := -I $(srctree)/$(src)/libfdt
 
 ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),)
-ifneq ($(CHECK_DTBS),)
+ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
 $(error dtc needs libyaml for DT schema validation support. \
 	Install the necessary libyaml development package.)
 endif
···
 	select SND_HDA_CORE
 
 config SND_HDA_PREALLOC_SIZE
-	int "Pre-allocated buffer size for HD-audio driver" if !SND_DMA_SGBUF
+	int "Pre-allocated buffer size for HD-audio driver"
 	range 0 32768
-	default 0 if SND_DMA_SGBUF
+	default 2048 if SND_DMA_SGBUF
 	default 64 if !SND_DMA_SGBUF
 	help
 	  Specifies the default pre-allocated buffer-size in kB for the
 	  HD-audio driver.  A larger buffer (e.g. 2048) is preferred
 	  for systems using PulseAudio.  The default 64 is chosen just
 	  for compatibility reasons.
-	  On x86 systems, the default is zero as we need no preallocation.
+	  On x86 systems, the default is 2048 as a reasonable value for
+	  most of modern systems.
 
 	  Note that the pre-allocation size can be changed dynamically
 	  via a proc file (/proc/asound/card*/pcm*/sub*/prealloc), too.
···
 static int hda_codec_force_resume(struct device *dev)
 {
 	struct hda_codec *codec = dev_to_hda_codec(dev);
-	bool forced_resume = !codec->relaxed_resume && codec->jacktbl.used;
+	bool forced_resume = hda_codec_need_resume(codec);
 	int ret;
 
 	/* The get/put pair below enforces the runtime resume even if the
+63-44
sound/pci/hda/hda_intel.c
···
 	chip = card->private_data;
 	bus = azx_bus(chip);
 	snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
-	__azx_runtime_suspend(chip);
+	pm_runtime_force_suspend(dev);
 	if (bus->irq >= 0) {
 		free_irq(bus->irq, chip);
 		bus->irq = -1;
···
 static int azx_resume(struct device *dev)
 {
 	struct snd_card *card = dev_get_drvdata(dev);
+	struct hda_codec *codec;
 	struct azx *chip;
+	bool forced_resume = false;
 
 	if (!azx_is_pm_ready(card))
 		return 0;
···
 		chip->msi = 0;
 	if (azx_acquire_irq(chip, 1) < 0)
 		return -EIO;
-	__azx_runtime_resume(chip, false);
+
+	/* check for the forced resume */
+	list_for_each_codec(codec, &chip->bus) {
+		if (hda_codec_need_resume(codec)) {
+			forced_resume = true;
+			break;
+		}
+	}
+
+	if (forced_resume)
+		pm_runtime_get_noresume(dev);
+	pm_runtime_force_resume(dev);
+	if (forced_resume)
+		pm_runtime_put(dev);
 	snd_power_change_state(card, SNDRV_CTL_POWER_D0);
 
 	trace_azx_resume(chip);
···
 	struct azx *chip = card->private_data;
 	struct pci_dev *pci = to_pci_dev(dev);
 
+	if (!azx_is_pm_ready(card))
+		return 0;
 	if (chip->driver_type == AZX_DRIVER_SKL)
 		pci_set_power_state(pci, PCI_D3hot);
···
 	struct azx *chip = card->private_data;
 	struct pci_dev *pci = to_pci_dev(dev);
 
+	if (!azx_is_pm_ready(card))
+		return 0;
 	if (chip->driver_type == AZX_DRIVER_SKL)
 		pci_set_power_state(pci, PCI_D0);
···
 	if (!azx_is_pm_ready(card))
 		return 0;
 	chip = card->private_data;
-	if (!azx_has_pm_runtime(chip))
-		return 0;
 
 	/* enable controller wake up event */
-	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) |
-		   STATESTS_INT_MASK);
+	if (snd_power_get_state(card) == SNDRV_CTL_POWER_D0) {
+		azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) |
+			   STATESTS_INT_MASK);
+	}
 
 	__azx_runtime_suspend(chip);
 	trace_azx_runtime_suspend(chip);
···
 {
 	struct snd_card *card = dev_get_drvdata(dev);
 	struct azx *chip;
+	bool from_rt = snd_power_get_state(card) == SNDRV_CTL_POWER_D0;
 
 	if (!azx_is_pm_ready(card))
 		return 0;
 	chip = card->private_data;
-	if (!azx_has_pm_runtime(chip))
-		return 0;
-	__azx_runtime_resume(chip, true);
+	__azx_runtime_resume(chip, from_rt);
 
 	/* disable controller Wake Up event*/
-	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
-		   ~STATESTS_INT_MASK);
+	if (from_rt) {
+		azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+			   ~STATESTS_INT_MASK);
+	}
 
 	trace_azx_runtime_resume(chip);
 	return 0;
···
 		if (!disabled) {
 			dev_info(chip->card->dev,
 				 "Start delayed initialization\n");
-			if (azx_probe_continue(chip) < 0) {
+			if (azx_probe_continue(chip) < 0)
 				dev_err(chip->card->dev, "initialization error\n");
-				hda->init_failed = true;
-			}
 		}
 	} else {
 		dev_info(chip->card->dev, "%s via vga_switcheroo\n",
···
 /*
  * destructor
  */
-static int azx_free(struct azx *chip)
+static void azx_free(struct azx *chip)
 {
 	struct pci_dev *pci = chip->pci;
 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
 	struct hdac_bus *bus = azx_bus(chip);
+
+	if (hda->freed)
+		return;
 
 	if (azx_has_pm_runtime(chip) && chip->running)
 		pm_runtime_get_noresume(&pci->dev);
···
 
 	if (chip->driver_caps & AZX_DCAPS_I915_COMPONENT)
 		snd_hdac_i915_exit(bus);
-	kfree(hda);
 
-	return 0;
+	hda->freed = 1;
 }
 
 static int azx_dev_disconnect(struct snd_device *device)
···
 
 static int azx_dev_free(struct snd_device *device)
 {
-	return azx_free(device->device_data);
+	azx_free(device->device_data);
+	return 0;
 }
 
 #ifdef SUPPORT_VGA_SWITCHEROO
···
 	if (err < 0)
 		return err;
 
-	hda = kzalloc(sizeof(*hda), GFP_KERNEL);
+	hda = devm_kzalloc(&pci->dev, sizeof(*hda), GFP_KERNEL);
 	if (!hda) {
 		pci_disable_device(pci);
 		return -ENOMEM;
···
 
 	err = azx_bus_init(chip, model[dev]);
 	if (err < 0) {
-		kfree(hda);
 		pci_disable_device(pci);
 		return err;
 	}
···
 	/* codec detection */
 	if (!azx_bus(chip)->codec_mask) {
 		dev_err(card->dev, "no codecs found!\n");
-		return -ENODEV;
+		/* keep running the rest for the runtime PM */
 	}
 
 	if (azx_acquire_irq(chip, 0) < 0)
···
 {
 	struct snd_card *card = context;
 	struct azx *chip = card->private_data;
-	struct pci_dev *pci = chip->pci;
 
-	if (!fw) {
-		dev_err(card->dev, "Cannot load firmware, aborting\n");
-		goto error;
-	}
-
-	chip->fw = fw;
+	if (fw)
+		chip->fw = fw;
+	else
+		dev_err(card->dev, "Cannot load firmware, continue without patching\n");
 	if (!chip->disabled) {
 		/* continue probing */
-		if (azx_probe_continue(chip))
-			goto error;
+		azx_probe_continue(chip);
 	}
-	return; /* OK */
-
- error:
-	snd_card_free(card);
-	pci_set_drvdata(pci, NULL);
 }
 #endif
···
 #endif
 
 	/* create codec instances */
-	err = azx_probe_codecs(chip, azx_max_codecs[chip->driver_type]);
-	if (err < 0)
-		goto out_free;
+	if (bus->codec_mask) {
+		err = azx_probe_codecs(chip, azx_max_codecs[chip->driver_type]);
+		if (err < 0)
+			goto out_free;
+	}
 
 #ifdef CONFIG_SND_HDA_PATCH_LOADER
 	if (chip->fw) {
···
 #endif
 	}
 #endif
-	if ((probe_only[dev] & 1) == 0) {
+	if (bus->codec_mask && !(probe_only[dev] & 1)) {
 		err = azx_codec_configure(chip);
 		if (err < 0)
 			goto out_free;
···
 
 	set_default_power_save(chip);
 
-	if (azx_has_pm_runtime(chip))
+	if (azx_has_pm_runtime(chip)) {
+		pm_runtime_use_autosuspend(&pci->dev);
+		pm_runtime_allow(&pci->dev);
 		pm_runtime_put_autosuspend(&pci->dev);
+	}
 
 out_free:
-	if (err < 0 || !hda->need_i915_power)
+	if (err < 0) {
+		azx_free(chip);
+		return err;
+	}
+
+	if (!hda->need_i915_power)
 		display_power(chip, false);
-	if (err < 0)
-		hda->init_failed = 1;
 	complete_all(&hda->probe_wait);
 	to_hda_bus(bus)->bus_probing = 0;
-	return err;
+	return 0;
 }
 
 static void azx_remove(struct pci_dev *pci)
+1
sound/pci/hda/hda_intel.h
···
 	unsigned int use_vga_switcheroo:1;
 	unsigned int vga_switcheroo_registered:1;
 	unsigned int init_failed:1; /* delayed init failed */
+	unsigned int freed:1; /* resources already released */
 
 	bool need_i915_power:1; /* the hda controller needs i915 power */
 };
···
 };
 
 /* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
- * response for Input Gain Pad (id=19, control=12).  Skip it.
+ * response for Input Gain Pad (id=19, control=12) and the connector status
+ * for SPDIF terminal (id=18).  Skip them.
  */
 static const struct usbmix_name_map asus_rog_map[] = {
+	{ 18, NULL }, /* OT, connector control */
 	{ 19, NULL, 12 }, /* FU, Input Gain Pad */
 	{}
 };
+4-1
tools/arch/x86/include/asm/cpufeatures.h
···
 #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
 #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
 #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
-#define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+#define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 or above (Zen) */
 #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
 #define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
 #define X86_FEATURE_MSR_IA32_FEAT_CTL	( 7*32+31) /* "" MSR IA32_FEAT_CTL configured */
···
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+#define X86_FEATURE_SPLIT_LOCK_DETECT	(11*32+ 6) /* #AC for split lock */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
···
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON	(13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
+#define X86_FEATURE_AMD_PPIN		(13*32+23) /* Protected Processor Inventory Number */
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
···
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
 #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+#define X86_FEATURE_CORE_CAPABILITIES	(18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */
 #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
 
 /*
+9
tools/arch/x86/include/asm/msr-index.h
···
 
 /* Intel MSRs. Some also available on other CPUs */
 
+#define MSR_TEST_CTRL				0x00000033
+#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT_BIT	29
+#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT		BIT(MSR_TEST_CTRL_SPLIT_LOCK_DETECT_BIT)
+
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			BIT(0)	   /* Indirect Branch Restricted Speculation */
 #define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
···
  * bit[1:0] zero.
  */
 #define MSR_IA32_UMWAIT_CONTROL_TIME_MASK	(~0x03U)
+
+/* Abbreviated from Intel SDM name IA32_CORE_CAPABILITIES */
+#define MSR_IA32_CORE_CAPS			  0x000000cf
+#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT  5
+#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT	  BIT(MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT)
 
 #define MSR_PKG_CST_CONFIG_CONTROL	0x000000e2
 #define NHM_C3_AUTO_DEMOTE		(1UL << 25)
···
 #define __LINUX_BITS_H
 
 #include <linux/const.h>
+#include <vdso/bits.h>
 #include <asm/bitsperlong.h>
 
-#define BIT(nr)			(UL(1) << (nr))
 #define BIT_ULL(nr)		(ULL(1) << (nr))
 #define BIT_MASK(nr)		(UL(1) << ((nr) % BITS_PER_LONG))
 #define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
···
  * position @h. For example
  * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
  */
-#define GENMASK(h, l) \
+#if !defined(__ASSEMBLY__) && \
+	(!defined(CONFIG_CC_IS_GCC) || CONFIG_GCC_VERSION >= 49000)
+#include <linux/build_bug.h>
+#define GENMASK_INPUT_CHECK(h, l) \
+	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+		__builtin_constant_p((l) > (h)), (l) > (h), 0)))
+#else
+/*
+ * BUILD_BUG_ON_ZERO is not available in h files included from asm files,
+ * disable the input check if that is the case.
+ */
+#define GENMASK_INPUT_CHECK(h, l) 0
+#endif
+
+#define __GENMASK(h, l) \
 	(((~UL(0)) - (UL(1) << (l)) + 1) & \
 	 (~UL(0) >> (BITS_PER_LONG - 1 - (h))))
+#define GENMASK(h, l) \
+	(GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
 
-#define GENMASK_ULL(h, l) \
+#define __GENMASK_ULL(h, l) \
 	(((~ULL(0)) - (ULL(1) << (l)) + 1) & \
 	 (~ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h))))
+#define GENMASK_ULL(h, l) \
+	(GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l))
 
 #endif	/* __LINUX_BITS_H */
+82
tools/include/linux/build_bug.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_BUILD_BUG_H
+#define _LINUX_BUILD_BUG_H
+
+#include <linux/compiler.h>
+
+#ifdef __CHECKER__
+#define BUILD_BUG_ON_ZERO(e) (0)
+#else /* __CHECKER__ */
+/*
+ * Force a compilation error if condition is true, but also produce a
+ * result (of value 0 and type int), so the expression can be used
+ * e.g. in a structure initializer (or where-ever else comma expressions
+ * aren't permitted).
+ */
+#define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); })))
+#endif /* __CHECKER__ */
+
+/* Force a compilation error if a constant expression is not a power of 2 */
+#define __BUILD_BUG_ON_NOT_POWER_OF_2(n)	\
+	BUILD_BUG_ON(((n) & ((n) - 1)) != 0)
+#define BUILD_BUG_ON_NOT_POWER_OF_2(n)			\
+	BUILD_BUG_ON((n) == 0 || (((n) & ((n) - 1)) != 0))
+
+/*
+ * BUILD_BUG_ON_INVALID() permits the compiler to check the validity of the
+ * expression but avoids the generation of any code, even if that expression
+ * has side-effects.
+ */
+#define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
+
+/**
+ * BUILD_BUG_ON_MSG - break compile if a condition is true & emit supplied
+ *		      error message.
+ * @condition: the condition which the compiler should know is false.
+ *
+ * See BUILD_BUG_ON for description.
+ */
+#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
+
+/**
+ * BUILD_BUG_ON - break compile if a condition is true.
+ * @condition: the condition which the compiler should know is false.
+ *
+ * If you have some code which relies on certain constants being equal, or
+ * some other compile-time-evaluated condition, you should use BUILD_BUG_ON to
+ * detect if someone changes it.
+ */
+#define BUILD_BUG_ON(condition) \
+	BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
+
+/**
+ * BUILD_BUG - break compile if used.
+ *
+ * If you have some code that you expect the compiler to eliminate at
+ * build time, you should use BUILD_BUG to detect if it is
+ * unexpectedly used.
+ */
+#define BUILD_BUG() BUILD_BUG_ON_MSG(1, "BUILD_BUG failed")
+
+/**
+ * static_assert - check integer constant expression at build time
+ *
+ * static_assert() is a wrapper for the C11 _Static_assert, with a
+ * little macro magic to make the message optional (defaulting to the
+ * stringification of the tested expression).
+ *
+ * Contrary to BUILD_BUG_ON(), static_assert() can be used at global
+ * scope, but requires the expression to be an integer constant
+ * expression (i.e., it is not enough that __builtin_constant_p() is
+ * true for expr).
+ *
+ * Also note that BUILD_BUG_ON() fails the build if the condition is
+ * true, while static_assert() fails the build if the expression is
+ * false.
+ */
+#ifndef static_assert
+#define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
+#define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
+#endif // static_assert
+
+#endif	/* _LINUX_BUILD_BUG_H */
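The BUILD_BUG_ON_ZERO() bitfield trick and static_assert() work in ordinary userspace code as well. A sketch under stated assumptions: struct pkt and checked_size() are made-up examples, and the zero-sized anonymous struct relies on a GCC extension:

```c
#include <assert.h>   /* static_assert in C11 */

/* The bitfield trick from BUILD_BUG_ON_ZERO(): a negative bitfield width
 * is a compile error; otherwise the struct is empty and sizeof yields 0. */
#define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); })))

struct pkt {
	unsigned int len;
	unsigned char data[60];
};

/* Fails to compile if struct pkt ever grows past 64 bytes. */
static_assert(sizeof(struct pkt) <= 64, "struct pkt too large");

/* BUILD_BUG_ON_ZERO() folds into an expression: it guards the size
 * constraint while contributing nothing to the computed value. */
static int checked_size(void)
{
	return (int)sizeof(struct pkt) +
	       BUILD_BUG_ON_ZERO(sizeof(struct pkt) > 64);
}
```

Changing data[60] to data[64] makes both checks trip at compile time, which is the whole point: the error surfaces at build, not at runtime.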
+26
tools/include/linux/compiler.h
···
 # define __compiletime_error(message)
 #endif

+#ifdef __OPTIMIZE__
+# define __compiletime_assert(condition, msg, prefix, suffix)		\
+	do {								\
+		extern void prefix ## suffix(void) __compiletime_error(msg); \
+		if (!(condition))					\
+			prefix ## suffix();				\
+	} while (0)
+#else
+# define __compiletime_assert(condition, msg, prefix, suffix) do { } while (0)
+#endif
+
+#define _compiletime_assert(condition, msg, prefix, suffix) \
+	__compiletime_assert(condition, msg, prefix, suffix)
+
+/**
+ * compiletime_assert - break build and emit msg if condition is false
+ * @condition: a compile-time constant condition to check
+ * @msg:       a message to emit if condition is false
+ *
+ * In tradition of POSIX assert, this macro will break the build if the
+ * supplied condition is *false*, emitting the supplied error message if the
+ * compiler has support to do so.
+ */
+#define compiletime_assert(condition, msg) \
+	_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
+
 /* Optimization barrier */
 /* The "volatile" is due to gcc bugs */
 #define barrier() __asm__ __volatile__("": : :"memory")
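The same pattern works outside the kernel, assuming a GCC-style compiler that provides __attribute__((error)) and __COUNTER__; check_abi() below is a hypothetical example, and the __OPTIMIZE__ guard mirrors the hunk above (without optimization the dead call may not be eliminated, so the assert degrades to a no-op):

```c
/* Userspace sketch of the compiletime_assert() machinery, assuming GCC. */
#define __compiletime_error(message) __attribute__((error(message)))

#ifdef __OPTIMIZE__
# define __compiletime_assert(condition, msg, prefix, suffix)		\
	do {								\
		extern void prefix ## suffix(void) __compiletime_error(msg); \
		if (!(condition))					\
			prefix ## suffix();				\
	} while (0)
#else
# define __compiletime_assert(condition, msg, prefix, suffix) do { } while (0)
#endif

#define _compiletime_assert(condition, msg, prefix, suffix) \
	__compiletime_assert(condition, msg, prefix, suffix)
#define compiletime_assert(condition, msg) \
	_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

/* Hypothetical example: when optimizing, this compiles only if the
 * condition holds; otherwise the error-attributed call survives dead-code
 * elimination and the named diagnostic breaks the build. */
static int check_abi(void)
{
	compiletime_assert(sizeof(long long) == 8, "ABI expects 64-bit long long");
	return 1;
}
```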
···
 #define DRM_IOCTL_SYNCOBJ_TRANSFER		DRM_IOWR(0xCC, struct drm_syncobj_transfer)
 #define DRM_IOCTL_SYNCOBJ_TIMELINE_SIGNAL	DRM_IOWR(0xCD, struct drm_syncobj_timeline_array)

+#define DRM_IOCTL_MODE_GETFB2		DRM_IOWR(0xCE, struct drm_mode_fb_cmd2)
+
 /**
  * Device specific ioctls should only be in their respective headers
  * The device specific ioctl range is from 0x40 to 0x9f.
+21
tools/include/uapi/drm/i915_drm.h
···
  * By default, new contexts allow persistence.
  */
 #define I915_CONTEXT_PARAM_PERSISTENCE	0xb
+
+/*
+ * I915_CONTEXT_PARAM_RINGSIZE:
+ *
+ * Sets the size of the CS ringbuffer to use for logical ring contexts. This
+ * applies a limit of how many batches can be queued to HW before the caller
+ * is blocked due to lack of space for more commands.
+ *
+ * Only reliably possible to be set prior to first use, i.e. during
+ * construction. At any later point, the current execution must be flushed as
+ * the ring can only be changed while the context is idle. Note, the ringsize
+ * can be specified as a constructor property, see
+ * I915_CONTEXT_CREATE_EXT_SETPARAM, but can also be set later if required.
+ *
+ * Only applies to the current set of engine and lost when those engines
+ * are replaced by a new mapping (see I915_CONTEXT_PARAM_ENGINES).
+ *
+ * Must be between 4 - 512 KiB, in intervals of page size [4 KiB].
+ * Default is 16 KiB.
+ */
+#define I915_CONTEXT_PARAM_RINGSIZE	0xc
 /* Must be kept compact -- no holes and well documented */

 	__u64 value;
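The size constraint in the new comment (4-512 KiB, page-size granularity, 16 KiB default) can be expressed as a small validity check. ringsize_valid() is a hypothetical userspace helper for illustration, not part of the i915 uAPI:

```c
#include <stdbool.h>
#include <stdint.h>

/* Documented bounds for I915_CONTEXT_PARAM_RINGSIZE. */
#define I915_RING_MIN   (4u << 10)    /* 4 KiB */
#define I915_RING_MAX   (512u << 10)  /* 512 KiB */
#define I915_PAGE_SIZE  (4u << 10)    /* 4 KiB page-size granularity */

/* Hypothetical check mirroring the comment: in range and page-aligned. */
static bool ringsize_valid(uint32_t size)
{
	return size >= I915_RING_MIN && size <= I915_RING_MAX &&
	       (size % I915_PAGE_SIZE) == 0;
}
```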
···
 /* Flags for the clone3() syscall. */
 #define CLONE_CLEAR_SIGHAND 0x100000000ULL /* Clear any signal handler and reset to SIG_DFL. */
+#define CLONE_INTO_CGROUP 0x200000000ULL /* Clone into a specific cgroup given the right permissions. */

 /*
  * cloning flags intersect with CSIGNAL so can be used with unshare and clone3
···
  * @set_tid_size: This defines the size of the array referenced
  *                in @set_tid. This cannot be larger than the
  *                kernel's limit of nested PID namespaces.
+ * @cgroup:       If CLONE_INTO_CGROUP is specified set this to
+ *                a file descriptor for the cgroup.
  *
  * The structure is versioned by size and thus extensible.
  * New struct members must go at the end of the struct and
···
 	__aligned_u64 tls;
 	__aligned_u64 set_tid;
 	__aligned_u64 set_tid_size;
+	__aligned_u64 cgroup;
 };
 #endif

 #define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
 #define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
+#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */

 /*
  * Scheduling policies
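The new cgroup member is what bumps the struct to CLONE_ARGS_SIZE_VER2. A userspace mirror of the layout (the members above set_tid follow the upstream struct clone_args definition) lets the documented sizes be checked directly:

```c
#include <stddef.h>
#include <stdint.h>

/* Userspace stand-in for __aligned_u64. */
typedef uint64_t aligned_u64 __attribute__((aligned(8)));

/* Mirror of struct clone_args after the CLONE_INTO_CGROUP extension. */
struct clone_args_v2 {
	aligned_u64 flags;
	aligned_u64 pidfd;
	aligned_u64 child_tid;
	aligned_u64 parent_tid;
	aligned_u64 exit_signal;
	aligned_u64 stack;
	aligned_u64 stack_size;
	aligned_u64 tls;          /* ...VER0 ends here: 8 * 8 = 64 bytes  */
	aligned_u64 set_tid;
	aligned_u64 set_tid_size; /* ...VER1 ends here: 10 * 8 = 80 bytes */
	aligned_u64 cgroup;       /* VER2: cgroup fd for CLONE_INTO_CGROUP */
};

#define CLONE_ARGS_SIZE_VER0 64
#define CLONE_ARGS_SIZE_VER1 80
#define CLONE_ARGS_SIZE_VER2 88
```

Because clone3() versions the struct by size, a caller that passes CLONE_ARGS_SIZE_VER1 on a new kernel simply never uses the cgroup field.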
+24
tools/include/uapi/linux/vhost.h
···
 #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
 #define VHOST_VSOCK_SET_RUNNING	_IOW(VHOST_VIRTIO, 0x61, int)

+/* VHOST_VDPA specific defines */
+
+/* Get the device id. The device ids follow the same definition of
+ * the device id defined in virtio-spec.
+ */
+#define VHOST_VDPA_GET_DEVICE_ID	_IOR(VHOST_VIRTIO, 0x70, __u32)
+/* Get and set the status. The status bits follow the same definition
+ * of the device status defined in virtio-spec.
+ */
+#define VHOST_VDPA_GET_STATUS		_IOR(VHOST_VIRTIO, 0x71, __u8)
+#define VHOST_VDPA_SET_STATUS		_IOW(VHOST_VIRTIO, 0x72, __u8)
+/* Get and set the device config. The device config follows the same
+ * definition of the device config defined in virtio-spec.
+ */
+#define VHOST_VDPA_GET_CONFIG		_IOR(VHOST_VIRTIO, 0x73, \
+					     struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_CONFIG		_IOW(VHOST_VIRTIO, 0x74, \
+					     struct vhost_vdpa_config)
+/* Enable/disable the ring. */
+#define VHOST_VDPA_SET_VRING_ENABLE	_IOW(VHOST_VIRTIO, 0x75, \
+					     struct vhost_vring_state)
+/* Get the max ring size. */
+#define VHOST_VDPA_GET_VRING_NUM	_IOR(VHOST_VIRTIO, 0x76, __u16)
+
 #endif
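Each of these request numbers encodes a direction, a payload size, the VHOST magic 0xAF, and a command number (0x70..0x76). A small sketch using the Linux _IOR()/_IOW() macros shows how the first few expand; the concrete hex values assume the usual asm-generic ioctl encoding (2-bit direction at bit 30, 14-bit size at bit 16):

```c
#include <stdint.h>
#include <sys/ioctl.h>   /* _IOR()/_IOW() on Linux */

#define VHOST_VIRTIO 0xAF

/* Re-stated with stdint types so this builds without kernel headers. */
#define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, uint32_t)
#define VHOST_VDPA_GET_STATUS    _IOR(VHOST_VIRTIO, 0x71, uint8_t)
#define VHOST_VDPA_SET_STATUS    _IOW(VHOST_VIRTIO, 0x72, uint8_t)
```

A __u32 read thus becomes 0x8004af70: read direction (0x8 in the top nibble), size 4, magic 0xAF, command 0x70.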
···
 	 * it.
 	 */
 	for (;
-	     &insn->list != &file->insn_list &&
-	     insn->sec == func->sec &&
-	     insn->offset >= func->offset;
-
+	     &insn->list != &file->insn_list && insn->func && insn->func->pfunc == func;
 	     insn = insn->first_jump_src ?: list_prev_entry(insn, list)) {

 		if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC)
···
 	}

 	if (state->bp_scratch) {
-		WARN("%s uses BP as a scratch register",
-		     func->name);
+		WARN_FUNC("BP used as a scratch register",
+			  insn->sec, insn->offset);
 		return 1;
 	}
···
 	    !strcmp(insn->sec->name, ".altinstr_aux"))
 		return true;

+	if (!insn->func)
+		return false;
+
+	/*
+	 * CONFIG_UBSAN_TRAP inserts a UD2 when it sees
+	 * __builtin_unreachable(). The BUG() macro has an unreachable() after
+	 * the UD2, which causes GCC's undefined trap logic to emit another UD2
+	 * (or occasionally a JMP to UD2).
+	 */
+	if (list_prev_entry(insn, list)->dead_end &&
+	    (insn->type == INSN_BUG ||
+	     (insn->type == INSN_JUMP_UNCONDITIONAL &&
+	      insn->jump_dest && insn->jump_dest->type == INSN_BUG)))
+		return true;
+
 	/*
 	 * Check if this (or a subsequent) instruction is related to
 	 * CONFIG_UBSAN or CONFIG_KASAN.
 	 *
 	 * End the search at 5 instructions to avoid going into the weeds.
 	 */
-	if (!insn->func)
-		return false;
 	for (i = 0; i < 5; i++) {

 		if (is_kasan_insn(insn) || is_ubsan_insn(insn))
+25-15
tools/objtool/orc_dump.c
···
 	char *name;
 	size_t nr_sections;
 	Elf64_Addr orc_ip_addr = 0;
-	size_t shstrtab_idx;
+	size_t shstrtab_idx, strtab_idx = 0;
 	Elf *elf;
 	Elf_Scn *scn;
 	GElf_Shdr sh;
···

 		if (!strcmp(name, ".symtab")) {
 			symtab = data;
+		} else if (!strcmp(name, ".strtab")) {
+			strtab_idx = i;
 		} else if (!strcmp(name, ".orc_unwind")) {
 			orc = data->d_buf;
 			orc_size = sh.sh_size;
···
 		}
 	}

-	if (!symtab || !orc || !orc_ip)
+	if (!symtab || !strtab_idx || !orc || !orc_ip)
 		return 0;

 	if (orc_size % sizeof(*orc) != 0) {
···
 			return -1;
 		}

-		scn = elf_getscn(elf, sym.st_shndx);
-		if (!scn) {
-			WARN_ELF("elf_getscn");
-			return -1;
-		}
+		if (GELF_ST_TYPE(sym.st_info) == STT_SECTION) {
+			scn = elf_getscn(elf, sym.st_shndx);
+			if (!scn) {
+				WARN_ELF("elf_getscn");
+				return -1;
+			}

-		if (!gelf_getshdr(scn, &sh)) {
-			WARN_ELF("gelf_getshdr");
-			return -1;
-		}
+			if (!gelf_getshdr(scn, &sh)) {
+				WARN_ELF("gelf_getshdr");
+				return -1;
+			}

-		name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
-		if (!name || !*name) {
-			WARN_ELF("elf_strptr");
-			return -1;
+			name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
+			if (!name) {
+				WARN_ELF("elf_strptr");
+				return -1;
+			}
+		} else {
+			name = elf_strptr(elf, strtab_idx, sym.st_name);
+			if (!name) {
+				WARN_ELF("elf_strptr");
+				return -1;
+			}
 		}

 		printf("%s+%llx:", name, (unsigned long long)rela.r_addend);
+26-7
tools/objtool/orc_gen.c
···
 	struct orc_entry *orc;
 	struct rela *rela;

-	if (!insn_sec->sym) {
-		WARN("missing symbol for section %s", insn_sec->name);
-		return -1;
-	}
-
 	/* populate ORC data */
 	orc = (struct orc_entry *)u_sec->data->d_buf + idx;
 	memcpy(orc, o, sizeof(*orc));
···
 	}
 	memset(rela, 0, sizeof(*rela));

-	rela->sym = insn_sec->sym;
-	rela->addend = insn_off;
+	if (insn_sec->sym) {
+		rela->sym = insn_sec->sym;
+		rela->addend = insn_off;
+	} else {
+		/*
+		 * The Clang assembler doesn't produce section symbols, so we
+		 * have to reference the function symbol instead:
+		 */
+		rela->sym = find_symbol_containing(insn_sec, insn_off);
+		if (!rela->sym) {
+			/*
+			 * Hack alert. This happens when we need to reference
+			 * the NOP pad insn immediately after the function.
+			 */
+			rela->sym = find_symbol_containing(insn_sec,
+							   insn_off - 1);
+		}
+		if (!rela->sym) {
+			WARN("missing symbol for insn at offset 0x%lx\n",
+			     insn_off);
+			return -1;
+		}
+
+		rela->addend = insn_off - rela->sym->offset;
+	}
+
 	rela->type = R_X86_64_PC32;
 	rela->offset = idx * sizeof(int);
 	rela->sec = ip_relasec;
+370-370
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
···88#99# The abi is "common", "64" or "x32" for this file.1010#1111-0 common read __x64_sys_read1212-1 common write __x64_sys_write1313-2 common open __x64_sys_open1414-3 common close __x64_sys_close1515-4 common stat __x64_sys_newstat1616-5 common fstat __x64_sys_newfstat1717-6 common lstat __x64_sys_newlstat1818-7 common poll __x64_sys_poll1919-8 common lseek __x64_sys_lseek2020-9 common mmap __x64_sys_mmap2121-10 common mprotect __x64_sys_mprotect2222-11 common munmap __x64_sys_munmap2323-12 common brk __x64_sys_brk2424-13 64 rt_sigaction __x64_sys_rt_sigaction2525-14 common rt_sigprocmask __x64_sys_rt_sigprocmask2626-15 64 rt_sigreturn __x64_sys_rt_sigreturn/ptregs2727-16 64 ioctl __x64_sys_ioctl2828-17 common pread64 __x64_sys_pread642929-18 common pwrite64 __x64_sys_pwrite643030-19 64 readv __x64_sys_readv3131-20 64 writev __x64_sys_writev3232-21 common access __x64_sys_access3333-22 common pipe __x64_sys_pipe3434-23 common select __x64_sys_select3535-24 common sched_yield __x64_sys_sched_yield3636-25 common mremap __x64_sys_mremap3737-26 common msync __x64_sys_msync3838-27 common mincore __x64_sys_mincore3939-28 common madvise __x64_sys_madvise4040-29 common shmget __x64_sys_shmget4141-30 common shmat __x64_sys_shmat4242-31 common shmctl __x64_sys_shmctl4343-32 common dup __x64_sys_dup4444-33 common dup2 __x64_sys_dup24545-34 common pause __x64_sys_pause4646-35 common nanosleep __x64_sys_nanosleep4747-36 common getitimer __x64_sys_getitimer4848-37 common alarm __x64_sys_alarm4949-38 common setitimer __x64_sys_setitimer5050-39 common getpid __x64_sys_getpid5151-40 common sendfile __x64_sys_sendfile645252-41 common socket __x64_sys_socket5353-42 common connect __x64_sys_connect5454-43 common accept __x64_sys_accept5555-44 common sendto __x64_sys_sendto5656-45 64 recvfrom __x64_sys_recvfrom5757-46 64 sendmsg __x64_sys_sendmsg5858-47 64 recvmsg __x64_sys_recvmsg5959-48 common shutdown __x64_sys_shutdown6060-49 common bind __x64_sys_bind6161-50 common listen 
__x64_sys_listen6262-51 common getsockname __x64_sys_getsockname6363-52 common getpeername __x64_sys_getpeername6464-53 common socketpair __x64_sys_socketpair6565-54 64 setsockopt __x64_sys_setsockopt6666-55 64 getsockopt __x64_sys_getsockopt6767-56 common clone __x64_sys_clone/ptregs6868-57 common fork __x64_sys_fork/ptregs6969-58 common vfork __x64_sys_vfork/ptregs7070-59 64 execve __x64_sys_execve/ptregs7171-60 common exit __x64_sys_exit7272-61 common wait4 __x64_sys_wait47373-62 common kill __x64_sys_kill7474-63 common uname __x64_sys_newuname7575-64 common semget __x64_sys_semget7676-65 common semop __x64_sys_semop7777-66 common semctl __x64_sys_semctl7878-67 common shmdt __x64_sys_shmdt7979-68 common msgget __x64_sys_msgget8080-69 common msgsnd __x64_sys_msgsnd8181-70 common msgrcv __x64_sys_msgrcv8282-71 common msgctl __x64_sys_msgctl8383-72 common fcntl __x64_sys_fcntl8484-73 common flock __x64_sys_flock8585-74 common fsync __x64_sys_fsync8686-75 common fdatasync __x64_sys_fdatasync8787-76 common truncate __x64_sys_truncate8888-77 common ftruncate __x64_sys_ftruncate8989-78 common getdents __x64_sys_getdents9090-79 common getcwd __x64_sys_getcwd9191-80 common chdir __x64_sys_chdir9292-81 common fchdir __x64_sys_fchdir9393-82 common rename __x64_sys_rename9494-83 common mkdir __x64_sys_mkdir9595-84 common rmdir __x64_sys_rmdir9696-85 common creat __x64_sys_creat9797-86 common link __x64_sys_link9898-87 common unlink __x64_sys_unlink9999-88 common symlink __x64_sys_symlink100100-89 common readlink __x64_sys_readlink101101-90 common chmod __x64_sys_chmod102102-91 common fchmod __x64_sys_fchmod103103-92 common chown __x64_sys_chown104104-93 common fchown __x64_sys_fchown105105-94 common lchown __x64_sys_lchown106106-95 common umask __x64_sys_umask107107-96 common gettimeofday __x64_sys_gettimeofday108108-97 common getrlimit __x64_sys_getrlimit109109-98 common getrusage __x64_sys_getrusage110110-99 common sysinfo __x64_sys_sysinfo111111-100 common times 
__x64_sys_times112112-101 64 ptrace __x64_sys_ptrace113113-102 common getuid __x64_sys_getuid114114-103 common syslog __x64_sys_syslog115115-104 common getgid __x64_sys_getgid116116-105 common setuid __x64_sys_setuid117117-106 common setgid __x64_sys_setgid118118-107 common geteuid __x64_sys_geteuid119119-108 common getegid __x64_sys_getegid120120-109 common setpgid __x64_sys_setpgid121121-110 common getppid __x64_sys_getppid122122-111 common getpgrp __x64_sys_getpgrp123123-112 common setsid __x64_sys_setsid124124-113 common setreuid __x64_sys_setreuid125125-114 common setregid __x64_sys_setregid126126-115 common getgroups __x64_sys_getgroups127127-116 common setgroups __x64_sys_setgroups128128-117 common setresuid __x64_sys_setresuid129129-118 common getresuid __x64_sys_getresuid130130-119 common setresgid __x64_sys_setresgid131131-120 common getresgid __x64_sys_getresgid132132-121 common getpgid __x64_sys_getpgid133133-122 common setfsuid __x64_sys_setfsuid134134-123 common setfsgid __x64_sys_setfsgid135135-124 common getsid __x64_sys_getsid136136-125 common capget __x64_sys_capget137137-126 common capset __x64_sys_capset138138-127 64 rt_sigpending __x64_sys_rt_sigpending139139-128 64 rt_sigtimedwait __x64_sys_rt_sigtimedwait140140-129 64 rt_sigqueueinfo __x64_sys_rt_sigqueueinfo141141-130 common rt_sigsuspend __x64_sys_rt_sigsuspend142142-131 64 sigaltstack __x64_sys_sigaltstack143143-132 common utime __x64_sys_utime144144-133 common mknod __x64_sys_mknod1111+0 common read sys_read1212+1 common write sys_write1313+2 common open sys_open1414+3 common close sys_close1515+4 common stat sys_newstat1616+5 common fstat sys_newfstat1717+6 common lstat sys_newlstat1818+7 common poll sys_poll1919+8 common lseek sys_lseek2020+9 common mmap sys_mmap2121+10 common mprotect sys_mprotect2222+11 common munmap sys_munmap2323+12 common brk sys_brk2424+13 64 rt_sigaction sys_rt_sigaction2525+14 common rt_sigprocmask sys_rt_sigprocmask2626+15 64 rt_sigreturn 
sys_rt_sigreturn2727+16 64 ioctl sys_ioctl2828+17 common pread64 sys_pread642929+18 common pwrite64 sys_pwrite643030+19 64 readv sys_readv3131+20 64 writev sys_writev3232+21 common access sys_access3333+22 common pipe sys_pipe3434+23 common select sys_select3535+24 common sched_yield sys_sched_yield3636+25 common mremap sys_mremap3737+26 common msync sys_msync3838+27 common mincore sys_mincore3939+28 common madvise sys_madvise4040+29 common shmget sys_shmget4141+30 common shmat sys_shmat4242+31 common shmctl sys_shmctl4343+32 common dup sys_dup4444+33 common dup2 sys_dup24545+34 common pause sys_pause4646+35 common nanosleep sys_nanosleep4747+36 common getitimer sys_getitimer4848+37 common alarm sys_alarm4949+38 common setitimer sys_setitimer5050+39 common getpid sys_getpid5151+40 common sendfile sys_sendfile645252+41 common socket sys_socket5353+42 common connect sys_connect5454+43 common accept sys_accept5555+44 common sendto sys_sendto5656+45 64 recvfrom sys_recvfrom5757+46 64 sendmsg sys_sendmsg5858+47 64 recvmsg sys_recvmsg5959+48 common shutdown sys_shutdown6060+49 common bind sys_bind6161+50 common listen sys_listen6262+51 common getsockname sys_getsockname6363+52 common getpeername sys_getpeername6464+53 common socketpair sys_socketpair6565+54 64 setsockopt sys_setsockopt6666+55 64 getsockopt sys_getsockopt6767+56 common clone sys_clone6868+57 common fork sys_fork6969+58 common vfork sys_vfork7070+59 64 execve sys_execve7171+60 common exit sys_exit7272+61 common wait4 sys_wait47373+62 common kill sys_kill7474+63 common uname sys_newuname7575+64 common semget sys_semget7676+65 common semop sys_semop7777+66 common semctl sys_semctl7878+67 common shmdt sys_shmdt7979+68 common msgget sys_msgget8080+69 common msgsnd sys_msgsnd8181+70 common msgrcv sys_msgrcv8282+71 common msgctl sys_msgctl8383+72 common fcntl sys_fcntl8484+73 common flock sys_flock8585+74 common fsync sys_fsync8686+75 common fdatasync sys_fdatasync8787+76 common truncate sys_truncate8888+77 
common ftruncate sys_ftruncate8989+78 common getdents sys_getdents9090+79 common getcwd sys_getcwd9191+80 common chdir sys_chdir9292+81 common fchdir sys_fchdir9393+82 common rename sys_rename9494+83 common mkdir sys_mkdir9595+84 common rmdir sys_rmdir9696+85 common creat sys_creat9797+86 common link sys_link9898+87 common unlink sys_unlink9999+88 common symlink sys_symlink100100+89 common readlink sys_readlink101101+90 common chmod sys_chmod102102+91 common fchmod sys_fchmod103103+92 common chown sys_chown104104+93 common fchown sys_fchown105105+94 common lchown sys_lchown106106+95 common umask sys_umask107107+96 common gettimeofday sys_gettimeofday108108+97 common getrlimit sys_getrlimit109109+98 common getrusage sys_getrusage110110+99 common sysinfo sys_sysinfo111111+100 common times sys_times112112+101 64 ptrace sys_ptrace113113+102 common getuid sys_getuid114114+103 common syslog sys_syslog115115+104 common getgid sys_getgid116116+105 common setuid sys_setuid117117+106 common setgid sys_setgid118118+107 common geteuid sys_geteuid119119+108 common getegid sys_getegid120120+109 common setpgid sys_setpgid121121+110 common getppid sys_getppid122122+111 common getpgrp sys_getpgrp123123+112 common setsid sys_setsid124124+113 common setreuid sys_setreuid125125+114 common setregid sys_setregid126126+115 common getgroups sys_getgroups127127+116 common setgroups sys_setgroups128128+117 common setresuid sys_setresuid129129+118 common getresuid sys_getresuid130130+119 common setresgid sys_setresgid131131+120 common getresgid sys_getresgid132132+121 common getpgid sys_getpgid133133+122 common setfsuid sys_setfsuid134134+123 common setfsgid sys_setfsgid135135+124 common getsid sys_getsid136136+125 common capget sys_capget137137+126 common capset sys_capset138138+127 64 rt_sigpending sys_rt_sigpending139139+128 64 rt_sigtimedwait sys_rt_sigtimedwait140140+129 64 rt_sigqueueinfo sys_rt_sigqueueinfo141141+130 common rt_sigsuspend sys_rt_sigsuspend142142+131 64 sigaltstack 
sys_sigaltstack143143+132 common utime sys_utime144144+133 common mknod sys_mknod145145134 64 uselib146146-135 common personality __x64_sys_personality147147-136 common ustat __x64_sys_ustat148148-137 common statfs __x64_sys_statfs149149-138 common fstatfs __x64_sys_fstatfs150150-139 common sysfs __x64_sys_sysfs151151-140 common getpriority __x64_sys_getpriority152152-141 common setpriority __x64_sys_setpriority153153-142 common sched_setparam __x64_sys_sched_setparam154154-143 common sched_getparam __x64_sys_sched_getparam155155-144 common sched_setscheduler __x64_sys_sched_setscheduler156156-145 common sched_getscheduler __x64_sys_sched_getscheduler157157-146 common sched_get_priority_max __x64_sys_sched_get_priority_max158158-147 common sched_get_priority_min __x64_sys_sched_get_priority_min159159-148 common sched_rr_get_interval __x64_sys_sched_rr_get_interval160160-149 common mlock __x64_sys_mlock161161-150 common munlock __x64_sys_munlock162162-151 common mlockall __x64_sys_mlockall163163-152 common munlockall __x64_sys_munlockall164164-153 common vhangup __x64_sys_vhangup165165-154 common modify_ldt __x64_sys_modify_ldt166166-155 common pivot_root __x64_sys_pivot_root167167-156 64 _sysctl __x64_sys_sysctl168168-157 common prctl __x64_sys_prctl169169-158 common arch_prctl __x64_sys_arch_prctl170170-159 common adjtimex __x64_sys_adjtimex171171-160 common setrlimit __x64_sys_setrlimit172172-161 common chroot __x64_sys_chroot173173-162 common sync __x64_sys_sync174174-163 common acct __x64_sys_acct175175-164 common settimeofday __x64_sys_settimeofday176176-165 common mount __x64_sys_mount177177-166 common umount2 __x64_sys_umount178178-167 common swapon __x64_sys_swapon179179-168 common swapoff __x64_sys_swapoff180180-169 common reboot __x64_sys_reboot181181-170 common sethostname __x64_sys_sethostname182182-171 common setdomainname __x64_sys_setdomainname183183-172 common iopl __x64_sys_iopl/ptregs184184-173 common ioperm __x64_sys_ioperm146146+135 common 
personality sys_personality147147+136 common ustat sys_ustat148148+137 common statfs sys_statfs149149+138 common fstatfs sys_fstatfs150150+139 common sysfs sys_sysfs151151+140 common getpriority sys_getpriority152152+141 common setpriority sys_setpriority153153+142 common sched_setparam sys_sched_setparam154154+143 common sched_getparam sys_sched_getparam155155+144 common sched_setscheduler sys_sched_setscheduler156156+145 common sched_getscheduler sys_sched_getscheduler157157+146 common sched_get_priority_max sys_sched_get_priority_max158158+147 common sched_get_priority_min sys_sched_get_priority_min159159+148 common sched_rr_get_interval sys_sched_rr_get_interval160160+149 common mlock sys_mlock161161+150 common munlock sys_munlock162162+151 common mlockall sys_mlockall163163+152 common munlockall sys_munlockall164164+153 common vhangup sys_vhangup165165+154 common modify_ldt sys_modify_ldt166166+155 common pivot_root sys_pivot_root167167+156 64 _sysctl sys_sysctl168168+157 common prctl sys_prctl169169+158 common arch_prctl sys_arch_prctl170170+159 common adjtimex sys_adjtimex171171+160 common setrlimit sys_setrlimit172172+161 common chroot sys_chroot173173+162 common sync sys_sync174174+163 common acct sys_acct175175+164 common settimeofday sys_settimeofday176176+165 common mount sys_mount177177+166 common umount2 sys_umount178178+167 common swapon sys_swapon179179+168 common swapoff sys_swapoff180180+169 common reboot sys_reboot181181+170 common sethostname sys_sethostname182182+171 common setdomainname sys_setdomainname183183+172 common iopl sys_iopl184184+173 common ioperm sys_ioperm185185174 64 create_module186186-175 common init_module __x64_sys_init_module187187-176 common delete_module __x64_sys_delete_module186186+175 common init_module sys_init_module187187+176 common delete_module sys_delete_module188188177 64 get_kernel_syms189189178 64 query_module190190-179 common quotactl __x64_sys_quotactl190190+179 common quotactl sys_quotactl191191180 64 
nfsservctl192192181 common getpmsg193193182 common putpmsg194194183 common afs_syscall195195184 common tuxcall196196185 common security197197-186 common gettid __x64_sys_gettid198198-187 common readahead __x64_sys_readahead199199-188 common setxattr __x64_sys_setxattr200200-189 common lsetxattr __x64_sys_lsetxattr201201-190 common fsetxattr __x64_sys_fsetxattr202202-191 common getxattr __x64_sys_getxattr203203-192 common lgetxattr __x64_sys_lgetxattr204204-193 common fgetxattr __x64_sys_fgetxattr205205-194 common listxattr __x64_sys_listxattr206206-195 common llistxattr __x64_sys_llistxattr207207-196 common flistxattr __x64_sys_flistxattr208208-197 common removexattr __x64_sys_removexattr209209-198 common lremovexattr __x64_sys_lremovexattr210210-199 common fremovexattr __x64_sys_fremovexattr211211-200 common tkill __x64_sys_tkill212212-201 common time __x64_sys_time213213-202 common futex __x64_sys_futex214214-203 common sched_setaffinity __x64_sys_sched_setaffinity215215-204 common sched_getaffinity __x64_sys_sched_getaffinity197197+186 common gettid sys_gettid198198+187 common readahead sys_readahead199199+188 common setxattr sys_setxattr200200+189 common lsetxattr sys_lsetxattr201201+190 common fsetxattr sys_fsetxattr202202+191 common getxattr sys_getxattr203203+192 common lgetxattr sys_lgetxattr204204+193 common fgetxattr sys_fgetxattr205205+194 common listxattr sys_listxattr206206+195 common llistxattr sys_llistxattr207207+196 common flistxattr sys_flistxattr208208+197 common removexattr sys_removexattr209209+198 common lremovexattr sys_lremovexattr210210+199 common fremovexattr sys_fremovexattr211211+200 common tkill sys_tkill212212+201 common time sys_time213213+202 common futex sys_futex214214+203 common sched_setaffinity sys_sched_setaffinity215215+204 common sched_getaffinity sys_sched_getaffinity216216205 64 set_thread_area217217-206 64 io_setup __x64_sys_io_setup218218-207 common io_destroy __x64_sys_io_destroy219219-208 common io_getevents 
__x64_sys_io_getevents
-209	64	io_submit		__x64_sys_io_submit
-210	common	io_cancel		__x64_sys_io_cancel
+206	64	io_setup		sys_io_setup
+207	common	io_destroy		sys_io_destroy
+208	common	io_getevents		sys_io_getevents
+209	64	io_submit		sys_io_submit
+210	common	io_cancel		sys_io_cancel
 211	64	get_thread_area
-212	common	lookup_dcookie		__x64_sys_lookup_dcookie
-213	common	epoll_create		__x64_sys_epoll_create
+212	common	lookup_dcookie		sys_lookup_dcookie
+213	common	epoll_create		sys_epoll_create
 214	64	epoll_ctl_old
 215	64	epoll_wait_old
-216	common	remap_file_pages	__x64_sys_remap_file_pages
-217	common	getdents64		__x64_sys_getdents64
-218	common	set_tid_address		__x64_sys_set_tid_address
-219	common	restart_syscall		__x64_sys_restart_syscall
-220	common	semtimedop		__x64_sys_semtimedop
-221	common	fadvise64		__x64_sys_fadvise64
-222	64	timer_create		__x64_sys_timer_create
-223	common	timer_settime		__x64_sys_timer_settime
-224	common	timer_gettime		__x64_sys_timer_gettime
-225	common	timer_getoverrun	__x64_sys_timer_getoverrun
-226	common	timer_delete		__x64_sys_timer_delete
-227	common	clock_settime		__x64_sys_clock_settime
-228	common	clock_gettime		__x64_sys_clock_gettime
-229	common	clock_getres		__x64_sys_clock_getres
-230	common	clock_nanosleep		__x64_sys_clock_nanosleep
-231	common	exit_group		__x64_sys_exit_group
-232	common	epoll_wait		__x64_sys_epoll_wait
-233	common	epoll_ctl		__x64_sys_epoll_ctl
-234	common	tgkill			__x64_sys_tgkill
-235	common	utimes			__x64_sys_utimes
+216	common	remap_file_pages	sys_remap_file_pages
+217	common	getdents64		sys_getdents64
+218	common	set_tid_address		sys_set_tid_address
+219	common	restart_syscall		sys_restart_syscall
+220	common	semtimedop		sys_semtimedop
+221	common	fadvise64		sys_fadvise64
+222	64	timer_create		sys_timer_create
+223	common	timer_settime		sys_timer_settime
+224	common	timer_gettime		sys_timer_gettime
+225	common	timer_getoverrun	sys_timer_getoverrun
+226	common	timer_delete		sys_timer_delete
+227	common	clock_settime		sys_clock_settime
+228	common	clock_gettime		sys_clock_gettime
+229	common	clock_getres		sys_clock_getres
+230	common	clock_nanosleep		sys_clock_nanosleep
+231	common	exit_group		sys_exit_group
+232	common	epoll_wait		sys_epoll_wait
+233	common	epoll_ctl		sys_epoll_ctl
+234	common	tgkill			sys_tgkill
+235	common	utimes			sys_utimes
 236	64	vserver
-237	common	mbind			__x64_sys_mbind
-238	common	set_mempolicy		__x64_sys_set_mempolicy
-239	common	get_mempolicy		__x64_sys_get_mempolicy
-240	common	mq_open			__x64_sys_mq_open
-241	common	mq_unlink		__x64_sys_mq_unlink
-242	common	mq_timedsend		__x64_sys_mq_timedsend
-243	common	mq_timedreceive		__x64_sys_mq_timedreceive
-244	64	mq_notify		__x64_sys_mq_notify
-245	common	mq_getsetattr		__x64_sys_mq_getsetattr
-246	64	kexec_load		__x64_sys_kexec_load
-247	64	waitid			__x64_sys_waitid
-248	common	add_key			__x64_sys_add_key
-249	common	request_key		__x64_sys_request_key
-250	common	keyctl			__x64_sys_keyctl
-251	common	ioprio_set		__x64_sys_ioprio_set
-252	common	ioprio_get		__x64_sys_ioprio_get
-253	common	inotify_init		__x64_sys_inotify_init
-254	common	inotify_add_watch	__x64_sys_inotify_add_watch
-255	common	inotify_rm_watch	__x64_sys_inotify_rm_watch
-256	common	migrate_pages		__x64_sys_migrate_pages
-257	common	openat			__x64_sys_openat
-258	common	mkdirat			__x64_sys_mkdirat
-259	common	mknodat			__x64_sys_mknodat
-260	common	fchownat		__x64_sys_fchownat
-261	common	futimesat		__x64_sys_futimesat
-262	common	newfstatat		__x64_sys_newfstatat
-263	common	unlinkat		__x64_sys_unlinkat
-264	common	renameat		__x64_sys_renameat
-265	common	linkat			__x64_sys_linkat
-266	common	symlinkat		__x64_sys_symlinkat
-267	common	readlinkat		__x64_sys_readlinkat
-268	common	fchmodat		__x64_sys_fchmodat
-269	common	faccessat		__x64_sys_faccessat
-270	common	pselect6		__x64_sys_pselect6
-271	common	ppoll			__x64_sys_ppoll
-272	common	unshare			__x64_sys_unshare
-273	64	set_robust_list		__x64_sys_set_robust_list
-274	64	get_robust_list		__x64_sys_get_robust_list
-275	common	splice			__x64_sys_splice
-276	common	tee			__x64_sys_tee
-277	common	sync_file_range		__x64_sys_sync_file_range
-278	64	vmsplice		__x64_sys_vmsplice
-279	64	move_pages		__x64_sys_move_pages
-280	common	utimensat		__x64_sys_utimensat
-281	common	epoll_pwait		__x64_sys_epoll_pwait
-282	common	signalfd		__x64_sys_signalfd
-283	common	timerfd_create		__x64_sys_timerfd_create
-284	common	eventfd			__x64_sys_eventfd
-285	common	fallocate		__x64_sys_fallocate
-286	common	timerfd_settime		__x64_sys_timerfd_settime
-287	common	timerfd_gettime		__x64_sys_timerfd_gettime
-288	common	accept4			__x64_sys_accept4
-289	common	signalfd4		__x64_sys_signalfd4
-290	common	eventfd2		__x64_sys_eventfd2
-291	common	epoll_create1		__x64_sys_epoll_create1
-292	common	dup3			__x64_sys_dup3
-293	common	pipe2			__x64_sys_pipe2
-294	common	inotify_init1		__x64_sys_inotify_init1
-295	64	preadv			__x64_sys_preadv
-296	64	pwritev			__x64_sys_pwritev
-297	64	rt_tgsigqueueinfo	__x64_sys_rt_tgsigqueueinfo
-298	common	perf_event_open		__x64_sys_perf_event_open
-299	64	recvmmsg		__x64_sys_recvmmsg
-300	common	fanotify_init		__x64_sys_fanotify_init
-301	common	fanotify_mark		__x64_sys_fanotify_mark
-302	common	prlimit64		__x64_sys_prlimit64
-303	common	name_to_handle_at	__x64_sys_name_to_handle_at
-304	common	open_by_handle_at	__x64_sys_open_by_handle_at
-305	common	clock_adjtime		__x64_sys_clock_adjtime
-306	common	syncfs			__x64_sys_syncfs
-307	64	sendmmsg		__x64_sys_sendmmsg
-308	common	setns			__x64_sys_setns
-309	common	getcpu			__x64_sys_getcpu
-310	64	process_vm_readv	__x64_sys_process_vm_readv
-311	64	process_vm_writev	__x64_sys_process_vm_writev
-312	common	kcmp			__x64_sys_kcmp
-313	common	finit_module		__x64_sys_finit_module
-314	common	sched_setattr		__x64_sys_sched_setattr
-315	common	sched_getattr		__x64_sys_sched_getattr
-316	common	renameat2		__x64_sys_renameat2
-317	common	seccomp			__x64_sys_seccomp
-318	common	getrandom		__x64_sys_getrandom
-319	common	memfd_create		__x64_sys_memfd_create
-320	common	kexec_file_load		__x64_sys_kexec_file_load
-321	common	bpf			__x64_sys_bpf
-322	64	execveat		__x64_sys_execveat/ptregs
-323	common	userfaultfd		__x64_sys_userfaultfd
-324	common	membarrier		__x64_sys_membarrier
-325	common	mlock2			__x64_sys_mlock2
-326	common	copy_file_range		__x64_sys_copy_file_range
-327	64	preadv2			__x64_sys_preadv2
-328	64	pwritev2		__x64_sys_pwritev2
-329	common	pkey_mprotect		__x64_sys_pkey_mprotect
-330	common	pkey_alloc		__x64_sys_pkey_alloc
-331	common	pkey_free		__x64_sys_pkey_free
-332	common	statx			__x64_sys_statx
-333	common	io_pgetevents		__x64_sys_io_pgetevents
-334	common	rseq			__x64_sys_rseq
+237	common	mbind			sys_mbind
+238	common	set_mempolicy		sys_set_mempolicy
+239	common	get_mempolicy		sys_get_mempolicy
+240	common	mq_open			sys_mq_open
+241	common	mq_unlink		sys_mq_unlink
+242	common	mq_timedsend		sys_mq_timedsend
+243	common	mq_timedreceive		sys_mq_timedreceive
+244	64	mq_notify		sys_mq_notify
+245	common	mq_getsetattr		sys_mq_getsetattr
+246	64	kexec_load		sys_kexec_load
+247	64	waitid			sys_waitid
+248	common	add_key			sys_add_key
+249	common	request_key		sys_request_key
+250	common	keyctl			sys_keyctl
+251	common	ioprio_set		sys_ioprio_set
+252	common	ioprio_get		sys_ioprio_get
+253	common	inotify_init		sys_inotify_init
+254	common	inotify_add_watch	sys_inotify_add_watch
+255	common	inotify_rm_watch	sys_inotify_rm_watch
+256	common	migrate_pages		sys_migrate_pages
+257	common	openat			sys_openat
+258	common	mkdirat			sys_mkdirat
+259	common	mknodat			sys_mknodat
+260	common	fchownat		sys_fchownat
+261	common	futimesat		sys_futimesat
+262	common	newfstatat		sys_newfstatat
+263	common	unlinkat		sys_unlinkat
+264	common	renameat		sys_renameat
+265	common	linkat			sys_linkat
+266	common	symlinkat		sys_symlinkat
+267	common	readlinkat		sys_readlinkat
+268	common	fchmodat		sys_fchmodat
+269	common	faccessat		sys_faccessat
+270	common	pselect6		sys_pselect6
+271	common	ppoll			sys_ppoll
+272	common	unshare			sys_unshare
+273	64	set_robust_list		sys_set_robust_list
+274	64	get_robust_list		sys_get_robust_list
+275	common	splice			sys_splice
+276	common	tee			sys_tee
+277	common	sync_file_range		sys_sync_file_range
+278	64	vmsplice		sys_vmsplice
+279	64	move_pages		sys_move_pages
+280	common	utimensat		sys_utimensat
+281	common	epoll_pwait		sys_epoll_pwait
+282	common	signalfd		sys_signalfd
+283	common	timerfd_create		sys_timerfd_create
+284	common	eventfd			sys_eventfd
+285	common	fallocate		sys_fallocate
+286	common	timerfd_settime		sys_timerfd_settime
+287	common	timerfd_gettime		sys_timerfd_gettime
+288	common	accept4			sys_accept4
+289	common	signalfd4		sys_signalfd4
+290	common	eventfd2		sys_eventfd2
+291	common	epoll_create1		sys_epoll_create1
+292	common	dup3			sys_dup3
+293	common	pipe2			sys_pipe2
+294	common	inotify_init1		sys_inotify_init1
+295	64	preadv			sys_preadv
+296	64	pwritev			sys_pwritev
+297	64	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo
+298	common	perf_event_open		sys_perf_event_open
+299	64	recvmmsg		sys_recvmmsg
+300	common	fanotify_init		sys_fanotify_init
+301	common	fanotify_mark		sys_fanotify_mark
+302	common	prlimit64		sys_prlimit64
+303	common	name_to_handle_at	sys_name_to_handle_at
+304	common	open_by_handle_at	sys_open_by_handle_at
+305	common	clock_adjtime		sys_clock_adjtime
+306	common	syncfs			sys_syncfs
+307	64	sendmmsg		sys_sendmmsg
+308	common	setns			sys_setns
+309	common	getcpu			sys_getcpu
+310	64	process_vm_readv	sys_process_vm_readv
+311	64	process_vm_writev	sys_process_vm_writev
+312	common	kcmp			sys_kcmp
+313	common	finit_module		sys_finit_module
+314	common	sched_setattr		sys_sched_setattr
+315	common	sched_getattr		sys_sched_getattr
+316	common	renameat2		sys_renameat2
+317	common	seccomp			sys_seccomp
+318	common	getrandom		sys_getrandom
+319	common	memfd_create		sys_memfd_create
+320	common	kexec_file_load		sys_kexec_file_load
+321	common	bpf			sys_bpf
+322	64	execveat		sys_execveat
+323	common	userfaultfd		sys_userfaultfd
+324	common	membarrier		sys_membarrier
+325	common	mlock2			sys_mlock2
+326	common	copy_file_range		sys_copy_file_range
+327	64	preadv2			sys_preadv2
+328	64	pwritev2		sys_pwritev2
+329	common	pkey_mprotect		sys_pkey_mprotect
+330	common	pkey_alloc		sys_pkey_alloc
+331	common	pkey_free		sys_pkey_free
+332	common	statx			sys_statx
+333	common	io_pgetevents		sys_io_pgetevents
+334	common	rseq			sys_rseq
 # don't use numbers 387 through 423, add new calls after the last
 # 'common' entry
-424	common	pidfd_send_signal	__x64_sys_pidfd_send_signal
-425	common	io_uring_setup		__x64_sys_io_uring_setup
-426	common	io_uring_enter		__x64_sys_io_uring_enter
-427	common	io_uring_register	__x64_sys_io_uring_register
-428	common	open_tree		__x64_sys_open_tree
-429	common	move_mount		__x64_sys_move_mount
-430	common	fsopen			__x64_sys_fsopen
-431	common	fsconfig		__x64_sys_fsconfig
-432	common	fsmount			__x64_sys_fsmount
-433	common	fspick			__x64_sys_fspick
-434	common	pidfd_open		__x64_sys_pidfd_open
-435	common	clone3			__x64_sys_clone3/ptregs
-437	common	openat2			__x64_sys_openat2
-438	common	pidfd_getfd		__x64_sys_pidfd_getfd
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+425	common	io_uring_setup		sys_io_uring_setup
+426	common	io_uring_enter		sys_io_uring_enter
+427	common	io_uring_register	sys_io_uring_register
+428	common	open_tree		sys_open_tree
+429	common	move_mount		sys_move_mount
+430	common	fsopen			sys_fsopen
+431	common	fsconfig		sys_fsconfig
+432	common	fsmount			sys_fsmount
+433	common	fspick			sys_fspick
+434	common	pidfd_open		sys_pidfd_open
+435	common	clone3			sys_clone3
+437	common	openat2			sys_openat2
+438	common	pidfd_getfd		sys_pidfd_getfd

 #
 # x32-specific system call numbers start at 512 to avoid cache impact
···
 # on-the-fly for compat_sys_*() compatibility system calls if X86_X32
 # is defined.
 #
-512	x32	rt_sigaction		__x32_compat_sys_rt_sigaction
-513	x32	rt_sigreturn		sys32_x32_rt_sigreturn
-514	x32	ioctl			__x32_compat_sys_ioctl
-515	x32	readv			__x32_compat_sys_readv
-516	x32	writev			__x32_compat_sys_writev
-517	x32	recvfrom		__x32_compat_sys_recvfrom
-518	x32	sendmsg			__x32_compat_sys_sendmsg
-519	x32	recvmsg			__x32_compat_sys_recvmsg
-520	x32	execve			__x32_compat_sys_execve/ptregs
-521	x32	ptrace			__x32_compat_sys_ptrace
-522	x32	rt_sigpending		__x32_compat_sys_rt_sigpending
-523	x32	rt_sigtimedwait		__x32_compat_sys_rt_sigtimedwait_time64
-524	x32	rt_sigqueueinfo		__x32_compat_sys_rt_sigqueueinfo
-525	x32	sigaltstack		__x32_compat_sys_sigaltstack
-526	x32	timer_create		__x32_compat_sys_timer_create
-527	x32	mq_notify		__x32_compat_sys_mq_notify
-528	x32	kexec_load		__x32_compat_sys_kexec_load
-529	x32	waitid			__x32_compat_sys_waitid
-530	x32	set_robust_list		__x32_compat_sys_set_robust_list
-531	x32	get_robust_list		__x32_compat_sys_get_robust_list
-532	x32	vmsplice		__x32_compat_sys_vmsplice
-533	x32	move_pages		__x32_compat_sys_move_pages
-534	x32	preadv			__x32_compat_sys_preadv64
-535	x32	pwritev			__x32_compat_sys_pwritev64
-536	x32	rt_tgsigqueueinfo	__x32_compat_sys_rt_tgsigqueueinfo
-537	x32	recvmmsg		__x32_compat_sys_recvmmsg_time64
-538	x32	sendmmsg		__x32_compat_sys_sendmmsg
-539	x32	process_vm_readv	__x32_compat_sys_process_vm_readv
-540	x32	process_vm_writev	__x32_compat_sys_process_vm_writev
-541	x32	setsockopt		__x32_compat_sys_setsockopt
-542	x32	getsockopt		__x32_compat_sys_getsockopt
-543	x32	io_setup		__x32_compat_sys_io_setup
-544	x32	io_submit		__x32_compat_sys_io_submit
-545	x32	execveat		__x32_compat_sys_execveat/ptregs
-546	x32	preadv2			__x32_compat_sys_preadv64v2
-547	x32	pwritev2		__x32_compat_sys_pwritev64v2
+512	x32	rt_sigaction		compat_sys_rt_sigaction
+513	x32	rt_sigreturn		compat_sys_x32_rt_sigreturn
+514	x32	ioctl			compat_sys_ioctl
+515	x32	readv			compat_sys_readv
+516	x32	writev			compat_sys_writev
+517	x32	recvfrom		compat_sys_recvfrom
+518	x32	sendmsg			compat_sys_sendmsg
+519	x32	recvmsg			compat_sys_recvmsg
+520	x32	execve			compat_sys_execve
+521	x32	ptrace			compat_sys_ptrace
+522	x32	rt_sigpending		compat_sys_rt_sigpending
+523	x32	rt_sigtimedwait		compat_sys_rt_sigtimedwait_time64
+524	x32	rt_sigqueueinfo		compat_sys_rt_sigqueueinfo
+525	x32	sigaltstack		compat_sys_sigaltstack
+526	x32	timer_create		compat_sys_timer_create
+527	x32	mq_notify		compat_sys_mq_notify
+528	x32	kexec_load		compat_sys_kexec_load
+529	x32	waitid			compat_sys_waitid
+530	x32	set_robust_list		compat_sys_set_robust_list
+531	x32	get_robust_list		compat_sys_get_robust_list
+532	x32	vmsplice		compat_sys_vmsplice
+533	x32	move_pages		compat_sys_move_pages
+534	x32	preadv			compat_sys_preadv64
+535	x32	pwritev			compat_sys_pwritev64
+536	x32	rt_tgsigqueueinfo	compat_sys_rt_tgsigqueueinfo
+537	x32	recvmmsg		compat_sys_recvmmsg_time64
+538	x32	sendmmsg		compat_sys_sendmmsg
+539	x32	process_vm_readv	compat_sys_process_vm_readv
+540	x32	process_vm_writev	compat_sys_process_vm_writev
+541	x32	setsockopt		compat_sys_setsockopt
+542	x32	getsockopt		compat_sys_getsockopt
+543	x32	io_setup		compat_sys_io_setup
+544	x32	io_submit		compat_sys_io_submit
+545	x32	execveat		compat_sys_execveat
+546	x32	preadv2			compat_sys_preadv64v2
+547	x32	pwritev2		compat_sys_pwritev64v2
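Each row of the table above binds a syscall number and ABI to a kernel entry point; only the in-kernel symbol names change in this hunk, so the numbers seen by userspace are untouched. As a rough illustration of that number-to-entry mapping (an assumption-laden sketch, valid only on x86-64 Linux and not part of this patch), a process can invoke an entry directly by its table number:

```python
import ctypes
import os

# Illustration only: on x86-64 Linux, dispatch a syscall by the raw number
# from syscall_64.tbl instead of going through the glibc getpid() wrapper.
libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # the "39  common  getpid  sys_getpid" row of the table


def getpid_via_table() -> int:
    # libc's syscall(2) wrapper places the number in rax and traps into
    # the kernel, which indexes the table reproduced in the diff above.
    return libc.syscall(SYS_getpid)


if __name__ == "__main__":
    print(getpid_via_table())
```

Because the table maps numbers, not names, renaming `__x64_sys_getpid` to `sys_getpid` in the kernel leaves this userspace behavior unchanged.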
···
         exit(0)

     if args.list:
-        if args.list:
-            list_test_cases(alltests)
-            exit(0)
+        list_test_cases(alltests)
+        exit(0)

     if len(alltests):
         req_plugins = pm.get_required_plugins(alltests)
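The tdc.py hunk above drops a duplicated `if args.list:` guard: the nested check repeated the condition the outer `if` had already tested. A minimal sketch of the corrected flow (with `list_test_cases` and the exit handled by a return value stubbed in, since only this fragment of tdc.py is shown):

```python
def list_test_cases(alltests):
    # Stand-in for tdc.py's real helper: print one test name per line.
    for test in alltests:
        print(test.get("name", "<unnamed>"))


def handle_list(args, alltests):
    # Corrected flow: a single guard, no redundant nested re-check.
    if args.list:
        list_test_cases(alltests)
        return True  # tdc.py calls exit(0) at this point
    return False
```

The behavior is identical before and after the patch; the change only removes the dead inner branch.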