···
 	  pgmajfault
 		Number of major page faults incurred
 
-	  workingset_refault
-		Number of refaults of previously evicted pages
+	  workingset_refault_anon
+		Number of refaults of previously evicted anonymous pages.
 
-	  workingset_activate
-		Number of refaulted pages that were immediately activated
+	  workingset_refault_file
+		Number of refaults of previously evicted file pages.
 
-	  workingset_restore
-		Number of restored pages which have been detected as an active
-		workingset before they got reclaimed.
+	  workingset_activate_anon
+		Number of refaulted anonymous pages that were immediately
+		activated.
+
+	  workingset_activate_file
+		Number of refaulted file pages that were immediately activated.
+
+	  workingset_restore_anon
+		Number of restored anonymous pages which have been detected as
+		an active workingset before they got reclaimed.
+
+	  workingset_restore_file
+		Number of restored file pages which have been detected as an
+		active workingset before they got reclaimed.
 
 	  workingset_nodereclaim
 		Number of times a shadow node has been reclaimed
···
 	the value passed in <key_size>.
 
 <key_type>
-	Either 'logon' or 'user' kernel key type.
+	Either 'logon', 'user' or 'encrypted' kernel key type.
 
 <key_description>
 	The kernel keyring key description crypt target should look for
···
 	significantly. The default is to offload write bios to the same
 	thread because it benefits CFQ to have writes submitted using the
 	same context.
+
+no_read_workqueue
+	Bypass dm-crypt internal workqueue and process read requests synchronously.
+
+no_write_workqueue
+	Bypass dm-crypt internal workqueue and process write requests synchronously.
+	This option is automatically enabled for host-managed zoned block devices
+	(e.g. host-managed SMR hard-disks).
 
 integrity:<bytes>:<type>
 	The device requires additional <bytes> metadata per-sector stored
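The new flags are passed as trailing optional arguments in the dm-crypt table line. A hypothetical table fragment (device path, sector count, and key are illustrative placeholders, not values from the patch):

```
# <start> <size> crypt <cipher> <key> <iv_offset> <device> <offset> \
#         <#opt_params> <opt_params...>
0 2097152 crypt aes-xts-plain64 <key> 0 /dev/sdb 0 2 \
        no_read_workqueue no_write_workqueue
```

With both flags set, bios are processed in the submitting context instead of being bounced through the dm-crypt workqueues, which can reduce latency on fast devices.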
+1-1
Documentation/admin-guide/pm/cpuidle.rst
···
 instruction of the CPUs (which, as a rule, suspends the execution of the program
 and causes the hardware to attempt to enter the shallowest available idle state)
 for this purpose, and if ``idle=poll`` is used, idle CPUs will execute a
-more or less ``lightweight'' sequence of instructions in a tight loop. [Note
+more or less "lightweight" sequence of instructions in a tight loop. [Note
 that using ``idle=poll`` is somewhat drastic in many cases, as preventing idle
 CPUs from saving almost any energy at all may not be the only effect of it.
 For example, on Intel hardware it effectively prevents CPUs from using
+1-4
Documentation/bpf/ringbuf.rst
···
 already committed. It is thus possible for slow producers to temporarily hold
 off submitted records, that were reserved later.
 
-Reservation/commit/consumer protocol is verified by litmus tests in
-Documentation/litmus_tests/bpf-rb/_.
-
 One interesting implementation bit, that significantly simplifies (and thus
 speeds up as well) implementation of both producers and consumers is how data
 area is mapped twice contiguously back-to-back in the virtual memory.
···
 being available after commit only if consumer has already caught up right up to
 the record being committed. If not, consumer still has to catch up and thus
 will see new data anyways without needing an extra poll notification.
-Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c_) show that
+Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that
 this allows to achieve a very high throughput without having to resort to
 tricks like "notify only every Nth sample", which are necessary with perf
 buffer. For extreme cases, when BPF program wants more manual control of
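The double contiguous mapping mentioned in the ringbuf hunk can be illustrated outside the kernel. This is a simplified Python sketch (not the actual BPF implementation) of why a record that wraps past the end of a power-of-two data area can still be read as one contiguous slice from the mirrored view:

```python
def read_record(data: bytes, pos: int, length: int) -> bytes:
    """Read `length` bytes starting at logical position `pos` from a
    ring of size len(data), using a simulated double mapping."""
    size = len(data)
    assert size & (size - 1) == 0, "ring size must be a power of two"
    mirrored = data + data          # stands in for mapping the area twice
    start = pos & (size - 1)        # wrap the logical position into the ring
    return mirrored[start:start + length]  # never needs to split the read

ring = bytes(range(8))  # 8-byte ring: 00 01 ... 07
# A 4-byte record at logical offset 6 wraps around the end of the ring,
# yet the mirrored view returns it in a single contiguous slice.
print(read_record(ring, 6, 4).hex())
```

In the kernel the same effect is achieved by mapping the data pages twice back-to-back in virtual memory, so neither producers nor consumers ever have to handle a record split across the wrap point.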
···
 - gpio-controller : Marks the device node as a GPIO controller
 - interrupts : Interrupt specifier, see interrupt-controller/interrupts.txt
 - interrupt-controller : Mark the GPIO controller as an interrupt-controller
-- ngpios : number of GPIO lines, see gpio.txt
-  (should be multiple of 8, up to 80 pins)
+- ngpios : number of *hardware* GPIO lines, see gpio.txt. This will expose
+  2 software GPIOs per hardware GPIO: one for hardware input, one for hardware
+  output. Up to 80 pins, must be a multiple of 8.
 - clocks : A phandle to the APB clock for SGPM clock division
 - bus-frequency : SGPM CLK frequency
···
-* Sony 1/2.5-Inch 8.51Mp CMOS Digital Image Sensor
-
-The Sony imx274 is a 1/2.5-inch CMOS active pixel digital image sensor with
-an active array size of 3864H x 2202V. It is programmable through I2C
-interface. The I2C address is fixed to 0x1a as per sensor data sheet.
-Image data is sent through MIPI CSI-2, which is configured as 4 lanes
-at 1440 Mbps.
-
-
-Required Properties:
-- compatible: value should be "sony,imx274" for imx274 sensor
-- reg: I2C bus address of the device
-
-Optional Properties:
-- reset-gpios: Sensor reset GPIO
-- clocks: Reference to the input clock.
-- clock-names: Should be "inck".
-- VANA-supply: Sensor 2.8v analog supply.
-- VDIG-supply: Sensor 1.8v digital core supply.
-- VDDL-supply: Sensor digital IO 1.2v supply.
-
-The imx274 device node should contain one 'port' child node with
-an 'endpoint' subnode. For further reading on port node refer to
-Documentation/devicetree/bindings/media/video-interfaces.txt.
-
-Example:
-	sensor@1a {
-		compatible = "sony,imx274";
-		reg = <0x1a>;
-		#address-cells = <1>;
-		#size-cells = <0>;
-		reset-gpios = <&gpio_sensor 0 0>;
-		port {
-			sensor_out: endpoint {
-				remote-endpoint = <&csiss_in>;
-			};
-		};
-	};
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/i2c/sony,imx274.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Sony 1/2.5-Inch 8.51MP CMOS Digital Image Sensor
+
+maintainers:
+  - Leon Luo <leonl@leopardimaging.com>
+
+description: |
+  The Sony IMX274 is a 1/2.5-inch CMOS active pixel digital image sensor with an
+  active array size of 3864H x 2202V. It is programmable through I2C interface.
+  Image data is sent through MIPI CSI-2, which is configured as 4 lanes at 1440
+  Mbps.
+
+properties:
+  compatible:
+    const: sony,imx274
+
+  reg:
+    const: 0x1a
+
+  reset-gpios:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: inck
+
+  vana-supply:
+    description: Sensor 2.8 V analog supply.
+    maxItems: 1
+
+  vdig-supply:
+    description: Sensor 1.8 V digital core supply.
+    maxItems: 1
+
+  vddl-supply:
+    description: Sensor digital IO 1.2 V supply.
+    maxItems: 1
+
+  port:
+    type: object
+    description: Output video port. See ../video-interfaces.txt.
+
+required:
+  - compatible
+  - reg
+  - port
+
+additionalProperties: false
+
+examples:
+  - |
+    i2c0 {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        imx274: camera-sensor@1a {
+            compatible = "sony,imx274";
+            reg = <0x1a>;
+            reset-gpios = <&gpio_sensor 0 0>;
+
+            port {
+                sensor_out: endpoint {
+                    remote-endpoint = <&csiss_in>;
+                };
+            };
+        };
+    };
+
+...
+2-2
Documentation/kbuild/llvm.rst
···
 	ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make CC=clang
 
 ``CROSS_COMPILE`` is not used to prefix the Clang compiler binary, instead
-``CROSS_COMPILE`` is used to set a command line flag: ``--target <triple>``. For
+``CROSS_COMPILE`` is used to set a command line flag: ``--target=<triple>``. For
 example: ::
 
-	clang --target aarch64-linux-gnu foo.c
+	clang --target=aarch64-linux-gnu foo.c
 
 LLVM Utilities
 --------------
+3
Documentation/networking/ethtool-netlink.rst
···
   ``ETHTOOL_MSG_TSINFO_GET``            get timestamping info
   ``ETHTOOL_MSG_CABLE_TEST_ACT``        action start cable test
   ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``    action start raw TDR cable test
+  ``ETHTOOL_MSG_TUNNEL_INFO_GET``       get tunnel offload info
   ===================================== ================================
 
 Kernel to userspace:
···
   ``ETHTOOL_MSG_TSINFO_GET_REPLY``      timestamping info
   ``ETHTOOL_MSG_CABLE_TEST_NTF``        Cable test results
   ``ETHTOOL_MSG_CABLE_TEST_TDR_NTF``    Cable test TDR results
+  ``ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY`` tunnel offload info
   ===================================== =================================
 
 ``GET`` requests are sent by userspace applications to retrieve device
···
   ``ETHTOOL_SFECPARAM``               n/a
   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_ACT``
   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``
+  n/a                                 ``ETHTOOL_MSG_TUNNEL_INFO_GET``
   =================================== =====================================
-17
Documentation/userspace-api/media/v4l/buffer.rst
···
     :stub-columns: 0
     :widths:       3 1 4
 
-    * .. _`V4L2-FLAG-MEMORY-NON-CONSISTENT`:
-
-      - ``V4L2_FLAG_MEMORY_NON_CONSISTENT``
-      - 0x00000001
-      - A buffer is allocated either in consistent (it will be automatically
-	coherent between the CPU and the bus) or non-consistent memory. The
-	latter can provide performance gains, for instance the CPU cache
-	sync/flush operations can be avoided if the buffer is accessed by the
-	corresponding device only and the CPU does not read/write to/from that
-	buffer. However, this requires extra care from the driver -- it must
-	guarantee memory consistency by issuing a cache flush/sync when
-	consistency is needed. If this flag is set V4L2 will attempt to
-	allocate the buffer in non-consistent memory. The flag takes effect
-	only if the buffer is used for :ref:`memory mapping <mmap>` I/O and the
-	queue reports the :ref:`V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS
-	<V4L2-BUF-CAP-SUPPORTS-MMAP-CACHE-HINTS>` capability.
-
 .. c:type:: v4l2_memory
 
 enum v4l2_memory
···
       If you want to just query the capabilities without making any
       other changes, then set ``count`` to 0, ``memory`` to
       ``V4L2_MEMORY_MMAP`` and ``format.type`` to the buffer type.
-    * - __u32
-      - ``flags``
-      - Specifies additional buffer management attributes.
-	See :ref:`memory-flags`.
 
     * - __u32
-      - ``reserved``\ [6]
+      - ``reserved``\ [7]
       - A place holder for future extensions. Drivers and applications
 	must set the array to zero.
···
       ``V4L2_MEMORY_MMAP`` and ``type`` set to the buffer type. This will
       free any previously allocated buffers, so this is typically something
       that will be done at the start of the application.
-    * - union {
-      - (anonymous)
-    * - __u32
-      - ``flags``
-      - Specifies additional buffer management attributes.
-	See :ref:`memory-flags`.
     * - __u32
       - ``reserved``\ [1]
-      - Kept for backwards compatibility. Use ``flags`` instead.
-    * - }
-      -
+      - A place holder for future extensions. Drivers and applications
+	must set the array to zero.
 
 .. tabularcolumns:: |p{6.1cm}|p{2.2cm}|p{8.7cm}|
···
       - This capability is set by the driver to indicate that the queue supports
         cache and memory management hints. However, it's only valid when the
         queue is used for :ref:`memory mapping <mmap>` streaming I/O. See
-	:ref:`V4L2_FLAG_MEMORY_NON_CONSISTENT <V4L2-FLAG-MEMORY-NON-CONSISTENT>`,
 	:ref:`V4L2_BUF_FLAG_NO_CACHE_INVALIDATE <V4L2-BUF-FLAG-NO-CACHE-INVALIDATE>` and
 	:ref:`V4L2_BUF_FLAG_NO_CACHE_CLEAN <V4L2-BUF-FLAG-NO-CACHE-CLEAN>`.
+20
Documentation/virt/kvm/api.rst
···
 is supported, than the other should as well and vice versa. For arm64
 see Documentation/virt/kvm/devices/vcpu.rst "KVM_ARM_VCPU_PVTIME_CTRL".
 For x86 see Documentation/virt/kvm/msr.rst "MSR_KVM_STEAL_TIME".
+
+8.25 KVM_CAP_S390_DIAG318
+-------------------------
+
+:Architectures: s390
+
+This capability enables a guest to set information about its control program
+(i.e. guest kernel type and version). The information is helpful during
+system/firmware service events, providing additional data about the guest
+environments running on the machine.
+
+The information is associated with the DIAGNOSE 0x318 instruction, which sets
+an 8-byte value consisting of a one-byte Control Program Name Code (CPNC) and
+a 7-byte Control Program Version Code (CPVC). The CPNC determines what
+environment the control program is running in (e.g. Linux, z/VM...), and the
+CPVC is used for information specific to OS (e.g. Linux version, Linux
+distribution...)
+
+If this capability is available, then the CPNC and CPVC can be synchronized
+between KVM and userspace via the sync regs mechanism (KVM_SYNC_DIAG318).
+8-10
MAINTAINERS
···
 F:	fs/configfs/
 F:	include/linux/configfs.h
 
-CONNECTOR
-M:	Evgeniy Polyakov <zbr@ioremap.net>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	drivers/connector/
-
 CONSOLE SUBSYSTEM
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 S:	Supported
···
 F:	drivers/pci/hotplug/rpaphp*
 
 IBM Power SRIOV Virtual NIC Device Driver
-M:	Thomas Falcon <tlfalcon@linux.ibm.com>
-M:	John Allen <jallen@linux.ibm.com>
+M:	Dany Madden <drt@linux.ibm.com>
+M:	Lijun Pan <ljp@linux.ibm.com>
+M:	Sukadev Bhattiprolu <sukadev@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmvnic.*
···
 F:	arch/powerpc/platforms/powernv/vas*
 
 IBM Power Virtual Ethernet Device Driver
-M:	Thomas Falcon <tlfalcon@linux.ibm.com>
+M:	Cristobal Forno <cforno12@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmveth.*
···
 
 MEDIATEK SWITCH DRIVER
 M:	Sean Wang <sean.wang@mediatek.com>
+M:	Landen Chao <Landen.Chao@mediatek.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mt7530.*
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 F:	Documentation/devicetree/bindings/net/
+F:	drivers/connector/
 F:	drivers/net/
 F:	include/linux/etherdevice.h
 F:	include/linux/fcdevice.h
···
 
 PCI DRIVER FOR AARDVARK (Marvell Armada 3700)
 M:	Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+M:	Pali Rohár <pali@kernel.org>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···
 L:	linux-media@vger.kernel.org
 S:	Maintained
 T:	git git://linuxtv.org/media_tree.git
-F:	Documentation/devicetree/bindings/media/i2c/imx274.txt
+F:	Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml
 F:	drivers/media/i2c/imx274.c
 
 SONY IMX290 SENSOR DRIVER
···
 	case EFI_BOOT_SERVICES_DATA:
 	case EFI_CONVENTIONAL_MEMORY:
 	case EFI_PERSISTENT_MEMORY:
-		pr_warn(FW_BUG "requested region covers kernel memory @ %pa\n", &phys);
-		return NULL;
+		if (memblock_is_map_memory(phys) ||
+		    !memblock_is_region_memory(phys, size)) {
+			pr_warn(FW_BUG "requested region covers kernel memory @ %pa\n", &phys);
+			return NULL;
+		}
+		/*
+		 * Mapping kernel memory is permitted if the region in
+		 * question is covered by a single memblock with the
+		 * NOMAP attribute set: this enables the use of ACPI
+		 * table overrides passed via initramfs, which are
+		 * reserved in memory using arch_reserve_mem_area()
+		 * below. As this particular use case only requires
+		 * read access, fall through to the R/O mapping case.
+		 */
+		fallthrough;
 
 	case EFI_RUNTIME_SERVICES_CODE:
 		/*
···
 	local_daif_restore(current_flags);
 
 	return err;
+}
+
+void arch_reserve_mem_area(acpi_physical_address addr, size_t size)
+{
+	memblock_mark_nomap(addr, size);
 }
+1-1
arch/arm64/kvm/hyp/include/hyp/switch.h
···
 		kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
 		kvm_vcpu_dabt_isvalid(vcpu) &&
 		!kvm_vcpu_abt_issea(vcpu) &&
-		!kvm_vcpu_dabt_iss1tw(vcpu);
+		!kvm_vcpu_abt_iss1tw(vcpu);
 
 	if (valid) {
 		int ret = __vgic_v2_perform_cpuif_access(vcpu);
+7
arch/arm64/kvm/hyp/nvhe/tlb.c
···
 		isb();
 	}
 
+	/*
+	 * __load_guest_stage2() includes an ISB only when the AT
+	 * workaround is applied. Take care of the opposite condition,
+	 * ensuring that we always have an ISB, but not two ISBs back
+	 * to back.
+	 */
 	__load_guest_stage2(mmu);
+	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
 static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
···
 	case CPU_34K:
 	case CPU_1004K:
 	case CPU_74K:
+	case CPU_1074K:
 	case CPU_M14KC:
 	case CPU_M14KEC:
 	case CPU_INTERAPTIV:
+4
arch/mips/loongson2ef/Platform
···
   endif
 endif
 
+# Some -march= flags enable MMI instructions, and GCC complains about that
+# support being enabled alongside -msoft-float. Thus explicitly disable MMI.
+cflags-y += $(call cc-option,-mno-loongson-mmi)
+
 #
 # Loongson Machines' Support
 #
+8-16
arch/mips/loongson64/cop2-ex.c
···
 		if (res)
 			goto fault;
 
-		set_fpr64(current->thread.fpu.fpr,
-			insn.loongson3_lswc2_format.rt, value);
-		set_fpr64(current->thread.fpu.fpr,
-			insn.loongson3_lswc2_format.rq, value_next);
+		set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0, value);
+		set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0, value_next);
 		compute_return_epc(regs);
 		own_fpu(1);
 	}
···
 			goto sigbus;
 
 		lose_fpu(1);
-		value_next = get_fpr64(current->thread.fpu.fpr,
-				insn.loongson3_lswc2_format.rq);
+		value_next = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0);
 
 		StoreDW(addr + 8, value_next, res);
 		if (res)
 			goto fault;
 
-		value = get_fpr64(current->thread.fpu.fpr,
-				insn.loongson3_lswc2_format.rt);
+		value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0);
 
 		StoreDW(addr, value, res);
 		if (res)
···
 		if (res)
 			goto fault;
 
-		set_fpr64(current->thread.fpu.fpr,
-			insn.loongson3_lsdc2_format.rt, value);
+		set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
 		compute_return_epc(regs);
 		own_fpu(1);
···
 		if (res)
 			goto fault;
 
-		set_fpr64(current->thread.fpu.fpr,
-			insn.loongson3_lsdc2_format.rt, value);
+		set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
 		compute_return_epc(regs);
 		own_fpu(1);
 		break;
···
 			goto sigbus;
 
 		lose_fpu(1);
-		value = get_fpr64(current->thread.fpu.fpr,
-				insn.loongson3_lsdc2_format.rt);
+		value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
 
 		StoreW(addr, value, res);
 		if (res)
···
 			goto sigbus;
 
 		lose_fpu(1);
-		value = get_fpr64(current->thread.fpu.fpr,
-				insn.loongson3_lsdc2_format.rt);
+		value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
 
 		StoreDW(addr, value, res);
 		if (res)
-4
arch/riscv/include/asm/stackprotector.h
···
 #include <linux/random.h>
 #include <linux/version.h>
-#include <asm/timex.h>
 
 extern unsigned long __stack_chk_guard;
···
 static __always_inline void boot_init_stack_canary(void)
 {
 	unsigned long canary;
-	unsigned long tsc;
 
 	/* Try to get a semi random initial value. */
 	get_random_bytes(&canary, sizeof(canary));
-	tsc = get_cycles();
-	canary += tsc + (tsc << BITS_PER_LONG/2);
 	canary ^= LINUX_VERSION_CODE;
 	canary &= CANARY_MASK;
+13
arch/riscv/include/asm/timex.h
···
 #define get_cycles_hi get_cycles_hi
 #endif /* CONFIG_64BIT */
 
+/*
+ * Much like MIPS, we may not have a viable counter to use at an early point
+ * in the boot process. Unfortunately we don't have a fallback, so instead
+ * we just return 0.
+ */
+static inline unsigned long random_get_entropy(void)
+{
+	if (unlikely(clint_time_val == NULL))
+		return 0;
+	return get_cycles();
+}
+#define random_get_entropy() random_get_entropy()
+
 #else /* CONFIG_RISCV_M_MODE */
 
 static inline cycles_t get_cycles(void)
···
  * rdx: Function argument (can be NULL if none)
  */
 SYM_FUNC_START(asm_call_on_stack)
+SYM_INNER_LABEL(asm_call_sysvec_on_stack, SYM_L_GLOBAL)
+SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL)
 	/*
 	 * Save the frame pointer unconditionally. This allows the ORC
 	 * unwinder to handle the stack switch.
···
 	return 1;
 }
 
+static int invd_interception(struct vcpu_svm *svm)
+{
+	/* Treat an INVD instruction as a NOP and just skip it. */
+	return kvm_skip_emulated_instruction(&svm->vcpu);
+}
+
 static int invlpg_interception(struct vcpu_svm *svm)
 {
 	if (!static_cpu_has(X86_FEATURE_DECODEASSISTS))
···
 	[SVM_EXIT_RDPMC]			= rdpmc_interception,
 	[SVM_EXIT_CPUID]			= cpuid_interception,
 	[SVM_EXIT_IRET]				= iret_interception,
-	[SVM_EXIT_INVD]				= emulate_on_interception,
+	[SVM_EXIT_INVD]				= invd_interception,
 	[SVM_EXIT_PAUSE]			= pause_interception,
 	[SVM_EXIT_HLT]				= halt_interception,
 	[SVM_EXIT_INVLPG]			= invlpg_interception,
+22-15
arch/x86/kvm/vmx/vmx.c
···
 module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 #endif
 
+extern bool __read_mostly allow_smaller_maxphyaddr;
+module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
+
 #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
 #define KVM_VM_CR0_ALWAYS_ON \
···
 	 */
 	if (is_guest_mode(vcpu))
 		eb |= get_vmcs12(vcpu)->exception_bitmap;
+	else {
+		/*
+		 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
+		 * between guest and host.  In that case we only care about present
+		 * faults.  For vmcs02, however, PFEC_MASK and PFEC_MATCH are set in
+		 * prepare_vmcs02_rare.
+		 */
+		bool selective_pf_trap = enable_ept && (eb & (1u << PF_VECTOR));
+		int mask = selective_pf_trap ? PFERR_PRESENT_MASK : 0;
+		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, mask);
+		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, mask);
+	}
 
 	vmcs_write32(EXCEPTION_BITMAP, eb);
 }
···
 		vmx->pt_desc.guest.output_mask = 0x7F;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, 0);
 	}
-
-	/*
-	 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
-	 * between guest and host. In that case we only care about present
-	 * faults.
-	 */
-	if (enable_ept) {
-		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, PFERR_PRESENT_MASK);
-		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, PFERR_PRESENT_MASK);
-	}
 }
 
 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
···
 		 * EPT will cause page fault only if we need to
 		 * detect illegal GPAs.
 		 */
+		WARN_ON_ONCE(!allow_smaller_maxphyaddr);
 		kvm_fixup_and_inject_pf_error(vcpu, cr2, error_code);
 		return 1;
 	} else
···
 	 * would also use advanced VM-exit information for EPT violations to
 	 * reconstruct the page fault error code.
 	 */
-	if (unlikely(kvm_mmu_is_illegal_gpa(vcpu, gpa)))
+	if (unlikely(allow_smaller_maxphyaddr && kvm_mmu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
···
 	vmx_check_vmcs12_offsets();
 
 	/*
-	 * Intel processors don't have problems with
-	 * GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable
-	 * it for VMX by default
+	 * Shadow paging doesn't have a (further) performance penalty
+	 * from GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable it
+	 * by default
 	 */
-	allow_smaller_maxphyaddr = true;
+	if (!enable_ept)
+		allow_smaller_maxphyaddr = true;
 
 	return 0;
 }
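The PFEC_MASK/PFEC_MATCH filtering used in the VMX hunks above can be sketched in a few lines. This is a simplified model of the rule described in the Intel SDM (when the #PF bit in the exception bitmap is set, a page fault causes a VM exit iff `(error_code & mask) == match`; when the bit is clear, the test is inverted), written as standalone Python rather than kernel code:

```python
PF_VECTOR = 14
PFERR_PRESENT_MASK = 1 << 0  # bit 0 of the page-fault error code: page present

def pf_causes_vmexit(exception_bitmap: int, mask: int, match: int,
                     error_code: int) -> bool:
    """Model of the VMX page-fault error-code filtering rule."""
    hit = (error_code & mask) == match
    if exception_bitmap & (1 << PF_VECTOR):
        return hit        # bitmap bit set: exit on a filter hit
    return not hit        # bitmap bit clear: exit on a filter miss

# With mask = match = PFERR_PRESENT_MASK (the EPT + smaller-MAXPHYADDR case),
# only faults on *present* pages trap to the hypervisor.
print(pf_causes_vmexit(1 << PF_VECTOR, PFERR_PRESENT_MASK,
                       PFERR_PRESENT_MASK, 0b101))  # present-page fault
print(pf_causes_vmexit(1 << PF_VECTOR, PFERR_PRESENT_MASK,
                       PFERR_PRESENT_MASK, 0b100))  # not-present fault
```

This is why the patch writes the same value to both PAGE_FAULT_ERROR_CODE_MASK and PAGE_FAULT_ERROR_CODE_MATCH: with mask == match == 0, every page fault is a "hit", so the exception-bitmap bit alone decides whether #PF exits.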
···
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
 
-bool __read_mostly allow_smaller_maxphyaddr;
+bool __read_mostly allow_smaller_maxphyaddr = 0;
 EXPORT_SYMBOL_GPL(allow_smaller_maxphyaddr);
 
 static u64 __read_mostly host_xss;
···
 	unsigned long old_cr4 = kvm_read_cr4(vcpu);
 	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
 				   X86_CR4_SMEP;
+	unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;
 
 	if (kvm_valid_cr4(vcpu, cr4))
 		return 1;
···
 	if (kvm_x86_ops.set_cr4(vcpu, cr4))
 		return 1;
 
-	if (((cr4 ^ old_cr4) & pdptr_bits) ||
+	if (((cr4 ^ old_cr4) & mmu_role_bits) ||
 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
 		kvm_mmu_reset_context(vcpu);
 
···
 	case MSR_IA32_POWER_CTL:
 		msr_info->data = vcpu->arch.msr_ia32_power_ctl;
 		break;
-	case MSR_IA32_TSC:
-		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+	case MSR_IA32_TSC: {
+		/*
+		 * Intel SDM states that MSR_IA32_TSC read adds the TSC offset
+		 * even when not intercepted. AMD manual doesn't explicitly
+		 * state this but appears to behave the same.
+		 *
+		 * On userspace reads and writes, however, we unconditionally
+		 * operate L1's TSC value to ensure backwards-compatible
+		 * behavior for migration.
+		 */
+		u64 tsc_offset = msr_info->host_initiated ? vcpu->arch.l1_tsc_offset :
+							    vcpu->arch.tsc_offset;
+
+		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + tsc_offset;
 		break;
+	}
 	case MSR_MTRRcap:
 	case 0x200 ... 0x2ff:
 		return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);
+1-1
arch/x86/lib/usercopy_64.c
···
 	 */
 	if (size < 8) {
 		if (!IS_ALIGNED(dest, 4) || size != 4)
-			clean_cache_range(dst, 1);
+			clean_cache_range(dst, size);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
+9-9
block/blk-mq.c
···
 
 	hctx->dispatched[queued_to_index(queued)]++;
 
+	/* If we didn't flush the entire list, we could have told the driver
+	 * there was more coming, but that turned out to be a lie.
+	 */
+	if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued)
+		q->mq_ops->commit_rqs(hctx);
 	/*
 	 * Any items that need requeuing? Stuff them into hctx->dispatch,
 	 * that is where we will continue on next queue run.
···
 		bool no_budget_avail = prep == PREP_DISPATCH_NO_BUDGET;
 
 		blk_mq_release_budgets(q, nr_budgets);
-
-		/*
-		 * If we didn't flush the entire list, we could have told
-		 * the driver there was more coming, but that turned out to
-		 * be a lie.
-		 */
-		if (q->mq_ops->commit_rqs && queued)
-			q->mq_ops->commit_rqs(hctx);
 
 		spin_lock(&hctx->lock);
 		list_splice_tail_init(list, &hctx->dispatch);
···
 		struct list_head *list)
 {
 	int queued = 0;
+	int errors = 0;
 
 	while (!list_empty(list)) {
 		blk_status_t ret;
···
 			break;
 		}
 		blk_mq_end_request(rq, ret);
+		errors++;
 		} else
 			queued++;
 	}
···
 	 * the driver there was more coming, but that turned out to
 	 * be a lie.
 	 */
-	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs && queued)
+	if ((!list_empty(list) || errors) &&
+	    hctx->queue->mq_ops->commit_rqs && queued)
 		hctx->queue->mq_ops->commit_rqs(hctx);
 }
+46
block/blk-settings.c
···
 }
 EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
 
+/**
+ * blk_queue_set_zoned - configure a disk queue zoned model.
+ * @disk:	the gendisk of the queue to configure
+ * @model:	the zoned model to set
+ *
+ * Set the zoned model of the request queue of @disk according to @model.
+ * When @model is BLK_ZONED_HM (host managed), this should be called only
+ * if zoned block device support is enabled (CONFIG_BLK_DEV_ZONED option).
+ * If @model specifies BLK_ZONED_HA (host aware), the effective model used
+ * depends on CONFIG_BLK_DEV_ZONED settings and on the existence of partitions
+ * on the disk.
+ */
+void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+{
+	switch (model) {
+	case BLK_ZONED_HM:
+		/*
+		 * Host managed devices are supported only if
+		 * CONFIG_BLK_DEV_ZONED is enabled.
+		 */
+		WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED));
+		break;
+	case BLK_ZONED_HA:
+		/*
+		 * Host aware devices can be treated either as regular block
+		 * devices (similar to drive managed devices) or as zoned block
+		 * devices to take advantage of the zone command set, similarly
+		 * to host managed devices. We try the latter if there are no
+		 * partitions and zoned block device support is enabled, else
+		 * we do nothing special as far as the block layer is concerned.
+		 */
+		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) ||
+		    disk_has_partitions(disk))
+			model = BLK_ZONED_NONE;
+		break;
+	case BLK_ZONED_NONE:
+	default:
+		if (WARN_ON_ONCE(model != BLK_ZONED_NONE))
+			model = BLK_ZONED_NONE;
+		break;
+	}
+
+	disk->queue->limits.zoned = model;
+}
+EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
+
 static int __init blk_settings_init(void)
 {
 	blk_max_low_pfn = max_low_pfn - 1;
···
 	return pfn_to_nid(pfn);
 }
 
+static int do_register_memory_block_under_node(int nid,
+					       struct memory_block *mem_blk)
+{
+	int ret;
+
+	/*
+	 * If this memory block spans multiple nodes, we only indicate
+	 * the last processed node.
+	 */
+	mem_blk->nid = nid;
+
+	ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
+				       &mem_blk->dev.kobj,
+				       kobject_name(&mem_blk->dev.kobj));
+	if (ret)
+		return ret;
+
+	return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
+					&node_devices[nid]->dev.kobj,
+					kobject_name(&node_devices[nid]->dev.kobj));
+}
+
 /* register memory section under specified node if it spans that node */
-static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					void *arg)
+static int register_mem_block_under_node_early(struct memory_block *mem_blk,
+					       void *arg)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int nid = *(int *)arg;
 	unsigned long pfn;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
···
 		}
 
 		/*
-		 * We need to check if page belongs to nid only for the boot
-		 * case, during hotplug we know that all pages in the memory
-		 * block belong to the same node.
+		 * We need to check if page belongs to nid only at the boot
+		 * case because node's ranges can be interleaved.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
-			page_nid = get_nid_for_pfn(pfn);
-			if (page_nid < 0)
-				continue;
-			if (page_nid != nid)
-				continue;
-		}
+		page_nid = get_nid_for_pfn(pfn);
+		if (page_nid < 0)
+			continue;
+		if (page_nid != nid)
+			continue;
 
-		/*
-		 * If this memory block spans multiple nodes, we only indicate
-		 * the last processed node.
-		 */
-		mem_blk->nid = nid;
-
-		ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
-					       &mem_blk->dev.kobj,
-					       kobject_name(&mem_blk->dev.kobj));
-		if (ret)
-			return ret;
-
-		return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
-				&node_devices[nid]->dev.kobj,
-				kobject_name(&node_devices[nid]->dev.kobj));
+		return do_register_memory_block_under_node(nid, mem_blk);
 	}
 	/* mem section does not span the specified node */
 	return 0;
+}
+
+/*
+ * During hotplug we know that all pages in the memory block belong to the same
+ * node.
+ */
+static int register_mem_block_under_node_hotplug(struct memory_block *mem_blk,
+						 void *arg)
+{
+	int nid = *(int *)arg;
+
+	return do_register_memory_block_under_node(nid, mem_blk);
 }
 
 /*
···
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      enum meminit_context context)
 {
+	walk_memory_blocks_func_t func;
+
+	if (context == MEMINIT_HOTPLUG)
+		func = register_mem_block_under_node_hotplug;
+	else
+		func = register_mem_block_under_node_early;
+
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
 				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
-				  register_mem_sect_under_node);
+				  func);
 }
 
 #ifdef CONFIG_HUGETLBFS
···
 	unsigned long flags = 0;
 	unsigned long input_rate;
 
-	if (clk_pll_is_enabled(hw))
-		return 0;
-
 	input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
 
 	if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate))
···
 	pll_writel(val, PLLE_SS_CTRL, pll);
 	udelay(1);
 
-	/* Enable hw control of xusb brick pll */
+	/* Enable HW control of XUSB brick PLL */
 	val = pll_readl_misc(pll);
 	val &= ~PLLE_MISC_IDDQ_SW_CTRL;
 	pll_writel_misc(val, pll);
···
 	val |= XUSBIO_PLL_CFG0_SEQ_ENABLE;
 	pll_writel(val, XUSBIO_PLL_CFG0, pll);
 
-	/* Enable hw control of SATA pll */
+	/* Enable HW control of SATA PLL */
 	val = pll_readl(SATA_PLL_CFG0, pll);
 	val &= ~SATA_PLL_CFG0_PADPLL_RESET_SWCTL;
 	val |= SATA_PLL_CFG0_PADPLL_USE_LOCKDET;
···
 		return PTR_ERR(clk);
 	}
 
-	ret = ENXIO;
+	ret = -ENXIO;
 	base = of_iomap(node, 0);
 	if (!base) {
 		pr_err("failed to map registers for clockevent\n");
···
 		return -1;
 
 	/* Do runtime PM to manage a hierarchical CPU toplogy. */
-	pm_runtime_put_sync_suspend(pd_dev);
+	RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev));
 
 	state = psci_get_domain_state();
 	if (!state)
···
 
 	ret = psci_cpu_suspend_enter(state) ? -1 : idx;
 
-	pm_runtime_get_sync(pd_dev);
+	RCU_NONIDLE(pm_runtime_get_sync(pd_dev));
 
 	cpu_pm_exit();
 
drivers/cpuidle/cpuidle.c (-10)
···
 
 	time_start = ns_to_ktime(local_clock());
 
-	/*
-	 * trace_suspend_resume() called by tick_freeze() for the last CPU
-	 * executing it contains RCU usage regarded as invalid in the idle
-	 * context, so tell RCU about that.
-	 */
 	tick_freeze();
 	/*
 	 * The state used here cannot be a "coupled" one, because the "coupled"
···
 	target_state->enter_s2idle(dev, drv, index);
 	if (WARN_ON_ONCE(!irqs_disabled()))
 		local_irq_disable();
-	/*
-	 * timekeeping_resume() that will be called by tick_unfreeze() for the
-	 * first CPU executing it calls functions containing RCU read-side
-	 * critical sections, so tell RCU about that.
-	 */
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
 		rcu_idle_exit();
 	tick_unfreeze();
···
  * @nr_channels:	number of channels under test
  * @lock:		access protection to the fields of this structure
  * @did_init:		module has been initialized completely
+ * @last_error:		test has faced configuration issues
  */
 static struct dmatest_info {
 	/* Test parameters */
···
 	/* Internal state */
 	struct list_head	channels;
 	unsigned int		nr_channels;
+	int			last_error;
 	struct mutex		lock;
 	bool			did_init;
 } test_info = {
···
 		return ret;
 	} else if (dmatest_run) {
 		if (!is_threaded_test_pending(info)) {
-			pr_info("No channels configured, continue with any\n");
-			if (!is_threaded_test_run(info))
-				stop_threaded_test(info);
-			add_threaded_test(info);
+			/*
+			 * We have nothing to run. This can be due to:
+			 */
+			ret = info->last_error;
+			if (ret) {
+				/* 1) Misconfiguration */
+				pr_err("Channel misconfigured, can't continue\n");
+				mutex_unlock(&info->lock);
+				return ret;
+			} else {
+				/* 2) We rely on defaults */
+				pr_info("No channels configured, continue with any\n");
+				if (!is_threaded_test_run(info))
+					stop_threaded_test(info);
+				add_threaded_test(info);
+			}
 		}
 		start_threaded_tests(info);
 	} else {
···
 	struct dmatest_info *info = &test_info;
 	struct dmatest_chan *dtc;
 	char chan_reset_val[20];
-	int ret = 0;
+	int ret;
 
 	mutex_lock(&info->lock);
 	ret = param_set_copystring(val, kp);
···
 		goto add_chan_err;
 	}
 
+	info->last_error = ret;
 	mutex_unlock(&info->lock);
 
 	return ret;
 
add_chan_err:
 	param_set_copystring(chan_reset_val, kp);
+	info->last_error = ret;
 	mutex_unlock(&info->lock);
 
 	return ret;
drivers/gpio/gpio-amd-fch.c (+1 -1)
···
 	ret = (readl_relaxed(ptr) & AMD_FCH_GPIO_FLAG_DIRECTION);
 	spin_unlock_irqrestore(&priv->lock, flags);
 
-	return ret ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT;
+	return ret ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
 }
 
 static void amd_fch_gpio_set(struct gpio_chip *gc,
drivers/gpio/gpio-aspeed-sgpio.c (+87 -47)
···
 #include <linux/spinlock.h>
 #include <linux/string.h>
 
-#define MAX_NR_SGPIO			80
+/*
+ * MAX_NR_HW_GPIO represents the number of actual hardware-supported GPIOs (ie,
+ * slots within the clocked serial GPIO data). Since each HW GPIO is both an
+ * input and an output, we provide MAX_NR_HW_GPIO * 2 lines on our gpiochip
+ * device.
+ *
+ * We use SGPIO_OUTPUT_OFFSET to define the split between the inputs and
+ * outputs; the inputs start at line 0, the outputs start at OUTPUT_OFFSET.
+ */
+#define MAX_NR_HW_SGPIO			80
+#define SGPIO_OUTPUT_OFFSET		MAX_NR_HW_SGPIO
 
 #define ASPEED_SGPIO_CTRL		0x54
···
 	struct clk *pclk;
 	spinlock_t lock;
 	void __iomem *base;
-	uint32_t dir_in[3];
 	int irq;
+	int n_sgpio;
 };
 
 struct aspeed_sgpio_bank {
···
 	}
 }
 
-#define GPIO_BANK(x)    ((x) >> 5)
-#define GPIO_OFFSET(x)  ((x) & 0x1f)
+#define GPIO_BANK(x)    ((x % SGPIO_OUTPUT_OFFSET) >> 5)
+#define GPIO_OFFSET(x)  ((x % SGPIO_OUTPUT_OFFSET) & 0x1f)
 #define GPIO_BIT(x)     BIT(GPIO_OFFSET(x))
 
 static const struct aspeed_sgpio_bank *to_bank(unsigned int offset)
 {
-	unsigned int bank = GPIO_BANK(offset);
+	unsigned int bank;
+
+	bank = GPIO_BANK(offset);
 
 	WARN_ON(bank >= ARRAY_SIZE(aspeed_sgpio_banks));
 	return &aspeed_sgpio_banks[bank];
+}
+
+static int aspeed_sgpio_init_valid_mask(struct gpio_chip *gc,
+		unsigned long *valid_mask, unsigned int ngpios)
+{
+	struct aspeed_sgpio *sgpio = gpiochip_get_data(gc);
+	int n = sgpio->n_sgpio;
+	int c = SGPIO_OUTPUT_OFFSET - n;
+
+	WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2);
+
+	/* input GPIOs in the lower range */
+	bitmap_set(valid_mask, 0, n);
+	bitmap_clear(valid_mask, n, c);
+
+	/* output GPIOS above SGPIO_OUTPUT_OFFSET */
+	bitmap_set(valid_mask, SGPIO_OUTPUT_OFFSET, n);
+	bitmap_clear(valid_mask, SGPIO_OUTPUT_OFFSET + n, c);
+
+	return 0;
+}
+
+static void aspeed_sgpio_irq_init_valid_mask(struct gpio_chip *gc,
+		unsigned long *valid_mask, unsigned int ngpios)
+{
+	struct aspeed_sgpio *sgpio = gpiochip_get_data(gc);
+	int n = sgpio->n_sgpio;
+
+	WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2);
+
+	/* input GPIOs in the lower range */
+	bitmap_set(valid_mask, 0, n);
+	bitmap_clear(valid_mask, n, ngpios - n);
+}
+
+static bool aspeed_sgpio_is_input(unsigned int offset)
+{
+	return offset < SGPIO_OUTPUT_OFFSET;
 }
 
 static int aspeed_sgpio_get(struct gpio_chip *gc, unsigned int offset)
···
 	const struct aspeed_sgpio_bank *bank = to_bank(offset);
 	unsigned long flags;
 	enum aspeed_sgpio_reg reg;
-	bool is_input;
 	int rc = 0;
 
 	spin_lock_irqsave(&gpio->lock, flags);
 
-	is_input = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset);
-	reg = is_input ? reg_val : reg_rdata;
+	reg = aspeed_sgpio_is_input(offset) ? reg_val : reg_rdata;
 	rc = !!(ioread32(bank_reg(gpio, bank, reg)) & GPIO_BIT(offset));
 
 	spin_unlock_irqrestore(&gpio->lock, flags);
···
 	return rc;
 }
 
-static void sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val)
+static int sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val)
 {
 	struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
 	const struct aspeed_sgpio_bank *bank = to_bank(offset);
-	void __iomem *addr;
+	void __iomem *addr_r, *addr_w;
 	u32 reg = 0;
 
-	addr = bank_reg(gpio, bank, reg_val);
-	reg = ioread32(addr);
+	if (aspeed_sgpio_is_input(offset))
+		return -EINVAL;
+
+	/* Since this is an output, read the cached value from rdata, then
+	 * update val. */
+	addr_r = bank_reg(gpio, bank, reg_rdata);
+	addr_w = bank_reg(gpio, bank, reg_val);
+
+	reg = ioread32(addr_r);
 
 	if (val)
 		reg |= GPIO_BIT(offset);
 	else
 		reg &= ~GPIO_BIT(offset);
 
-	iowrite32(reg, addr);
+	iowrite32(reg, addr_w);
+
+	return 0;
 }
 
 static void aspeed_sgpio_set(struct gpio_chip *gc, unsigned int offset, int val)
···
 
 static int aspeed_sgpio_dir_in(struct gpio_chip *gc, unsigned int offset)
 {
-	struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
-	unsigned long flags;
-
-	spin_lock_irqsave(&gpio->lock, flags);
-	gpio->dir_in[GPIO_BANK(offset)] |= GPIO_BIT(offset);
-	spin_unlock_irqrestore(&gpio->lock, flags);
-
-	return 0;
+	return aspeed_sgpio_is_input(offset) ? 0 : -EINVAL;
 }
 
 static int aspeed_sgpio_dir_out(struct gpio_chip *gc, unsigned int offset, int val)
 {
 	struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
 	unsigned long flags;
+	int rc;
+
+	/* No special action is required for setting the direction; we'll
+	 * error-out in sgpio_set_value if this isn't an output GPIO */
 
 	spin_lock_irqsave(&gpio->lock, flags);
-
-	gpio->dir_in[GPIO_BANK(offset)] &= ~GPIO_BIT(offset);
-	sgpio_set_value(gc, offset, val);
-
+	rc = sgpio_set_value(gc, offset, val);
 	spin_unlock_irqrestore(&gpio->lock, flags);
 
-	return 0;
+	return rc;
 }
 
 static int aspeed_sgpio_get_direction(struct gpio_chip *gc, unsigned int offset)
 {
-	int dir_status;
-	struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
-	unsigned long flags;
-
-	spin_lock_irqsave(&gpio->lock, flags);
-	dir_status = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset);
-	spin_unlock_irqrestore(&gpio->lock, flags);
-
-	return dir_status;
-
+	return !!aspeed_sgpio_is_input(offset);
 }
 
 static void irqd_to_aspeed_sgpio_data(struct irq_data *d,
···
 
 	irq = &gpio->chip.irq;
 	irq->chip = &aspeed_sgpio_irqchip;
+	irq->init_valid_mask = aspeed_sgpio_irq_init_valid_mask;
 	irq->handler = handle_bad_irq;
 	irq->default_type = IRQ_TYPE_NONE;
 	irq->parent_handler = aspeed_sgpio_irq_handler;
···
 	irq->parents = &gpio->irq;
 	irq->num_parents = 1;
 
-	/* set IRQ settings and Enable Interrupt */
+	/* Apply default IRQ settings */
 	for (i = 0; i < ARRAY_SIZE(aspeed_sgpio_banks); i++) {
 		bank = &aspeed_sgpio_banks[i];
 		/* set falling or level-low irq */
 		iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type0));
 		/* trigger type is edge */
 		iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type1));
-		/* dual edge trigger mode. */
-		iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_type2));
-		/* enable irq */
-		iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_enable));
+		/* single edge trigger */
+		iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type2));
 	}
 
 	return 0;
···
 	if (rc < 0) {
 		dev_err(&pdev->dev, "Could not read ngpios property\n");
 		return -EINVAL;
-	} else if (nr_gpios > MAX_NR_SGPIO) {
+	} else if (nr_gpios > MAX_NR_HW_SGPIO) {
 		dev_err(&pdev->dev, "Number of GPIOs exceeds the maximum of %d: %d\n",
-			MAX_NR_SGPIO, nr_gpios);
+			MAX_NR_HW_SGPIO, nr_gpios);
 		return -EINVAL;
 	}
+	gpio->n_sgpio = nr_gpios;
 
 	rc = of_property_read_u32(pdev->dev.of_node, "bus-frequency", &sgpio_freq);
 	if (rc < 0) {
···
 	spin_lock_init(&gpio->lock);
 
 	gpio->chip.parent = &pdev->dev;
-	gpio->chip.ngpio = nr_gpios;
+	gpio->chip.ngpio = MAX_NR_HW_SGPIO * 2;
+	gpio->chip.init_valid_mask = aspeed_sgpio_init_valid_mask;
 	gpio->chip.direction_input = aspeed_sgpio_dir_in;
 	gpio->chip.direction_output = aspeed_sgpio_dir_out;
 	gpio->chip.get_direction = aspeed_sgpio_get_direction;
···
 	gpio->chip.set_config = NULL;
 	gpio->chip.label = dev_name(&pdev->dev);
 	gpio->chip.base = -1;
-
-	/* set all SGPIO pins as input (1). */
-	memset(gpio->dir_in, 0xff, sizeof(gpio->dir_in));
 
 	aspeed_sgpio_setup_irqs(gpio, pdev);
 
···
 	return 0;
 }
 
-static int omap_gpio_suspend(struct device *dev)
+static int __maybe_unused omap_gpio_suspend(struct device *dev)
 {
 	struct gpio_bank *bank = dev_get_drvdata(dev);
 
···
 	return omap_gpio_runtime_suspend(dev);
 }
 
-static int omap_gpio_resume(struct device *dev)
+static int __maybe_unused omap_gpio_resume(struct device *dev)
 {
 	struct gpio_bank *bank = dev_get_drvdata(dev);
 
drivers/gpio/gpio-pca953x.c (+6 -1)
···
 	int level;
 	bool ret;
 
+	bitmap_zero(pending, MAX_LINE);
+
 	mutex_lock(&chip->i2c_lock);
 	ret = pca953x_irq_pending(chip, pending);
 	mutex_unlock(&chip->i2c_lock);
···
 static int device_pca957x_init(struct pca953x_chip *chip, u32 invert)
 {
 	DECLARE_BITMAP(val, MAX_LINE);
+	unsigned int i;
 	int ret;
 
 	ret = device_pca95xx_init(chip, invert);
···
 		goto out;
 
 	/* To enable register 6, 7 to control pull up and pull down */
-	memset(val, 0x02, NBANK(chip));
+	for (i = 0; i < NBANK(chip); i++)
+		bitmap_set_value8(val, 0x02, i * BANK_SZ);
+
 	ret = pca953x_write_regs(chip, PCA957X_BKEN, val);
 	if (ret)
 		goto out;
drivers/gpio/gpio-siox.c (+1)
···
 	girq->chip = &ddata->ichip;
 	girq->default_type = IRQ_TYPE_NONE;
 	girq->handler = handle_level_irq;
+	girq->threaded = true;
 
 	ret = devm_gpiochip_add_data(dev, &ddata->gchip, NULL);
 	if (ret)
···
 	return events;
 }
 
+static ssize_t lineevent_get_size(void)
+{
+#ifdef __x86_64__
+	/* i386 has no padding after 'id' */
+	if (in_ia32_syscall()) {
+		struct compat_gpioeevent_data {
+			compat_u64	timestamp;
+			u32		id;
+		};
+
+		return sizeof(struct compat_gpioeevent_data);
+	}
+#endif
+	return sizeof(struct gpioevent_data);
+}
 
 static ssize_t lineevent_read(struct file *file,
 			      char __user *buf,
···
 	struct lineevent_state *le = file->private_data;
 	struct gpioevent_data ge;
 	ssize_t bytes_read = 0;
+	ssize_t ge_size;
 	int ret;
 
-	if (count < sizeof(ge))
+	/*
+	 * When compatible system call is being used the struct gpioevent_data,
+	 * in case of at least ia32, has different size due to the alignment
+	 * differences. Because we have first member 64 bits followed by one of
+	 * 32 bits there is no gap between them. The only difference is the
+	 * padding at the end of the data structure. Hence, we calculate the
+	 * actual sizeof() and pass this as an argument to copy_to_user() to
+	 * drop unneeded bytes from the output.
+	 */
+	ge_size = lineevent_get_size();
+	if (count < ge_size)
 		return -EINVAL;
 
 	do {
···
 			break;
 		}
 
-		if (copy_to_user(buf + bytes_read, &ge, sizeof(ge)))
+		if (copy_to_user(buf + bytes_read, &ge, ge_size))
 			return -EFAULT;
-		bytes_read += sizeof(ge);
-	} while (count >= bytes_read + sizeof(ge));
+		bytes_read += ge_size;
+	} while (count >= bytes_read + ge_size);
 
 	return bytes_read;
 }
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c (+2 -8)
···
 MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
-MODULE_FIRMWARE("amdgpu/sienna_cichlid_gpu_info.bin");
-MODULE_FIRMWARE("amdgpu/navy_flounder_gpu_info.bin");
 
 #define AMDGPU_RESUME_MS		2000
 
···
 	case CHIP_CARRIZO:
 	case CHIP_STONEY:
 	case CHIP_VEGA20:
+	case CHIP_SIENNA_CICHLID:
+	case CHIP_NAVY_FLOUNDER:
 	default:
 		return 0;
 	case CHIP_VEGA10:
···
 		break;
 	case CHIP_NAVI12:
 		chip_name = "navi12";
-		break;
-	case CHIP_SIENNA_CICHLID:
-		chip_name = "sienna_cichlid";
-		break;
-	case CHIP_NAVY_FLOUNDER:
-		chip_name = "navy_flounder";
 		break;
 	}
 
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c (+1 -1)
···
 	   take the current one */
 	if (active && !adev->have_disp_power_ref) {
 		adev->have_disp_power_ref = true;
-		goto out;
+		return ret;
 	}
 	/* if we have no active crtcs, then drop the power ref
 	   we got before */
···
 	} else {
 		struct clk_log_info log_info = {0};
 
-		clk_mgr->smu_ver = rn_vbios_smu_get_smu_version(clk_mgr);
 		clk_mgr->periodic_retraining_disabled = rn_vbios_smu_is_periodic_retraining_disabled(clk_mgr);
 
 		/* SMU Version 55.51.0 and up no longer have an issue
···
 		return ret;
 	}
 
-	/*
-	 * Set initialized values (get from vbios) to dpm tables context such as
-	 * gfxclk, memclk, dcefclk, and etc. And enable the DPM feature for each
-	 * type of clks.
-	 */
-	ret = smu_set_default_dpm_table(smu);
-	if (ret) {
-		dev_err(adev->dev, "Failed to setup default dpm clock tables!\n");
-		return ret;
-	}
-
 	ret = smu_populate_umd_state_clk(smu);
 	if (ret) {
 		dev_err(adev->dev, "Failed to populate UMD state clocks!\n");
···
 					    SMU_POWER_SOURCE_DC);
 	if (ret) {
 		dev_err(adev->dev, "Failed to switch to %s mode!\n", adev->pm.ac_power ? "AC" : "DC");
+		return ret;
+	}
+
+	/*
+	 * Set initialized values (get from vbios) to dpm tables context such as
+	 * gfxclk, memclk, dcefclk, and etc. And enable the DPM feature for each
+	 * type of clks.
+	 */
+	ret = smu_set_default_dpm_table(smu);
+	if (ret) {
+		dev_err(adev->dev, "Failed to setup default dpm clock tables!\n");
 		return ret;
 	}
 
···
 		*sclk_mask = 0;
 	} else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) {
 		if (mclk_mask)
-			*mclk_mask = 0;
+			/* mclk levels are in reverse order */
+			*mclk_mask = NUM_MEMCLK_DPM_LEVELS - 1;
 	} else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
 		if(sclk_mask)
 			/* The sclk as gfxclk and has three level about max/min/current */
 			*sclk_mask = 3 - 1;
 
 		if(mclk_mask)
-			*mclk_mask = NUM_MEMCLK_DPM_LEVELS - 1;
+			/* mclk levels are in reverse order */
+			*mclk_mask = 0;
 
 		if(soc_mask)
 			*soc_mask = NUM_SOCCLK_DPM_LEVELS - 1;
···
 	case SMU_UCLK:
 	case SMU_FCLK:
 	case SMU_MCLK:
-		ret = renoir_get_dpm_clk_limited(smu, clk_type, 0, min);
+		ret = renoir_get_dpm_clk_limited(smu, clk_type, NUM_MEMCLK_DPM_LEVELS - 1, min);
 		if (ret)
 			goto failed;
 		break;
drivers/gpu/drm/i915/gvt/vgpu.c (+5 -1)
···
 static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 		struct intel_vgpu_creation_params *param)
 {
+	struct drm_i915_private *dev_priv = gvt->gt->i915;
 	struct intel_vgpu *vgpu;
 	int ret;
 
···
 	if (ret)
 		goto out_clean_sched_policy;
 
-	ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
+	if (IS_BROADWELL(dev_priv))
+		ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
+	else
+		ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
 	if (ret)
 		goto out_clean_sched_policy;
 
drivers/gpu/drm/i915/selftests/mock_gem_device.c (+5 -7)
···
 
 struct drm_i915_private *mock_gem_device(void)
 {
+#if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU)
+	static struct dev_iommu fake_iommu = { .priv = (void *)-1 };
+#endif
 	struct drm_i915_private *i915;
 	struct pci_dev *pdev;
-#if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU)
-	struct dev_iommu iommu;
-#endif
 	int err;
 
 	pdev = kzalloc(sizeof(*pdev), GFP_KERNEL);
···
 	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
 
 #if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU)
-	/* HACK HACK HACK to disable iommu for the fake device; force identity mapping */
-	memset(&iommu, 0, sizeof(iommu));
-	iommu.priv = (void *)-1;
-	pdev->dev.iommu = &iommu;
+	/* HACK to disable iommu for the fake device; force identity mapping */
+	pdev->dev.iommu = &fake_iommu;
 #endif
 
 	pci_set_drvdata(pdev, i915);
drivers/gpu/drm/sun4i/sun8i_csc.h (+1 -1)
···
 
 /* VI channel CSC units offsets */
 #define CCSC00_OFFSET 0xAA050
-#define CCSC01_OFFSET 0xFA000
+#define CCSC01_OFFSET 0xFA050
 #define CCSC10_OFFSET 0xA0000
 #define CCSC11_OFFSET 0xF0000
 
···
 	if (bus->cmd_err == -EAGAIN)
 		ret = i2c_recover_bus(adap);
 
+	/*
+	 * After any type of error, check if LAST bit is still set,
+	 * due to a HW issue.
+	 * It cannot be cleared without resetting the module.
+	 */
+	if (bus->cmd_err &&
+	    (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
+		npcm_i2c_reset(bus);
+
 #if IS_ENABLED(CONFIG_I2C_SLAVE)
 	/* reenable slave if it was enabled */
 	if (bus->slave)
···
 		remove_client_context(device, cid);
 	}
 
+	ib_cq_pool_destroy(device);
+
 	/* Pairs with refcount_set in enable_device */
 	ib_device_put(device);
 	wait_for_completion(&device->unreg_completion);
···
 		if (ret)
 			goto out;
 	}
+
+	ib_cq_pool_init(device);
 
 	down_read(&clients_rwsem);
 	xa_for_each_marked (&clients, index, client, CLIENT_REGISTERED) {
···
 		goto dev_cleanup;
 	}
 
-	ib_cq_pool_init(device);
 	ret = enable_device_and_get(device);
 	dev_set_uevent_suppress(&device->dev, false);
 	/* Mark for userspace that device is ready */
···
 		goto out;
 
 	disable_device(ib_dev);
-	ib_cq_pool_destroy(ib_dev);
 
 	/* Expedite removing unregistered pointers from the hash table */
 	free_netdevs(ib_dev);
drivers/input/mouse/trackpoint.c (+2)
···
 	case TP_VARIANT_ALPS:
 	case TP_VARIANT_ELAN:
 	case TP_VARIANT_NXP:
+	case TP_VARIANT_JYT_SYNAPTICS:
+	case TP_VARIANT_SYNAPTICS:
 		if (variant_id)
 			*variant_id = param[0];
 		if (firmware_id)
···
 }
 
 /*
- * Reads the device exclusion range from ACPI and initializes the IOMMU with
- * it
- */
-static void __init set_device_exclusion_range(u16 devid, struct ivmd_header *m)
-{
-	if (!(m->flags & IVMD_FLAG_EXCL_RANGE))
-		return;
-
-	/*
-	 * Treat per-device exclusion ranges as r/w unity-mapped regions
-	 * since some buggy BIOSes might lead to the overwritten exclusion
-	 * range (exclusion_start and exclusion_length members). This
-	 * happens when there are multiple exclusion ranges (IVMD entries)
-	 * defined in ACPI table.
-	 */
-	m->flags = (IVMD_FLAG_IW | IVMD_FLAG_IR | IVMD_FLAG_UNITY_MAP);
-}
-
-/*
  * Takes a pointer to an AMD IOMMU entry in the ACPI table and
  * initializes the hardware and our data structures with it.
  */
···
 	}
 }
 
-/* called when we find an exclusion range definition in ACPI */
-static int __init init_exclusion_range(struct ivmd_header *m)
-{
-	int i;
-
-	switch (m->type) {
-	case ACPI_IVMD_TYPE:
-		set_device_exclusion_range(m->devid, m);
-		break;
-	case ACPI_IVMD_TYPE_ALL:
-		for (i = 0; i <= amd_iommu_last_bdf; ++i)
-			set_device_exclusion_range(i, m);
-		break;
-	case ACPI_IVMD_TYPE_RANGE:
-		for (i = m->devid; i <= m->aux; ++i)
-			set_device_exclusion_range(i, m);
-		break;
-	default:
-		break;
-	}
-
-	return 0;
-}
-
 /* called for unity map ACPI definition */
 static int __init init_unity_map_range(struct ivmd_header *m)
 {
···
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 	if (e == NULL)
 		return -ENOMEM;
-
-	if (m->flags & IVMD_FLAG_EXCL_RANGE)
-		init_exclusion_range(m);
 
 	switch (m->type) {
 	default:
···
 		e->address_start = PAGE_ALIGN(m->range_start);
 		e->address_end = e->address_start + PAGE_ALIGN(m->range_length);
 		e->prot = m->flags >> 1;
+
+	/*
+	 * Treat per-device exclusion ranges as r/w unity-mapped regions
+	 * since some buggy BIOSes might lead to the overwritten exclusion
+	 * range (exclusion_start and exclusion_length members). This
+	 * happens when there are multiple exclusion ranges (IVMD entries)
+	 * defined in ACPI table.
+	 */
+	if (m->flags & IVMD_FLAG_EXCL_RANGE)
+		e->prot = (IVMD_FLAG_IW | IVMD_FLAG_IR) >> 1;
 
 	DUMP_printk("%s devid_start: %02x:%02x.%x devid_end: %02x:%02x.%x"
 		    " range_start: %016llx range_end: %016llx flags: %x\n", s,
drivers/iommu/exynos-iommu.c (+6 -2)
···
 		return -ENODEV;
 
 	data = platform_get_drvdata(sysmmu);
-	if (!data)
+	if (!data) {
+		put_device(&sysmmu->dev);
 		return -ENODEV;
+	}
 
 	if (!owner) {
 		owner = kzalloc(sizeof(*owner), GFP_KERNEL);
-		if (!owner)
+		if (!owner) {
+			put_device(&sysmmu->dev);
 			return -ENOMEM;
+		}
 
 		INIT_LIST_HEAD(&owner->controllers);
 		mutex_init(&owner->rpm_lock);
drivers/iommu/intel/iommu.c (+2 -2)
···
 	}
 
 	/* Setup the PASID entry for requests without PASID: */
-	spin_lock(&iommu->lock);
+	spin_lock_irqsave(&iommu->lock, flags);
 	if (hw_pass_through && domain_type_is_si(domain))
 		ret = intel_pasid_setup_pass_through(iommu, domain,
 				dev, PASID_RID2PASID);
···
 	else
 		ret = intel_pasid_setup_second_level(iommu, domain,
 				dev, PASID_RID2PASID);
-	spin_unlock(&iommu->lock);
+	spin_unlock_irqrestore(&iommu->lock, flags);
 	if (ret) {
 		dev_err(dev, "Setup RID2PASID failed\n");
 		dmar_remove_one_dev_info(dev);
drivers/md/dm.c (+5 -22)
···
 	return ret;
 }
 
-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
-{
-	unsigned len, sector_count;
-
-	sector_count = bio_sectors(*bio);
-	len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
-
-	if (sector_count > len) {
-		struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
-
-		bio_chain(split, *bio);
-		trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
-		submit_bio_noacct(*bio);
-		*bio = split;
-	}
-}
-
 static blk_qc_t dm_process_bio(struct mapped_device *md,
 			       struct dm_table *map, struct bio *bio)
 {
···
 	}
 
 	/*
-	 * If in ->queue_bio we need to use blk_queue_split(), otherwise
+	 * If in ->submit_bio we need to use blk_queue_split(), otherwise
 	 * queue_limits for abnormal requests (e.g. discard, writesame, etc)
 	 * won't be imposed.
+	 * If called from dm_wq_work() for deferred bio processing, bio
+	 * was already handled by following code with previous ->submit_bio.
 	 */
 	if (current->bio_list) {
 		if (is_abnormal_io(bio))
 			blk_queue_split(&bio);
-		else
-			dm_queue_split(md, ti, &bio);
+		/* regular IO is split by __split_and_process_bio */
 	}
 
 	if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
 		return __process_bio(md, map, bio, ti);
-	else
-		return __split_and_process_bio(md, map, bio);
+	return __split_and_process_bio(md, map, bio);
 }
 
 static blk_qc_t dm_submit_bio(struct bio *bio)
drivers/media/cec/core/cec-adap.c (+1 -1)
···
 	/* Cancel the pending timeout work */
 	if (!cancel_delayed_work(&data->work)) {
 		mutex_unlock(&adap->lock);
-		flush_scheduled_work();
+		cancel_delayed_work_sync(&data->work);
 		mutex_lock(&adap->lock);
 	}
 	/*
drivers/media/common/videobuf2/videobuf2-core.c (+6 -40)
···
 }
 EXPORT_SYMBOL(vb2_verify_memory_type);
 
-static void set_queue_consistency(struct vb2_queue *q, bool consistent_mem)
-{
-	q->dma_attrs &= ~DMA_ATTR_NON_CONSISTENT;
-
-	if (!vb2_queue_allows_cache_hints(q))
-		return;
-	if (!consistent_mem)
-		q->dma_attrs |= DMA_ATTR_NON_CONSISTENT;
-}
-
-static bool verify_consistency_attr(struct vb2_queue *q, bool consistent_mem)
-{
-	bool queue_is_consistent = !(q->dma_attrs & DMA_ATTR_NON_CONSISTENT);
-
-	if (consistent_mem != queue_is_consistent) {
-		dprintk(q, 1, "memory consistency model mismatch\n");
-		return false;
-	}
-	return true;
-}
-
 int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
-		     unsigned int flags, unsigned int *count)
+		     unsigned int *count)
 {
 	unsigned int num_buffers, allocated_buffers, num_planes = 0;
 	unsigned plane_sizes[VB2_MAX_PLANES] = { };
-	bool consistent_mem = true;
 	unsigned int i;
 	int ret;
-
-	if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT)
-		consistent_mem = false;
 
 	if (q->streaming) {
 		dprintk(q, 1, "streaming active\n");
···
 	}
 
 	if (*count == 0 || q->num_buffers != 0 ||
-	    (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory) ||
-	    !verify_consistency_attr(q, consistent_mem)) {
+	    (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) {
 		/*
 		 * We already have buffers allocated, so first check if they
 		 * are not in use and can be freed.
···
 	num_buffers = min_t(unsigned int, num_buffers, VB2_MAX_FRAME);
 	memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
 	q->memory = memory;
-	set_queue_consistency(q, consistent_mem);
 
 	/*
 	 * Ask the driver how many buffers and planes per buffer it requires.
···
 EXPORT_SYMBOL_GPL(vb2_core_reqbufs);
 
 int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
-			 unsigned int flags, unsigned int *count,
+			 unsigned int *count,
 			 unsigned int requested_planes,
 			 const unsigned int requested_sizes[])
 {
 	unsigned int num_planes = 0, num_buffers, allocated_buffers;
 	unsigned plane_sizes[VB2_MAX_PLANES] = { };
-	bool consistent_mem = true;
 	int ret;
-
-	if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT)
-		consistent_mem = false;
 
 	if (q->num_buffers == VB2_MAX_FRAME) {
 		dprintk(q, 1, "maximum number of buffers already allocated\n");
···
 	}
 		memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
 		q->memory = memory;
-		set_queue_consistency(q, consistent_mem);
 		q->waiting_for_buffers = !q->is_output;
 	} else {
 		if (q->memory != memory) {
 			dprintk(q, 1, "memory model mismatch\n");
 			return -EINVAL;
 		}
-		if (!verify_consistency_attr(q, consistent_mem))
-			return -EINVAL;
 	}
 
 	num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers);
···
 	fileio->memory = VB2_MEMORY_MMAP;
 	fileio->type = q->type;
 	q->fileio = fileio;
-	ret = vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+	ret = vb2_core_reqbufs(q, fileio->memory, &fileio->count);
 	if (ret)
 		goto err_kfree;
 
···
 
err_reqbufs:
 	fileio->count = 0;
-	vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+	vb2_core_reqbufs(q, fileio->memory, &fileio->count);
 
err_kfree:
 	q->fileio = NULL;
···
 	vb2_core_streamoff(q, q->type);
 	q->fileio = NULL;
 	fileio->count = 0;
-	vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+	vb2_core_reqbufs(q, fileio->memory, &fileio->count);
 	kfree(fileio);
 	dprintk(q, 3, "file io emulator closed\n");
 }
···123123 /*124124 * NOTE: dma-sg allocates memory using the page allocator directly, so125125 * there is no memory consistency guarantee, hence dma-sg ignores DMA126126- * attributes passed from the upper layer. That means that127127- * V4L2_FLAG_MEMORY_NON_CONSISTENT has no effect on dma-sg buffers.126126+ * attributes passed from the upper layer.128127 */129128 buf->pages = kvmalloc_array(buf->num_pages, sizeof(struct page *),130129 GFP_KERNEL | __GFP_ZERO);
+2-16
drivers/media/common/videobuf2/videobuf2-v4l2.c
···
 #endif
 }
 
-static void clear_consistency_attr(struct vb2_queue *q,
-				   int memory,
-				   unsigned int *flags)
-{
-	if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP)
-		*flags &= ~V4L2_FLAG_MEMORY_NON_CONSISTENT;
-}
-
 int vb2_reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req)
 {
 	int ret = vb2_verify_memory_type(q, req->memory, req->type);
 
 	fill_buf_caps(q, &req->capabilities);
-	clear_consistency_attr(q, req->memory, &req->flags);
-	return ret ? ret : vb2_core_reqbufs(q, req->memory,
-					    req->flags, &req->count);
+	return ret ? ret : vb2_core_reqbufs(q, req->memory, &req->count);
 }
 EXPORT_SYMBOL_GPL(vb2_reqbufs);
···
 	unsigned i;
 
 	fill_buf_caps(q, &create->capabilities);
-	clear_consistency_attr(q, create->memory, &create->flags);
 	create->index = q->num_buffers;
 	if (create->count == 0)
 		return ret != -EBUSY ? ret : 0;
···
 		if (requested_sizes[i] == 0)
 			return -EINVAL;
 	return ret ? ret : vb2_core_create_bufs(q, create->memory,
-						create->flags,
 						&create->count,
 						requested_planes,
 						requested_sizes);
···
 	int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type);
 
 	fill_buf_caps(vdev->queue, &p->capabilities);
-	clear_consistency_attr(vdev->queue, p->memory, &p->flags);
 	if (res)
 		return res;
 	if (vb2_queue_is_busy(vdev, file))
 		return -EBUSY;
-	res = vb2_core_reqbufs(vdev->queue, p->memory, p->flags, &p->count);
+	res = vb2_core_reqbufs(vdev->queue, p->memory, &p->count);
 	/* If count == 0, then the owner has released all buffers and he
 	   is no longer owner of the queue. Otherwise we have a new owner. */
 	if (res == 0)
···
 
 	p->index = vdev->queue->num_buffers;
 	fill_buf_caps(vdev->queue, &p->capabilities);
-	clear_consistency_attr(vdev->queue, p->memory, &p->flags);
 	/*
 	 * If count == 0, then just check if memory and type are valid.
 	 * Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
+1-1
drivers/media/dvb-core/dvb_vb2.c
···
 
 	ctx->buf_siz = req->size;
 	ctx->buf_cnt = req->count;
-	ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, 0, &req->count);
+	ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, &req->count);
 	if (ret) {
 		ctx->state = DVB_VB2_STATE_NONE;
 		dprintk(1, "[%s] count=%d size=%d errno=%d\n", ctx->name,
···
 	ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_ENABLE, true);
 
 	if (cpu_port) {
+		if (!p->interface && dev->compat_interface) {
+			dev_warn(dev->dev,
+				 "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+				 "Please update your device tree.\n",
+				 port);
+			p->interface = dev->compat_interface;
+		}
+
 		/* Configure MII interface for proper network communication. */
 		ksz_read8(dev, REG_PORT_5_CTRL_6, &data8);
 		data8 &= ~PORT_INTERFACE_TYPE;
 		data8 &= ~PORT_GMII_1GPS_MODE;
-		switch (dev->interface) {
+		switch (p->interface) {
 		case PHY_INTERFACE_MODE_MII:
 			p->phydev.speed = SPEED_100;
 			break;
···
 		default:
 			data8 &= ~PORT_RGMII_ID_IN_ENABLE;
 			data8 &= ~PORT_RGMII_ID_OUT_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
 				data8 |= PORT_RGMII_ID_IN_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
 				data8 |= PORT_RGMII_ID_OUT_ENABLE;
 			data8 |= PORT_GMII_1GPS_MODE;
 			data8 |= PORT_INTERFACE_RGMII;
···
 	}
 
 	/* set the real number of ports */
-	dev->ds->num_ports = dev->port_cnt;
+	dev->ds->num_ports = dev->port_cnt + 1;
 
 	return 0;
 }
+19-10
drivers/net/dsa/microchip/ksz9477.c
···
 
 	/* configure MAC to 1G & RGMII mode */
 	ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8);
-	switch (dev->interface) {
+	switch (p->interface) {
 	case PHY_INTERFACE_MODE_MII:
 		ksz9477_set_xmii(dev, 0, &data8);
 		ksz9477_set_gbit(dev, false, &data8);
···
 		ksz9477_set_gbit(dev, true, &data8);
 		data8 &= ~PORT_RGMII_ID_IG_ENABLE;
 		data8 &= ~PORT_RGMII_ID_EG_ENABLE;
-		if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-		    dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+		if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+		    p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
 			data8 |= PORT_RGMII_ID_IG_ENABLE;
-		if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-		    dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+		if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+		    p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
 			data8 |= PORT_RGMII_ID_EG_ENABLE;
 		p->phydev.speed = SPEED_1000;
 		break;
···
 		dev->cpu_port = i;
 		dev->host_mask = (1 << dev->cpu_port);
 		dev->port_mask |= dev->host_mask;
+		p = &dev->ports[i];
 
 		/* Read from XMII register to determine host port
 		 * interface. If set specifically in device tree
 		 * note the difference to help debugging.
 		 */
 		interface = ksz9477_get_interface(dev, i);
-		if (!dev->interface)
-			dev->interface = interface;
-		if (interface && interface != dev->interface)
+		if (!p->interface) {
+			if (dev->compat_interface) {
+				dev_warn(dev->dev,
+					 "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+					 "Please update your device tree.\n",
+					 i);
+				p->interface = dev->compat_interface;
+			} else {
+				p->interface = interface;
+			}
+		}
+		if (interface && interface != p->interface)
 			dev_info(dev->dev,
 				 "use %s instead of %s\n",
-				 phy_modes(dev->interface),
+				 phy_modes(p->interface),
 				 phy_modes(interface));
 
 		/* enable cpu port */
 		ksz9477_port_setup(dev, i, true);
-		p = &dev->ports[dev->cpu_port];
 		p->vid_member = dev->port_mask;
 		p->on = 1;
 	}
+12-1
drivers/net/dsa/microchip/ksz_common.c
···
 		       const struct ksz_dev_ops *ops)
 {
 	phy_interface_t interface;
+	struct device_node *port;
+	unsigned int port_num;
 	int ret;
 
 	if (dev->pdata)
···
 	/* Host port interface will be self detected, or specifically set in
 	 * device tree.
 	 */
+	for (port_num = 0; port_num < dev->port_cnt; ++port_num)
+		dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA;
 	if (dev->dev->of_node) {
 		ret = of_get_phy_mode(dev->dev->of_node, &interface);
 		if (ret == 0)
-			dev->interface = interface;
+			dev->compat_interface = interface;
+		for_each_available_child_of_node(dev->dev->of_node, port) {
+			if (of_property_read_u32(port, "reg", &port_num))
+				continue;
+			if (port_num >= dev->port_cnt)
+				return -EINVAL;
+			of_get_phy_mode(port, &dev->ports[port_num].interface);
+		}
 		dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
 							 "microchip,synclko-125");
 	}
+2-1
drivers/net/dsa/microchip/ksz_common.h
···
 	u32 freeze:1;			/* MIB counter freeze is enabled */
 
 	struct ksz_port_mib mib;
+	phy_interface_t interface;
 };
 
 struct ksz_device {
···
 	int mib_cnt;
 	int mib_port_cnt;
 	int last_port;			/* ports after that not used */
-	phy_interface_t interface;
+	phy_interface_t compat_interface;
 	u32 regs_size;
 	bool phy_errata_9477;
 	bool synclko_125;
+7-1
drivers/net/dsa/ocelot/felix.c
···
 	if (err)
 		return err;
 
-	ocelot_init(ocelot);
+	err = ocelot_init(ocelot);
+	if (err)
+		return err;
+
 	if (ocelot->ptp) {
 		err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
 		if (err) {
···
 {
 	struct ocelot *ocelot = ds->priv;
 	struct felix *felix = ocelot_to_felix(ocelot);
+	int port;
 
 	if (felix->info->mdio_bus_free)
 		felix->info->mdio_bus_free(ocelot);
 
+	for (port = 0; port < ocelot->num_phys_ports; port++)
+		ocelot_deinit_port(ocelot, port);
 	ocelot_deinit_timestamp(ocelot);
 	/* stop workqueue thread */
 	ocelot_deinit(ocelot);
···
 		return ret;
 
 	if (vid == vlanmc.vid) {
-		/* clear VLAN member configurations */
-		vlanmc.vid = 0;
-		vlanmc.priority = 0;
-		vlanmc.member = 0;
-		vlanmc.untag = 0;
-		vlanmc.fid = 0;
-
+		/* Remove this port from the VLAN */
+		vlanmc.member &= ~BIT(port);
+		vlanmc.untag &= ~BIT(port);
+		/*
+		 * If no ports are members of this VLAN
+		 * anymore then clear the whole member
+		 * config so it can be reused.
+		 */
+		if (!vlanmc.member && !vlanmc.untag) {
+			vlanmc.vid = 0;
+			vlanmc.priority = 0;
+			vlanmc.fid = 0;
+		}
 		ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
 		if (ret) {
 			dev_err(smi->dev,
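The rtl8366 hunk above replaces "wipe the whole entry" with "clear only this port's bits, and recycle the entry once it is empty". A minimal userspace sketch of that bitmask logic; the struct layout and names here are illustrative stand-ins, not the driver's types:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Illustrative stand-in for an rtl8366 VLAN member config entry. */
struct vlan_mc {
	uint16_t vid;
	uint16_t member;	/* bitmask of member ports */
	uint16_t untag;		/* bitmask of untagged ports */
	uint8_t fid;
	uint8_t priority;
};

/* Drop one port from the VLAN; once no port is left (tagged or
 * untagged), zero the entry so it can be reused for another VLAN. */
static void vlan_mc_del_port(struct vlan_mc *mc, int port)
{
	mc->member &= ~BIT(port);
	mc->untag &= ~BIT(port);
	if (!mc->member && !mc->untag) {
		mc->vid = 0;
		mc->priority = 0;
		mc->fid = 0;
	}
}
```

Clearing per-port bits instead of the whole entry keeps the VLAN alive for its remaining member ports, which is the bug the patch fixes.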
···
 	struct bnxt *bp = netdev_priv(dev);
 	int reg_len;
 
+	if (!BNXT_PF(bp))
+		return -EOPNOTSUPP;
+
 	reg_len = BNXT_PXP_REG_LEN;
 
 	if (bp->fw_cap & BNXT_FW_CAP_PCIE_STATS_SUPPORTED)
···
 	if (!BNXT_PHY_CFG_ABLE(bp))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
 	if (epause->autoneg) {
-		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
-			return -EINVAL;
+		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+			rc = -EINVAL;
+			goto pause_exit;
+		}
 
 		link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
 		if (bp->hwrm_spec_code >= 0x10201)
···
 	if (epause->tx_pause)
 		link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
 
-	if (netif_running(dev)) {
-		mutex_lock(&bp->link_lock);
+	if (netif_running(dev))
 		rc = bnxt_hwrm_set_pause(bp);
-		mutex_unlock(&bp->link_lock);
-	}
+
+pause_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }
···
 	struct bnxt *bp = netdev_priv(dev);
 	struct ethtool_eee *eee = &bp->eee;
 	struct bnxt_link_info *link_info = &bp->link_info;
-	u32 advertising =
-		_bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+	u32 advertising;
 	int rc = 0;
 
 	if (!BNXT_PHY_CFG_ABLE(bp))
···
 	if (!(bp->flags & BNXT_FLAG_EEE_CAP))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
+	advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
 	if (!edata->eee_enabled)
 		goto eee_ok;
 
 	if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
 		netdev_warn(dev, "EEE requires autoneg\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 	if (edata->tx_lpi_enabled) {
 		if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
 				       edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
 			netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
 				    bp->lpi_tmr_lo, bp->lpi_tmr_hi);
-			return -EINVAL;
+			rc = -EINVAL;
+			goto eee_exit;
 		} else if (!bp->lpi_tmr_hi) {
 			edata->tx_lpi_timer = eee->tx_lpi_timer;
 		}
···
 	} else if (edata->advertised & ~advertising) {
 		netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
 			    edata->advertised, advertising);
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 
 	eee->advertised = edata->advertised;
···
 	if (netif_running(dev))
 		rc = bnxt_hwrm_set_link_setting(bp, false, true);
 
+eee_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }
 
+1-2
drivers/net/ethernet/cadence/macb_main.c
···
 			ctrl |= GEM_BIT(GBE);
 	}
 
-	/* We do not support MLO_PAUSE_RX yet */
-	if (tx_pause)
+	if (rx_pause)
 		ctrl |= MACB_BIT(PAE);
 
 	macb_set_tx_clk(bp->tx_clk, speed, ndev);
+6-3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
···
 static int configure_filter_tcb(struct adapter *adap, unsigned int tid,
 				struct filter_entry *f)
 {
-	if (f->fs.hitcnts)
+	if (f->fs.hitcnts) {
 		set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W,
-			      TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) |
+			      TCB_TIMESTAMP_V(TCB_TIMESTAMP_M),
+			      TCB_TIMESTAMP_V(0ULL),
+			      1);
+		set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W,
 			      TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
-			      TCB_TIMESTAMP_V(0ULL) |
 			      TCB_RTT_TS_RECENT_AGE_V(0ULL),
 			      1);
+	}
 
 	if (f->fs.newdmac)
 		set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
+1-1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
···
 {
 	struct mps_entries_ref *mps_entry, *tmp;
 
-	if (!list_empty(&adap->mps_ref))
+	if (list_empty(&adap->mps_ref))
 		return;
 
 	spin_lock(&adap->mps_ref_lock);
···
 	return err;
 }
 
+static void enable_txqs_napi(struct hinic_dev *nic_dev)
+{
+	int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+	int i;
+
+	for (i = 0; i < num_txqs; i++)
+		napi_enable(&nic_dev->txqs[i].napi);
+}
+
+static void disable_txqs_napi(struct hinic_dev *nic_dev)
+{
+	int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+	int i;
+
+	for (i = 0; i < num_txqs; i++)
+		napi_disable(&nic_dev->txqs[i].napi);
+}
+
 /**
  * free_txqs - Free the Logical Tx Queues of specific NIC device
  * @nic_dev: the specific NIC device
···
 		goto err_create_txqs;
 	}
 
+	enable_txqs_napi(nic_dev);
+
 	err = create_rxqs(nic_dev);
 	if (err) {
 		netif_err(nic_dev, drv, netdev,
···
 	}
 
 err_create_rxqs:
+	disable_txqs_napi(nic_dev);
 	free_txqs(nic_dev);
 
 err_create_txqs:
···
 {
 	struct hinic_dev *nic_dev = netdev_priv(netdev);
 	unsigned int flags;
+
+	/* Disable txq napi first to avoid rewaking txq in free_tx_poll */
+	disable_txqs_napi(nic_dev);
 
 	down(&nic_dev->mgmt_lock);
 
···
 static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
 {
 	struct i40e_mac_filter *f;
-	int num_vlans = 0, bkt;
+	u16 num_vlans = 0, bkt;
 
 	hash_for_each(vsi->mac_filter_hash, bkt, f, hlist) {
 		if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID)
···
  *
  * Called to get number of VLANs and VLAN list present in mac_filter_hash.
  **/
-static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, int *num_vlans,
-				    s16 **vlan_list)
+static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+				    s16 **vlan_list)
 {
 	struct i40e_mac_filter *f;
 	int i = 0;
···
  **/
 static i40e_status
 i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
-		     bool unicast_enable, s16 *vl, int num_vlans)
+		     bool unicast_enable, s16 *vl, u16 num_vlans)
 {
+	i40e_status aq_ret, aq_tmp = 0;
 	struct i40e_pf *pf = vf->pf;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status aq_ret;
 	int i;
 
 	/* No VLAN to set promisc on, set on VSI */
···
 					    vf->vf_id,
 					    i40e_stat_str(&pf->hw, aq_ret),
 					    i40e_aq_str(&pf->hw, aq_err));
+
+			if (!aq_tmp)
+				aq_tmp = aq_ret;
 		}
 
 		aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, seid,
···
 					    vf->vf_id,
 					    i40e_stat_str(&pf->hw, aq_ret),
 					    i40e_aq_str(&pf->hw, aq_err));
+
+			if (!aq_tmp)
+				aq_tmp = aq_ret;
 		}
 	}
+
+	if (aq_tmp)
+		aq_ret = aq_tmp;
+
 	return aq_ret;
 }
···
 	i40e_status aq_ret = I40E_SUCCESS;
 	struct i40e_pf *pf = vf->pf;
 	struct i40e_vsi *vsi;
-	int num_vlans;
+	u16 num_vlans;
 	s16 *vl;
 
 	vsi = i40e_find_vsi_from_id(pf, vsi_id);
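The `aq_tmp` additions above implement a "latch the first failure, keep going" pattern, so a later successful adminq call cannot mask an earlier error in the loop. A self-contained sketch of that pattern; the function and the status array are illustrative, not driver code:

```c
#include <assert.h>

/* Walk a series of operation results, remembering the first non-zero
 * status so it can be returned even if later operations succeed. */
static int apply_all(const int *status, int n)
{
	int first_err = 0;
	int i;

	for (i = 0; i < n; i++)
		if (status[i] && !first_err)
			first_err = status[i];
	return first_err;
}
```

Without the latch, only the status of the last iteration would survive, which is exactly the bug the i40e change addresses.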
+8-12
drivers/net/ethernet/intel/igc/igc.h
···
 #define IGC_RX_HDR_LEN			IGC_RXBUFFER_256
 
 /* Transmit and receive latency (for PTP timestamps) */
-/* FIXME: These values were estimated using the ones that i225 has as
- * basis, they seem to provide good numbers with ptp4l/phc2sys, but we
- * need to confirm them.
- */
-#define IGC_I225_TX_LATENCY_10		9542
-#define IGC_I225_TX_LATENCY_100		1024
-#define IGC_I225_TX_LATENCY_1000	178
-#define IGC_I225_TX_LATENCY_2500	64
-#define IGC_I225_RX_LATENCY_10		20662
-#define IGC_I225_RX_LATENCY_100		2213
-#define IGC_I225_RX_LATENCY_1000	448
-#define IGC_I225_RX_LATENCY_2500	160
+#define IGC_I225_TX_LATENCY_10		240
+#define IGC_I225_TX_LATENCY_100		58
+#define IGC_I225_TX_LATENCY_1000	80
+#define IGC_I225_TX_LATENCY_2500	1325
+#define IGC_I225_RX_LATENCY_10		6450
+#define IGC_I225_RX_LATENCY_100		185
+#define IGC_I225_RX_LATENCY_1000	300
+#define IGC_I225_RX_LATENCY_2500	1485
 
 /* RX and TX descriptor control thresholds.
  * PTHRESH - MAC will consider prefetch if it has fewer than this number of
+19
drivers/net/ethernet/intel/igc/igc_ptp.c
···
 	struct sk_buff *skb = adapter->ptp_tx_skb;
 	struct skb_shared_hwtstamps shhwtstamps;
 	struct igc_hw *hw = &adapter->hw;
+	int adjust = 0;
 	u64 regval;
 
 	if (WARN_ON_ONCE(!skb))
···
 	regval = rd32(IGC_TXSTMPL);
 	regval |= (u64)rd32(IGC_TXSTMPH) << 32;
 	igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
+
+	switch (adapter->link_speed) {
+	case SPEED_10:
+		adjust = IGC_I225_TX_LATENCY_10;
+		break;
+	case SPEED_100:
+		adjust = IGC_I225_TX_LATENCY_100;
+		break;
+	case SPEED_1000:
+		adjust = IGC_I225_TX_LATENCY_1000;
+		break;
+	case SPEED_2500:
+		adjust = IGC_I225_TX_LATENCY_2500;
+		break;
+	}
+
+	shhwtstamps.hwtstamp =
+		ktime_add_ns(shhwtstamps.hwtstamp, adjust);
 
 	/* Clear the lock early before calling skb_tstamp_tx so that
 	 * applications are not woken up before the lock bit is clear. We use
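The two igc hunks above pick a per-link-speed latency constant and add it to the raw TX hardware timestamp. A small sketch of that adjustment using the new constants from the igc.h hunk; the helper functions themselves are illustrative, not driver functions:

```c
#include <assert.h>
#include <stdint.h>

/* New i225 TX latency values (ns) from the igc.h hunk. */
#define IGC_I225_TX_LATENCY_10		240
#define IGC_I225_TX_LATENCY_100		58
#define IGC_I225_TX_LATENCY_1000	80
#define IGC_I225_TX_LATENCY_2500	1325

/* Map link speed (Mbps) to the TX latency compensation in ns. */
static int igc_tx_latency_ns(int speed)
{
	switch (speed) {
	case 10:   return IGC_I225_TX_LATENCY_10;
	case 100:  return IGC_I225_TX_LATENCY_100;
	case 1000: return IGC_I225_TX_LATENCY_1000;
	case 2500: return IGC_I225_TX_LATENCY_2500;
	default:   return 0;	/* unknown speed: no adjustment */
	}
}

/* Add the latency to a raw timestamp, as the driver does with
 * ktime_add_ns() on shhwtstamps.hwtstamp. */
static uint64_t igc_adjust_tx_tstamp(uint64_t raw_ns, int speed)
{
	return raw_ns + (uint64_t)igc_tx_latency_ns(speed);
}
```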
···
 {
 	struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk;
 	u32 cqe_bcnt32 = cqe_bcnt;
-	bool consumed;
 
 	/* Check packet size. Note LRO doesn't use linear SKB */
 	if (unlikely(cqe_bcnt > rq->hw_mtu)) {
···
 	xsk_buff_dma_sync_for_cpu(xdp);
 	prefetch(xdp->data);
 
-	rcu_read_lock();
-	consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp);
-	rcu_read_unlock();
-
 	/* Possible flows:
 	 * - XDP_REDIRECT to XSKMAP:
 	 *   The page is owned by the userspace from now.
···
 	 * allocated first from the Reuse Ring, so it has enough space.
 	 */
 
-	if (likely(consumed)) {
+	if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) {
 		if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
 			__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 		return NULL; /* page/packet was consumed by XDP */
···
 			      u32 cqe_bcnt)
 {
 	struct xdp_buff *xdp = wi->di->xsk;
-	bool consumed;
 
 	/* wi->offset is not used in this function, because xdp->data and the
 	 * DMA address point directly to the necessary place. Furthermore, the
···
 		return NULL;
 	}
 
-	rcu_read_lock();
-	consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp);
-	rcu_read_unlock();
-
-	if (likely(consumed))
+	if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp)))
 		return NULL; /* page/packet was consumed by XDP */
 
 	/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
···
 void mlx5e_close_xsk(struct mlx5e_channel *c)
 {
 	clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
-	napi_synchronize(&c->napi);
-	synchronize_rcu(); /* Sync with the XSK wakeup. */
+	synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */
 
 	mlx5e_close_rq(&c->xskrq);
 	mlx5e_close_cq(&c->xskrq.cq);
···
 #include <net/sock.h>
 
 #include "en.h"
-#include "accel/tls.h"
 #include "fpga/sdk.h"
 #include "en_accel/tls.h"
···
 
 #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
 
+static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
+{
+	return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
+}
+
 int mlx5e_tls_get_count(struct mlx5e_priv *priv)
 {
-	if (!priv->tls)
+	if (!is_tls_atomic_stats(priv))
 		return 0;
 
 	return NUM_TLS_SW_COUNTERS;
···
 {
 	unsigned int i, idx = 0;
 
-	if (!priv->tls)
+	if (!is_tls_atomic_stats(priv))
 		return 0;
 
 	for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
···
 {
 	int i, idx = 0;
 
-	if (!priv->tls)
+	if (!is_tls_atomic_stats(priv))
 		return 0;
 
 	for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+31-54
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	mutex_unlock(&priv->state_lock);
 }
 
-void mlx5e_update_ndo_stats(struct mlx5e_priv *priv)
-{
-	int i;
-
-	for (i = mlx5e_nic_stats_grps_num(priv) - 1; i >= 0; i--)
-		if (mlx5e_nic_stats_grps[i]->update_stats_mask &
-		    MLX5E_NDO_UPDATE_STATS)
-			mlx5e_nic_stats_grps[i]->update_stats(priv);
-}
-
 static void mlx5e_update_stats_work(struct work_struct *work)
 {
 	struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
···
 
 	if (params->xdp_prog)
 		bpf_prog_inc(params->xdp_prog);
-	rq->xdp_prog = params->xdp_prog;
+	RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog);
 
 	rq_xdp_ix = rq->ix;
 	if (xsk)
···
 	if (err < 0)
 		goto err_rq_wq_destroy;
 
-	rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
+	rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 	rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
 	pool_size = 1 << params->log_rq_mtu_frames;
···
 	}
 
 err_rq_wq_destroy:
-	if (rq->xdp_prog)
-		bpf_prog_put(rq->xdp_prog);
+	if (params->xdp_prog)
+		bpf_prog_put(params->xdp_prog);
 	xdp_rxq_info_unreg(&rq->xdp_rxq);
 	page_pool_destroy(rq->page_pool);
 	mlx5_wq_destroy(&rq->wq_ctrl);
···
 
 static void mlx5e_free_rq(struct mlx5e_rq *rq)
 {
+	struct mlx5e_channel *c = rq->channel;
+	struct bpf_prog *old_prog = NULL;
 	int i;
 
-	if (rq->xdp_prog)
-		bpf_prog_put(rq->xdp_prog);
+	/* drop_rq has neither channel nor xdp_prog. */
+	if (c)
+		old_prog = rcu_dereference_protected(rq->xdp_prog,
+						     lockdep_is_held(&c->priv->state_lock));
+	if (old_prog)
+		bpf_prog_put(old_prog);
 
 	switch (rq->wq_type) {
 	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
···
 void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
 {
 	clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
-	napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
+	synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
 }
 
 void mlx5e_close_rq(struct mlx5e_rq *rq)
···
 
 static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
 {
-	struct mlx5e_channel *c = sq->channel;
 	struct mlx5_wq_cyc *wq = &sq->wq;
 
 	clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
-	/* prevent netif_tx_wake_queue */
-	napi_synchronize(&c->napi);
+	synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
 
 	mlx5e_tx_disable_queue(sq->txq);
 
···
 
 void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
 {
-	struct mlx5e_channel *c = icosq->channel;
-
 	clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
-	napi_synchronize(&c->napi);
+	synchronize_rcu(); /* Sync with NAPI. */
 }
 
 void mlx5e_close_icosq(struct mlx5e_icosq *sq)
···
 	struct mlx5e_channel *c = sq->channel;
 
 	clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
-	napi_synchronize(&c->napi);
+	synchronize_rcu(); /* Sync with NAPI. */
 
 	mlx5e_destroy_sq(c->mdev, sq->sqn);
 	mlx5e_free_xdpsq_descs(sq);
···
 
 		s->rx_packets	+= rq_stats->packets + xskrq_stats->packets;
 		s->rx_bytes	+= rq_stats->bytes + xskrq_stats->bytes;
+		s->multicast	+= rq_stats->mcast_packets + xskrq_stats->mcast_packets;
 
 		for (j = 0; j < priv->max_opened_tc; j++) {
 			struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
···
 mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
 {
 	struct mlx5e_priv *priv = netdev_priv(dev);
-	struct mlx5e_vport_stats *vstats = &priv->stats.vport;
 	struct mlx5e_pport_stats *pstats = &priv->stats.pport;
 
 	/* In switchdev mode, monitor counters doesn't monitor
···
 	stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
 			   stats->rx_frame_errors;
 	stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
-
-	/* vport multicast also counts packets that are dropped due to steering
-	 * or rx out of buffer
-	 */
-	stats->multicast =
-		VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
 }
 
 static void mlx5e_set_rx_mode(struct net_device *dev)
···
 	return 0;
 }
 
+static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog)
+{
+	struct bpf_prog *old_prog;
+
+	old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
+				       lockdep_is_held(&rq->channel->priv->state_lock));
+	if (old_prog)
+		bpf_prog_put(old_prog);
+}
+
 static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
 {
 	struct mlx5e_priv *priv = netdev_priv(netdev);
···
 	 */
 	for (i = 0; i < priv->channels.num; i++) {
 		struct mlx5e_channel *c = priv->channels.c[i];
-		bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
 
-		clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
-		if (xsk_open)
-			clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
-		napi_synchronize(&c->napi);
-		/* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
-
-		old_prog = xchg(&c->rq.xdp_prog, prog);
-		if (old_prog)
-			bpf_prog_put(old_prog);
-
-		if (xsk_open) {
-			old_prog = xchg(&c->xskrq.xdp_prog, prog);
-			if (old_prog)
-				bpf_prog_put(old_prog);
-		}
-
-		set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
-		if (xsk_open)
-			set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
-		/* napi_schedule in case we have missed anything */
-		napi_schedule(&c->napi);
+		mlx5e_rq_replace_xdp_prog(&c->rq, prog);
+		if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+			mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
 	}
 
 unlock:
···
 	.enable    = mlx5e_nic_enable,
 	.disable   = mlx5e_nic_disable,
 	.update_rx = mlx5e_update_nic_rx,
-	.update_stats = mlx5e_update_ndo_stats,
+	.update_stats = mlx5e_stats_update_ndo_stats,
 	.update_carrier = mlx5e_update_carrier,
 	.rx_handlers = &mlx5e_rx_handlers_nic,
 	.max_tc = MLX5E_MAX_NUM_TC,
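The `mlx5e_rq_replace_xdp_prog()` helper above swaps the RQ's program pointer under the state lock and drops the reference held on the old program, relying on RCU readers instead of a full channel quiesce. A userspace sketch of the swap-then-put refcount pattern; plain pointer stores stand in for `rcu_replace_pointer()`, and the types here are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative refcounted object standing in for a bpf_prog. */
struct prog {
	int refcnt;
};

/* Drop one reference; a real implementation would free at zero. */
static void prog_put(struct prog *p)
{
	if (p)
		p->refcnt--;
}

/* Swap in the new program and release the reference held on the old
 * one. In the kernel the store is rcu_replace_pointer() made under
 * the priv state lock, and readers hold rcu_read_lock(). */
static void rq_replace_prog(struct prog **slot, struct prog *nprog)
{
	struct prog *old = *slot;

	*slot = nprog;
	prog_put(old);
}
```

The point of the pattern is that the slot always holds exactly one counted reference: installing a new program and dropping the old reference happen as one logical step, so no program can leak or be freed while still installed.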
···
 		cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
 				BIT(QED_MF_LLH_PROTO_CLSS) |
 				BIT(QED_MF_LL2_NON_UNICAST) |
-				BIT(QED_MF_INTER_PF_SWITCH);
+				BIT(QED_MF_INTER_PF_SWITCH) |
+				BIT(QED_MF_DISABLE_ARFS);
 		break;
 	case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
 		cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
···
 
 		DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
 			cdev->mf_bits);
+
+		/* In CMT the PF is unknown when the GFS block processes the
+		 * packet. Therefore the searcher cannot be used, as it has a
+		 * per-PF database, and thus ARFS must be disabled.
+		 */
+		if (QED_IS_CMT(cdev))
+			cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
 	}
 
 	DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
+3
drivers/net/ethernet/qlogic/qed/qed_l2.c
···
 			      struct qed_ptt *p_ptt,
 			      struct qed_arfs_config_params *p_cfg_params)
 {
+	if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
+		return;
+
 	if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
 		qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
 			       p_cfg_params->tcp,
···
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case QED_PCI_ETH_ROCE:
+	case QED_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
+3
drivers/net/ethernet/qlogic/qede/qede_filter.c
···
 {
 	int i;
 
+	if (!edev->dev_info.common.b_arfs_capable)
+		return -EINVAL;
+
 	edev->arfs = vzalloc(sizeof(*edev->arfs));
 	if (!edev->arfs)
 		return -ENOMEM;
+5-6
drivers/net/ethernet/qlogic/qede/qede_main.c
···804804 NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |805805 NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;806806807807- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)807807+ if (edev->dev_info.common.b_arfs_capable)808808 hw_features |= NETIF_F_NTUPLE;809809810810 if (edev->dev_info.common.vxlan_enable ||···22742274 qede_vlan_mark_nonconfigured(edev);22752275 edev->ops->fastpath_stop(edev->cdev);2276227622772277- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {22772277+ if (edev->dev_info.common.b_arfs_capable) {22782278 qede_poll_for_freeing_arfs_filters(edev);22792279 qede_free_arfs(edev);22802280 }···23412341 if (rc)23422342 goto err2;2343234323442344- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {23452345- rc = qede_alloc_arfs(edev);23462346- if (rc)23472347- DP_NOTICE(edev, "aRFS memory allocation failed\n");23442344+ if (qede_alloc_arfs(edev)) {23452345+ edev->ndev->features &= ~NETIF_F_NTUPLE;23462346+ edev->dev_info.common.b_arfs_capable = false;23482347 }2349234823502349 qede_napi_add_enable(edev);
···847847848848#define NETVSC_XDP_HDRM 256849849850850+#define NETVSC_XFER_HEADER_SIZE(rng_cnt) \851851+ (offsetof(struct vmtransfer_page_packet_header, ranges) + \852852+ (rng_cnt) * sizeof(struct vmtransfer_page_range))853853+850854struct multi_send_data {851855 struct sk_buff *skb; /* skb containing the pkt */852856 struct hv_netvsc_packet *pkt; /* netvsc pkt pending */···977973 u32 vf_alloc;978974 /* Serial number of the VF to team with */979975 u32 vf_serial;976976+977977+ /* Is the current data path through the VF NIC? */978978+ bool data_path_is_vf;980979981980 /* Used to temporarily save the config info across hibernation */982981 struct netvsc_device_info *saved_netvsc_dev_info;
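The `NETVSC_XFER_HEADER_SIZE()` macro added above sizes a header that ends in a flexible array of ranges. A minimal sketch of the same `offsetof()` pattern follows; the structures here are hypothetical stand-ins, not the real vmbus ABI:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the vmbus transfer-page structures; the
 * field layout is illustrative only. */
struct xfer_range {
	uint32_t byte_count;
	uint32_t byte_offset;
};

struct xfer_header {
	uint32_t flags;
	uint32_t range_cnt;
	struct xfer_range ranges[];	/* flexible array member */
};

/* Bytes occupied by a header carrying rng_cnt ranges -- the same
 * offsetof() + per-element arithmetic NETVSC_XFER_HEADER_SIZE() uses. */
static inline size_t xfer_header_size(uint32_t rng_cnt)
{
	return offsetof(struct xfer_header, ranges) +
	       (size_t)rng_cnt * sizeof(struct xfer_range);
}
```

Computing the size via `offsetof()` rather than `sizeof(struct xfer_header)` is what makes the flexible-array tail come out right for any count, including zero.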
+111-13
drivers/net/hyperv/netvsc.c
···388388 net_device->recv_section_size = resp->sections[0].sub_alloc_size;389389 net_device->recv_section_cnt = resp->sections[0].num_sub_allocs;390390391391+ /* Ensure buffer will not overflow */392392+ if (net_device->recv_section_size < NETVSC_MTU_MIN || (u64)net_device->recv_section_size *393393+ (u64)net_device->recv_section_cnt > (u64)buf_size) {394394+ netdev_err(ndev, "invalid recv_section_size %u\n",395395+ net_device->recv_section_size);396396+ ret = -EINVAL;397397+ goto cleanup;398398+ }399399+391400 /* Setup receive completion ring.392401 * Add 1 to the recv_section_cnt because at least one entry in a393402 * ring buffer has to be empty.···469460 /* Parse the response */470461 net_device->send_section_size = init_packet->msg.471462 v1_msg.send_send_buf_complete.section_size;463463+ if (net_device->send_section_size < NETVSC_MTU_MIN) {464464+ netdev_err(ndev, "invalid send_section_size %u\n",465465+ net_device->send_section_size);466466+ ret = -EINVAL;467467+ goto cleanup;468468+ }472469473470 /* Section count is simply the size divided by the section size. 
*/474471 net_device->send_section_cnt = buf_size / net_device->send_section_size;···746731 int budget)747732{748733 const struct nvsp_message *nvsp_packet = hv_pkt_data(desc);734734+ u32 msglen = hv_pkt_datalen(desc);735735+736736+ /* Ensure packet is big enough to read header fields */737737+ if (msglen < sizeof(struct nvsp_message_header)) {738738+ netdev_err(ndev, "nvsp_message length too small: %u\n", msglen);739739+ return;740740+ }749741750742 switch (nvsp_packet->hdr.msg_type) {751743 case NVSP_MSG_TYPE_INIT_COMPLETE:744744+ if (msglen < sizeof(struct nvsp_message_header) +745745+ sizeof(struct nvsp_message_init_complete)) {746746+ netdev_err(ndev, "nvsp_msg length too small: %u\n",747747+ msglen);748748+ return;749749+ }750750+ fallthrough;751751+752752 case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:753753+ if (msglen < sizeof(struct nvsp_message_header) +754754+ sizeof(struct nvsp_1_message_send_receive_buffer_complete)) {755755+ netdev_err(ndev, "nvsp_msg1 length too small: %u\n",756756+ msglen);757757+ return;758758+ }759759+ fallthrough;760760+753761 case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:762762+ if (msglen < sizeof(struct nvsp_message_header) +763763+ sizeof(struct nvsp_1_message_send_send_buffer_complete)) {764764+ netdev_err(ndev, "nvsp_msg1 length too small: %u\n",765765+ msglen);766766+ return;767767+ }768768+ fallthrough;769769+754770 case NVSP_MSG5_TYPE_SUBCHANNEL:771771+ if (msglen < sizeof(struct nvsp_message_header) +772772+ sizeof(struct nvsp_5_subchannel_complete)) {773773+ netdev_err(ndev, "nvsp_msg5 length too small: %u\n",774774+ msglen);775775+ return;776776+ }755777 /* Copy the response back */756778 memcpy(&net_device->channel_init_pkt, nvsp_packet,757779 sizeof(struct nvsp_message));···11691117static int netvsc_receive(struct net_device *ndev,11701118 struct netvsc_device *net_device,11711119 struct netvsc_channel *nvchan,11721172- const struct vmpacket_descriptor *desc,11731173- const struct nvsp_message *nvsp)11201120+ const struct 
vmpacket_descriptor *desc)11741121{11751122 struct net_device_context *net_device_ctx = netdev_priv(ndev);11761123 struct vmbus_channel *channel = nvchan->channel;11771124 const struct vmtransfer_page_packet_header *vmxferpage_packet11781125 = container_of(desc, const struct vmtransfer_page_packet_header, d);11261126+ const struct nvsp_message *nvsp = hv_pkt_data(desc);11271127+ u32 msglen = hv_pkt_datalen(desc);11791128 u16 q_idx = channel->offermsg.offer.sub_channel_index;11801129 char *recv_buf = net_device->recv_buf;11811130 u32 status = NVSP_STAT_SUCCESS;11821131 int i;11831132 int count = 0;1184113311341134+ /* Ensure packet is big enough to read header fields */11351135+ if (msglen < sizeof(struct nvsp_message_header)) {11361136+ netif_err(net_device_ctx, rx_err, ndev,11371137+ "invalid nvsp header, length too small: %u\n",11381138+ msglen);11391139+ return 0;11401140+ }11411141+11851142 /* Make sure this is a valid nvsp packet */11861143 if (unlikely(nvsp->hdr.msg_type != NVSP_MSG1_TYPE_SEND_RNDIS_PKT)) {11871144 netif_err(net_device_ctx, rx_err, ndev,11881145 "Unknown nvsp packet type received %u\n",11891146 nvsp->hdr.msg_type);11471147+ return 0;11481148+ }11491149+11501150+ /* Validate xfer page pkt header */11511151+ if ((desc->offset8 << 3) < sizeof(struct vmtransfer_page_packet_header)) {11521152+ netif_err(net_device_ctx, rx_err, ndev,11531153+ "Invalid xfer page pkt, offset too small: %u\n",11541154+ desc->offset8 << 3);11901155 return 0;11911156 }11921157···1217114812181149 count = vmxferpage_packet->range_cnt;1219115011511151+ /* Check count for a valid value */11521152+ if (NETVSC_XFER_HEADER_SIZE(count) > desc->offset8 << 3) {11531153+ netif_err(net_device_ctx, rx_err, ndev,11541154+ "Range count is not valid: %d\n",11551155+ count);11561156+ return 0;11571157+ }11581158+12201159 /* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */12211160 for (i = 0; i < count; i++) {12221161 u32 offset = 
vmxferpage_packet->ranges[i].byte_offset;···12321155 void *data;12331156 int ret;1234115712351235- if (unlikely(offset + buflen > net_device->recv_buf_size)) {11581158+ if (unlikely(offset > net_device->recv_buf_size ||11591159+ buflen > net_device->recv_buf_size - offset)) {12361160 nvchan->rsc.cnt = 0;12371161 status = NVSP_STAT_FAIL;12381162 netif_err(net_device_ctx, rx_err, ndev,···12721194 u32 count, offset, *tab;12731195 int i;1274119611971197+ /* Ensure packet is big enough to read send_table fields */11981198+ if (msglen < sizeof(struct nvsp_message_header) +11991199+ sizeof(struct nvsp_5_send_indirect_table)) {12001200+ netdev_err(ndev, "nvsp_v5_msg length too small: %u\n", msglen);12011201+ return;12021202+ }12031203+12751204 count = nvmsg->msg.v5_msg.send_table.count;12761205 offset = nvmsg->msg.v5_msg.send_table.offset;12771206···13101225}1311122613121227static void netvsc_send_vf(struct net_device *ndev,13131313- const struct nvsp_message *nvmsg)12281228+ const struct nvsp_message *nvmsg,12291229+ u32 msglen)13141230{13151231 struct net_device_context *net_device_ctx = netdev_priv(ndev);12321232+12331233+ /* Ensure packet is big enough to read its fields */12341234+ if (msglen < sizeof(struct nvsp_message_header) +12351235+ sizeof(struct nvsp_4_send_vf_association)) {12361236+ netdev_err(ndev, "nvsp_v4_msg length too small: %u\n", msglen);12371237+ return;12381238+ }1316123913171240 net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;13181241 net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;···1331123813321239static void netvsc_receive_inband(struct net_device *ndev,13331240 struct netvsc_device *nvscdev,13341334- const struct nvsp_message *nvmsg,13351335- u32 msglen)12411241+ const struct vmpacket_descriptor *desc)13361242{12431243+ const struct nvsp_message *nvmsg = hv_pkt_data(desc);12441244+ u32 msglen = hv_pkt_datalen(desc);12451245+12461246+ /* Ensure packet is big enough to read header fields */12471247+ if (msglen < 
sizeof(struct nvsp_message_header)) {12481248+ netdev_err(ndev, "inband nvsp_message length too small: %u\n", msglen);12491249+ return;12501250+ }12511251+13371252 switch (nvmsg->hdr.msg_type) {13381253 case NVSP_MSG5_TYPE_SEND_INDIRECTION_TABLE:13391254 netvsc_send_table(ndev, nvscdev, nvmsg, msglen);13401255 break;1341125613421257 case NVSP_MSG4_TYPE_SEND_VF_ASSOCIATION:13431343- netvsc_send_vf(ndev, nvmsg);12581258+ netvsc_send_vf(ndev, nvmsg, msglen);13441259 break;13451260 }13461261}···13621261{13631262 struct vmbus_channel *channel = nvchan->channel;13641263 const struct nvsp_message *nvmsg = hv_pkt_data(desc);13651365- u32 msglen = hv_pkt_datalen(desc);1366126413671265 trace_nvsp_recv(ndev, channel, nvmsg);1368126613691267 switch (desc->type) {13701268 case VM_PKT_COMP:13711371- netvsc_send_completion(ndev, net_device, channel,13721372- desc, budget);12691269+ netvsc_send_completion(ndev, net_device, channel, desc, budget);13731270 break;1374127113751272 case VM_PKT_DATA_USING_XFER_PAGES:13761376- return netvsc_receive(ndev, net_device, nvchan,13771377- desc, nvmsg);12731273+ return netvsc_receive(ndev, net_device, nvchan, desc);13781274 break;1379127513801276 case VM_PKT_DATA_INBAND:13811381- netvsc_receive_inband(ndev, net_device, nvmsg, msglen);12771277+ netvsc_receive_inband(ndev, net_device, desc);13821278 break;1383127913841280 default:
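Several hunks above replace a check of the form `offset + buflen > size` with a two-part comparison. A small sketch of why, with invented names: the safe form never computes `offset + buflen`, so it cannot wrap.

```c
#include <stdint.h>

/* Overflow-safe bounds check for an (offset, len) pair supplied by an
 * untrusted peer, as in the netvsc_receive() fix. The naive form
 * "offset + len > size" can wrap around in 32-bit arithmetic and
 * wrongly pass, so compare without ever forming the sum. */
static int range_fits(uint32_t offset, uint32_t len, uint32_t size)
{
	return offset <= size && len <= size - offset;
}
```

With `offset == UINT32_MAX` and `len == 2`, the naive sum wraps to 1 and passes a check against any buffer size; `range_fits()` rejects it at the first comparison.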
+29-6
drivers/net/hyperv/netvsc_drv.c
···748748 struct netvsc_reconfig *event;749749 unsigned long flags;750750751751+ /* Ensure the packet is big enough to access its fields */752752+ if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(struct rndis_indicate_status)) {753753+ netdev_err(net, "invalid rndis_indicate_status packet, len: %u\n",754754+ resp->msg_len);755755+ return;756756+ }757757+751758 /* Update the physical link speed when changing to another vSwitch */752759 if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) {753760 u32 speed;···23732366 return NOTIFY_OK;23742367}2375236823762376-/* VF up/down change detected, schedule to change data path */23692369+/* Change the data path when VF UP/DOWN/CHANGE are detected.23702370+ *23712371+ * Typically a UP or DOWN event is followed by a CHANGE event, so23722372+ * net_device_ctx->data_path_is_vf is used to cache the current data path23732373+ * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate23742374+ * message.23752375+ *23762376+ * During hibernation, if a VF NIC driver (e.g. 
mlx5) preserves the network23772377+ * interface, there is only the CHANGE event and no UP or DOWN event.23782378+ */23772379static int netvsc_vf_changed(struct net_device *vf_netdev)23782380{23792381 struct net_device_context *net_device_ctx;···23982382 netvsc_dev = rtnl_dereference(net_device_ctx->nvdev);23992383 if (!netvsc_dev)24002384 return NOTIFY_DONE;23852385+23862386+ if (net_device_ctx->data_path_is_vf == vf_is_up)23872387+ return NOTIFY_OK;23882388+ net_device_ctx->data_path_is_vf = vf_is_up;2401238924022390 netvsc_switch_datapath(ndev, vf_is_up);24032391 netdev_info(ndev, "Data path switched %s VF: %s\n",···26072587static int netvsc_suspend(struct hv_device *dev)26082588{26092589 struct net_device_context *ndev_ctx;26102610- struct net_device *vf_netdev, *net;26112590 struct netvsc_device *nvdev;25912591+ struct net_device *net;26122592 int ret;2613259326142594 net = hv_get_drvdata(dev);···26232603 ret = -ENODEV;26242604 goto out;26252605 }26262626-26272627- vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);26282628- if (vf_netdev)26292629- netvsc_unregister_vf(vf_netdev);2630260626312607 /* Save the current config info */26322608 ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);···26442628 rtnl_lock();2645262926462630 net_device_ctx = netdev_priv(net);26312631+26322632+ /* Reset the data path to the netvsc NIC before re-opening the vmbus26332633+ * channel. Later netvsc_netdev_event() will switch the data path to26342634+ * the VF upon the UP or CHANGE event.26352635+ */26362636+ net_device_ctx->data_path_is_vf = false;26472637 device_info = net_device_ctx->saved_netvsc_dev_info;2648263826492639 ret = netvsc_attach(net, device_info);···27172695 return netvsc_unregister_vf(event_dev);27182696 case NETDEV_UP:27192697 case NETDEV_DOWN:26982698+ case NETDEV_CHANGE:27202699 return netvsc_vf_changed(event_dev);27212700 default:27222701 return NOTIFY_DONE;
+66-7
drivers/net/hyperv/rndis_filter.c
···275275 return;276276 }277277278278+ /* Ensure the packet is big enough to read req_id. Req_id is the 1st279279+ * field in any request/response message, so the payload should have at280280+ * least sizeof(u32) bytes281281+ */282282+ if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(u32)) {283283+ netdev_err(ndev, "rndis msg_len too small: %u\n",284284+ resp->msg_len);285285+ return;286286+ }287287+278288 spin_lock_irqsave(&dev->request_lock, flags);279289 list_for_each_entry(request, &dev->req_list, list_ent) {280290 /*···341331 * Get the Per-Packet-Info with the specified type342332 * return NULL if not found.343333 */344344-static inline void *rndis_get_ppi(struct rndis_packet *rpkt,345345- u32 type, u8 internal)334334+static inline void *rndis_get_ppi(struct net_device *ndev,335335+ struct rndis_packet *rpkt,336336+ u32 rpkt_len, u32 type, u8 internal)346337{347338 struct rndis_per_packet_info *ppi;348339 int len;···351340 if (rpkt->per_pkt_info_offset == 0)352341 return NULL;353342343343+ /* Validate info_offset and info_len */344344+ if (rpkt->per_pkt_info_offset < sizeof(struct rndis_packet) ||345345+ rpkt->per_pkt_info_offset > rpkt_len) {346346+ netdev_err(ndev, "Invalid per_pkt_info_offset: %u\n",347347+ rpkt->per_pkt_info_offset);348348+ return NULL;349349+ }350350+351351+ if (rpkt->per_pkt_info_len > rpkt_len - rpkt->per_pkt_info_offset) {352352+ netdev_err(ndev, "Invalid per_pkt_info_len: %u\n",353353+ rpkt->per_pkt_info_len);354354+ return NULL;355355+ }356356+354357 ppi = (struct rndis_per_packet_info *)((ulong)rpkt +355358 rpkt->per_pkt_info_offset);356359 len = rpkt->per_pkt_info_len;357360358361 while (len > 0) {362362+ /* Validate ppi_offset and ppi_size */363363+ if (ppi->size > len) {364364+ netdev_err(ndev, "Invalid ppi size: %u\n", ppi->size);365365+ continue;366366+ }367367+368368+ if (ppi->ppi_offset >= ppi->size) {369369+ netdev_err(ndev, "Invalid ppi_offset: %u\n", ppi->ppi_offset);370370+ continue;371371+ }372372+359373 if (ppi->type == 
type && ppi->internal == internal)360374 return (void *)((ulong)ppi + ppi->ppi_offset);361375 len -= ppi->size;···424388 const struct ndis_pkt_8021q_info *vlan;425389 const struct rndis_pktinfo_id *pktinfo_id;426390 const u32 *hash_info;427427- u32 data_offset;391391+ u32 data_offset, rpkt_len;428392 void *data;429393 bool rsc_more = false;430394 int ret;431395396396+ /* Ensure data_buflen is big enough to read header fields */397397+ if (data_buflen < RNDIS_HEADER_SIZE + sizeof(struct rndis_packet)) {398398+ netdev_err(ndev, "invalid rndis pkt, data_buflen too small: %u\n",399399+ data_buflen);400400+ return NVSP_STAT_FAIL;401401+ }402402+403403+ /* Validate rndis_pkt offset */404404+ if (rndis_pkt->data_offset >= data_buflen - RNDIS_HEADER_SIZE) {405405+ netdev_err(ndev, "invalid rndis packet offset: %u\n",406406+ rndis_pkt->data_offset);407407+ return NVSP_STAT_FAIL;408408+ }409409+432410 /* Remove the rndis header and pass it back up the stack */433411 data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset;434412413413+ rpkt_len = data_buflen - RNDIS_HEADER_SIZE;435414 data_buflen -= data_offset;436415437416 /*···461410 return NVSP_STAT_FAIL;462411 }463412464464- vlan = rndis_get_ppi(rndis_pkt, IEEE_8021Q_INFO, 0);413413+ vlan = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, IEEE_8021Q_INFO, 0);465414466466- csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO, 0);415415+ csum_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, TCPIP_CHKSUM_PKTINFO, 0);467416468468- hash_info = rndis_get_ppi(rndis_pkt, NBL_HASH_VALUE, 0);417417+ hash_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, NBL_HASH_VALUE, 0);469418470470- pktinfo_id = rndis_get_ppi(rndis_pkt, RNDIS_PKTINFO_ID, 1);419419+ pktinfo_id = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, RNDIS_PKTINFO_ID, 1);471420472421 data = (void *)msg + data_offset;473422···524473525474 if (netif_msg_rx_status(net_device_ctx))526475 dump_rndis_message(ndev, rndis_msg);476476+477477+ /* Validate incoming rndis_message packet 
*/478478+ if (buflen < RNDIS_HEADER_SIZE || rndis_msg->msg_len < RNDIS_HEADER_SIZE ||479479+ buflen < rndis_msg->msg_len) {480480+ netdev_err(ndev, "Invalid rndis_msg (buflen: %u, msg_len: %u)\n",481481+ buflen, rndis_msg->msg_len);482482+ return NVSP_STAT_FAIL;483483+ }527484528485 switch (rndis_msg->ndis_msg_type) {529486 case RNDIS_MSG_PACKET:
+3-1
drivers/net/ieee802154/adf7242.c
···882882 int ret;883883 u8 lqi, len_u8, *data;884884885885- adf7242_read_reg(lp, 0, &len_u8);885885+ ret = adf7242_read_reg(lp, 0, &len_u8);886886+ if (ret)887887+ return ret;886888887889 len = len_u8;888890
+1
drivers/net/ieee802154/ca8210.c
···29252925 );29262926 if (!priv->irq_workqueue) {29272927 dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");29282928+ destroy_workqueue(priv->mlme_workqueue);29282929 return -ENOMEM;29292930 }29302931
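The ca8210 fix above is the classic partial-initialization rule: when a later allocation fails, everything allocated earlier in the same function must be released before returning the error. A generic sketch with invented names (plain `malloc()` standing in for workqueue creation):

```c
#include <stdlib.h>

struct two_bufs {
	char *first;
	char *second;
};

/* Initialize two resources in order; on failure of the second, undo
 * the first before reporting the error -- the same rule the ca8210
 * patch applies to mlme_workqueue when irq_workqueue creation fails. */
static int two_bufs_init(struct two_bufs *t, size_t n)
{
	t->first = malloc(n);
	if (!t->first)
		return -1;

	t->second = malloc(n);
	if (!t->second) {
		free(t->first);		/* unwind the earlier allocation */
		t->first = NULL;
		return -1;
	}
	return 0;
}

static void two_bufs_free(struct two_bufs *t)
{
	free(t->second);
	free(t->first);
	t->first = t->second = NULL;
}
```

Teardown runs in reverse order of setup, so `two_bufs_free()` mirrors `two_bufs_init()` line for line.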
+2-2
drivers/net/ipa/ipa_table.c
···521521 val = ioread32(endpoint->ipa->reg_virt + offset);522522523523 /* Zero all filter-related fields, preserving the rest */524524- u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);524524+ u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);525525526526 iowrite32(val, endpoint->ipa->reg_virt + offset);527527}···573573 val = ioread32(ipa->reg_virt + offset);574574575575 /* Zero all route-related fields, preserving the rest */576576- u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);576576+ u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);577577578578 iowrite32(val, ipa->reg_virt + offset);579579}
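The ipa_table.c fix swaps `u32_replace_bits()`, which returns the new value, for `u32p_replace_bits()`, which writes it back through a pointer; the original call computed the masked value and discarded it, so the register write that follows sent the unmodified value. A simplified model of the pointer variant (the kernel's real helpers live in `linux/bitfield.h`; `__builtin_ctz` assumes GCC/Clang):

```c
#include <stdint.h>

/* Simplified model of u32p_replace_bits(): replace the field selected
 * by mask with val (given relative to the field), storing the result
 * back through the pointer instead of returning it. */
static inline void u32p_replace_bits_demo(uint32_t *p, uint32_t val,
					  uint32_t mask)
{
	/* position of the field = lowest set bit of the mask */
	unsigned int shift = mask ? (unsigned int)__builtin_ctz(mask) : 0;

	*p = (*p & ~mask) | ((val << shift) & mask);
}
```

Passing `val == 0`, as both hunks above do, simply clears the masked field while preserving the rest of the word.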
+1-1
drivers/net/phy/phy.c
···996996{997997 struct net_device *dev = phydev->attached_dev;998998999999- if (!phy_is_started(phydev)) {999999+ if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {10001000 WARN(1, "called from state %s\n",10011001 phy_state_to_str(phydev->state));10021002 return;
+6-5
drivers/net/phy/phy_device.c
···11431143 if (ret < 0)11441144 return ret;1145114511461146- ret = phy_disable_interrupts(phydev);11471147- if (ret)11481148- return ret;11491149-11501146 if (phydev->drv->config_init)11511147 ret = phydev->drv->config_init(phydev);11521148···14191423 if (err)14201424 goto error;1421142514261426+ err = phy_disable_interrupts(phydev);14271427+ if (err)14281428+ return err;14291429+14221430 phy_resume(phydev);14231431 phy_led_triggers_register(phydev);14241432···1682168216831683 phy_led_triggers_unregister(phydev);1684168416851685- module_put(phydev->mdio.dev.driver->owner);16851685+ if (phydev->mdio.dev.driver)16861686+ module_put(phydev->mdio.dev.driver->owner);1686168716871688 /* If the device had no specific driver before (i.e. - it16881689 * was using the generic driver), we unbind the device
···35593559 case WL1271_CIPHER_SUITE_GEM:35603560 key_type = KEY_GEM;35613561 break;35623562- case WLAN_CIPHER_SUITE_AES_CMAC:35633563- key_type = KEY_IGTK;35643564- break;35653562 default:35663563 wl1271_error("Unknown key algo 0x%x", key_conf->cipher);35673564···62286231 WLAN_CIPHER_SUITE_TKIP,62296232 WLAN_CIPHER_SUITE_CCMP,62306233 WL1271_CIPHER_SUITE_GEM,62316231- WLAN_CIPHER_SUITE_AES_CMAC,62326234 };6233623562346236 /* The tx descriptor buffer */
+1
drivers/nvme/host/Kconfig
···7373 depends on INET7474 depends on BLK_DEV_NVME7575 select NVME_FABRICS7676+ select CRYPTO7677 select CRYPTO_CRC32C7778 help7879 This provides support for the NVMe over Fabrics protocol using
+21-3
drivers/nvme/host/core.c
···30413041 if (!cel)30423042 return -ENOMEM;3043304330443044- ret = nvme_get_log(ctrl, NVME_NSID_ALL, NVME_LOG_CMD_EFFECTS, 0, csi,30443044+ ret = nvme_get_log(ctrl, 0x00, NVME_LOG_CMD_EFFECTS, 0, csi,30453045 &cel->log, sizeof(cel->log), 0);30463046 if (ret) {30473047 kfree(cel);···32363236 if (ret < 0)32373237 return ret;3238323832393239- if (!ctrl->identified)32403240- nvme_hwmon_init(ctrl);32393239+ if (!ctrl->identified) {32403240+ ret = nvme_hwmon_init(ctrl);32413241+ if (ret < 0)32423242+ return ret;32433243+ }3241324432423245 ctrl->identified = true;32433246···32643261 return -EWOULDBLOCK;32653262 }3266326332643264+ nvme_get_ctrl(ctrl);32653265+ if (!try_module_get(ctrl->ops->module))32663266+ return -EINVAL;32673267+32673268 file->private_data = ctrl;32693269+ return 0;32703270+}32713271+32723272+static int nvme_dev_release(struct inode *inode, struct file *file)32733273+{32743274+ struct nvme_ctrl *ctrl =32753275+ container_of(inode->i_cdev, struct nvme_ctrl, cdev);32763276+32773277+ module_put(ctrl->ops->module);32783278+ nvme_put_ctrl(ctrl);32683279 return 0;32693280}32703281···33443327static const struct file_operations nvme_dev_fops = {33453328 .owner = THIS_MODULE,33463329 .open = nvme_dev_open,33303330+ .release = nvme_dev_release,33473331 .unlocked_ioctl = nvme_dev_ioctl,33483332 .compat_ioctl = compat_ptr_ioctl,33493333};
+4-2
drivers/nvme/host/fc.c
···36713671 spin_lock_irqsave(&nvme_fc_lock, flags);36723672 list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {36733673 if (lport->localport.node_name != laddr.nn ||36743674- lport->localport.port_name != laddr.pn)36743674+ lport->localport.port_name != laddr.pn ||36753675+ lport->localport.port_state != FC_OBJSTATE_ONLINE)36753676 continue;3676367736773678 list_for_each_entry(rport, &lport->endp_list, endp_list) {36783679 if (rport->remoteport.node_name != raddr.nn ||36793679- rport->remoteport.port_name != raddr.pn)36803680+ rport->remoteport.port_name != raddr.pn ||36813681+ rport->remoteport.port_state != FC_OBJSTATE_ONLINE)36803682 continue;3681368336823684 /* if fail to get reference fall through. Will error */
+6-8
drivers/nvme/host/hwmon.c
···59596060static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data)6161{6262- int ret;6363-6464- ret = nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,6262+ return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,6563 NVME_CSI_NVM, &data->log, sizeof(data->log), 0);6666-6767- return ret <= 0 ? ret : -EIO;6864}69657066static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,···221225 .info = nvme_hwmon_info,222226};223227224224-void nvme_hwmon_init(struct nvme_ctrl *ctrl)228228+int nvme_hwmon_init(struct nvme_ctrl *ctrl)225229{226230 struct device *dev = ctrl->dev;227231 struct nvme_hwmon_data *data;···230234231235 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);232236 if (!data)233233- return;237237+ return 0;234238235239 data->ctrl = ctrl;236240 mutex_init(&data->read_lock);···240244 dev_warn(ctrl->device,241245 "Failed to read smart log (error %d)\n", err);242246 devm_kfree(dev, data);243243- return;247247+ return err;244248 }245249246250 hwmon = devm_hwmon_device_register_with_info(dev, "nvme", data,···250254 dev_warn(dev, "Failed to instantiate hwmon device\n");251255 devm_kfree(dev, data);252256 }257257+258258+ return 0;253259}
···7171static int rockchip_pcie_valid_device(struct rockchip_pcie *rockchip,7272 struct pci_bus *bus, int dev)7373{7474- /* access only one slot on each root port */7575- if (pci_is_root_bus(bus) && dev > 0)7676- return 0;7777-7874 /*7979- * do not read more than one device on the bus directly attached7575+ * Access only one slot on each root port.7676+ * Do not read more than one device on the bus directly attached8077 * to RC's downstream side.8178 */8282- if (pci_is_root_bus(bus->parent) && dev > 0)8383- return 0;7979+ if (pci_is_root_bus(bus) || pci_is_root_bus(bus->parent))8080+ return dev == 0;84818582 return 1;8683}
+4-2
drivers/phy/ti/phy-am654-serdes.c
···822822 pm_runtime_enable(dev);823823824824 phy = devm_phy_create(dev, NULL, &ops);825825- if (IS_ERR(phy))826826- return PTR_ERR(phy);825825+ if (IS_ERR(phy)) {826826+ ret = PTR_ERR(phy);827827+ goto clk_err;828828+ }827829828830 phy_set_drvdata(phy, am654_phy);829831 phy_provider = devm_of_phy_provider_register(dev, serdes_am654_xlate);
+13-1
drivers/pinctrl/intel/pinctrl-cherryview.c
···5858#define CHV_PADCTRL1_CFGLOCK BIT(31)5959#define CHV_PADCTRL1_INVRXTX_SHIFT 46060#define CHV_PADCTRL1_INVRXTX_MASK GENMASK(7, 4)6161+#define CHV_PADCTRL1_INVRXTX_TXDATA BIT(7)6162#define CHV_PADCTRL1_INVRXTX_RXDATA BIT(6)6263#define CHV_PADCTRL1_INVRXTX_TXENABLE BIT(5)6364#define CHV_PADCTRL1_ODEN BIT(3)···793792static void chv_gpio_clear_triggering(struct chv_pinctrl *pctrl,794793 unsigned int offset)795794{795795+ u32 invrxtx_mask = CHV_PADCTRL1_INVRXTX_MASK;796796 u32 value;797797+798798+ /*799799+ * On some devices the GPIO should output the inverted value from what800800+ * device-drivers / ACPI code expects (inverted external buffer?). The801801+ * BIOS makes this work by setting the CHV_PADCTRL1_INVRXTX_TXDATA flag;802802+ * preserve this flag if the pin is already set up as GPIO.803803+ */804804+ value = chv_readl(pctrl, offset, CHV_PADCTRL0);805805+ if (value & CHV_PADCTRL0_GPIOEN)806806+ invrxtx_mask &= ~CHV_PADCTRL1_INVRXTX_TXDATA;797807798808 value = chv_readl(pctrl, offset, CHV_PADCTRL1);799809 value &= ~CHV_PADCTRL1_INTWAKECFG_MASK;800800- value &= ~CHV_PADCTRL1_INVRXTX_MASK;810810+ value &= ~invrxtx_mask;801811 chv_writel(pctrl, offset, CHV_PADCTRL1, value);802812}803813
+4
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
···259259260260 desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio_n];261261262262+ /* if the GPIO is not supported for eint mode */263263+ if (desc->eint.eint_m == NO_EINT_SUPPORT)264264+ return virt_gpio;265265+262266 if (desc->funcs && !desc->funcs[desc->eint.eint_m].name)263267 virt_gpio = true;264268
···651651 sdkp->zone_blocks);652652}653653654654+static int sd_zbc_init_disk(struct scsi_disk *sdkp)655655+{656656+ sdkp->zones_wp_offset = NULL;657657+ spin_lock_init(&sdkp->zones_wp_offset_lock);658658+ sdkp->rev_wp_offset = NULL;659659+ mutex_init(&sdkp->rev_mutex);660660+ INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);661661+ sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);662662+ if (!sdkp->zone_wp_update_buf)663663+ return -ENOMEM;664664+665665+ return 0;666666+}667667+668668+void sd_zbc_release_disk(struct scsi_disk *sdkp)669669+{670670+ kvfree(sdkp->zones_wp_offset);671671+ sdkp->zones_wp_offset = NULL;672672+ kfree(sdkp->zone_wp_update_buf);673673+ sdkp->zone_wp_update_buf = NULL;674674+}675675+654676static void sd_zbc_revalidate_zones_cb(struct gendisk *disk)655677{656678 struct scsi_disk *sdkp = scsi_disk(disk);···689667 u32 max_append;690668 int ret = 0;691669692692- if (!sd_is_zoned(sdkp))670670+ /*671671+ * For all zoned disks, initialize zone append emulation data if not672672+ * already done. 
This is necessary also for host-aware disks used as673673+ * regular disks due to the presence of partitions as these partitions674674+ * may be deleted and the disk zoned model changed back from675675+ * BLK_ZONED_NONE to BLK_ZONED_HA.676676+ */677677+ if (sd_is_zoned(sdkp) && !sdkp->zone_wp_update_buf) {678678+ ret = sd_zbc_init_disk(sdkp);679679+ if (ret)680680+ return ret;681681+ }682682+683683+ /*684684+ * There is nothing to do for regular disks, including host-aware disks685685+ * that have partitions.686686+ */687687+ if (!blk_queue_is_zoned(q))693688 return 0;694689695690 /*···802763 sdkp->capacity = 0;803764804765 return ret;805805-}806806-807807-int sd_zbc_init_disk(struct scsi_disk *sdkp)808808-{809809- if (!sd_is_zoned(sdkp))810810- return 0;811811-812812- sdkp->zones_wp_offset = NULL;813813- spin_lock_init(&sdkp->zones_wp_offset_lock);814814- sdkp->rev_wp_offset = NULL;815815- mutex_init(&sdkp->rev_mutex);816816- INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);817817- sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);818818- if (!sdkp->zone_wp_update_buf)819819- return -ENOMEM;820820-821821- return 0;822822-}823823-824824-void sd_zbc_release_disk(struct scsi_disk *sdkp)825825-{826826- kvfree(sdkp->zones_wp_offset);827827- sdkp->zones_wp_offset = NULL;828828- kfree(sdkp->zone_wp_update_buf);829829- sdkp->zone_wp_update_buf = NULL;830766}
···7575#define DRV_NAME "spi-bcm2835"76767777/* define polling limits */7878-unsigned int polling_limit_us = 30;7878+static unsigned int polling_limit_us = 30;7979module_param(polling_limit_us, uint, 0664);8080MODULE_PARM_DESC(polling_limit_us,8181 "time in us to run a transfer in polling mode\n");
···18401840 * out unpacked_lun for the original se_cmd.18411841 */18421842 if (tm_type == TMR_ABORT_TASK && (flags & TARGET_SCF_LOOKUP_LUN_FROM_TAG)) {18431843- if (!target_lookup_lun_from_tag(se_sess, tag, &unpacked_lun))18431843+ if (!target_lookup_lun_from_tag(se_sess, tag,18441844+ &se_cmd->orig_fe_lun))18441845 goto failure;18451846 }18461847
+34-16
drivers/usb/core/driver.c
···269269 if (error)270270 return error;271271272272+ /* Probe the USB device with the driver in hand, but only273273+ * defer to a generic driver in case the current USB274274+ * device driver has an id_table or a match function; i.e.,275275+ * when the device driver was explicitly matched against276276+ * a device.277277+ *278278+ * If the device driver does not have either of these,279279+ * then we assume that it can bind to any device and is280280+ * not truly a more specialized/non-generic driver, so a281281+ * return value of -ENODEV should not force the device282282+ * to be handled by the generic USB driver, as there283283+ * can still be another, more specialized, device driver.284284+ *285285+ * This accommodates the usbip driver.286286+ *287287+ * TODO: What if, in the future, there are multiple288288+ * specialized USB device drivers for a particular device?289289+ * In such cases, there is a need to try all matching290290+ * specialised device drivers prior to setting the291291+ * use_generic_driver bit.292292+ */272293 error = udriver->probe(udev);273273- if (error == -ENODEV && udriver != &usb_generic_driver) {294294+ if (error == -ENODEV && udriver != &usb_generic_driver &&295295+ (udriver->id_table || udriver->match)) {274296 udev->use_generic_driver = 1;275297 return -EPROBE_DEFER;276298 }···853831 udev = to_usb_device(dev);854832 udrv = to_usb_device_driver(drv);855833856856- if (udrv->id_table &&857857- usb_device_match_id(udev, udrv->id_table) != NULL) {858858- return 1;859859- }834834+ if (udrv->id_table)835835+ return usb_device_match_id(udev, udrv->id_table) != NULL;860836861837 if (udrv->match)862838 return udrv->match(udev);863863- return 0;839839+840840+ /* If the device driver under consideration does not have a841841+ * id_table or a match function, then let the driver's probe842842+ * function decide.843843+ */844844+ return 1;864845865846 } else if (is_usb_interface(dev)) {866847 struct usb_interface *intf;···930905 return 
0;931906}932907933933-static bool is_dev_usb_generic_driver(struct device *dev)934934-{935935- struct usb_device_driver *udd = dev->driver ?936936- to_usb_device_driver(dev->driver) : NULL;937937-938938- return udd == &usb_generic_driver;939939-}940940-941908static int __usb_bus_reprobe_drivers(struct device *dev, void *data)942909{943910 struct usb_device_driver *new_udriver = data;944911 struct usb_device *udev;945912 int ret;946913947947- if (!is_dev_usb_generic_driver(dev))914914+ /* Don't reprobe if current driver isn't usb_generic_driver */915915+ if (dev->driver != &usb_generic_driver.drvwrap.driver)948916 return 0;949917950918 udev = to_usb_device(dev);951919 if (usb_device_match_id(udev, new_udriver->id_table) == NULL &&952952- (!new_udriver->match || new_udriver->match(udev) != 0))920920+ (!new_udriver->match || new_udriver->match(udev) == 0))953921 return 0;954922955923 ret = device_reprobe(dev);
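The driver.c changes above define a three-step match policy: an id_table wins if present, then a `match()` callback, and a driver with neither is allowed through so its `probe()` can decide. Sketched with hypothetical reduced types (the real `usb_device_driver` carries far more state):

```c
#include <stddef.h>

/* Hypothetical reduced model of the usb_device_driver match fields. */
struct demo_driver {
	const int *id_table;		/* -1 terminated, NULL if absent */
	int (*match)(int product_id);	/* NULL if absent */
};

/* Mirror of the patched usb_device_match() logic: a "specialized"
 * driver (one with an id_table or a match() callback) answers
 * definitively; a driver with neither matches everything and defers
 * the final decision to its probe() function. */
static int demo_match(const struct demo_driver *drv, int product_id)
{
	if (drv->id_table) {
		for (const int *id = drv->id_table; *id >= 0; id++)
			if (*id == product_id)
				return 1;
		return 0;
	}
	if (drv->match)
		return drv->match(product_id);
	return 1;	/* no table, no match(): let probe() decide */
}
```

Returning 1 in the fall-through case is what lets a catch-all driver such as usbip bind without an id_table, while the probe-side check keeps its `-ENODEV` from forcing the device onto the generic driver.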
···
  * vhost_iotlb_itree_first - return the first overlapped range
  * @iotlb: the IOTLB
  * @start: start of IOVA range
- * @end: end of IOVA range
+ * @last: last byte in IOVA range
  */
 struct vhost_iotlb_map *
 vhost_iotlb_itree_first(struct vhost_iotlb *iotlb, u64 start, u64 last)
···
  * vhost_iotlb_itree_next - return the next overlapped range
  * @map: the starting map node
  * @start: start of IOVA range
- * @end: end of IOVA range
+ * @last: last byte in IOVA range
  */
 struct vhost_iotlb_map *
 vhost_iotlb_itree_next(struct vhost_iotlb_map *map, u64 start, u64 last)
+16-14
drivers/vhost/vdpa.c
···
 	struct vdpa_callback cb;
 	struct vhost_virtqueue *vq;
 	struct vhost_vring_state s;
-	u64 __user *featurep = argp;
-	u64 features;
 	u32 idx;
 	long r;
···
 
 		vq->last_avail_idx = vq_state.avail_index;
 		break;
-	case VHOST_GET_BACKEND_FEATURES:
-		features = VHOST_VDPA_BACKEND_FEATURES;
-		if (copy_to_user(featurep, &features, sizeof(features)))
-			return -EFAULT;
-		return 0;
-	case VHOST_SET_BACKEND_FEATURES:
-		if (copy_from_user(&features, featurep, sizeof(features)))
-			return -EFAULT;
-		if (features & ~VHOST_VDPA_BACKEND_FEATURES)
-			return -EOPNOTSUPP;
-		vhost_set_backend_features(&v->vdev, features);
-		return 0;
 	}
 
 	r = vhost_vring_ioctl(&v->vdev, cmd, argp);
···
 	struct vhost_vdpa *v = filep->private_data;
 	struct vhost_dev *d = &v->vdev;
 	void __user *argp = (void __user *)arg;
+	u64 __user *featurep = argp;
+	u64 features;
 	long r;
+
+	if (cmd == VHOST_SET_BACKEND_FEATURES) {
+		r = copy_from_user(&features, featurep, sizeof(features));
+		if (r)
+			return r;
+		if (features & ~VHOST_VDPA_BACKEND_FEATURES)
+			return -EOPNOTSUPP;
+		vhost_set_backend_features(&v->vdev, features);
+		return 0;
+	}
 
 	mutex_lock(&d->mutex);
···
 		break;
 	case VHOST_VDPA_SET_CONFIG_CALL:
 		r = vhost_vdpa_set_config_call(v, argp);
+		break;
+	case VHOST_GET_BACKEND_FEATURES:
+		features = VHOST_VDPA_BACKEND_FEATURES;
+		r = copy_to_user(featurep, &features, sizeof(features));
 		break;
 	default:
 		r = vhost_dev_ioctl(&v->vdev, cmd, argp);
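The relocated `VHOST_SET_BACKEND_FEATURES` handler keeps the same validation: any feature bit the backend does not advertise is rejected before the rest is accepted. A small sketch of that mask check (hypothetical feature names standing in for `VHOST_VDPA_BACKEND_FEATURES`):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical backend feature bits; in the driver this mask is
 * VHOST_VDPA_BACKEND_FEATURES. */
#define BACKEND_F_IOTLB_MSG_V2 (1ULL << 1)
#define SUPPORTED_FEATURES     BACKEND_F_IOTLB_MSG_V2

/* Mirrors the ioctl's check: refuse unknown bits with -EOPNOTSUPP,
 * otherwise record the negotiated set. */
static int set_backend_features(uint64_t *state, uint64_t features)
{
	if (features & ~SUPPORTED_FEATURES)
		return -EOPNOTSUPP;
	*state = features;
	return 0;
}
```

The `features & ~SUPPORTED` idiom is the standard way to detect "any bit set that we do not support" in one test.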
+21-8
drivers/xen/events/events_base.c
···
 /* Xen will never allocate port zero for any purpose. */
 #define VALID_EVTCHN(chn)	((chn) != 0)
 
+static struct irq_info *legacy_info_ptrs[NR_IRQS_LEGACY];
+
 static struct irq_chip xen_dynamic_chip;
 static struct irq_chip xen_percpu_chip;
 static struct irq_chip xen_pirq_chip;
···
 /* Get info for IRQ */
 struct irq_info *info_for_irq(unsigned irq)
 {
-	return irq_get_chip_data(irq);
+	if (irq < nr_legacy_irqs())
+		return legacy_info_ptrs[irq];
+	else
+		return irq_get_chip_data(irq);
+}
+
+static void set_info_for_irq(unsigned int irq, struct irq_info *info)
+{
+	if (irq < nr_legacy_irqs())
+		legacy_info_ptrs[irq] = info;
+	else
+		irq_set_chip_data(irq, info);
 }
 
 /* Constructors for packed IRQ information. */
···
 	info->type = IRQT_UNBOUND;
 	info->refcnt = -1;
 
-	irq_set_chip_data(irq, info);
+	set_info_for_irq(irq, info);
 
 	list_add_tail(&info->list, &xen_irq_list_head);
 }
···
 
 static void xen_free_irq(unsigned irq)
 {
-	struct irq_info *info = irq_get_chip_data(irq);
+	struct irq_info *info = info_for_irq(irq);
 
 	if (WARN_ON(!info))
 		return;
 
 	list_del(&info->list);
 
-	irq_set_chip_data(irq, NULL);
+	set_info_for_irq(irq, NULL);
 
 	WARN_ON(info->refcnt > 0);
···
 static void __unbind_from_irq(unsigned int irq)
 {
 	evtchn_port_t evtchn = evtchn_from_irq(irq);
-	struct irq_info *info = irq_get_chip_data(irq);
+	struct irq_info *info = info_for_irq(irq);
 
 	if (info->refcnt > 0) {
 		info->refcnt--;
···
 
 void unbind_from_irqhandler(unsigned int irq, void *dev_id)
 {
-	struct irq_info *info = irq_get_chip_data(irq);
+	struct irq_info *info = info_for_irq(irq);
 
 	if (WARN_ON(!info))
 		return;
···
 	if (irq == -1)
 		return -ENOENT;
 
-	info = irq_get_chip_data(irq);
+	info = info_for_irq(irq);
 
 	if (!info)
 		return -ENOENT;
···
 	if (irq == -1)
 		goto done;
 
-	info = irq_get_chip_data(irq);
+	info = info_for_irq(irq);
 
 	if (!info)
 		goto done;
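The Xen change routes legacy IRQs through a private array while everything else still uses chip data, behind one get/set pair so callers never see the split. A toy version of that dual-backed lookup (fixed sizes and a plain second array standing in for `irq_{get,set}_chip_data()`):

```c
#include <assert.h>
#include <stddef.h>

#define NR_LEGACY 16
#define NR_TOTAL  64

struct irq_info { int type; };

/* Two backing stores: a dedicated array for legacy IRQs, and a second
 * table playing the role of the irqchip's chip_data slot. */
static struct irq_info *legacy_ptrs[NR_LEGACY];
static struct irq_info *chip_data[NR_TOTAL];

/* All writers go through one setter, so the storage choice is made in
 * exactly one place... */
static void set_info_for_irq(unsigned int irq, struct irq_info *info)
{
	if (irq < NR_LEGACY)
		legacy_ptrs[irq] = info;
	else
		chip_data[irq] = info;
}

/* ...and the getter makes the matching choice. */
static struct irq_info *info_for_irq(unsigned int irq)
{
	return irq < NR_LEGACY ? legacy_ptrs[irq] : chip_data[irq];
}
```

The patch's real motivation is that legacy IRQ descriptors may be freed and reallocated, so the info pointer cannot safely live in chip data for them; the accessor pair hides that detail from the rest of the file.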
+1-1
fs/autofs/waitq.c
···
 
 	mutex_lock(&sbi->pipe_mutex);
 	while (bytes) {
-		wr = kernel_write(file, data, bytes, &file->f_pos);
+		wr = __kernel_write(file, data, bytes, NULL);
 		if (wr <= 0)
 			break;
 		data += wr;
+44-2
fs/btrfs/dev-replace.c
···
 	wake_up(&fs_info->dev_replace.replace_wait);
 }
 
+/*
+ * When finishing the device replace, before swapping the source device with the
+ * target device we must update the chunk allocation state in the target device,
+ * as it is empty because replace works by directly copying the chunks and not
+ * through the normal chunk allocation path.
+ */
+static int btrfs_set_target_alloc_state(struct btrfs_device *srcdev,
+					struct btrfs_device *tgtdev)
+{
+	struct extent_state *cached_state = NULL;
+	u64 start = 0;
+	u64 found_start;
+	u64 found_end;
+	int ret = 0;
+
+	lockdep_assert_held(&srcdev->fs_info->chunk_mutex);
+
+	while (!find_first_extent_bit(&srcdev->alloc_state, start,
+				      &found_start, &found_end,
+				      CHUNK_ALLOCATED, &cached_state)) {
+		ret = set_extent_bits(&tgtdev->alloc_state, found_start,
+				      found_end, CHUNK_ALLOCATED);
+		if (ret)
+			break;
+		start = found_end + 1;
+	}
+
+	free_extent_state(cached_state);
+	return ret;
+}
+
 static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
 				       int scrub_ret)
 {
···
 	dev_replace->time_stopped = ktime_get_real_seconds();
 	dev_replace->item_needs_writeback = 1;
 
-	/* replace old device with new one in mapping tree */
+	/*
+	 * Update allocation state in the new device and replace the old device
+	 * with the new one in the mapping tree.
+	 */
 	if (!scrub_ret) {
+		scrub_ret = btrfs_set_target_alloc_state(src_device, tgt_device);
+		if (scrub_ret)
+			goto error;
 		btrfs_dev_replace_update_device_in_mapping_tree(fs_info,
 								src_device,
 								tgt_device);
···
 			  btrfs_dev_name(src_device),
 			  src_device->devid,
 			  rcu_str_deref(tgt_device->name), scrub_ret);
+error:
 	up_write(&dev_replace->rwsem);
 	mutex_unlock(&fs_info->chunk_mutex);
 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
···
 	/* replace the sysfs entry */
 	btrfs_sysfs_remove_devices_dir(fs_info->fs_devices, src_device);
 	btrfs_sysfs_update_devid(tgt_device);
-	btrfs_rm_dev_replace_free_srcdev(src_device);
+	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &src_device->dev_state))
+		btrfs_scratch_superblocks(fs_info, src_device->bdev,
+					  src_device->name->str);
 
 	/* write back the superblocks */
 	trans = btrfs_start_transaction(root, 0);
···
 	btrfs_commit_transaction(trans);
 
 	mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
+
+	btrfs_rm_dev_replace_free_srcdev(src_device);
 
 	return 0;
 }
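The new `btrfs_set_target_alloc_state()` is a classic range-copy loop: repeatedly find the first marked range at or after a cursor, mark it in the target, and advance the cursor past it. A toy version with a sorted range list standing in for the extent-io-tree's `CHUNK_ALLOCATED` bits (all names here are illustrative, not btrfs API):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_RANGES 8

/* Toy allocation state: sorted, non-overlapping [start, end] ranges. */
struct alloc_state {
	uint64_t start[MAX_RANGES];
	uint64_t end[MAX_RANGES];
	int nr;
};

/* Find the first allocated range ending at or after 'from'; returns 0
 * on success, -1 when none remains (like find_first_extent_bit()). */
static int find_first_range(const struct alloc_state *s, uint64_t from,
			    uint64_t *rs, uint64_t *re)
{
	for (int i = 0; i < s->nr; i++) {
		if (s->end[i] >= from) {
			*rs = s->start[i] > from ? s->start[i] : from;
			*re = s->end[i];
			return 0;
		}
	}
	return -1;
}

/* Walk every allocated range of src and mark it in tgt, the way the
 * patch copies CHUNK_ALLOCATED bits into the empty target device. */
static void copy_alloc_state(const struct alloc_state *src,
			     struct alloc_state *tgt)
{
	uint64_t start = 0, rs, re;

	while (!find_first_range(src, start, &rs, &re)) {
		tgt->start[tgt->nr] = rs;
		tgt->end[tgt->nr] = re;
		tgt->nr++;
		start = re + 1;	/* resume search past the copied range */
	}
}
```

The `start = found_end + 1` step is what guarantees termination and that each range is copied exactly once.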
···
 	struct file *file;
 
 	/* used to optimize loop detection check */
-	struct list_head visited_list_link;
-	int visited;
+	u64 gen;
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	/* used to track busy poll napi_id */
···
  */
 static DEFINE_MUTEX(epmutex);
 
+static u64 loop_check_gen = 0;
+
 /* Used to check for epoll file descriptor inclusion loops */
 static struct nested_calls poll_loop_ncalls;
···
 
 /* Slab cache used to allocate "struct eppoll_entry" */
 static struct kmem_cache *pwq_cache __read_mostly;
-
-/* Visited nodes during ep_loop_check(), so we can unset them when we finish */
-static LIST_HEAD(visited_list);
 
 /*
  * List of files with newly added links, where we may need to limit the number
···
 
 static int ep_create_wakeup_source(struct epitem *epi)
 {
-	const char *name;
+	struct name_snapshot n;
 	struct wakeup_source *ws;
 
 	if (!epi->ep->ws) {
···
 		return -ENOMEM;
 	}
 
-	name = epi->ffd.file->f_path.dentry->d_name.name;
-	ws = wakeup_source_register(NULL, name);
+	take_dentry_name_snapshot(&n, epi->ffd.file->f_path.dentry);
+	ws = wakeup_source_register(NULL, n.name.name);
+	release_dentry_name_snapshot(&n);
 
 	if (!ws)
 		return -ENOMEM;
···
 		RCU_INIT_POINTER(epi->ws, NULL);
 	}
 
+	/* Add the current item to the list of active epoll hook for this file */
+	spin_lock(&tfile->f_lock);
+	list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links);
+	spin_unlock(&tfile->f_lock);
+
+	/*
+	 * Add the current item to the RB tree. All RB tree operations are
+	 * protected by "mtx", and ep_insert() is called with "mtx" held.
+	 */
+	ep_rbtree_insert(ep, epi);
+
+	/* now check if we've created too many backpaths */
+	error = -EINVAL;
+	if (full_check && reverse_path_check())
+		goto error_remove_epi;
+
 	/* Initialize the poll table using the queue callback */
 	epq.epi = epi;
 	init_poll_funcptr(&epq.pt, ep_ptable_queue_proc);
···
 	error = -ENOMEM;
 	if (epi->nwait < 0)
 		goto error_unregister;
-
-	/* Add the current item to the list of active epoll hook for this file */
-	spin_lock(&tfile->f_lock);
-	list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links);
-	spin_unlock(&tfile->f_lock);
-
-	/*
-	 * Add the current item to the RB tree. All RB tree operations are
-	 * protected by "mtx", and ep_insert() is called with "mtx" held.
-	 */
-	ep_rbtree_insert(ep, epi);
-
-	/* now check if we've created too many backpaths */
-	error = -EINVAL;
-	if (full_check && reverse_path_check())
-		goto error_remove_epi;
 
 	/* We have to drop the new item inside our item list to keep track of it */
 	write_lock_irq(&ep->lock);
···
 
 	return 0;
 
+error_unregister:
+	ep_unregister_pollwait(ep, epi);
 error_remove_epi:
 	spin_lock(&tfile->f_lock);
 	list_del_rcu(&epi->fllink);
 	spin_unlock(&tfile->f_lock);
 
 	rb_erase_cached(&epi->rbn, &ep->rbr);
-
-error_unregister:
-	ep_unregister_pollwait(ep, epi);
 
 	/*
 	 * We need to do this because an event could have been arrived on some
···
 	struct epitem *epi;
 
 	mutex_lock_nested(&ep->mtx, call_nests + 1);
-	ep->visited = 1;
-	list_add(&ep->visited_list_link, &visited_list);
+	ep->gen = loop_check_gen;
 	for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = rb_next(rbp)) {
 		epi = rb_entry(rbp, struct epitem, rbn);
 		if (unlikely(is_file_epoll(epi->ffd.file))) {
 			ep_tovisit = epi->ffd.file->private_data;
-			if (ep_tovisit->visited)
+			if (ep_tovisit->gen == loop_check_gen)
 				continue;
 			error = ep_call_nested(&poll_loop_ncalls,
 					ep_loop_check_proc, epi->ffd.file,
···
  */
 static int ep_loop_check(struct eventpoll *ep, struct file *file)
 {
-	int ret;
-	struct eventpoll *ep_cur, *ep_next;
-
-	ret = ep_call_nested(&poll_loop_ncalls,
+	return ep_call_nested(&poll_loop_ncalls,
 			      ep_loop_check_proc, file, ep, current);
-	/* clear visited list */
-	list_for_each_entry_safe(ep_cur, ep_next, &visited_list,
-			visited_list_link) {
-		ep_cur->visited = 0;
-		list_del(&ep_cur->visited_list_link);
-	}
-	return ret;
 }
 
 static void clear_tfile_check_list(void)
···
 		goto error_tgt_fput;
 	if (op == EPOLL_CTL_ADD) {
 		if (!list_empty(&f.file->f_ep_links) ||
+				ep->gen == loop_check_gen ||
 						is_file_epoll(tf.file)) {
 			mutex_unlock(&ep->mtx);
 			error = epoll_mutex_lock(&epmutex, 0, nonblock);
 			if (error)
 				goto error_tgt_fput;
+			loop_check_gen++;
 			full_check = 1;
 			if (is_file_epoll(tf.file)) {
 				error = -ELOOP;
···
 error_tgt_fput:
 	if (full_check) {
 		clear_tfile_check_list();
+		loop_check_gen++;
 		mutex_unlock(&epmutex);
 	}
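The epoll change replaces a visited flag plus a cleanup list with a generation counter: "mark visited" becomes stamping the current generation, "already visited?" becomes a comparison, and clearing every mark after a walk is a single increment instead of walking `visited_list`. A minimal sketch of the technique (toy types, not the epoll structures):

```c
#include <assert.h>
#include <stdint.h>

#define NR_EPS 3

/* Toy epoll instances carrying a generation stamp instead of a
 * visited flag plus a visited_list link. */
struct toy_ep { uint64_t gen; };

static uint64_t loop_check_gen;

/* Marking is a stamp; the membership test is a comparison. */
static void mark_visited(const int unused, struct toy_ep *ep)
{
	(void)unused;
	ep->gen = loop_check_gen;
}

static int was_visited(const struct toy_ep *ep)
{
	return ep->gen == loop_check_gen;
}

/* Invalidating *all* marks is O(1): bump the generation, as the patch
 * does on entry to and exit from the loop check. */
static void new_loop_check(void)
{
	loop_check_gen++;
}
```

One subtlety the increments handle: a freshly created object's stamp must never equal the live generation, which is why the real code bumps `loop_check_gen` before each check rather than relying on initial values.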
+12-13
fs/fuse/file.c
···
 	ssize_t ret = 0;
 	struct file *file = iocb->ki_filp;
 	struct fuse_file *ff = file->private_data;
-	bool async_dio = ff->fc->async_dio;
 	loff_t pos = 0;
 	struct inode *inode;
 	loff_t i_size;
-	size_t count = iov_iter_count(iter);
+	size_t count = iov_iter_count(iter), shortened = 0;
 	loff_t offset = iocb->ki_pos;
 	struct fuse_io_priv *io;
···
 	inode = file->f_mapping->host;
 	i_size = i_size_read(inode);
 
-	if ((iov_iter_rw(iter) == READ) && (offset > i_size))
+	if ((iov_iter_rw(iter) == READ) && (offset >= i_size))
 		return 0;
-
-	/* optimization for short read */
-	if (async_dio && iov_iter_rw(iter) != WRITE && offset + count > i_size) {
-		if (offset >= i_size)
-			return 0;
-		iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
-		count = iov_iter_count(iter);
-	}
 
 	io = kmalloc(sizeof(struct fuse_io_priv), GFP_KERNEL);
 	if (!io)
···
 	 * By default, we want to optimize all I/Os with async request
 	 * submission to the client filesystem if supported.
 	 */
-	io->async = async_dio;
+	io->async = ff->fc->async_dio;
 	io->iocb = iocb;
 	io->blocking = is_sync_kiocb(iocb);
+
+	/* optimization for short read */
+	if (io->async && !io->write && offset + count > i_size) {
+		iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
+		shortened = count - iov_iter_count(iter);
+		count -= shortened;
+	}
 
 	/*
 	 * We cannot asynchronously extend the size of a file.
 	 * In such case the aio will behave exactly like sync io.
 	 */
-	if ((offset + count > i_size) && iov_iter_rw(iter) == WRITE)
+	if ((offset + count > i_size) && io->write)
 		io->blocking = true;
 
 	if (io->async && io->blocking) {
···
 	} else {
 		ret = __fuse_direct_read(io, iter, &pos);
 	}
+	iov_iter_reexpand(iter, iov_iter_count(iter) + shortened);
 
 	if (io->async) {
 		bool blocking = io->blocking;
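The fuse fix hinges on simple truncate/reexpand bookkeeping: clamp the iterator to end-of-file, remember exactly how many bytes were cut, and restore them afterwards so the caller still sees the original request length. A toy model of that arithmetic (a bare byte counter standing in for `iov_iter`):

```c
#include <assert.h>
#include <stdint.h>

/* Toy iterator: just the remaining-byte count, which is the only part
 * of iov_iter this bookkeeping touches. */
struct toy_iter { uint64_t count; };

static void iter_truncate(struct toy_iter *it, uint64_t max)
{
	if (it->count > max)
		it->count = max;
}

static void iter_reexpand(struct toy_iter *it, uint64_t count)
{
	it->count = count;
}

/* Mirrors the patched short-read path: clamp to i_size, and return how
 * much was shaved off so the caller can reexpand later. */
static uint64_t shorten_for_read(struct toy_iter *it, uint64_t offset,
				 uint64_t i_size)
{
	uint64_t count = it->count, shortened = 0;

	if (offset + count > i_size) {
		iter_truncate(it, i_size - offset);
		shortened = count - it->count;
	}
	return shortened;
}
```

Keeping `shortened` and adding it back with `iov_iter_reexpand()` is what the original code failed to do, which corrupted the iterator state seen by the caller after a short direct read.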
+63-23
fs/io_uring.c
···
 	struct io_ring_ctx *ctx = req->ctx;
 	int ret, notify;
 
+	if (tsk->flags & PF_EXITING)
+		return -ESRCH;
+
 	/*
 	 * SQPOLL kernel thread doesn't need notification, just a wakeup. For
 	 * all other cases, use TWA_SIGNAL unconditionally to ensure we're
···
 static void io_req_task_cancel(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+	struct io_ring_ctx *ctx = req->ctx;
 
 	__io_req_task_cancel(req, -ECANCELED);
+	percpu_ref_put(&ctx->refs);
 }
 
 static void __io_req_task_submit(struct io_kiocb *req)
···
 
 static inline bool io_run_task_work(void)
 {
+	/*
+	 * Not safe to run on exiting task, and the task_work handling will
+	 * not add work to such a task.
+	 */
+	if (unlikely(current->flags & PF_EXITING))
+		return false;
 	if (current->task_works) {
 		__set_current_state(TASK_RUNNING);
 		task_work_run();
···
 		goto end_req;
 	}
 
-	ret = io_import_iovec(rw, req, &iovec, &iter, false);
-	if (ret < 0)
-		goto end_req;
-	ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
-	if (!ret)
+	if (!req->io) {
+		ret = io_import_iovec(rw, req, &iovec, &iter, false);
+		if (ret < 0)
+			goto end_req;
+		ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
+		if (!ret)
+			return true;
+		kfree(iovec);
+	} else {
 		return true;
-	kfree(iovec);
+	}
 end_req:
 	req_set_fail_links(req);
 	io_req_complete(req, ret);
···
 	if (!wake_page_match(wpq, key))
 		return 0;
 
+	req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
 	list_del_init(&wait->entry);
 
 	init_task_work(&req->task_work, io_req_task_submit);
···
 	wait->wait.flags = 0;
 	INIT_LIST_HEAD(&wait->wait.entry);
 	kiocb->ki_flags |= IOCB_WAITQ;
+	kiocb->ki_flags &= ~IOCB_NOWAIT;
 	kiocb->ki_waitq = wait;
 
 	io_get_req_task(req);
···
 	struct iov_iter __iter, *iter = &__iter;
 	ssize_t io_size, ret, ret2;
 	size_t iov_count;
+	bool no_async;
 
 	if (req->io)
 		iter = &req->io->rw.iter;
···
 		kiocb->ki_flags &= ~IOCB_NOWAIT;
 
 	/* If the file doesn't support async, just async punt */
-	if (force_nonblock && !io_file_supports_async(req->file, READ))
+	no_async = force_nonblock && !io_file_supports_async(req->file, READ);
+	if (no_async)
 		goto copy_iov;
 
 	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), iov_count);
···
 		goto done;
 	/* some cases will consume bytes even on error returns */
 	iov_iter_revert(iter, iov_count - iov_iter_count(iter));
-	ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
-	if (ret)
-		goto out_free;
-	return -EAGAIN;
+	ret = 0;
+	goto copy_iov;
 	} else if (ret < 0) {
 		/* make sure -ERESTARTSYS -> -EINTR is done */
 		goto done;
···
 		ret = ret2;
 		goto out_free;
 	}
+	if (no_async)
+		return -EAGAIN;
 	/* it's copied and will be cleaned with ->io */
 	iovec = NULL;
 	/* now use our persistent iterator, if we aren't already */
···
 	const char __user *fname;
 	int ret;
 
-	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
-		return -EINVAL;
 	if (unlikely(sqe->ioprio || sqe->buf_index))
 		return -EINVAL;
 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
···
 {
 	u64 flags, mode;
 
+	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+		return -EINVAL;
 	if (req->flags & REQ_F_NEED_CLEANUP)
 		return 0;
 	mode = READ_ONCE(sqe->len);
···
 	size_t len;
 	int ret;
 
+	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+		return -EINVAL;
 	if (req->flags & REQ_F_NEED_CLEANUP)
 		return 0;
 	how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
···
 #if defined(CONFIG_EPOLL)
 	if (sqe->ioprio || sqe->buf_index)
 		return -EINVAL;
-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
 		return -EINVAL;
 
 	req->epoll.epfd = READ_ONCE(sqe->fd);
···
 
 static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+	if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
 		return -EINVAL;
 	if (sqe->ioprio || sqe->buf_index)
 		return -EINVAL;
···
 	if (mask && !(mask & poll->events))
 		return 0;
 
+	list_del_init(&wait->entry);
+
 	if (poll && poll->head) {
 		bool done;
···
 static int io_files_update_prep(struct io_kiocb *req,
 				const struct io_uring_sqe *sqe)
 {
+	if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL))
+		return -EINVAL;
 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
 		return -EINVAL;
 	if (sqe->ioprio || sqe->rw_flags)
···
 	ret = io_prep_work_files(req);
 	if (unlikely(ret))
 		return ret;
+
+	io_prep_async_work(req);
 
 	switch (req->opcode) {
 	case IORING_OP_NOP:
···
 	case IORING_OP_TEE:
 		io_put_file(req, req->splice.file_in,
 			    (req->splice.flags & SPLICE_F_FD_IN_FIXED));
+		break;
+	case IORING_OP_OPENAT:
+	case IORING_OP_OPENAT2:
+		if (req->open.filename)
+			putname(req->open.filename);
 		break;
 	}
 	req->flags &= ~REQ_F_NEED_CLEANUP;
···
 				  struct io_ring_ctx *ctx, unsigned int max_ios)
 {
 	blk_start_plug(&state->plug);
-#ifdef CONFIG_BLOCK
-	state->plug.nowait = true;
-#endif
 	state->comp.nr = 0;
 	INIT_LIST_HEAD(&state->comp.list);
 	state->comp.ctx = ctx;
···
 		/* cancel this request, or head link requests */
 		io_attempt_cancel(ctx, cancel_req);
 		io_put_req(cancel_req);
+		/* cancellations _may_ trigger task work */
+		io_run_task_work();
 		schedule();
 		finish_wait(&ctx->inflight_wait, &wait);
 	}
···
 
 static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 {
+	bool has_lock;
 	int i;
 
-	mutex_lock(&ctx->uring_lock);
+	/*
+	 * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
+	 * since fdinfo case grabs it in the opposite direction of normal use
+	 * cases. If we fail to get the lock, we just don't iterate any
+	 * structures that could be going away outside the io_uring mutex.
+	 */
+	has_lock = mutex_trylock(&ctx->uring_lock);
+
 	seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
-	for (i = 0; i < ctx->nr_user_files; i++) {
+	for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
 		struct fixed_file_table *table;
 		struct file *f;
 
···
 		seq_printf(m, "%5u: <none>\n", i);
 	}
 	seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
-	for (i = 0; i < ctx->nr_user_bufs; i++) {
+	for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
 		struct io_mapped_ubuf *buf = &ctx->user_bufs[i];
 
 		seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf,
 						(unsigned int) buf->len);
 	}
-	if (!idr_is_empty(&ctx->personality_idr)) {
+	if (has_lock && !idr_is_empty(&ctx->personality_idr)) {
 		seq_printf(m, "Personalities:\n");
 		idr_for_each(&ctx->personality_idr, io_uring_show_cred, m);
 	}
···
 			   req->task->task_works != NULL);
 	}
 	spin_unlock_irq(&ctx->completion_lock);
-	mutex_unlock(&ctx->uring_lock);
+	if (has_lock)
+		mutex_unlock(&ctx->uring_lock);
 }
 
 static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
+3
fs/nfs/dir.c
···
 	xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
 
 	do {
+		if (entry->label)
+			entry->label->len = NFS4_MAXLABELLEN;
+
 		status = xdr_decode(desc, entry, &stream);
 		if (status != 0) {
 			if (status == -EAGAIN)
+22-21
fs/nfs/flexfilelayout/flexfilelayout.c
···
 }
 
 static void
-ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, int idx)
+ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, u32 idx)
 {
 	struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
 
···
 }
 
 static void
-ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, int idx)
+ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, u32 idx)
 {
 	struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
 
···
 
 static struct nfs4_pnfs_ds *
 ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
-			     int start_idx, int *best_idx,
+			     u32 start_idx, u32 *best_idx,
 			     bool check_device)
 {
 	struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
 	struct nfs4_ff_layout_mirror *mirror;
 	struct nfs4_pnfs_ds *ds;
 	bool fail_return = false;
-	int idx;
+	u32 idx;
 
 	/* mirrors are initially sorted by efficiency */
 	for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
···
 
 static struct nfs4_pnfs_ds *
 ff_layout_choose_any_ds_for_read(struct pnfs_layout_segment *lseg,
-				 int start_idx, int *best_idx)
+				 u32 start_idx, u32 *best_idx)
 {
 	return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, false);
 }
 
 static struct nfs4_pnfs_ds *
 ff_layout_choose_valid_ds_for_read(struct pnfs_layout_segment *lseg,
-				   int start_idx, int *best_idx)
+				   u32 start_idx, u32 *best_idx)
 {
 	return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, true);
 }
 
 static struct nfs4_pnfs_ds *
 ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
-				  int start_idx, int *best_idx)
+				  u32 start_idx, u32 *best_idx)
 {
 	struct nfs4_pnfs_ds *ds;
 
···
 }
 
 static struct nfs4_pnfs_ds *
-ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx)
+ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio,
+			  u32 *best_idx)
 {
 	struct pnfs_layout_segment *lseg = pgio->pg_lseg;
 	struct nfs4_pnfs_ds *ds;
···
 	struct nfs_pgio_mirror *pgm;
 	struct nfs4_ff_layout_mirror *mirror;
 	struct nfs4_pnfs_ds *ds;
-	int ds_idx;
+	u32 ds_idx, i;
 
 retry:
 	ff_layout_pg_check_layout(pgio, req);
···
 		goto retry;
 	}
 
-	mirror = FF_LAYOUT_COMP(pgio->pg_lseg, ds_idx);
+	for (i = 0; i < pgio->pg_mirror_count; i++) {
+		mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+		pgm = &pgio->pg_mirrors[i];
+		pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
+	}
 
 	pgio->pg_mirror_idx = ds_idx;
-
-	/* read always uses only one mirror - idx 0 for pgio layer */
-	pgm = &pgio->pg_mirrors[0];
-	pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
 
 	if (NFS_SERVER(pgio->pg_inode)->flags &
 			(NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
···
 	struct nfs4_ff_layout_mirror *mirror;
 	struct nfs_pgio_mirror *pgm;
 	struct nfs4_pnfs_ds *ds;
-	int i;
+	u32 i;
 
 retry:
 	ff_layout_pg_check_layout(pgio, req);
···
 static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
 {
 	u32 idx = hdr->pgio_mirror_idx + 1;
-	int new_idx = 0;
+	u32 new_idx = 0;
 
 	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
 		ff_layout_send_layouterror(hdr->lseg);
···
 					   struct nfs4_state *state,
 					   struct nfs_client *clp,
 					   struct pnfs_layout_segment *lseg,
-					   int idx)
+					   u32 idx)
 {
 	struct pnfs_layout_hdr *lo = lseg->pls_layout;
 	struct inode *inode = lo->plh_inode;
···
 /* Retry all errors through either pNFS or MDS except for -EJUKEBOX */
 static int ff_layout_async_handle_error_v3(struct rpc_task *task,
 					   struct pnfs_layout_segment *lseg,
-					   int idx)
+					   u32 idx)
 {
 	struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
 
···
 					struct nfs4_state *state,
 					struct nfs_client *clp,
 					struct pnfs_layout_segment *lseg,
-					int idx)
+					u32 idx)
 {
 	int vers = clp->cl_nfs_mod->rpc_vers->number;
 
···
 }
 
 static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
-					int idx, u64 offset, u64 length,
+					u32 idx, u64 offset, u64 length,
 					u32 *op_status, int opnum, int error)
 {
 	struct nfs4_ff_layout_mirror *mirror;
···
 	loff_t offset = hdr->args.offset;
 	int vers;
 	struct nfs_fh *fh;
-	int idx = hdr->pgio_mirror_idx;
+	u32 idx = hdr->pgio_mirror_idx;
 
 	mirror = FF_LAYOUT_COMP(lseg, idx);
 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
···
 	}
 }
 
-/* Drop the inode semaphore and wait for a pipe event, atomically */
-void pipe_wait(struct pipe_inode_info *pipe)
-{
-	DEFINE_WAIT(rdwait);
-	DEFINE_WAIT(wrwait);
-
-	/*
-	 * Pipes are system-local resources, so sleeping on them
-	 * is considered a noninteractive wait:
-	 */
-	prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
-	prepare_to_wait(&pipe->wr_wait, &wrwait, TASK_INTERRUPTIBLE);
-	pipe_unlock(pipe);
-	schedule();
-	finish_wait(&pipe->rd_wait, &rdwait);
-	finish_wait(&pipe->wr_wait, &wrwait);
-	pipe_lock(pipe);
-}
-
 static void anon_pipe_buf_release(struct pipe_inode_info *pipe,
 				  struct pipe_buffer *buf)
 {
···
 	return do_pipe2(fildes, 0);
 }
 
+/*
+ * This is the stupid "wait for pipe to be readable or writable"
+ * model.
+ *
+ * See pipe_read/write() for the proper kind of exclusive wait,
+ * but that requires that we wake up any other readers/writers
+ * if we then do not end up reading everything (ie the whole
+ * "wake_next_reader/writer" logic in pipe_read/write()).
+ */
+void pipe_wait_readable(struct pipe_inode_info *pipe)
+{
+	pipe_unlock(pipe);
+	wait_event_interruptible(pipe->rd_wait, pipe_readable(pipe));
+	pipe_lock(pipe);
+}
+
+void pipe_wait_writable(struct pipe_inode_info *pipe)
+{
+	pipe_unlock(pipe);
+	wait_event_interruptible(pipe->wr_wait, pipe_writable(pipe));
+	pipe_lock(pipe);
+}
+
+/*
+ * This depends on both the wait (here) and the wakeup (wake_up_partner)
+ * holding the pipe lock, so "*cnt" is stable and we know a wakeup cannot
+ * race with the count check and waitqueue prep.
+ *
+ * Normally in order to avoid races, you'd do the prepare_to_wait() first,
+ * then check the condition you're waiting for, and only then sleep. But
+ * because of the pipe lock, we can check the condition before being on
+ * the wait queue.
+ *
+ * We use the 'rd_wait' waitqueue for pipe partner waiting.
+ */
 static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt)
 {
+	DEFINE_WAIT(rdwait);
 	int cur = *cnt;
 
 	while (cur == *cnt) {
-		pipe_wait(pipe);
+		prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
+		pipe_unlock(pipe);
+		schedule();
+		finish_wait(&pipe->rd_wait, &rdwait);
+		pipe_lock(pipe);
 		if (signal_pending(current))
 			break;
 	}
···
 static void wake_up_partner(struct pipe_inode_info *pipe)
 {
 	wake_up_interruptible_all(&pipe->rd_wait);
-	wake_up_interruptible_all(&pipe->wr_wait);
 }
 
 static int fifo_open(struct inode *inode, struct file *filp)
fs/read_write.c (+8)
···
 	inc_syscw(current);
 	return ret;
 }
+/*
+ * This "EXPORT_SYMBOL_GPL()" is more of a "EXPORT_SYMBOL_DONTUSE()",
+ * but autofs is one of the few internal kernel users that actually
+ * wants this _and_ can be built as a module. So we need to export
+ * this symbol for autofs, even though it really isn't appropriate
+ * for any other kernel modules.
+ */
+EXPORT_SYMBOL_GPL(__kernel_write);
 
 ssize_t kernel_write(struct file *file, const void *buf, size_t count,
 		     loff_t *pos)
···
 
 	struct memstick_dev *card;
 	unsigned int        retries;
+	bool removing;
 
 	/* Notify the host that some requests are pending. */
 	void                (*request)(struct memstick_host *host);
···
 	 */
 	atomic_t mm_count;
 
+	/**
+	 * @has_pinned: Whether this mm has pinned any pages. This can
+	 * be either replaced in the future by @pinned_vm when it
+	 * becomes stable, or grow into a counter on its own. We're
+	 * aggressive on this bit now - even if the pinned pages were
+	 * unpinned later on, we'll still keep this bit set for the
+	 * lifecycle of this mm just for simplicity.
+	 */
+	atomic_t has_pinned;
+
 #ifdef CONFIG_MMU
 	atomic_long_t pgtables_bytes;	/* PTE page table pages */
 #endif
include/linux/mmzone.h (+8, -3)
···
 				 unsigned int alloc_flags);
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
 		unsigned long mark, int highest_zoneidx);
-enum memmap_context {
-	MEMMAP_EARLY,
-	MEMMAP_HOTPLUG,
+/*
+ * Memory initialization context, use to differentiate memory added by
+ * the platform statically or via memory hotplug interface.
+ */
+enum meminit_context {
+	MEMINIT_EARLY,
+	MEMINIT_HOTPLUG,
 };
+
 extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
 				     unsigned long size);
include/linux/netdev_features.h (+1, -1)
···
 #define NETIF_F_GSO_MASK	(__NETIF_F_BIT(NETIF_F_GSO_LAST + 1) - \
 		__NETIF_F_BIT(NETIF_F_GSO_SHIFT))
 
-/* List of IP checksum features. Note that NETIF_F_ HW_CSUM should not be
+/* List of IP checksum features. Note that NETIF_F_HW_CSUM should not be
  * set in features when NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM are set--
  * this would be contradictory
  */
include/linux/netdevice.h (+2)
···
  *			the watchdog (see dev_watchdog())
  *	@watchdog_timer:	List of timers
  *
+ *	@proto_down_reason:	reason a netdev interface is held down
  *	@pcpu_refcnt:		Number of references to this device
  *	@todo_list:	Delayed register/unregister
  *	@link_watch_list:	XXX: need comments on this one
···
  *	@udp_tunnel_nic_info:	static structure describing the UDP tunnel
  *				offload capabilities of the device
  *	@udp_tunnel_nic:	UDP tunnel offload state
+ *	@xdp_state:		stores info on attached XDP BPF programs
  *
  *	FIXME: cleanup struct net_device such that network protocol info
  *	moves out.
include/linux/nfs_xdr.h (+2, -2)
···
 	__u64			mds_offset;	/* Filelayout dense stripe */
 	struct nfs_page_array	page_array;
 	struct nfs_client	*ds_clp;	/* pNFS data server */
-	int			ds_commit_idx;	/* ds index if ds_clp is set */
-	int			pgio_mirror_idx;/* mirror index in pgio layer */
+	u32			ds_commit_idx;	/* ds index if ds_clp is set */
+	u32			pgio_mirror_idx;/* mirror index in pgio layer */
 };
 
 struct nfs_mds_commit_info {
include/linux/node.h (+7, -4)
···
 typedef  void (*node_registration_func_t)(struct node *);
 
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
-extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+int link_mem_sections(int nid, unsigned long start_pfn,
+		      unsigned long end_pfn,
+		      enum meminit_context context);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn,
+				    enum meminit_context context)
 {
 	return 0;
 }
···
 		if (error)
 			return error;
 		/* link memory sections under this node */
-		error = link_mem_sections(nid, start_pfn, end_pfn);
+		error = link_mem_sections(nid, start_pfn, end_pfn,
+					  MEMINIT_EARLY);
 	}
 
 	return error;
include/linux/pgtable.h (+10)
···
 #define mm_pmd_folded(mm)	__is_defined(__PAGETABLE_PMD_FOLDED)
 #endif
 
+#ifndef p4d_offset_lockless
+#define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address)
+#endif
+#ifndef pud_offset_lockless
+#define pud_offset_lockless(p4dp, p4d, address) pud_offset(&(p4d), address)
+#endif
+#ifndef pmd_offset_lockless
+#define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address)
+#endif
+
 /*
  * p?d_leaf() - true if this entry is a final mapping to a physical address.
  * This differs from p?d_huge() by the fact that they are always available (if
include/linux/pipe_fs_i.h (+3, -2)
···
 extern unsigned long pipe_user_pages_hard;
 extern unsigned long pipe_user_pages_soft;
 
-/* Drop the inode semaphore and wait for a pipe event, atomically */
-void pipe_wait(struct pipe_inode_info *pipe);
+/* Wait for a pipe to be readable/writable while dropping the pipe lock */
+void pipe_wait_readable(struct pipe_inode_info *);
+void pipe_wait_writable(struct pipe_inode_info *);
 
 struct pipe_inode_info *alloc_pipe_info(void);
 void free_pipe_info(struct pipe_inode_info *);
···
  *	is untouched. Otherwise it is extended. Returns zero on
  *	success. The skb is freed on error if @free_on_error is true.
  */
-static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
-				  bool free_on_error)
+static inline int __must_check __skb_put_padto(struct sk_buff *skb,
+					       unsigned int len,
+					       bool free_on_error)
 {
 	unsigned int size = skb->len;
 
···
  *	is untouched. Otherwise it is extended. Returns zero on
  *	success. The skb is freed on error.
  */
-static inline int skb_put_padto(struct sk_buff *skb, unsigned int len)
+static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len)
 {
 	return __skb_put_padto(skb, len, true);
 }
···
  * vb2_core_reqbufs() - Initiate streaming.
  * @q:		pointer to &struct vb2_queue with videobuf2 queue.
  * @memory:	memory type, as defined by &enum vb2_memory.
- * @flags:	auxiliary queue/buffer management flags. Currently, the only
- *		used flag is %V4L2_FLAG_MEMORY_NON_CONSISTENT.
  * @count:	requested buffer count.
  *
  * Videobuf2 core helper to implement VIDIOC_REQBUF() operation. It is called
···
  * Return: returns zero on success; an error code otherwise.
  */
 int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
-		unsigned int flags, unsigned int *count);
+		unsigned int *count);
 
 /**
  * vb2_core_create_bufs() - Allocate buffers and any required auxiliary structs
  * @q:		pointer to &struct vb2_queue with videobuf2 queue.
  * @memory:	memory type, as defined by &enum vb2_memory.
- * @flags:	auxiliary queue/buffer management flags.
  * @count:	requested buffer count.
  * @requested_planes: number of planes requested.
  * @requested_sizes: array with the size of the planes.
···
  * Return: returns zero on success; an error code otherwise.
  */
 int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
-		unsigned int flags, unsigned int *count,
+		unsigned int *count,
 		unsigned int requested_planes,
 		const unsigned int requested_sizes[]);
···
  * @hdrlen: length of family specific header
  * @tb: destination array with maxtype+1 elements
  * @maxtype: maximum attribute type to be expected
- * @validate: validation strictness
  * @extack: extended ACK report struct
  *
  * See nla_parse()
···
  * @len: length of attribute stream
  * @maxtype: maximum attribute type to be expected
  * @policy: validation policy
- * @validate: validation strictness
  * @extack: extended ACK report struct
  *
  * Validates all attributes in the specified attribute stream against the
···
 		data_ready_signalled:1;
 
 	atomic_t pd_mode;
+
+	/* Fields after this point will be skipped on copies, like on accept
+	 * and peeloff operations
+	 */
+
 	/* Receive to here while partial delivery is in effect. */
 	struct sk_buff_head pd_lobby;
 
-	/* These must be the last fields, as they will skipped on copies,
-	 * like on accept and peeloff operations
-	 */
 	struct list_head auto_asconf_list;
 	int do_auto_asconf;
 };
···
 
 	/*
 	 * The module is going away. We should disarm the kprobe which
-	 * is using ftrace.
+	 * is using ftrace, because ftrace framework is still available at
+	 * MODULE_STATE_GOING notification.
 	 */
-	if (kprobe_ftrace(p))
+	if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed)
 		disarm_kprobe_ftrace(p);
 }
 
···
 /* Markers of _kprobe_blacklist section */
 extern unsigned long __start_kprobe_blacklist[];
 extern unsigned long __stop_kprobe_blacklist[];
+
+void kprobe_free_init_mem(void)
+{
+	void *start = (void *)(&__init_begin);
+	void *end = (void *)(&__init_end);
+	struct hlist_head *head;
+	struct kprobe *p;
+	int i;
+
+	mutex_lock(&kprobe_mutex);
+
+	/* Kill all kprobes on initmem */
+	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
+		head = &kprobe_table[i];
+		hlist_for_each_entry(p, head, hlist) {
+			if (start <= (void *)p->addr && (void *)p->addr < end)
+				kill_kprobe(p);
+		}
+	}
+
+	mutex_unlock(&kprobe_mutex);
+}
 
 static int __init init_kprobes(void)
 {
···
 }
 EXPORT_SYMBOL(strscpy_pad);
 
+/**
+ * stpcpy - copy a string from src to dest returning a pointer to the new end
+ *          of dest, including src's %NUL-terminator. May overrun dest.
+ * @dest: pointer to end of string being copied into. Must be large enough
+ *        to receive copy.
+ * @src: pointer to the beginning of string being copied from. Must not overlap
+ *       dest.
+ *
+ * stpcpy differs from strcpy in a key way: the return value is a pointer
+ * to the new %NUL-terminating character in @dest. (For strcpy, the return
+ * value is a pointer to the start of @dest). This interface is considered
+ * unsafe as it doesn't perform bounds checking of the inputs. As such it's
+ * not recommended for usage. Instead, its definition is provided in case
+ * the compiler lowers other libcalls to stpcpy.
+ */
+char *stpcpy(char *__restrict__ dest, const char *__restrict__ src);
+char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
+{
+	while ((*dest++ = *src++) != '\0')
+		/* nothing */;
+	return --dest;
+}
+EXPORT_SYMBOL(stpcpy);
+
 #ifndef __HAVE_ARCH_STRCAT
 /**
  * strcat - Append one %NUL-terminated string to another
···
 	}
 
 	if (!PageUptodate(page)) {
-		error = lock_page_killable(page);
+		if (iocb->ki_flags & IOCB_WAITQ)
+			error = lock_page_async(page, iocb->ki_waitq);
+		else
+			error = lock_page_killable(page);
+
 		if (unlikely(error))
 			goto readpage_error;
 		if (!PageUptodate(page)) {
mm/gup.c (+15, -9)
···
 		BUG_ON(*locked != 1);
 	}
 
+	if (flags & FOLL_PIN)
+		atomic_set(&mm->has_pinned, 1);
+
 	/*
 	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
 	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
···
 	return 1;
 }
 
-static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
+static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
 			 unsigned int flags, struct page **pages, int *nr)
 {
 	unsigned long next;
 	pmd_t *pmdp;
 
-	pmdp = pmd_offset(&pud, addr);
+	pmdp = pmd_offset_lockless(pudp, pud, addr);
 	do {
 		pmd_t pmd = READ_ONCE(*pmdp);
 
···
 	return 1;
 }
 
-static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
+static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
 			 unsigned int flags, struct page **pages, int *nr)
 {
 	unsigned long next;
 	pud_t *pudp;
 
-	pudp = pud_offset(&p4d, addr);
+	pudp = pud_offset_lockless(p4dp, p4d, addr);
 	do {
 		pud_t pud = READ_ONCE(*pudp);
 
···
 			if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
 					 PUD_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
+		} else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
 			return 0;
 	} while (pudp++, addr = next, addr != end);
 
 	return 1;
 }
 
-static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
+static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
 			 unsigned int flags, struct page **pages, int *nr)
 {
 	unsigned long next;
 	p4d_t *p4dp;
 
-	p4dp = p4d_offset(&pgd, addr);
+	p4dp = p4d_offset_lockless(pgdp, pgd, addr);
 	do {
 		p4d_t p4d = READ_ONCE(*p4dp);
 
···
 			if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
 					 P4D_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
+		} else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
 			return 0;
 	} while (p4dp++, addr = next, addr != end);
 
···
 		if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
 				 PGDIR_SHIFT, next, flags, pages, nr))
 			return;
-		} else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
+		} else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
 			return;
 	} while (pgdp++, addr = next, addr != end);
 }
···
 				       FOLL_FORCE | FOLL_PIN | FOLL_GET |
 				       FOLL_FAST_ONLY)))
 		return -EINVAL;
+
+	if (gup_flags & FOLL_PIN)
+		atomic_set(&current->mm->has_pinned, 1);
 
 	if (!(gup_flags & FOLL_FAST_ONLY))
 		might_lock_read(&current->mm->mmap_lock);
mm/huge_memory.c (+28)
···
 
 	src_page = pmd_page(pmd);
 	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+
+	/*
+	 * If this page is a potentially pinned page, split and retry the fault
+	 * with smaller page size. Normally this should not happen because the
+	 * userspace should use MADV_DONTFORK upon pinned regions. This is a
+	 * best effort that the pinned pages won't be replaced by another
+	 * random page during the coming copy-on-write.
+	 */
+	if (unlikely(is_cow_mapping(vma->vm_flags) &&
+		     atomic_read(&src_mm->has_pinned) &&
+		     page_maybe_dma_pinned(src_page))) {
+		pte_free(dst_mm, pgtable);
+		spin_unlock(src_ptl);
+		spin_unlock(dst_ptl);
+		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
+		return -EAGAIN;
+	}
+
 	get_page(src_page);
 	page_dup_rmap(src_page, true);
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
···
 	 */
 	if (is_huge_zero_pud(pud)) {
 		/* No huge zero pud yet */
+	}
+
+	/* Please refer to comments in copy_huge_pmd() */
+	if (unlikely(is_cow_mapping(vma->vm_flags) &&
+		     atomic_read(&src_mm->has_pinned) &&
+		     page_maybe_dma_pinned(pud_page(pud)))) {
+		spin_unlock(src_ptl);
+		spin_unlock(dst_ptl);
+		__split_huge_pud(vma, src_pud, addr);
+		return -EAGAIN;
 	}
 
 	pudp_set_wrprotect(src_mm, addr, src_pud);
···
  * covered by this vma.
  */
 
-static inline unsigned long
-copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+static unsigned long
+copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
 		unsigned long addr, int *rss)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pte_t pte = *src_pte;
 	struct page *page;
+	swp_entry_t entry = pte_to_swp_entry(pte);
 
-	/* pte contains position in swap or file, so copy. */
-	if (unlikely(!pte_present(pte))) {
-		swp_entry_t entry = pte_to_swp_entry(pte);
-
-		if (likely(!non_swap_entry(entry))) {
-			if (swap_duplicate(entry) < 0)
-				return entry.val;
-
-			/* make sure dst_mm is on swapoff's mmlist. */
-			if (unlikely(list_empty(&dst_mm->mmlist))) {
-				spin_lock(&mmlist_lock);
-				if (list_empty(&dst_mm->mmlist))
-					list_add(&dst_mm->mmlist,
-							&src_mm->mmlist);
-				spin_unlock(&mmlist_lock);
-			}
-			rss[MM_SWAPENTS]++;
-		} else if (is_migration_entry(entry)) {
-			page = migration_entry_to_page(entry);
-
-			rss[mm_counter(page)]++;
-
-			if (is_write_migration_entry(entry) &&
-					is_cow_mapping(vm_flags)) {
-				/*
-				 * COW mappings require pages in both
-				 * parent and child to be set to read.
-				 */
-				make_migration_entry_read(&entry);
-				pte = swp_entry_to_pte(entry);
-				if (pte_swp_soft_dirty(*src_pte))
-					pte = pte_swp_mksoft_dirty(pte);
-				if (pte_swp_uffd_wp(*src_pte))
-					pte = pte_swp_mkuffd_wp(pte);
-				set_pte_at(src_mm, addr, src_pte, pte);
-			}
-		} else if (is_device_private_entry(entry)) {
-			page = device_private_entry_to_page(entry);
-
-			/*
-			 * Update rss count even for unaddressable pages, as
-			 * they should treated just like normal pages in this
-			 * respect.
-			 *
-			 * We will likely want to have some new rss counters
-			 * for unaddressable pages, at some point. But for now
-			 * keep things as they are.
-			 */
-			get_page(page);
-			rss[mm_counter(page)]++;
-			page_dup_rmap(page, false);
-
-			/*
-			 * We do not preserve soft-dirty information, because so
-			 * far, checkpoint/restore is the only feature that
-			 * requires that. And checkpoint/restore does not work
-			 * when a device driver is involved (you cannot easily
-			 * save and restore device driver state).
-			 */
-			if (is_write_device_private_entry(entry) &&
-					is_cow_mapping(vm_flags)) {
-				make_device_private_entry_read(&entry);
-				pte = swp_entry_to_pte(entry);
-				if (pte_swp_uffd_wp(*src_pte))
-					pte = pte_swp_mkuffd_wp(pte);
-				set_pte_at(src_mm, addr, src_pte, pte);
-			}
-		}
-		goto out_set_pte;
+	if (likely(!non_swap_entry(entry))) {
+		if (swap_duplicate(entry) < 0)
+			return entry.val;
+
+		/* make sure dst_mm is on swapoff's mmlist. */
+		if (unlikely(list_empty(&dst_mm->mmlist))) {
+			spin_lock(&mmlist_lock);
+			if (list_empty(&dst_mm->mmlist))
+				list_add(&dst_mm->mmlist,
+						&src_mm->mmlist);
+			spin_unlock(&mmlist_lock);
+		}
+		rss[MM_SWAPENTS]++;
+	} else if (is_migration_entry(entry)) {
+		page = migration_entry_to_page(entry);
+
+		rss[mm_counter(page)]++;
+
+		if (is_write_migration_entry(entry) &&
+				is_cow_mapping(vm_flags)) {
+			/*
+			 * COW mappings require pages in both
+			 * parent and child to be set to read.
+			 */
+			make_migration_entry_read(&entry);
+			pte = swp_entry_to_pte(entry);
+			if (pte_swp_soft_dirty(*src_pte))
+				pte = pte_swp_mksoft_dirty(pte);
+			if (pte_swp_uffd_wp(*src_pte))
+				pte = pte_swp_mkuffd_wp(pte);
+			set_pte_at(src_mm, addr, src_pte, pte);
+		}
+	} else if (is_device_private_entry(entry)) {
+		page = device_private_entry_to_page(entry);
+
+		/*
+		 * Update rss count even for unaddressable pages, as
+		 * they should treated just like normal pages in this
+		 * respect.
+		 *
+		 * We will likely want to have some new rss counters
+		 * for unaddressable pages, at some point. But for now
+		 * keep things as they are.
+		 */
+		get_page(page);
+		rss[mm_counter(page)]++;
+		page_dup_rmap(page, false);
+
+		/*
+		 * We do not preserve soft-dirty information, because so
+		 * far, checkpoint/restore is the only feature that
+		 * requires that. And checkpoint/restore does not work
+		 * when a device driver is involved (you cannot easily
+		 * save and restore device driver state).
+		 */
+		if (is_write_device_private_entry(entry) &&
+				is_cow_mapping(vm_flags)) {
+			make_device_private_entry_read(&entry);
+			pte = swp_entry_to_pte(entry);
+			if (pte_swp_uffd_wp(*src_pte))
+				pte = pte_swp_mkuffd_wp(pte);
+			set_pte_at(src_mm, addr, src_pte, pte);
+		}
 	}
+	set_pte_at(dst_mm, addr, dst_pte, pte);
+	return 0;
+}
+
+/*
+ * Copy a present and normal page if necessary.
+ *
+ * NOTE! The usual case is that this doesn't need to do
+ * anything, and can just return a positive value. That
+ * will let the caller know that it can just increase
+ * the page refcount and re-use the pte the traditional
+ * way.
+ *
+ * But _if_ we need to copy it because it needs to be
+ * pinned in the parent (and the child should get its own
+ * copy rather than just a reference to the same page),
+ * we'll do that here and return zero to let the caller
+ * know we're done.
+ *
+ * And if we need a pre-allocated page but don't yet have
+ * one, return a negative error to let the preallocation
+ * code know so that it can do so outside the page table
+ * lock.
+ */
+static inline int
+copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+		pte_t *dst_pte, pte_t *src_pte,
+		struct vm_area_struct *vma, struct vm_area_struct *new,
+		unsigned long addr, int *rss, struct page **prealloc,
+		pte_t pte, struct page *page)
+{
+	struct page *new_page;
+
+	if (!is_cow_mapping(vma->vm_flags))
+		return 1;
+
+	/*
+	 * The trick starts.
+	 *
+	 * What we want to do is to check whether this page may
+	 * have been pinned by the parent process. If so,
+	 * instead of wrprotect the pte on both sides, we copy
+	 * the page immediately so that we'll always guarantee
+	 * the pinned page won't be randomly replaced in the
+	 * future.
+	 *
+	 * To achieve this, we do the following:
+	 *
+	 * 1. Write-protect the pte if it's writable. This is
+	 *    to protect concurrent write fast-gup with
+	 *    FOLL_PIN, so that we'll fail the fast-gup with
+	 *    the write bit removed.
+	 *
+	 * 2. Check page_maybe_dma_pinned() to see whether this
+	 *    page may have been pinned.
+	 *
+	 * The order of these steps is important to serialize
+	 * against the fast-gup code (gup_pte_range()) on the
+	 * pte check and try_grab_compound_head(), so that
+	 * we'll make sure either we'll capture that fast-gup
+	 * so we'll copy the pinned page here, or we'll fail
+	 * that fast-gup.
+	 *
+	 * NOTE! Even if we don't end up copying the page,
+	 * we won't undo this wrprotect(), because the normal
+	 * reference copy will need it anyway.
+	 */
+	if (pte_write(pte))
+		ptep_set_wrprotect(src_mm, addr, src_pte);
+
+	/*
+	 * These are the "normally we can just copy by reference"
+	 * checks.
+	 */
+	if (likely(!atomic_read(&src_mm->has_pinned)))
+		return 1;
+	if (likely(!page_maybe_dma_pinned(page)))
+		return 1;
+
+	/*
+	 * Uhhuh. It looks like the page might be a pinned page,
+	 * and we actually need to copy it. Now we can set the
+	 * source pte back to being writable.
+	 */
+	if (pte_write(pte))
+		set_pte_at(src_mm, addr, src_pte, pte);
+
+	new_page = *prealloc;
+	if (!new_page)
+		return -EAGAIN;
+
+	/*
+	 * We have a prealloc page, all good! Take it
+	 * over and copy the page & arm it.
+	 */
+	*prealloc = NULL;
+	copy_user_highpage(new_page, page, addr, vma);
+	__SetPageUptodate(new_page);
+	page_add_new_anon_rmap(new_page, new, addr, false);
+	lru_cache_add_inactive_or_unevictable(new_page, new);
+	rss[mm_counter(new_page)]++;
+
+	/* All done, just insert the new page copy in the child */
+	pte = mk_pte(new_page, new->vm_page_prot);
+	pte = maybe_mkwrite(pte_mkdirty(pte), new);
+	set_pte_at(dst_mm, addr, dst_pte, pte);
+	return 0;
+}
+
+/*
+ * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
+ * is required to copy this pte.
+ */
+static inline int
+copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
+		struct vm_area_struct *new,
+		unsigned long addr, int *rss, struct page **prealloc)
+{
+	unsigned long vm_flags = vma->vm_flags;
+	pte_t pte = *src_pte;
+	struct page *page;
+
+	page = vm_normal_page(vma, addr, pte);
+	if (page) {
+		int retval;
+
+		retval = copy_present_page(dst_mm, src_mm,
+					   dst_pte, src_pte,
+					   vma, new,
+					   addr, rss, prealloc,
+					   pte, page);
+		if (retval <= 0)
+			return retval;
+
+		get_page(page);
+		page_dup_rmap(page, false);
+		rss[mm_counter(page)]++;
 	}
 
 	/*
···
 	if (!(vm_flags & VM_UFFD_WP))
 		pte = pte_clear_uffd_wp(pte);
 
-	page = vm_normal_page(vma, addr, pte);
-	if (page) {
-		get_page(page);
-		page_dup_rmap(page, false);
-		rss[mm_counter(page)]++;
-	}
-
-out_set_pte:
 	set_pte_at(dst_mm, addr, dst_pte, pte);
 	return 0;
 }
 
+static inline struct page *
+page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
+		   unsigned long addr)
+{
+	struct page *new_page;
+
+	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
+	if (!new_page)
+		return NULL;
+
+	if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) {
+		put_page(new_page);
+		return NULL;
+	}
+	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
+
+	return new_page;
+}
+
 static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		   pmd_t *dst_pmd, pmd_t *src_pmd, struct vm_area_struct *vma,
+		   struct vm_area_struct *new,
 		   unsigned long addr, unsigned long end)
 {
 	pte_t *orig_src_pte, *orig_dst_pte;
 	pte_t *src_pte, *dst_pte;
 	spinlock_t *src_ptl, *dst_ptl;
-	int progress = 0;
+	int progress, ret = 0;
 	int rss[NR_MM_COUNTERS];
 	swp_entry_t entry = (swp_entry_t){0};
+	struct page *prealloc = NULL;
 
 again:
+	progress = 0;
 	init_rss_vec(rss);
 
 	dst_pte = pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);
-	if (!dst_pte)
-		return -ENOMEM;
+	if (!dst_pte) {
+		ret = -ENOMEM;
+		goto out;
+	}
 	src_pte = pte_offset_map(src_pmd, addr);
 	src_ptl = pte_lockptr(src_mm, src_pmd);
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
···
 			progress++;
 			continue;
 		}
-		entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
-							vma, addr, rss);
-		if (entry.val)
+		if (unlikely(!pte_present(*src_pte))) {
+			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
+							dst_pte, src_pte,
+							vma, addr, rss);
+			if (entry.val)
+				break;
+			progress += 8;
+			continue;
+		}
+		/* copy_present_pte() will clear `*prealloc' if consumed */
+		ret = copy_present_pte(dst_mm, src_mm, dst_pte, src_pte,
+				       vma, new, addr, rss, &prealloc);
+		/*
+		 * If we need a pre-allocated page for this pte, drop the
+		 * locks, allocate, and try again.
+		 */
+		if (unlikely(ret == -EAGAIN))
 			break;
+		if (unlikely(prealloc)) {
+			/*
+			 * pre-alloc page cannot be reused by next time so as
+			 * to strictly follow mempolicy (e.g., alloc_page_vma()
+			 * will allocate page according to address). This
+			 * could only happen if one pinned pte changed.
+			 */
+			put_page(prealloc);
+			prealloc = NULL;
+		}
 		progress += 8;
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
 
···
 	cond_resched();
 
 	if (entry.val) {
-		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
+		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		entry.val = 0;
+	} else if (ret) {
+		WARN_ON_ONCE(ret != -EAGAIN);
+		prealloc = page_copy_prealloc(src_mm, vma, addr);
+		if (!prealloc)
 			return -ENOMEM;
-		progress = 0;
+		/* We've captured and resolved the error. Reset, try again. */
+		ret = 0;
 	}
 	if (addr != end)
 		goto again;
-	return 0;
+out:
+	if (unlikely(prealloc))
+		put_page(prealloc);
+	return ret;
 }
 
 static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pud_t *dst_pud, pud_t *src_pud, struct vm_area_struct *vma,
+		struct vm_area_struct *new,
 		unsigned long addr, unsigned long end)
 {
 	pmd_t *src_pmd, *dst_pmd;
···
 		if (pmd_none_or_clear_bad(src_pmd))
 			continue;
 		if (copy_pte_range(dst_mm, src_mm, dst_pmd, src_pmd,
-				   vma, addr, next))
+				   vma, new, addr, next))
 			return -ENOMEM;
 	} while (dst_pmd++, src_pmd++, addr = next, addr != end);
 	return 0;
 
 static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		p4d_t *dst_p4d, p4d_t *src_p4d, struct vm_area_struct *vma,
+		struct vm_area_struct *new,
 		unsigned long addr, unsigned long end)
 {
 	pud_t *src_pud, *dst_pud;
···
 		if (pud_none_or_clear_bad(src_pud))
 			continue;
 		if (copy_pmd_range(dst_mm, src_mm, dst_pud, src_pud,
-				   vma, addr, next))
+				   vma, new, addr, next))
 			return -ENOMEM;
 	} while (dst_pud++, src_pud++, addr = next, addr != end);
 	return 0;
 
 static inline int copy_p4d_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pgd_t *dst_pgd, pgd_t *src_pgd, struct vm_area_struct *vma,
+		struct vm_area_struct *new,
 		unsigned long addr, unsigned long end)
 {
 	p4d_t *src_p4d, *dst_p4d;
···
 		if (p4d_none_or_clear_bad(src_p4d))
 			continue;
 		if (copy_pud_range(dst_mm, src_mm, dst_p4d, src_p4d,
-				   vma, addr, next))
+				   vma, new, addr, next))
 			return -ENOMEM;
 	} while (dst_p4d++, src_p4d++, addr = next, addr != end);
 	return 0;
 }
 
 int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		struct vm_area_struct *vma)
+		struct vm_area_struct *vma, struct vm_area_struct *new)
 {
 	pgd_t *src_pgd, *dst_pgd;
 	unsigned long next;
···
 		if (pgd_none_or_clear_bad(src_pgd))
 			continue;
 		if (unlikely(copy_p4d_range(dst_mm, src_mm, dst_pgd, src_pgd,
-					    vma, addr, next))) {
+					    vma, new, addr, next))) {
 			ret = -ENOMEM;
 			break;
 		}
···
 		 * page count reference, and the page is locked,
 		 * it's dark out, and we're wearing sunglasses. Hit it.
 		 */
-		wp_page_reuse(vmf);
 		unlock_page(page);
+		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;
 	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
+3-2
mm/memory_hotplug.c
···
 	 * are reserved so nobody should be touching them so we should be safe
 	 */
 	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
-			 MEMMAP_HOTPLUG, altmap);
+			 MEMINIT_HOTPLUG, altmap);

 	set_zone_contiguous(zone);
 }
···
 	}

 	/* link memory sections under this node.*/
-	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
+	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
+				MEMINIT_HOTPLUG);
 	BUG_ON(ret);

 	/* create new memmap entry */
+3-4
mm/migrate.c
···
 		 * Capture required information that might get lost
 		 * during migration.
 		 */
-		is_thp = PageTransHuge(page);
+		is_thp = PageTransHuge(page) && !PageHuge(page);
 		nr_subpages = thp_nr_pages(page);
 		cond_resched();
···
 			 * we encounter them after the rest of the list
 			 * is processed.
 			 */
-			if (PageTransHuge(page) && !PageHuge(page)) {
+			if (is_thp) {
 				lock_page(page);
 				rc = split_huge_page_to_list(page, from);
 				unlock_page(page);
···
 					nr_thp_split++;
 					goto retry;
 				}
-			}
-			if (is_thp) {
+
 				nr_thp_failed++;
 				nr_failed += nr_subpages;
 				goto out;
+21-8
mm/page_alloc.c
···
 	struct page *page;

 	if (likely(order == 0)) {
-		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
+		/*
+		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
+		 * we need to skip it when CMA area isn't allowed.
+		 */
+		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
+		    migratetype != MIGRATE_MOVABLE) {
+			page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
 					migratetype, alloc_flags);
-		goto out;
+			goto out;
+		}
 	}

 	/*
···
 	do {
 		page = NULL;
-		if (alloc_flags & ALLOC_HARDER) {
+		/*
+		 * order-0 request can reach here when the pcplist is skipped
+		 * due to non-CMA allocation context. HIGHATOMIC area is
+		 * reserved for high-order atomic allocation, so order-0
+		 * request should skip it.
+		 */
+		if (order > 0 && alloc_flags & ALLOC_HARDER) {
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 			if (page)
 				trace_mm_page_alloc_zone_locked(page, order, migratetype);
···
 * done. Non-atomic initialization, single-pass.
 */
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
-		unsigned long start_pfn, enum memmap_context context,
+		unsigned long start_pfn, enum meminit_context context,
 		struct vmem_altmap *altmap)
{
 	unsigned long pfn, end_pfn = start_pfn + size;
···
 	 * There can be holes in boot-time mem_map[]s handed to this
 	 * function.  They do not exist on hotplugged memory.
 	 */
-	if (context == MEMMAP_EARLY) {
+	if (context == MEMINIT_EARLY) {
 		if (overlap_memmap_init(zone, &pfn))
 			continue;
 		if (defer_init(nid, pfn, end_pfn))
···

 	page = pfn_to_page(pfn);
 	__init_single_page(page, pfn, zone, nid);
-	if (context == MEMMAP_HOTPLUG)
+	if (context == MEMINIT_HOTPLUG)
 		__SetPageReserved(page);

 	/*
···
 	 * check here not to call set_pageblock_migratetype() against
 	 * pfn out of zone.
 	 *
-	 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+	 * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
 	 * because this is done early in section_activate()
 	 */
 	if (!(pfn & (pageblock_nr_pages - 1))) {
···
 	if (end_pfn > start_pfn) {
 		size = end_pfn - start_pfn;
 		memmap_init_zone(size, nid, zone, start_pfn,
-				 MEMMAP_EARLY, NULL);
+				 MEMINIT_EARLY, NULL);
 	}
 }
mm/slub.c
···
 	char *next_block;
 	slab_flags_t block_flags;

-	/* If slub_debug = 0, it folds into the if conditional. */
-	if (!slub_debug_string)
-		return flags | slub_debug;
-
 	len = strlen(name);
 	next_block = slub_debug_string;
 	/* Go through all blocks of debug options, see if any matches our slab's name */
···
 		}
 	}

-	return slub_debug;
+	return flags | slub_debug;
}
#else /* !CONFIG_SLUB_DEBUG */
static inline void setup_object_debug(struct kmem_cache *s,
+1-1
mm/swapfile.c
···
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
+117-28
net/batman-adv/bridge_loop_avoidance.c
···
 #include <linux/lockdep.h>
 #include <linux/netdevice.h>
 #include <linux/netlink.h>
+#include <linux/preempt.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/seq_file.h>
···
 */
static inline u32 batadv_choose_backbone_gw(const void *data, u32 size)
{
-	const struct batadv_bla_claim *claim = (struct batadv_bla_claim *)data;
+	const struct batadv_bla_backbone_gw *gw;
 	u32 hash = 0;

-	hash = jhash(&claim->addr, sizeof(claim->addr), hash);
-	hash = jhash(&claim->vid, sizeof(claim->vid), hash);
+	gw = (struct batadv_bla_backbone_gw *)data;
+	hash = jhash(&gw->orig, sizeof(gw->orig), hash);
+	hash = jhash(&gw->vid, sizeof(gw->vid), hash);

 	return hash % size;
}
···
}

/**
- * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
+ * batadv_bla_check_duplist() - Check if a frame is in the broadcast dup.
 * @bat_priv: the bat priv with all the soft interface information
- * @skb: contains the bcast_packet to be checked
+ * @skb: contains the multicast packet to be checked
+ * @payload_ptr: pointer to position inside the head buffer of the skb
+ *	marking the start of the data to be CRC'ed
+ * @orig: originator mac address, NULL if unknown
 *
- * check if it is on our broadcast list. Another gateway might
- * have sent the same packet because it is connected to the same backbone,
- * so we have to remove this duplicate.
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
 *
 * This is performed by checking the CRC, which will tell us
 * with a good chance that it is the same packet. If it is furthermore
···
 *
 * Return: true if a packet is in the duplicate list, false otherwise.
 */
-bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
-				    struct sk_buff *skb)
+static bool batadv_bla_check_duplist(struct batadv_priv *bat_priv,
+				     struct sk_buff *skb, u8 *payload_ptr,
+				     const u8 *orig)
{
-	int i, curr;
-	__be32 crc;
-	struct batadv_bcast_packet *bcast_packet;
 	struct batadv_bcast_duplist_entry *entry;
 	bool ret = false;
-
-	bcast_packet = (struct batadv_bcast_packet *)skb->data;
+	int i, curr;
+	__be32 crc;

 	/* calculate the crc ... */
-	crc = batadv_skb_crc32(skb, (u8 *)(bcast_packet + 1));
+	crc = batadv_skb_crc32(skb, payload_ptr);

 	spin_lock_bh(&bat_priv->bla.bcast_duplist_lock);
···
 		if (entry->crc != crc)
 			continue;

-		if (batadv_compare_eth(entry->orig, bcast_packet->orig))
-			continue;
+		/* are the originators both known and not anonymous? */
+		if (orig && !is_zero_ether_addr(orig) &&
+		    !is_zero_ether_addr(entry->orig)) {
+			/* If known, check if the new frame came from
+			 * the same originator:
+			 * We are safe to take identical frames from the
+			 * same orig, if known, as multiplications in
+			 * the mesh are detected via the (orig, seqno) pair.
+			 * So we can be a bit more liberal here and allow
+			 * identical frames from the same orig which the source
+			 * host might have sent multiple times on purpose.
+			 */
+			if (batadv_compare_eth(entry->orig, orig))
+				continue;
+		}

 		/* this entry seems to match: same crc, not too old,
 		 * and from another gw. therefore return true to forbid it.
···
 	entry = &bat_priv->bla.bcast_duplist[curr];
 	entry->crc = crc;
 	entry->entrytime = jiffies;
-	ether_addr_copy(entry->orig, bcast_packet->orig);
+
+	/* known originator */
+	if (orig)
+		ether_addr_copy(entry->orig, orig);
+	/* anonymous originator */
+	else
+		eth_zero_addr(entry->orig);
+
 	bat_priv->bla.bcast_duplist_curr = curr;

out:
 	spin_unlock_bh(&bat_priv->bla.bcast_duplist_lock);

 	return ret;
+}
+
+/**
+ * batadv_bla_check_ucast_duplist() - Check if a frame is in the broadcast dup.
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: contains the multicast packet to be checked, decapsulated from a
+ *	unicast_packet
+ *
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
+ *
+ * Return: true if a packet is in the duplicate list, false otherwise.
+ */
+static bool batadv_bla_check_ucast_duplist(struct batadv_priv *bat_priv,
+					   struct sk_buff *skb)
+{
+	return batadv_bla_check_duplist(bat_priv, skb, (u8 *)skb->data, NULL);
+}
+
+/**
+ * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: contains the bcast_packet to be checked
+ *
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
+ *
+ * Return: true if a packet is in the duplicate list, false otherwise.
+ */
+bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
+				    struct sk_buff *skb)
+{
+	struct batadv_bcast_packet *bcast_packet;
+	u8 *payload_ptr;
+
+	bcast_packet = (struct batadv_bcast_packet *)skb->data;
+	payload_ptr = (u8 *)(bcast_packet + 1);
+
+	return batadv_bla_check_duplist(bat_priv, skb, payload_ptr,
+					bcast_packet->orig);
}

/**
···
 * @bat_priv: the bat priv with all the soft interface information
 * @skb: the frame to be checked
 * @vid: the VLAN ID of the frame
- * @is_bcast: the packet came in a broadcast packet type.
+ * @packet_type: the batman packet type this frame came in
 *
 * batadv_bla_rx avoidance checks if:
 *  * we have to race for a claim
···
 * further process the skb.
 */
bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
-		   unsigned short vid, bool is_bcast)
+		   unsigned short vid, int packet_type)
{
 	struct batadv_bla_backbone_gw *backbone_gw;
 	struct ethhdr *ethhdr;
···
 		goto handled;

 	if (unlikely(atomic_read(&bat_priv->bla.num_requests)))
-		/* don't allow broadcasts while requests are in flight */
-		if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast)
-			goto handled;
+		/* don't allow multicast packets while requests are in flight */
+		if (is_multicast_ether_addr(ethhdr->h_dest))
+			/* Both broadcast flooding or multicast-via-unicasts
+			 * delivery might send to multiple backbone gateways
+			 * sharing the same LAN and therefore need to coordinate
+			 * which backbone gateway forwards into the LAN,
+			 * by claiming the payload source address.
+			 *
+			 * Broadcast flooding and multicast-via-unicasts
+			 * delivery use the following two batman packet types.
+			 * Note: explicitly exclude BATADV_UNICAST_4ADDR,
+			 * as the DHCP gateway feature will send explicitly
+			 * to only one BLA gateway, so the claiming process
+			 * should be avoided there.
+			 */
+			if (packet_type == BATADV_BCAST ||
+			    packet_type == BATADV_UNICAST)
+				goto handled;
+
+	/* potential duplicates from foreign BLA backbone gateways via
+	 * multicast-in-unicast packets
+	 */
+	if (is_multicast_ether_addr(ethhdr->h_dest) &&
+	    packet_type == BATADV_UNICAST &&
+	    batadv_bla_check_ucast_duplist(bat_priv, skb))
+		goto handled;

 	ether_addr_copy(search_claim.addr, ethhdr->h_source);
 	search_claim.vid = vid;
···
 		goto allow;
 	}

-	/* if it is a broadcast ... */
-	if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) {
+	/* if it is a multicast ... */
+	if (is_multicast_ether_addr(ethhdr->h_dest) &&
+	    (packet_type == BATADV_BCAST || packet_type == BATADV_UNICAST)) {
 		/* ... drop it. the responsible gateway is in charge.
 		 *
-		 * We need to check is_bcast because with the gateway
+		 * We need to check packet type because with the gateway
 		 * feature, broadcasts (like DHCP requests) may be sent
-		 * using a unicast packet type.
+		 * using a unicast 4 address packet type. See comment above.
 		 */
 		goto handled;
 	} else {
+2-2
net/batman-adv/bridge_loop_avoidance.h
···

 #ifdef CONFIG_BATMAN_ADV_BLA
 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
-		   unsigned short vid, bool is_bcast);
+		   unsigned short vid, int packet_type);
 bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
 		   unsigned short vid);
 bool batadv_bla_is_backbone_gw(struct sk_buff *skb,
···

 static inline bool batadv_bla_rx(struct batadv_priv *bat_priv,
 				 struct sk_buff *skb, unsigned short vid,
-				 bool is_bcast)
+				 int packet_type)
{
 	return false;
}
+36-10
net/batman-adv/multicast.c
···
 #include <uapi/linux/batadv_packet.h>
 #include <uapi/linux/batman_adv.h>

+#include "bridge_loop_avoidance.h"
 #include "hard-interface.h"
 #include "hash.h"
 #include "log.h"
···
}

/**
+ * batadv_mcast_forw_send_orig() - send a multicast packet to an originator
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the multicast packet to send
+ * @vid: the vlan identifier
+ * @orig_node: the originator to send the packet to
+ *
+ * Return: NET_XMIT_DROP in case of error or NET_XMIT_SUCCESS otherwise.
+ */
+int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+				struct sk_buff *skb,
+				unsigned short vid,
+				struct batadv_orig_node *orig_node)
+{
+	/* Avoid sending multicast-in-unicast packets to other BLA
+	 * gateways - they already got the frame from the LAN side
+	 * we share with them.
+	 * TODO: Refactor to take BLA into account earlier, to avoid
+	 * reducing the mcast_fanout count.
+	 */
+	if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig, vid)) {
+		dev_kfree_skb(skb);
+		return NET_XMIT_SUCCESS;
+	}
+
+	return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0,
+				       orig_node, vid);
+}
+
+/**
 * batadv_mcast_forw_tt() - forwards a packet to multicast listeners
 * @bat_priv: the bat priv with all the soft interface information
 * @skb: the multicast packet to transmit
···
 			break;
 		}

-		batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
-					orig_entry->orig_node, vid);
+		batadv_mcast_forw_send_orig(bat_priv, newskb, vid,
+					    orig_entry->orig_node);
 	}
 	rcu_read_unlock();
···
 			break;
 		}

-		batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
-					orig_node, vid);
+		batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
 	}
 	rcu_read_unlock();
 	return ret;
···
 			break;
 		}

-		batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
-					orig_node, vid);
+		batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
 	}
 	rcu_read_unlock();
 	return ret;
···
 			break;
 		}

-		batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
-					orig_node, vid);
+		batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
 	}
 	rcu_read_unlock();
 	return ret;
···
 			break;
 		}

-		batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
-					orig_node, vid);
+		batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
 	}
 	rcu_read_unlock();
 	return ret;
net/batman-adv/routing.c
···
 	vid = batadv_get_vid(skb, hdr_len);
 	ethhdr = (struct ethhdr *)(skb->data + hdr_len);

+	/* do not reroute multicast frames in a unicast header */
+	if (is_multicast_ether_addr(ethhdr->h_dest))
+		return true;
+
 	/* check if the destination client was served by this node and it is now
 	 * roaming. In this case, it means that the node has got a ROAM_ADV
 	 * message and that it knows the new destination in the mesh to re-route
+5-6
net/batman-adv/soft-interface.c
···
 			goto dropped;
 		ret = batadv_send_skb_via_gw(bat_priv, skb, vid);
 	} else if (mcast_single_orig) {
-		ret = batadv_send_skb_unicast(bat_priv, skb,
-					      BATADV_UNICAST, 0,
-					      mcast_single_orig, vid);
+		ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid,
+						  mcast_single_orig);
 	} else if (forw_mode == BATADV_FORW_SOME) {
 		ret = batadv_mcast_forw_send(bat_priv, skb, vid);
 	} else {
···
 	struct vlan_ethhdr *vhdr;
 	struct ethhdr *ethhdr;
 	unsigned short vid;
-	bool is_bcast;
+	int packet_type;

 	batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data;
-	is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST);
+	packet_type = batadv_bcast_packet->packet_type;

 	skb_pull_rcsum(skb, hdr_size);
 	skb_reset_mac_header(skb);
···
 	/* Let the bridge loop avoidance check the packet. If will
 	 * not handle it, we can safely push it up.
 	 */
-	if (batadv_bla_rx(bat_priv, skb, vid, is_bcast))
+	if (batadv_bla_rx(bat_priv, skb, vid, packet_type))
 		goto out;

 	if (orig_node)
net/core/dst.c
···

/* Operations to mark dst as DEAD and clean up the net device referenced
 * by dst:
- * 1. put the dst under loopback interface and discard all tx/rx packets
+ * 1. put the dst under blackhole interface and discard all tx/rx packets
 *    on this route.
 * 2. release the net_device
 * This function should be called when removing routes from the fib tree
net/qrtr/qrtr.c
···
{
 	struct qrtr_hdr_v1 *hdr;
 	size_t len = skb->len;
-	int rc = -ENODEV;
-	int confirm_rx;
+	int rc, confirm_rx;

 	confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type);
 	if (confirm_rx < 0) {
···
 	hdr->size = cpu_to_le32(len);
 	hdr->confirm_rx = !!confirm_rx;

-	skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+	rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));

-	mutex_lock(&node->ep_lock);
-	if (node->ep)
-		rc = node->ep->xmit(node->ep, skb);
-	else
-		kfree_skb(skb);
-	mutex_unlock(&node->ep_lock);
-
+	if (!rc) {
+		mutex_lock(&node->ep_lock);
+		rc = -ENODEV;
+		if (node->ep)
+			rc = node->ep->xmit(node->ep, skb);
+		else
+			kfree_skb(skb);
+		mutex_unlock(&node->ep_lock);
+	}
 	/* Need to ensure that a subsequent message carries the otherwise lost
 	 * confirm_rx flag if we dropped this one */
 	if (rc && confirm_rx)
+34-10
net/sched/act_ife.c
···
 	kfree_rcu(p, rcu);
}

+static int load_metalist(struct nlattr **tb, bool rtnl_held)
+{
+	int i;
+
+	for (i = 1; i < max_metacnt; i++) {
+		if (tb[i]) {
+			void *val = nla_data(tb[i]);
+			int len = nla_len(tb[i]);
+			int rc;
+
+			rc = load_metaops_and_vet(i, val, len, rtnl_held);
+			if (rc != 0)
+				return rc;
+		}
+	}
+
+	return 0;
+}
+
static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
			     bool exists, bool rtnl_held)
{
···
 		if (tb[i]) {
 			val = nla_data(tb[i]);
 			len = nla_len(tb[i]);
-
-			rc = load_metaops_and_vet(i, val, len, rtnl_held);
-			if (rc != 0)
-				return rc;

 			rc = add_metainfo(ife, i, val, len, exists);
 			if (rc)
···
 	p = kzalloc(sizeof(*p), GFP_KERNEL);
 	if (!p)
 		return -ENOMEM;
+
+	if (tb[TCA_IFE_METALST]) {
+		err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
+						  tb[TCA_IFE_METALST], NULL,
+						  NULL);
+		if (err) {
+			kfree(p);
+			return err;
+		}
+		err = load_metalist(tb2, rtnl_held);
+		if (err) {
+			kfree(p);
+			return err;
+		}
+	}

 	index = parm->index;
 	err = tcf_idr_check_alloc(tn, &index, a, bind);
···
 	}

 	if (tb[TCA_IFE_METALST]) {
-		err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
-						  tb[TCA_IFE_METALST], NULL,
-						  NULL);
-		if (err)
-			goto metadata_parse_err;
 		err = populate_metalist(ife, tb2, exists, rtnl_held);
 		if (err)
 			goto metadata_parse_err;
-
 	} else {
 		/* if no passed metadata allow list or passed allow-all
 		 * then here we process by adding as many supported metadatum
net/tipc/link.c
···
 * tipc_link_bc_create - create new link to be used for broadcast
 * @net: pointer to associated network namespace
 * @mtu: mtu to be used initially if no peers
- * @window: send window to be used
+ * @min_win: minimal send window to be used by link
+ * @max_win: maximal send window to be used by link
 * @inputq: queue to put messages ready for delivery
 * @namedq: queue to put binding table update messages ready for delivery
 * @link: return value, pointer to put the created link
+2-1
net/tipc/msg.c
···
 	if (fragid == FIRST_FRAGMENT) {
 		if (unlikely(head))
 			goto err;
-		if (unlikely(skb_unclone(frag, GFP_ATOMIC)))
+		frag = skb_unshare(frag, GFP_ATOMIC);
+		if (unlikely(!frag))
 			goto err;
 		head = *headbuf = frag;
 		*buf = NULL;
scripts/dtc/Makefile
···
dtc-objs += dtc-lexer.lex.o dtc-parser.tab.o

# Source files need to get at the userspace version of libfdt_env.h to compile
-HOST_EXTRACFLAGS := -I $(srctree)/$(src)/libfdt
+HOST_EXTRACFLAGS += -I $(srctree)/$(src)/libfdt

ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),)
ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
+15-1
scripts/kallsyms.c
···

static bool is_ignored_symbol(const char *name, char type)
{
+	/* Symbol names that exactly match to the following are ignored.*/
 	static const char * const ignored_symbols[] = {
 		/*
 		 * Symbols which vary between passes. Passes 1 and 2 must have
···
 		NULL
 	};

+	/* Symbol names that begin with the following are ignored.*/
 	static const char * const ignored_prefixes[] = {
 		"$",			/* local symbols for ARM, MIPS, etc. */
 		".LASANPC",		/* s390 kasan local symbols */
···
 		NULL
 	};

+	/* Symbol names that end with the following are ignored.*/
 	static const char * const ignored_suffixes[] = {
 		"_from_arm",		/* arm */
 		"_from_thumb",		/* arm */
···
 		NULL
 	};

+	/* Symbol names that contain the following are ignored.*/
+	static const char * const ignored_matches[] = {
+		".long_branch.",	/* ppc stub */
+		".plt_branch.",		/* ppc stub */
+		NULL
+	};
+
 	const char * const *p;

-	/* Exclude symbols which vary between passes. */
 	for (p = ignored_symbols; *p; p++)
 		if (!strcmp(name, *p))
 			return true;
···
 		int l = strlen(name) - strlen(*p);

 		if (l >= 0 && !strcmp(name + l, *p))
+			return true;
+	}
+
+	for (p = ignored_matches; *p; p++) {
+		if (strstr(name, *p))
 			return true;
 	}
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
···
 	__u32 seq_num = ctx->meta->seq_num;
 	struct bpf_map *map = ctx->map;
 	struct key_t *key = ctx->key;
+	struct key_t tmp_key;
 	__u64 *val = ctx->value;
+	__u64 tmp_val = 0;
+	int ret;

 	if (in_test_mode) {
 		/* test mode is used by selftests to
···
 		 * size.
 		 */
 		if (key == (void *)0 || val == (void *)0)
+			return 0;
+
+		/* update the value and then delete the <key, value> pair.
+		 * it should not impact the existing 'val' which is still
+		 * accessible under rcu.
+		 */
+		__builtin_memcpy(&tmp_key, key, sizeof(struct key_t));
+		ret = bpf_map_update_elem(&hashmap1, &tmp_key, &tmp_val, 0);
+		if (ret)
+			return 0;
+		ret = bpf_map_delete_elem(&hashmap1, &tmp_key);
+		if (ret)
 			return 0;

 		key_sum_a += key->a;
tools/testing/selftests/net/rtnetlink.sh
···
 	echo "PASS: neigh get"
}

+kci_test_bridge_parent_id()
+{
+	local ret=0
+	sysfsnet=/sys/bus/netdevsim/devices/netdevsim
+	probed=false
+
+	if [ ! -w /sys/bus/netdevsim/new_device ] ; then
+		modprobe -q netdevsim
+		check_err $?
+		if [ $ret -ne 0 ]; then
+			echo "SKIP: bridge_parent_id can't load netdevsim"
+			return $ksft_skip
+		fi
+		probed=true
+	fi
+
+	echo "10 1" > /sys/bus/netdevsim/new_device
+	while [ ! -d ${sysfsnet}10 ] ; do :; done
+	echo "20 1" > /sys/bus/netdevsim/new_device
+	while [ ! -d ${sysfsnet}20 ] ; do :; done
+	udevadm settle
+	dev10=`ls ${sysfsnet}10/net/`
+	dev20=`ls ${sysfsnet}20/net/`
+
+	ip link add name test-bond0 type bond mode 802.3ad
+	ip link set dev $dev10 master test-bond0
+	ip link set dev $dev20 master test-bond0
+	ip link add name test-br0 type bridge
+	ip link set dev test-bond0 master test-br0
+	check_err $?
+
+	# clean up any leftovers
+	ip link del dev test-br0
+	ip link del dev test-bond0
+	echo 20 > /sys/bus/netdevsim/del_device
+	echo 10 > /sys/bus/netdevsim/del_device
+	$probed && rmmod netdevsim
+
+	if [ $ret -ne 0 ]; then
+		echo "FAIL: bridge_parent_id"
+		return 1
+	fi
+	echo "PASS: bridge_parent_id"
+}
+
kci_test_rtnl()
{
 	local ret=0
···
 	kci_test_fdb_get
 	check_err $?
 	kci_test_neigh_get
+	check_err $?
+	kci_test_bridge_parent_id
 	check_err $?

 	kci_del_dummy