···
 			 prevent spurious wakeup);
 			n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
 			 pause after every control message);
+			o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
+			 delay after resetting its port);
 			Example: quirks=0781:5580:bk,0a5c:5834:gij

 	usbhid.mousepoll=
+1-1
Documentation/admin-guide/pm/cpufreq.rst
···
 a governor ``sysfs`` interface to it.  Next, the governor is started by
 invoking its ``->start()`` callback.

-That callback it expected to register per-CPU utilization update callbacks for
+That callback is expected to register per-CPU utilization update callbacks for
 all of the online CPUs belonging to the given policy with the CPU scheduler.
 The utilization update callbacks will be invoked by the CPU scheduler on
 important events, like task enqueue and dequeue, on every iteration of the
+10-9
Documentation/admin-guide/security-bugs.rst
···
 The security list is not a disclosure channel. For that, see Coordination
 below.

-Once a robust fix has been developed, our preference is to release the
-fix in a timely fashion, treating it no differently than any of the other
-thousands of changes and fixes the Linux kernel project releases every
-month.
+Once a robust fix has been developed, the release process starts. Fixes
+for publicly known bugs are released immediately.

-However, at the request of the reporter, we will postpone releasing the
-fix for up to 5 business days after the date of the report or after the
-embargo has lifted; whichever comes first. The only exception to that
-rule is if the bug is publicly known, in which case the preference is to
-release the fix as soon as it's available.
+Although our preference is to release fixes for publicly undisclosed bugs
+as soon as they become available, this may be postponed at the request of
+the reporter or an affected party for up to 7 calendar days from the start
+of the release process, with an exceptional extension to 14 calendar days
+if it is agreed that the criticality of the bug requires more time. The
+only valid reason for deferring the publication of a fix is to accommodate
+the logistics of QA and large scale rollouts which require release
+coordination.

 Whilst embargoed information may be shared with trusted individuals in
 order to develop a fix, such information will not be published alongside
+41-11
Documentation/core-api/xarray.rst
···
 new entry and return the previous entry stored at that index. You can
 use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
 ``NULL`` entry. There is no difference between an entry that has never
-been stored to and one that has most recently had ``NULL`` stored to it.
+been stored to, one that has been erased and one that has most recently
+had ``NULL`` stored to it.

 You can conditionally replace an entry at an index by using
 :c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
···
 indices. Storing into one index may result in the entry retrieved by
 some, but not all of the other indices changing.

+Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+will not need to allocate memory. The :c:func:`xa_reserve` function
+will store a reserved entry at the indicated index. Users of the normal
+API will see this entry as containing ``NULL``. If you do not need to
+use the reserved entry, you can call :c:func:`xa_release` to remove the
+unused entry. If another user has stored to the entry in the meantime,
+:c:func:`xa_release` will do nothing; if instead you want the entry to
+become ``NULL``, you should use :c:func:`xa_erase`.
+
+If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+will return ``true``.
+
 Finally, you can remove all entries from an XArray by calling
 :c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
 to free the entries first. You can do this by iterating over all present
 entries in the XArray using the :c:func:`xa_for_each` iterator.

-ID assignment
--------------
+Allocating XArrays
+------------------
+
+If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+the XArray changes to track whether entries are in use or not.

 You can call :c:func:`xa_alloc` to store the entry at any unused index
 in the XArray. If you need to modify the array from interrupt context,
 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
-interrupts while allocating the ID. Unlike :c:func:`xa_store`, allocating
-a ``NULL`` pointer does not delete an entry. Instead it reserves an
-entry like :c:func:`xa_reserve` and you can release it using either
-:c:func:`xa_erase` or :c:func:`xa_release`. To use ID assignment, the
-XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
-by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+interrupts while allocating the ID.
+
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+will mark the entry as being allocated. Unlike a normal XArray, storing
+``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+you only want to free the entry if it's ``NULL``).
+
+You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+is used to track whether an entry is free or not. The other marks are
+available for your use.

 Memory allocation
 -----------------
···
 Takes xa_lock internally:
  * :c:func:`xa_store`
+ * :c:func:`xa_store_bh`
+ * :c:func:`xa_store_irq`
  * :c:func:`xa_insert`
  * :c:func:`xa_erase`
  * :c:func:`xa_erase_bh`
···
  * :c:func:`xa_alloc`
  * :c:func:`xa_alloc_bh`
  * :c:func:`xa_alloc_irq`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_reserve_bh`
+ * :c:func:`xa_reserve_irq`
  * :c:func:`xa_destroy`
  * :c:func:`xa_set_mark`
  * :c:func:`xa_clear_mark`
···
  * :c:func:`__xa_erase`
  * :c:func:`__xa_cmpxchg`
  * :c:func:`__xa_alloc`
+ * :c:func:`__xa_reserve`
  * :c:func:`__xa_set_mark`
  * :c:func:`__xa_clear_mark`
···
 using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
 in the interrupt handler. Some of the more common patterns have helper
-functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.

 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy. That does
···
  - :c:func:`xa_is_zero`
  - Zero entries appear as ``NULL`` through the Normal API, but occupy
    an entry in the XArray which can be used to reserve the index for
-   future use.
+   future use. This is used by allocating XArrays for allocated entries
+   which are ``NULL``.

 Other internal entries may be added in the future. As far as possible, they
 will be handled by :c:func:`xas_retry`.
+5-3
Documentation/cpu-freq/cpufreq-stats.txt
···
 This will give a fine grained information about all the CPU frequency
 transitions. The cat output here is a two dimensional matrix, where an entry
 <i,j> (row i, column j) represents the count of number of transitions from
-Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
-Freq_j is in descending order with increasing columns. The output here also
-contains the actual freq values for each row and column for better readability.
+Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
+which the driver has provided the frequency table initially to the cpufreq core
+and so can be sorted (ascending or descending) or unsorted. The output here
+also contains the actual freq values for each row and column for better
+readability.

 If the transition table is bigger than PAGE_SIZE, reading this will
 return an -EFBIG error.
···
-Generic ARM big LITTLE cpufreq driver's DT glue
------------------------------------------------
-
-This is DT specific glue layer for generic cpufreq driver for big LITTLE
-systems.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
-
-FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
-must be present contiguously. Generic DT driver will check only node 'x' for
-cpu:x.
-
-Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
-  for details
-
-Optional properties:
-- clock-latency: Specify the possible maximum transition latency for clock,
-  in unit of nanoseconds.
-
-Examples:
-
-cpus {
-	#address-cells = <1>;
-	#size-cells = <0>;
-
-	cpu@0 {
-		compatible = "arm,cortex-a15";
-		reg = <0>;
-		next-level-cache = <&L2>;
-		operating-points = <
-			/* kHz    uV */
-			792000  1100000
-			396000  950000
-			198000  850000
-		>;
-		clock-latency = <61036>; /* two CLK32 periods */
-	};
-
-	cpu@1 {
-		compatible = "arm,cortex-a15";
-		reg = <1>;
-		next-level-cache = <&L2>;
-	};
-
-	cpu@100 {
-		compatible = "arm,cortex-a7";
-		reg = <100>;
-		next-level-cache = <&L2>;
-		operating-points = <
-			/* kHz    uV */
-			792000  950000
-			396000  750000
-			198000  450000
-		>;
-		clock-latency = <61036>; /* two CLK32 periods */
-	};
-
-	cpu@101 {
-		compatible = "arm,cortex-a7";
-		reg = <101>;
-		next-level-cache = <&L2>;
-	};
-};
···
 - compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
 	      "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
 	      "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
+	      "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
 	      "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
 	      "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
 	      "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
···
 	      "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
 	      "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
 	      "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
+	      "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
 	      "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
 	      "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
 	      compatible device.
-	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+	      compatible device.
 	      When compatible with the generic version, nodes must list the
 	      SoC-specific version corresponding to the platform first
 	      followed by the generic version.

 - reg: physical base address and size of the R-Car CAN register map.
 - interrupts: interrupt specifier for the sole interrupt.
-- clocks: phandles and clock specifiers for 3 CAN clock inputs.
-- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+	  devices.
+	  phandles and clock specifiers for 3 CAN clock inputs for every other
+	  SoC.
+- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+	       3 clock input name strings for every other SoC: "clkp1",
+	       "clkp2", "can_clk".
 - pinctrl-0: pin control group to be used for this controller.
 - pinctrl-names: must be "default".

-Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
-compatible:
-In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
-and can be used by both CAN and CAN FD controller at the same time. It needs to
-be scaled to maximum frequency if any of these controllers use it. This is done
+Required properties for R8A7795, R8A7796 and R8A77965:
+For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
+be used by both CAN and CAN FD controller at the same time. It needs to be
+scaled to maximum frequency if any of these controllers use it. This is done
 using the below properties:

 - assigned-clocks: phandle of clkp2(CANFD) clock.
···
 Optional properties:
 - renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
 			    <0x0> (default) : Peripheral clock (clkp1)
-			    <0x1> : Peripheral clock (clkp2)
-			    <0x3> : Externally input clock
+			    <0x1> : Peripheral clock (clkp2) (not supported by
+				    RZ/G2 devices)
+			    <0x3> : External input clock

 Example
 -------
+1-1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
 Current Binding
 ---------------

-Switches are true Linux devices and can be probes by any means. Once
+Switches are true Linux devices and can be probed by any means. Once
 probed, they register to the DSA framework, passing a node
 pointer. This node is expected to fulfil the following binding, and
 may contain additional properties as required by the device it is
+1-10
Documentation/input/event-codes.rst
···
 * REL_WHEEL, REL_HWHEEL:

   - These codes are used for vertical and horizontal scroll wheels,
-    respectively. The value is the number of "notches" moved on the wheel, the
-    physical size of which varies by device. For high-resolution wheels (which
-    report multiple events for each notch of movement, or do not have notches)
-    this may be an approximation based on the high-resolution scroll events.
-
-* REL_WHEEL_HI_RES:
-
-  - If a vertical scroll wheel supports high-resolution scrolling, this code
-    will be emitted in addition to REL_WHEEL. The value is the (approximate)
-    distance travelled by the user's finger, in microns.
+    respectively.

 EV_ABS
 ------
+1-1
Documentation/media/uapi/v4l/dev-meta.rst
···
 the desired operation. Both drivers and applications must set the remainder of
 the :c:type:`v4l2_format` structure to 0.

-.. _v4l2-meta-format:
+.. c:type:: v4l2_meta_format

 .. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|
+5
Documentation/media/uapi/v4l/vidioc-g-fmt.rst
···
       - Definition of a data format, see :ref:`pixfmt`, used by SDR
 	capture and output devices.
     * -
+      - struct :c:type:`v4l2_meta_format`
+      - ``meta``
+      - Definition of a metadata format, see :ref:`meta-formats`, used by
+	metadata capture devices.
+    * -
       - __u8
       - ``raw_data``\ [200]
       - Place holder for future extensions.
+11-6
Documentation/networking/rxrpc.txt
···

 	u32 rxrpc_kernel_check_life(struct socket *sock,
 				    struct rxrpc_call *call);
+	void rxrpc_kernel_probe_life(struct socket *sock,
+				     struct rxrpc_call *call);

-    This returns a number that is updated when ACKs are received from the peer
-    (notably including PING RESPONSE ACKs which we can elicit by sending PING
-    ACKs to see if the call still exists on the server). The caller should
-    compare the numbers of two calls to see if the call is still alive after
-    waiting for a suitable interval.
+    The first function returns a number that is updated when ACKs are received
+    from the peer (notably including PING RESPONSE ACKs which we can elicit by
+    sending PING ACKs to see if the call still exists on the server). The
+    caller should compare the numbers of two calls to see if the call is still
+    alive after waiting for a suitable interval.

     This allows the caller to work out if the server is still contactable and
     if the call is still alive on the server whilst waiting for the server to
     process a client operation.

-    This function may transmit a PING ACK.
+    The second function causes a ping ACK to be transmitted to try to provoke
+    the peer into responding, which would then cause the value returned by the
+    first function to change. Note that this must be called in TASK_RUNNING
+    state.

 (*) Get reply timestamp.
···
 /*
  * Don't change this structure - ASM code relies on it.
  */
-extern struct processor {
+struct processor {
 	/* MISC
 	 * get data abort address/flags
 	 */
···
 	unsigned int suspend_size;
 	void (*do_suspend)(void *);
 	void (*do_resume)(void *);
-} processor;
+};

 #ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
 extern void cpu_proc_init(void);
 extern void cpu_proc_fin(void);
 extern int cpu_do_idle(void);
···
 extern void cpu_do_suspend(void *);
 extern void cpu_do_resume(void *);
 #else
-#define cpu_proc_init			processor._proc_init
-#define cpu_proc_fin			processor._proc_fin
-#define cpu_reset			processor.reset
-#define cpu_do_idle			processor._do_idle
-#define cpu_dcache_clean_area		processor.dcache_clean_area
-#define cpu_set_pte_ext			processor.set_pte_ext
-#define cpu_do_switch_mm		processor.switch_mm

-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend			processor.do_suspend
-#define cpu_do_resume			processor.do_resume
+extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised. We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f)	cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f)	cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	unsigned int cpu = smp_processor_id();
+	*cpu_vtable[cpu] = *p;
+	WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+		     cpu_vtable[0]->dcache_clean_area);
+	WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+		     cpu_vtable[0]->set_pte_ext);
+}
+#else
+#define PROC_VTABLE(f)	processor.f
+#define PROC_TABLE(f)	processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	processor = *p;
+}
+#endif
+
+#define cpu_proc_init			PROC_VTABLE(_proc_init)
+#define cpu_check_bugs			PROC_VTABLE(check_bugs)
+#define cpu_proc_fin			PROC_VTABLE(_proc_fin)
+#define cpu_reset			PROC_VTABLE(reset)
+#define cpu_do_idle			PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area		PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext			PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm		PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend			PROC_VTABLE(do_suspend)
+#define cpu_do_resume			PROC_VTABLE(do_resume)
 #endif

 extern void cpu_resume(void);
+2-2
arch/arm/kernel/bugs.c
···
 void check_other_bugs(void)
 {
 #ifdef MULTI_CPU
-	if (processor.check_bugs)
-		processor.check_bugs();
+	if (cpu_check_bugs)
+		cpu_check_bugs();
 #endif
 }
+3-3
arch/arm/kernel/head-common.S
···
 #endif
 	.size	__mmap_switched_data, . - __mmap_switched_data

+	__FINIT
+	.text
+
 /*
  * This provides a C-API version of __lookup_processor_type
  */
···
 	mov	r0, r5
 	ldmfd	sp!, {r4 - r6, r9, pc}
 ENDPROC(lookup_processor_type)
-
-	__FINIT
-	.text

 /*
  * Read processor ID register (CP#15, CR0), and look up in the linker-built
+27-17
arch/arm/kernel/setup.c
···

 #ifdef MULTI_CPU
 struct processor processor __ro_after_init;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+	[0] = &processor,
+};
+#endif
 #endif
 #ifdef MULTI_TLB
 struct cpu_tlb_fns cpu_tlb __ro_after_init;
···
 }
 #endif

+/*
+ * locate processor in the list of supported processor types. The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
+{
+	struct proc_info_list *list = lookup_processor_type(midr);
+
+	if (!list) {
+		pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+		       smp_processor_id(), midr);
+		while (1)
+			/* can't use cpu_relax() here as it may require MMU setup */;
+	}
+
+	return list;
+}
+
 static void __init setup_processor(void)
 {
-	struct proc_info_list *list;
-
-	/*
-	 * locate processor in the list of supported processor
-	 * types. The linker builds this table for us from the
-	 * entries in arch/arm/mm/proc-*.S
-	 */
-	list = lookup_processor_type(read_cpuid_id());
-	if (!list) {
-		pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
-		       read_cpuid_id());
-		while (1);
-	}
+	unsigned int midr = read_cpuid_id();
+	struct proc_info_list *list = lookup_processor(midr);

 	cpu_name = list->cpu_name;
 	__cpu_architecture = __get_cpu_architecture();

-#ifdef MULTI_CPU
-	processor = *list->proc;
-#endif
+	init_proc_vtable(list->proc);
 #ifdef MULTI_TLB
 	cpu_tlb = *list->tlb;
 #endif
···
 #endif

 	pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
-		cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+		list->cpu_name, midr, midr & 15,
 		proc_arch[cpu_architecture()], get_cr());

 	snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
+31
arch/arm/kernel/smp.c
···
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
+#include <asm/procinfo.h>
 #include <asm/processor.h>
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
···
 #endif
 }

+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	if (!cpu_vtable[cpu])
+		cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+	return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+	init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
 int __cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;

 	if (!smp_ops.smp_boot_secondary)
 		return -ENOSYS;
+
+	ret = secondary_biglittle_prepare(cpu);
+	if (ret)
+		return ret;

 	/*
 	 * We need to tell the secondary core where to find
···
 {
 	struct mm_struct *mm = &init_mm;
 	unsigned int cpu;
+
+	secondary_biglittle_init();

 	/*
 	 * The identity mapping is uncached (strongly ordered), so
+53-58
arch/arm/mach-omap2/display.c
···

 	return 0;
 }
-#else
-static inline int omapdss_init_fbdev(void)
+
+static const char * const omapdss_compat_names[] __initconst = {
+	"ti,omap2-dss",
+	"ti,omap3-dss",
+	"ti,omap4-dss",
+	"ti,omap5-dss",
+	"ti,dra7-dss",
+};
+
+static struct device_node * __init omapdss_find_dss_of_node(void)
 {
-	return 0;
+	struct device_node *node;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
+		node = of_find_compatible_node(NULL, NULL,
+					       omapdss_compat_names[i]);
+		if (node)
+			return node;
+	}
+
+	return NULL;
 }
+
+static int __init omapdss_init_of(void)
+{
+	int r;
+	struct device_node *node;
+	struct platform_device *pdev;
+
+	/* only create dss helper devices if dss is enabled in the .dts */
+
+	node = omapdss_find_dss_of_node();
+	if (!node)
+		return 0;
+
+	if (!of_device_is_available(node))
+		return 0;
+
+	pdev = of_find_device_by_node(node);
+
+	if (!pdev) {
+		pr_err("Unable to find DSS platform device\n");
+		return -ENODEV;
+	}
+
+	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+	if (r) {
+		pr_err("Unable to populate DSS submodule devices\n");
+		return r;
+	}
+
+	return omapdss_init_fbdev();
+}
+omap_device_initcall(omapdss_init_of);
 #endif /* CONFIG_FB_OMAP2 */

 static void dispc_disable_outputs(void)
···

 	return r;
 }
-
-static const char * const omapdss_compat_names[] __initconst = {
-	"ti,omap2-dss",
-	"ti,omap3-dss",
-	"ti,omap4-dss",
-	"ti,omap5-dss",
-	"ti,dra7-dss",
-};
-
-static struct device_node * __init omapdss_find_dss_of_node(void)
-{
-	struct device_node *node;
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
-		node = of_find_compatible_node(NULL, NULL,
-					       omapdss_compat_names[i]);
-		if (node)
-			return node;
-	}
-
-	return NULL;
-}
-
-static int __init omapdss_init_of(void)
-{
-	int r;
-	struct device_node *node;
-	struct platform_device *pdev;
-
-	/* only create dss helper devices if dss is enabled in the .dts */
-
-	node = omapdss_find_dss_of_node();
-	if (!node)
-		return 0;
-
-	if (!of_device_is_available(node))
-		return 0;
-
-	pdev = of_find_device_by_node(node);
-
-	if (!pdev) {
-		pr_err("Unable to find DSS platform device\n");
-		return -ENODEV;
-	}
-
-	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
-	if (r) {
-		pr_err("Unable to populate DSS submodule devices\n");
-		return r;
-	}
-
-	return omapdss_init_fbdev();
-}
-omap_device_initcall(omapdss_init_of);
+2-15
arch/arm/mm/proc-v7-bugs.c
···
 	case ARM_CPU_PART_CORTEX_A17:
 	case ARM_CPU_PART_CORTEX_A73:
 	case ARM_CPU_PART_CORTEX_A75:
-		if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_bpiall;
 		spectre_v2_method = "BPIALL";
···
 	case ARM_CPU_PART_CORTEX_A15:
 	case ARM_CPU_PART_BRAHMA_B15:
-		if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_iciallu;
 		spectre_v2_method = "ICIALLU";
···
 			      ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_hvc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_hvc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
 		spectre_v2_method = "hypervisor";
 		break;
···
 			      ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_smc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_smc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_smc_switch_mm;
 		spectre_v2_method = "firmware";
 		break;
···
 	if (spectre_v2_method)
 		pr_info("CPU%u: Spectre v2: using %s workaround\n",
 			smp_processor_id(), spectre_v2_method);
-	return;
-
-bl_error:
-	pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
-	       cpu);
 }
 #else
 static void cpu_v7_spectre_init(void)
···
 CONFIG_RTC_DRV_DS1307=y
 CONFIG_STAGING=y
 CONFIG_OCTEON_ETHERNET=y
+CONFIG_OCTEON_USB=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_RAS=y
 CONFIG_EXT4_FS=y
+1
arch/mips/kernel/setup.c
···

 	/* call board setup routine */
 	plat_mem_setup();
+	memblock_set_bottom_up(true);

 	/*
 	 * Make sure all kernel memory is in the maps. The "UP" and
+1-2
arch/mips/kernel/traps.c
···
 	unsigned long size = 0x200 + VECTORSPACING*64;
 	phys_addr_t ebase_pa;

-	memblock_set_bottom_up(true);
 	ebase = (unsigned long)
 		memblock_alloc_from(size, 1 << fls(size), 0);
-	memblock_set_bottom_up(false);

 	/*
 	 * Try to ensure ebase resides in KSeg0 if possible.
···
 	if (board_ebase_setup)
 		board_ebase_setup();
 	per_cpu_trap_init(true);
+	memblock_set_bottom_up(false);

 	/*
 	 * Copy the generic exception handlers to their final destination.
+2-10
arch/mips/loongson64/loongson-3/numa.c
···
 			cpumask_clear(&__node_data[(node)]->cpumask);
 		}
 	}
+	max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
+
 	for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
 		node = cpu / loongson_sysconf.cores_per_node;
 		if (node >= num_online_nodes())
···

 void __init paging_init(void)
 {
-	unsigned node;
 	unsigned long zones_size[MAX_NR_ZONES] = {0, };

 	pagetable_init();
-
-	for_each_online_node(node) {
-		unsigned long start_pfn, end_pfn;
-
-		get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
-
-		if (end_pfn > max_low_pfn)
-			max_low_pfn = end_pfn;
-	}
 #ifdef CONFIG_ZONE_DMA32
 	zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
 #endif
+1-10
arch/mips/sgi-ip27/ip27-memory.c
···

 	mlreset();
 	szmem();
+	max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());

 	for (node = 0; node < MAX_COMPACT_NODES; node++) {
 		if (node_online(node)) {
···
 void __init paging_init(void)
 {
 	unsigned long zones_size[MAX_NR_ZONES] = {0, };
-	unsigned node;

 	pagetable_init();
-
-	for_each_online_node(node) {
-		unsigned long start_pfn, end_pfn;
-
-		get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
-
-		if (end_pfn > max_low_pfn)
-			max_low_pfn = end_pfn;
-	}
 	zones_size[ZONE_NORMAL] = max_low_pfn;
 	free_area_init_nodes(zones_size);
 }
+2-2
arch/parisc/include/asm/spinlock.h
···
 	volatile unsigned int *a;

 	a = __ldcw_align(x);
-	/* Release with ordered store. */
-	__asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
+	mb();
+	*a = 1;
 }

 static inline int arch_spin_trylock(arch_spinlock_t *x)
···
  * their hooks, a bitfield is reserved for use by the platform near the
  * top of MMIO addresses (not PIO, those have to cope the hard way).
  *
- * This bit field is 12 bits and is at the top of the IO virtual
- * addresses PCI_IO_INDIRECT_TOKEN_MASK.
+ * The highest addresses in the kernel virtual space are:
  *
- * The kernel virtual space is thus:
+ *  d0003fffffffffff	# with Hash MMU
+ *  c00fffffffffffff	# with Radix MMU
  *
- *  0xD000000000000000 : vmalloc
- *  0xD000080000000000 : PCI PHB IO space
- *  0xD000080080000000 : ioremap
- *  0xD0000fffffffffff : end of ioremap region
- *
- * Since the top 4 bits are reserved as the region ID, we use thus
- * the next 12 bits and keep 4 bits available for the future if the
- * virtual address space is ever to be extended.
+ * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
+ * that can be used for the field.
  *
  * The direct IO mapping operations will then mask off those bits
  * before doing the actual access, though that only happen when
···
  */

 #ifdef CONFIG_PPC_INDIRECT_MMIO
-#define PCI_IO_IND_TOKEN_MASK	0x0fff000000000000ul
-#define PCI_IO_IND_TOKEN_SHIFT	48
+#define PCI_IO_IND_TOKEN_SHIFT	52
+#define PCI_IO_IND_TOKEN_MASK	(0xfful << PCI_IO_IND_TOKEN_SHIFT)
 #define PCI_FIX_ADDR(addr) \
 	((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
 #define PCI_GET_ADDR_TOKEN(addr) \
···11+#22+# arch/riscv/boot/Makefile33+#44+# This file is included by the global makefile so that you can add your own55+# architecture-specific flags and dependencies.66+#77+# This file is subject to the terms and conditions of the GNU General Public88+# License. See the file "COPYING" in the main directory of this archive99+# for more details.1010+#1111+# Copyright (C) 2018, Anup Patel.1212+# Author: Anup Patel <anup@brainfault.org>1313+#1414+# Based on the ia64 and arm64 boot/Makefile.1515+#1616+1717+OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S1818+1919+targets := Image2020+2121+$(obj)/Image: vmlinux FORCE2222+ $(call if_changed,objcopy)2323+2424+$(obj)/Image.gz: $(obj)/Image FORCE2525+ $(call if_changed,gzip)2626+2727+install:2828+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \2929+ $(obj)/Image System.map "$(INSTALL_PATH)"3030+3131+zinstall:3232+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \3333+ $(obj)/Image.gz System.map "$(INSTALL_PATH)"
+60
arch/riscv/boot/install.sh
···11+#!/bin/sh22+#33+# arch/riscv/boot/install.sh44+#55+# This file is subject to the terms and conditions of the GNU General Public66+# License. See the file "COPYING" in the main directory of this archive77+# for more details.88+#99+# Copyright (C) 1995 by Linus Torvalds1010+#1111+# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin1212+# Adapted from code in arch/i386/boot/install.sh by Russell King1313+#1414+# "make install" script for the RISC-V Linux port1515+#1616+# Arguments:1717+# $1 - kernel version1818+# $2 - kernel image file1919+# $3 - kernel map file2020+# $4 - default install path (blank if root directory)2121+#2222+2323+verify () {2424+ if [ ! -f "$1" ]; then2525+ echo "" 1>&22626+ echo " *** Missing file: $1" 1>&22727+ echo ' *** You need to run "make" before "make install".' 1>&22828+ echo "" 1>&22929+ exit 13030+ fi3131+}3232+3333+# Make sure the files actually exist3434+verify "$2"3535+verify "$3"3636+3737+# User may have a custom install script3838+if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi3939+if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi4040+4141+if [ "$(basename $2)" = "Image.gz" ]; then4242+# Compressed install4343+ echo "Installing compressed kernel"4444+ base=vmlinuz4545+else4646+# Normal install4747+ echo "Installing normal kernel"4848+ base=vmlinux4949+fi5050+5151+if [ -f $4/$base-$1 ]; then5252+ mv $4/$base-$1 $4/$base-$1.old5353+fi5454+cat $2 > $4/$base-$15555+5656+# Install system map file5757+if [ -f $4/System.map-$1 ]; then5858+ mv $4/System.map-$1 $4/System.map-$1.old5959+fi6060+cp $3 $4/System.map-$1
+1
arch/riscv/configs/defconfig
···7676CONFIG_NFS_V4_2=y7777CONFIG_ROOT_NFS=y7878CONFIG_CRYPTO_USER_API_HASH=y7979+CONFIG_PRINTK_TIME=y7980# CONFIG_RCU_TRACE is not set
···5656 unsigned long sstatus;5757 unsigned long sbadaddr;5858 unsigned long scause;5959- /* a0 value before the syscall */6060- unsigned long orig_a0;5959+ /* a0 value before the syscall */6060+ unsigned long orig_a0;6161};62626363#ifdef CONFIG_64BIT
···13131414/*1515 * There is explicitly no include guard here because this file is expected to1616- * be included multiple times. See uapi/asm/syscalls.h for more info.1616+ * be included multiple times.1717 */18181919-#define __ARCH_WANT_NEW_STAT2019#define __ARCH_WANT_SYS_CLONE2020+2121#include <uapi/asm/unistd.h>2222-#include <uapi/asm/syscalls.h>
···11-/* SPDX-License-Identifier: GPL-2.0 */11+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */22/*33- * Copyright (C) 2017-2018 SiFive33+ * Copyright (C) 2018 David Abdurachmanov <david.abdurachmanov@gmail.com>44+ *55+ * This program is free software; you can redistribute it and/or modify66+ * it under the terms of the GNU General Public License version 2 as77+ * published by the Free Software Foundation.88+ *99+ * This program is distributed in the hope that it will be useful,1010+ * but WITHOUT ANY WARRANTY; without even the implied warranty of1111+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1212+ * GNU General Public License for more details.1313+ *1414+ * You should have received a copy of the GNU General Public License1515+ * along with this program. If not, see <http://www.gnu.org/licenses/>.416 */51766-/*77- * There is explicitly no include guard here because this file is expected to88- * be included multiple times in order to define the syscall macros via99- * __SYSCALL.1010- */1818+#ifdef __LP64__1919+#define __ARCH_WANT_NEW_STAT2020+#endif /* __LP64__ */2121+2222+#include <asm-generic/unistd.h>11231224/*1325 * Allows the instruction cache to be flushed from userspace. Despite RISC-V
+6-3
arch/riscv/kernel/cpu.c
···64646565static void print_isa(struct seq_file *f, const char *orig_isa)6666{6767- static const char *ext = "mafdc";6767+ static const char *ext = "mafdcsu";6868 const char *isa = orig_isa;6969 const char *e;7070···8888 /*8989 * Check the rest of the ISA string for valid extensions, printing those9090 * we find. RISC-V ISA strings define an order, so we only print the9191- * extension bits when they're in order.9191+ * extension bits when they're in order. Hide the supervisor (S)9292+ * extension from userspace as it's not accessible from there.9293 */9394 for (e = ext; *e != '\0'; ++e) {9495 if (isa[0] == e[0]) {9595- seq_write(f, isa, 1);9696+ if (isa[0] != 's')9797+ seq_write(f, isa, 1);9898+9699 isa++;97100 }98101 }
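The cpu.c hunk above extends the ordered extension list to "mafdcsu" and hides the supervisor `s` extension from /proc/cpuinfo. A standalone sketch of that filtering logic; `filter_isa()` is a hypothetical helper, not the kernel function:

```c
#include <assert.h>
#include <string.h>

/*
 * Walk the ordered extension list "mafdcsu" and copy each extension
 * letter found in the ISA string, skipping 's' (supervisor) so it is
 * not reported to userspace. Approximation of print_isa() above.
 */
static void filter_isa(const char *isa, char *out)
{
	static const char *ext = "mafdcsu";
	const char *e;

	for (e = ext; *e != '\0'; ++e) {
		if (isa[0] == e[0]) {
			if (isa[0] != 's')
				*out++ = isa[0];
			isa++;
		}
	}
	*out = '\0';
}
```

Because the ISA string defines an order, a single forward pass over both strings is enough; out-of-order extensions are simply not emitted.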
+10
arch/riscv/kernel/head.S
···4444 amoadd.w a3, a2, (a3)4545 bnez a3, .Lsecondary_start46464747+ /* Clear BSS for flat non-ELF images */4848+ la a3, __bss_start4949+ la a4, __bss_stop5050+ ble a4, a3, clear_bss_done5151+clear_bss:5252+ REG_S zero, (a3)5353+ add a3, a3, RISCV_SZPTR5454+ blt a3, a4, clear_bss5555+clear_bss_done:5656+4757 /* Save hart ID and DTB physical address */4858 mv s0, a04959 mv s1, a1
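The head.S hunk above zeroes the BSS for flat non-ELF images with a word-at-a-time store loop. A C equivalent of that loop, with a hypothetical buffer standing in for the linker-provided `__bss_start`/`__bss_stop` bounds:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Zero the half-open range [start, stop) one register-sized store at
 * a time -- what the REG_S/add/blt sequence in clear_bss does. The
 * initial "ble a4, a3, clear_bss_done" guard corresponds to the loop
 * condition handling an empty range.
 */
static void clear_bss_range(uintptr_t *start, uintptr_t *stop)
{
	while (start < stop) {
		*start = 0;	/* REG_S zero, (a3) */
		start++;	/* add a3, a3, RISCV_SZPTR */
	}
}
```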
+6-6
arch/riscv/kernel/module.c
···2121{2222 if (v != (u32)v) {2323 pr_err("%s: value %016llx out of range for 32-bit field\n",2424- me->name, v);2424+ me->name, (long long)v);2525 return -EINVAL;2626 }2727 *location = v;···102102 if (offset != (s32)offset) {103103 pr_err(104104 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",105105- me->name, v, location);105105+ me->name, (long long)v, location);106106 return -EINVAL;107107 }108108···144144 if (IS_ENABLED(CMODEL_MEDLOW)) {145145 pr_err(146146 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",147147- me->name, v, location);147147+ me->name, (long long)v, location);148148 return -EINVAL;149149 }150150···188188 } else {189189 pr_err(190190 "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",191191- me->name, v, location);191191+ me->name, (long long)v, location);192192 return -EINVAL;193193 }194194···212212 } else {213213 pr_err(214214 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",215215- me->name, v, location);215215+ me->name, (long long)v, location);216216 return -EINVAL;217217 }218218 }···234234 if (offset != fill_v) {235235 pr_err(236236 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",237237- me->name, v, location);237237+ me->name, (long long)v, location);238238 return -EINVAL;239239 }240240
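The module.c hunks above cast `v` to `(long long)` before printing it with `%llx`: the value's underlying type is `unsigned long` on riscv64, and passing that to a `%llx` conversion without a cast draws `-Wformat` warnings even though both types are 64 bits wide. A minimal sketch of the pattern with a hypothetical helper:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a 64-bit address for an error message. The (long long) cast
 * matches the %llx length modifier regardless of whether the platform
 * defines the value as unsigned long or unsigned long long.
 */
static int format_target(char *buf, size_t len, unsigned long v)
{
	return snprintf(buf, len, "target %016llx", (long long)v);
}
```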
···1515#define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC 0x19101616#define PCI_DEVICE_ID_INTEL_SKL_SD_IMC 0x190f1717#define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC 0x191f1818+#define PCI_DEVICE_ID_INTEL_KBL_Y_IMC 0x590c1919+#define PCI_DEVICE_ID_INTEL_KBL_U_IMC 0x59042020+#define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC 0x59142121+#define PCI_DEVICE_ID_INTEL_KBL_SD_IMC 0x590f2222+#define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC 0x591f2323+#define PCI_DEVICE_ID_INTEL_CFL_2U_IMC 0x3ecc2424+#define PCI_DEVICE_ID_INTEL_CFL_4U_IMC 0x3ed02525+#define PCI_DEVICE_ID_INTEL_CFL_4H_IMC 0x3e102626+#define PCI_DEVICE_ID_INTEL_CFL_6H_IMC 0x3ec42727+#define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC 0x3e0f2828+#define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC 0x3e1f2929+#define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC 0x3ec23030+#define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC 0x3e303131+#define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC 0x3e183232+#define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC 0x3ec63333+#define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC 0x3e313434+#define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC 0x3e333535+#define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC 0x3eca3636+#define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC 0x3e3218371938/* SNB event control */2039#define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff···221202 wrmsrl(SKL_UNC_PERF_GLOBAL_CTL,222203 SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL);223204 }205205+206206+ /* The 8th CBOX has different MSR space */207207+ if (box->pmu->pmu_idx == 7)208208+ __set_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags);224209}225210226211static void skl_uncore_msr_enable_box(struct intel_uncore_box *box)···251228static struct intel_uncore_type skl_uncore_cbox = {252229 .name = "cbox",253230 .num_counters = 4,254254- .num_boxes = 5,231231+ .num_boxes = 8,255232 .perf_ctr_bits = 44,256233 .fixed_ctr_bits = 48,257234 .perf_ctr = SNB_UNC_CBO_0_PER_CTR0,···592569 PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC),593570 .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),594571 },595595-572572+ { /* IMC */
573573+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC),574574+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),575575+ },576576+ { /* IMC */577577+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC),578578+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),579579+ },580580+ { /* IMC */581581+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC),582582+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),583583+ },584584+ { /* IMC */585585+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC),586586+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),587587+ },588588+ { /* IMC */589589+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC),590590+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),591591+ },592592+ { /* IMC */593593+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC),594594+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),595595+ },596596+ { /* IMC */597597+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC),598598+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),599599+ },600600+ { /* IMC */601601+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC),602602+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),603603+ },604604+ { /* IMC */605605+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC),606606+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),607607+ },608608+ { /* IMC */609609+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC),610610+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),611611+ },612612+ { /* IMC */613613+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC),614614+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),615615+ },616616+ { /* IMC */617617+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC),618618+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),619619+ },
620620+ { /* IMC */621621+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC),622622+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),623623+ },624624+ { /* IMC */625625+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC),626626+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),627627+ },628628+ { /* IMC */629629+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC),630630+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),631631+ },632632+ { /* IMC */633633+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC),634634+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),635635+ },636636+ { /* IMC */637637+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC),638638+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),639639+ },640640+ { /* IMC */641641+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC),642642+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),643643+ },644644+ { /* IMC */645645+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC),646646+ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),647647+ },596648 { /* end: all zeroes */ },597649};598650···716618 IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */717619 IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */718620 IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */621621+ IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver), /* 7th Gen Core Y */622622+ IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U */623623+ IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U Quad Core */624624+ IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Dual Core */625625+ IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Quad Core */626626+ IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 2 Cores */627627+ IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 4 Cores */
628628+ IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 4 Cores */629629+ IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 6 Cores */630630+ IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 2 Cores Desktop */631631+ IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Desktop */632632+ IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Desktop */633633+ IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Desktop */634634+ IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Work Station */635635+ IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Work Station */636636+ IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Work Station */637637+ IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Server */638638+ IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Server */639639+ IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Server */719640 { /* end marker */ }720641};721642
+5-1
arch/xtensa/include/asm/processor.h
···2323# error Linux requires the Xtensa Windowed Registers Option.2424#endif25252626-#define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH2626+/* Xtensa ABI requires stack alignment to be at least 16 */2727+2828+#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)2929+3030+#define ARCH_SLAB_MINALIGN STACK_ALIGN27312832/*2933 * User space process size: 1 GB.
···798798 * dispatch may still be in-progress since we dispatch requests799799 * from more than one contexts.800800 *801801- * No need to quiesce queue if it isn't initialized yet since802802- * blk_freeze_queue() should be enough for cases of passthrough803803- * request.801801+ * We rely on driver to deal with the race in case that queue802802+ * initialization isn't done.804803 */805804 if (q->mq_ops && blk_queue_init_done(q))806805 blk_mq_quiesce_queue(q);
···512512513513config XPOWER_PMIC_OPREGION514514 bool "ACPI operation region support for XPower AXP288 PMIC"515515- depends on MFD_AXP20X_I2C && IOSF_MBI515515+ depends on MFD_AXP20X_I2C && IOSF_MBI=y516516 help517517 This config adds ACPI operation region support for XPower AXP288 PMIC.518518
···29282928 return rc;2929292929302930 if (ars_status_process_records(acpi_desc))29312931- return -ENOMEM;29312931+ dev_err(acpi_desc->dev, "Failed to process ARS records\n");2932293229332933- return 0;29332933+ return rc;29342934}2935293529362936static int ars_register(struct acpi_nfit_desc *acpi_desc,···33413341 struct nvdimm *nvdimm, unsigned int cmd)33423342{33433343 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);33443344- struct nfit_spa *nfit_spa;33453345- int rc = 0;3346334433473345 if (nvdimm)33483346 return 0;···33533355 * just needs guarantees that any ARS it initiates are not33543356 * interrupted by any intervening start requests from userspace.33553357 */33563356- mutex_lock(&acpi_desc->init_mutex);33573357- list_for_each_entry(nfit_spa, &acpi_desc->spas, list)33583358- if (acpi_desc->scrub_spa33593359- || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state)33603360- || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) {33613361- rc = -EBUSY;33623362- break;33633363- }33643364- mutex_unlock(&acpi_desc->init_mutex);33583358+ if (work_busy(&acpi_desc->dwork.work))33593359+ return -EBUSY;3365336033663366- return rc;33613361+ return 0;33673362}3368336333693364int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+1-1
drivers/ata/libata-core.c
···45534553 /* These specific Samsung models/firmware-revs do not handle LPM well */45544554 { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },45554555 { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },45564556- { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },45564556+ { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, },4557455745584558 /* devices that don't properly handle queued TRIM commands */45594559 { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
···160160 /* Ensure the arm clock divider is what we expect */161161 ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);162162 if (ret) {163163+ int ret1;164164+163165 dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);164164- regulator_set_voltage_tol(arm_reg, volt_old, 0);166166+ ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0);167167+ if (ret1)168168+ dev_warn(cpu_dev,169169+ "failed to restore vddarm voltage: %d\n", ret1);165170 return ret;166171 }167172
+21-5
drivers/cpufreq/ti-cpufreq.c
···201201 {},202202};203203204204+static const struct of_device_id *ti_cpufreq_match_node(void)205205+{206206+ struct device_node *np;207207+ const struct of_device_id *match;208208+209209+ np = of_find_node_by_path("/");210210+ match = of_match_node(ti_cpufreq_of_match, np);211211+ of_node_put(np);212212+213213+ return match;214214+}215215+204216static int ti_cpufreq_probe(struct platform_device *pdev)205217{206218 u32 version[VERSION_COUNT];207207- struct device_node *np;208219 const struct of_device_id *match;209220 struct opp_table *ti_opp_table;210221 struct ti_cpufreq_data *opp_data;211222 const char * const reg_names[] = {"vdd", "vbb"};212223 int ret;213224214214- np = of_find_node_by_path("/");215215- match = of_match_node(ti_cpufreq_of_match, np);216216- of_node_put(np);225225+ match = dev_get_platdata(&pdev->dev);217226 if (!match)218227 return -ENODEV;219228···299290300291static int ti_cpufreq_init(void)301292{302302- platform_device_register_simple("ti-cpufreq", -1, NULL, 0);293293+ const struct of_device_id *match;294294+295295+ /* Check to ensure we are on a compatible platform */296296+ match = ti_cpufreq_match_node();297297+ if (match)298298+ platform_device_register_data(NULL, "ti-cpufreq", -1, match,299299+ sizeof(*match));300300+303301 return 0;304302}305303module_init(ti_cpufreq_init);
+7-33
drivers/cpuidle/cpuidle-arm.c
···8282{8383 int ret;8484 struct cpuidle_driver *drv;8585- struct cpuidle_device *dev;86858786 drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);8887 if (!drv)···102103 goto out_kfree_drv;103104 }104105105105- ret = cpuidle_register_driver(drv);106106- if (ret) {107107- if (ret != -EBUSY)108108- pr_err("Failed to register cpuidle driver\n");109109- goto out_kfree_drv;110110- }111111-112106 /*113107 * Call arch CPU operations in order to initialize114108 * idle states suspend back-end specific data···109117 ret = arm_cpuidle_init(cpu);110118111119 /*112112- * Skip the cpuidle device initialization if the reported120120+ * Allow the initialization to continue for other CPUs, if the reported113121 * failure is a HW misconfiguration/breakage (-ENXIO).114122 */115115- if (ret == -ENXIO)116116- return 0;117117-118123 if (ret) {119124 pr_err("CPU %d failed to init idle CPU ops\n", cpu);120120- goto out_unregister_drv;125125+ ret = ret == -ENXIO ? 0 : ret;126126+ goto out_kfree_drv;121127 }122128123123- dev = kzalloc(sizeof(*dev), GFP_KERNEL);124124- if (!dev) {125125- ret = -ENOMEM;126126- goto out_unregister_drv;127127- }128128- dev->cpu = cpu;129129-130130- ret = cpuidle_register_device(dev);131131- if (ret) {132132- pr_err("Failed to register cpuidle device for CPU %d\n",133133- cpu);134134- goto out_kfree_dev;135135- }129129+ ret = cpuidle_register(drv, NULL);130130+ if (ret)131131+ goto out_kfree_drv;136132137133 return 0;138134139139-out_kfree_dev:140140- kfree(dev);141141-out_unregister_drv:142142- cpuidle_unregister_driver(drv);143135out_kfree_drv:144136 kfree(drv);145137 return ret;···154178 while (--cpu >= 0) {155179 dev = per_cpu(cpuidle_devices, cpu);156180 drv = cpuidle_get_cpu_driver(dev);157157- cpuidle_unregister_device(dev);158158- cpuidle_unregister_driver(drv);159159- kfree(dev);181181+ cpuidle_unregister(drv);160182 kfree(drv);161183 }162184
+17-14
drivers/crypto/hisilicon/sec/sec_algs.c
···732732 int *splits_in_nents;733733 int *splits_out_nents = NULL;734734 struct sec_request_el *el, *temp;735735+ bool split = skreq->src != skreq->dst;735736736737 mutex_init(&sec_req->lock);737738 sec_req->req_base = &skreq->base;···751750 if (ret)752751 goto err_free_split_sizes;753752754754- if (skreq->src != skreq->dst) {753753+ if (split) {755754 sec_req->len_out = sg_nents(skreq->dst);756755 ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,757756 &splits_out, &splits_out_nents,···786785 split_sizes[i],787786 skreq->src != skreq->dst,788787 splits_in[i], splits_in_nents[i],789789- splits_out[i],790790- splits_out_nents[i], info);788788+ split ? splits_out[i] : NULL,789789+ split ? splits_out_nents[i] : 0,790790+ info);791791 if (IS_ERR(el)) {792792 ret = PTR_ERR(el);793793 goto err_free_elements;···808806 * more refined but this is unlikely to happen so no need.809807 */810808811811- /* Cleanup - all elements in pointer arrays have been coppied */812812- kfree(splits_in_nents);813813- kfree(splits_in);814814- kfree(splits_out_nents);815815- kfree(splits_out);816816- kfree(split_sizes);817817-818809 /* Grab a big lock for a long time to avoid concurrency issues */819810 mutex_lock(&queue->queuelock);820811···822827 (!queue->havesoftqueue ||823828 kfifo_avail(&queue->softqueue) > steps)) ||824829 !list_empty(&ctx->backlog)) {830830+ ret = -EBUSY;825831 if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {826832 list_add_tail(&sec_req->backlog_head, &ctx->backlog);827833 mutex_unlock(&queue->queuelock);828828- return -EBUSY;834834+ goto out;829835 }830836831831- ret = -EBUSY;832837 mutex_unlock(&queue->queuelock);833838 goto err_free_elements;834839 }···837842 if (ret)838843 goto err_free_elements;839844840840- return -EINPROGRESS;845845+ ret = -EINPROGRESS;846846+out:847847+ /* Cleanup - all elements in pointer arrays have been copied */848848+ kfree(splits_in_nents);849849+ kfree(splits_in);850850+ kfree(splits_out_nents);
851851+ kfree(splits_out);852852+ kfree(split_sizes);853853+ return ret;841854842855err_free_elements:843856 list_for_each_entry_safe(el, temp, &sec_req->elements, head) {···857854 crypto_skcipher_ivsize(atfm),858855 DMA_BIDIRECTIONAL);859856err_unmap_out_sg:860860- if (skreq->src != skreq->dst)857857+ if (split)861858 sec_unmap_sg_on_err(skreq->dst, steps, splits_out,862859 splits_out_nents, sec_req->len_out,863860 info->dev);
···265265 (params.mmap & ~PAGE_MASK)));266266267267 init_screen_info();268268+269269+ /* ARM does not permit early mappings to persist across paging_init() */270270+ if (IS_ENABLED(CONFIG_ARM))271271+ efi_memmap_unmap();268272}269273270274static int __init register_gop_device(void)
+1-1
drivers/firmware/efi/arm-runtime.c
···110110{111111 u64 mapsize;112112113113- if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {113113+ if (!efi_enabled(EFI_BOOT)) {114114 pr_info("EFI services will not be available.\n");115115 return 0;116116 }
···268268269269 if (pxa_gpio_has_pinctrl()) {270270 ret = pinctrl_gpio_direction_input(chip->base + offset);271271- if (!ret)272272- return 0;271271+ if (ret)272272+ return ret;273273 }274274275275 spin_lock_irqsave(&gpio_lock, flags);
+3-2
drivers/gpio/gpiolib.c
···12951295 gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);12961296 if (!gdev->descs) {12971297 status = -ENOMEM;12981298- goto err_free_gdev;12981298+ goto err_free_ida;12991299 }1300130013011301 if (chip->ngpio == 0) {···14271427 kfree_const(gdev->label);14281428err_free_descs:14291429 kfree(gdev->descs);14301430-err_free_gdev:14301430+err_free_ida:14311431 ida_simple_remove(&gpio_ida, gdev->id);14321432+err_free_gdev:14321433 /* failures here can mean systems won't boot... */14331434 pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,14341435 gdev->base, gdev->base + gdev->ngpio - 1,
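The gpiolib fix above renames and reorders the error labels so the ida entry is released before the containing gdev is freed, i.e. cleanup runs in the reverse order of acquisition. A generic sketch of that goto-unwind idiom, with hypothetical resources `a` and `b` standing in for the gdev and ida entry:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Acquire resources in order a, b; on failure jump to the label that
 * releases only what has already been acquired, in reverse order.
 * fail_at injects a failure after the Nth acquisition for testing.
 */
static int setup(int fail_at)
{
	void *a = NULL, *b = NULL;
	int status = 0;

	a = malloc(16);
	if (!a || fail_at == 1) {
		status = -1;
		goto err_free_a;
	}

	b = malloc(16);
	if (!b || fail_at == 2) {
		status = -1;
		goto err_free_b;
	}

	free(b);
	free(a);
	return 0;

err_free_b:
	free(b);	/* undo the most recent acquisition first */
err_free_a:
	free(a);	/* then the earlier one */
	return status;
}
```

The bug class the patch fixes is exactly a wrong label order: releasing the container before a resource recorded inside it.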
···339339 struct drm_property *audio_property;340340 /* FMT dithering */341341 struct drm_property *dither_property;342342+ /* maximum number of bits per channel for monitor color */343343+ struct drm_property *max_bpc_property;342344 /* hardcoded DFP edid from BIOS */343345 struct edid *bios_hardcoded_edid;344346 int bios_hardcoded_edid_size;
+10-8
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···16321632 continue;16331633 }1634163416351635- /* First check if the entry is already handled */16361636- if (cursor.pfn < frag_start) {16371637- cursor.entry->huge = true;16381638- amdgpu_vm_pt_next(adev, &cursor);16391639- continue;16401640- }16411641-16421635 /* If it isn't already handled it can't be a huge page */16431636 if (cursor.entry->huge) {16441637 /* Add the entry to the relocated list to update it. */···16941701 }16951702 } while (frag_start < entry_end);1696170316971697- if (frag >= shift)17041704+ if (amdgpu_vm_pt_descendant(adev, &cursor)) {17051705+ /* Mark all child entries as huge */17061706+ while (cursor.pfn < frag_start) {17071707+ cursor.entry->huge = true;17081708+ amdgpu_vm_pt_next(adev, &cursor);17091709+ }17101710+17111711+ } else if (frag >= shift) {17121712+ /* or just move on to the next on the same level. */16981713 amdgpu_vm_pt_next(adev, &cursor);17141714+ }16991715 }1700171617011717 return 0;
+3-3
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
···72727373 /* Program the system aperture low logical page number. */7474 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,7575- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);7575+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);76767777 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)7878 /*···8282 * to get rid of the VM fault and hardware hang.8383 */8484 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8585- max((adev->gmc.vram_end >> 18) + 0x1,8585+ max((adev->gmc.fb_end >> 18) + 0x1,8686 adev->gmc.agp_end >> 18));8787 else8888 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8989- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);8989+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);90909191 /* Set default page address. */9292 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
···12751275 mutex_lock(&mgr->lock);12761276 mstb = mgr->mst_primary;1277127712781278+ if (!mstb)12791279+ goto out;12801280+12781281 for (i = 0; i < lct - 1; i++) {12791282 int shift = (i % 2) ? 0 : 4;12801283 int port_num = (rad[i / 2] >> shift) & 0xf;
+3
drivers/gpu/drm/drm_fb_helper.c
···219219 mutex_lock(&fb_helper->lock);220220 drm_connector_list_iter_begin(dev, &conn_iter);221221 drm_for_each_connector_iter(connector, &conn_iter) {222222+ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)223223+ continue;224224+222225 ret = __drm_fb_helper_add_one_connector(fb_helper, connector);223226 if (ret)224227 goto fail;
+1-1
drivers/gpu/drm/drm_fourcc.c
···97979898/**9999 * drm_driver_legacy_fb_format - compute drm fourcc code from legacy description100100+ * @dev: DRM device100101 * @bpp: bits per pixels101102 * @depth: bit depth per pixel102102- * @native: use host native byte order103103 *104104 * Computes a drm fourcc pixel format code for the given @bpp/@depth values.105105 * Unlike drm_mode_legacy_fb_format() this looks at the drivers mode_config,
+6-1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···12681268 else if (gen >= 4)12691269 len = 4;12701270 else12711271- len = 3;12711271+ len = 6;1272127212731273 batch = reloc_gpu(eb, vma, len);12741274 if (IS_ERR(batch))···13061306 *batch++ = addr;13071307 *batch++ = target_offset;13081308 } else {13091309+ *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;13101310+ *batch++ = addr;13111311+ *batch++ = target_offset;13121312+13131313+ /* And again for good measure (blb/pnv) */13091314 *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;13101315 *batch++ = addr;13111316 *batch++ = target_offset;
···228228 drm_for_each_connector_iter(connector, &conn_iter) {229229 struct intel_connector *intel_connector = to_intel_connector(connector);230230231231- if (intel_connector->encoder->hpd_pin == pin) {231231+ /* Don't check MST ports, they don't have pins */232232+ if (!intel_connector->mst_port &&233233+ intel_connector->encoder->hpd_pin == pin) {232234 if (connector->polled != intel_connector->polled)233235 DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",234236 connector->name);···397395 struct intel_encoder *encoder;398396 bool storm_detected = false;399397 bool queue_dig = false, queue_hp = false;398398+ u32 long_hpd_pulse_mask = 0;399399+ u32 short_hpd_pulse_mask = 0;400400+ enum hpd_pin pin;400401401402 if (!pin_mask)402403 return;403404404405 spin_lock(&dev_priv->irq_lock);405405- for_each_intel_encoder(&dev_priv->drm, encoder) {406406- enum hpd_pin pin = encoder->hpd_pin;407407- bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);408406407407+ /*408408+ * Determine whether ->hpd_pulse() exists for each pin, and409409+ * whether we have a short or a long pulse. This is needed410410+ * as each pin may have up to two encoders (HDMI and DP) and411411+ * only the one of them (DP) will have ->hpd_pulse().412412+ */413413+ for_each_intel_encoder(&dev_priv->drm, encoder) {414414+ bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);415415+ enum port port = encoder->port;416416+ bool long_hpd;417417+418418+ pin = encoder->hpd_pin;409419 if (!(BIT(pin) & pin_mask))410420 continue;411421412412- if (has_hpd_pulse) {413413- bool long_hpd = long_mask & BIT(pin);414414- enum port port = encoder->port;422422+ if (!has_hpd_pulse)423423+ continue;415424
416416- DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),417417- long_hpd ? "long" : "short");418418- /*419419- * For long HPD pulses we want to have the digital queue happen,420420- * but we still want HPD storm detection to function.421421- */422422- queue_dig = true;423423- if (long_hpd) {424424- dev_priv->hotplug.long_port_mask |= (1 << port);425425- } else {426426- /* for short HPD just trigger the digital queue */427427- dev_priv->hotplug.short_port_mask |= (1 << port);428428- continue;429429- }425425+ long_hpd = long_mask & BIT(pin);426426+427427+ DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),428428+ long_hpd ? "long" : "short");429429+ queue_dig = true;430430+431431+ if (long_hpd) {432432+ long_hpd_pulse_mask |= BIT(pin);433433+ dev_priv->hotplug.long_port_mask |= BIT(port);434434+ } else {435435+ short_hpd_pulse_mask |= BIT(pin);436436+ dev_priv->hotplug.short_port_mask |= BIT(port);430437 }438438+ }439439+440440+ /* Now process each pin just once */441441+ for_each_hpd_pin(pin) {442442+ bool long_hpd;443443+444444+ if (!(BIT(pin) & pin_mask))445445+ continue;431446432447 if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) {433448 /*···461442 if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED)462443 continue;463444464464- if (!has_hpd_pulse) {445445+ /*446446+ * Delegate to ->hpd_pulse() if one of the encoders for this447447+ * pin has it, otherwise let the hotplug_work deal with this448448+ * pin directly.449449+ */450450+ if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) {451451+ long_hpd = long_hpd_pulse_mask & BIT(pin);452452+ } else {465453 dev_priv->hotplug.event_bits |= BIT(pin);454454+ long_hpd = true;466455 queue_hp = true;467456 }457457+458458+ if (!long_hpd)459459+ continue;468460469461 if (intel_hpd_irq_storm_detect(dev_priv, pin)) {470462 dev_priv->hotplug.event_bits &= ~BIT(pin);
+13 -1
drivers/gpu/drm/i915/intel_lrc.c
@@ -424 +424 @@
 	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);

-	/* True 32b PPGTT with dynamic page allocation: update PDP
+	/*
+	 * True 32b PPGTT with dynamic page allocation: update PDP
 	 * registers and point the unallocated PDPs to scratch page.
 	 * PML4 is allocated during ppgtt init, so this is not needed
 	 * in 48-bit mode.
@@ -433 +432 @@
 	if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))
 		execlists_update_context_pdps(ppgtt, reg_state);

+	/*
+	 * Make sure the context image is complete before we submit it to HW.
+	 *
+	 * Ostensibly, writes (including the WCB) should be flushed prior to
+	 * an uncached write such as our mmio register access, the empirical
+	 * evidence (esp. on Braswell) suggests that the WC write into memory
+	 * may not be visible to the HW prior to the completion of the UC
+	 * register write and that we may begin execution from the context
+	 * before its image is complete leading to invalid PD chasing.
+	 */
+	wmb();
 	return ce->lrc_desc;
 }
+40 -1
drivers/gpu/drm/i915/intel_pm.c
@@ -2493 +2493 @@
 	uint32_t method1, method2;
 	int cpp;

+	if (mem_value == 0)
+		return U32_MAX;
+
 	if (!intel_wm_plane_visible(cstate, pstate))
 		return 0;
@@ -2525 +2522 @@
 	uint32_t method1, method2;
 	int cpp;

+	if (mem_value == 0)
+		return U32_MAX;
+
 	if (!intel_wm_plane_visible(cstate, pstate))
 		return 0;
@@ -2550 +2544 @@
 			  uint32_t mem_value)
 {
 	int cpp;
+
+	if (mem_value == 0)
+		return U32_MAX;

 	if (!intel_wm_plane_visible(cstate, pstate))
 		return 0;
@@ -3017 +3008 @@
 	intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
 }

+static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv)
+{
+	/*
+	 * On some SNB machines (Thinkpad X220 Tablet at least)
+	 * LP3 usage can cause vblank interrupts to be lost.
+	 * The DEIIR bit will go high but it looks like the CPU
+	 * never gets interrupted.
+	 *
+	 * It's not clear whether other interrupt source could
+	 * be affected or if this is somehow limited to vblank
+	 * interrupts only. To play it safe we disable LP3
+	 * watermarks entirely.
+	 */
+	if (dev_priv->wm.pri_latency[3] == 0 &&
+	    dev_priv->wm.spr_latency[3] == 0 &&
+	    dev_priv->wm.cur_latency[3] == 0)
+		return;
+
+	dev_priv->wm.pri_latency[3] = 0;
+	dev_priv->wm.spr_latency[3] = 0;
+	dev_priv->wm.cur_latency[3] = 0;
+
+	DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n");
+	intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency);
+	intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
+	intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
+}
+
 static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv)
 {
 	intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency);
@@ -3061 +3024 @@
 	intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
 	intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);

-	if (IS_GEN6(dev_priv))
+	if (IS_GEN6(dev_priv)) {
 		snb_wm_latency_quirk(dev_priv);
+		snb_wm_lp3_irq_quirk(dev_priv);
+	}
 }

 static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
+36 -2
drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -91 +91 @@
 gen4_render_ring_flush(struct i915_request *rq, u32 mode)
 {
 	u32 cmd, *cs;
+	int i;

 	/*
 	 * read/write caches:
@@ -128 +127 @@
 		cmd |= MI_INVALIDATE_ISP;
 	}

-	cs = intel_ring_begin(rq, 2);
+	i = 2;
+	if (mode & EMIT_INVALIDATE)
+		i += 20;
+
+	cs = intel_ring_begin(rq, i);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);

 	*cs++ = cmd;
-	*cs++ = MI_NOOP;
+
+	/*
+	 * A random delay to let the CS invalidate take effect? Without this
+	 * delay, the GPU relocation path fails as the CS does not see
+	 * the updated contents. Just as important, if we apply the flushes
+	 * to the EMIT_FLUSH branch (i.e. immediately after the relocation
+	 * write and before the invalidate on the next batch), the relocations
+	 * still fail. This implies that is a delay following invalidation
+	 * that is required to reset the caches as opposed to a delay to
+	 * ensure the memory is written.
+	 */
+	if (mode & EMIT_INVALIDATE) {
+		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
+			PIPE_CONTROL_GLOBAL_GTT;
+		*cs++ = 0;
+		*cs++ = 0;
+
+		for (i = 0; i < 12; i++)
+			*cs++ = MI_FLUSH;
+
+		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
+			PIPE_CONTROL_GLOBAL_GTT;
+		*cs++ = 0;
+		*cs++ = 0;
+	}
+
+	*cs++ = cmd;
+
 	intel_ring_advance(rq, cs);

 	return 0;
@@ -854 +854 @@
 	unsigned int sof_lines;
 	unsigned int vsync_lines;

+	/* Use VENCI for 480i and 576i and double HDMI pixels */
+	if (mode->flags & DRM_MODE_FLAG_DBLCLK) {
+		hdmi_repeat = true;
+		use_enci = true;
+		venc_hdmi_latency = 1;
+	}
+
 	if (meson_venc_hdmi_supported_vic(vic)) {
 		vmode = meson_venc_hdmi_get_vic_vmode(vic);
 		if (!vmode) {
@@ -872 +865 @@
 	} else {
 		meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt);
 		vmode = &vmode_dmt;
-	}
-
-	/* Use VENCI for 480i and 576i and double HDMI pixels */
-	if (mode->flags & DRM_MODE_FLAG_DBLCLK) {
-		hdmi_repeat = true;
-		use_enci = true;
-		venc_hdmi_latency = 1;
+		use_enci = false;
 	}

 	/* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */
+11 -11
drivers/gpu/drm/omapdrm/dss/dsi.c
@@ -5409 +5409 @@

 	/* DSI on OMAP3 doesn't have register DSI_GNQ, set number
 	 * of data to 3 by default */
-	if (dsi->data->quirks & DSI_QUIRK_GNQ)
+	if (dsi->data->quirks & DSI_QUIRK_GNQ) {
+		dsi_runtime_get(dsi);
 		/* NB_DATA_LANES */
 		dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9);
-	else
+		dsi_runtime_put(dsi);
+	} else {
 		dsi->num_lanes_supported = 3;
+	}

 	r = dsi_init_output(dsi);
 	if (r)
@@ -5429 +5426 @@
 	}

 	r = of_platform_populate(dev->of_node, NULL, NULL, dev);
-	if (r)
+	if (r) {
 		DSSERR("Failed to populate DSI child devices: %d\n", r);
+		goto err_uninit_output;
+	}

 	r = component_add(&pdev->dev, &dsi_component_ops);
 	if (r)
-		goto err_uninit_output;
+		goto err_of_depopulate;

 	return 0;

+err_of_depopulate:
+	of_platform_depopulate(dev);
 err_uninit_output:
 	dsi_uninit_output(dsi);
 err_pm_disable:
@@ -5477 +5470 @@
 	/* wait for current handler to finish before turning the DSI off */
 	synchronize_irq(dsi->irq);

-	dispc_runtime_put(dsi->dss->dispc);
-
 	return 0;
 }

 static int dsi_runtime_resume(struct device *dev)
 {
 	struct dsi_data *dsi = dev_get_drvdata(dev);
-	int r;
-
-	r = dispc_runtime_get(dsi->dss->dispc);
-	if (r)
-		return r;

 	dsi->is_enabled = true;
 	/* ensure the irq handler sees the is_enabled value */
+10 -1
drivers/gpu/drm/omapdrm/dss/dss.c
@@ -1484 +1484 @@
 					       dss);

 	/* Add all the child devices as components. */
+	r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+	if (r)
+		goto err_uninit_debugfs;
+
 	omapdss_gather_components(&pdev->dev);

 	device_for_each_child(&pdev->dev, &match, dss_add_child_component);

 	r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match);
 	if (r)
-		goto err_uninit_debugfs;
+		goto err_of_depopulate;

 	return 0;
+
+err_of_depopulate:
+	of_platform_depopulate(&pdev->dev);

 err_uninit_debugfs:
 	dss_debugfs_remove_file(dss->debugfs.clk);
@@ -1528 +1521 @@
 static int dss_remove(struct platform_device *pdev)
 {
 	struct dss_device *dss = platform_get_drvdata(pdev);
+
+	of_platform_depopulate(&pdev->dev);

 	component_master_del(&pdev->dev, &dss_component_ops);
@@ -946 +946 @@
 	if (venc->tv_dac_clk)
 		clk_disable_unprepare(venc->tv_dac_clk);

-	dispc_runtime_put(venc->dss->dispc);
-
 	return 0;
 }

 static int venc_runtime_resume(struct device *dev)
 {
 	struct venc_device *venc = dev_get_drvdata(dev);
-	int r;
-
-	r = dispc_runtime_get(venc->dss->dispc);
-	if (r < 0)
-		return r;

 	if (venc->tv_dac_clk)
 		clk_prepare_enable(venc->tv_dac_clk);
@@ -214 +214 @@
 		return 0;
 	}

+	/* We know for sure we don't want an async update here. Set
+	 * state->legacy_cursor_update to false to prevent
+	 * drm_atomic_helper_setup_commit() from auto-completing
+	 * commit->flip_done.
+	 */
+	state->legacy_cursor_update = false;
 	ret = drm_atomic_helper_setup_commit(state, nonblock);
 	if (ret)
 		return ret;
+13 -2
drivers/gpu/drm/vc4/vc4_plane.c
@@ -807 +807 @@
 static void vc4_plane_atomic_async_update(struct drm_plane *plane,
 					  struct drm_plane_state *state)
 {
-	struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
+	struct vc4_plane_state *vc4_state, *new_vc4_state;

 	if (plane->state->fb != state->fb) {
 		vc4_plane_async_set_fb(plane, state->fb);
@@ -828 +828 @@
 	plane->state->src_y = state->src_y;

 	/* Update the display list based on the new crtc_x/y. */
-	vc4_plane_atomic_check(plane, plane->state);
+	vc4_plane_atomic_check(plane, state);
+
+	new_vc4_state = to_vc4_plane_state(state);
+	vc4_state = to_vc4_plane_state(plane->state);
+
+	/* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */
+	vc4_state->dlist[vc4_state->pos0_offset] =
+		new_vc4_state->dlist[vc4_state->pos0_offset];
+	vc4_state->dlist[vc4_state->pos2_offset] =
+		new_vc4_state->dlist[vc4_state->pos2_offset];
+	vc4_state->dlist[vc4_state->ptr0_offset] =
+		new_vc4_state->dlist[vc4_state->ptr0_offset];

 	/* Note that we can't just call vc4_plane_write_dlist()
 	 * because that would smash the context data that the HVS is
@@ -325 +325 @@
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM,
 		USB_DEVICE_ID_ELECOM_BM084),
 	  HID_BATTERY_QUIRK_IGNORE },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL,
+		USB_DEVICE_ID_SYMBOL_SCANNER_3),
+	  HID_BATTERY_QUIRK_IGNORE },
 	{}
 };
@@ -1841 +1838 @@
 }
 EXPORT_SYMBOL_GPL(hidinput_disconnect);

-/**
- * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll
- *                                      events given a high-resolution wheel
- *                                      movement.
- * @counter: a hid_scroll_counter struct describing the wheel.
- * @hi_res_value: the movement of the wheel, in the mouse's high-resolution
- *                units.
- *
- * Given a high-resolution movement, this function converts the movement into
- * microns and emits high-resolution scroll events for the input device. It also
- * uses the multiplier from &struct hid_scroll_counter to emit low-resolution
- * scroll events when appropriate for backwards-compatibility with userspace
- * input libraries.
- */
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
-				      int hi_res_value)
-{
-	int low_res_value, remainder, multiplier;
-
-	input_report_rel(counter->dev, REL_WHEEL_HI_RES,
-			 hi_res_value * counter->microns_per_hi_res_unit);
-
-	/*
-	 * Update the low-res remainder with the high-res value,
-	 * but reset if the direction has changed.
-	 */
-	remainder = counter->remainder;
-	if ((remainder ^ hi_res_value) < 0)
-		remainder = 0;
-	remainder += hi_res_value;
-
-	/*
-	 * Then just use the resolution multiplier to see if
-	 * we should send a low-res (aka regular wheel) event.
-	 */
-	multiplier = counter->resolution_multiplier;
-	low_res_value = remainder / multiplier;
-	remainder -= low_res_value * multiplier;
-	counter->remainder = remainder;
-
-	if (low_res_value)
-		input_report_rel(counter->dev, REL_WHEEL, low_res_value);
-}
-EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
+27 -282
drivers/hid/hid-logitech-hidpp.c
@@ -64 +64 @@
 #define HIDPP_QUIRK_NO_HIDINPUT			BIT(23)
 #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS	BIT(24)
 #define HIDPP_QUIRK_UNIFYING			BIT(25)
-#define HIDPP_QUIRK_HI_RES_SCROLL_1P0		BIT(26)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2120		BIT(27)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2121		BIT(28)
-
-/* Convenience constant to check for any high-res support. */
-#define HIDPP_QUIRK_HI_RES_SCROLL	(HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \
-					 HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \
-					 HIDPP_QUIRK_HI_RES_SCROLL_X2121)

 #define HIDPP_QUIRK_DELAYED_INIT		HIDPP_QUIRK_NO_HIDINPUT
@@ -149 +157 @@
 	unsigned long capabilities;

 	struct hidpp_battery battery;
-	struct hid_scroll_counter vertical_wheel_counter;
 };

 /* HID++ 1.0 error codes */
@@ -400 +409 @@
 #define HIDPP_SET_LONG_REGISTER				0x82
 #define HIDPP_GET_LONG_REGISTER				0x83

-/**
- * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register.
- * @hidpp_dev: the device to set the register on.
- * @register_address: the address of the register to modify.
- * @byte: the byte of the register to modify. Should be less than 3.
- * Return: 0 if successful, otherwise a negative error code.
- */
-static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev,
-	u8 register_address, u8 byte, u8 bit)
+#define HIDPP_REG_GENERAL				0x00
+
+static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
 {
 	struct hidpp_report response;
 	int ret;
 	u8 params[3] = { 0 };

 	ret = hidpp_send_rap_command_sync(hidpp_dev,
-					  REPORT_ID_HIDPP_SHORT,
-					  HIDPP_GET_REGISTER,
-					  register_address,
-					  NULL, 0, &response);
+					REPORT_ID_HIDPP_SHORT,
+					HIDPP_GET_REGISTER,
+					HIDPP_REG_GENERAL,
+					NULL, 0, &response);
 	if (ret)
 		return ret;

 	memcpy(params, response.rap.params, 3);

-	params[byte] |= BIT(bit);
+	/* Set the battery bit */
+	params[0] |= BIT(4);

 	return hidpp_send_rap_command_sync(hidpp_dev,
-					   REPORT_ID_HIDPP_SHORT,
-					   HIDPP_SET_REGISTER,
-					   register_address,
-					   params, 3, &response);
-}
-
-
-#define HIDPP_REG_GENERAL 0x00
-
-static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
-{
-	return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4);
-}
-
-#define HIDPP_REG_FEATURES 0x01
-
-/* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". */
-static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)
-{
-	return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);
+					REPORT_ID_HIDPP_SHORT,
+					HIDPP_SET_REGISTER,
+					HIDPP_REG_GENERAL,
+					params, 3, &response);
 }

 #define HIDPP_REG_BATTERY_STATUS			0x07
@@ -1134 +1164 @@
 	}

 	return ret;
-}
-
-/* -------------------------------------------------------------------------- */
-/* 0x2120: Hi-resolution scrolling                                            */
-/* -------------------------------------------------------------------------- */
-
-#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120
-
-#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10
-
-static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,
-	bool enabled, u8 *multiplier)
-{
-	u8 feature_index;
-	u8 feature_type;
-	int ret;
-	u8 params[1];
-	struct hidpp_report response;
-
-	ret = hidpp_root_get_feature(hidpp,
-				     HIDPP_PAGE_HI_RESOLUTION_SCROLLING,
-				     &feature_index,
-				     &feature_type);
-	if (ret)
-		return ret;
-
-	params[0] = enabled ? BIT(0) : 0;
-	ret = hidpp_send_fap_command_sync(hidpp, feature_index,
-					  CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE,
-					  params, sizeof(params), &response);
-	if (ret)
-		return ret;
-	*multiplier = response.fap.params[1];
-	return 0;
-}
-
-/* -------------------------------------------------------------------------- */
-/* 0x2121: HiRes Wheel                                                        */
-/* -------------------------------------------------------------------------- */
-
-#define HIDPP_PAGE_HIRES_WHEEL 0x2121
-
-#define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00
-#define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20
-
-static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp,
-	u8 *multiplier)
-{
-	u8 feature_index;
-	u8 feature_type;
-	int ret;
-	struct hidpp_report response;
-
-	ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
-				     &feature_index, &feature_type);
-	if (ret)
-		goto return_default;
-
-	ret = hidpp_send_fap_command_sync(hidpp, feature_index,
-					  CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY,
-					  NULL, 0, &response);
-	if (ret)
-		goto return_default;
-
-	*multiplier = response.fap.params[0];
-	return 0;
-return_default:
-	hid_warn(hidpp->hid_dev,
-		 "Couldn't get wheel multiplier (error %d), assuming %d.\n",
-		 ret, *multiplier);
-	return ret;
-}
-
-static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert,
-	bool high_resolution, bool use_hidpp)
-{
-	u8 feature_index;
-	u8 feature_type;
-	int ret;
-	u8 params[1];
-	struct hidpp_report response;
-
-	ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
-				     &feature_index, &feature_type);
-	if (ret)
-		return ret;
-
-	params[0] = (invert ? BIT(2) : 0) |
-		    (high_resolution ? BIT(1) : 0) |
-		    (use_hidpp ? BIT(0) : 0);
-
-	return hidpp_send_fap_command_sync(hidpp, feature_index,
-					   CMD_HIRES_WHEEL_SET_WHEEL_MODE,
-					   params, sizeof(params), &response);
 }

 /* -------------------------------------------------------------------------- */
@@ -2399 +2523 @@
 		input_report_rel(mydata->input, REL_Y, v);

 		v = hid_snto32(data[6], 8);
-		hid_scroll_counter_handle_scroll(
-				&hidpp->vertical_wheel_counter, v);
+		input_report_rel(mydata->input, REL_WHEEL, v);

 		input_sync(mydata->input);
 	}
@@ -2528 +2653 @@
 }

 /* -------------------------------------------------------------------------- */
-/* High-resolution scroll wheels                                              */
-/* -------------------------------------------------------------------------- */
-
-/**
- * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel.
- * @product_id: the HID product ID of the device being described.
- * @microns_per_hi_res_unit: the distance moved by the user's finger for each
- *                           high-resolution unit reported by the device, in
- *                           256ths of a millimetre.
- */
-struct hi_res_scroll_info {
-	__u32 product_id;
-	int microns_per_hi_res_unit;
-};
-
-static struct hi_res_scroll_info hi_res_scroll_devices[] = {
-	{ /* Anywhere MX */
-	  .product_id = 0x1017, .microns_per_hi_res_unit = 445 },
-	{ /* Performance MX */
-	  .product_id = 0x101a, .microns_per_hi_res_unit = 406 },
-	{ /* M560 */
-	  .product_id = 0x402d, .microns_per_hi_res_unit = 435 },
-	{ /* MX Master 2S */
-	  .product_id = 0x4069, .microns_per_hi_res_unit = 406 },
-};
-
-static int hi_res_scroll_look_up_microns(__u32 product_id)
-{
-	int i;
-	int num_devices = sizeof(hi_res_scroll_devices)
-			  / sizeof(hi_res_scroll_devices[0]);
-	for (i = 0; i < num_devices; i++) {
-		if (hi_res_scroll_devices[i].product_id == product_id)
-			return hi_res_scroll_devices[i].microns_per_hi_res_unit;
-	}
-	/* We don't have a value for this device, so use a sensible default. */
-	return 406;
-}
-
-static int hi_res_scroll_enable(struct hidpp_device *hidpp)
-{
-	int ret;
-	u8 multiplier = 8;
-
-	if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) {
-		ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false);
-		hidpp_hrw_get_wheel_capability(hidpp, &multiplier);
-	} else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) {
-		ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true,
-							   &multiplier);
-	} else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */
-		ret = hidpp10_enable_scrolling_acceleration(hidpp);
-
-	if (ret)
-		return ret;
-
-	hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;
-	hidpp->vertical_wheel_counter.microns_per_hi_res_unit =
-		hi_res_scroll_look_up_microns(hidpp->hid_dev->product);
-	hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n",
-		 multiplier,
-		 hidpp->vertical_wheel_counter.microns_per_hi_res_unit);
-	return 0;
-}
-
-/* -------------------------------------------------------------------------- */
 /* Generic HID++ devices                                                      */
 /* -------------------------------------------------------------------------- */
@@ -2572 +2763 @@
 		wtp_populate_input(hidpp, input, origin_is_hid_core);
 	else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560)
 		m560_populate_input(hidpp, input, origin_is_hid_core);
-
-	if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) {
-		input_set_capability(input, EV_REL, REL_WHEEL_HI_RES);
-		hidpp->vertical_wheel_counter.dev = input;
-	}
 }

 static int hidpp_input_configured(struct hid_device *hdev,
@@ -2688 +2884 @@
 		return m560_raw_event(hdev, data, size);

 	return 0;
-}
-
-static int hidpp_event(struct hid_device *hdev, struct hid_field *field,
-		       struct hid_usage *usage, __s32 value)
-{
-	/* This function will only be called for scroll events, due to the
-	 * restriction imposed in hidpp_usages.
-	 */
-	struct hidpp_device *hidpp = hid_get_drvdata(hdev);
-	struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter;
-	/* A scroll event may occur before the multiplier has been retrieved or
-	 * the input device set, or high-res scroll enabling may fail. In such
-	 * cases we must return early (falling back to default behaviour) to
-	 * avoid a crash in hid_scroll_counter_handle_scroll.
-	 */
-	if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0
-	    || counter->dev == NULL || counter->resolution_multiplier == 0)
-		return 0;
-
-	hid_scroll_counter_handle_scroll(counter, value);
-	return 1;
 }

 static int hidpp_initialize_battery(struct hidpp_device *hidpp)
@@ -2901 +3118 @@
 	if (hidpp->battery.ps)
 		power_supply_changed(hidpp->battery.ps);

-	if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)
-		hi_res_scroll_enable(hidpp);
-
 	if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)
 		/* if the input nodes are already created, we can stop now */
 		return;
@@ -3086 +3306 @@
 	mutex_destroy(&hidpp->send_mutex);
 }

-#define LDJ_DEVICE(product) \
-	HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \
-		   USB_VENDOR_ID_LOGITECH, (product))
-
 static const struct hid_device_id hidpp_devices[] = {
 	{ /* wireless touchpad */
-	  LDJ_DEVICE(0x4011),
+	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, 0x4011),
 	  .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT |
 			 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS },
 	{ /* wireless touchpad T650 */
-	  LDJ_DEVICE(0x4101),
+	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, 0x4101),
 	  .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
 	{ /* wireless touchpad T651 */
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
		USB_DEVICE_ID_LOGITECH_T651),
 	  .driver_data = HIDPP_QUIRK_CLASS_WTP },
-	{ /* Mouse Logitech Anywhere MX */
-	  LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
-	{ /* Mouse Logitech Cube */
-	  LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
-	{ /* Mouse Logitech M335 */
-	  LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech M515 */
-	  LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
 	{ /* Mouse logitech M560 */
-	  LDJ_DEVICE(0x402d),
-	  .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560
-		| HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
-	{ /* Mouse Logitech M705 (firmware RQM17) */
-	  LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
-	{ /* Mouse Logitech M705 (firmware RQM67) */
-	  LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech M720 */
-	  LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech MX Anywhere 2 */
-	  LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech MX Anywhere 2S */
-	  LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech MX Master */
-	  LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ LDJ_DEVICE(0x4071), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech MX Master 2S */
-	  LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
-	{ /* Mouse Logitech Performance MX */
-	  LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, 0x402d),
+	  .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },
 	{ /* Keyboard logitech K400 */
-	  LDJ_DEVICE(0x4024),
+	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, 0x4024),
 	  .driver_data = HIDPP_QUIRK_CLASS_K400 },
 	{ /* Solar Keyboard Logitech K750 */
-	  LDJ_DEVICE(0x4002),
+	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, 0x4002),
 	  .driver_data = HIDPP_QUIRK_CLASS_K750 },

-	{ LDJ_DEVICE(HID_ANY_ID) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+		USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},

 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL),
 		.driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS},
@@ -3123 +3371 @@

 MODULE_DEVICE_TABLE(hid, hidpp_devices);

-static const struct hid_usage_id hidpp_usages[] = {
-	{ HID_GD_WHEEL, EV_REL, REL_WHEEL },
-	{ HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1}
-};
-
 static struct hid_driver hidpp_driver = {
 	.name = "logitech-hidpp-device",
 	.id_table = hidpp_devices,
 	.probe = hidpp_probe,
 	.remove = hidpp_remove,
 	.raw_event = hidpp_raw_event,
-	.usage_table = hidpp_usages,
-	.event = hidpp_event,
 	.input_configured = hidpp_input_configured,
 	.input_mapping = hidpp_input_mapping,
 	.input_mapped = hidpp_input_mapped,
@@ -23 +23 @@
  * In order to avoid breaking them this driver creates a layered hidraw device,
  * so it can detect when the client is running and then:
  *  - it will not send any command to the controller.
- *  - this input device will be disabled, to avoid double input of the same
+ *  - this input device will be removed, to avoid double input of the same
  *    user action.
+ * When the client is closed, this input device will be created again.
  *
  * For additional functions, such as changing the right-pad margin or switching
  * the led, you can use the user-space tool at:
@@ -114 +113 @@
 	spinlock_t lock;
 	struct hid_device *hdev, *client_hdev;
 	struct mutex mutex;
-	bool client_opened, input_opened;
+	bool client_opened;
 	struct input_dev __rcu *input;
 	unsigned long quirks;
 	struct work_struct work_connect;
@@ -280 +279 @@
 	}
 }

-static void steam_update_lizard_mode(struct steam_device *steam)
-{
-	mutex_lock(&steam->mutex);
-	if (!steam->client_opened) {
-		if (steam->input_opened)
-			steam_set_lizard_mode(steam, false);
-		else
-			steam_set_lizard_mode(steam, lizard_mode);
-	}
-	mutex_unlock(&steam->mutex);
-}
-
 static int steam_input_open(struct input_dev *dev)
 {
 	struct steam_device *steam = input_get_drvdata(dev);
@@ -290 +301 @@
 		return ret;

 	mutex_lock(&steam->mutex);
-	steam->input_opened = true;
 	if (!steam->client_opened && lizard_mode)
 		steam_set_lizard_mode(steam, false);
 	mutex_unlock(&steam->mutex);
@@ -301 +313 @@
 	struct steam_device *steam = input_get_drvdata(dev);

 	mutex_lock(&steam->mutex);
-	steam->input_opened = false;
 	if (!steam->client_opened && lizard_mode)
 		steam_set_lizard_mode(steam, true);
 	mutex_unlock(&steam->mutex);
@@ -387 +400 @@
 	return 0;
 }

-static int steam_register(struct steam_device *steam)
+static int steam_input_register(struct steam_device *steam)
 {
 	struct hid_device *hdev = steam->hdev;
 	struct input_dev *input;
@@ -400 +413 @@
 		dbg_hid("%s: already connected\n", __func__);
 		return 0;
 	}
-
-	/*
-	 * Unlikely, but getting the serial could fail, and it is not so
-	 * important, so make up a serial number and go on.
-	 */
-	if (steam_get_serial(steam) < 0)
-		strlcpy(steam->serial_no, "XXXXXXXXXX",
-			sizeof(steam->serial_no));
-
-	hid_info(hdev, "Steam Controller '%s' connected",
-			steam->serial_no);

 	input = input_allocate_device();
 	if (!input)
@@ -468 +492 @@
 		goto input_register_fail;

 	rcu_assign_pointer(steam->input, input);
-
-	/* ignore battery errors, we can live without it */
-	if (steam->quirks & STEAM_QUIRK_WIRELESS)
-		steam_battery_register(steam);
-
 	return 0;

 input_register_fail:
@@ -475 +504 @@
 	return ret;
 }

-static void steam_unregister(struct steam_device *steam)
+static void steam_input_unregister(struct steam_device *steam)
 {
 	struct input_dev *input;
+	rcu_read_lock();
+	input = rcu_dereference(steam->input);
+	rcu_read_unlock();
+	if (!input)
+		return;
+	RCU_INIT_POINTER(steam->input, NULL);
+	synchronize_rcu();
+	input_unregister_device(input);
+}
+
+static void steam_battery_unregister(struct steam_device *steam)
+{
 	struct power_supply *battery;

 	rcu_read_lock();
-	input = rcu_dereference(steam->input);
 	battery = rcu_dereference(steam->battery);
 	rcu_read_unlock();

-	if (battery) {
-		RCU_INIT_POINTER(steam->battery, NULL);
-		synchronize_rcu();
-		power_supply_unregister(battery);
+	if (!battery)
+		return;
+	RCU_INIT_POINTER(steam->battery, NULL);
+	synchronize_rcu();
+	power_supply_unregister(battery);
+}
+
+static int steam_register(struct steam_device *steam)
+{
+	int ret;
+
+	/*
+	 * This function can be called several times in a row with the
+	 * wireless adaptor, without steam_unregister() between them, because
+	 * another client send a get_connection_status command, for example.
+	 * The battery and serial number are set just once per device.
+	 */
+	if (!steam->serial_no[0]) {
+		/*
+		 * Unlikely, but getting the serial could fail, and it is not so
+		 * important, so make up a serial number and go on.
+		 */
+		if (steam_get_serial(steam) < 0)
+			strlcpy(steam->serial_no, "XXXXXXXXXX",
+				sizeof(steam->serial_no));
+
+		hid_info(steam->hdev, "Steam Controller '%s' connected",
+				steam->serial_no);
+
+		/* ignore battery errors, we can live without it */
+		if (steam->quirks & STEAM_QUIRK_WIRELESS)
+			steam_battery_register(steam);
+
+		mutex_lock(&steam_devices_lock);
+		list_add(&steam->list, &steam_devices);
+		mutex_unlock(&steam_devices_lock);
 	}
-	if (input) {
-		RCU_INIT_POINTER(steam->input, NULL);
-		synchronize_rcu();
+
+	mutex_lock(&steam->mutex);
+	if (!steam->client_opened) {
+		steam_set_lizard_mode(steam, lizard_mode);
+		ret = steam_input_register(steam);
+	} else {
+		ret = 0;
+	}
+	mutex_unlock(&steam->mutex);
+
+	return ret;
+}
+
+static void steam_unregister(struct steam_device *steam)
+{
+	steam_battery_unregister(steam);
+	steam_input_unregister(steam);
+	if (steam->serial_no[0]) {
 		hid_info(steam->hdev, "Steam Controller '%s' disconnected",
			steam->serial_no);
-		input_unregister_device(input);
+		mutex_lock(&steam_devices_lock);
+		list_del(&steam->list);
+		mutex_unlock(&steam_devices_lock);
+		steam->serial_no[0] = 0;
 	}
 }
@@ -632 +600 @@
 	mutex_lock(&steam->mutex);
 	steam->client_opened = true;
 	mutex_unlock(&steam->mutex);
+
+	steam_input_unregister(steam);
+
 	return ret;
 }
@@ -644 +609 @@

 	mutex_lock(&steam->mutex);
 	steam->client_opened = false;
-	if (steam->input_opened)
-		steam_set_lizard_mode(steam, false);
-	else
-		steam_set_lizard_mode(steam, lizard_mode);
 	mutex_unlock(&steam->mutex);

 	hid_hw_close(steam->hdev);
+	if (steam->connected) {
+		steam_set_lizard_mode(steam, lizard_mode);
+		steam_input_register(steam);
+	}
 }

 static int steam_client_ll_raw_request(struct hid_device *hdev,
@@ -779 +744 @@
 		}
 	}

-	mutex_lock(&steam_devices_lock);
-	steam_update_lizard_mode(steam);
-	list_add(&steam->list, &steam_devices);
-	mutex_unlock(&steam_devices_lock);
-
 	return 0;

 hid_hw_open_fail:
@@ -804 +774 @@
 		return;
 	}

-	mutex_lock(&steam_devices_lock);
-	list_del(&steam->list);
-	mutex_unlock(&steam_devices_lock);
-
 	hid_destroy_device(steam->client_hdev);
 	steam->client_opened = false;
 	cancel_work_sync(&steam->work_connect);
@@ -818 +792 @@
 static void steam_do_connect_event(struct steam_device *steam, bool connected)
 {
 	unsigned long flags;
+	bool changed;

 	spin_lock_irqsave(&steam->lock, flags);
+	changed = steam->connected != connected;
 	steam->connected = connected;
 	spin_unlock_irqrestore(&steam->lock, flags);

-	if (schedule_work(&steam->work_connect) == 0)
+	if (changed && schedule_work(&steam->work_connect) == 0)
 		dbg_hid("%s: connected=%d event already queued\n",
 			__func__, connected);
 }
@@ -1047 +1019 @@
 		return 0;
 	rcu_read_lock();
 	input = rcu_dereference(steam->input);
-	if (likely(input)) {
+	if (likely(input))
 		steam_do_input_event(steam, input, data);
-	} else {
-		dbg_hid("%s: input data without connect event\n",
steam_do_connect_event(steam, true);10561056- }10571024 rcu_read_unlock();10581025 break;10591026 case STEAM_EV_CONNECT:···1097107410981075 mutex_lock(&steam_devices_lock);10991076 list_for_each_entry(steam, &steam_devices, list) {11001100- steam_update_lizard_mode(steam);10771077+ mutex_lock(&steam->mutex);10781078+ if (!steam->client_opened)10791079+ steam_set_lizard_mode(steam, lizard_mode);10801080+ mutex_unlock(&steam->mutex);11011081 }11021082 mutex_unlock(&steam_devices_lock);11031083 return 0;
···
 #include <linux/atomic.h>
 #include <linux/compat.h>
+#include <linux/cred.h>
 #include <linux/device.h>
 #include <linux/fs.h>
 #include <linux/hid.h>
···
 		goto err_free;
 	}

-	len = min(sizeof(hid->name), sizeof(ev->u.create2.name));
-	strlcpy(hid->name, ev->u.create2.name, len);
-	len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys));
-	strlcpy(hid->phys, ev->u.create2.phys, len);
-	len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq));
-	strlcpy(hid->uniq, ev->u.create2.uniq, len);
+	/* @hid is zero-initialized, strncpy() is correct, strlcpy() not */
+	len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;
+	strncpy(hid->name, ev->u.create2.name, len);
+	len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;
+	strncpy(hid->phys, ev->u.create2.phys, len);
+	len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;
+	strncpy(hid->uniq, ev->u.create2.uniq, len);

 	hid->ll_driver = &uhid_hid_driver;
 	hid->bus = ev->u.create2.bus;
···

 	switch (uhid->input_buf.type) {
 	case UHID_CREATE:
+		/*
+		 * 'struct uhid_create_req' contains a __user pointer which is
+		 * copied from, so it's unsafe to allow this with elevated
+		 * privileges (e.g. from a setuid binary) or via kernel_write().
+		 */
+		if (file->f_cred != current_cred() || uaccess_kernel()) {
+			pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n",
+				    task_tgid_vnr(current), current->comm);
+			ret = -EACCES;
+			goto unlock;
+		}
 		ret = uhid_dev_create(uhid, &uhid->input_buf);
 		break;
 	case UHID_CREATE2:
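The uhid hunk swaps `strlcpy()` for `strncpy()` into a zeroed structure, and the comment in the diff states why. The rule can be checked in plain C; the buffer size and helper name below are illustrative, not from the driver:

```c
#include <assert.h>
#include <string.h>

/* Why strncpy() is correct in the uhid case: the destination is
 * zero-initialized (kzalloc'ed in the driver), so copying at most
 * sizeof(dst) - 1 bytes leaves the final byte 0 and the result is
 * always NUL-terminated, even when the source field is not. */
#define NAME_LEN 8	/* illustrative size */

static void copy_field(char dst[NAME_LEN], const char *src)
{
	memset(dst, 0, NAME_LEN);		/* models the kzalloc'ed @hid */
	strncpy(dst, src, NAME_LEN - 1);	/* last byte stays NUL */
}
```

By contrast, `strlcpy()` reads the source until its NUL to compute the return value, which is exactly what the driver must avoid for a source field that may not be terminated.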
+22-4
drivers/hv/hv_kvp.c
···

 	out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;

+	/* fallthrough */
+
+	case KVP_OP_GET_IP_INFO:
 	utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,
 			MAX_ADAPTER_ID_SIZE,
 			UTF16_LITTLE_ENDIAN,
···
 		process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);
 		break;
 	case KVP_OP_GET_IP_INFO:
-		/* We only need to pass on message->kvp_hdr.operation. */
+		/*
+		 * We only need to pass on the info of operation, adapter_id
+		 * and addr_family to the userland kvp daemon.
+		 */
+		process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);
 		break;
 	case KVP_OP_SET:
 		switch (in_msg->body.kvp_set.data.value_type) {
···

 		}

-		break;
-
-	case KVP_OP_GET:
+		/*
+		 * The key is always a string - utf16 encoding.
+		 */
 		message->body.kvp_set.data.key_size =
 			utf16s_to_utf8s(
 				(wchar_t *)in_msg->body.kvp_set.data.key,
 				in_msg->body.kvp_set.data.key_size,
 				UTF16_LITTLE_ENDIAN,
 				message->body.kvp_set.data.key,
+				HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
+
+		break;
+
+	case KVP_OP_GET:
+		message->body.kvp_get.data.key_size =
+			utf16s_to_utf8s(
+				(wchar_t *)in_msg->body.kvp_get.data.key,
+				in_msg->body.kvp_get.data.key_size,
+				UTF16_LITTLE_ENDIAN,
+				message->body.kvp_get.data.key,
 				HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
 		break;
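The first hv_kvp hunk inserts a deliberate case fallthrough so the SET and GET operations share the adapter-id copy. A minimal sketch of that control flow, with invented field names standing in for the real message layout:

```c
#include <assert.h>

/* Sketch of the fallthrough added in process_ib_ipinfo():
 * KVP_OP_SET_IP_INFO copies its extra fields, then falls through so
 * both operations run the shared tail (adapter_id / addr_family in
 * the driver). The struct below is a made-up stand-in. */
enum kvp_op { OP_SET_IP_INFO, OP_GET_IP_INFO };

struct kvp_msg { int dhcp_enabled; int adapter_id_copied; };

static void fill_msg(enum kvp_op op, struct kvp_msg *out)
{
	switch (op) {
	case OP_SET_IP_INFO:
		out->dhcp_enabled = 1;		/* SET-only field */
		/* fallthrough */
	case OP_GET_IP_INFO:
		out->adapter_id_copied = 1;	/* shared tail */
		break;
	}
}
```

The explicit `/* fallthrough */` comment is what keeps `-Wimplicit-fallthrough` quiet while documenting that the shared tail is intentional.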
···
 		pr_err("%s: Page request without PASID: %08llx %08llx\n",
 		       iommu->name, ((unsigned long long *)req)[0],
 		       ((unsigned long long *)req)[1]);
-		goto bad_req;
+		goto no_pasid;
 	}

 	if (!svm || svm->pasid != req->pasid) {
+3
drivers/iommu/ipmmu-vmsa.c
···

 static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
 {
+	if (!domain->mmu)
+		return;
+
 	/*
 	 * Disable the context. Flush the TLB as required when modifying the
 	 * context registers.
+38-11
drivers/media/cec/cec-adap.c
···807807 }808808809809 if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) {810810- dprintk(1, "%s: transmit queue full\n", __func__);810810+ dprintk(2, "%s: transmit queue full\n", __func__);811811 return -EBUSY;812812 }813813···11801180{11811181 struct cec_log_addrs *las = &adap->log_addrs;11821182 struct cec_msg msg = { };11831183+ const unsigned int max_retries = 2;11841184+ unsigned int i;11831185 int err;1184118611851187 if (cec_has_log_addr(adap, log_addr))···11901188 /* Send poll message */11911189 msg.len = 1;11921190 msg.msg[0] = (log_addr << 4) | log_addr;11931193- err = cec_transmit_msg_fh(adap, &msg, NULL, true);11911191+11921192+ for (i = 0; i < max_retries; i++) {11931193+ err = cec_transmit_msg_fh(adap, &msg, NULL, true);11941194+11951195+ /*11961196+ * While trying to poll the physical address was reset11971197+ * and the adapter was unconfigured, so bail out.11981198+ */11991199+ if (!adap->is_configuring)12001200+ return -EINTR;12011201+12021202+ if (err)12031203+ return err;12041204+12051205+ /*12061206+ * The message was aborted due to a disconnect or12071207+ * unconfigure, just bail out.12081208+ */12091209+ if (msg.tx_status & CEC_TX_STATUS_ABORTED)12101210+ return -EINTR;12111211+ if (msg.tx_status & CEC_TX_STATUS_OK)12121212+ return 0;12131213+ if (msg.tx_status & CEC_TX_STATUS_NACK)12141214+ break;12151215+ /*12161216+ * Retry up to max_retries times if the message was neither12171217+ * OKed or NACKed. This can happen due to e.g. 
a Lost12181218+ * Arbitration condition.12191219+ */12201220+ }1194122111951222 /*11961196- * While trying to poll the physical address was reset11971197- * and the adapter was unconfigured, so bail out.12231223+ * If we are unable to get an OK or a NACK after max_retries attempts12241224+ * (and note that each attempt already consists of four polls), then12251225+ * then we assume that something is really weird and that it is not a12261226+ * good idea to try and claim this logical address.11981227 */11991199- if (!adap->is_configuring)12001200- return -EINTR;12011201-12021202- if (err)12031203- return err;12041204-12051205- if (msg.tx_status & CEC_TX_STATUS_OK)12281228+ if (i == max_retries)12061229 return 0;1207123012081231 /*
-1
drivers/media/i2c/tc358743.c
···
 	ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint);
 	if (ret) {
 		dev_err(dev, "failed to parse endpoint\n");
-		ret = ret;
 		goto put_node;
 	}

+4-6
drivers/media/pci/intel/ipu3/ipu3-cio2.c
···
 static void cio2_pci_remove(struct pci_dev *pci_dev)
 {
 	struct cio2_device *cio2 = pci_get_drvdata(pci_dev);
-	unsigned int i;

-	cio2_notifier_exit(cio2);
-	cio2_fbpt_exit_dummy(cio2);
-	for (i = 0; i < CIO2_QUEUES; i++)
-		cio2_queue_exit(cio2, &cio2->queue[i]);
-	v4l2_device_unregister(&cio2->v4l2_dev);
 	media_device_unregister(&cio2->media_dev);
+	cio2_notifier_exit(cio2);
+	cio2_queues_exit(cio2);
+	cio2_fbpt_exit_dummy(cio2);
+	v4l2_device_unregister(&cio2->v4l2_dev);
 	media_device_cleanup(&cio2->media_dev);
 	mutex_destroy(&cio2->lock);
 }
···
 	int ret;

 	nand_np = dev->of_node;
-	nfc_np = of_find_compatible_node(dev->of_node, NULL,
-					 "atmel,sama5d3-nfc");
+	nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc");
 	if (!nfc_np) {
 		dev_err(dev, "Could not find device node for sama5d3-nfc\n");
 		return -ENODEV;
···
 	}

 	if (caps->legacy_of_bindings) {
+		struct device_node *nfc_node;
 		u32 ale_offs = 21;

 		/*
 		 * If we are parsing legacy DT props and the DT contains a
 		 * valid NFC node, forward the request to the sama5 logic.
 		 */
-		if (of_find_compatible_node(pdev->dev.of_node, NULL,
-					    "atmel,sama5d3-nfc"))
+		nfc_node = of_get_compatible_child(pdev->dev.of_node,
+						   "atmel,sama5d3-nfc");
+		if (nfc_node) {
 			caps = &atmel_sama5_nand_caps;
+			of_node_put(nfc_node);
+		}

 		/*
 		 * Even if the compatible says we are dealing with an
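Both atmel hunks replace `of_find_compatible_node()` with `of_get_compatible_child()` and add the missing `of_node_put()`. The underlying contract — a lookup hands back a node with a reference held that the caller must drop — can be modeled with a toy refcount (names below are invented, not the OF API):

```c
#include <assert.h>

/* Toy model of the node-refcount contract behind
 * of_get_compatible_child(): a successful lookup returns the node with
 * its refcount raised, and the caller owes exactly one put. Forgetting
 * the put (as the old of_find_compatible_node() call sites did) leaks
 * the reference and pins the node forever. */
struct toy_node { int refcount; };

static struct toy_node *toy_get_child(struct toy_node *n)
{
	n->refcount++;		/* reference handed to the caller */
	return n;
}

static void toy_node_put(struct toy_node *n)
{
	n->refcount--;
}
```

The other advantage of `of_get_compatible_child()` in the fix is that it searches only direct children, whereas `of_find_compatible_node()` walks the whole tree from the given node.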
···
 		ndelay(cqspi->wr_delay);

 	while (remaining > 0) {
+		size_t write_words, mod_bytes;
+
 		write_bytes = remaining > page_size ? page_size : remaining;
-		iowrite32_rep(cqspi->ahb_base, txbuf,
-			      DIV_ROUND_UP(write_bytes, 4));
+		write_words = write_bytes / 4;
+		mod_bytes = write_bytes % 4;
+		/* Write 4 bytes at a time then single bytes. */
+		if (write_words) {
+			iowrite32_rep(cqspi->ahb_base, txbuf, write_words);
+			txbuf += (write_words * 4);
+		}
+		if (mod_bytes) {
+			unsigned int temp = 0xFFFFFFFF;
+
+			memcpy(&temp, txbuf, mod_bytes);
+			iowrite32(temp, cqspi->ahb_base);
+			txbuf += mod_bytes;
+		}

 		if (!wait_for_completion_timeout(&cqspi->transfer_complete,
 						 msecs_to_jiffies(CQSPI_TIMEOUT_MS))) {
···
 			goto failwr;
 		}

-		txbuf += write_bytes;
 		remaining -= write_bytes;

 		if (remaining > 0)
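The cadence-quadspi fix above splits each chunk into whole 32-bit words plus a 1-3 byte tail that is merged into a word padded with 0xFF (the erased state of NOR flash, so the padding is harmless). A userspace model of that split, with an ordinary array standing in for the memory-mapped AHB window:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Userspace model of the write split in the cadence-quadspi fix: push
 * whole 32-bit words first, then merge the trailing bytes into a word
 * padded with 0xFF for one final word-sized write. dest[] stands in
 * for the iowrite32() target; returns the number of words written. */
static size_t write_split(uint32_t *dest, const uint8_t *src, size_t len)
{
	size_t write_words = len / 4;
	size_t mod_bytes = len % 4;
	size_t w;

	for (w = 0; w < write_words; w++)
		memcpy(&dest[w], src + w * 4, 4);	/* ~ iowrite32_rep() */

	if (mod_bytes) {
		uint32_t temp = 0xFFFFFFFFu;		/* 0xFF = erased flash */

		memcpy(&temp, src + write_words * 4, mod_bytes);
		dest[write_words] = temp;		/* ~ iowrite32() */
		return write_words + 1;
	}
	return write_words;
}
```

The original `DIV_ROUND_UP(write_bytes, 4)` rounded up and read past the end of the source buffer for unaligned lengths, which is exactly what the padding word avoids.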
+98-32
drivers/mtd/spi-nor/spi-nor.c
···21562156 * @nor: pointer to a 'struct spi_nor'21572157 * @addr: offset in the serial flash memory21582158 * @len: number of bytes to read21592159- * @buf: buffer where the data is copied into21592159+ * @buf: buffer where the data is copied into (dma-safe memory)21602160 *21612161 * Return: 0 on success, -errno otherwise.21622162 */···25222522}2523252325242524/**25252525+ * spi_nor_sort_erase_mask() - sort erase mask25262526+ * @map: the erase map of the SPI NOR25272527+ * @erase_mask: the erase type mask to be sorted25282528+ *25292529+ * Replicate the sort done for the map's erase types in BFPT: sort the erase25302530+ * mask in ascending order with the smallest erase type size starting from25312531+ * BIT(0) in the sorted erase mask.25322532+ *25332533+ * Return: sorted erase mask.25342534+ */25352535+static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask)25362536+{25372537+ struct spi_nor_erase_type *erase_type = map->erase_type;25382538+ int i;25392539+ u8 sorted_erase_mask = 0;25402540+25412541+ if (!erase_mask)25422542+ return 0;25432543+25442544+ /* Replicate the sort done for the map's erase types. */25452545+ for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)25462546+ if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx))25472547+ sorted_erase_mask |= BIT(i);25482548+25492549+ return sorted_erase_mask;25502550+}25512551+25522552+/**25252553 * spi_nor_regions_sort_erase_types() - sort erase types in each region25262554 * @map: the erase map of the SPI NOR25272555 *···25642536static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)25652537{25662538 struct spi_nor_erase_region *region = map->regions;25672567- struct spi_nor_erase_type *erase_type = map->erase_type;25682568- int i;25692539 u8 region_erase_mask, sorted_erase_mask;2570254025712541 while (region) {25722542 region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK;2573254325742574- /* Replicate the sort done for the map's erase types. 
*/25752575- sorted_erase_mask = 0;25762576- for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)25772577- if (erase_type[i].size &&25782578- region_erase_mask & BIT(erase_type[i].idx))25792579- sorted_erase_mask |= BIT(i);25442544+ sorted_erase_mask = spi_nor_sort_erase_mask(map,25452545+ region_erase_mask);2580254625812547 /* Overwrite erase mask. */25822548 region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) |···28772855 * spi_nor_get_map_in_use() - get the configuration map in use28782856 * @nor: pointer to a 'struct spi_nor'28792857 * @smpt: pointer to the sector map parameter table28582858+ * @smpt_len: sector map parameter table length28592859+ *28602860+ * Return: pointer to the map in use, ERR_PTR(-errno) otherwise.28802861 */28812881-static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt)28622862+static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt,28632863+ u8 smpt_len)28822864{28832883- const u32 *ret = NULL;28842884- u32 i, addr;28652865+ const u32 *ret;28662866+ u8 *buf;28672867+ u32 addr;28852868 int err;28692869+ u8 i;28862870 u8 addr_width, read_opcode, read_dummy;28872887- u8 read_data_mask, data_byte, map_id;28712871+ u8 read_data_mask, map_id;28722872+28732873+ /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. 
*/28742874+ buf = kmalloc(sizeof(*buf), GFP_KERNEL);28752875+ if (!buf)28762876+ return ERR_PTR(-ENOMEM);2888287728892878 addr_width = nor->addr_width;28902879 read_dummy = nor->read_dummy;28912880 read_opcode = nor->read_opcode;2892288128932882 map_id = 0;28942894- i = 0;28952883 /* Determine if there are any optional Detection Command Descriptors */28962896- while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) {28842884+ for (i = 0; i < smpt_len; i += 2) {28852885+ if (smpt[i] & SMPT_DESC_TYPE_MAP)28862886+ break;28872887+28972888 read_data_mask = SMPT_CMD_READ_DATA(smpt[i]);28982889 nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]);28992890 nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]);29002891 nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]);29012892 addr = smpt[i + 1];2902289329032903- err = spi_nor_read_raw(nor, addr, 1, &data_byte);29042904- if (err)28942894+ err = spi_nor_read_raw(nor, addr, 1, buf);28952895+ if (err) {28962896+ ret = ERR_PTR(err);29052897 goto out;28982898+ }2906289929072900 /*29082901 * Build an index value that is used to select the Sector Map29092902 * Configuration that is currently in use.29102903 */29112911- map_id = map_id << 1 | !!(data_byte & read_data_mask);29122912- i = i + 2;29042904+ map_id = map_id << 1 | !!(*buf & read_data_mask);29132905 }2914290629152915- /* Find the matching configuration map */29162916- while (SMPT_MAP_ID(smpt[i]) != map_id) {29072907+ /*29082908+ * If command descriptors are provided, they always precede map29092909+ * descriptors in the table. 
There is no need to start the iteration29102910+ * over smpt array all over again.29112911+ *29122912+ * Find the matching configuration map.29132913+ */29142914+ ret = ERR_PTR(-EINVAL);29152915+ while (i < smpt_len) {29162916+ if (SMPT_MAP_ID(smpt[i]) == map_id) {29172917+ ret = smpt + i;29182918+ break;29192919+ }29202920+29212921+ /*29222922+ * If there are no more configuration map descriptors and no29232923+ * configuration ID matched the configuration identifier, the29242924+ * sector address map is unknown.29252925+ */29172926 if (smpt[i] & SMPT_DESC_END)29182918- goto out;29272927+ break;29282928+29192929 /* increment the table index to the next map */29202930 i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1;29212931 }2922293229232923- ret = smpt + i;29242933 /* fall through */29252934out:29352935+ kfree(buf);29262936 nor->addr_width = addr_width;29272937 nor->read_dummy = read_dummy;29282938 nor->read_opcode = read_opcode;···30002946 u64 offset;30012947 u32 region_count;30022948 int i, j;30033003- u8 erase_type;29492949+ u8 erase_type, uniform_erase_type;3004295030052951 region_count = SMPT_MAP_REGION_COUNT(*smpt);30062952 /*···30132959 return -ENOMEM;30142960 map->regions = region;3015296130163016- map->uniform_erase_type = 0xff;29622962+ uniform_erase_type = 0xff;30172963 offset = 0;30182964 /* Populate regions. 
*/30192965 for (i = 0; i < region_count; i++) {···30282974 * Save the erase types that are supported in all regions and30292975 * can erase the entire flash memory.30302976 */30313031- map->uniform_erase_type &= erase_type;29772977+ uniform_erase_type &= erase_type;3032297830332979 offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) +30342980 region[i].size;30352981 }29822982+29832983+ map->uniform_erase_type = spi_nor_sort_erase_mask(map,29842984+ uniform_erase_type);3036298530372986 spi_nor_region_mark_end(®ion[i - 1]);30382987···30773020 for (i = 0; i < smpt_header->length; i++)30783021 smpt[i] = le32_to_cpu(smpt[i]);3079302230803080- sector_map = spi_nor_get_map_in_use(nor, smpt);30813081- if (!sector_map) {30823082- ret = -EINVAL;30233023+ sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length);30243024+ if (IS_ERR(sector_map)) {30253025+ ret = PTR_ERR(sector_map);30833026 goto out;30843027 }30853028···31823125 if (err)31833126 goto exit;3184312731853185- /* Parse other parameter headers. */31283128+ /* Parse optional parameter tables. */31863129 for (i = 0; i < header.nph; i++) {31873130 param_header = ¶m_headers[i];31883131···31953138 break;31963139 }3197314031983198- if (err)31993199- goto exit;31413141+ if (err) {31423142+ dev_warn(dev, "Failed to parse optional parameter table: %04x\n",31433143+ SFDP_PARAM_HEADER_ID(param_header));31443144+ /*31453145+ * Let's not drop all information we extracted so far31463146+ * if optional table parsers fail. In case of failing,31473147+ * each optional parser is responsible to roll back to31483148+ * the previously known spi_nor data.31493149+ */31503150+ err = 0;31513151+ }32003152 }3201315332023154exit:
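In `spi_nor_get_map_in_use()` above, each optional detection command contributes one bit to the configuration ID via `map_id = map_id << 1 | !!(*buf & read_data_mask)`. That ID-building step in isolation (the inputs are illustrative, not from a real SMPT):

```c
#include <assert.h>
#include <stdint.h>

/* The ID-building step from spi_nor_get_map_in_use() in isolation:
 * each detection command reads one byte and contributes one bit,
 * shifted in MSB-first, so n commands select among 2^n sector-map
 * configurations. probe[] models the bytes returned by
 * spi_nor_read_raw(), masks[] the SMPT_CMD_READ_DATA() masks. */
static uint8_t build_map_id(const uint8_t *probe, const uint8_t *masks, int n)
{
	uint8_t map_id = 0;
	int i;

	for (i = 0; i < n; i++)
		map_id = (uint8_t)(map_id << 1 | !!(probe[i] & masks[i]));

	return map_id;
}
```

The resulting ID is then matched against `SMPT_MAP_ID()` of the map descriptors that follow the command descriptors in the table.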
+35-13
drivers/net/can/dev.c
···477477}478478EXPORT_SYMBOL_GPL(can_put_echo_skb);479479480480+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)481481+{482482+ struct can_priv *priv = netdev_priv(dev);483483+ struct sk_buff *skb = priv->echo_skb[idx];484484+ struct canfd_frame *cf;485485+486486+ if (idx >= priv->echo_skb_max) {487487+ netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",488488+ __func__, idx, priv->echo_skb_max);489489+ return NULL;490490+ }491491+492492+ if (!skb) {493493+ netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n",494494+ __func__, idx);495495+ return NULL;496496+ }497497+498498+ /* Using "struct canfd_frame::len" for the frame499499+ * length is supported on both CAN and CANFD frames.500500+ */501501+ cf = (struct canfd_frame *)skb->data;502502+ *len_ptr = cf->len;503503+ priv->echo_skb[idx] = NULL;504504+505505+ return skb;506506+}507507+480508/*481509 * Get the skb from the stack and loop it back locally482510 *···514486 */515487unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)516488{517517- struct can_priv *priv = netdev_priv(dev);489489+ struct sk_buff *skb;490490+ u8 len;518491519519- BUG_ON(idx >= priv->echo_skb_max);492492+ skb = __can_get_echo_skb(dev, idx, &len);493493+ if (!skb)494494+ return 0;520495521521- if (priv->echo_skb[idx]) {522522- struct sk_buff *skb = priv->echo_skb[idx];523523- struct can_frame *cf = (struct can_frame *)skb->data;524524- u8 dlc = cf->can_dlc;496496+ netif_rx(skb);525497526526- netif_rx(priv->echo_skb[idx]);527527- priv->echo_skb[idx] = NULL;528528-529529- return dlc;530530- }531531-532532- return 0;498498+ return len;533499}534500EXPORT_SYMBOL_GPL(can_get_echo_skb);535501
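The new `__can_get_echo_skb()` above replaces a `BUG_ON` with validation that reports driver bugs and returns NULL instead of crashing the machine. A userspace sketch of that accessor contract, with a plain pointer array standing in for `can_priv::echo_skb`:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the contract __can_get_echo_skb() introduces:
 * validate the index before touching the array and return NULL for
 * out-of-range or already-consumed slots instead of crashing (the old
 * code used BUG_ON). The slot is cleared so each entry is handed out
 * exactly once. */
#define ECHO_SKB_MAX 4

static void *echo_slot[ECHO_SKB_MAX];

static void *get_echo_slot(unsigned int idx)
{
	void *entry;

	if (idx >= ECHO_SKB_MAX)
		return NULL;		/* driver bug: index out of bounds */

	entry = echo_slot[idx];
	if (!entry)
		return NULL;		/* driver bug: slot already empty */

	echo_slot[idx] = NULL;		/* consume the slot */
	return entry;
}
```

Callers such as `can_get_echo_skb()` then only need a NULL check, which also makes the helper reusable by drivers that want the skb rather than the loopback behaviour.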
···
 #include <linux/slab.h>
 #include <linux/usb.h>

-#include <linux/can.h>
-#include <linux/can/dev.h>
-#include <linux/can/error.h>
-
 #define UCAN_DRIVER_NAME "ucan"
 #define UCAN_MAX_RX_URBS 8
 /* the CAN controller needs a while to enable/disable the bus */
···
 /* disconnect the device */
 static void ucan_disconnect(struct usb_interface *intf)
 {
-	struct usb_device *udev;
 	struct ucan_priv *up = usb_get_intfdata(intf);
-
-	udev = interface_to_usbdev(intf);

 	usb_set_intfdata(intf, NULL);

+11-12
drivers/net/ethernet/amazon/ena/ena_netdev.c
···18481848 rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);18491849 if (rc)18501850 dev_err(&adapter->pdev->dev, "Device reset failed\n");18511851+ /* stop submitting admin commands on a device that was reset */18521852+ ena_com_set_admin_running_state(adapter->ena_dev, false);18511853 }1852185418531855 ena_destroy_all_io_queues(adapter);···19151913 struct ena_adapter *adapter = netdev_priv(netdev);1916191419171915 netif_dbg(adapter, ifdown, netdev, "%s\n", __func__);19161916+19171917+ if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))19181918+ return 0;1918191919191920 if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))19201921 ena_down(adapter);···26182613 ena_down(adapter);2619261426202615 /* Stop the device from sending AENQ events (in case reset flag is set26212621- * and device is up, ena_close already reset the device26222622- * In case the reset flag is set and the device is up, ena_down()26232623- * already perform the reset, so it can be skipped.26162616+ * and device is up, ena_down() already reset the device.26242617 */26252618 if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up))26262619 ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);···26972694 ena_com_abort_admin_commands(ena_dev);26982695 ena_com_wait_for_abort_completion(ena_dev);26992696 ena_com_admin_destroy(ena_dev);27002700- ena_com_mmio_reg_read_request_destroy(ena_dev);27012697 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);26982698+ ena_com_mmio_reg_read_request_destroy(ena_dev);27022699err:27032700 clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);27042701 clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);···34553452 ena_com_rss_destroy(ena_dev);34563453err_free_msix:34573454 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR);34553455+ /* stop submitting admin commands on a device that was reset */34563456+ ena_com_set_admin_running_state(ena_dev, false);34583457 ena_free_mgmnt_irq(adapter);34593458 
ena_disable_msix(adapter);34603459err_worker_destroy:···3503349835043499 cancel_work_sync(&adapter->reset_task);3505350035063506- unregister_netdev(netdev);35073507-35083508- /* If the device is running then we want to make sure the device will be35093509- * reset to make sure no more events will be issued by the device.35103510- */35113511- if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))35123512- set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);35133513-35143501 rtnl_lock();35153502 ena_destroy_device(adapter, true);35163503 rtnl_unlock();35043504+35053505+ unregister_netdev(netdev);3517350635183507 free_netdev(netdev);35193508
···
 	if (!is_t4(adapter->params.chip))
 		cxgb4_ptp_init(adapter);

-	if (IS_ENABLED(CONFIG_THERMAL) &&
+	if (IS_REACHABLE(CONFIG_THERMAL) &&
 	    !is_t4(adapter->params.chip) && (adapter->flags & FW_OK))
 		cxgb4_thermal_init(adapter);

···

 	if (!is_t4(adapter->params.chip))
 		cxgb4_ptp_stop(adapter);
-	if (IS_ENABLED(CONFIG_THERMAL))
+	if (IS_REACHABLE(CONFIG_THERMAL))
 		cxgb4_thermal_remove(adapter);

 	/* If we allocated filters, free up state associated with any
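The cxgb4 change swaps `IS_ENABLED()` for `IS_REACHABLE()`: with `CONFIG_THERMAL=m` and cxgb4 built in, the thermal code is enabled but its symbols cannot be linked from built-in code. A hand-rolled truth-table model of the distinction (the macros below are illustrative stand-ins for the kconfig state, not the kernel's actual macro machinery):

```c
#include <assert.h>

/* Model of the failing configuration behind the cxgb4 fix:
 * CONFIG_THERMAL=m, cxgb4 built into the kernel image (=y).
 * IS_ENABLED() answers "is the feature compiled at all?", while
 * IS_REACHABLE() answers "can *this* caller link against it?". */
#define THERMAL_BUILTIN 0	/* CONFIG_THERMAL=y ? */
#define THERMAL_MODULE  1	/* CONFIG_THERMAL=m ? */
#define CALLER_MODULAR  0	/* is the calling code itself =m ? */

#define MODEL_IS_ENABLED   (THERMAL_BUILTIN || THERMAL_MODULE)
#define MODEL_IS_REACHABLE (THERMAL_BUILTIN || (THERMAL_MODULE && CALLER_MODULAR))
```

With the old `IS_ENABLED()` guard this configuration compiled the thermal calls into built-in cxgb4 and failed at link time; `IS_REACHABLE()` compiles them out instead.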
···
 struct resource_allocator {
 	spinlock_t alloc_lock; /* protect quotas */
 	union {
-		int res_reserved;
-		int res_port_rsvd[MLX4_MAX_PORTS];
+		unsigned int res_reserved;
+		unsigned int res_port_rsvd[MLX4_MAX_PORTS];
 	};
 	union {
 		int res_free;
···185185 qed_iscsi_free(p_hwfn);186186 qed_ooo_free(p_hwfn);187187 }188188+189189+ if (QED_IS_RDMA_PERSONALITY(p_hwfn))190190+ qed_rdma_info_free(p_hwfn);191191+188192 qed_iov_free(p_hwfn);189193 qed_l2_free(p_hwfn);190194 qed_dmae_info_free(p_hwfn);···485481 struct qed_qm_info *qm_info = &p_hwfn->qm_info;486482487483 /* Can't have multiple flags set here */488488- if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1)484484+ if (bitmap_weight((unsigned long *)&pq_flags,485485+ sizeof(pq_flags) * BITS_PER_BYTE) > 1) {486486+ DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags);489487 goto err;488488+ }489489+490490+ if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) {491491+ DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags);492492+ goto err;493493+ }490494491495 switch (pq_flags) {492496 case PQ_FLAGS_RLS:···518506 }519507520508err:521521- DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);522522- return NULL;509509+ return &qm_info->start_pq;523510}524511525512/* save pq index in qm info */···542531{543532 u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn);544533534534+ if (max_tc == 0) {535535+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",536536+ PQ_FLAGS_MCOS);537537+ return p_hwfn->qm_info.start_pq;538538+ }539539+545540 if (tc > max_tc)546541 DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);547542548548- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;543543+ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc);549544}550545551546u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf)552547{553548 u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn);554549550550+ if (max_vf == 0) {551551+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",552552+ PQ_FLAGS_VFS);553553+ return p_hwfn->qm_info.start_pq;554554+ }555555+555556 if (vf > max_vf)556557 DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);557558558558- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;559559+ return qed_get_cm_pq_idx(p_hwfn, 
PQ_FLAGS_VFS) + (vf % max_vf);559560}560561561562u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc)···11001077 if (rc)11011078 goto alloc_err;11021079 rc = qed_ooo_alloc(p_hwfn);10801080+ if (rc)10811081+ goto alloc_err;10821082+ }10831083+10841084+ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {10851085+ rc = qed_rdma_info_alloc(p_hwfn);11031086 if (rc)11041087 goto alloc_err;11051088 }···21312102 if (!p_ptt)21322103 return -EAGAIN;2133210421342134- /* If roce info is allocated it means roce is initialized and should21352135- * be enabled in searcher.21362136- */21372105 if (p_hwfn->p_rdma_info &&21382138- p_hwfn->b_rdma_enabled_in_prs)21062106+ p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs)21392107 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);2140210821412109 /* Re-open incoming traffic */
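The `bitmap_weight()` fix in qed_dev.c above hinges on that function taking a length in bits: passing `sizeof(pq_flags)` (4, i.e. bytes) inspected only bits 0-3 of the word. A small model that mimics the length-limited count and reproduces the off-by-8x:

```c
#include <assert.h>
#include <limits.h>

/* Model of the qed_dev.c fix: bitmap_weight() takes a length in bits,
 * so passing sizeof(pq_flags) only inspected the lowest 4 bits and
 * missed set bits elsewhere in the word. count_bits() counts the set
 * bits among the first nbits only, mimicking that behaviour. */
static int count_bits(unsigned long val, unsigned int nbits)
{
	int weight = 0;
	unsigned int i;

	for (i = 0; i < nbits && i < sizeof(val) * CHAR_BIT; i++)
		weight += (int)((val >> i) & 1UL);

	return weight;
}
```

The consequence in the driver was that the "multiple flags set" sanity check never fired for flags above bit 3, which is why the fix multiplies by `BITS_PER_BYTE`.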
+2
drivers/net/ethernet/qlogic/qed/qed_int.c
···
 	 */
 	do {
 		index = p_sb_attn->sb_index;
+		/* finish reading index before the loop condition */
+		dma_rmb();
 		attn_bits = le32_to_cpu(p_sb_attn->atten_bits);
 		attn_acks = le32_to_cpu(p_sb_attn->atten_ack);
 	} while (index != p_sb_attn->sb_index);
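The qed_int hunk adds a `dma_rmb()` inside a seqlock-style snapshot loop: read the index, read the payload, and retry if the index moved in between; the barrier keeps the index read ordered before the payload reads. A single-threaded userspace model of the loop shape, with an acquire fence playing the role of `dma_rmb()` (types and names are stand-ins):

```c
#include <assert.h>
#include <stdatomic.h>

/* Single-threaded model of the snapshot loop in qed_int: read the
 * index, read the payload, retry if the index changed meanwhile. The
 * dma_rmb() added by the fix prevents the CPU from reading the payload
 * before the index; the acquire fence stands in for it here. */
struct status_block {
	_Atomic unsigned int sb_index;
	unsigned int atten_bits;
};

static unsigned int read_attn_bits(struct status_block *sb)
{
	unsigned int index, bits;

	do {
		index = atomic_load(&sb->sb_index);
		atomic_thread_fence(memory_order_acquire);	/* ~ dma_rmb() */
		bits = sb->atten_bits;
	} while (index != atomic_load(&sb->sb_index));

	return bits;
}
```

Without the barrier, the payload reads may be hoisted above the first index read, so the `while` comparison can pass even though the snapshot mixes data from two device updates.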
···
  * GPL LICENSE SUMMARY
  *
  * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
···
  * BSD LICENSE
  *
  * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 #define ACPI_WRDS_WIFI_DATA_SIZE	(ACPI_SAR_TABLE_SIZE + 2)
 #define ACPI_EWRD_WIFI_DATA_SIZE	((ACPI_SAR_PROFILE_NUM - 1) * \
 					 ACPI_SAR_TABLE_SIZE + 3)
-#define ACPI_WGDS_WIFI_DATA_SIZE	18
+#define ACPI_WGDS_WIFI_DATA_SIZE	19
 #define ACPI_WRDD_WIFI_DATA_SIZE	2
 #define ACPI_SPLC_WIFI_DATA_SIZE	2

···893893 IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n");894894895895 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS *896896- ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE);896896+ ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE);897897898898 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES);899899···928928 return -ENOENT;929929}930930931931+static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)932932+{933933+ return -ENOENT;934934+}935935+931936static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)932937{933938 return 0;···959954 IWL_DEBUG_RADIO(mvm,960955 "WRDS SAR BIOS table invalid or unavailable. (%d)\n",961956 ret);962962- /* if not available, don't fail and don't bother with EWRD */963963- return 0;957957+ /*958958+ * If not available, don't fail and don't bother with EWRD.959959+ * Return 1 to tell that we can't use WGDS either.960960+ */961961+ return 1;964962 }965963966964 ret = iwl_mvm_sar_get_ewrd_table(mvm);···976968 /* choose profile 1 (WRDS) as default for both chains */977969 ret = iwl_mvm_sar_select_profile(mvm, 1, 1);978970979979- /* if we don't have profile 0 from BIOS, just skip it */971971+ /*972972+ * If we don't have profile 0 from BIOS, just skip it. This973973+ * means that SAR Geo will not be enabled either, even if we974974+ * have other valid profiles.975975+ */980976 if (ret == -ENOENT)981981- return 0;977977+ return 1;982978983979 return ret;984980}···11801168 iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN);1181116911821170 ret = iwl_mvm_sar_init(mvm);11831183- if (ret)11841184- goto error;11711171+ if (ret == 0) {11721172+ ret = iwl_mvm_sar_geo_init(mvm);11731173+ } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) {11741174+ /*11751175+ * If basic SAR is not available, we check for WGDS,11761176+ * which should *not* be available either. 
If it is11771177+ * available, issue an error, because we can't use SAR11781178+ * Geo without basic SAR.11791179+ */11801180+ IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n");11811181+ }1185118211861186- ret = iwl_mvm_sar_geo_init(mvm);11871187- if (ret)11831183+ if (ret < 0)11881184 goto error;1189118511901186 iwl_mvm_leds_sync(mvm);
+6-6
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···301301 goto out;302302 }303303304304- if (changed)305305- *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE);304304+ if (changed) {305305+ u32 status = le32_to_cpu(resp->status);306306+307307+ *changed = (status == MCC_RESP_NEW_CHAN_PROFILE ||308308+ status == MCC_RESP_ILLEGAL);309309+ }306310307311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,308312 __le32_to_cpu(resp->n_channels),···44474443 sinfo->signal_avg = mvmsta->avg_energy;44484444 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);44494445 }44504450-44514451- if (!fw_has_capa(&mvm->fw->ucode_capa,44524452- IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))44534453- return;4454444644554447 /* if beacon filtering isn't on mac80211 does it anyway */44564448 if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
···11config MT76_CORE22 tristate3344+config MT76_LEDS55+ bool66+ depends on MT76_CORE77+ depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS88+ default y99+410config MT76_USB511 tristate612 depends on MT76_CORE
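The new MT76_LEDS symbol uses the classic Kconfig idiom `depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS` to forbid the one broken combination: a built-in mt76 core calling into a modular LED class. As a rough user-space sketch (not kernel code; the tristate encoding is an assumption for illustration), the constraint can be modeled like this:

```c
#include <stdbool.h>

/* Tristate values as used by Kconfig: n, m, y. */
enum tristate { N = 0, M = 1, Y = 2 };

/*
 * Model of "depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS":
 * LED support is permitted if LEDS_CLASS is built-in, or if the mt76
 * core and LEDS_CLASS have the same tristate value. This rules out
 * MT76_CORE=y with LEDS_CLASS=m, where built-in code would reference
 * symbols that only exist in a module.
 */
static bool mt76_leds_allowed(enum tristate core, enum tristate leds)
{
    return leds == Y || core == leds;
}
```

The `default y` then simply turns the LED code on whenever the dependency can be met at all.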
+5-3
drivers/net/wireless/mediatek/mt76/mac80211.c
···345345 mt76_check_sband(dev, NL80211_BAND_2GHZ);346346 mt76_check_sband(dev, NL80211_BAND_5GHZ);347347348348- ret = mt76_led_init(dev);349349- if (ret)350350- return ret;348348+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {349349+ ret = mt76_led_init(dev);350350+ if (ret)351351+ return ret;352352+ }351353352354 return ieee80211_register_hw(hw);353355}
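The call to mt76_led_init() is now guarded by `IS_ENABLED(CONFIG_MT76_LEDS)`, which expands to a compile-time 0 or 1, so the compiler can discard the branch entirely while the code inside it is still type-checked. A simplified version of the preprocessor trick behind IS_ENABLED (modeled on include/linux/kconfig.h, without the `=m` handling) looks like:

```c
/* Expands to 1 if 'option' is #defined to 1, otherwise to 0. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_MT76_LEDS 1
/* CONFIG_NO_SUCH_OPTION is deliberately left undefined. */

static int demo(void)
{
    int ret = 0;

    if (IS_ENABLED(CONFIG_MT76_LEDS))       /* constant 1: branch kept */
        ret += 1;
    if (IS_ENABLED(CONFIG_NO_SUCH_OPTION))  /* constant 0: dead code */
        ret += 100;
    return ret;
}
```

Unlike `#ifdef`, this keeps the guarded code visible to the compiler in every configuration, so it cannot silently bit-rot when the option is off.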
···272272 if (val != ~0 && val > 0xffff)273273 return -EINVAL;274274275275- mutex_lock(&dev->mutex);275275+ mutex_lock(&dev->mt76.mutex);276276 mt76x2_mac_set_tx_protection(dev, val);277277- mutex_unlock(&dev->mutex);277277+ mutex_unlock(&dev->mt76.mutex);278278279279 return 0;280280}
+11-6
drivers/net/wireless/ti/wlcore/sdio.c
···285285 struct resource res[2];286286 mmc_pm_flag_t mmcflags;287287 int ret = -ENOMEM;288288- int irq, wakeirq;288288+ int irq, wakeirq, num_irqs;289289 const char *chip_family;290290291291 /* We are only able to handle the wlan function */···353353 irqd_get_trigger_type(irq_get_irq_data(irq));354354 res[0].name = "irq";355355356356- res[1].start = wakeirq;357357- res[1].flags = IORESOURCE_IRQ |358358- irqd_get_trigger_type(irq_get_irq_data(wakeirq));359359- res[1].name = "wakeirq";360356361361- ret = platform_device_add_resources(glue->core, res, ARRAY_SIZE(res));357357+ if (wakeirq > 0) {358358+ res[1].start = wakeirq;359359+ res[1].flags = IORESOURCE_IRQ |360360+ irqd_get_trigger_type(irq_get_irq_data(wakeirq));361361+ res[1].name = "wakeirq";362362+ num_irqs = 2;363363+ } else {364364+ num_irqs = 1;365365+ }366366+ ret = platform_device_add_resources(glue->core, res, num_irqs);362367 if (ret) {363368 dev_err(glue->dev, "can't add resources\n");364369 goto out_dev_put;
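The wlcore change makes the wake IRQ resource optional: only the main interrupt is mandatory, and the count passed to platform_device_add_resources() drops to 1 when no valid wakeirq was found. A minimal user-space sketch of that selection logic (the struct and function names here are stand-ins, not the driver's real types):

```c
#include <stddef.h>

struct fake_resource {          /* stand-in for struct resource */
    int start;
    const char *name;
};

/*
 * Fill up to two IRQ resources: the main interrupt always, the wake
 * interrupt only when the platform actually provided one (> 0).
 * Returns the number of valid entries in res[].
 */
static size_t fill_irq_resources(struct fake_resource res[2],
                                 int irq, int wakeirq)
{
    size_t num_irqs = 1;

    res[0].start = irq;
    res[0].name = "irq";

    if (wakeirq > 0) {
        res[1].start = wakeirq;
        res[1].name = "wakeirq";
        num_irqs = 2;
    }
    return num_irqs;
}
```

Before the fix, a negative error value from the wakeirq lookup was handed to the resource array unconditionally, which is exactly what the `wakeirq > 0` test now prevents.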
+64-11
drivers/nvme/host/fc.c
···152152153153 bool ioq_live;154154 bool assoc_active;155155+ atomic_t err_work_active;155156 u64 association_id;156157157158 struct list_head ctrl_list; /* rport->ctrl_list */···161160 struct blk_mq_tag_set tag_set;162161163162 struct delayed_work connect_work;163163+ struct work_struct err_work;164164165165 struct kref ref;166166 u32 flags;···15331531 struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops;15341532 int i;1535153315341534+ /* ensure we've initialized the ops once */15351535+ if (!(aen_op->flags & FCOP_FLAGS_AEN))15361536+ return;15371537+15361538 for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++)15371539 __nvme_fc_abort_op(ctrl, aen_op);15381540}···20552049static void20562050nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)20572051{20582058- /* only proceed if in LIVE state - e.g. on first error */20522052+ int active;20532053+20542054+ /*20552055+ * if an error (io timeout, etc) while (re)connecting,20562056+ * it's an error on creating the new association.20572057+ * Start the error recovery thread if it hasn't already20582058+ * been started. It is expected there could be multiple20592059+ * ios hitting this path before things are cleaned up.20602060+ */20612061+ if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {20622062+ active = atomic_xchg(&ctrl->err_work_active, 1);20632063+ if (!active && !schedule_work(&ctrl->err_work)) {20642064+ atomic_set(&ctrl->err_work_active, 0);20652065+ WARN_ON(1);20662066+ }20672067+ return;20682068+ }20692069+20702070+ /* Otherwise, only proceed if in LIVE state - e.g. on first error */20592071 if (ctrl->ctrl.state != NVME_CTRL_LIVE)20602072 return;20612073···28382814{28392815 struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);2840281628172817+ cancel_work_sync(&ctrl->err_work);28412818 cancel_delayed_work_sync(&ctrl->connect_work);28422819 /*28432820 * kill the association on the link side. 
this will block···28912866}2892286728932868static void28692869+__nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl)28702870+{28712871+ nvme_stop_keep_alive(&ctrl->ctrl);28722872+28732873+ /* will block will waiting for io to terminate */28742874+ nvme_fc_delete_association(ctrl);28752875+28762876+ if (ctrl->ctrl.state != NVME_CTRL_CONNECTING &&28772877+ !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))28782878+ dev_err(ctrl->ctrl.device,28792879+ "NVME-FC{%d}: error_recovery: Couldn't change state "28802880+ "to CONNECTING\n", ctrl->cnum);28812881+}28822882+28832883+static void28942884nvme_fc_reset_ctrl_work(struct work_struct *work)28952885{28962886 struct nvme_fc_ctrl *ctrl =28972887 container_of(work, struct nvme_fc_ctrl, ctrl.reset_work);28982888 int ret;2899288928902890+ __nvme_fc_terminate_io(ctrl);28912891+29002892 nvme_stop_ctrl(&ctrl->ctrl);29012901-29022902- /* will block will waiting for io to terminate */29032903- nvme_fc_delete_association(ctrl);29042904-29052905- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {29062906- dev_err(ctrl->ctrl.device,29072907- "NVME-FC{%d}: error_recovery: Couldn't change state "29082908- "to CONNECTING\n", ctrl->cnum);29092909- return;29102910- }2911289329122894 if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE)29132895 ret = nvme_fc_create_association(ctrl);···29272895 dev_info(ctrl->ctrl.device,29282896 "NVME-FC{%d}: controller reset complete\n",29292897 ctrl->cnum);28982898+}28992899+29002900+static void29012901+nvme_fc_connect_err_work(struct work_struct *work)29022902+{29032903+ struct nvme_fc_ctrl *ctrl =29042904+ container_of(work, struct nvme_fc_ctrl, err_work);29052905+29062906+ __nvme_fc_terminate_io(ctrl);29072907+29082908+ atomic_set(&ctrl->err_work_active, 0);29092909+29102910+ /*29112911+ * Rescheduling the connection after recovering29122912+ * from the io error is left to the reconnect work29132913+ * item, which is what should have stalled waiting on29142914+ * the io 
that had the error that scheduled this work.29152915+ */29302916}2931291729322918static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = {···30573007 ctrl->cnum = idx;30583008 ctrl->ioq_live = false;30593009 ctrl->assoc_active = false;30103010+ atomic_set(&ctrl->err_work_active, 0);30603011 init_waitqueue_head(&ctrl->ioabort_wait);3061301230623013 get_device(ctrl->dev);···3065301430663015 INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work);30673016 INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);30173017+ INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work);30683018 spin_lock_init(&ctrl->lock);3069301930703020 /* io queue count */···31553103fail_ctrl:31563104 nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING);31573105 cancel_work_sync(&ctrl->ctrl.reset_work);31063106+ cancel_work_sync(&ctrl->err_work);31583107 cancel_delayed_work_sync(&ctrl->connect_work);3159310831603109 ctrl->ctrl.opts = NULL;
+6-4
drivers/nvmem/core.c
···4444 int bytes;4545 int bit_offset;4646 int nbits;4747+ struct device_node *np;4748 struct nvmem_device *nvmem;4849 struct list_head node;4950};···299298 mutex_lock(&nvmem_mutex);300299 list_del(&cell->node);301300 mutex_unlock(&nvmem_mutex);301301+ of_node_put(cell->np);302302 kfree(cell->name);303303 kfree(cell);304304}···532530 return -ENOMEM;533531534532 cell->nvmem = nvmem;533533+ cell->np = of_node_get(child);535534 cell->offset = be32_to_cpup(addr++);536535 cell->bytes = be32_to_cpup(addr);537536 cell->name = kasprintf(GFP_KERNEL, "%pOFn", child);···963960964961#if IS_ENABLED(CONFIG_OF)965962static struct nvmem_cell *966966-nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index)963963+nvmem_find_cell_by_node(struct nvmem_device *nvmem, struct device_node *np)967964{968965 struct nvmem_cell *cell = NULL;969969- int i = 0;970966971967 mutex_lock(&nvmem_mutex);972968 list_for_each_entry(cell, &nvmem->cells, node) {973973- if (index == i++)969969+ if (np == cell->np)974970 break;975971 }976972 mutex_unlock(&nvmem_mutex);···10131011 if (IS_ERR(nvmem))10141012 return ERR_CAST(nvmem);1015101310161016- cell = nvmem_find_cell_by_index(nvmem, index);10141014+ cell = nvmem_find_cell_by_node(nvmem, cell_np);10171015 if (!cell) {10181016 __nvmem_device_put(nvmem);10191017 return ERR_PTR(-ENOENT);
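The nvmem fix replaces positional lookup (nth cell in the list) with lookup by the identity of the device_node the cell was parsed from, so the result no longer depends on registration order. A sketch of the idea with a plain singly linked list (types and names are illustrative only):

```c
#include <stddef.h>

struct node { int id; };                  /* stand-in for device_node */

struct cell {                             /* stand-in for nvmem_cell */
    const char *name;
    const struct node *np;                /* DT node the cell came from */
    struct cell *next;
};

/*
 * Match on pointer identity of the DT node rather than on list
 * position: reordering the list no longer changes which cell a
 * given node resolves to.
 */
static struct cell *find_cell_by_node(struct cell *head,
                                      const struct node *np)
{
    for (struct cell *c = head; c; c = c->next)
        if (c->np == np)
            return c;
    return NULL;
}
```

This is also why the patch takes `of_node_get()` when the cell is created and drops it with `of_node_put()` on destruction: the cell now holds a long-lived reference to the node it is keyed on.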
+4-1
drivers/opp/ti-opp-supply.c
···288288 int ret;289289290290 vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data,291291- new_supply_vbb->u_volt);291291+ new_supply_vdd->u_volt);292292+293293+ if (new_supply_vdd->u_volt_min < vdd_uv)294294+ new_supply_vdd->u_volt_min = vdd_uv;292295293296 /* Scaling up? Scale voltage before frequency */294297 if (freq > old_freq) {
-5
drivers/pci/pci-acpi.c
···793793{794794 struct pci_dev *pci_dev = to_pci_dev(dev);795795 struct acpi_device *adev = ACPI_COMPANION(dev);796796- int node;797796798797 if (!adev)799798 return;800800-801801- node = acpi_get_node(adev->handle);802802- if (node != NUMA_NO_NODE)803803- set_dev_node(dev, node);804799805800 pci_acpi_optimize_delay(pci_dev, adev->handle);806801
+1-1
drivers/pinctrl/meson/pinctrl-meson-gxbb.c
···830830831831static struct meson_bank meson_gxbb_aobus_banks[] = {832832 /* name first last irq pullen pull dir out in */833833- BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),833833+ BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),834834};835835836836static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
+1-1
drivers/pinctrl/meson/pinctrl-meson-gxl.c
···807807808808static struct meson_bank meson_gxl_aobus_banks[] = {809809 /* name first last irq pullen pull dir out in */810810- BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),810810+ BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),811811};812812813813static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
+1-1
drivers/pinctrl/meson/pinctrl-meson.c
···192192 dev_dbg(pc->dev, "pin %u: disable bias\n", pin);193193194194 meson_calc_reg_and_bit(bank, pin, REG_PULL, ®, &bit);195195- ret = regmap_update_bits(pc->reg_pull, reg,195195+ ret = regmap_update_bits(pc->reg_pullen, reg,196196 BIT(bit), 0);197197 if (ret)198198 return ret;
+1-1
drivers/pinctrl/meson/pinctrl-meson8.c
···1053105310541054static struct meson_bank meson8_aobus_banks[] = {10551055 /* name first last irq pullen pull dir out in */10561056- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),10561056+ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),10571057};1058105810591059static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
···578578config SCSI_MYRS579579 tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)"580580 depends on PCI581581+ depends on !CPU_BIG_ENDIAN || COMPILE_TEST581582 select RAID_ATTRS582583 help583584 This driver adds support for the Mylex DAC960, AcceleRAID, and
···6767MODULE_PARM_DESC(ql2xplogiabsentdevice,6868 "Option to enable PLOGI to devices that are not present after "6969 "a Fabric scan. This is needed for several broken switches. "7070- "Default is 0 - no PLOGI. 1 - perfom PLOGI.");7070+ "Default is 0 - no PLOGI. 1 - perform PLOGI.");71717272int ql2xloginretrycount = 0;7373module_param(ql2xloginretrycount, int, S_IRUGO);···17491749static void17501750__qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)17511751{17521752- int cnt;17521752+ int cnt, status;17531753 unsigned long flags;17541754 srb_t *sp;17551755 scsi_qla_host_t *vha = qp->vha;···17991799 if (!sp_get(sp)) {18001800 spin_unlock_irqrestore18011801 (qp->qp_lock_ptr, flags);18021802- qla2xxx_eh_abort(18021802+ status = qla2xxx_eh_abort(18031803 GET_CMD_SP(sp));18041804 spin_lock_irqsave18051805 (qp->qp_lock_ptr, flags);18061806+ /*18071807+ * Get rid of extra reference caused18081808+ * by early exit from qla2xxx_eh_abort18091809+ */18101810+ if (status == FAST_IO_FAIL)18111811+ atomic_dec(&sp->ref_count);18061812 }18071813 }18081814 sp->done(sp, res);
+8
drivers/scsi/scsi_lib.c
···697697 */698698 scsi_mq_uninit_cmd(cmd);699699700700+ /*701701+ * queue is still alive, so grab the ref for preventing it702702+ * from being cleaned up during running queue.703703+ */704704+ percpu_ref_get(&q->q_usage_counter);705705+700706 __blk_mq_end_request(req, error);701707702708 if (scsi_target(sdev)->single_lun ||···710704 kblockd_schedule_work(&sdev->requeue_work);711705 else712706 blk_mq_run_hw_queues(q, true);707707+708708+ percpu_ref_put(&q->q_usage_counter);713709 } else {714710 unsigned long flags;715711
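The scsi_lib change brackets the run-queue path with percpu_ref_get()/percpu_ref_put() on `q->q_usage_counter` so the queue cannot be freed while it is still being run. A toy model with a plain integer refcount (hypothetical names; the real percpu_ref machinery is far more involved):

```c
#include <stdbool.h>

struct queue {
    int usage;        /* stand-in for q_usage_counter */
    bool dying;       /* teardown requested */
    bool cleaned_up;  /* teardown actually performed */
};

static void queue_ref_get(struct queue *q) { q->usage++; }

static void queue_ref_put(struct queue *q)
{
    /* Teardown may only run once the last reference is dropped. */
    if (--q->usage == 0 && q->dying)
        q->cleaned_up = true;
}

/* Caller holds a reference while "running" the queue, as in the fix. */
static bool run_queue_guarded(struct queue *q)
{
    bool alive;

    queue_ref_get(q);
    alive = !q->cleaned_up;  /* queue memory is guaranteed valid here */
    queue_ref_put(q);
    return alive;
}
```

The extra reference defers cleanup until after the run completes; without it, a concurrent teardown could free the queue out from under `blk_mq_run_hw_queues()`.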
···310310 ipipeif_write(val, ipipeif_base_addr, IPIPEIF_CFG2);311311 break;312312 }313313+ /* fall through */313314314315 case IPIPEIF_SDRAM_YUV:315316 /* Set clock divider */
···15211521 usb_wakeup_notification(udev->parent, udev->portnum);15221522}1523152315241524+/*15251525+ * Quirk handler for errata seen on Cavium ThunderX2 processor XHCI15261526+ * Controller.15271527+ * As per ThunderX2 errata-129, a USB 2 device may come up as USB 115281528+ * if a connection to a USB 1 device is followed by another connection15291529+ * to a USB 2 device.15301530+ *15311531+ * Reset the PHY after the USB device is disconnected if device speed15321532+ * is less than HCD_USB3.15331533+ * Retry the reset sequence a maximum of 4 times, checking the PLL lock status.15341534+ *15351535+ */15361536+static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci)15371537+{15381538+ struct usb_hcd *hcd = xhci_to_hcd(xhci);15391539+ u32 pll_lock_check;15401540+ u32 retry_count = 4;15411541+15421542+ do {15431543+ /* Assert PHY reset */15441544+ writel(0x6F, hcd->regs + 0x1048);15451545+ udelay(10);15461546+ /* De-assert the PHY reset */15471547+ writel(0x7F, hcd->regs + 0x1048);15481548+ udelay(200);15491549+ pll_lock_check = readl(hcd->regs + 0x1070);15501550+ } while (!(pll_lock_check & 0x1) && --retry_count);15511551+}15521552+15241553static void handle_port_status(struct xhci_hcd *xhci,15251554 union xhci_trb *event)15261555{···15811552 port = &xhci->hw_ports[port_id - 1];15821553 if (!port || !port->rhub || port->hcd_portnum == DUPLICATE_ENTRY) {15831554 xhci_warn(xhci, "Event for invalid port %u\n", port_id);15551555+ bogus_port_status = true;15561556+ goto cleanup;15571557+ }15581558+15591559+ /* We might get interrupts after shared_hcd is removed */15601560+ if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) {15611561+ xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n");15841562 bogus_port_status = true;15851563 goto cleanup;15861564 }···16751639 * RExit to a disconnect state). 

If so, let the driver know it's16761640 * out of the RExit state.16771641 */16781678- if (!DEV_SUPERSPEED_ANY(portsc) &&16421642+ if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 &&16791643 test_and_clear_bit(hcd_portnum,16801644 &bus_state->rexit_ports)) {16811645 complete(&bus_state->rexit_done[hcd_portnum]);···16831647 goto cleanup;16841648 }1685164916861686- if (hcd->speed < HCD_USB3)16501650+ if (hcd->speed < HCD_USB3) {16871651 xhci_test_and_clear_bit(xhci, port, PORT_PLC);16521652+ if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) &&16531653+ (portsc & PORT_CSC) && !(portsc & PORT_CONNECT))16541654+ xhci_cavium_reset_phy_quirk(xhci);16551655+ }1688165616891657cleanup:16901658 /* Update event ring dequeue pointer before dropping the lock */···23062266 goto cleanup;23072267 case COMP_RING_UNDERRUN:23082268 case COMP_RING_OVERRUN:22692269+ case COMP_STOPPED_LENGTH_INVALID:23092270 goto cleanup;23102271 default:23112272 xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
···719719720720 /* Only halt host and free memory after both hcds are removed */721721 if (!usb_hcd_is_primary_hcd(hcd)) {722722- /* usb core will free this hcd shortly, unset pointer */723723- xhci->shared_hcd = NULL;724722 mutex_unlock(&xhci->mutex);725723 return;726724 }
+2-1
drivers/usb/host/xhci.h
···16801680 * It can take up to 20 ms to transition from RExit to U0 on the16811681 * Intel Lynx Point LP xHCI host.16821682 */16831683-#define XHCI_MAX_REXIT_TIMEOUT (20 * 1000)16831683+#define XHCI_MAX_REXIT_TIMEOUT_MS 201684168416851685static inline unsigned int hcd_index(struct usb_hcd *hcd)16861686{···18491849#define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31)18501850#define XHCI_ZERO_64B_REGS BIT_ULL(32)18511851#define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33)18521852+#define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)1852185318531854 unsigned int num_active_eps;18541855 unsigned int limit_active_eps;
···576576{577577 signed long rtt2, timeout;578578 long ret;579579+ bool stalled = false;579580 u64 rtt;580581 u32 life, last_life;581582···610609611610 life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);612611 if (timeout == 0 &&613613- life == last_life && signal_pending(current))612612+ life == last_life && signal_pending(current)) {613613+ if (stalled)614614 break;615615+ __set_current_state(TASK_RUNNING);616616+ rxrpc_kernel_probe_life(call->net->socket, call->rxcall);617617+ timeout = rtt2;618618+ stalled = true;619619+ continue;620620+ }615621616622 if (life != last_life) {617623 timeout = rtt2;618624 last_life = life;625625+ stalled = false;619626 }620627621628 timeout = schedule_timeout(timeout);
+35-25
fs/dax.c
···9898 return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT));9999}100100101101-static void *dax_make_page_entry(struct page *page)102102-{103103- pfn_t pfn = page_to_pfn_t(page);104104- return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0);105105-}106106-107101static bool dax_is_locked(void *entry)108102{109103 return xa_to_value(entry) & DAX_LOCKED;···110116 return 0;111117}112118113113-static int dax_is_pmd_entry(void *entry)119119+static unsigned long dax_is_pmd_entry(void *entry)114120{115121 return xa_to_value(entry) & DAX_PMD;116122}117123118118-static int dax_is_pte_entry(void *entry)124124+static bool dax_is_pte_entry(void *entry)119125{120126 return !(xa_to_value(entry) & DAX_PMD);121127}···216222 ewait.wait.func = wake_exceptional_entry_func;217223218224 for (;;) {219219- entry = xas_load(xas);220220- if (!entry || xa_is_internal(entry) ||221221- WARN_ON_ONCE(!xa_is_value(entry)) ||225225+ entry = xas_find_conflict(xas);226226+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||222227 !dax_is_locked(entry))223228 return entry;224229···248255{249256 void *old;250257258258+ BUG_ON(dax_is_locked(entry));251259 xas_reset(xas);252260 xas_lock_irq(xas);253261 old = xas_store(xas, entry);···346352 return NULL;347353}348354355355+/*356356+ * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page357357+ * @page: The page whose entry we want to lock358358+ *359359+ * Context: Process context.360360+ * Return: %true if the entry was locked or does not need to be locked.361361+ */349362bool dax_lock_mapping_entry(struct page *page)350363{351364 XA_STATE(xas, NULL, 0);352365 void *entry;366366+ bool locked;353367368368+ /* Ensure page->mapping isn't freed while we look at it */369369+ rcu_read_lock();354370 for (;;) {355371 struct address_space *mapping = READ_ONCE(page->mapping);356372373373+ locked = false;357374 if (!dax_mapping(mapping))358358- return false;375375+ break;359376360377 /*361378 * In the device-dax case there's no need to lock, 
a···375370 * otherwise we would not have a valid pfn_to_page()376371 * translation.377372 */373373+ locked = true;378374 if (S_ISCHR(mapping->host->i_mode))379379- return true;375375+ break;380376381377 xas.xa = &mapping->i_pages;382378 xas_lock_irq(&xas);···388382 xas_set(&xas, page->index);389383 entry = xas_load(&xas);390384 if (dax_is_locked(entry)) {385385+ rcu_read_unlock();391386 entry = get_unlocked_entry(&xas);392392- /* Did the page move while we slept? */393393- if (dax_to_pfn(entry) != page_to_pfn(page)) {394394- xas_unlock_irq(&xas);395395- continue;396396- }387387+ xas_unlock_irq(&xas);388388+ put_unlocked_entry(&xas, entry);389389+ rcu_read_lock();390390+ continue;397391 }398392 dax_lock_entry(&xas, entry);399393 xas_unlock_irq(&xas);400400- return true;394394+ break;401395 }396396+ rcu_read_unlock();397397+ return locked;402398}403399404400void dax_unlock_mapping_entry(struct page *page)405401{406402 struct address_space *mapping = page->mapping;407403 XA_STATE(xas, &mapping->i_pages, page->index);404404+ void *entry;408405409406 if (S_ISCHR(mapping->host->i_mode))410407 return;411408412412- dax_unlock_entry(&xas, dax_make_page_entry(page));409409+ rcu_read_lock();410410+ entry = xas_load(&xas);411411+ rcu_read_unlock();412412+ entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));413413+ dax_unlock_entry(&xas, entry);413414}414415415416/*···458445retry:459446 xas_lock_irq(xas);460447 entry = get_unlocked_entry(xas);461461- if (xa_is_internal(entry))462462- goto fallback;463448464449 if (entry) {465465- if (WARN_ON_ONCE(!xa_is_value(entry))) {450450+ if (!xa_is_value(entry)) {466451 xas_set_err(xas, EIO);467452 goto out_unlock;468453 }···16391628 /* Did we race with someone splitting entry or so? 
*/16401629 if (!entry ||16411630 (order == 0 && !dax_is_pte_entry(entry)) ||16421642- (order == PMD_ORDER && (xa_is_internal(entry) ||16431643- !dax_is_pmd_entry(entry)))) {16311631+ (order == PMD_ORDER && !dax_is_pmd_entry(entry))) {16441632 put_unlocked_entry(&xas, entry);16451633 xas_unlock_irq(&xas);16461634 trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
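The dax patch tightens the entry helpers: dax_is_pmd_entry() now returns the raw flag bits and dax_is_pte_entry() returns bool, and unlock rebuilds the entry from the page's pfn plus the PMD bit read back from the tree. DAX entries are xarray values packing a pfn above a few flag bits; a toy model of that packing (the flag values here mirror what fs/dax.c uses at this point, but treat them as illustrative):

```c
#include <stdbool.h>

typedef unsigned long entry_t;

#define DAX_SHIFT   4UL            /* low bits reserved for flags */
#define DAX_LOCKED  (1UL << 0)
#define DAX_PMD     (1UL << 1)

static entry_t dax_make_entry(unsigned long pfn, unsigned long flags)
{
    return flags | (pfn << DAX_SHIFT);
}

static unsigned long dax_to_pfn(entry_t e)      { return e >> DAX_SHIFT; }
static unsigned long dax_is_pmd_entry(entry_t e) { return e & DAX_PMD; }
static bool dax_is_pte_entry(entry_t e)          { return !(e & DAX_PMD); }
static bool dax_is_locked(entry_t e)             { return e & DAX_LOCKED; }
```

Because the flags live in the low bits, `dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry))` in the patch reconstructs exactly the unlocked entry for that page, whether it is a PTE or PMD mapping.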
+3-2
fs/exec.c
···6262#include <linux/oom.h>6363#include <linux/compat.h>6464#include <linux/vmalloc.h>6565+#include <linux/freezer.h>65666667#include <linux/uaccess.h>6768#include <asm/mmu_context.h>···10841083 while (sig->notify_count) {10851084 __set_current_state(TASK_KILLABLE);10861085 spin_unlock_irq(lock);10871087- schedule();10861086+ freezable_schedule();10881087 if (unlikely(__fatal_signal_pending(tsk)))10891088 goto killed;10901089 spin_lock_irq(lock);···11121111 __set_current_state(TASK_KILLABLE);11131112 write_unlock_irq(&tasklist_lock);11141113 cgroup_threadgroup_change_end(tsk);11151115- schedule();11141114+ freezable_schedule();11161115 if (unlikely(__fatal_signal_pending(tsk)))11171116 goto killed;11181117 }
+12-4
fs/fuse/dev.c
···165165166166static void fuse_drop_waiting(struct fuse_conn *fc)167167{168168- if (fc->connected) {169169- atomic_dec(&fc->num_waiting);170170- } else if (atomic_dec_and_test(&fc->num_waiting)) {168168+ /*169169+ * lockless check of fc->connected is okay, because atomic_dec_and_test()170170+ * provides a memory barrier matched with the one in fuse_wait_aborted()171171+ * to ensure no wake-up is missed.172172+ */173173+ if (atomic_dec_and_test(&fc->num_waiting) &&174174+ !READ_ONCE(fc->connected)) {171175 /* wake up aborters */172176 wake_up_all(&fc->blocked_waitq);173177 }···17721768 req->in.args[1].size = total_len;1773176917741770 err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique);17751775- if (err)17711771+ if (err) {17761772 fuse_retrieve_end(fc, req);17731773+ fuse_put_request(fc, req);17741774+ }1777177517781776 return err;17791777}···2225221922262220void fuse_wait_aborted(struct fuse_conn *fc)22272221{22222222+ /* matches implicit memory barrier in fuse_drop_waiting() */22232223+ smp_mb();22282224 wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);22292225}22302226
···826826 ret = gfs2_meta_inode_buffer(ip, &dibh);827827 if (ret)828828 goto unlock;829829- iomap->private = dibh;829829+ mp->mp_bh[0] = dibh;830830831831 if (gfs2_is_stuffed(ip)) {832832 if (flags & IOMAP_WRITE) {···863863 len = lblock_stop - lblock + 1;864864 iomap->length = len << inode->i_blkbits;865865866866- get_bh(dibh);867867- mp->mp_bh[0] = dibh;868868-869866 height = ip->i_height;870867 while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height])871868 height++;···895898 iomap->bdev = inode->i_sb->s_bdev;896899unlock:897900 up_read(&ip->i_rw_mutex);898898- if (ret && dibh)899899- brelse(dibh);900901 return ret;901902902903do_alloc:···975980976981static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,977982 loff_t length, unsigned flags,978978- struct iomap *iomap)983983+ struct iomap *iomap,984984+ struct metapath *mp)979985{980980- struct metapath mp = { .mp_aheight = 1, };981986 struct gfs2_inode *ip = GFS2_I(inode);982987 struct gfs2_sbd *sdp = GFS2_SB(inode);983988 unsigned int data_blocks = 0, ind_blocks = 0, rblocks;···991996 unstuff = gfs2_is_stuffed(ip) &&992997 pos + length > gfs2_max_stuffed_size(ip);993998994994- ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);999999+ ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp);9951000 if (ret)996996- goto out_release;10011001+ goto out_unlock;99710029981003 alloc_required = unstuff || iomap->type == IOMAP_HOLE;9991004···1008101310091014 ret = gfs2_quota_lock_check(ip, &ap);10101015 if (ret)10111011- goto out_release;10161016+ goto out_unlock;1012101710131018 ret = gfs2_inplace_reserve(ip, &ap);10141019 if (ret)···10331038 ret = gfs2_unstuff_dinode(ip, NULL);10341039 if (ret)10351040 goto out_trans_end;10361036- release_metapath(&mp);10371037- brelse(iomap->private);10381038- iomap->private = NULL;10411041+ release_metapath(mp);10391042 ret = gfs2_iomap_get(inode, iomap->offset, iomap->length,10401040- flags, iomap, &mp);10431043+ flags, iomap, mp);10411044 if 
(ret)10421045 goto out_trans_end;10431046 }1044104710451048 if (iomap->type == IOMAP_HOLE) {10461046- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);10491049+ ret = gfs2_iomap_alloc(inode, iomap, flags, mp);10471050 if (ret) {10481051 gfs2_trans_end(sdp);10491052 gfs2_inplace_release(ip);···10491056 goto out_qunlock;10501057 }10511058 }10521052- release_metapath(&mp);10531059 if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))10541060 iomap->page_done = gfs2_iomap_journaled_page_done;10551061 return 0;···10611069out_qunlock:10621070 if (alloc_required)10631071 gfs2_quota_unlock(ip);10641064-out_release:10651065- if (iomap->private)10661066- brelse(iomap->private);10671067- release_metapath(&mp);10721072+out_unlock:10681073 gfs2_write_unlock(inode);10691074 return ret;10701075}···1077108810781089 trace_gfs2_iomap_start(ip, pos, length, flags);10791090 if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) {10801080- ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap);10911091+ ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);10811092 } else {10821093 ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);10831083- release_metapath(&mp);10941094+10841095 /*10851096 * Silently fall back to buffered I/O for stuffed files or if10861097 * we've hot a hole (see gfs2_file_direct_write).···10891100 iomap->type != IOMAP_MAPPED)10901101 ret = -ENOTBLK;10911102 }11031103+ if (!ret) {11041104+ get_bh(mp.mp_bh[0]);11051105+ iomap->private = mp.mp_bh[0];11061106+ }11071107+ release_metapath(&mp);10921108 trace_gfs2_iomap_end(ip, iomap, ret);10931109 return ret;10941110}···19021908 if (ret < 0)19031909 goto out;1904191019051905- /* issue read-ahead on metadata */19061906- if (mp.mp_aheight > 1) {19071907- for (; ret > 1; ret--) {19081908- metapointer_range(&mp, mp.mp_aheight - ret,19111911+ /* On the first pass, issue read-ahead on metadata. 
*/19121912+ if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) {19131913+ unsigned int height = mp.mp_aheight - 1;19141914+19151915+ /* No read-ahead for data blocks. */19161916+ if (mp.mp_aheight - 1 == strip_h)19171917+ height--;19181918+19191919+ for (; height >= mp.mp_aheight - ret; height--) {19201920+ metapointer_range(&mp, height,19091921 start_list, start_aligned,19101922 end_list, end_aligned,19111923 &start, &end);
+2-1
fs/gfs2/rgrp.c
···733733734734 if (gl) {735735 glock_clear_object(gl, rgd);736736+ gfs2_rgrp_brelse(rgd);736737 gfs2_glock_put(gl);737738 }738739···11751174 * @rgd: the struct gfs2_rgrpd describing the RG to read in11761175 *11771176 * Read in all of a Resource Group's header and bitmap blocks.11781178- * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps.11771177+ * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps.11791178 *11801179 * Returns: errno11811180 */
+5-2
fs/inode.c
···730730 return LRU_REMOVED;731731 }732732733733- /* recently referenced inodes get one more pass */734734- if (inode->i_state & I_REFERENCED) {733733+ /*734734+ * Recently referenced inodes and inodes with many attached pages735735+ * get one more pass.736736+ */737737+ if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {735738 inode->i_state &= ~I_REFERENCED;736739 spin_unlock(&inode->i_lock);737740 return LRU_ROTATE;
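The inode-shrinker change widens the rotation condition: an inode now survives one more LRU pass not only when it was recently referenced but also when it still has more than one attached page-cache page. The new predicate, isolated as a sketch (the I_REFERENCED value is assumed for illustration):

```c
#include <stdbool.h>

#define I_REFERENCED (1 << 8)   /* value assumed for illustration */

/*
 * Mirror of the new check in inode_lru_isolate(): give the inode one
 * more trip around the LRU if it was recently referenced or still has
 * a meaningful page-cache footprint.
 */
static bool inode_deserves_rotate(unsigned int state, unsigned long nrpages)
{
    return (state & I_REFERENCED) || nrpages > 1;
}
```

The practical effect is that reclaiming an inode no longer cheaply discards a large, still-warm page cache attached to it.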
+41-12
fs/iomap.c
···
 iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 		loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
 {
+	loff_t orig_pos = *pos;
+	loff_t isize = i_size_read(inode);
 	unsigned block_bits = inode->i_blkbits;
 	unsigned block_size = (1 << block_bits);
 	unsigned poff = offset_in_page(*pos);
 	unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;
-	unsigned end = offset_in_page(i_size_read(inode)) >> block_bits;

 	/*
 	 * If the block size is smaller than the page size we need to check the
···
 	 * handle both halves separately so that we properly zero data in the
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
-	if (first <= end && last > end)
-		plen -= (last - end) * block_size;
+	if (orig_pos <= isize && orig_pos + length > isize) {
+		unsigned end = offset_in_page(isize - 1) >> block_bits;
+
+		if (first <= end && last > end)
+			plen -= (last - end) * block_size;
+	}

 	*offp = poff;
 	*lenp = plen;
···
 	struct bio *bio;
 	bool need_zeroout = false;
 	bool use_fua = false;
-	int nr_pages, ret;
+	int nr_pages, ret = 0;
 	size_t copied = 0;

 	if ((pos | length | align) & ((1 << blkbits) - 1))
···
 	if (iomap->flags & IOMAP_F_NEW) {
 		need_zeroout = true;
-	} else {
+	} else if (iomap->type == IOMAP_MAPPED) {
 		/*
-		 * Use a FUA write if we need datasync semantics, this
-		 * is a pure data IO that doesn't require any metadata
-		 * updates and the underlying device supports FUA. This
-		 * allows us to avoid cache flushes on IO completion.
+		 * Use a FUA write if we need datasync semantics, this is a pure
+		 * data IO that doesn't require any metadata updates (including
+		 * after IO completion such as unwritten extent conversion) and
+		 * the underlying device supports FUA. This allows us to avoid
+		 * cache flushes on IO completion.
 		 */
 		if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) &&
 		    (dio->flags & IOMAP_DIO_WRITE_FUA) &&
···
 		ret = bio_iov_iter_get_pages(bio, &iter);
 		if (unlikely(ret)) {
+			/*
+			 * We have to stop part way through an IO. We must fall
+			 * through to the sub-block tail zeroing here, otherwise
+			 * this short IO may expose stale data in the tail of
+			 * the block we haven't written data to.
+			 */
 			bio_put(bio);
-			return copied ? copied : ret;
+			goto zero_tail;
 		}

 		n = bio->bi_iter.bi_size;
···
 		dio->submit.cookie = submit_bio(bio);
 	} while (nr_pages);

-	if (need_zeroout) {
+	/*
+	 * We need to zeroout the tail of a sub-block write if the extent type
+	 * requires zeroing or the write extends beyond EOF. If we don't zero
+	 * the block tail in the latter case, we can expose stale data via mmap
+	 * reads of the EOF block.
+	 */
+zero_tail:
+	if (need_zeroout ||
+	    ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) {
 		/* zero out from the end of the write to the end of the block */
 		pad = pos & (fs_block_size - 1);
 		if (pad)
 			iomap_dio_zero(dio, iomap, pos, fs_block_size - pad);
 	}
-	return copied;
+	return copied ? copied : ret;
 }

 static loff_t
···
 			dio->wait_for_completion = true;
 			ret = 0;
 		}
+
+		/*
+		 * Splicing to pipes can fail on a full pipe. We have to
+		 * swallow this to make it look like a short IO
+		 * otherwise the higher splice layers will completely
+		 * mishandle the error and stop moving data.
+		 */
+		if (ret == -EFAULT)
+			ret = 0;
 		break;
 	}
 	pos += ret;
+3-3
fs/namespace.c
···

 	hlist_for_each_entry(mp, chain, m_hash) {
 		if (mp->m_dentry == dentry) {
-			/* might be worth a WARN_ON() */
-			if (d_unlinked(dentry))
-				return ERR_PTR(-ENOENT);
 			mp->m_count++;
 			return mp;
 		}
···
 	int ret;

 	if (d_mountpoint(dentry)) {
+		/* might be worth a WARN_ON() */
+		if (d_unlinked(dentry))
+			return ERR_PTR(-ENOENT);
 mountpoint:
 		read_seqlock_excl(&mount_lock);
 		mp = lookup_mountpoint(dentry);
···
 					task))
 		return;

-	if (ff_layout_read_prepare_common(task, hdr))
-		return;
-
-	if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
-			hdr->args.lock_context, FMODE_READ) == -EIO)
-		rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+	ff_layout_read_prepare_common(task, hdr);
 }

 static void ff_layout_read_call_done(struct rpc_task *task, void *data)
···
 					task))
 		return;

-	if (ff_layout_write_prepare_common(task, hdr))
-		return;
-
-	if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
-			hdr->args.lock_context, FMODE_WRITE) == -EIO)
-		rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+	ff_layout_write_prepare_common(task, hdr);
 }

 static void ff_layout_write_call_done(struct rpc_task *task, void *data)
···
 	fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
 	if (fh)
 		hdr->args.fh = fh;
+
+	if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+		goto out_failed;
+
 	/*
 	 * Note that if we ever decide to split across DSes,
 	 * then we may need to handle dense-like offsets.
···
 	fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
 	if (fh)
 		hdr->args.fh = fh;
+
+	if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+		goto out_failed;

 	/*
 	 * Note that if we ever decide to split across DSes,
···
 			continue;
 		mark = iter_info->marks[type];
 		/*
-		 * if the event is for a child and this inode doesn't care about
-		 * events on the child, don't send it!
+		 * If the event is for a child and this mark doesn't care about
+		 * events on a child, don't send it!
 		 */
-		if (type == FSNOTIFY_OBJ_TYPE_INODE &&
-		    (event_mask & FS_EVENT_ON_CHILD) &&
-		    !(mark->mask & FS_EVENT_ON_CHILD))
+		if (event_mask & FS_EVENT_ON_CHILD &&
+		    (type != FSNOTIFY_OBJ_TYPE_INODE ||
+		     !(mark->mask & FS_EVENT_ON_CHILD)))
			continue;

 		marks_mask |= mark->mask;
+5-2
fs/notify/fsnotify.c
···
 	parent = dget_parent(dentry);
 	p_inode = parent->d_inode;

-	if (unlikely(!fsnotify_inode_watches_children(p_inode)))
+	if (unlikely(!fsnotify_inode_watches_children(p_inode))) {
 		__fsnotify_update_child_dentry_flags(p_inode);
-	else if (p_inode->i_fsnotify_mask & mask) {
+	} else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) {
 		struct name_snapshot name;

 		/* we are notifying a parent so come up with the new mask which
···
 		sb = mnt->mnt.mnt_sb;
 		mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask;
 	}
+	/* An event "on child" is not intended for a mount/sb mark */
+	if (mask & FS_EVENT_ON_CHILD)
+		mnt_or_sb_mask = 0;

 	/*
 	 * Optimization: srcu_read_lock() has a memory barrier which can
+10-2
fs/ocfs2/aops.c
···
 	/* this io's submitter should not have unlocked this before we could */
 	BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));

-	if (bytes > 0 && private)
-		ret = ocfs2_dio_end_io_write(inode, private, offset, bytes);
+	if (bytes <= 0)
+		mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld",
+				 (long long)bytes);
+	if (private) {
+		if (bytes > 0)
+			ret = ocfs2_dio_end_io_write(inode, private, offset,
+						     bytes);
+		else
+			ocfs2_dio_free_write_ctx(inode, private);
+	}

 	ocfs2_iocb_clear_rw_locked(iocb);
+9
fs/ocfs2/cluster/masklog.h
···
 		     ##__VA_ARGS__);					\
 } while (0)

+#define mlog_ratelimited(mask, fmt, ...)				\
+do {									\
+	static DEFINE_RATELIMIT_STATE(_rs,				\
+				      DEFAULT_RATELIMIT_INTERVAL,	\
+				      DEFAULT_RATELIMIT_BURST);		\
+	if (__ratelimit(&_rs))						\
+		mlog(mask, fmt, ##__VA_ARGS__);				\
+} while (0)
+
 #define mlog_errno(st) ({						\
 	int _st = (st);							\
 	if (_st != -ERESTARTSYS && _st != -EINTR &&			\
···
 	case BMAP_LEFT_FILLING | BMAP_RIGHT_FILLING | BMAP_RIGHT_CONTIG:
 		/*
 		 * Filling in all of a previously delayed allocation extent.
-		 * The right neighbor is contiguous, the left is not.
+		 * The right neighbor is contiguous, the left is not. Take care
+		 * with delay -> unwritten extent allocation here because the
+		 * delalloc record we are overwriting is always written.
 		 */
 		PREV.br_startblock = new->br_startblock;
 		PREV.br_blockcount += RIGHT.br_blockcount;
+		PREV.br_state = new->br_state;

 		xfs_iext_next(ifp, &bma->icur);
 		xfs_iext_remove(bma->ip, &bma->icur, state);
+7-4
fs/xfs/libxfs/xfs_ialloc_btree.c
···

 static xfs_extlen_t
 xfs_inobt_max_size(
-	struct xfs_mount	*mp)
+	struct xfs_mount	*mp,
+	xfs_agnumber_t		agno)
 {
+	xfs_agblock_t		agblocks = xfs_ag_block_count(mp, agno);
+
 	/* Bail out if we're uninitialized, which can happen in mkfs. */
 	if (mp->m_inobt_mxr[0] == 0)
 		return 0;

 	return xfs_btree_calc_size(mp->m_inobt_mnr,
-			(uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock /
-				XFS_INODES_PER_CHUNK);
+				(uint64_t)agblocks * mp->m_sb.sb_inopblock /
+					XFS_INODES_PER_CHUNK);
 }

 static int
···
 	if (error)
 		return error;

-	*ask += xfs_inobt_max_size(mp);
+	*ask += xfs_inobt_max_size(mp, agno);
 	*used += tree_len;
 	return 0;
 }
+2-8
fs/xfs/xfs_bmap_util.c
···
 		goto out_unlock;
 }

-static int
+int
 xfs_flush_unmap_range(
 	struct xfs_inode	*ip,
 	xfs_off_t		offset,
···
 	 * Writeback and invalidate cache for the remainder of the file as we're
 	 * about to shift down every extent from offset to EOF.
 	 */
-	error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, offset, -1);
-	if (error)
-		return error;
-	error = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
-					offset >> PAGE_SHIFT, -1);
-	if (error)
-		return error;
+	error = xfs_flush_unmap_range(ip, offset, XFS_ISIZE(ip));

 	/*
 	 * Clean out anything hanging around in the cow fork now that
···
 }

 /*
- * Requeue a failed buffer for writeback
+ * Requeue a failed buffer for writeback.
  *
- * Return true if the buffer has been re-queued properly, false otherwise
+ * We clear the log item failed state here as well, but we have to be careful
+ * about reference counts because the only active reference counts on the buffer
+ * may be the failed log items. Hence if we clear the log item failed state
+ * before queuing the buffer for IO we can release all active references to
+ * the buffer and free it, leading to use after free problems in
+ * xfs_buf_delwri_queue. It makes no difference to the buffer or log items which
+ * order we process them in - the buffer is locked, and we own the buffer list
+ * so nothing on them is going to change while we are performing this action.
+ *
+ * Hence we can safely queue the buffer for IO before we clear the failed log
+ * item state, therefore always having an active reference to the buffer and
+ * avoiding the transient zero-reference state that leads to use-after-free.
+ *
+ * Return true if the buffer was added to the buffer list, false if it was
+ * already on the buffer list.
  */
 bool
 xfs_buf_resubmit_failed_buffers(
···
 	struct list_head	*buffer_list)
 {
 	struct xfs_log_item	*lip;
+	bool			ret;
+
+	ret = xfs_buf_delwri_queue(bp, buffer_list);

 	/*
-	 * Clear XFS_LI_FAILED flag from all items before resubmit
-	 *
-	 * XFS_LI_FAILED set/clear is protected by ail_lock, caller this
+	 * XFS_LI_FAILED set/clear is protected by ail_lock, caller of this
 	 * function already have it acquired
 	 */
 	list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
 		xfs_clear_li_failed(lip);

-	/* Add this buffer back to the delayed write list */
-	return xfs_buf_delwri_queue(bp, buffer_list);
+	return ret;
 }
···
 	if (error)
 		return error;

+	xfs_trim_extent(imap, got.br_startoff, got.br_blockcount);
 	trace_xfs_reflink_cow_alloc(ip, &got);
 	return 0;
 }
···
 	if (ret)
 		goto out_unlock;

-	/* Zap any page cache for the destination file's range. */
-	truncate_inode_pages_range(&inode_out->i_data,
-			round_down(pos_out, PAGE_SIZE),
-			round_up(pos_out + *len, PAGE_SIZE) - 1);
+	/*
+	 * If pos_out > EOF, we may have dirtied blocks between EOF and
+	 * pos_out. In that case, we need to extend the flush and unmap to cover
+	 * from EOF to the end of the copy length.
+	 */
+	if (pos_out > XFS_ISIZE(dest)) {
+		loff_t	flen = *len + (pos_out - XFS_ISIZE(dest));
+		ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen);
+	} else {
+		ret = xfs_flush_unmap_range(dest, pos_out, *len);
+	}
+	if (ret)
+		goto out_unlock;

 	return 1;
 out_unlock:
···
 extern void efi_reboot(enum reboot_mode reboot_mode, const char *__unused);

 extern bool efi_is_table_address(unsigned long phys_addr);
+
+extern int efi_apply_persistent_mem_reservations(void);
 #else
 static inline bool efi_enabled(int feature)
 {
···
 static inline bool efi_is_table_address(unsigned long phys_addr)
 {
 	return false;
+}
+
+static inline int efi_apply_persistent_mem_reservations(void)
+{
+	return 0;
 }
 #endif
-28
include/linux/hid.h
···
 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
 		int interrupt);

-
-/**
- * struct hid_scroll_counter - Utility class for processing high-resolution
- *                             scroll events.
- * @dev: the input device for which events should be reported.
- * @microns_per_hi_res_unit: the amount moved by the user's finger for each
- *                           high-resolution unit reported by the mouse, in
- *                           microns.
- * @resolution_multiplier: the wheel's resolution in high-resolution mode as a
- *                         multiple of its lower resolution. For example, if
- *                         moving the wheel by one "notch" would result in a
- *                         value of 1 in low-resolution mode but 8 in
- *                         high-resolution, the multiplier is 8.
- * @remainder: counts the number of high-resolution units moved since the last
- *             low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should
- *             only be used by class methods.
- */
-struct hid_scroll_counter {
-	struct input_dev *dev;
-	int microns_per_hi_res_unit;
-	int resolution_multiplier;
-
-	int remainder;
-};
-
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
-				      int hi_res_value);
-
 /* HID quirks API */
 unsigned long hid_lookup_quirk(const struct hid_device *hdev);
 int hid_quirks_init(char **quirks_param, __u16 bus, int count);
+2
include/linux/net_dim.h
···
 		}
 		/* fall through */
 	case NET_DIM_START_MEASURE:
+		net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr,
+			       &dim->start_sample);
 		dim->state = NET_DIM_MEASURE_IN_PROGRESS;
 		break;
 	case NET_DIM_APPLY_NEW_PROFILE:
···
 	u32	rcv_tstamp;	/* timestamp of last received ACK (for keepalives) */
 	u32	lsndtime;	/* timestamp of last sent data packet (for restart window) */
 	u32	last_oow_ack_time;  /* timestamp of last out-of-window ACK */
+	u32	compressed_ack_rcv_nxt;

 	u32	tsoffset;	/* timestamp offset */
+3
include/linux/usb/quirks.h
···
 /* Device needs a pause after every control message. */
 #define USB_QUIRK_DELAY_CTRL_MSG		BIT(13)

+/* Hub needs extra delay after resetting its port. */
+#define USB_QUIRK_HUB_SLOW_RESET		BIT(14)
+
 #endif /* __LINUX_USB_QUIRKS_H */
+203-64
include/linux/xarray.h
···
 void xa_init_flags(struct xarray *, gfp_t flags);
 void *xa_load(struct xarray *, unsigned long index);
 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
-void *xa_cmpxchg(struct xarray *, unsigned long index,
-			void *old, void *entry, gfp_t);
-int xa_reserve(struct xarray *, unsigned long index, gfp_t);
+void *xa_erase(struct xarray *, unsigned long index);
 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
 			void *entry, gfp_t);
 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
···
 static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 {
 	return xa->xa_flags & XA_FLAGS_MARK(mark);
-}
-
-/**
- * xa_erase() - Erase this entry from the XArray.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument. The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
- *
- * Context: Process context. Takes and releases the xa_lock.
- * Return: The entry which used to be at this index.
- */
-static inline void *xa_erase(struct xarray *xa, unsigned long index)
-{
-	return xa_store(xa, index, NULL, 0);
-}
-
-/**
- * xa_insert() - Store this entry in the XArray unless another entry is
- * already present.
- * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
- *
- * If you would rather see the existing entry in the array, use xa_cmpxchg().
- * This function is for users who don't care what the entry is, only that
- * one is present.
- *
- * Context: Process context. Takes and releases the xa_lock.
- *	    May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded. -EEXIST if another entry was present.
- * -ENOMEM if memory could not be allocated.
- */
-static inline int xa_insert(struct xarray *xa, unsigned long index,
-		void *entry, gfp_t gfp)
-{
-	void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
-	if (!curr)
-		return 0;
-	if (xa_is_err(curr))
-		return xa_err(curr);
-	return -EEXIST;
-}
-
-/**
- * xa_release() - Release a reserved entry.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * After calling xa_reserve(), you can call this function to release the
- * reservation. If the entry at @index has been stored to, this function
- * will do nothing.
- */
-static inline void xa_release(struct xarray *xa, unsigned long index)
-{
-	xa_cmpxchg(xa, index, NULL, NULL, 0);
 }

 /**
···
 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
 		void *entry, gfp_t);
 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
+int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
···
 }

 /**
+ * xa_store_bh() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables softirqs
+ * while holding the array lock.
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock_bh(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock_bh(xa);
+
+	return curr;
+}
+
+/**
+ * xa_store_irq() - Erase this entry from the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables interrupts
+ * while holding the array lock.
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock_irq(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock_irq(xa);
+
+	return curr;
+}
+
+/**
  * xa_erase_bh() - Erase this entry from the XArray.
  * @xa: XArray.
  * @index: Index of entry.
···
  * the third argument. The XArray does not need to allocate memory, so
  * the user does not need to provide GFP flags.
  *
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
  * disabling softirqs.
  * Return: The entry which used to be at this index.
  */
···
 	xa_unlock_irq(xa);

 	return entry;
+}
+
+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Any context. Takes and releases the xa_lock. May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+			void *old, void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock(xa);
+	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+	xa_unlock(xa);
+
+	return curr;
+}
+
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ *	    May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+	if (!curr)
+		return 0;
+	if (xa_is_err(curr))
+		return xa_err(curr);
+	return -EEXIST;
 }

 /**
···
  * Updates the @id pointer with the index, then stores the entry at that
  * index. A concurrent lookup will not see an uninitialised @id.
  *
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
  * disabling softirqs. May sleep if the @gfp flags permit.
  * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
  * there is no more space in the XArray.
···
 	xa_unlock_irq(xa);

 	return err;
+}
+
+/**
+ * xa_reserve() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * Ensures there is somewhere to store an entry at @index in the array.
+ * If there is already something stored at @index, this function does
+ * nothing. If there was nothing there, the entry is marked as reserved.
+ * Loading from a reserved entry returns a %NULL pointer.
+ *
+ * If you do not use the entry that you have reserved, call xa_release()
+ * or xa_erase() to free any unnecessary memory.
+ *
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock(xa);
+
+	return ret;
+}
+
+/**
+ * xa_reserve_bh() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * A softirq-disabling version of xa_reserve().
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock_bh(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock_bh(xa);
+
+	return ret;
+}
+
+/**
+ * xa_reserve_irq() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * An interrupt-disabling version of xa_reserve().
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock_irq(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock_irq(xa);
+
+	return ret;
+}
+
+/**
+ * xa_release() - Release a reserved entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * After calling xa_reserve(), you can call this function to release the
+ * reservation. If the entry at @index has been stored to, this function
+ * will do nothing.
+ */
+static inline void xa_release(struct xarray *xa, unsigned long index)
+{
+	xa_cmpxchg(xa, index, NULL, NULL, 0);
 }

 /* Everything below here is the Advanced API. Proceed with caution. */
···
  * the situation described above.
  */
 #define REL_RESERVED		0x0a
-#define REL_WHEEL_HI_RES	0x0b
 #define REL_MAX			0x0f
 #define REL_CNT			(REL_MAX+1)
···
 #define ABS_VOLUME		0x20

 #define ABS_MISC		0x28
-
-/*
- * 0x2e is reserved and should not be used in input drivers.
- * It was used by HID as ABS_MISC+6 and userspace needs to detect if
- * the next ABS_* event is correct or is just ABS_MISC + n.
- * We define here ABS_RESERVED so userspace can rely on it and detect
- * the situation described above.
- */
-#define ABS_RESERVED		0x2e

 #define ABS_MT_SLOT		0x2f	/* MT slot being modified */
 #define ABS_MT_TOUCH_MAJOR	0x30	/* Major axis of touching ellipse */
···
 		kdb_printf("no process for cpu %ld\n", cpu);
 		return 0;
 	}
-	sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
+	sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
 	kdb_parse(buf);
 	return 0;
 }
 kdb_printf("btc: cpu status: ");
 kdb_parse("cpu\n");
 for_each_online_cpu(cpu) {
-	sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
+	sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
 	kdb_parse(buf);
 	touch_nmi_watchdog();
 }
+9-6
kernel/debug/kdb/kdb_io.c
···
 	int count;
 	int i;
 	int diag, dtab_count;
-	int key;
+	int key, buf_size, ret;


 	diag = kdbgetintenv("DTABCOUNT", &dtab_count);
···
 		else
 			p_tmp = tmpbuffer;
 		len = strlen(p_tmp);
-		count = kallsyms_symbol_complete(p_tmp,
-						 sizeof(tmpbuffer) -
-						 (p_tmp - tmpbuffer));
+		buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer);
+		count = kallsyms_symbol_complete(p_tmp, buf_size);
 		if (tab == 2 && count > 0) {
 			kdb_printf("\n%d symbols are found.", count);
 			if (count > dtab_count) {
···
 			}
 			kdb_printf("\n");
 			for (i = 0; i < count; i++) {
-				if (WARN_ON(!kallsyms_symbol_next(p_tmp, i)))
+				ret = kallsyms_symbol_next(p_tmp, i, buf_size);
+				if (WARN_ON(!ret))
 					break;
-				kdb_printf("%s ", p_tmp);
+				if (ret != -E2BIG)
+					kdb_printf("%s ", p_tmp);
+				else
+					kdb_printf("%s... ", p_tmp);
 				*(p_tmp + len) = '\0';
 			}
 			if (i >= dtab_count)
+2-2
kernel/debug/kdb/kdb_keyboard.c
···
 	case KT_LATIN:
 		if (isprint(keychar))
 			break;	/* printable characters */
-		/* drop through */
+		/* fall through */
 	case KT_SPEC:
 		if (keychar == K_ENTER)
 			break;
-		/* drop through */
+		/* fall through */
 	default:
 		return -1;	/* ignore unprintables */
 	}
···
 	unsigned long sym_start;
 	unsigned long sym_end;
 } kdb_symtab_t;
-extern int kallsyms_symbol_next(char *prefix_name, int flag);
+extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size);
 extern int kallsyms_symbol_complete(char *prefix_name, int max_len);

 /* Exported Symbols for kernel loadable modules to use. */
+14-14
kernel/debug/kdb/kdb_support.c
···
 int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 {
 	if (KDB_DEBUG(AR))
-		kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname,
+		kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname,
 			   symtab);
 	memset(symtab, 0, sizeof(*symtab));
 	symtab->sym_start = kallsyms_lookup_name(symname);
···
 	char *knt1 = NULL;

 	if (KDB_DEBUG(AR))
-		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab);
+		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
 	memset(symtab, 0, sizeof(*symtab));

 	if (addr < 4096)
···
 	symtab->mod_name = "kernel";
 	if (KDB_DEBUG(AR))
 		kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, "
-		   "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret,
+		   "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
 		   symtab->sym_start, symtab->mod_name, symtab->sym_name,
 		   symtab->sym_name);

···
  * Parameters:
  *	prefix_name  prefix of a symbol name to lookup
  *	flag	0 means search from the head, 1 means continue search.
+ *	buf_size	maximum length that can be written to prefix_name
+ *			buffer
  * Returns:
  *	1 if a symbol matches the given prefix.
  *	0 if no string found
  */
-int kallsyms_symbol_next(char *prefix_name, int flag)
+int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size)
 {
 	int prefix_len = strlen(prefix_name);
 	static loff_t pos;
···
 		pos = 0;

 	while ((name = kdb_walk_kallsyms(&pos))) {
-		if (strncmp(name, prefix_name, prefix_len) == 0) {
-			strncpy(prefix_name, name, strlen(name)+1);
-			return 1;
-		}
+		if (!strncmp(name, prefix_name, prefix_len))
+			return strscpy(prefix_name, name, buf_size);
 	}
 	return 0;
 }
···
 		*word = w8;
 		break;
 	}
-	/* drop through */
+	/* fall through */
 default:
 	diag = KDB_BADWIDTH;
 	kdb_printf("kdb_getphysword: bad width %ld\n", (long) size);
···
 		*word = w8;
 		break;
 	}
-	/* drop through */
+	/* fall through */
 default:
 	diag = KDB_BADWIDTH;
 	kdb_printf("kdb_getword: bad width %ld\n", (long) size);
···
 		diag = kdb_putarea(addr, w8);
 		break;
 	}
-	/* drop through */
+	/* fall through */
 default:
 	diag = KDB_BADWIDTH;
 	kdb_printf("kdb_putword: bad width %ld\n", (long) size);
···
 		   __func__, dah_first);
 	if (dah_first) {
 		h_used = (struct debug_alloc_header *)debug_alloc_pool;
-		kdb_printf("%s: h_used %p size %d\n", __func__, h_used,
+		kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
 			   h_used->size);
 	}
 	do {
 		h_used = (struct debug_alloc_header *)
 			((char *)h_free + dah_overhead + h_free->size);
-		kdb_printf("%s: h_used %p size %d caller %p\n",
+		kdb_printf("%s: h_used %px size %d caller %px\n",
 			   __func__, h_used, h_used->size, h_used->caller);
 		h_free = (struct debug_alloc_header *)
 			(debug_alloc_pool + h_free->next);
···
 		((char *)h_free + dah_overhead + h_free->size);
 	if ((char *)h_used - debug_alloc_pool !=
 	    sizeof(debug_alloc_pool_aligned))
-		kdb_printf("%s: h_used %p size %d caller %p\n",
+		kdb_printf("%s: h_used %px size %d caller %px\n",
 			   __func__, h_used, h_used->size, h_used->caller);
 out:
 	spin_unlock(&dap_lock);
···
 	return target;
 }
 
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
+static unsigned long cpu_util_without(int cpu, struct task_struct *p);
 
-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
+static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
 {
-	return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
+	return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
 }
 
 /*
···
 		avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
 
-		spare_cap = capacity_spare_wake(i, p);
+		spare_cap = capacity_spare_without(i, p);
 
 		if (spare_cap > max_spare_cap)
 			max_spare_cap = spare_cap;
···
 		return prev_cpu;
 
 	/*
-	 * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
-	 * last_update_time.
+	 * We need task's util for capacity_spare_without, sync it up to
+	 * prev_cpu's last_update_time.
 	 */
 	if (!(sd_flag & SD_BALANCE_FORK))
 		sync_entity_load_avg(&p->se);
···
 }
 
 /*
- * cpu_util_wake: Compute CPU utilization with any contributions from
- * the waking task p removed.
+ * cpu_util_without: compute cpu utilization without any contributions from *p
+ * @cpu: the CPU which utilization is requested
+ * @p: the task which utilization should be discounted
+ *
+ * The utilization of a CPU is defined by the utilization of tasks currently
+ * enqueued on that CPU as well as tasks which are currently sleeping after an
+ * execution on that CPU.
+ *
+ * This method returns the utilization of the specified CPU by discounting the
+ * utilization of the specified task, whenever the task is currently
+ * contributing to the CPU utilization.
  */
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
+static unsigned long cpu_util_without(int cpu, struct task_struct *p)
 {
 	struct cfs_rq *cfs_rq;
 	unsigned int util;
···
 	cfs_rq = &cpu_rq(cpu)->cfs;
 	util = READ_ONCE(cfs_rq->avg.util_avg);
 
-	/* Discount task's blocked util from CPU's util */
+	/* Discount task's util from CPU's util */
 	util -= min_t(unsigned int, util, task_util(p));
 
 	/*
···
 	 * a) if *p is the only task sleeping on this CPU, then:
 	 *      cpu_util (== task_util) > util_est (== 0)
 	 *    and thus we return:
-	 *      cpu_util_wake = (cpu_util - task_util) = 0
+	 *      cpu_util_without = (cpu_util - task_util) = 0
 	 *
 	 * b) if other tasks are SLEEPING on this CPU, which is now exiting
 	 *    IDLE, then:
 	 *      cpu_util >= task_util
 	 *      cpu_util > util_est (== 0)
 	 *    and thus we discount *p's blocked utilization to return:
-	 *      cpu_util_wake = (cpu_util - task_util) >= 0
+	 *      cpu_util_without = (cpu_util - task_util) >= 0
 	 *
 	 * c) if other tasks are RUNNABLE on that CPU and
 	 *      util_est > cpu_util
···
 	 * covered by the following code when estimated utilization is
 	 * enabled.
 	 */
-	if (sched_feat(UTIL_EST))
-		util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
+	if (sched_feat(UTIL_EST)) {
+		unsigned int estimated =
+			READ_ONCE(cfs_rq->avg.util_est.enqueued);
+
+		/*
+		 * Despite the following checks we still have a small window
+		 * for a possible race, when an execl's select_task_rq_fair()
+		 * races with LB's detach_task():
+		 *
+		 *   detach_task()
+		 *     p->on_rq = TASK_ON_RQ_MIGRATING;
+		 *     ---------------------------------- A
+		 *     deactivate_task()                  \
+		 *       dequeue_task()                    + RaceTime
+		 *         util_est_dequeue()             /
+		 *     ---------------------------------- B
+		 *
+		 * The additional check on "current == p" it's required to
+		 * properly fix the execl regression and it helps in further
+		 * reducing the chances for the above race.
+		 */
+		if (unlikely(task_on_rq_queued(p) || current == p)) {
+			estimated -= min_t(unsigned int, estimated,
+				(_task_util_est(p) | UTIL_AVG_UNCHANGED));
+		}
+		util = max(util, estimated);
+	}
 
 	/*
 	 * Utilization (estimated) can exceed the CPU capacity, thus let's
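The core arithmetic of the renamed helper is a clamped subtraction: the task's contribution is removed from the CPU's utilization without ever underflowing the unsigned value. A minimal userspace sketch of just that discount step (the function name and plain `unsigned int` types are inventions of this sketch, not the kernel's):

```c
#include <assert.h>

/*
 * Hypothetical stand-in for the discount performed by cpu_util_without():
 * subtract the task's utilization from the CPU's, clamping at zero so the
 * unsigned result can never wrap around.
 */
static unsigned int util_discount(unsigned int cpu_util, unsigned int task_util)
{
	unsigned int d = task_util < cpu_util ? task_util : cpu_util;

	return cpu_util - d;
}
```

The clamp matters because both values come from independent averages: a task's tracked utilization can momentarily exceed the CPU's, and `cpu_util - task_util` on raw unsigned values would then produce a huge bogus number.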
+24-23
kernel/sched/psi.c
···
  */
 void cgroup_move_task(struct task_struct *task, struct css_set *to)
 {
-	bool move_psi = !psi_disabled;
 	unsigned int task_flags = 0;
 	struct rq_flags rf;
 	struct rq *rq;
 
-	if (move_psi) {
-		rq = task_rq_lock(task, &rf);
-
-		if (task_on_rq_queued(task))
-			task_flags = TSK_RUNNING;
-		else if (task->in_iowait)
-			task_flags = TSK_IOWAIT;
-
-		if (task->flags & PF_MEMSTALL)
-			task_flags |= TSK_MEMSTALL;
-
-		if (task_flags)
-			psi_task_change(task, task_flags, 0);
+	if (psi_disabled) {
+		/*
+		 * Lame to do this here, but the scheduler cannot be locked
+		 * from the outside, so we move cgroups from inside sched/.
+		 */
+		rcu_assign_pointer(task->cgroups, to);
+		return;
 	}
 
-	/*
-	 * Lame to do this here, but the scheduler cannot be locked
-	 * from the outside, so we move cgroups from inside sched/.
-	 */
+	rq = task_rq_lock(task, &rf);
+
+	if (task_on_rq_queued(task))
+		task_flags = TSK_RUNNING;
+	else if (task->in_iowait)
+		task_flags = TSK_IOWAIT;
+
+	if (task->flags & PF_MEMSTALL)
+		task_flags |= TSK_MEMSTALL;
+
+	if (task_flags)
+		psi_task_change(task, task_flags, 0);
+
+	/* See comment above */
 	rcu_assign_pointer(task->cgroups, to);
 
-	if (move_psi) {
-		if (task_flags)
-			psi_task_change(task, 0, task_flags);
+	if (task_flags)
+		psi_task_change(task, 0, task_flags);
 
-		task_rq_unlock(rq, task, &rf);
-	}
+	task_rq_unlock(rq, task, &rf);
 }
 #endif /* CONFIG_CGROUPS */
+1
lib/test_firmware.c
···
 	if (req->fw->size > PAGE_SIZE) {
 		pr_err("Testing interface must use PAGE_SIZE firmware for now\n");
 		rc = -EINVAL;
+		goto out;
 	}
 	memcpy(buf, req->fw->data, req->fw->size);
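The test_firmware fix is a classic missing-jump bug: the error code was set, but without a `goto` the oversized `memcpy()` still ran. A self-contained sketch of the pattern (the function, `MY_PAGE_SIZE`, and `MY_EINVAL` are illustrative names, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define MY_PAGE_SIZE 4096	/* stand-in for the kernel's PAGE_SIZE */
#define MY_EINVAL 22		/* stand-in for EINVAL */

/*
 * Once rc is set on the error path, control must jump over the copy
 * instead of falling through into it.
 */
static int copy_firmware(char *dst, const char *src, size_t size)
{
	int rc = 0;

	if (size > MY_PAGE_SIZE) {
		rc = -MY_EINVAL;
		goto out;	/* without this, the oversized copy ran anyway */
	}
	for (size_t i = 0; i < size; i++)
		dst[i] = src[i];
out:
	return rc;
}
```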
+47-3
lib/test_xarray.c
···
 		XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));
 
 		/* We should see two elements in the array */
+		rcu_read_lock();
 		xas_for_each(&xas, entry, ULONG_MAX)
 			seen++;
+		rcu_read_unlock();
 		XA_BUG_ON(xa, seen != 2);
 
 		/* One of which is marked */
 		xas_set(&xas, 0);
 		seen = 0;
+		rcu_read_lock();
 		xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
 			seen++;
+		rcu_read_unlock();
 		XA_BUG_ON(xa, seen != 1);
 	}
 	XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
···
 	xa_erase_index(xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
+	/* And so does xa_insert */
+	xa_reserve(xa, 12345678, GFP_KERNEL);
+	XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
+	xa_erase_index(xa, 12345678);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
 	/* Can iterate through a reserved entry */
 	xa_store_index(xa, 5, GFP_KERNEL);
 	xa_reserve(xa, 6, GFP_KERNEL);
···
 	XA_BUG_ON(xa, xa_load(xa, max) != NULL);
 	XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
 
+	xas_lock(&xas);
 	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
+	xas_unlock(&xas);
 	XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
 	XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
 	XA_BUG_ON(xa, xa_load(xa, max) != NULL);
···
 	XA_STATE(xas, xa, index);
 	xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
 
+	xas_lock(&xas);
 	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
 	XA_BUG_ON(xa, xas.xa_index != index);
 	XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+	xas_unlock(&xas);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }
 #endif
···
 	rcu_read_unlock();
 
 	/* We can erase multiple values with a single store */
-	xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
+	xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	/* Even when the first slot is empty but the others aren't */
···
 	}
 }
 
-static noinline void check_find(struct xarray *xa)
+static noinline void check_find_1(struct xarray *xa)
 {
 	unsigned long i, j, k;
···
 		XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
 	}
 	XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_find_2(struct xarray *xa)
+{
+	void *entry;
+	unsigned long i, j, index = 0;
+
+	xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+		XA_BUG_ON(xa, true);
+	}
+
+	for (i = 0; i < 1024; i++) {
+		xa_store_index(xa, index, GFP_KERNEL);
+		j = 0;
+		index = 0;
+		xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+			XA_BUG_ON(xa, xa_mk_value(index) != entry);
+			XA_BUG_ON(xa, index != j++);
+		}
+	}
+
+	xa_destroy(xa);
+}
+
+static noinline void check_find(struct xarray *xa)
+{
+	check_find_1(xa);
+	check_find_2(xa);
 	check_multi_find(xa);
 	check_multi_find_2(xa);
 }
···
 			__check_store_range(xa, 4095 + i, 4095 + j);
 			__check_store_range(xa, 4096 + i, 4096 + j);
 			__check_store_range(xa, 123456 + i, 123456 + j);
-			__check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
+			__check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
 		}
 	}
 }
···
 	XA_STATE(xas, xa, 1 << order);
 
 	xa_store_order(xa, 0, order, xa, GFP_KERNEL);
+	rcu_read_lock();
 	xas_load(&xas);
 	XA_BUG_ON(xa, xas.xa_node->count == 0);
 	XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
 	XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
+	rcu_read_unlock();
 
 	xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
 			GFP_KERNEL);
+1-2
lib/ubsan.c
···
 EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);
 
 
-void __noreturn
-__ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
 {
 	unsigned long flags;
+60-79
lib/xarray.c
···
  * (see the xa_cmpxchg() implementation for an example).
  *
  * Return: If the slot already existed, returns the contents of this slot.
- * If the slot was newly created, returns NULL.  If it failed to create the
- * slot, returns NULL and indicates the error in @xas.
+ * If the slot was newly created, returns %NULL.  If it failed to create the
+ * slot, returns %NULL and indicates the error in @xas.
  */
 static void *xas_create(struct xa_state *xas)
 {
···
 	XA_STATE(xas, xa, index);
 	return xas_result(&xas, xas_store(&xas, NULL));
 }
-EXPORT_SYMBOL_GPL(__xa_erase);
+EXPORT_SYMBOL(__xa_erase);
 
 /**
- * xa_store() - Store this entry in the XArray.
+ * xa_erase() - Erase this entry from the XArray.
  * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
+ * @index: Index of entry.
  *
- * After this function returns, loads from this index will return @entry.
- * Storing into an existing multislot entry updates the entry of every index.
- * The marks associated with @index are unaffected unless @entry is %NULL.
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument.  The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
  *
- * Context: Process context.  Takes and releases the xa_lock.  May sleep
- * if the @gfp flags permit.
- * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
- * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
- * failed.
+ * Context: Any context.  Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
  */
-void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+void *xa_erase(struct xarray *xa, unsigned long index)
 {
-	XA_STATE(xas, xa, index);
-	void *curr;
+	void *entry;
 
-	if (WARN_ON_ONCE(xa_is_internal(entry)))
-		return XA_ERROR(-EINVAL);
+	xa_lock(xa);
+	entry = __xa_erase(xa, index);
+	xa_unlock(xa);
 
-	do {
-		xas_lock(&xas);
-		curr = xas_store(&xas, entry);
-		if (xa_track_free(xa) && entry)
-			xas_clear_mark(&xas, XA_FREE_MARK);
-		xas_unlock(&xas);
-	} while (xas_nomem(&xas, gfp));
-
-	return xas_result(&xas, curr);
+	return entry;
 }
-EXPORT_SYMBOL(xa_store);
+EXPORT_SYMBOL(xa_erase);
 
 /**
  * __xa_store() - Store this entry in the XArray.
···
 	if (WARN_ON_ONCE(xa_is_internal(entry)))
 		return XA_ERROR(-EINVAL);
+	if (xa_track_free(xa) && !entry)
+		entry = XA_ZERO_ENTRY;
 
 	do {
 		curr = xas_store(&xas, entry);
-		if (xa_track_free(xa) && entry)
+		if (xa_track_free(xa))
 			xas_clear_mark(&xas, XA_FREE_MARK);
 	} while (__xas_nomem(&xas, gfp));
···
 EXPORT_SYMBOL(__xa_store);
 
 /**
- * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * xa_store() - Store this entry in the XArray.
  * @xa: XArray.
  * @index: Index into array.
- * @old: Old value to test against.
- * @entry: New value to place in array.
+ * @entry: New entry.
  * @gfp: Memory allocation flags.
  *
- * If the entry at @index is the same as @old, replace it with @entry.
- * If the return value is equal to @old, then the exchange was successful.
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multislot entry updates the entry of every index.
+ * The marks associated with @index are unaffected unless @entry is %NULL.
  *
- * Context: Process context.  Takes and releases the xa_lock.  May sleep
- * if the @gfp flags permit.
- * Return: The old value at this index or xa_err() if an error happened.
+ * Context: Any context.  Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
  */
-void *xa_cmpxchg(struct xarray *xa, unsigned long index,
-			void *old, void *entry, gfp_t gfp)
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 {
-	XA_STATE(xas, xa, index);
 	void *curr;
 
-	if (WARN_ON_ONCE(xa_is_internal(entry)))
-		return XA_ERROR(-EINVAL);
+	xa_lock(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock(xa);
 
-	do {
-		xas_lock(&xas);
-		curr = xas_load(&xas);
-		if (curr == XA_ZERO_ENTRY)
-			curr = NULL;
-		if (curr == old) {
-			xas_store(&xas, entry);
-			if (xa_track_free(xa) && entry)
-				xas_clear_mark(&xas, XA_FREE_MARK);
-		}
-		xas_unlock(&xas);
-	} while (xas_nomem(&xas, gfp));
-
-	return xas_result(&xas, curr);
+	return curr;
 }
-EXPORT_SYMBOL(xa_cmpxchg);
+EXPORT_SYMBOL(xa_store);
 
 /**
  * __xa_cmpxchg() - Store this entry in the XArray.
···
 	if (WARN_ON_ONCE(xa_is_internal(entry)))
 		return XA_ERROR(-EINVAL);
+	if (xa_track_free(xa) && !entry)
+		entry = XA_ZERO_ENTRY;
 
 	do {
 		curr = xas_load(&xas);
···
 			curr = NULL;
 		if (curr == old) {
 			xas_store(&xas, entry);
-			if (xa_track_free(xa) && entry)
+			if (xa_track_free(xa))
 				xas_clear_mark(&xas, XA_FREE_MARK);
 		}
 	} while (__xas_nomem(&xas, gfp));
···
 EXPORT_SYMBOL(__xa_cmpxchg);
 
 /**
- * xa_reserve() - Reserve this index in the XArray.
+ * __xa_reserve() - Reserve this index in the XArray.
  * @xa: XArray.
  * @index: Index into array.
  * @gfp: Memory allocation flags.
···
  * Ensures there is somewhere to store an entry at @index in the array.
  * If there is already something stored at @index, this function does
  * nothing.  If there was nothing there, the entry is marked as reserved.
- * Loads from @index will continue to see a %NULL pointer until a
- * subsequent store to @index.
+ * Loading from a reserved entry returns a %NULL pointer.
  *
  * If you do not use the entry that you have reserved, call xa_release()
  * or xa_erase() to free any unnecessary memory.
  *
- * Context: Process context.  Takes and releases the xa_lock, IRQ or BH safe
- * if specified in XArray flags.  May sleep if the @gfp flags permit.
+ * Context: Any context.  Expects the xa_lock to be held on entry.  May
+ * release the lock, sleep and reacquire the lock if the @gfp flags permit.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
 	XA_STATE(xas, xa, index);
-	unsigned int lock_type = xa_lock_type(xa);
 	void *curr;
 
 	do {
-		xas_lock_type(&xas, lock_type);
 		curr = xas_load(&xas);
-		if (!curr)
+		if (!curr) {
 			xas_store(&xas, XA_ZERO_ENTRY);
-		xas_unlock_type(&xas, lock_type);
-	} while (xas_nomem(&xas, gfp));
+			if (xa_track_free(xa))
+				xas_clear_mark(&xas, XA_FREE_MARK);
+		}
+	} while (__xas_nomem(&xas, gfp));
 
 	return xas_error(&xas);
 }
-EXPORT_SYMBOL(xa_reserve);
+EXPORT_SYMBOL(__xa_reserve);
 
 #ifdef CONFIG_XARRAY_MULTI
 static void xas_set_range(struct xa_state *xas, unsigned long first,
···
 	do {
 		xas_lock(&xas);
 		if (entry) {
-			unsigned int order = (last == ~0UL) ? 64 :
-						ilog2(last + 1);
+			unsigned int order = BITS_PER_LONG;
+			if (last + 1)
+				order = __ffs(last + 1);
 			xas_set_order(&xas, last, order);
 			xas_create(&xas);
 			if (xas_error(&xas))
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Any context.  Expects xa_lock to be held on entry.
  */
···
 	if (entry)
 		xas_set_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_set_mark);
+EXPORT_SYMBOL(__xa_set_mark);
 
 /**
  * __xa_clear_mark() - Clear this mark on this entry while locked.
···
 	if (entry)
 		xas_clear_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_clear_mark);
+EXPORT_SYMBOL(__xa_clear_mark);
 
 /**
  * xa_get_mark() - Inquire whether this mark is set on this entry.
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Process context.  Takes and releases the xa_lock.
  */
···
 			entry = xas_find_marked(&xas, max, filter);
 		else
 			entry = xas_find(&xas, max);
+		if (xas.xa_node == XAS_BOUNDS)
+			break;
 		if (xas.xa_shift) {
 			if (xas.xa_index & ((1UL << xas.xa_shift) - 1))
 				continue;
···
  *
  * The @filter may be an XArray mark value, in which case entries which are
  * marked with that mark will be copied.  It may also be %XA_PRESENT, in
- * which case all entries which are not NULL will be copied.
+ * which case all entries which are not %NULL will be copied.
  *
  * The entries returned may not represent a snapshot of the XArray at a
  * moment in time.  For example, if another thread stores to index 5, then
+8-2
mm/gup.c
···
  * @vma: vm_area_struct mapping @address
  * @address: virtual address to look up
  * @flags: flags modifying lookup behaviour
- * @page_mask: on output, *page_mask is set according to the size of the page
+ * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a
+ *       pointer to output page_mask
  *
  * @flags can have FOLL_ flags set, defined in <linux/mm.h>
  *
- * Returns the mapped (struct page *), %NULL if no mapping exists, or
+ * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches
+ * the device's dev_pagemap metadata to avoid repeating expensive lookups.
+ *
+ * On output, the @ctx->page_mask is set according to the size of the page.
+ *
+ * Return: the mapped (struct page *), %NULL if no mapping exists, or
  * an error pointer if there is a mapping to something not represented
  * by a page descriptor (see also vm_normal_page()).
  */
+19-4
mm/hugetlb.c
···
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			    struct vm_area_struct *vma)
 {
-	pte_t *src_pte, *dst_pte, entry;
+	pte_t *src_pte, *dst_pte, entry, dst_entry;
 	struct page *ptepage;
 	unsigned long addr;
 	int cow;
···
 			break;
 		}
 
-		/* If the pagetables are shared don't copy or take references */
-		if (dst_pte == src_pte)
+		/*
+		 * If the pagetables are shared don't copy or take references.
+		 * dst_pte == src_pte is the common case of src/dest sharing.
+		 *
+		 * However, src could have 'unshared' and dst shares with
+		 * another vma.  If dst_pte !none, this implies sharing.
+		 * Check here before taking page table lock, and once again
+		 * after taking the lock below.
+		 */
+		dst_entry = huge_ptep_get(dst_pte);
+		if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
 			continue;
 
 		dst_ptl = huge_pte_lock(h, dst, dst_pte);
 		src_ptl = huge_pte_lockptr(h, src, src_pte);
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 		entry = huge_ptep_get(src_pte);
-		if (huge_pte_none(entry)) { /* skip none entry */
+		dst_entry = huge_ptep_get(dst_pte);
+		if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
+			/*
+			 * Skip if src entry none.  Also, skip in the
+			 * unlikely case dst entry !none as this implies
+			 * sharing with another vma.
+			 */
 			;
 		} else if (unlikely(is_hugetlb_entry_migration(entry) ||
 				    is_hugetlb_entry_hwpoisoned(entry))) {
+1-1
mm/memblock.c
···
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /*
- * Common iterator interface used to define for_each_mem_range().
+ * Common iterator interface used to define for_each_mem_pfn_range().
  */
 void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 					  unsigned long *out_start_pfn,
+17-11
mm/page_alloc.c
···
 	int reserve_flags;
 
 	/*
-	 * In the slowpath, we sanity check order to avoid ever trying to
-	 * reclaim >= MAX_ORDER areas which will never succeed. Callers may
-	 * be using allocators in order of preference for an area that is
-	 * too large.
-	 */
-	if (order >= MAX_ORDER) {
-		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
-		return NULL;
-	}
-
-	/*
 	 * We also sanity check to catch abuse of atomic reserves being used by
 	 * callers that are not in atomic context.
 	 */
···
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
 	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = { };
 
+	/*
+	 * There are several places where we assume that the order value is sane
+	 * so bail out early if the request is out of bound.
+	 */
+	if (unlikely(order >= MAX_ORDER)) {
+		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
+		return NULL;
+	}
+
 	gfp_mask &= gfp_allowed_mask;
 	alloc_mask = gfp_mask;
···
 
 		if (PageReserved(page))
 			goto unmovable;
+
+		/*
+		 * If the zone is movable and we have ruled out all reserved
+		 * pages then it should be reasonably safe to assume the rest
+		 * is movable.
+		 */
+		if (zone_idx(zone) == ZONE_MOVABLE)
+			continue;
 
 		/*
 		 * Hugepages are not in LRU lists, but they're movable.
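The page allocator change moves the `order >= MAX_ORDER` sanity check from the slowpath to the single entry point, so every path below may assume the order is sane. A hedged userspace sketch of the pattern, validating at the entry point of an API (`MY_MAX_ORDER` and `alloc_pages_sketch` are inventions of this sketch):

```c
#include <assert.h>
#include <stddef.h>

#define MY_MAX_ORDER 11	/* stand-in for the kernel's MAX_ORDER */

/*
 * Reject an out-of-bounds order at the one public entry point, as the
 * fix does, rather than deep inside a path some callers never reach.
 */
static void *alloc_pages_sketch(unsigned int order)
{
	static char pool[1 << 12];	/* pretend backing storage */

	if (order >= MY_MAX_ORDER)
		return NULL;	/* bail out early; callers handle NULL */
	return pool;		/* pretend the allocation succeeded */
}
```

Validating once at the boundary is the point of the fix: before it, fastpath-only callers could pass an insane order and hit code that assumed it had already been checked.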
+1-3
mm/shmem.c
···
 	inode_lock(inode);
 	/* We're holding i_mutex so we can access i_size directly */
 
-	if (offset < 0)
-		offset = -EINVAL;
-	else if (offset >= inode->i_size)
+	if (offset < 0 || offset >= inode->i_size)
 		offset = -ENXIO;
 	else {
 		start = offset >> PAGE_SHIFT;
···
 
 		/*
 		 * The fast way of checking if there are any vmstat diffs.
-		 * This works because the diffs are byte sized items.
 		 */
-		if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS))
+		if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *
+			       sizeof(p->vm_stat_diff[0])))
 			return true;
 #ifdef CONFIG_NUMA
-		if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS))
+		if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *
+			       sizeof(p->vm_numa_stat_diff[0])))
 			return true;
 #endif
 	}
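The vmstat bug is an element-count-versus-byte-count confusion: `memchr_inv()` takes a length in bytes, so passing only the item count scans just the first N bytes and misses diffs in later elements once the items are wider than a byte. A self-contained demonstration, using a local stand-in for the kernel's `memchr_inv()`:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Minimal userspace equivalent of the kernel's memchr_inv(): return a
 * pointer to the first of n bytes that differs from c, or NULL if all
 * n bytes equal c.
 */
static const void *memchr_inv_sketch(const void *p, int c, size_t n)
{
	const unsigned char *s = p;

	for (size_t i = 0; i < n; i++)
		if (s[i] != (unsigned char)c)
			return s + i;
	return NULL;
}
```

With a `long diff[4]` whose only nonzero element is `diff[3]`, scanning `4` bytes (the buggy length) reports "all zero", while scanning `4 * sizeof(long)` bytes (the fixed length) finds the diff.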
+63-40
mm/z3fold.c
···
 #define NCHUNKS		((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
 
 #define BUDDY_MASK	(0x3)
+#define BUDDY_SHIFT	2
 
 /**
  * struct z3fold_pool - stores metadata for each z3fold pool
···
 	MIDDLE_CHUNK_MAPPED,
 	NEEDS_COMPACTING,
 	PAGE_STALE,
-	UNDER_RECLAIM
+	PAGE_CLAIMED, /* by either reclaim or free */
 };
 
 /*****************
···
 	clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
 	clear_bit(NEEDS_COMPACTING, &page->private);
 	clear_bit(PAGE_STALE, &page->private);
-	clear_bit(UNDER_RECLAIM, &page->private);
+	clear_bit(PAGE_CLAIMED, &page->private);
 
 	spin_lock_init(&zhdr->page_lock);
 	kref_init(&zhdr->refcount);
···
 	unsigned long handle;
 
 	handle = (unsigned long)zhdr;
-	if (bud != HEADLESS)
-		handle += (bud + zhdr->first_num) & BUDDY_MASK;
+	if (bud != HEADLESS) {
+		handle |= (bud + zhdr->first_num) & BUDDY_MASK;
+		if (bud == LAST)
+			handle |= (zhdr->last_chunks << BUDDY_SHIFT);
+	}
 	return handle;
 }
···
 static struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
 {
 	return (struct z3fold_header *)(handle & PAGE_MASK);
+}
+
+/* only for LAST bud, returns zero otherwise */
+static unsigned short handle_to_chunks(unsigned long handle)
+{
+	return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
 }
 
 /*
···
 	page = virt_to_page(zhdr);
 
 	if (test_bit(PAGE_HEADLESS, &page->private)) {
-		/* HEADLESS page stored */
-		bud = HEADLESS;
-	} else {
-		z3fold_page_lock(zhdr);
-		bud = handle_to_buddy(handle);
-
-		switch (bud) {
-		case FIRST:
-			zhdr->first_chunks = 0;
-			break;
-		case MIDDLE:
-			zhdr->middle_chunks = 0;
-			zhdr->start_middle = 0;
-			break;
-		case LAST:
-			zhdr->last_chunks = 0;
-			break;
-		default:
-			pr_err("%s: unknown bud %d\n", __func__, bud);
-			WARN_ON(1);
-			z3fold_page_unlock(zhdr);
-			return;
+		/* if a headless page is under reclaim, just leave.
+		 * NB: we use test_and_set_bit for a reason: if the bit
+		 * has not been set before, we release this page
+		 * immediately so we don't care about its value any more.
+		 */
+		if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+			spin_lock(&pool->lock);
+			list_del(&page->lru);
+			spin_unlock(&pool->lock);
+			free_z3fold_page(page);
+			atomic64_dec(&pool->pages_nr);
 		}
+		return;
 	}
 
-	if (bud == HEADLESS) {
-		spin_lock(&pool->lock);
-		list_del(&page->lru);
-		spin_unlock(&pool->lock);
-		free_z3fold_page(page);
-		atomic64_dec(&pool->pages_nr);
+	/* Non-headless case */
+	z3fold_page_lock(zhdr);
+	bud = handle_to_buddy(handle);
+
+	switch (bud) {
+	case FIRST:
+		zhdr->first_chunks = 0;
+		break;
+	case MIDDLE:
+		zhdr->middle_chunks = 0;
+		break;
+	case LAST:
+		zhdr->last_chunks = 0;
+		break;
+	default:
+		pr_err("%s: unknown bud %d\n", __func__, bud);
+		WARN_ON(1);
+		z3fold_page_unlock(zhdr);
 		return;
 	}
···
 		atomic64_dec(&pool->pages_nr);
 		return;
 	}
-	if (test_bit(UNDER_RECLAIM, &page->private)) {
+	if (test_bit(PAGE_CLAIMED, &page->private)) {
 		z3fold_page_unlock(zhdr);
 		return;
 	}
···
 	}
 	list_for_each_prev(pos, &pool->lru) {
 		page = list_entry(pos, struct page, lru);
-		if (test_bit(PAGE_HEADLESS, &page->private))
-			/* candidate found */
-			break;
+
+		/* this bit could have been set by free, in which case
+		 * we pass over to the next page in the pool.
+		 */
+		if (test_and_set_bit(PAGE_CLAIMED, &page->private))
+			continue;
 
 		zhdr = page_address(page);
-		if (!z3fold_page_trylock(zhdr))
+		if (test_bit(PAGE_HEADLESS, &page->private))
+			break;
+
+		if (!z3fold_page_trylock(zhdr)) {
+			zhdr = NULL;
 			continue; /* can't evict at this point */
+		}
 		kref_get(&zhdr->refcount);
 		list_del_init(&zhdr->buddy);
 		zhdr->cpu = -1;
-		set_bit(UNDER_RECLAIM, &page->private);
 		break;
 	}
+
+	if (!zhdr)
+		break;
 
 	list_del_init(&page->lru);
 	spin_unlock(&pool->lock);
···
 	if (test_bit(PAGE_HEADLESS, &page->private)) {
 		if (ret == 0) {
 			free_z3fold_page(page);
+			atomic64_dec(&pool->pages_nr);
 			return 0;
 		}
 		spin_lock(&pool->lock);
···
 		spin_unlock(&pool->lock);
 	} else {
 		z3fold_page_lock(zhdr);
-		clear_bit(UNDER_RECLAIM, &page->private);
+		clear_bit(PAGE_CLAIMED, &page->private);
 		if (kref_put(&zhdr->refcount,
 				release_z3fold_page_locked)) {
 			atomic64_dec(&pool->pages_nr);
···
 		set_bit(MIDDLE_CHUNK_MAPPED, &page->private);
 		break;
 	case LAST:
-		addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
+		addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT);
 		break;
 	default:
 		pr_err("unknown buddy id %d\n", buddy);
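Part of the z3fold fix changes the handle encoding: the low two sub-page bits carry the buddy id, and for the LAST buddy the chunk count is packed into the remaining sub-page bits so mapping no longer depends on a header field that compaction may change underneath. A userspace sketch of that bit packing (the `_sk` names and the 4 KiB page constant are inventions of this sketch):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_MASK_SK   (~(uintptr_t)0xfff)	/* assume 4 KiB pages */
#define BUDDY_MASK_SK  0x3
#define BUDDY_SHIFT_SK 2

/*
 * Pack a page address, a 2-bit buddy id, and a chunk count into one
 * handle word, mirroring the shape of the kernel change.
 */
static uintptr_t encode_handle_sk(uintptr_t page_addr, unsigned int bud,
				  unsigned int last_chunks)
{
	uintptr_t handle = page_addr;

	handle |= bud & BUDDY_MASK_SK;
	handle |= (uintptr_t)last_chunks << BUDDY_SHIFT_SK;
	return handle;
}

/* Recover the chunk count from the sub-page bits above the buddy id. */
static unsigned int handle_to_chunks_sk(uintptr_t handle)
{
	return (handle & ~PAGE_MASK_SK) >> BUDDY_SHIFT_SK;
}
```

Because the buddy bits sit below `BUDDY_SHIFT_SK`, the shift in the decoder discards them, so both fields round-trip independently.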
···
 	kfree(entry);
 
 	packet = (struct batadv_frag_packet *)skb_out->data;
-	size = ntohs(packet->total_size);
+	size = ntohs(packet->total_size) + hdr_size;
 
 	/* Make room for the rest of the fragments. */
 	if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
+7
net/bridge/br_private.h
···
 	struct metadata_dst	*tunnel_dst;
 };
 
+/* private vlan flags */
+enum {
+	BR_VLFLAG_PER_PORT_STATS = BIT(0),
+};
+
 /**
  * struct net_bridge_vlan - per-vlan entry
  *
  * @vnode: rhashtable member
  * @vid: VLAN id
  * @flags: bridge vlan flags
+ * @priv_flags: private (in-kernel) bridge vlan flags
  * @stats: per-cpu VLAN statistics
  * @br: if MASTER flag set, this points to a bridge struct
  * @port: if MASTER flag unset, this points to a port struct
···
 	struct rhash_head	tnode;
 	u16			vid;
 	u16			flags;
+	u16			priv_flags;
 	struct br_vlan_stats __percpu	*stats;
 	union {
 		struct net_bridge	*br;
+2-1
net/bridge/br_vlan.c
···
 	v = container_of(rcu, struct net_bridge_vlan, rcu);
 	WARN_ON(br_vlan_is_master(v));
 	/* if we had per-port stats configured then free them here */
-	if (v->brvlan->stats != v->stats)
+	if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS)
 		free_percpu(v->stats);
 	v->stats = NULL;
 	kfree(v);
···
 			err = -ENOMEM;
 			goto out_filt;
 		}
+		v->priv_flags |= BR_VLFLAG_PER_PORT_STATS;
 	} else {
 		v->stats = masterv->stats;
 	}
+9-8
net/can/raw.c
···
 	} else
 		ifindex = ro->ifindex;
 
-	if (ro->fd_frames) {
-		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
-			return -EINVAL;
-	} else {
-		if (unlikely(size != CAN_MTU))
-			return -EINVAL;
-	}
-
 	dev = dev_get_by_index(sock_net(sk), ifindex);
 	if (!dev)
 		return -ENXIO;
 
+	err = -EINVAL;
+	if (ro->fd_frames && dev->mtu == CANFD_MTU) {
+		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
+			goto put_dev;
+	} else {
+		if (unlikely(size != CAN_MTU))
+			goto put_dev;
+	}
 
 	skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
 				  msg->msg_flags & MSG_DONTWAIT, &err);
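Because the size check now runs after `dev_get_by_index()` has taken a device reference, its error paths use `goto put_dev` rather than a bare `return`, so the reference is always dropped. A hedged sketch of that discipline with a plain counter standing in for the refcount (all names here are illustrative, not the kernel's):

```c
#include <assert.h>

static int refs;	/* stand-in for the device refcount */

static void dev_get_sk(void) { refs++; }
static void dev_put_sk(void) { refs--; }

/*
 * Once the reference is taken, every exit path must pass through
 * put_dev; an early return between get and put would leak the ref.
 */
static int send_frame_sk(int size, int mtu_ok)
{
	int err;

	dev_get_sk();
	err = -22;	/* -EINVAL */
	if (!mtu_ok || size != 16)
		goto put_dev;
	err = 0;	/* pretend the frame was queued */
put_dev:
	dev_put_sk();
	return err;
}
```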
+9-3
net/ceph/messenger.c
···580580 struct bio_vec bvec;581581 int ret;582582583583- /* sendpage cannot properly handle pages with page_count == 0,584584- * we need to fallback to sendmsg if that's the case */585585- if (page_count(page) >= 1)583583+ /*584584+ * sendpage cannot properly handle pages with page_count == 0,585585+ * we need to fall back to sendmsg if that's the case.586586+ *587587+ * Same goes for slab pages: skb_can_coalesce() allows588588+ * coalescing neighboring slab objects into a single frag which589589+ * triggers one of hardened usercopy checks.590590+ */591591+ if (page_count(page) >= 1 && !PageSlab(page))586592 return __ceph_tcp_sendpage(sock, page, offset, size, more);587593588594 bvec.bv_page = page;
+9-2
net/core/dev.c
···56555655 skb->vlan_tci = 0;56565656 skb->dev = napi->dev;56575657 skb->skb_iif = 0;56585658+56595659+ /* eth_type_trans() assumes pkt_type is PACKET_HOST */56605660+ skb->pkt_type = PACKET_HOST;56615661+56585662 skb->encapsulation = 0;56595663 skb_shinfo(skb)->gso_type = 0;56605664 skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));···59705966 if (work_done)59715967 timeout = n->dev->gro_flush_timeout;5972596859695969+ /* When the NAPI instance uses a timeout and keeps postponing59705970+ * it, we need to bound somehow the time packets are kept in59715971+ * the GRO layer59725972+ */59735973+ napi_gro_flush(n, !!timeout);59735974 if (timeout)59745975 hrtimer_start(&n->timer, ns_to_ktime(timeout),59755976 HRTIMER_MODE_REL_PINNED);59765976- else59775977- napi_gro_flush(n, false);59785977 }59795978 if (unlikely(!list_empty(&n->poll_list))) {59805979 /* If n->poll_list is not empty, we need to mask irqs */
···740740741741 bh_lock_sock(sk);742742 if (!sock_owned_by_user(sk)) {743743- if (tp->compressed_ack)743743+ if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)744744 tcp_send_ack(sk);745745 } else {746746 if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED,
···375375 * getting ACKs from the server. Returns a number representing the life state376376 * which can be compared to that returned by a previous call.377377 *378378- * If this is a client call, ping ACKs will be sent to the server to find out379379- * whether it's still responsive and whether the call is still alive on the380380- * server.378378+ * If the life state stalls, rxrpc_kernel_probe_life() should be called and379379+ * then 2RTT waited.381380 */382382-u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call)381381+u32 rxrpc_kernel_check_life(const struct socket *sock,382382+ const struct rxrpc_call *call)383383{384384 return call->acks_latest;385385}386386EXPORT_SYMBOL(rxrpc_kernel_check_life);387387+388388+/**389389+ * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive390390+ * @sock: The socket the call is on391391+ * @call: The call to check392392+ *393393+ * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to394394+ * find out whether a call is still alive by pinging it. This should cause the395395+ * life state to be bumped in about 2*RTT.396396+ *397397+ * This must be called in TASK_RUNNING state on pain of might_sleep() objecting.398398+ */399399+void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call)400400+{401401+ rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false,402402+ rxrpc_propose_ack_ping_for_check_life);403403+ rxrpc_send_ack_packet(call, true, NULL);404404+}405405+EXPORT_SYMBOL(rxrpc_kernel_probe_life);387406388407/**389408 * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
···2727 u32 tcfp_ewma_rate;2828 s64 tcfp_burst;2929 u32 tcfp_mtu;3030- s64 tcfp_toks;3131- s64 tcfp_ptoks;3230 s64 tcfp_mtu_ptoks;3333- s64 tcfp_t_c;3431 struct psched_ratecfg rate;3532 bool rate_present;3633 struct psched_ratecfg peak;···3841struct tcf_police {3942 struct tc_action common;4043 struct tcf_police_params __rcu *params;4444+4545+ spinlock_t tcfp_lock ____cacheline_aligned_in_smp;4646+ s64 tcfp_toks;4747+ s64 tcfp_ptoks;4848+ s64 tcfp_t_c;4149};42504351#define to_police(pc) ((struct tcf_police *)pc)···124122 return ret;125123 }126124 ret = ACT_P_CREATED;125125+ spin_lock_init(&(to_police(*a)->tcfp_lock));127126 } else if (!ovr) {128127 tcf_idr_release(*a, bind);129128 return -EEXIST;···189186 }190187191188 new->tcfp_burst = PSCHED_TICKS2NS(parm->burst);192192- new->tcfp_toks = new->tcfp_burst;193193- if (new->peak_present) {189189+ if (new->peak_present)194190 new->tcfp_mtu_ptoks = (s64)psched_l2t_ns(&new->peak,195191 new->tcfp_mtu);196196- new->tcfp_ptoks = new->tcfp_mtu_ptoks;197197- }198192199193 if (tb[TCA_POLICE_AVRATE])200194 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]);···207207 }208208209209 spin_lock_bh(&police->tcf_lock);210210- new->tcfp_t_c = ktime_get_ns();210210+ spin_lock_bh(&police->tcfp_lock);211211+ police->tcfp_t_c = ktime_get_ns();212212+ police->tcfp_toks = new->tcfp_burst;213213+ if (new->peak_present)214214+ police->tcfp_ptoks = new->tcfp_mtu_ptoks;215215+ spin_unlock_bh(&police->tcfp_lock);211216 police->tcf_action = parm->action;212217 rcu_swap_protected(police->params,213218 new,···262257 }263258264259 now = ktime_get_ns();265265- toks = min_t(s64, now - p->tcfp_t_c, p->tcfp_burst);260260+ spin_lock_bh(&police->tcfp_lock);261261+ toks = min_t(s64, now - police->tcfp_t_c, p->tcfp_burst);266262 if (p->peak_present) {267267- ptoks = toks + p->tcfp_ptoks;263263+ ptoks = toks + police->tcfp_ptoks;268264 if (ptoks > p->tcfp_mtu_ptoks)269265 ptoks = p->tcfp_mtu_ptoks;270266 ptoks -= (s64)psched_l2t_ns(&p->peak,271267 
qdisc_pkt_len(skb));272268 }273273- toks += p->tcfp_toks;269269+ toks += police->tcfp_toks;274270 if (toks > p->tcfp_burst)275271 toks = p->tcfp_burst;276272 toks -= (s64)psched_l2t_ns(&p->rate, qdisc_pkt_len(skb));277273 if ((toks|ptoks) >= 0) {278278- p->tcfp_t_c = now;279279- p->tcfp_toks = toks;280280- p->tcfp_ptoks = ptoks;274274+ police->tcfp_t_c = now;275275+ police->tcfp_toks = toks;276276+ police->tcfp_ptoks = ptoks;277277+ spin_unlock_bh(&police->tcfp_lock);281278 ret = p->tcfp_result;282279 goto inc_drops;283280 }281281+ spin_unlock_bh(&police->tcfp_lock);284282 }285283286284inc_overlimits:
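The policer above is a token bucket: tokens (measured as nanoseconds of transmit time) accrue with the wall clock, are capped at the burst size, and each conforming packet spends its length's worth. A minimal userspace sketch of that check, using an illustrative ns-per-byte rate model instead of the kernel's psched_ratecfg (all names here are hypothetical):

```c
#include <stdint.h>

/* Illustrative single-rate token bucket mirroring the toks logic in
 * tcf_police_act() above (the peak-rate ptoks path is omitted). */
struct tbucket {
	int64_t toks;        /* banked tokens, in nanoseconds */
	int64_t burst;       /* cap on accumulated tokens */
	int64_t last;        /* timestamp of last conforming packet */
	int64_t ns_per_byte; /* transmit cost of one byte at the rate */
};

/* Returns 1 if the packet conforms (and charges the bucket), 0 if not.
 * Non-conforming packets leave the bucket state untouched, exactly as
 * in the kernel code, which only writes back on the conforming path. */
static int tbucket_conform(struct tbucket *b, int64_t now, int64_t pkt_len)
{
	int64_t toks = now - b->last;

	if (toks > b->burst)
		toks = b->burst;
	toks += b->toks;
	if (toks > b->burst)
		toks = b->burst;
	toks -= pkt_len * b->ns_per_byte;
	if (toks < 0)
		return 0;
	b->last = now;
	b->toks = toks;
	return 1;
}
```

The patch's point is visible in the write-back step: `last`/`toks` are mutable shared state, which is why the kernel moved them out of the RCU-swapped params and under their own tcfp_lock.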
+18-11
net/sched/sch_fq.c
···469469 goto begin;470470 }471471 prefetch(&skb->end);472472- f->credit -= qdisc_pkt_len(skb);472472+ plen = qdisc_pkt_len(skb);473473+ f->credit -= plen;473474474474- if (ktime_to_ns(skb->tstamp) || !q->rate_enable)475475+ if (!q->rate_enable)475476 goto out;476477477478 rate = q->flow_max_rate;478478- if (skb->sk)479479- rate = min(skb->sk->sk_pacing_rate, rate);480479481481- if (rate <= q->low_rate_threshold) {482482- f->credit = 0;483483- plen = qdisc_pkt_len(skb);484484- } else {485485- plen = max(qdisc_pkt_len(skb), q->quantum);486486- if (f->credit > 0)487487- goto out;480480+ /* If EDT time was provided for this skb, we need to481481+ * update f->time_next_packet only if this qdisc enforces482482+ * a flow max rate.483483+ */484484+ if (!skb->tstamp) {485485+ if (skb->sk)486486+ rate = min(skb->sk->sk_pacing_rate, rate);487487+488488+ if (rate <= q->low_rate_threshold) {489489+ f->credit = 0;490490+ } else {491491+ plen = max(plen, q->quantum);492492+ if (f->credit > 0)493493+ goto out;494494+ }488495 }489496 if (rate != ~0UL) {490497 u64 len = (u64)plen * NSEC_PER_SEC;
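When fq does enforce a rate, the flow's next-transmit time advances by the packet's serialization delay at that rate, computed as `len * NSEC_PER_SEC / rate` as in the final hunk above. A small illustrative helper (the name is not the kernel's, and the kernel additionally clamps oversized results and skips the computation when rate is unset):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Nanoseconds a packet of plen bytes occupies on the wire at a pacing
 * rate of rate bytes/sec; rate must be nonzero. fq uses this delay to
 * advance f->time_next_packet between a flow's packets. */
static uint64_t pacing_delay_ns(uint64_t plen, uint64_t rate)
{
	return plen * NSEC_PER_SEC / rate;
}
```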
+4-20
net/sctp/output.c
···118118 sctp_transport_route(tp, NULL, sp);119119 if (asoc->param_flags & SPP_PMTUD_ENABLE)120120 sctp_assoc_sync_pmtu(asoc);121121+ } else if (!sctp_transport_pmtu_check(tp)) {122122+ if (asoc->param_flags & SPP_PMTUD_ENABLE)123123+ sctp_assoc_sync_pmtu(asoc);121124 }122125123126 if (asoc->pmtu_pending) {···399396 return retval;400397}401398402402-static void sctp_packet_release_owner(struct sk_buff *skb)403403-{404404- sk_free(skb->sk);405405-}406406-407407-static void sctp_packet_set_owner_w(struct sk_buff *skb, struct sock *sk)408408-{409409- skb_orphan(skb);410410- skb->sk = sk;411411- skb->destructor = sctp_packet_release_owner;412412-413413- /*414414- * The data chunks have already been accounted for in sctp_sendmsg(),415415- * therefore only reserve a single byte to keep socket around until416416- * the packet has been transmitted.417417- */418418- refcount_inc(&sk->sk_wmem_alloc);419419-}420420-421399static void sctp_packet_gso_append(struct sk_buff *head, struct sk_buff *skb)422400{423401 if (SCTP_OUTPUT_CB(head)->last == head)···585601 if (!head)586602 goto out;587603 skb_reserve(head, packet->overhead + MAX_HEADER);588588- sctp_packet_set_owner_w(head, sk);604604+ skb_set_owner_w(head, sk);589605590606 /* set sctp header */591607 sh = skb_push(head, sizeof(struct sctphdr));
···12391239 return &gss_auth->rpc_auth;12401240}1241124112421242+static struct gss_cred *12431243+gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred)12441244+{12451245+ struct gss_cred *new;12461246+12471247+ /* Make a copy of the cred so that we can reference count it */12481248+ new = kzalloc(sizeof(*gss_cred), GFP_NOIO);12491249+ if (new) {12501250+ struct auth_cred acred = {12511251+ .uid = gss_cred->gc_base.cr_uid,12521252+ };12531253+ struct gss_cl_ctx *ctx =12541254+ rcu_dereference_protected(gss_cred->gc_ctx, 1);12551255+12561256+ rpcauth_init_cred(&new->gc_base, &acred,12571257+ &gss_auth->rpc_auth,12581258+ &gss_nullops);12591259+ new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE;12601260+ new->gc_service = gss_cred->gc_service;12611261+ new->gc_principal = gss_cred->gc_principal;12621262+ kref_get(&gss_auth->kref);12631263+ rcu_assign_pointer(new->gc_ctx, ctx);12641264+ gss_get_ctx(ctx);12651265+ }12661266+ return new;12671267+}12681268+12421269/*12431243- * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call12701270+ * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call12441271 * to the server with the GSS control procedure field set to12451272 * RPC_GSS_PROC_DESTROY. 
This should normally cause the server to release12461273 * all RPCSEC_GSS state associated with that context.12471274 */12481248-static int12491249-gss_destroying_context(struct rpc_cred *cred)12751275+static void12761276+gss_send_destroy_context(struct rpc_cred *cred)12501277{12511278 struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base);12521279 struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth);12531280 struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1);12811281+ struct gss_cred *new;12541282 struct rpc_task *task;1255128312561256- if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0)12571257- return 0;12841284+ new = gss_dup_cred(gss_auth, gss_cred);12851285+ if (new) {12861286+ ctx->gc_proc = RPC_GSS_PROC_DESTROY;1258128712591259- ctx->gc_proc = RPC_GSS_PROC_DESTROY;12601260- cred->cr_ops = &gss_nullops;12881288+ task = rpc_call_null(gss_auth->client, &new->gc_base,12891289+ RPC_TASK_ASYNC|RPC_TASK_SOFT);12901290+ if (!IS_ERR(task))12911291+ rpc_put_task(task);1261129212621262- /* Take a reference to ensure the cred will be destroyed either12631263- * by the RPC call or by the put_rpccred() below */12641264- get_rpccred(cred);12651265-12661266- task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT);12671267- if (!IS_ERR(task))12681268- rpc_put_task(task);12691269-12701270- put_rpccred(cred);12711271- return 1;12931293+ put_rpccred(&new->gc_base);12941294+ }12721295}1273129612741297/* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure···13531330gss_destroy_cred(struct rpc_cred *cred)13541331{1355133213561356- if (gss_destroying_context(cred))13571357- return;13331333+ if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0)13341334+ gss_send_destroy_context(cred);13581335 gss_destroy_nullcred(cred);13591336}13601337
···166166167167 /* Apply trial address if we just left trial period */168168 if (!trial && !self) {169169- tipc_net_finalize(net, tn->trial_addr);169169+ tipc_sched_net_finalize(net, tn->trial_addr);170170+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);170171 msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);171172 }172173···301300 goto exit;302301 }303302304304- /* Trial period over ? */305305- if (!time_before(jiffies, tn->addr_trial_end)) {306306- /* Did we just leave it ? */307307- if (!tipc_own_addr(net))308308- tipc_net_finalize(net, tn->trial_addr);309309-310310- msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);311311- msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));303303+ /* Did we just leave trial period ? */304304+ if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {305305+ mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);306306+ spin_unlock_bh(&d->lock);307307+ tipc_sched_net_finalize(net, tn->trial_addr);308308+ return;312309 }313310314311 /* Adjust timeout interval according to discovery phase */···318319 d->timer_intv = TIPC_DISC_SLOW;319320 else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST)320321 d->timer_intv = TIPC_DISC_FAST;322322+ msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);323323+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);321324 }322325323326 mod_timer(&d->timer, jiffies + d->timer_intv);
+37-8
net/tipc/net.c
···104104 * - A local spin_lock protecting the queue of subscriber events.105105*/106106107107+struct tipc_net_work {108108+ struct work_struct work;109109+ struct net *net;110110+ u32 addr;111111+};112112+113113+static void tipc_net_finalize(struct net *net, u32 addr);114114+107115int tipc_net_init(struct net *net, u8 *node_id, u32 addr)108116{109117 if (tipc_own_id(net)) {···127119 return 0;128120}129121130130-void tipc_net_finalize(struct net *net, u32 addr)122122+static void tipc_net_finalize(struct net *net, u32 addr)131123{132124 struct tipc_net *tn = tipc_net(net);133125134134- if (!cmpxchg(&tn->node_addr, 0, addr)) {135135- tipc_set_node_addr(net, addr);136136- tipc_named_reinit(net);137137- tipc_sk_reinit(net);138138- tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,139139- TIPC_CLUSTER_SCOPE, 0, addr);140140- }126126+ if (cmpxchg(&tn->node_addr, 0, addr))127127+ return;128128+ tipc_set_node_addr(net, addr);129129+ tipc_named_reinit(net);130130+ tipc_sk_reinit(net);131131+ tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,132132+ TIPC_CLUSTER_SCOPE, 0, addr);133133+}134134+135135+static void tipc_net_finalize_work(struct work_struct *work)136136+{137137+ struct tipc_net_work *fwork;138138+139139+ fwork = container_of(work, struct tipc_net_work, work);140140+ tipc_net_finalize(fwork->net, fwork->addr);141141+ kfree(fwork);142142+}143143+144144+void tipc_sched_net_finalize(struct net *net, u32 addr)145145+{146146+ struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC);147147+148148+ if (!fwork)149149+ return;150150+ INIT_WORK(&fwork->work, tipc_net_finalize_work);151151+ fwork->net = net;152152+ fwork->addr = addr;153153+ schedule_work(&fwork->work);141154}142155143156void tipc_net_stop(struct net *net)
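tipc_sched_net_finalize() exists so the finalize step can be requested from atomic context (the discovery timer) but executed later from process context. A userspace sketch of the same defer-and-run pattern, with hypothetical names and a synchronous run_work() standing in for the kernel's workqueue:

```c
#include <stdlib.h>

/* Package the finalize arguments into a heap-allocated work item,
 * mirroring struct tipc_net_work above; the caller schedules it and
 * some later context runs and frees it. */
struct net_work {
	void (*fn)(unsigned int addr, unsigned int *node_addr);
	unsigned int addr;
	unsigned int *node_addr;
};

static struct net_work *sched_finalize(void (*fn)(unsigned int, unsigned int *),
				       unsigned int addr, unsigned int *node_addr)
{
	struct net_work *w = calloc(1, sizeof(*w));

	if (!w)
		return NULL;
	w->fn = fn;
	w->addr = addr;
	w->node_addr = node_addr;
	return w;
}

static void run_work(struct net_work *w)
{
	w->fn(w->addr, w->node_addr);
	free(w);
}

/* Example finalize step: publish the address exactly once, like the
 * cmpxchg(&tn->node_addr, 0, addr) guard in tipc_net_finalize(). */
static void finalize(unsigned int addr, unsigned int *node_addr)
{
	if (*node_addr == 0)
		*node_addr = addr;
}
```

The one-shot cmpxchg-style guard in finalize() is what makes the deferral safe: even if two work items get scheduled, only the first to run publishes an address.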
+1-1
net/tipc/net.h
···4242extern const struct nla_policy tipc_nl_net_policy[];43434444int tipc_net_init(struct net *net, u8 *node_id, u32 addr);4545-void tipc_net_finalize(struct net *net, u32 addr);4545+void tipc_sched_net_finalize(struct net *net, u32 addr);4646void tipc_net_stop(struct net *net);4747int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);4848int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
+11-4
net/tipc/socket.c
···15551555/**15561556 * tipc_sk_anc_data_recv - optionally capture ancillary data for received message15571557 * @m: descriptor for message info15581558- * @msg: received message header15581558+ * @skb: received message buffer15591559 * @tsk: TIPC port associated with message15601560 *15611561 * Note: Ancillary data is not captured if not requested by receiver.15621562 *15631563 * Returns 0 if successful, otherwise errno15641564 */15651565-static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,15651565+static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,15661566 struct tipc_sock *tsk)15671567{15681568+ struct tipc_msg *msg;15681569 u32 anc_data[3];15691570 u32 err;15701571 u32 dest_type;···1574157315751574 if (likely(m->msg_controllen == 0))15761575 return 0;15761576+ msg = buf_msg(skb);1577157715781578 /* Optionally capture errored message object(s) */15791579 err = msg ? msg_errcode(msg) : 0;···15851583 if (res)15861584 return res;15871585 if (anc_data[1]) {15861586+ if (skb_linearize(skb))15871587+ return -ENOMEM;15881588+ msg = buf_msg(skb);15881589 res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1],15891590 msg_data(msg));15901591 if (res)···1749174417501745 /* Collect msg meta data, including error code and rejected data */17511746 tipc_sk_set_orig_addr(m, skb);17521752- rc = tipc_sk_anc_data_recv(m, hdr, tsk);17471747+ rc = tipc_sk_anc_data_recv(m, skb, tsk);17531748 if (unlikely(rc))17541749 goto exit;17501750+ hdr = buf_msg(skb);1755175117561752 /* Capture data if non-error msg, otherwise just set return value */17571753 if (likely(!err)) {···18621856 /* Collect msg meta data, incl. 
error code and rejected data */18631857 if (!copied) {18641858 tipc_sk_set_orig_addr(m, skb);18651865- rc = tipc_sk_anc_data_recv(m, hdr, tsk);18591859+ rc = tipc_sk_anc_data_recv(m, skb, tsk);18661860 if (rc)18671861 break;18621862+ hdr = buf_msg(skb);18681863 }1869186418701865 /* Copy data if msg ok, otherwise return error/partial data */
+1-1
scripts/faddr2line
···71717272# Try to figure out the source directory prefix so we can remove it from the7373# addr2line output. HACK ALERT: This assumes that start_kernel() is in7474-# kernel/init.c! This only works for vmlinux. Otherwise it falls back to7474+# init/main.c! This only works for vmlinux. Otherwise it falls back to7575# printing the absolute path.7676find_dir_prefix() {7777 local objfile=$1
-1
scripts/spdxcheck.py
···168168 self.curline = 0169169 try:170170 for line in fd:171171- line = line.decode(locale.getpreferredencoding(False), errors='ignore')172171 self.curline += 1173172 if self.curline > maxlines:174173 break
···53185318 addr_buf = address;5319531953205320 while (walk_size < addrlen) {53215321+ if (walk_size + sizeof(sa_family_t) > addrlen)53225322+ return -EINVAL;53235323+53215324 addr = addr_buf;53225325 switch (addr->sa_family) {53235326 case AF_UNSPEC:
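The added check ensures that at least sizeof(sa_family_t) bytes remain in the caller-supplied buffer before the loop dereferences addr->sa_family. Reduced to a standalone predicate (the -EINVAL value is hard-coded here and the helper is illustrative, not the kernel's):

```c
#include <stddef.h>

typedef unsigned short sa_family_t; /* simplified from the kernel's type */

/* Returns 0 when the sockaddr at offset walk_size has a readable
 * sa_family field, -22 (-EINVAL) when reading it would run past the
 * end of the addrlen-byte buffer. */
static int walk_addr_ok(size_t walk_size, size_t addrlen)
{
	if (walk_size + sizeof(sa_family_t) > addrlen)
		return -22;
	return 0;
}
```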
+7-3
security/selinux/ss/mls.c
···245245 char *rangep[2];246246247247 if (!pol->mls_enabled) {248248- if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0')249249- return 0;250250- return -EINVAL;248248+ /*249249+ * With no MLS, only return -EINVAL if there is a MLS field250250+ * and it did not come from an xattr.251251+ */252252+ if (oldc && def_sid == SECSID_NULL)253253+ return -EINVAL;254254+ return 0;251255 }252256253257 /*
···28282929 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s",3030 cpu, fname);3131- return sysfs_read_file(path, buf, buflen);3131+ return cpupower_read_sysfs(path, buf, buflen);3232}33333434/* helper function to write a new value to a /sys file */