···
a governor ``sysfs`` interface to it.  Next, the governor is started by
invoking its ``->start()`` callback.

-That callback it expected to register per-CPU utilization update callbacks for
+That callback is expected to register per-CPU utilization update callbacks for
all of the online CPUs belonging to the given policy with the CPU scheduler.
The utilization update callbacks will be invoked by the CPU scheduler on
important events, like task enqueue and dequeue, on every iteration of the
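The registration pattern described above can be modelled in a few lines of plain C. This is an illustrative userspace sketch, not the kernel interface: the "governor" installs one callback per CPU, and the "scheduler" invokes the registered callback for a CPU when a utilization-changing event (such as task enqueue or dequeue) occurs. All names here are made up for the sketch.

```c
#include <assert.h>
#include <stddef.h>

#define NCPUS 4

struct update_util_hook {
        void (*func)(struct update_util_hook *h, unsigned int util);
};

/* one registered hook per CPU, NULL when no governor is attached */
static struct update_util_hook *hooks[NCPUS];

/* done by the governor's ->start() for each CPU of the policy */
static void register_hook(int cpu, struct update_util_hook *h)
{
        hooks[cpu] = h;
}

/* scheduler side: fired on utilization-changing events */
static void sched_util_changed(int cpu, unsigned int util)
{
        if (hooks[cpu])
                hooks[cpu]->func(hooks[cpu], util);
}

/* a trivial governor callback that just records the last utilization */
static unsigned int last_util;
static void gov_update(struct update_util_hook *h, unsigned int util)
{
        (void)h;
        last_util = util;
}
```

The essential property the text describes survives the simplification: events on CPUs without a registered hook are ignored, and the governor only sees updates for CPUs it registered for.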
+5-3
Documentation/cpu-freq/cpufreq-stats.txt
···
This will give a fine grained information about all the CPU frequency
transitions. The cat output here is a two dimensional matrix, where an entry
<i,j> (row i, column j) represents the count of number of transitions from
-Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
-Freq_j is in descending order with increasing columns. The output here also
-contains the actual freq values for each row and column for better readability.
+Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
+which the driver has provided the frequency table initially to the cpufreq core
+and so can be sorted (ascending or descending) or unsorted. The output here
+also contains the actual freq values for each row and column for better
+readability.

If the transition table is bigger than PAGE_SIZE, reading this will
return an -EFBIG error.
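Once the table has been parsed, interpreting it is a simple flat-matrix lookup. The sketch below assumes the counts have already been read into an n*n array in the driver-provided frequency-table order (which, per the change above, may be ascending, descending, or unsorted); `freqs[]` and `counts[]` are hypothetical inputs, and the sysfs parsing itself is omitted.

```c
#include <assert.h>
#include <stddef.h>

/* entry <i,j>: number of transitions from freqs[i] to freqs[j] */
static unsigned long trans_count(const unsigned long *counts, size_t n,
                                 size_t from, size_t to)
{
        return counts[from * n + to];
}

/* linear scan on purpose: the table order is not guaranteed sorted,
 * so a binary search over freqs[] would be incorrect */
static long freq_index(const unsigned int *freqs, size_t n, unsigned int khz)
{
        size_t i;

        for (i = 0; i < n; i++)
                if (freqs[i] == khz)
                        return (long)i;
        return -1;
}
```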
···
-Generic ARM big LITTLE cpufreq driver's DT glue
------------------------------------------------
-
-This is DT specific glue layer for generic cpufreq driver for big LITTLE
-systems.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
-
-FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
-must be present contiguously. Generic DT driver will check only node 'x' for
-cpu:x.
-
-Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
-  for details
-
-Optional properties:
-- clock-latency: Specify the possible maximum transition latency for clock,
-  in unit of nanoseconds.
-
-Examples:
-
-cpus {
-        #address-cells = <1>;
-        #size-cells = <0>;
-
-        cpu@0 {
-                compatible = "arm,cortex-a15";
-                reg = <0>;
-                next-level-cache = <&L2>;
-                operating-points = <
-                        /* kHz    uV */
-                        792000  1100000
-                        396000  950000
-                        198000  850000
-                >;
-                clock-latency = <61036>; /* two CLK32 periods */
-        };
-
-        cpu@1 {
-                compatible = "arm,cortex-a15";
-                reg = <1>;
-                next-level-cache = <&L2>;
-        };
-
-        cpu@100 {
-                compatible = "arm,cortex-a7";
-                reg = <100>;
-                next-level-cache = <&L2>;
-                operating-points = <
-                        /* kHz    uV */
-                        792000  950000
-                        396000  750000
-                        198000  450000
-                >;
-                clock-latency = <61036>; /* two CLK32 periods */
-        };
-
-        cpu@101 {
-                compatible = "arm,cortex-a7";
-                reg = <101>;
-                next-level-cache = <&L2>;
-        };
-};
···
- compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
              "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
              "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
+             "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
              "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
              "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
              "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
···
              "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
              "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
              "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
+             "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
              "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
              "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
              compatible device.
-             "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+             "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+             compatible device.
              When compatible with the generic version, nodes must list the
              SoC-specific version corresponding to the platform first
              followed by the generic version.

- reg: physical base address and size of the R-Car CAN register map.
- interrupts: interrupt specifier for the sole interrupt.
-- clocks: phandles and clock specifiers for 3 CAN clock inputs.
-- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+          devices.
+          phandles and clock specifiers for 3 CAN clock inputs for every other
+          SoC.
+- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+               3 clock input name strings for every other SoC: "clkp1", "clkp2",
+               "can_clk".
- pinctrl-0: pin control group to be used for this controller.
- pinctrl-names: must be "default".

-Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
-compatible:
-In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
-and can be used by both CAN and CAN FD controller at the same time. It needs to
-be scaled to maximum frequency if any of these controllers use it. This is done
+Required properties for R8A7795, R8A7796 and R8A77965:
+For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
+be used by both CAN and CAN FD controller at the same time. It needs to be
+scaled to maximum frequency if any of these controllers use it. This is done
using the below properties:

- assigned-clocks: phandle of clkp2(CANFD) clock.
···
Optional properties:
- renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
                            <0x0> (default) : Peripheral clock (clkp1)
-                           <0x1> : Peripheral clock (clkp2)
-                           <0x3> : Externally input clock
+                           <0x1> : Peripheral clock (clkp2) (not supported by
+                                   RZ/G2 devices)
+                           <0x3> : External input clock

Example
-------
+11-6
Documentation/networking/rxrpc.txt
···
     u32 rxrpc_kernel_check_life(struct socket *sock,
                                 struct rxrpc_call *call);
+    void rxrpc_kernel_probe_life(struct socket *sock,
+                                 struct rxrpc_call *call);

-    This returns a number that is updated when ACKs are received from the peer
-    (notably including PING RESPONSE ACKs which we can elicit by sending PING
-    ACKs to see if the call still exists on the server).  The caller should
-    compare the numbers of two calls to see if the call is still alive after
-    waiting for a suitable interval.
+    The first function returns a number that is updated when ACKs are received
+    from the peer (notably including PING RESPONSE ACKs which we can elicit by
+    sending PING ACKs to see if the call still exists on the server).  The
+    caller should compare the numbers of two calls to see if the call is still
+    alive after waiting for a suitable interval.

     This allows the caller to work out if the server is still contactable and
     if the call is still alive on the server whilst waiting for the server to
     process a client operation.

-    This function may transmit a PING ACK.
+    The second function causes a ping ACK to be transmitted to try to provoke
+    the peer into responding, which would then cause the value returned by the
+    first function to change.  Note that this must be called in TASK_RUNNING
+    state.

 (*) Get reply timestamp.
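The check/probe pattern the documentation describes (sample the life counter, poke the peer, wait, sample again) can be sketched generically. This is a userspace model only, not the rxrpc API: a struct stands in for the socket/call pair, and the "suitable interval" of waiting is collapsed into the probe directly bumping the counter when the peer is alive.

```c
#include <assert.h>
#include <stdbool.h>

struct fake_call {
        unsigned int life;      /* bumped whenever an ACK arrives */
        bool peer_alive;
};

static unsigned int check_life(const struct fake_call *call)
{
        return call->life;
}

static void probe_life(struct fake_call *call)
{
        if (call->peer_alive)   /* a live peer answers the ping ACK */
                call->life++;
}

static bool call_seems_alive(struct fake_call *call)
{
        unsigned int before = check_life(call);

        probe_life(call);       /* provoke a PING RESPONSE ACK */
        /* ... real code would wait a suitable interval here ... */
        return check_life(call) != before;
}
```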
+10-3
MAINTAINERS
···
F:      include/dt-bindings/reset/altr,rst-mgr-a10sr.h

ALTERA TRIPLE SPEED ETHERNET DRIVER
-M:     Vince Bridgers <vbridger@opensource.altera.com>
+M:     Thor Thayer <thor.thayer@linux.intel.com>
L:      netdev@vger.kernel.org
L:      nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
S:      Maintained
···
F:      include/uapi/linux/caif/
F:      include/net/caif/
F:      net/caif/
+
+CAKE QDISC
+M:     Toke Høiland-Jørgensen <toke@toke.dk>
+L:     cake@lists.bufferbloat.net (moderated for non-subscribers)
+S:     Maintained
+F:     net/sched/sch_cake.c

CALGARY x86-64 IOMMU
M:      Muli Ben-Yehuda <mulix@mulix.org>
···
F:      drivers/staging/media/omap4iss/

OMAP MMC SUPPORT
-M:     Jarkko Lavinen <jarkko.lavinen@nokia.com>
+M:     Aaro Koskinen <aaro.koskinen@iki.fi>
L:      linux-omap@vger.kernel.org
-S:     Maintained
+S:     Odd Fixes
F:      drivers/mmc/host/omap.c

OMAP POWER MANAGEMENT SUPPORT
···
PIN CONTROLLER - INTEL
M:      Mika Westerberg <mika.westerberg@linux.intel.com>
M:      Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
S:      Maintained
F:      drivers/pinctrl/intel/
···
/*
 * Don't change this structure - ASM code relies on it.
 */
-extern struct processor {
+struct processor {
        /* MISC
         * get data abort address/flags
         */
···
        unsigned int suspend_size;
        void (*do_suspend)(void *);
        void (*do_resume)(void *);
-} processor;
+};

#ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
extern void cpu_proc_init(void);
extern void cpu_proc_fin(void);
extern int cpu_do_idle(void);
···
extern void cpu_do_suspend(void *);
extern void cpu_do_resume(void *);
#else
-#define cpu_proc_init                   processor._proc_init
-#define cpu_proc_fin                    processor._proc_fin
-#define cpu_reset                       processor.reset
-#define cpu_do_idle                     processor._do_idle
-#define cpu_dcache_clean_area           processor.dcache_clean_area
-#define cpu_set_pte_ext                 processor.set_pte_ext
-#define cpu_do_switch_mm                processor.switch_mm

-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend                  processor.do_suspend
-#define cpu_do_resume                   processor.do_resume
+extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised.  We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE().  We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f)                  cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f)                   cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+       unsigned int cpu = smp_processor_id();
+       *cpu_vtable[cpu] = *p;
+       WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+                    cpu_vtable[0]->dcache_clean_area);
+       WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+                    cpu_vtable[0]->set_pte_ext);
+}
+#else
+#define PROC_VTABLE(f)                  processor.f
+#define PROC_TABLE(f)                   processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+       processor = *p;
+}
+#endif
+
+#define cpu_proc_init                   PROC_VTABLE(_proc_init)
+#define cpu_check_bugs                  PROC_VTABLE(check_bugs)
+#define cpu_proc_fin                    PROC_VTABLE(_proc_fin)
+#define cpu_reset                       PROC_VTABLE(reset)
+#define cpu_do_idle                     PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area           PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext                 PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm                PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend                  PROC_VTABLE(do_suspend)
+#define cpu_do_resume                   PROC_VTABLE(do_resume)
#endif

extern void cpu_resume(void);
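The per-CPU vtable scheme introduced above has a key invariant: each CPU may carry its own copy of the ops, but the entries reached via `PROC_TABLE()` (i.e. slot 0) must be identical everywhere, because preemptible callers cannot know which CPU they run on. A minimal userspace model of that invariant, with invented names and a plain error return standing in for `WARN_ON_ONCE()`:

```c
#include <assert.h>

#define NCPUS 2

struct ops {
        int (*do_idle)(void);            /* may legitimately differ per CPU */
        int (*dcache_clean_area)(void);  /* must match slot 0 on every CPU */
};

static struct ops vtable[NCPUS];

/* returns 0 on success, -1 if a "shared" slot disagrees with CPU0 */
static int init_vtable(int cpu, const struct ops *p)
{
        vtable[cpu] = *p;
        if (vtable[cpu].dcache_clean_area != vtable[0].dcache_clean_area)
                return -1;
        return 0;
}

static int idle_a(void) { return 1; }
static int idle_b(void) { return 2; }
static int clean(void)  { return 0; }
```

The check mirrors what the real `init_proc_vtable()` asserts: divergence in the shared slots is a configuration error, while per-CPU slots are free to differ between, say, big and LITTLE cores.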
+2-2
arch/arm/kernel/bugs.c
···
void check_other_bugs(void)
{
#ifdef MULTI_CPU
-       if (processor.check_bugs)
-               processor.check_bugs();
+       if (cpu_check_bugs)
+               cpu_check_bugs();
#endif
}
+3-3
arch/arm/kernel/head-common.S
···
#endif
        .size   __mmap_switched_data, . - __mmap_switched_data

+       __FINIT
+       .text
+
/*
 * This provides a C-API version of __lookup_processor_type
 */
···
        mov     r0, r5
        ldmfd   sp!, {r4 - r6, r9, pc}
ENDPROC(lookup_processor_type)
-
-       __FINIT
-       .text

/*
 * Read processor ID register (CP#15, CR0), and look up in the linker-built
+27-17
arch/arm/kernel/setup.c
···
#ifdef MULTI_CPU
struct processor processor __ro_after_init;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+       [0] = &processor,
+};
+#endif
#endif
#ifdef MULTI_TLB
struct cpu_tlb_fns cpu_tlb __ro_after_init;
···
}
#endif

+/*
+ * locate processor in the list of supported processor types.  The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
+{
+       struct proc_info_list *list = lookup_processor_type(midr);
+
+       if (!list) {
+               pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+                      smp_processor_id(), midr);
+               while (1)
+                       /* can't use cpu_relax() here as it may require MMU setup */;
+       }
+
+       return list;
+}
+
static void __init setup_processor(void)
{
-       struct proc_info_list *list;
-
-       /*
-        * locate processor in the list of supported processor
-        * types.  The linker builds this table for us from the
-        * entries in arch/arm/mm/proc-*.S
-        */
-       list = lookup_processor_type(read_cpuid_id());
-       if (!list) {
-               pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
-                      read_cpuid_id());
-               while (1);
-       }
+       unsigned int midr = read_cpuid_id();
+       struct proc_info_list *list = lookup_processor(midr);

        cpu_name = list->cpu_name;
        __cpu_architecture = __get_cpu_architecture();

-#ifdef MULTI_CPU
-       processor = *list->proc;
-#endif
+       init_proc_vtable(list->proc);
#ifdef MULTI_TLB
        cpu_tlb = *list->tlb;
#endif
···
#endif

        pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
-               cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+               list->cpu_name, midr, midr & 15,
                proc_arch[cpu_architecture()], get_cr());

        snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
+31
arch/arm/kernel/smp.c
···
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
+#include <asm/procinfo.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
···
#endif
}

+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+       if (!cpu_vtable[cpu])
+               cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+       return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+       init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+       return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
int __cpu_up(unsigned int cpu, struct task_struct *idle)
{
        int ret;

        if (!smp_ops.smp_boot_secondary)
                return -ENOSYS;
+
+       ret = secondary_biglittle_prepare(cpu);
+       if (ret)
+               return ret;

        /*
         * We need to tell the secondary core where to find
···
{
        struct mm_struct *mm = &init_mm;
        unsigned int cpu;
+
+       secondary_biglittle_init();

        /*
         * The identity mapping is uncached (strongly ordered), so
+53-58
arch/arm/mach-omap2/display.c
···
        return 0;
}
-#else
-static inline int omapdss_init_fbdev(void)
+
+static const char * const omapdss_compat_names[] __initconst = {
+       "ti,omap2-dss",
+       "ti,omap3-dss",
+       "ti,omap4-dss",
+       "ti,omap5-dss",
+       "ti,dra7-dss",
+};
+
+static struct device_node * __init omapdss_find_dss_of_node(void)
{
-       return 0;
+       struct device_node *node;
+       int i;
+
+       for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
+               node = of_find_compatible_node(NULL, NULL,
+                                              omapdss_compat_names[i]);
+               if (node)
+                       return node;
+       }
+
+       return NULL;
}
+
+static int __init omapdss_init_of(void)
+{
+       int r;
+       struct device_node *node;
+       struct platform_device *pdev;
+
+       /* only create dss helper devices if dss is enabled in the .dts */
+
+       node = omapdss_find_dss_of_node();
+       if (!node)
+               return 0;
+
+       if (!of_device_is_available(node))
+               return 0;
+
+       pdev = of_find_device_by_node(node);
+
+       if (!pdev) {
+               pr_err("Unable to find DSS platform device\n");
+               return -ENODEV;
+       }
+
+       r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+       if (r) {
+               pr_err("Unable to populate DSS submodule devices\n");
+               return r;
+       }
+
+       return omapdss_init_fbdev();
+}
+omap_device_initcall(omapdss_init_of);
#endif /* CONFIG_FB_OMAP2 */

static void dispc_disable_outputs(void)
···
        return r;
}
-
-static const char * const omapdss_compat_names[] __initconst = {
-       "ti,omap2-dss",
-       "ti,omap3-dss",
-       "ti,omap4-dss",
-       "ti,omap5-dss",
-       "ti,dra7-dss",
-};
-
-static struct device_node * __init omapdss_find_dss_of_node(void)
-{
-       struct device_node *node;
-       int i;
-
-       for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
-               node = of_find_compatible_node(NULL, NULL,
-                                              omapdss_compat_names[i]);
-               if (node)
-                       return node;
-       }
-
-       return NULL;
-}
-
-static int __init omapdss_init_of(void)
-{
-       int r;
-       struct device_node *node;
-       struct platform_device *pdev;
-
-       /* only create dss helper devices if dss is enabled in the .dts */
-
-       node = omapdss_find_dss_of_node();
-       if (!node)
-               return 0;
-
-       if (!of_device_is_available(node))
-               return 0;
-
-       pdev = of_find_device_by_node(node);
-
-       if (!pdev) {
-               pr_err("Unable to find DSS platform device\n");
-               return -ENODEV;
-       }
-
-       r = of_platform_populate(node, NULL, NULL, &pdev->dev);
-       if (r) {
-               pr_err("Unable to populate DSS submodule devices\n");
-               return r;
-       }
-
-       return omapdss_init_fbdev();
-}
-omap_device_initcall(omapdss_init_of);
+2-15
arch/arm/mm/proc-v7-bugs.c
···
        case ARM_CPU_PART_CORTEX_A17:
        case ARM_CPU_PART_CORTEX_A73:
        case ARM_CPU_PART_CORTEX_A75:
-               if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
-                       goto bl_error;
                per_cpu(harden_branch_predictor_fn, cpu) =
                        harden_branch_predictor_bpiall;
                spectre_v2_method = "BPIALL";
···
        case ARM_CPU_PART_CORTEX_A15:
        case ARM_CPU_PART_BRAHMA_B15:
-               if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
-                       goto bl_error;
                per_cpu(harden_branch_predictor_fn, cpu) =
                        harden_branch_predictor_iciallu;
                spectre_v2_method = "ICIALLU";
···
                              ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                if ((int)res.a0 != 0)
                        break;
-               if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
-                       goto bl_error;
                per_cpu(harden_branch_predictor_fn, cpu) =
                        call_hvc_arch_workaround_1;
-               processor.switch_mm = cpu_v7_hvc_switch_mm;
+               cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
                spectre_v2_method = "hypervisor";
                break;
···
                              ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                if ((int)res.a0 != 0)
                        break;
-               if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
-                       goto bl_error;
                per_cpu(harden_branch_predictor_fn, cpu) =
                        call_smc_arch_workaround_1;
-               processor.switch_mm = cpu_v7_smc_switch_mm;
+               cpu_do_switch_mm = cpu_v7_smc_switch_mm;
                spectre_v2_method = "firmware";
                break;
···
        if (spectre_v2_method)
                pr_info("CPU%u: Spectre v2: using %s workaround\n",
                        smp_processor_id(), spectre_v2_method);
-       return;
-
-bl_error:
-       pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
-              cpu);
}
#else
static void cpu_v7_spectre_init(void)
···
 * their hooks, a bitfield is reserved for use by the platform near the
 * top of MMIO addresses (not PIO, those have to cope the hard way).
 *
- * This bit field is 12 bits and is at the top of the IO virtual
- * addresses PCI_IO_INDIRECT_TOKEN_MASK.
+ * The highest address in the kernel virtual space are:
 *
- * The kernel virtual space is thus:
+ *  d0003fffffffffff           # with Hash MMU
+ *  c00fffffffffffff           # with Radix MMU
 *
- *  0xD000000000000000 : vmalloc
- *  0xD000080000000000 : PCI PHB IO space
- *  0xD000080080000000 : ioremap
- *  0xD0000fffffffffff : end of ioremap region
- *
- * Since the top 4 bits are reserved as the region ID, we use thus
- * the next 12 bits and keep 4 bits available for the future if the
- * virtual address space is ever to be extended.
+ * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
+ * that can be used for the field.
 *
 * The direct IO mapping operations will then mask off those bits
 * before doing the actual access, though that only happen when
···
 */

#ifdef CONFIG_PPC_INDIRECT_MMIO
-#define PCI_IO_IND_TOKEN_MASK  0x0fff000000000000ul
-#define PCI_IO_IND_TOKEN_SHIFT 48
+#define PCI_IO_IND_TOKEN_SHIFT 52
+#define PCI_IO_IND_TOKEN_MASK  (0xfful << PCI_IO_IND_TOKEN_SHIFT)
#define PCI_FIX_ADDR(addr) \
        ((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
#define PCI_GET_ADDR_TOKEN(addr) \
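The new token layout above places an 8-bit field at bits 52..59 of the virtual address. The sketch below mirrors the mask/shift arithmetic in plain userspace C to show how a token round-trips through an address (addresses are modelled as `unsigned long long`; the helper names are invented, with `fix_addr()` playing the role of `PCI_FIX_ADDR`).

```c
#include <assert.h>

#define IND_TOKEN_SHIFT 52
#define IND_TOKEN_MASK  (0xfful << IND_TOKEN_SHIFT)

/* stamp a token into the reserved bitfield of an MMIO address */
static unsigned long long set_token(unsigned long long addr,
                                    unsigned long long token)
{
        return (addr & ~IND_TOKEN_MASK) | (token << IND_TOKEN_SHIFT);
}

/* recover the token (what PCI_GET_ADDR_TOKEN does) */
static unsigned long long get_token(unsigned long long addr)
{
        return (addr & IND_TOKEN_MASK) >> IND_TOKEN_SHIFT;
}

/* strip the token before the actual access (what PCI_FIX_ADDR does) */
static unsigned long long fix_addr(unsigned long long addr)
{
        return addr & ~IND_TOKEN_MASK;
}
```

The point of the commit is visible in the numbers: with valid kernel addresses now reaching `c00fffffffffffff`, a 12-bit field at shift 48 would collide with real address bits, while 8 bits at shift 52 stay clear of them under the hash-MMU layout.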
···
CONFIG_NFS_V4_2=y
CONFIG_ROOT_NFS=y
CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_PRINTK_TIME=y
# CONFIG_RCU_TRACE is not set
+2-2
arch/riscv/include/asm/ptrace.h
···
        unsigned long sstatus;
        unsigned long sbadaddr;
        unsigned long scause;
-        /* a0 value before the syscall */
-        unsigned long orig_a0;
+       /* a0 value before the syscall */
+       unsigned long orig_a0;
};

#ifdef CONFIG_64BIT
+6-6
arch/riscv/kernel/module.c
···
{
        if (v != (u32)v) {
                pr_err("%s: value %016llx out of range for 32-bit field\n",
-                      me->name, v);
+                      me->name, (long long)v);
                return -EINVAL;
        }
        *location = v;
···
        if (offset != (s32)offset) {
                pr_err(
                  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-                 me->name, v, location);
+                 me->name, (long long)v, location);
                return -EINVAL;
        }
···
        if (IS_ENABLED(CMODEL_MEDLOW)) {
                pr_err(
                  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-                 me->name, v, location);
+                 me->name, (long long)v, location);
                return -EINVAL;
        }
···
        } else {
                pr_err(
                  "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",
-                 me->name, v, location);
+                 me->name, (long long)v, location);
                return -EINVAL;
        }
···
        } else {
                pr_err(
                  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-                 me->name, v, location);
+                 me->name, (long long)v, location);
                return -EINVAL;
        }
        }
···
        if (offset != fill_v) {
                pr_err(
                  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-                 me->name, v, location);
+                 me->name, (long long)v, location);
                return -EINVAL;
        }
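The pattern behind the casts above is a portability fix: `%llx` requires a `long long` argument, but the value being printed has a type whose width differs between 32-bit and 64-bit targets, so an explicit cast at the call site keeps the format string and argument in agreement on both. A self-contained illustration (the diff casts to `long long`; the sketch uses the unsigned variant to match `%llx`'s unsigned conversion):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* format an address-sized value portably: without the cast, passing an
 * unsigned long to %llx is undefined behaviour wherever
 * sizeof(long) != sizeof(long long), e.g. on 32-bit targets */
static void fmt_value(char *buf, size_t len, unsigned long v)
{
        snprintf(buf, len, "%016llx", (unsigned long long)v);
}
```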
···
#define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC  0x1910
#define PCI_DEVICE_ID_INTEL_SKL_SD_IMC  0x190f
#define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC  0x191f
+#define PCI_DEVICE_ID_INTEL_KBL_Y_IMC   0x590c
+#define PCI_DEVICE_ID_INTEL_KBL_U_IMC   0x5904
+#define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC  0x5914
+#define PCI_DEVICE_ID_INTEL_KBL_SD_IMC  0x590f
+#define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC  0x591f
+#define PCI_DEVICE_ID_INTEL_CFL_2U_IMC  0x3ecc
+#define PCI_DEVICE_ID_INTEL_CFL_4U_IMC  0x3ed0
+#define PCI_DEVICE_ID_INTEL_CFL_4H_IMC  0x3e10
+#define PCI_DEVICE_ID_INTEL_CFL_6H_IMC  0x3ec4
+#define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC        0x3e0f
+#define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC        0x3e1f
+#define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC        0x3ec2
+#define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC        0x3e30
+#define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC        0x3e18
+#define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC        0x3ec6
+#define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC        0x3e31
+#define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC        0x3e33
+#define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC        0x3eca
+#define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC        0x3e32

/* SNB event control */
#define SNB_UNC_CTL_EV_SEL_MASK                 0x000000ff
···
                wrmsrl(SKL_UNC_PERF_GLOBAL_CTL,
                        SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL);
        }
+
+       /* The 8th CBOX has different MSR space */
+       if (box->pmu->pmu_idx == 7)
+               __set_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags);
}

static void skl_uncore_msr_enable_box(struct intel_uncore_box *box)
···
static struct intel_uncore_type skl_uncore_cbox = {
        .name           = "cbox",
        .num_counters   = 4,
-       .num_boxes      = 5,
+       .num_boxes      = 8,
        .perf_ctr_bits  = 44,
        .fixed_ctr_bits = 48,
        .perf_ctr       = SNB_UNC_CBO_0_PER_CTR0,
···
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC),
                .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
        },
-
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
+       { /* IMC */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC),
+               .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+       },
        { /* end: all zeroes */ },
};
···
        IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver),  /* 6th Gen Core H Quad Core */
        IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver),  /* 6th Gen Core S Dual Core */
        IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver),  /* 6th Gen Core S Quad Core */
+       IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver),  /* 7th Gen Core Y */
+       IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver),  /* 7th Gen Core U */
+       IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver),  /* 7th Gen Core U Quad Core */
+       IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver),  /* 7th Gen Core S Dual Core */
+       IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver),  /* 7th Gen Core S Quad Core */
+       IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core U 2 Cores */
+       IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core U 4 Cores */
+       IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core H 4 Cores */
+       IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core H 6 Cores */
+       IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 2 Cores Desktop */
+       IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 4 Cores Desktop */
+       IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 6 Cores Desktop */
+       IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 8 Cores Desktop */
+       IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 4 Cores Work Station */
+       IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 6 Cores Work Station */
+       IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 8 Cores Work Station */
+       IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 4 Cores Server */
+       IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 6 Cores Server */
+       IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver),  /* 8th Gen Core S 8 Cores Server */
        {  /* end marker */ }
};
+5-1
arch/xtensa/include/asm/processor.h
···
# error Linux requires the Xtensa Windowed Registers Option.
#endif

-#define ARCH_SLAB_MINALIGN     XCHAL_DATA_WIDTH
+/* Xtensa ABI requires stack alignment to be at least 16 */
+
+#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)
+
+#define ARCH_SLAB_MINALIGN STACK_ALIGN

/*
 * User space process size: 1 GB.
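The new macro simply takes the larger of the configured data width and the ABI-mandated 16-byte minimum, so narrow configurations are raised to 16 while wide ones keep their native alignment. Evaluated for a few sample widths (the `STACK_ALIGN_FOR` helper is invented for the sketch; the kernel macro hard-codes `XCHAL_DATA_WIDTH`):

```c
#include <assert.h>

/* max(width, 16): the ABI floor wins for narrow data widths */
#define STACK_ALIGN_FOR(width) ((width) > 16 ? (width) : 16)
```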
···798798 * dispatch may still be in-progress since we dispatch requests799799 * from more than one contexts.800800 *801801- * No need to quiesce queue if it isn't initialized yet since802802- * blk_freeze_queue() should be enough for cases of passthrough803803- * request.801801+ * We rely on driver to deal with the race in case that queue802802+ * initialization isn't done.804803 */805804 if (q->mq_ops && blk_queue_init_done(q))806805 blk_mq_quiesce_queue(q);
···512512513513config XPOWER_PMIC_OPREGION514514 bool "ACPI operation region support for XPower AXP288 PMIC"515515- depends on MFD_AXP20X_I2C && IOSF_MBI515515+ depends on MFD_AXP20X_I2C && IOSF_MBI=y516516 help517517 This config adds ACPI operation region support for XPower AXP288 PMIC.518518
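The hunk above tightens a Kconfig dependency. For a tristate symbol, a bare `depends on FOO` is satisfied by both `FOO=m` and `FOO=y`; since `XPOWER_PMIC_OPREGION` is a `bool` that is always built in, it would then reference symbols living only in the `iosf_mbi` module and fail to link. A minimal sketch of the two forms (symbols abbreviated for illustration):

```kconfig
config IOSF_MBI
	tristate "Intel SoC IOSF Sideband support"

config XPOWER_PMIC_OPREGION
	bool "ACPI operation region support for XPower AXP288 PMIC"
	# "depends on IOSF_MBI" would also accept IOSF_MBI=m, leaving this
	# built-in bool with unresolved symbols; "=y" requires built-in.
	depends on MFD_AXP20X_I2C && IOSF_MBI=y
```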
+5-14
drivers/acpi/nfit/core.c
···29282928 return rc;2929292929302930 if (ars_status_process_records(acpi_desc))29312931- return -ENOMEM;29312931+ dev_err(acpi_desc->dev, "Failed to process ARS records\n");2932293229332933- return 0;29332933+ return rc;29342934}2935293529362936static int ars_register(struct acpi_nfit_desc *acpi_desc,···33413341 struct nvdimm *nvdimm, unsigned int cmd)33423342{33433343 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);33443344- struct nfit_spa *nfit_spa;33453345- int rc = 0;3346334433473345 if (nvdimm)33483346 return 0;···33533355 * just needs guarantees that any ARS it initiates are not33543356 * interrupted by any intervening start requests from userspace.33553357 */33563356- mutex_lock(&acpi_desc->init_mutex);33573357- list_for_each_entry(nfit_spa, &acpi_desc->spas, list)33583358- if (acpi_desc->scrub_spa33593359- || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state)33603360- || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) {33613361- rc = -EBUSY;33623362- break;33633363- }33643364- mutex_unlock(&acpi_desc->init_mutex);33583358+ if (work_busy(&acpi_desc->dwork.work))33593359+ return -EBUSY;3365336033663366- return rc;33613361+ return 0;33673362}3368336333693364int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+1-1
drivers/ata/libata-core.c
···45534553 /* These specific Samsung models/firmware-revs do not handle LPM well */45544554 { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },45554555 { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },45564556- { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },45564556+ { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, },4557455745584558 /* devices that don't properly handle queued TRIM commands */45594559 { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
···160160 /* Ensure the arm clock divider is what we expect */161161 ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);162162 if (ret) {163163+ int ret1;164164+163165 dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);164164- regulator_set_voltage_tol(arm_reg, volt_old, 0);166166+ ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0);167167+ if (ret1)168168+ dev_warn(cpu_dev,169169+ "failed to restore vddarm voltage: %d\n", ret1);165170 return ret;166171 }167172
+7-33
drivers/cpuidle/cpuidle-arm.c
···8282{8383 int ret;8484 struct cpuidle_driver *drv;8585- struct cpuidle_device *dev;86858786 drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);8887 if (!drv)···102103 goto out_kfree_drv;103104 }104105105105- ret = cpuidle_register_driver(drv);106106- if (ret) {107107- if (ret != -EBUSY)108108- pr_err("Failed to register cpuidle driver\n");109109- goto out_kfree_drv;110110- }111111-112106 /*113107 * Call arch CPU operations in order to initialize114108 * idle states suspend back-end specific data···109117 ret = arm_cpuidle_init(cpu);110118111119 /*112112- * Skip the cpuidle device initialization if the reported120120+ * Allow the initialization to continue for other CPUs, if the reported113121 * failure is a HW misconfiguration/breakage (-ENXIO).114122 */115115- if (ret == -ENXIO)116116- return 0;117117-118123 if (ret) {119124 pr_err("CPU %d failed to init idle CPU ops\n", cpu);120120- goto out_unregister_drv;125125+ ret = ret == -ENXIO ? 0 : ret;126126+ goto out_kfree_drv;121127 }122128123123- dev = kzalloc(sizeof(*dev), GFP_KERNEL);124124- if (!dev) {125125- ret = -ENOMEM;126126- goto out_unregister_drv;127127- }128128- dev->cpu = cpu;129129-130130- ret = cpuidle_register_device(dev);131131- if (ret) {132132- pr_err("Failed to register cpuidle device for CPU %d\n",133133- cpu);134134- goto out_kfree_dev;135135- }129129+ ret = cpuidle_register(drv, NULL);130130+ if (ret)131131+ goto out_kfree_drv;136132137133 return 0;138134139139-out_kfree_dev:140140- kfree(dev);141141-out_unregister_drv:142142- cpuidle_unregister_driver(drv);143135out_kfree_drv:144136 kfree(drv);145137 return ret;···154178 while (--cpu >= 0) {155179 dev = per_cpu(cpuidle_devices, cpu);156180 drv = cpuidle_get_cpu_driver(dev);157157- cpuidle_unregister_device(dev);158158- cpuidle_unregister_driver(drv);159159- kfree(dev);181181+ cpuidle_unregister(drv);160182 kfree(drv);161183 }162184
+17-14
drivers/crypto/hisilicon/sec/sec_algs.c
···732732 int *splits_in_nents;733733 int *splits_out_nents = NULL;734734 struct sec_request_el *el, *temp;735735+ bool split = skreq->src != skreq->dst;735736736737 mutex_init(&sec_req->lock);737738 sec_req->req_base = &skreq->base;···751750 if (ret)752751 goto err_free_split_sizes;753752754754- if (skreq->src != skreq->dst) {753753+ if (split) {755754 sec_req->len_out = sg_nents(skreq->dst);756755 ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,757756 &splits_out, &splits_out_nents,···786785 split_sizes[i],787786 skreq->src != skreq->dst,788787 splits_in[i], splits_in_nents[i],789789- splits_out[i],790790- splits_out_nents[i], info);788788+ split ? splits_out[i] : NULL,789789+ split ? splits_out_nents[i] : 0,790790+ info);791791 if (IS_ERR(el)) {792792 ret = PTR_ERR(el);793793 goto err_free_elements;···808806 * more refined but this is unlikely to happen so no need.809807 */810808811811- /* Cleanup - all elements in pointer arrays have been coppied */812812- kfree(splits_in_nents);813813- kfree(splits_in);814814- kfree(splits_out_nents);815815- kfree(splits_out);816816- kfree(split_sizes);817817-818809 /* Grab a big lock for a long time to avoid concurrency issues */819810 mutex_lock(&queue->queuelock);820811···822827 (!queue->havesoftqueue ||823828 kfifo_avail(&queue->softqueue) > steps)) ||824829 !list_empty(&ctx->backlog)) {830830+ ret = -EBUSY;825831 if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {826832 list_add_tail(&sec_req->backlog_head, &ctx->backlog);827833 mutex_unlock(&queue->queuelock);828828- return -EBUSY;834834+ goto out;829835 }830836831831- ret = -EBUSY;832837 mutex_unlock(&queue->queuelock);833838 goto err_free_elements;834839 }···837842 if (ret)838843 goto err_free_elements;839844840840- return -EINPROGRESS;845845+ ret = -EINPROGRESS;846846+out:847847+ /* Cleanup - all elements in pointer arrays have been copied */848848+ kfree(splits_in_nents);849849+ kfree(splits_in);850850+ kfree(splits_out_nents);851851+ kfree(splits_out);852852+ kfree(split_sizes);853853+ return ret;841854842855
err_free_elements:843856 list_for_each_entry_safe(el, temp, &sec_req->elements, head) {···857854 crypto_skcipher_ivsize(atfm),858855 DMA_BIDIRECTIONAL);859856err_unmap_out_sg:860860- if (skreq->src != skreq->dst)857857+ if (split)861858 sec_unmap_sg_on_err(skreq->dst, steps, splits_out,862859 splits_out_nents, sec_req->len_out,863860 info->dev);
+4
drivers/firmware/efi/arm-init.c
···265265 (params.mmap & ~PAGE_MASK)));266266267267 init_screen_info();268268+269269+ /* ARM does not permit early mappings to persist across paging_init() */270270+ if (IS_ENABLED(CONFIG_ARM))271271+ efi_memmap_unmap();268272}269273270274static int __init register_gop_device(void)
+1-1
drivers/firmware/efi/arm-runtime.c
···110110{111111 u64 mapsize;112112113113- if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {113113+ if (!efi_enabled(EFI_BOOT)) {114114 pr_info("EFI services will not be available.\n");115115 return 0;116116 }
···16321632 continue;16331633 }1634163416351635- /* First check if the entry is already handled */16361636- if (cursor.pfn < frag_start) {16371637- cursor.entry->huge = true;16381638- amdgpu_vm_pt_next(adev, &cursor);16391639- continue;16401640- }16411641-16421635 /* If it isn't already handled it can't be a huge page */16431636 if (cursor.entry->huge) {16441637 /* Add the entry to the relocated list to update it. */···16941701 }16951702 } while (frag_start < entry_end);1696170316971697- if (frag >= shift)17041704+ if (amdgpu_vm_pt_descendant(adev, &cursor)) {17051705+ /* Mark all child entries as huge */17061706+ while (cursor.pfn < frag_start) {17071707+ cursor.entry->huge = true;17081708+ amdgpu_vm_pt_next(adev, &cursor);17091709+ }17101710+17111711+ } else if (frag >= shift) {17121712+ /* or just move on to the next on the same level. */16981713 amdgpu_vm_pt_next(adev, &cursor);17141714+ }16991715 }1700171617011717 return 0;
+3-3
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
···72727373 /* Program the system aperture low logical page number. */7474 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,7575- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);7575+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);76767777 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)7878 /*···8282 * to get rid of the VM fault and hardware hang.8383 */8484 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8585- max((adev->gmc.vram_end >> 18) + 0x1,8585+ max((adev->gmc.fb_end >> 18) + 0x1,8686 adev->gmc.agp_end >> 18));8787 else8888 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8989- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);8989+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);90909191 /* Set default page address. */9292 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
+3-3
drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
···90909191 /* Program the system aperture low logical page number. */9292 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,9393- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);9393+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);94949595 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)9696 /*···100100 * to get rid of the VM fault and hardware hang.101101 */102102 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,103103- max((adev->gmc.vram_end >> 18) + 0x1,103103+ max((adev->gmc.fb_end >> 18) + 0x1,104104 adev->gmc.agp_end >> 18));105105 else106106 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,107107- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);107107+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);108108109109 /* Set default page address. */110110 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
···12751275 mutex_lock(&mgr->lock);12761276 mstb = mgr->mst_primary;1277127712781278+ if (!mstb)12791279+ goto out;12801280+12781281 for (i = 0; i < lct - 1; i++) {12791282 int shift = (i % 2) ? 0 : 4;12801283 int port_num = (rad[i / 2] >> shift) & 0xf;
+1-1
drivers/gpu/drm/drm_fourcc.c
···97979898/**9999 * drm_driver_legacy_fb_format - compute drm fourcc code from legacy description100100+ * @dev: DRM device100101 * @bpp: bits per pixels101102 * @depth: bit depth per pixel102102- * @native: use host native byte order103103 *104104 * Computes a drm fourcc pixel format code for the given @bpp/@depth values.105105 * Unlike drm_mode_legacy_fb_format() this looks at the drivers mode_config,
+1-1
drivers/gpu/drm/i915/intel_device_info.c
···474474 u8 eu_disabled_mask;475475 u32 n_disabled;476476477477- if (!(sseu->subslice_mask[ss] & BIT(ss)))477477+ if (!(sseu->subslice_mask[s] & BIT(ss)))478478 /* skip disabled subslice */479479 continue;480480
+42-3
drivers/gpu/drm/i915/intel_display.c
···48504850 * chroma samples for both of the luma samples, and thus we don't48514851 * actually get the expected MPEG2 chroma siting convention :(48524852 * The same behaviour is observed on pre-SKL platforms as well.48534853+ *48544854+ * Theory behind the formula (note that we ignore sub-pixel48554855+ * source coordinates):48564856+ * s = source sample position48574857+ * d = destination sample position48584858+ *48594859+ * Downscaling 4:1:48604860+ * -0.548614861+ * | 0.048624862+ * | | 1.5 (initial phase)48634863+ * | | |48644864+ * v v v48654865+ * | s | s | s | s |48664866+ * | d |48674867+ *48684868+ * Upscaling 1:4:48694869+ * -0.548704870+ * | -0.375 (initial phase)48714871+ * | | 0.048724872+ * | | |48734873+ * v v v48744874+ * | s |48754875+ * | d | d | d | d |48534876 */48544854-u16 skl_scaler_calc_phase(int sub, bool chroma_cosited)48774877+u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited)48554878{48564879 int phase = -0x8000;48574880 u16 trip = 0;4858488148594882 if (chroma_cosited)48604883 phase += (sub - 1) * 0x8000 / sub;48844884+48854885+ phase += scale / (2 * sub);48864886+48874887+ /*48884888+ * Hardware initial phase limited to [-0.5:1.5].48894889+ * Since the max hardware scale factor is 3.0, we48904890+ * should never actually exceed 1.0 here.48914891+ */48924892+ WARN_ON(phase < -0x8000 || phase > 0x18000);4861489348624894 if (phase < 0)48634895 phase = 0x10000 + phase;
···5099506751005068 if (crtc->config->pch_pfit.enabled) {51015069 u16 uv_rgb_hphase, uv_rgb_vphase;50705070+ int pfit_w, pfit_h, hscale, vscale;51025071 int id;5103507251045073 if (WARN_ON(crtc->config->scaler_state.scaler_id < 0))51055074 return;5106507551075107- uv_rgb_hphase = skl_scaler_calc_phase(1, false);51085108- uv_rgb_vphase = skl_scaler_calc_phase(1, false);50765076+ pfit_w = (crtc->config->pch_pfit.size >> 16) & 0xFFFF;50775077+ pfit_h = crtc->config->pch_pfit.size & 0xFFFF;50785078+50795079+ hscale = (crtc->config->pipe_src_w << 16) / pfit_w;50805080+ vscale = (crtc->config->pipe_src_h << 16) / pfit_h;50815081+50825082+ uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false);50835083+ uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false);5109508451105085 id = scaler_state->scaler_id;51115086 I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN |
···228228 drm_for_each_connector_iter(connector, &conn_iter) {229229 struct intel_connector *intel_connector = to_intel_connector(connector);230230231231- if (intel_connector->encoder->hpd_pin == pin) {231231+ /* Don't check MST ports, they don't have pins */232232+ if (!intel_connector->mst_port &&233233+ intel_connector->encoder->hpd_pin == pin) {232234 if (connector->polled != intel_connector->polled)233235 DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",234236 connector->name);···397395 struct intel_encoder *encoder;398396 bool storm_detected = false;399397 bool queue_dig = false, queue_hp = false;398398+ u32 long_hpd_pulse_mask = 0;399399+ u32 short_hpd_pulse_mask = 0;400400+ enum hpd_pin pin;400401401402 if (!pin_mask)402403 return;403404404405 spin_lock(&dev_priv->irq_lock);405405- for_each_intel_encoder(&dev_priv->drm, encoder) {406406- enum hpd_pin pin = encoder->hpd_pin;407407- bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);408406407407+ /*408408+ * Determine whether ->hpd_pulse() exists for each pin, and409409+ * whether we have a short or a long pulse. This is needed410410+ * as each pin may have up to two encoders (HDMI and DP) and411411+ * only the one of them (DP) will have ->hpd_pulse().412412+ */413413+ for_each_intel_encoder(&dev_priv->drm, encoder) {414414+ bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);415415+ enum port port = encoder->port;416416+ bool long_hpd;417417+418418+ pin = encoder->hpd_pin;409419 if (!(BIT(pin) & pin_mask))410420 continue;411421412412- if (has_hpd_pulse) {413413- bool long_hpd = long_mask & BIT(pin);414414- enum port port = encoder->port;422422+ if (!has_hpd_pulse)423423+ continue;415424416416- DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),417417- long_hpd ? "long" : "short");418418- /*419419- * For long HPD pulses we want to have the digital queue happen,420420- * but we still want HPD storm detection to function.421421- */422422- queue_dig = true;423423- if (long_hpd) {424424- dev_priv->hotplug.long_port_mask |= (1 << port);425425- } else {426426- /* for short HPD just trigger the digital queue */427427- dev_priv->hotplug.short_port_mask |= (1 << port);428428- continue;429429- }425425+ long_hpd = long_mask & BIT(pin);426426+427427+ DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),428428+ long_hpd ? "long" : "short");429429+ queue_dig = true;430430+431431+ if (long_hpd) {432432+ long_hpd_pulse_mask |= BIT(pin);433433+ dev_priv->hotplug.long_port_mask |= BIT(port);434434+ } else {435435+ short_hpd_pulse_mask |= BIT(pin);436436+ dev_priv->hotplug.short_port_mask |= BIT(port);430437 }438438+ }439439+440440+ /* Now process each pin just once */441441+ for_each_hpd_pin(pin) {442442+ bool long_hpd;443443+444444+ if (!(BIT(pin) & pin_mask))445445+ continue;431446432447 if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) {433448 /*
···461442 if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED)462443 continue;463444464464- if (!has_hpd_pulse) {445445+ /*446446+ * Delegate to ->hpd_pulse() if one of the encoders for this447447+ * pin has it, otherwise let the hotplug_work deal with this448448+ * pin directly.449449+ */450450+ if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) {451451+ long_hpd = long_hpd_pulse_mask & BIT(pin);452452+ } else {465453 dev_priv->hotplug.event_bits |= BIT(pin);454454+ long_hpd = true;466455 queue_hp = true;467456 }457457+458458+ if (!long_hpd)459459+ continue;468460469461 if (intel_hpd_irq_storm_detect(dev_priv, pin)) {470462 dev_priv->hotplug.event_bits &= ~BIT(pin);
+13-1
drivers/gpu/drm/i915/intel_lrc.c
···424424425425 reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);426426427427- /* True 32b PPGTT with dynamic page allocation: update PDP427427+ /*428428+ * True 32b PPGTT with dynamic page allocation: update PDP428429 * registers and point the unallocated PDPs to scratch page.429430 * PML4 is allocated during ppgtt init, so this is not needed430431 * in 48-bit mode.···433432 if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))434433 execlists_update_context_pdps(ppgtt, reg_state);435434435435+ /*436436+ * Make sure the context image is complete before we submit it to HW.437437+ *438438+ * Ostensibly, writes (including the WCB) should be flushed prior to439439+ * an uncached write such as our mmio register access, the empirical440440+ * evidence (esp. on Braswell) suggests that the WC write into memory441441+ * may not be visible to the HW prior to the completion of the UC442442+ * register write and that we may begin execution from the context443443+ * before its image is complete leading to invalid PD chasing.444444+ */445445+ wmb();436446 return ce->lrc_desc;437447}438448
+36-2
drivers/gpu/drm/i915/intel_ringbuffer.c
···9191gen4_render_ring_flush(struct i915_request *rq, u32 mode)9292{9393 u32 cmd, *cs;9494+ int i;94959596 /*9697 * read/write caches:···128127 cmd |= MI_INVALIDATE_ISP;129128 }130129131131- cs = intel_ring_begin(rq, 2);130130+ i = 2;131131+ if (mode & EMIT_INVALIDATE)132132+ i += 20;133133+134134+ cs = intel_ring_begin(rq, i);132135 if (IS_ERR(cs))133136 return PTR_ERR(cs);134137135138 *cs++ = cmd;136136- *cs++ = MI_NOOP;139139+140140+ /*141141+ * A random delay to let the CS invalidate take effect? Without this142142+ * delay, the GPU relocation path fails as the CS does not see143143+ * the updated contents. Just as important, if we apply the flushes144144+ * to the EMIT_FLUSH branch (i.e. immediately after the relocation145145+ * write and before the invalidate on the next batch), the relocations146146+ * still fail. This implies that there is a delay following invalidation147147+ * that is required to reset the caches as opposed to a delay to148148+ * ensure the memory is written.149149+ */150150+ if (mode & EMIT_INVALIDATE) {151151+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;152152+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |153153+ PIPE_CONTROL_GLOBAL_GTT;154154+ *cs++ = 0;155155+ *cs++ = 0;156156+157157+ for (i = 0; i < 12; i++)158158+ *cs++ = MI_FLUSH;159159+160160+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;161161+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |162162+ PIPE_CONTROL_GLOBAL_GTT;163163+ *cs++ = 0;164164+ *cs++ = 0;165165+ }166166+167167+ *cs++ = cmd;168168+137169 intel_ring_advance(rq, cs);138170139171 return 0;
···854854 unsigned int sof_lines;855855 unsigned int vsync_lines;856856857857+ /* Use VENCI for 480i and 576i and double HDMI pixels */858858+ if (mode->flags & DRM_MODE_FLAG_DBLCLK) {859859+ hdmi_repeat = true;860860+ use_enci = true;861861+ venc_hdmi_latency = 1;862862+ }863863+857864 if (meson_venc_hdmi_supported_vic(vic)) {858865 vmode = meson_venc_hdmi_get_vic_vmode(vic);859866 if (!vmode) {···872865 } else {873866 meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt);874867 vmode = &vmode_dmt;875875- }876876-877877- /* Use VENCI for 480i and 576i and double HDMI pixels */878878- if (mode->flags & DRM_MODE_FLAG_DBLCLK) {879879- hdmi_repeat = true;880880- use_enci = true;881881- venc_hdmi_latency = 1;868868+ use_enci = false;882869 }883870884871 /* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */
+11-11
drivers/gpu/drm/omapdrm/dss/dsi.c
···5409540954105410 /* DSI on OMAP3 doesn't have register DSI_GNQ, set number54115411 * of data to 3 by default */54125412- if (dsi->data->quirks & DSI_QUIRK_GNQ)54125412+ if (dsi->data->quirks & DSI_QUIRK_GNQ) {54135413+ dsi_runtime_get(dsi);54135414 /* NB_DATA_LANES */54145415 dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9);54155415- else54165416+ dsi_runtime_put(dsi);54175417+ } else {54165418 dsi->num_lanes_supported = 3;54195419+ }5417542054185421 r = dsi_init_output(dsi);54195422 if (r)···54295426 }5430542754315428 r = of_platform_populate(dev->of_node, NULL, NULL, dev);54325432- if (r)54295429+ if (r) {54335430 DSSERR("Failed to populate DSI child devices: %d\n", r);54315431+ goto err_uninit_output;54325432+ }5434543354355434 r = component_add(&pdev->dev, &dsi_component_ops);54365435 if (r)54375437- goto err_uninit_output;54365436+ goto err_of_depopulate;5438543754395438 return 0;5440543954405440+err_of_depopulate:54415441+ of_platform_depopulate(dev);54415442err_uninit_output:54425443 dsi_uninit_output(dsi);54435444err_pm_disable:···54775470 /* wait for current handler to finish before turning the DSI off */54785471 synchronize_irq(dsi->irq);5479547254805480- dispc_runtime_put(dsi->dss->dispc);54815481-54825473 return 0;54835474}5484547554855476static int dsi_runtime_resume(struct device *dev)54865477{54875478 struct dsi_data *dsi = dev_get_drvdata(dev);54885488- int r;54895489-54905490- r = dispc_runtime_get(dsi->dss->dispc);54915491- if (r)54925492- return r;5493547954945480 dsi->is_enabled = true;54955481 /* ensure the irq handler sees the is_enabled value */
+10-1
drivers/gpu/drm/omapdrm/dss/dss.c
···14841484 dss);1485148514861486 /* Add all the child devices as components. */14871487+ r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);14881488+ if (r)14891489+ goto err_uninit_debugfs;14901490+14871491 omapdss_gather_components(&pdev->dev);1488149214891493 device_for_each_child(&pdev->dev, &match, dss_add_child_component);1490149414911495 r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match);14921496 if (r)14931493- goto err_uninit_debugfs;14971497+ goto err_of_depopulate;1494149814951499 return 0;15001500+15011501+err_of_depopulate:15021502+ of_platform_depopulate(&pdev->dev);1496150314971504err_uninit_debugfs:14981505 dss_debugfs_remove_file(dss->debugfs.clk);···15281521static int dss_remove(struct platform_device *pdev)15291522{15301523 struct dss_device *dss = platform_get_drvdata(pdev);15241524+15251525+ of_platform_depopulate(&pdev->dev);1531152615321527 component_master_del(&pdev->dev, &dss_component_ops);15331528
···946946 if (venc->tv_dac_clk)947947 clk_disable_unprepare(venc->tv_dac_clk);948948949949- dispc_runtime_put(venc->dss->dispc);950950-951949 return 0;952950}953951954952static int venc_runtime_resume(struct device *dev)955953{956954 struct venc_device *venc = dev_get_drvdata(dev);957957- int r;958958-959959- r = dispc_runtime_get(venc->dss->dispc);960960- if (r < 0)961961- return r;962955963956 if (venc->tv_dac_clk)964957 clk_prepare_enable(venc->tv_dac_clk);
···58715871 if (!is_t4(adapter->params.chip))58725872 cxgb4_ptp_init(adapter);5873587358745874- if (IS_ENABLED(CONFIG_THERMAL) &&58745874+ if (IS_REACHABLE(CONFIG_THERMAL) &&58755875 !is_t4(adapter->params.chip) && (adapter->flags & FW_OK))58765876 cxgb4_thermal_init(adapter);58775877···5940594059415941 if (!is_t4(adapter->params.chip))59425942 cxgb4_ptp_stop(adapter);59435943- if (IS_ENABLED(CONFIG_THERMAL))59435943+ if (IS_REACHABLE(CONFIG_THERMAL))59445944 cxgb4_thermal_remove(adapter);5945594559465946 /* If we allocated filters, free up state associated with any
···540540struct resource_allocator {541541 spinlock_t alloc_lock; /* protect quotas */542542 union {543543- int res_reserved;544544- int res_port_rsvd[MLX4_MAX_PORTS];543543+ unsigned int res_reserved;544544+ unsigned int res_port_rsvd[MLX4_MAX_PORTS];545545 };546546 union {547547 int res_free;
···185185 qed_iscsi_free(p_hwfn);186186 qed_ooo_free(p_hwfn);187187 }188188+189189+ if (QED_IS_RDMA_PERSONALITY(p_hwfn))190190+ qed_rdma_info_free(p_hwfn);191191+188192 qed_iov_free(p_hwfn);189193 qed_l2_free(p_hwfn);190194 qed_dmae_info_free(p_hwfn);···10811077 if (rc)10821078 goto alloc_err;10831079 rc = qed_ooo_alloc(p_hwfn);10801080+ if (rc)10811081+ goto alloc_err;10821082+ }10831083+10841084+ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {10851085+ rc = qed_rdma_info_alloc(p_hwfn);10841086 if (rc)10851087 goto alloc_err;10861088 }···21122102 if (!p_ptt)21132103 return -EAGAIN;2114210421152115- /* If roce info is allocated it means roce is initialized and should21162116- * be enabled in searcher.21172117- */21182105 if (p_hwfn->p_rdma_info &&21192119- p_hwfn->b_rdma_enabled_in_prs)21062106+ p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs)21202107 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);2121210821222109 /* Re-open incoming traffic */
+2
drivers/net/ethernet/qlogic/qed/qed_int.c
···992992 */993993 do {994994 index = p_sb_attn->sb_index;995995+ /* finish reading index before the loop condition */996996+ dma_rmb();995997 attn_bits = le32_to_cpu(p_sb_attn->atten_bits);996998 attn_acks = le32_to_cpu(p_sb_attn->atten_ack);997999 } while (index != p_sb_attn->sb_index);
···578578config SCSI_MYRS579579 tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)"580580 depends on PCI581581+ depends on !CPU_BIG_ENDIAN || COMPILE_TEST581582 select RAID_ATTRS582583 help583584 This driver adds support for the Mylex DAC960, AcceleRAID, and
···6767MODULE_PARM_DESC(ql2xplogiabsentdevice,6868 "Option to enable PLOGI to devices that are not present after "6969 "a Fabric scan. This is needed for several broken switches. "7070- "Default is 0 - no PLOGI. 1 - perfom PLOGI.");7070+ "Default is 0 - no PLOGI. 1 - perform PLOGI.");71717272int ql2xloginretrycount = 0;7373module_param(ql2xloginretrycount, int, S_IRUGO);
+8
drivers/scsi/scsi_lib.c
···697697 */698698 scsi_mq_uninit_cmd(cmd);699699700700+ /*701701+ * queue is still alive, so grab the ref for preventing it702702+ * from being cleaned up during running queue.703703+ */704704+ percpu_ref_get(&q->q_usage_counter);705705+700706 __blk_mq_end_request(req, error);701707702708 if (scsi_target(sdev)->single_lun ||···710704 kblockd_schedule_work(&sdev->requeue_work);711705 else712706 blk_mq_run_hw_queues(q, true);707707+708708+ percpu_ref_put(&q->q_usage_counter);713709 } else {714710 unsigned long flags;715711
-7
drivers/scsi/ufs/ufshcd.c
···80998099 err = -ENOMEM;81008100 goto out_error;81018101 }81028102-81038103- /*81048104- * Do not use blk-mq at this time because blk-mq does not support81058105- * runtime pm.81068106- */81078107- host->use_blk_mq = false;81088108-81098102 hba = shost_priv(host);81108103 hba->host = host;81118104 hba->dev = dev;
+2-2
drivers/target/target_core_transport.c
···17781778void transport_generic_request_failure(struct se_cmd *cmd,17791779 sense_reason_t sense_reason)17801780{17811781- int ret = 0;17811781+ int ret = 0, post_ret;1782178217831783 pr_debug("-----[ Storage Engine Exception; sense_reason %d\n",17841784 sense_reason);···17901790 transport_complete_task_attr(cmd);1791179117921792 if (cmd->transport_complete_callback)17931793- cmd->transport_complete_callback(cmd, false, NULL);17931793+ cmd->transport_complete_callback(cmd, false, &post_ret);1794179417951795 if (transport_check_aborted_status(cmd, 1))17961796 return;
+10-1
fs/afs/rxrpc.c
···576576{577577 signed long rtt2, timeout;578578 long ret;579579+ bool stalled = false;579580 u64 rtt;580581 u32 life, last_life;581582···610609611610 life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);612611 if (timeout == 0 &&613613- life == last_life && signal_pending(current))612612+ life == last_life && signal_pending(current)) {613613+ if (stalled)614614 break;615615+ __set_current_state(TASK_RUNNING);616616+ rxrpc_kernel_probe_life(call->net->socket, call->rxcall);617617+ timeout = rtt2;618618+ stalled = true;619619+ continue;620620+ }615621616622 if (life != last_life) {617623 timeout = rtt2;618624 last_life = life;625625+ stalled = false;619626 }620627621628 timeout = schedule_timeout(timeout);
+12-4
fs/fuse/dev.c
···165165166166static void fuse_drop_waiting(struct fuse_conn *fc)167167{168168- if (fc->connected) {169169- atomic_dec(&fc->num_waiting);170170- } else if (atomic_dec_and_test(&fc->num_waiting)) {168168+ /*169169+ * lockless check of fc->connected is okay, because atomic_dec_and_test()170170+ * provides a memory barrier matched with the one in fuse_wait_aborted()171171+ * to ensure no wake-up is missed.172172+ */173173+ if (atomic_dec_and_test(&fc->num_waiting) &&174174+ !READ_ONCE(fc->connected)) {171175 /* wake up aborters */172176 wake_up_all(&fc->blocked_waitq);173177 }···17721768 req->in.args[1].size = total_len;1773176917741770 err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique);17751775- if (err)17711771+ if (err) {17761772 fuse_retrieve_end(fc, req);17731773+ fuse_put_request(fc, req);17741774+ }1777177517781776 return err;17791777}···2225221922262220void fuse_wait_aborted(struct fuse_conn *fc)22272221{22222222+ /* matches implicit memory barrier in fuse_drop_waiting() */22232223+ smp_mb();22282224 wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);22292225}22302226
···826826 ret = gfs2_meta_inode_buffer(ip, &dibh);827827 if (ret)828828 goto unlock;829829- iomap->private = dibh;829829+ mp->mp_bh[0] = dibh;830830831831 if (gfs2_is_stuffed(ip)) {832832 if (flags & IOMAP_WRITE) {···863863 len = lblock_stop - lblock + 1;864864 iomap->length = len << inode->i_blkbits;865865866866- get_bh(dibh);867867- mp->mp_bh[0] = dibh;868868-869866 height = ip->i_height;870867 while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height])871868 height++;···895898 iomap->bdev = inode->i_sb->s_bdev;896899unlock:897900 up_read(&ip->i_rw_mutex);898898- if (ret && dibh)899899- brelse(dibh);900901 return ret;901902902903do_alloc:···975980976981static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,977982 loff_t length, unsigned flags,978978- struct iomap *iomap)983983+ struct iomap *iomap,984984+ struct metapath *mp)979985{980980- struct metapath mp = { .mp_aheight = 1, };981986 struct gfs2_inode *ip = GFS2_I(inode);982987 struct gfs2_sbd *sdp = GFS2_SB(inode);983988 unsigned int data_blocks = 0, ind_blocks = 0, rblocks;···991996 unstuff = gfs2_is_stuffed(ip) &&992997 pos + length > gfs2_max_stuffed_size(ip);993998994994- ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);999999+ ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp);9951000 if (ret)996996- goto out_release;10011001+ goto out_unlock;99710029981003 alloc_required = unstuff || iomap->type == IOMAP_HOLE;9991004···1008101310091014 ret = gfs2_quota_lock_check(ip, &ap);10101015 if (ret)10111011- goto out_release;10161016+ goto out_unlock;1012101710131018 ret = gfs2_inplace_reserve(ip, &ap);10141019 if (ret)···10331038 ret = gfs2_unstuff_dinode(ip, NULL);10341039 if (ret)10351040 goto out_trans_end;10361036- release_metapath(&mp);10371037- brelse(iomap->private);10381038- iomap->private = NULL;10411041+ release_metapath(mp);10391042 ret = gfs2_iomap_get(inode, iomap->offset, iomap->length,10401040- flags, iomap, &mp);10431043+ flags, iomap, mp);10411044 if 
(ret)10421045 goto out_trans_end;10431046 }1044104710451048 if (iomap->type == IOMAP_HOLE) {10461046- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);10491049+ ret = gfs2_iomap_alloc(inode, iomap, flags, mp);10471050 if (ret) {10481051 gfs2_trans_end(sdp);10491052 gfs2_inplace_release(ip);···10491056 goto out_qunlock;10501057 }10511058 }10521052- release_metapath(&mp);10531059 if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))10541060 iomap->page_done = gfs2_iomap_journaled_page_done;10551061 return 0;···10611069out_qunlock:10621070 if (alloc_required)10631071 gfs2_quota_unlock(ip);10641064-out_release:10651065- if (iomap->private)10661066- brelse(iomap->private);10671067- release_metapath(&mp);10721072+out_unlock:10681073 gfs2_write_unlock(inode);10691074 return ret;10701075}···1077108810781089 trace_gfs2_iomap_start(ip, pos, length, flags);10791090 if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) {10801080- ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap);10911091+ ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);10811092 } else {10821093 ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);10831083- release_metapath(&mp);10941094+10841095 /*10851096 * Silently fall back to buffered I/O for stuffed files or if10861097 * we've got a hole (see gfs2_file_direct_write).···10891100 iomap->type != IOMAP_MAPPED)10901101 ret = -ENOTBLK;10911102 }11031103+ if (!ret) {11041104+ get_bh(mp.mp_bh[0]);11051105+ iomap->private = mp.mp_bh[0];11061106+ }11071107+ release_metapath(&mp);10921108 trace_gfs2_iomap_end(ip, iomap, ret);10931109 return ret;10941110}···19021908 if (ret < 0)19031909 goto out;1904191019051905- /* issue read-ahead on metadata */19061906- if (mp.mp_aheight > 1) {19071907- for (; ret > 1; ret--) {19081908- metapointer_range(&mp, mp.mp_aheight - ret,19111911+ /* On the first pass, issue read-ahead on metadata. 
*/19121912+ if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) {19131913+ unsigned int height = mp.mp_aheight - 1;19141914+19151915+ /* No read-ahead for data blocks. */19161916+ if (mp.mp_aheight - 1 == strip_h)19171917+ height--;19181918+19191919+ for (; height >= mp.mp_aheight - ret; height--) {19201920+ metapointer_range(&mp, height,19091921 start_list, start_aligned,19101922 end_list, end_aligned,19111923 &start, &end);
+2-1
fs/gfs2/rgrp.c
···733733734734 if (gl) {735735 glock_clear_object(gl, rgd);736736+ gfs2_rgrp_brelse(rgd);736737 gfs2_glock_put(gl);737738 }738739···11751174 * @rgd: the struct gfs2_rgrpd describing the RG to read in11761175 *11771176 * Read in all of a Resource Group's header and bitmap blocks.11781178- * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps.11771177+ * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps.11791178 *11801179 * Returns: errno11811180 */
+5-2
fs/inode.c
···730730 return LRU_REMOVED;731731 }732732733733- /* recently referenced inodes get one more pass */734734- if (inode->i_state & I_REFERENCED) {733733+ /*734734+ * Recently referenced inodes and inodes with many attached pages735735+ * get one more pass.736736+ */737737+ if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {735738 inode->i_state &= ~I_REFERENCED;736739 spin_unlock(&inode->i_lock);737740 return LRU_ROTATE;
+3-3
fs/namespace.c
···695695696696 hlist_for_each_entry(mp, chain, m_hash) {697697 if (mp->m_dentry == dentry) {698698- /* might be worth a WARN_ON() */699699- if (d_unlinked(dentry))700700- return ERR_PTR(-ENOENT);701698 mp->m_count++;702699 return mp;703700 }···708711 int ret;709712710713 if (d_mountpoint(dentry)) {714714+ /* might be worth a WARN_ON() */715715+ if (d_unlinked(dentry))716716+ return ERR_PTR(-ENOENT);711717mountpoint:712718 read_seqlock_excl(&mount_lock);713719 mp = lookup_mountpoint(dentry);
+2-2
fs/nfs/callback_proc.c
···6666out_iput:6767 rcu_read_unlock();6868 trace_nfs4_cb_getattr(cps->clp, &args->fh, inode, -ntohl(res->status));6969- iput(inode);6969+ nfs_iput_and_deactive(inode);7070out:7171 dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));7272 return res->status;···108108 }109109 trace_nfs4_cb_recall(cps->clp, &args->fh, inode,110110 &args->stateid, -ntohl(res));111111- iput(inode);111111+ nfs_iput_and_deactive(inode);112112out:113113 dprintk("%s: exit with status = %d\n", __func__, ntohl(res));114114 return res;
···26012601 nfs4_clear_state_manager_bit(clp);26022602 /* Did we race with an attempt to give us more work? */26032603 if (clp->cl_state == 0)26042604- break;26042604+ return;26052605 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)26062606- break;26072607- } while (refcount_read(&clp->cl_count) > 1);26082608- return;26062606+ return;26072607+ } while (refcount_read(&clp->cl_count) > 1 && !signalled());26082608+ goto out_drain;26092609+26092610out_error:26102611 if (strlen(section))26112612 section_sep = ": ";···26142613 " with error %d\n", section_sep, section,26152614 clp->cl_hostname, -status);26162615 ssleep(1);26162616+out_drain:26172617 nfs4_end_drain_session(clp);26182618 nfs4_clear_state_manager_bit(clp);26192619}
+3
fs/nfsd/nfs4proc.c
···10381038{10391039 __be32 status;1040104010411041+ if (!cstate->save_fh.fh_dentry)10421042+ return nfserr_nofilehandle;10431043+10411044 status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh,10421045 src_stateid, RD_STATE, src, NULL);10431046 if (status) {
+5-5
fs/notify/fanotify/fanotify.c
···115115 continue;116116 mark = iter_info->marks[type];117117 /*118118- * if the event is for a child and this inode doesn't care about119119- * events on the child, don't send it!118118+ * If the event is for a child and this mark doesn't care about119119+ * events on a child, don't send it!120120 */121121- if (type == FSNOTIFY_OBJ_TYPE_INODE &&122122- (event_mask & FS_EVENT_ON_CHILD) &&123123- !(mark->mask & FS_EVENT_ON_CHILD))121121+ if (event_mask & FS_EVENT_ON_CHILD &&122122+ (type != FSNOTIFY_OBJ_TYPE_INODE ||123123+ !(mark->mask & FS_EVENT_ON_CHILD)))124124 continue;125125126126 marks_mask |= mark->mask;
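The reworked skip condition in the fanotify hunk above is pure boolean logic, so a small truth-table sketch may help. Before the patch, only inode marks were ever skipped for child events; afterwards mount/sb marks (any non-inode mark type) are skipped for them too, since "interested in events on my children" only makes sense for an inode mark. The flag value, enum, and `_sk` names below are illustrative stand-ins, not the kernel's definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for FS_EVENT_ON_CHILD and the mark object types. */
#define FS_EVENT_ON_CHILD_SK	0x08000000u

enum obj_type_sk { OBJ_INODE_SK, OBJ_VFSMOUNT_SK, OBJ_SB_SK };

/* Mirrors the patched condition: skip a mark for a child event unless it
 * is an inode mark that explicitly asked for events on children. */
static bool skip_mark(unsigned int event_mask, enum obj_type_sk type,
		      unsigned int mark_mask)
{
	return (event_mask & FS_EVENT_ON_CHILD_SK) &&
	       (type != OBJ_INODE_SK ||
		!(mark_mask & FS_EVENT_ON_CHILD_SK));
}
```

The interesting row of the truth table is the mount-mark one: under the old `type == FSNOTIFY_OBJ_TYPE_INODE && ...` condition a mount mark was never skipped for a child event, while the new condition skips it regardless of its mask.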
+5-2
fs/notify/fsnotify.c
···167167 parent = dget_parent(dentry);168168 p_inode = parent->d_inode;169169170170- if (unlikely(!fsnotify_inode_watches_children(p_inode)))170170+ if (unlikely(!fsnotify_inode_watches_children(p_inode))) {171171 __fsnotify_update_child_dentry_flags(p_inode);172172- else if (p_inode->i_fsnotify_mask & mask) {172172+ } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) {173173 struct name_snapshot name;174174175175 /* we are notifying a parent so come up with the new mask which···339339 sb = mnt->mnt.mnt_sb;340340 mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask;341341 }342342+ /* An event "on child" is not intended for a mount/sb mark */343343+ if (mask & FS_EVENT_ON_CHILD)344344+ mnt_or_sb_mask = 0;342345343346 /*344347 * Optimization: srcu_read_lock() has a memory barrier which can
+10-2
fs/ocfs2/aops.c
···24112411 /* this io's submitter should not have unlocked this before we could */24122412 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));2413241324142414- if (bytes > 0 && private)24152415- ret = ocfs2_dio_end_io_write(inode, private, offset, bytes);24142414+ if (bytes <= 0)24152415+ mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld",24162416+ (long long)bytes);24172417+ if (private) {24182418+ if (bytes > 0)24192419+ ret = ocfs2_dio_end_io_write(inode, private, offset,24202420+ bytes);24212421+ else24222422+ ocfs2_dio_free_write_ctx(inode, private);24232423+ }2416242424172425 ocfs2_iocb_clear_rw_locked(iocb);24182426
+9
fs/ocfs2/cluster/masklog.h
···178178 ##__VA_ARGS__); \179179} while (0)180180181181+#define mlog_ratelimited(mask, fmt, ...) \182182+do { \183183+ static DEFINE_RATELIMIT_STATE(_rs, \184184+ DEFAULT_RATELIMIT_INTERVAL, \185185+ DEFAULT_RATELIMIT_BURST); \186186+ if (__ratelimit(&_rs)) \187187+ mlog(mask, fmt, ##__VA_ARGS__); \188188+} while (0)189189+181190#define mlog_errno(st) ({ \182191 int _st = (st); \183192 if (_st != -ERESTARTSYS && _st != -EINTR && \
+1
include/linux/can/dev.h
···169169170170void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,171171 unsigned int idx);172172+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr);172173unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx);173174void can_free_echo_skb(struct net_device *dev, unsigned int idx);174175
···179179 kdb_printf("no process for cpu %ld\n", cpu);180180 return 0;181181 }182182- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));182182+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));183183 kdb_parse(buf);184184 return 0;185185 }186186 kdb_printf("btc: cpu status: ");187187 kdb_parse("cpu\n");188188 for_each_online_cpu(cpu) {189189- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));189189+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));190190 kdb_parse(buf);191191 touch_nmi_watchdog();192192 }
+9-6
kernel/debug/kdb/kdb_io.c
···216216 int count;217217 int i;218218 int diag, dtab_count;219219- int key;219219+ int key, buf_size, ret;220220221221222222 diag = kdbgetintenv("DTABCOUNT", &dtab_count);···336336 else337337 p_tmp = tmpbuffer;338338 len = strlen(p_tmp);339339- count = kallsyms_symbol_complete(p_tmp,340340- sizeof(tmpbuffer) -341341- (p_tmp - tmpbuffer));339339+ buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer);340340+ count = kallsyms_symbol_complete(p_tmp, buf_size);342341 if (tab == 2 && count > 0) {343342 kdb_printf("\n%d symbols are found.", count);344343 if (count > dtab_count) {···349350 }350351 kdb_printf("\n");351352 for (i = 0; i < count; i++) {352352- if (WARN_ON(!kallsyms_symbol_next(p_tmp, i)))353353+ ret = kallsyms_symbol_next(p_tmp, i, buf_size);354354+ if (WARN_ON(!ret))353355 break;354354- kdb_printf("%s ", p_tmp);356356+ if (ret != -E2BIG)357357+ kdb_printf("%s ", p_tmp);358358+ else359359+ kdb_printf("%s... ", p_tmp);355360 *(p_tmp + len) = '\0';356361 }357362 if (i >= dtab_count)
+2-2
kernel/debug/kdb/kdb_keyboard.c
···173173 case KT_LATIN:174174 if (isprint(keychar))175175 break; /* printable characters */176176- /* drop through */176176+ /* fall through */177177 case KT_SPEC:178178 if (keychar == K_ENTER)179179 break;180180- /* drop through */180180+ /* fall through */181181 default:182182 return -1; /* ignore unprintables */183183 }
···8383 unsigned long sym_start;8484 unsigned long sym_end;8585 } kdb_symtab_t;8686-extern int kallsyms_symbol_next(char *prefix_name, int flag);8686+extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size);8787extern int kallsyms_symbol_complete(char *prefix_name, int max_len);88888989/* Exported Symbols for kernel loadable modules to use. */
+14-14
kernel/debug/kdb/kdb_support.c
···4040int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)4141{4242 if (KDB_DEBUG(AR))4343- kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname,4343+ kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname,4444 symtab);4545 memset(symtab, 0, sizeof(*symtab));4646 symtab->sym_start = kallsyms_lookup_name(symname);···8888 char *knt1 = NULL;89899090 if (KDB_DEBUG(AR))9191- kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab);9191+ kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);9292 memset(symtab, 0, sizeof(*symtab));93939494 if (addr < 4096)···149149 symtab->mod_name = "kernel";150150 if (KDB_DEBUG(AR))151151 kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, "152152- "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret,152152+ "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,153153 symtab->sym_start, symtab->mod_name, symtab->sym_name,154154 symtab->sym_name);155155···221221 * Parameters:222222 * prefix_name prefix of a symbol name to lookup223223 * flag 0 means search from the head, 1 means continue search.224224+ * buf_size maximum length that can be written to prefix_name225225+ * buffer224226 * Returns:225227 * 1 if a symbol matches the given prefix.226228 * 0 if no string found227229 */228228-int kallsyms_symbol_next(char *prefix_name, int flag)230230+int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size)229231{230232 int prefix_len = strlen(prefix_name);231233 static loff_t pos;···237235 pos = 0;238236239237 while ((name = kdb_walk_kallsyms(&pos))) {240240- if (strncmp(name, prefix_name, prefix_len) == 0) {241241- strncpy(prefix_name, name, strlen(name)+1);242242- return 1;243243- }238238+ if (!strncmp(name, prefix_name, prefix_len))239239+ return strscpy(prefix_name, name, buf_size);244240 }245241 return 0;246242}···432432 *word = w8;433433 break;434434 }435435- /* drop through */435435+ /* fall through */436436 default:437437 diag = KDB_BADWIDTH;438438 
kdb_printf("kdb_getphysword: bad width %ld\n", (long) size);···481481 *word = w8;482482 break;483483 }484484- /* drop through */484484+ /* fall through */485485 default:486486 diag = KDB_BADWIDTH;487487 kdb_printf("kdb_getword: bad width %ld\n", (long) size);···525525 diag = kdb_putarea(addr, w8);526526 break;527527 }528528- /* drop through */528528+ /* fall through */529529 default:530530 diag = KDB_BADWIDTH;531531 kdb_printf("kdb_putword: bad width %ld\n", (long) size);···887887 __func__, dah_first);888888 if (dah_first) {889889 h_used = (struct debug_alloc_header *)debug_alloc_pool;890890- kdb_printf("%s: h_used %p size %d\n", __func__, h_used,890890+ kdb_printf("%s: h_used %px size %d\n", __func__, h_used,891891 h_used->size);892892 }893893 do {894894 h_used = (struct debug_alloc_header *)895895 ((char *)h_free + dah_overhead + h_free->size);896896- kdb_printf("%s: h_used %p size %d caller %p\n",896896+ kdb_printf("%s: h_used %px size %d caller %px\n",897897 __func__, h_used, h_used->size, h_used->caller);898898 h_free = (struct debug_alloc_header *)899899 (debug_alloc_pool + h_free->next);···902902 ((char *)h_free + dah_overhead + h_free->size);903903 if ((char *)h_used - debug_alloc_pool !=904904 sizeof(debug_alloc_pool_aligned))905905- kdb_printf("%s: h_used %p size %d caller %p\n",905905+ kdb_printf("%s: h_used %px size %d caller %px\n",906906 __func__, h_used, h_used->size, h_used->caller);907907out:908908 spin_unlock(&dap_lock);
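The kdb changes above hinge on `strscpy()`'s return convention: unlike `strncpy()`, it always NUL-terminates and reports truncation with `-E2BIG`, which is what the `kdb_io.c` hunk keys on to print `symbol... ` for over-long completions. Here is a minimal, hedged userspace sketch of that contract (an illustration, not the kernel's implementation; the error constant is a stand-in for `-E2BIG`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define E2BIG_SK (-7)	/* stand-in for the kernel's -E2BIG */

/* Sketch of strscpy(): copy at most size-1 bytes, always NUL-terminate,
 * return the number of bytes copied or a negative error on truncation. */
static long strscpy_sketch(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return E2BIG_SK;

	len = strlen(src);
	if (len >= size) {
		/* Truncate but still leave a valid C string behind. */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return E2BIG_SK;
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}
```

This is why the replacement in `kallsyms_symbol_next()` is safer than the old `strncpy(prefix_name, name, strlen(name)+1)`, which trusted the source length and could overrun the destination buffer.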
+48-14
kernel/sched/fair.c
···56745674 return target;56755675}5676567656775677-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);56775677+static unsigned long cpu_util_without(int cpu, struct task_struct *p);5678567856795679-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)56795679+static unsigned long capacity_spare_without(int cpu, struct task_struct *p)56805680{56815681- return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);56815681+ return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);56825682}5683568356845684/*···5738573857395739 avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);5740574057415741- spare_cap = capacity_spare_wake(i, p);57415741+ spare_cap = capacity_spare_without(i, p);5742574257435743 if (spare_cap > max_spare_cap)57445744 max_spare_cap = spare_cap;···58895889 return prev_cpu;5890589058915891 /*58925892- * We need task's util for capacity_spare_wake, sync it up to prev_cpu's58935893- * last_update_time.58925892+ * We need task's util for capacity_spare_without, sync it up to58935893+ * prev_cpu's last_update_time.58945894 */58955895 if (!(sd_flag & SD_BALANCE_FORK))58965896 sync_entity_load_avg(&p->se);···62166216}6217621762186218/*62196219- * cpu_util_wake: Compute CPU utilization with any contributions from62206220- * the waking task p removed.62196219+ * cpu_util_without: compute cpu utilization without any contributions from *p62206220+ * @cpu: the CPU whose utilization is requested62216221+ * @p: the task whose utilization should be discounted62226222+ *62236223+ * The utilization of a CPU is defined by the utilization of tasks currently62246224+ * enqueued on that CPU as well as tasks which are currently sleeping after an62256225+ * execution on that CPU.62266226+ *62276227+ * This method returns the utilization of the specified CPU by discounting the62286228+ * utilization of the specified task, whenever the task is currently62296229+ * contributing to the CPU utilization.62216230 */62226222-static 
unsigned long cpu_util_wake(int cpu, struct task_struct *p)62316231+static unsigned long cpu_util_without(int cpu, struct task_struct *p)62236232{62246233 struct cfs_rq *cfs_rq;62256234 unsigned int util;···62406231 cfs_rq = &cpu_rq(cpu)->cfs;62416232 util = READ_ONCE(cfs_rq->avg.util_avg);6242623362436243- /* Discount task's blocked util from CPU's util */62346234+ /* Discount task's util from CPU's util */62446235 util -= min_t(unsigned int, util, task_util(p));6245623662466237 /*···62496240 * a) if *p is the only task sleeping on this CPU, then:62506241 * cpu_util (== task_util) > util_est (== 0)62516242 * and thus we return:62526252- * cpu_util_wake = (cpu_util - task_util) = 062436243+ * cpu_util_without = (cpu_util - task_util) = 062536244 *62546245 * b) if other tasks are SLEEPING on this CPU, which is now exiting62556246 * IDLE, then:62566247 * cpu_util >= task_util62576248 * cpu_util > util_est (== 0)62586249 * and thus we discount *p's blocked utilization to return:62596259- * cpu_util_wake = (cpu_util - task_util) >= 062506250+ * cpu_util_without = (cpu_util - task_util) >= 062606251 *62616252 * c) if other tasks are RUNNABLE on that CPU and62626253 * util_est > cpu_util···62696260 * covered by the following code when estimated utilization is62706261 * enabled.62716262 */62726272- if (sched_feat(UTIL_EST))62736273- util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));62636263+ if (sched_feat(UTIL_EST)) {62646264+ unsigned int estimated =62656265+ READ_ONCE(cfs_rq->avg.util_est.enqueued);62666266+62676267+ /*62686268+ * Despite the following checks we still have a small window62696269+ * for a possible race, when an execl's select_task_rq_fair()62706270+ * races with LB's detach_task():62716271+ *62726272+ * detach_task()62736273+ * p->on_rq = TASK_ON_RQ_MIGRATING;62746274+ * ---------------------------------- A62756275+ * deactivate_task() \62766276+ * dequeue_task() + RaceTime62776277+ * util_est_dequeue() /62786278+ * 
---------------------------------- B62796279+ *62806280+ * The additional check on "current == p" is required to62816281+ * properly fix the execl regression and it helps in further62826282+ * reducing the chances for the above race.62836283+ */62846284+ if (unlikely(task_on_rq_queued(p) || current == p)) {62856285+ estimated -= min_t(unsigned int, estimated,62866286+ (_task_util_est(p) | UTIL_AVG_UNCHANGED));62876287+ }62886288+ util = max(util, estimated);62896289+ }6274629062756291 /*62766292 * Utilization (estimated) can exceed the CPU capacity, thus let's
+24-23
kernel/sched/psi.c
···633633 */634634void cgroup_move_task(struct task_struct *task, struct css_set *to)635635{636636- bool move_psi = !psi_disabled;637636 unsigned int task_flags = 0;638637 struct rq_flags rf;639638 struct rq *rq;640639641641- if (move_psi) {642642- rq = task_rq_lock(task, &rf);643643-644644- if (task_on_rq_queued(task))645645- task_flags = TSK_RUNNING;646646- else if (task->in_iowait)647647- task_flags = TSK_IOWAIT;648648-649649- if (task->flags & PF_MEMSTALL)650650- task_flags |= TSK_MEMSTALL;651651-652652- if (task_flags)653653- psi_task_change(task, task_flags, 0);640640+ if (psi_disabled) {641641+ /*642642+ * Lame to do this here, but the scheduler cannot be locked643643+ * from the outside, so we move cgroups from inside sched/.644644+ */645645+ rcu_assign_pointer(task->cgroups, to);646646+ return;654647 }655648656656- /*657657- * Lame to do this here, but the scheduler cannot be locked658658- * from the outside, so we move cgroups from inside sched/.659659- */649649+ rq = task_rq_lock(task, &rf);650650+651651+ if (task_on_rq_queued(task))652652+ task_flags = TSK_RUNNING;653653+ else if (task->in_iowait)654654+ task_flags = TSK_IOWAIT;655655+656656+ if (task->flags & PF_MEMSTALL)657657+ task_flags |= TSK_MEMSTALL;658658+659659+ if (task_flags)660660+ psi_task_change(task, task_flags, 0);661661+662662+ /* See comment above */660663 rcu_assign_pointer(task->cgroups, to);661664662662- if (move_psi) {663663- if (task_flags)664664- psi_task_change(task, 0, task_flags);665665+ if (task_flags)666666+ psi_task_change(task, 0, task_flags);665667666666- task_rq_unlock(rq, task, &rf);667667- }668668+ task_rq_unlock(rq, task, &rf);668669}669670#endif /* CONFIG_CGROUPS */670671
+1-2
lib/ubsan.c
···427427EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);428428429429430430-void __noreturn431431-__ubsan_handle_builtin_unreachable(struct unreachable_data *data)430430+void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)432431{433432 unsigned long flags;434433
+8-2
mm/gup.c
···385385 * @vma: vm_area_struct mapping @address386386 * @address: virtual address to look up387387 * @flags: flags modifying lookup behaviour388388- * @page_mask: on output, *page_mask is set according to the size of the page388388+ * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a389389+ * pointer to output page_mask389390 *390391 * @flags can have FOLL_ flags set, defined in <linux/mm.h>391392 *392392- * Returns the mapped (struct page *), %NULL if no mapping exists, or393393+ * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches394394+ * the device's dev_pagemap metadata to avoid repeating expensive lookups.395395+ *396396+ * On output, the @ctx->page_mask is set according to the size of the page.397397+ *398398+ * Return: the mapped (struct page *), %NULL if no mapping exists, or393399 * an error pointer if there is a mapping to something not represented394400 * by a page descriptor (see also vm_normal_page()).395401 */
+19-4
mm/hugetlb.c
···32333233int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,32343234 struct vm_area_struct *vma)32353235{32363236- pte_t *src_pte, *dst_pte, entry;32363236+ pte_t *src_pte, *dst_pte, entry, dst_entry;32373237 struct page *ptepage;32383238 unsigned long addr;32393239 int cow;···32613261 break;32623262 }3263326332643264- /* If the pagetables are shared don't copy or take references */32653265- if (dst_pte == src_pte)32643264+ /*32653265+ * If the pagetables are shared don't copy or take references.32663266+ * dst_pte == src_pte is the common case of src/dest sharing.32673267+ *32683268+ * However, src could have 'unshared' and dst shares with32693269+ * another vma. If dst_pte !none, this implies sharing.32703270+ * Check here before taking page table lock, and once again32713271+ * after taking the lock below.32723272+ */32733273+ dst_entry = huge_ptep_get(dst_pte);32743274+ if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))32663275 continue;3267327632683277 dst_ptl = huge_pte_lock(h, dst, dst_pte);32693278 src_ptl = huge_pte_lockptr(h, src, src_pte);32703279 spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);32713280 entry = huge_ptep_get(src_pte);32723272- if (huge_pte_none(entry)) { /* skip none entry */32813281+ dst_entry = huge_ptep_get(dst_pte);32823282+ if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {32833283+ /*32843284+ * Skip if src entry none. Also, skip in the32853285+ * unlikely case dst entry !none as this implies32863286+ * sharing with another vma.32873287+ */32733288 ;32743289 } else if (unlikely(is_hugetlb_entry_migration(entry) ||32753290 is_hugetlb_entry_hwpoisoned(entry))) {
+1-1
mm/memblock.c
···1179117911801180#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP11811181/*11821182- * Common iterator interface used to define for_each_mem_range().11821182+ * Common iterator interface used to define for_each_mem_pfn_range().11831183 */11841184void __init_memblock __next_mem_pfn_range(int *idx, int nid,11851185 unsigned long *out_start_pfn,
+17-11
mm/page_alloc.c
···40614061 int reserve_flags;4062406240634063 /*40644064- * In the slowpath, we sanity check order to avoid ever trying to40654065- * reclaim >= MAX_ORDER areas which will never succeed. Callers may40664066- * be using allocators in order of preference for an area that is40674067- * too large.40684068- */40694069- if (order >= MAX_ORDER) {40704070- WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));40714071- return NULL;40724072- }40734073-40744074- /*40754064 * We also sanity check to catch abuse of atomic reserves being used by40764065 * callers that are not in atomic context.40774066 */···43524363 unsigned int alloc_flags = ALLOC_WMARK_LOW;43534364 gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */43544365 struct alloc_context ac = { };43664366+43674367+ /*43684368+ * There are several places where we assume that the order value is sane43694369+ * so bail out early if the request is out of bounds.43704370+ */43714371+ if (unlikely(order >= MAX_ORDER)) {43724372+ WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));43734373+ return NULL;43744374+ }4355437543564376 gfp_mask &= gfp_allowed_mask;43574377 alloc_mask = gfp_mask;···7785778777867788 if (PageReserved(page))77877789 goto unmovable;77907790+77917791+ /*77927792+ * If the zone is movable and we have ruled out all reserved77937793+ * pages then it should be reasonably safe to assume the rest77947794+ * is movable.77957795+ */77967796+ if (zone_idx(zone) == ZONE_MOVABLE)77977797+ continue;7788779877897799 /*77907800 * Hugepages are not in LRU lists, but they're movable.
+1-3
mm/shmem.c
···25632563 inode_lock(inode);25642564 /* We're holding i_mutex so we can access i_size directly */2565256525662566- if (offset < 0)25672567- offset = -EINVAL;25682568- else if (offset >= inode->i_size)25662566+ if (offset < 0 || offset >= inode->i_size)25692567 offset = -ENXIO;25702568 else {25712569 start = offset >> PAGE_SHIFT;
···1827182718281828 /*18291829 * The fast way of checking if there are any vmstat diffs.18301830- * This works because the diffs are byte sized items.18311830 */18321832- if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS))18311831+ if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *18321832+ sizeof(p->vm_stat_diff[0])))18331833 return true;18341834#ifdef CONFIG_NUMA18351835- if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS))18351835+ if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *18361836+ sizeof(p->vm_numa_stat_diff[0])))18361837 return true;18371838#endif18381839 }
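The vmstat fix above is easy to misread: `memchr_inv()` is a byte scan, so passing the number of *items* for an array of multi-byte counters only inspects the first few elements and silently ignores the tail. A hedged userspace sketch (the helper and names below are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* memchr_inv()-style check: true if any of nbytes bytes is non-zero. */
static bool any_nonzero_bytes(const void *p, size_t nbytes)
{
	const unsigned char *b = p;
	size_t i;

	for (i = 0; i < nbytes; i++)
		if (b[i])
			return true;
	return false;
}

#define NITEMS_SK 4

/* Buggy check, as before the patch: scans NITEMS_SK bytes. */
static bool has_diffs_buggy(const short diff[NITEMS_SK])
{
	return any_nonzero_bytes(diff, NITEMS_SK);
}

/* Fixed check, mirroring the patch: scans the whole array. */
static bool has_diffs_fixed(const short diff[NITEMS_SK])
{
	return any_nonzero_bytes(diff, NITEMS_SK * sizeof(diff[0]));
}
```

With two-byte counters and only the last element dirty, the buggy length covers just the first two elements and misses the pending diff, which is exactly the class of missed `need_update()` the patch closes.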
+63-40
mm/z3fold.c
···9999#define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)100100101101#define BUDDY_MASK (0x3)102102+#define BUDDY_SHIFT 2102103103104/**104105 * struct z3fold_pool - stores metadata for each z3fold pool···146145 MIDDLE_CHUNK_MAPPED,147146 NEEDS_COMPACTING,148147 PAGE_STALE,149149- UNDER_RECLAIM148148+ PAGE_CLAIMED, /* by either reclaim or free */150149};151150152151/*****************···175174 clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);176175 clear_bit(NEEDS_COMPACTING, &page->private);177176 clear_bit(PAGE_STALE, &page->private);178178- clear_bit(UNDER_RECLAIM, &page->private);177177+ clear_bit(PAGE_CLAIMED, &page->private);179178180179 spin_lock_init(&zhdr->page_lock);181180 kref_init(&zhdr->refcount);···224223 unsigned long handle;225224226225 handle = (unsigned long)zhdr;227227- if (bud != HEADLESS)228228- handle += (bud + zhdr->first_num) & BUDDY_MASK;226226+ if (bud != HEADLESS) {227227+ handle |= (bud + zhdr->first_num) & BUDDY_MASK;228228+ if (bud == LAST)229229+ handle |= (zhdr->last_chunks << BUDDY_SHIFT);230230+ }229231 return handle;230232}231233···236232static struct z3fold_header *handle_to_z3fold_header(unsigned long handle)237233{238234 return (struct z3fold_header *)(handle & PAGE_MASK);235235+}236236+237237+/* only for LAST bud, returns zero otherwise */238238+static unsigned short handle_to_chunks(unsigned long handle)239239+{240240+ return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;239241}240242241243/*···730720 page = virt_to_page(zhdr);731721732722 if (test_bit(PAGE_HEADLESS, &page->private)) {733733- /* HEADLESS page stored */734734- bud = HEADLESS;735735- } else {736736- z3fold_page_lock(zhdr);737737- bud = handle_to_buddy(handle);738738-739739- switch (bud) {740740- case FIRST:741741- zhdr->first_chunks = 0;742742- break;743743- case MIDDLE:744744- zhdr->middle_chunks = 0;745745- zhdr->start_middle = 0;746746- break;747747- case LAST:748748- zhdr->last_chunks = 0;749749- break;750750- default:751751- pr_err("%s: unknown bud 
%d\n", __func__, bud);752752- WARN_ON(1);753753- z3fold_page_unlock(zhdr);754754- return;723723+ /* if a headless page is under reclaim, just leave.724724+ * NB: we use test_and_set_bit for a reason: if the bit725725+ * has not been set before, we release this page726726+ * immediately so we don't care about its value any more.727727+ */728728+ if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {729729+ spin_lock(&pool->lock);730730+ list_del(&page->lru);731731+ spin_unlock(&pool->lock);732732+ free_z3fold_page(page);733733+ atomic64_dec(&pool->pages_nr);755734 }735735+ return;756736 }757737758758- if (bud == HEADLESS) {759759- spin_lock(&pool->lock);760760- list_del(&page->lru);761761- spin_unlock(&pool->lock);762762- free_z3fold_page(page);763763- atomic64_dec(&pool->pages_nr);738738+ /* Non-headless case */739739+ z3fold_page_lock(zhdr);740740+ bud = handle_to_buddy(handle);741741+742742+ switch (bud) {743743+ case FIRST:744744+ zhdr->first_chunks = 0;745745+ break;746746+ case MIDDLE:747747+ zhdr->middle_chunks = 0;748748+ break;749749+ case LAST:750750+ zhdr->last_chunks = 0;751751+ break;752752+ default:753753+ pr_err("%s: unknown bud %d\n", __func__, bud);754754+ WARN_ON(1);755755+ z3fold_page_unlock(zhdr);764756 return;765757 }766758···770758 atomic64_dec(&pool->pages_nr);771759 return;772760 }773773- if (test_bit(UNDER_RECLAIM, &page->private)) {761761+ if (test_bit(PAGE_CLAIMED, &page->private)) {774762 z3fold_page_unlock(zhdr);775763 return;776764 }···848836 }849837 list_for_each_prev(pos, &pool->lru) {850838 page = list_entry(pos, struct page, lru);851851- if (test_bit(PAGE_HEADLESS, &page->private))852852- /* candidate found */853853- break;839839+840840+ /* this bit could have been set by free, in which case841841+ * we pass over to the next page in the pool.842842+ */843843+ if (test_and_set_bit(PAGE_CLAIMED, &page->private))844844+ continue;854845855846 zhdr = page_address(page);856856- if (!z3fold_page_trylock(zhdr))847847+ if 
(test_bit(PAGE_HEADLESS, &page->private))848848+ break;849849+850850+ if (!z3fold_page_trylock(zhdr)) {851851+ zhdr = NULL;857852 continue; /* can't evict at this point */853853+ }858854 kref_get(&zhdr->refcount);859855 list_del_init(&zhdr->buddy);860856 zhdr->cpu = -1;861861- set_bit(UNDER_RECLAIM, &page->private);862857 break;863858 }859859+860860+ if (!zhdr)861861+ break;864862865863 list_del_init(&page->lru);866864 spin_unlock(&pool->lock);···920898 if (test_bit(PAGE_HEADLESS, &page->private)) {921899 if (ret == 0) {922900 free_z3fold_page(page);901901+ atomic64_dec(&pool->pages_nr);923902 return 0;924903 }925904 spin_lock(&pool->lock);···928905 spin_unlock(&pool->lock);929906 } else {930907 z3fold_page_lock(zhdr);931931- clear_bit(UNDER_RECLAIM, &page->private);908908+ clear_bit(PAGE_CLAIMED, &page->private);932909 if (kref_put(&zhdr->refcount,933910 release_z3fold_page_locked)) {934911 atomic64_dec(&pool->pages_nr);···987964 set_bit(MIDDLE_CHUNK_MAPPED, &page->private);988965 break;989966 case LAST:990990- addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);967967+ addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT);991968 break;992969 default:993970 pr_err("unknown buddy id %d\n", buddy);
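The z3fold hunks above also change how a handle is built: the low two bits carry the buddy id, and for a LAST buddy the object's chunk count is packed into the bits just above them, so `z3fold_map()` can recover the size from the handle itself instead of reading `zhdr->last_chunks`, which a racing free may already have zeroed. A hedged sketch of that bit-packing (constants mirror the patch, but the page mask, header address, and `_sk` names are simulated for illustration):

```c
#include <assert.h>

#define BUDDY_MASK_SK	0x3UL
#define BUDDY_SHIFT_SK	2
#define PAGE_MASK_SK	(~0xfffUL)	/* assumes a 4K page for the sketch */

enum buddy_sk { HEADLESS_SK, FIRST_SK, MIDDLE_SK, LAST_SK };

/* Pack buddy id (and, for LAST, the chunk count) into the page offset. */
static unsigned long encode_handle(unsigned long zhdr, enum buddy_sk bud,
				   unsigned short last_chunks)
{
	unsigned long handle = zhdr;

	if (bud != HEADLESS_SK) {
		handle |= bud & BUDDY_MASK_SK;
		if (bud == LAST_SK)
			handle |= (unsigned long)last_chunks << BUDDY_SHIFT_SK;
	}
	return handle;
}

static unsigned long handle_to_header_sk(unsigned long handle)
{
	return handle & PAGE_MASK_SK;
}

/* Only meaningful for a LAST buddy; yields zero for FIRST/MIDDLE. */
static unsigned short handle_to_chunks_sk(unsigned long handle)
{
	return (handle & ~PAGE_MASK_SK) >> BUDDY_SHIFT_SK;
}
```

The design choice is that a handle stays self-describing: both the header address and the LAST object's length survive in the handle even after the header fields are torn down.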
···375375 * getting ACKs from the server. Returns a number representing the life state376376 * which can be compared to that returned by a previous call.377377 *378378- * If this is a client call, ping ACKs will be sent to the server to find out379379- * whether it's still responsive and whether the call is still alive on the380380- * server.378378+ * If the life state stalls, rxrpc_kernel_probe_life() should be called and379379+ * then 2RTT waited.381380 */382382-u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call)381381+u32 rxrpc_kernel_check_life(const struct socket *sock,382382+ const struct rxrpc_call *call)383383{384384 return call->acks_latest;385385}386386EXPORT_SYMBOL(rxrpc_kernel_check_life);387387+388388+/**389389+ * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive390390+ * @sock: The socket the call is on391391+ * @call: The call to check392392+ *393393+ * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to394394+ * find out whether a call is still alive by pinging it. This should cause the395395+ * life state to be bumped in about 2*RTT.396396+ *397397+ * This must be called in TASK_RUNNING state on pain of might_sleep() objecting.398398+ */399399+void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call)400400+{401401+ rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false,402402+ rxrpc_propose_ack_ping_for_check_life);403403+ rxrpc_send_ack_packet(call, true, NULL);404404+}405405+EXPORT_SYMBOL(rxrpc_kernel_probe_life);387406388407/**389408 * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
net/sunrpc/auth_gss/auth_gss.c

···
 	return &gss_auth->rpc_auth;
 }
 
+static struct gss_cred *
+gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred)
+{
+	struct gss_cred *new;
+
+	/* Make a copy of the cred so that we can reference count it */
+	new = kzalloc(sizeof(*gss_cred), GFP_NOIO);
+	if (new) {
+		struct auth_cred acred = {
+			.uid = gss_cred->gc_base.cr_uid,
+		};
+		struct gss_cl_ctx *ctx =
+			rcu_dereference_protected(gss_cred->gc_ctx, 1);
+
+		rpcauth_init_cred(&new->gc_base, &acred,
+				  &gss_auth->rpc_auth,
+				  &gss_nullops);
+		new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE;
+		new->gc_service = gss_cred->gc_service;
+		new->gc_principal = gss_cred->gc_principal;
+		kref_get(&gss_auth->kref);
+		rcu_assign_pointer(new->gc_ctx, ctx);
+		gss_get_ctx(ctx);
+	}
+	return new;
+}
+
 /*
- * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call
+ * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call
  * to the server with the GSS control procedure field set to
  * RPC_GSS_PROC_DESTROY. This should normally cause the server to release
  * all RPCSEC_GSS state associated with that context.
  */
-static int
-gss_destroying_context(struct rpc_cred *cred)
+static void
+gss_send_destroy_context(struct rpc_cred *cred)
 {
 	struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base);
 	struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth);
 	struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1);
+	struct gss_cred *new;
 	struct rpc_task *task;
 
-	if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0)
-		return 0;
+	new = gss_dup_cred(gss_auth, gss_cred);
+	if (new) {
+		ctx->gc_proc = RPC_GSS_PROC_DESTROY;
 
-	ctx->gc_proc = RPC_GSS_PROC_DESTROY;
-	cred->cr_ops = &gss_nullops;
+		task = rpc_call_null(gss_auth->client, &new->gc_base,
+				     RPC_TASK_ASYNC|RPC_TASK_SOFT);
+		if (!IS_ERR(task))
+			rpc_put_task(task);
 
-	/* Take a reference to ensure the cred will be destroyed either
-	 * by the RPC call or by the put_rpccred() below */
-	get_rpccred(cred);
-
-	task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT);
-	if (!IS_ERR(task))
-		rpc_put_task(task);
-
-	put_rpccred(cred);
-	return 1;
+		put_rpccred(&new->gc_base);
+	}
 }
 
 /* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure
···
 gss_destroy_cred(struct rpc_cred *cred)
 {
 
-	if (gss_destroying_context(cred))
-		return;
+	if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0)
+		gss_send_destroy_context(cred);
 	gss_destroy_nullcred(cred);
 }
net/tipc/discover.c

···
 
 	/* Apply trial address if we just left trial period */
 	if (!trial && !self) {
-		tipc_net_finalize(net, tn->trial_addr);
+		tipc_sched_net_finalize(net, tn->trial_addr);
+		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
 	}
···
 		goto exit;
 	}
 
-	/* Trial period over ? */
-	if (!time_before(jiffies, tn->addr_trial_end)) {
-		/* Did we just leave it ? */
-		if (!tipc_own_addr(net))
-			tipc_net_finalize(net, tn->trial_addr);
-
-		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
-		msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));
+	/* Did we just leave trial period ? */
+	if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {
+		mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);
+		spin_unlock_bh(&d->lock);
+		tipc_sched_net_finalize(net, tn->trial_addr);
+		return;
 	}
 
 	/* Adjust timeout interval according to discovery phase */
···
 			d->timer_intv = TIPC_DISC_SLOW;
 		else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST)
 			d->timer_intv = TIPC_DISC_FAST;
+		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
 	}
 
 	mod_timer(&d->timer, jiffies + d->timer_intv);
+37-8
net/tipc/net.c
···
  *     - A local spin_lock protecting the queue of subscriber events.
  */
 
+struct tipc_net_work {
+	struct work_struct work;
+	struct net *net;
+	u32 addr;
+};
+
+static void tipc_net_finalize(struct net *net, u32 addr);
+
 int tipc_net_init(struct net *net, u8 *node_id, u32 addr)
 {
 	if (tipc_own_id(net)) {
···
 	return 0;
 }
 
-void tipc_net_finalize(struct net *net, u32 addr)
+static void tipc_net_finalize(struct net *net, u32 addr)
 {
 	struct tipc_net *tn = tipc_net(net);
 
-	if (!cmpxchg(&tn->node_addr, 0, addr)) {
-		tipc_set_node_addr(net, addr);
-		tipc_named_reinit(net);
-		tipc_sk_reinit(net);
-		tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
-				     TIPC_CLUSTER_SCOPE, 0, addr);
-	}
+	if (cmpxchg(&tn->node_addr, 0, addr))
+		return;
+	tipc_set_node_addr(net, addr);
+	tipc_named_reinit(net);
+	tipc_sk_reinit(net);
+	tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
+			     TIPC_CLUSTER_SCOPE, 0, addr);
+}
+
+static void tipc_net_finalize_work(struct work_struct *work)
+{
+	struct tipc_net_work *fwork;
+
+	fwork = container_of(work, struct tipc_net_work, work);
+	tipc_net_finalize(fwork->net, fwork->addr);
+	kfree(fwork);
+}
+
+void tipc_sched_net_finalize(struct net *net, u32 addr)
+{
+	struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC);
+
+	if (!fwork)
+		return;
+	INIT_WORK(&fwork->work, tipc_net_finalize_work);
+	fwork->net = net;
+	fwork->addr = addr;
+	schedule_work(&fwork->work);
 }
 
 void tipc_net_stop(struct net *net)
+1-1
net/tipc/net.h
···
 extern const struct nla_policy tipc_nl_net_policy[];
 
 int tipc_net_init(struct net *net, u8 *node_id, u32 addr);
-void tipc_net_finalize(struct net *net, u32 addr);
+void tipc_sched_net_finalize(struct net *net, u32 addr);
 void tipc_net_stop(struct net *net);
 int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);
 int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
+11-4
net/tipc/socket.c
···
 /**
  * tipc_sk_anc_data_recv - optionally capture ancillary data for received message
  * @m: descriptor for message info
- * @msg: received message header
+ * @skb: received message buffer
  * @tsk: TIPC port associated with message
  *
  * Note: Ancillary data is not captured if not requested by receiver.
  *
  * Returns 0 if successful, otherwise errno
  */
-static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,
+static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,
 				 struct tipc_sock *tsk)
 {
+	struct tipc_msg *msg;
 	u32 anc_data[3];
 	u32 err;
 	u32 dest_type;
···
 
 	if (likely(m->msg_controllen == 0))
 		return 0;
+	msg = buf_msg(skb);
 
 	/* Optionally capture errored message object(s) */
 	err = msg ? msg_errcode(msg) : 0;
···
 		if (res)
 			return res;
 		if (anc_data[1]) {
+			if (skb_linearize(skb))
+				return -ENOMEM;
+			msg = buf_msg(skb);
 			res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1],
 				       msg_data(msg));
 			if (res)
···
 
 	/* Collect msg meta data, including error code and rejected data */
 	tipc_sk_set_orig_addr(m, skb);
-	rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+	rc = tipc_sk_anc_data_recv(m, skb, tsk);
 	if (unlikely(rc))
 		goto exit;
+	hdr = buf_msg(skb);
 
 	/* Capture data if non-error msg, otherwise just set return value */
 	if (likely(!err)) {
···
 		/* Collect msg meta data, incl. error code and rejected data */
 		if (!copied) {
 			tipc_sk_set_orig_addr(m, skb);
-			rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+			rc = tipc_sk_anc_data_recv(m, skb, tsk);
 			if (rc)
 				break;
+			hdr = buf_msg(skb);
 		}
 
 		/* Copy data if msg ok, otherwise return error/partial data */
+1-1
scripts/faddr2line
···
 
 # Try to figure out the source directory prefix so we can remove it from the
 # addr2line output. HACK ALERT: This assumes that start_kernel() is in
-# kernel/init.c! This only works for vmlinux. Otherwise it falls back to
+# init/main.c! This only works for vmlinux. Otherwise it falls back to
 # printing the absolute path.
 find_dir_prefix() {
 	local objfile=$1
-1
scripts/spdxcheck.py
···
         self.curline = 0
         try:
             for line in fd:
-                line = line.decode(locale.getpreferredencoding(False), errors='ignore')
                 self.curline += 1
                 if self.curline > maxlines:
                     break
security/selinux/hooks.c

···
 	addr_buf = address;
 
 	while (walk_size < addrlen) {
+		if (walk_size + sizeof(sa_family_t) > addrlen)
+			return -EINVAL;
+
 		addr = addr_buf;
 		switch (addr->sa_family) {
 		case AF_UNSPEC:
+7-3
security/selinux/ss/mls.c
···
 	char *rangep[2];
 
 	if (!pol->mls_enabled) {
-		if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0')
-			return 0;
-		return -EINVAL;
+		/*
+		 * With no MLS, only return -EINVAL if there is a MLS field
+		 * and it did not come from an xattr.
+		 */
+		if (oldc && def_sid == SECSID_NULL)
+			return -EINVAL;
+		return 0;
 	}
 
 	/*
+4-4
tools/testing/nvdimm/test/nfit.c
···
 	[6] = NFIT_DIMM_HANDLE(1, 0, 0, 0, 1),
 };
 
-static unsigned long dimm_fail_cmd_flags[NUM_DCR];
-static int dimm_fail_cmd_code[NUM_DCR];
+static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)];
+static int dimm_fail_cmd_code[ARRAY_SIZE(handle)];
 
 static const struct nd_intel_smart smart_def = {
 	.flags = ND_INTEL_SMART_HEALTH_VALID
···
 		unsigned long deadline;
 		spinlock_t lock;
 	} ars_state;
-	struct device *dimm_dev[NUM_DCR];
+	struct device *dimm_dev[ARRAY_SIZE(handle)];
 	struct nd_intel_smart *smart;
 	struct nd_intel_smart_threshold *smart_threshold;
 	struct badrange badrange;
···
 	u32 nfit_handle = __to_nfit_memdev(nfit_mem)->device_handle;
 	int i;
 
-	for (i = 0; i < NUM_DCR; i++)
+	for (i = 0; i < ARRAY_SIZE(handle); i++)
 		if (nfit_handle == handle[i])
 			dev_set_drvdata(nfit_test->dimm_dev[i],
 					nfit_mem);