···
 
 The ARC HS can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters.
+are 100+ hardware conditions dynamically mapped to up to 32 counters.
 It also supports overflow interrupts.
 
 Required properties:
Documentation/devicetree/bindings/arc/pct.txt | +1 -1
···
 
 The ARC700 can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters
+are 100+ hardware conditions dynamically mapped to up to 32 counters
 
 Note that:
  * The ARC 700 PCT does not support interrupts; although HW events may be
Documentation/devicetree/bindings/arm/cpus.txt | -1
···
 			    can be one of:
 			    "allwinner,sun6i-a31"
 			    "allwinner,sun8i-a23"
-			    "arm,psci"
 			    "arm,realview-smp"
 			    "brcm,bcm-nsp-smp"
 			    "brcm,brahma-b15"
···
 Required properties :
 
  - reg : Offset and length of the register set for the device
- - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or
-   "rockchip,rk3288-i2c".
+ - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c",
+   "rockchip,rk3228-i2c" or "rockchip,rk3288-i2c".
  - interrupts : interrupt number
  - clocks : parent clock
 
···
 Required properties:
 - compatible: Should be "mediatek,mt7623-eth"
 - reg: Address and length of the register set for the device
-- interrupts: Should contain the frame engines interrupt
+- interrupts: Should contain the three frame engines interrupts in numeric
+  order. These are fe_int0, fe_int1 and fe_int2.
 - clocks: the clock used by the core
 - clock-names: the names of the clock listed in the clocks property. These are
   "ethif", "esw", "gp2", "gp1"
···
 			 <&ethsys CLK_ETHSYS_GP2>,
 			 <&ethsys CLK_ETHSYS_GP1>;
 	clock-names = "ethif", "esw", "gp2", "gp1";
-	interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_LOW>;
+	interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_LOW
+		      GIC_SPI 199 IRQ_TYPE_LEVEL_LOW
+		      GIC_SPI 198 IRQ_TYPE_LEVEL_LOW>;
 	power-domains = <&scpsys MT2701_POWER_DOMAIN_ETH>;
 	resets = <&ethsys MT2701_ETHSYS_ETH_RST>;
 	reset-names = "eth";
···
   is the rtc tick interrupt. The number of cells representing a interrupt
   depends on the parent interrupt controller.
 - clocks: Must contain a list of phandle and clock specifier for the rtc
-  and source clocks.
-- clock-names: Must contain "rtc" and "rtc_src" entries sorted in the
-  same order as the clocks property.
+  clock and in the case of a s3c6410 compatible controller, also
+  a source clock.
+- clock-names: Must contain "rtc" and for a s3c6410 compatible controller,
+  a "rtc_src" sorted in the same order as the clocks property.
 
 Example:
 
Documentation/input/event-codes.txt | +4
···
     proximity of the device and while the value of the BTN_TOUCH code is 0. If
     the input device may be used freely in three dimensions, consider ABS_Z
     instead.
+  - BTN_TOOL_<name> should be set to 1 when the tool comes into detectable
+    proximity and set to 0 when the tool leaves detectable proximity.
+    BTN_TOOL_<name> signals the type of tool that is currently detected by the
+    hardware and is otherwise independent of ABS_DISTANCE and/or BTN_TOUCH.
 
 * ABS_MT_<name>:
   - Used to describe multitouch input events. Please see
Documentation/sysctl/vm.txt | +9 -8
···
 "Zone Order" orders the zonelists by zone type, then by node within each
 zone.  Specify "[Zz]one" for zone order.
 
-Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
-will select "node" order in following case.
-(1) if the DMA zone does not exist or
-(2) if the DMA zone comprises greater than 50% of the available memory or
-(3) if any node's DMA zone comprises greater than 70% of its local memory and
-    the amount of local memory is big enough.
+Specify "[Dd]efault" to request automatic configuration.
 
-Otherwise, "zone" order will be selected. Default order is recommended unless
-this is causing problems for your system/application.
+On 32-bit, the Normal zone needs to be preserved for allocations accessible
+by the kernel, so "zone" order will be selected.
+
+On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
+order will be selected.
+
+Default order is recommended unless this is causing problems for your
+system/application.
 
 ==============================================================
 
Documentation/x86/x86_64/mm.txt | +3 -3
···
 ffffffef00000000 - ffffffff00000000 (=64 GB) EFI region mapping space
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
-ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space
+ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
 ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
···
 the processes using the page fault handler, with init_level4_pgt as
 reference.
 
-Current X86-64 implementations only support 40 bits of address space,
-but we support up to 46 bits. This expands into MBZ space in the page tables.
+Current X86-64 implementations support up to 46 bits of address space (64 TB),
+which is our current limit. This expands into MBZ space in the page tables.
 
 We map EFI runtime services in the 'efi_pgd' PGD in a 64Gb large virtual
 memory window (this size is arbitrary, it can be raised later if needed).
MAINTAINERS | +12 -3
···
 
 ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
 M:	Or Gerlitz <ogerlitz@mellanox.com>
-M:	Sagi Grimberg <sagig@mellanox.com>
+M:	Sagi Grimberg <sagi@grimberg.me>
 M:	Roi Dayan <roid@mellanox.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
···
 F:	drivers/infiniband/ulp/iser/
 
 ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
-M:	Sagi Grimberg <sagig@mellanox.com>
+M:	Sagi Grimberg <sagi@grimberg.me>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
 L:	linux-rdma@vger.kernel.org
 L:	target-devel@vger.kernel.org
···
 F:	mm/kmemleak-test.c
 
 KPROBES
-M:	Ananth N Mavinakayanahalli <ananth@in.ibm.com>
+M:	Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
···
 S:	Maintained
 F:	drivers/clk/ti/
 F:	include/linux/clk/ti.h
+
+TI ETHERNET SWITCH DRIVER (CPSW)
+M:	Mugunthan V N <mugunthanvnm@ti.com>
+R:	Grygorii Strashko <grygorii.strashko@ti.com>
+L:	linux-omap@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/ethernet/ti/cpsw*
+F:	drivers/net/ethernet/ti/davinci*
 
 TI FLASH MEDIA INTERFACE DRIVER
 M:	Alex Dubov <oakad@yahoo.com>
···
 #define STATUS_AD_MASK		(1<<STATUS_AD_BIT)
 #define STATUS_IE_MASK		(1<<STATUS_IE_BIT)
 
+/* status32 Bits as encoded/expected by CLRI/SETI */
+#define CLRI_STATUS_IE_BIT	4
+
+#define CLRI_STATUS_E_MASK	0xF
+#define CLRI_STATUS_IE_MASK	(1 << CLRI_STATUS_IE_BIT)
+
 #define AUX_USER_SP		0x00D
 #define AUX_IRQ_CTRL		0x00E
 #define AUX_IRQ_ACT		0x043	/* Active Intr across all levels */
···
 	:
 	: "memory");
 
+	/* To be compatible with irq_save()/irq_restore()
+	 * encode the irq bits as expected by CLRI/SETI
+	 * (this was needed to make CONFIG_TRACE_IRQFLAGS work)
+	 */
+	temp = (1 << 5) |
+		((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
+		(temp & CLRI_STATUS_E_MASK);
 	return temp;
 }
···
  */
 static inline int arch_irqs_disabled_flags(unsigned long flags)
 {
-	return !(flags & (STATUS_IE_MASK));
+	return !(flags & CLRI_STATUS_IE_MASK);
 }
 
 static inline int arch_irqs_disabled(void)
···
 
 #else
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+.macro TRACE_ASM_IRQ_DISABLE
+	bl	trace_hardirqs_off
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+	bl	trace_hardirqs_on
+.endm
+
+#else
+
+.macro TRACE_ASM_IRQ_DISABLE
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+.endm
+
+#endif
 .macro IRQ_DISABLE	scratch
 	clri
+	TRACE_ASM_IRQ_DISABLE
 .endm
 
 .macro IRQ_ENABLE	scratch
+	TRACE_ASM_IRQ_ENABLE
 	seti
 .endm
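The re-encoding the hunk above adds to arch_local_save_flags() can be read as a pure function: set bit 5, move STATUS32.IE to the bit position CLRI/SETI expect (bit 4), and keep the interrupt priority level in bits 3:0. A standalone sketch for illustration; the STATUS_IE_BIT position used here is an assumed value, not taken from the patch:

```c
#include <assert.h>

#define STATUS_IE_BIT		31	/* assumed position, for illustration only */
#define STATUS_IE_MASK		(1UL << STATUS_IE_BIT)

#define CLRI_STATUS_IE_BIT	4
#define CLRI_STATUS_E_MASK	0xF
#define CLRI_STATUS_IE_MASK	(1 << CLRI_STATUS_IE_BIT)

/* Re-encode raw STATUS32 flags into the CLRI/SETI layout, mirroring
 * the expression added to arch_local_save_flags() in the patch. */
static unsigned long status32_to_clri(unsigned long temp)
{
	return (1 << 5) |
	       ((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
	       (temp & CLRI_STATUS_E_MASK);
}
```

With this encoding, arch_irqs_disabled_flags() only has to test CLRI_STATUS_IE_MASK, which is why the patch switches it away from STATUS_IE_MASK.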
arch/arc/kernel/entry-arcv2.S | +9 -1
···
 
 	clri		; To make status32.IE agree with CPU internal state
 
-	lr  r0, [ICAUSE]
+#ifdef CONFIG_TRACE_IRQFLAGS
+	TRACE_ASM_IRQ_DISABLE
+#endif
 
+	lr  r0, [ICAUSE]
 	mov   blink, ret_from_exception
 
 	b.d  arch_do_IRQ
···
 ; All 2 entry points to here already disable interrupts
 
 .Lrestore_regs:
+
+	# Interrupts are actually disabled from this point on, but will get
+	# reenabled after we return from interrupt/exception.
+	# But irq tracer needs to be told now...
+	TRACE_ASM_IRQ_ENABLE
 
 	ld	r0, [sp, PT_status32]	; U/K mode at time of entry
 	lr	r10, [AUX_IRQ_ACT]
arch/arc/kernel/entry-compact.S | +3
···
 
 .Lrestore_regs:
 
+	# Interrupts are actually disabled from this point on, but will get
+	# reenabled after we return from interrupt/exception.
+	# But irq tracer needs to be told now...
 	TRACE_ASM_IRQ_ENABLE
 
 	lr	r10, [status32]
···
 
 #include "dm814x-clocks.dtsi"
 
+/* Compared to dm814x, dra62x does not have hdic, l3 or dss PLLs */
+&adpll_hdvic_ck {
+	status = "disabled";
+};
+
+&adpll_l3_ck {
+	status = "disabled";
+};
+
+&adpll_dss_ck {
+	status = "disabled";
+};
+
+/* Compared to dm814x, dra62x has interconnect clocks on isp PLL */
+&sysclk4_ck {
+	clocks = <&adpll_isp_ck 1>;
+};
+
+&sysclk5_ck {
+	clocks = <&adpll_isp_ck 1>;
+};
+
+&sysclk6_ck {
+	clocks = <&adpll_isp_ck 1>;
+};
+
 /*
  * Compared to dm814x, dra62x has different shifts and more mux options.
  * Please add the extra options for ysclk_14 and 16 if really needed.
···
 	pcie_bus_clk: pcie_bus_clk {
 		compatible = "fixed-clock";
 		#clock-cells = <0>;
-		clock-frequency = <100000000>;
+		clock-frequency = <0>;
 		clock-output-names = "pcie_bus";
-		status = "disabled";
 	};
 
 	/* External SCIF clock */
···
 		#clock-cells = <0>;
 		/* This value must be overridden by the board. */
 		clock-frequency = <0>;
-		status = "disabled";
 	};
 
 	/* External USB clock - can be overridden by the board */
···
 		/* This value must be overridden by the board. */
 		clock-frequency = <0>;
 		clock-output-names = "can_clk";
-		status = "disabled";
 	};
 
 	/* Special CPG clocks */
arch/arm/include/asm/cputype.h | +1 -1
···
 	int feature = (features >> field) & 15;
 
 	/* feature registers are signed values */
-	if (feature > 8)
+	if (feature > 7)
 		feature -= 16;
 
 	return feature;
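The one-character fix above corrects the sign-extension threshold for a 4-bit CPUID feature field: in two's complement, values 8..15 encode -8..-1, so anything above 7 (not above 8) needs 16 subtracted; the old test left the value 8 (i.e. -8) unextended. A standalone copy of the corrected logic, with an illustrative function name rather than the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Extract a signed 4-bit field from a feature register.  Values 8..15
 * are two's-complement negatives -8..-1; the fixed "> 7" test makes
 * sure 8 is sign-extended to -8 (the old "> 8" test missed it). */
static int cpuid_feature_extract_field(uint32_t features, int field)
{
	int feature = (features >> field) & 15;

	/* feature registers are signed values */
	if (feature > 7)
		feature -= 16;

	return feature;
}
```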
···
  */
 static void irq_save_context(void)
 {
+	/* DRA7 has no SAR to save */
+	if (soc_is_dra7xx())
+		return;
+
 	if (!sar_base)
 		sar_base = omap4_get_sar_ram_base();
 
···
 {
 	u32 val;
 	u32 offset = SAR_BACKUP_STATUS_OFFSET;
+	/* DRA7 has no SAR to save */
+	if (soc_is_dra7xx())
+		return;
 
 	if (soc_is_omap54xx())
 		offset = OMAP5_SAR_BACKUP_STATUS_OFFSET;
arch/arm/mach-omap2/pm34xx.c | +13 -10
···
 	int per_next_state = PWRDM_POWER_ON;
 	int core_next_state = PWRDM_POWER_ON;
 	int per_going_off;
-	int core_prev_state;
 	u32 sdrc_pwr = 0;
 
 	mpu_next_state = pwrdm_read_next_pwrst(mpu_pwrdm);
···
 		sdrc_write_reg(sdrc_pwr, SDRC_POWER);
 
 	/* CORE */
-	if (core_next_state < PWRDM_POWER_ON) {
-		core_prev_state = pwrdm_read_prev_pwrst(core_pwrdm);
-		if (core_prev_state == PWRDM_POWER_OFF) {
-			omap3_core_restore_context();
-			omap3_cm_restore_context();
-			omap3_sram_restore_context();
-			omap2_sms_restore_context();
-		}
+	if (core_next_state < PWRDM_POWER_ON &&
+	    pwrdm_read_prev_pwrst(core_pwrdm) == PWRDM_POWER_OFF) {
+		omap3_core_restore_context();
+		omap3_cm_restore_context();
+		omap3_sram_restore_context();
+		omap2_sms_restore_context();
+	} else {
+		/*
+		 * In off-mode resume path above, omap3_core_restore_context
+		 * also handles the INTC autoidle restore done here so limit
+		 * this to non-off mode resume paths so we don't do it twice.
+		 */
+		omap3_intc_resume_idle();
 	}
-	omap3_intc_resume_idle();
 
 	pwrdm_post_transition(NULL);
 
arch/arm/mach-shmobile/timer.c | +11 -17
···
 void __init shmobile_init_delay(void)
 {
 	struct device_node *np, *cpus;
-	bool is_a7_a8_a9 = false;
-	bool is_a15 = false;
+	unsigned int div = 0;
 	bool has_arch_timer = false;
 	u32 max_freq = 0;
 
···
 		if (!of_property_read_u32(np, "clock-frequency", &freq))
 			max_freq = max(max_freq, freq);
 
-		if (of_device_is_compatible(np, "arm,cortex-a8") ||
-		    of_device_is_compatible(np, "arm,cortex-a9")) {
-			is_a7_a8_a9 = true;
-		} else if (of_device_is_compatible(np, "arm,cortex-a7")) {
-			is_a7_a8_a9 = true;
-			has_arch_timer = true;
-		} else if (of_device_is_compatible(np, "arm,cortex-a15")) {
-			is_a15 = true;
+		if (of_device_is_compatible(np, "arm,cortex-a8")) {
+			div = 2;
+		} else if (of_device_is_compatible(np, "arm,cortex-a9")) {
+			div = 1;
+		} else if (of_device_is_compatible(np, "arm,cortex-a7") ||
+			   of_device_is_compatible(np, "arm,cortex-a15")) {
+			div = 1;
 			has_arch_timer = true;
 		}
 	}
 
 	of_node_put(cpus);
 
-	if (!max_freq)
+	if (!max_freq || !div)
 		return;
 
-	if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) {
-		if (is_a7_a8_a9)
-			shmobile_setup_delay_hz(max_freq, 1, 3);
-		else if (is_a15)
-			shmobile_setup_delay_hz(max_freq, 2, 4);
-	}
+	if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
+		shmobile_setup_delay_hz(max_freq, 1, div);
 }
···
 	msr	vpidr_el2, x0
 	msr	vmpidr_el2, x1
 
+	/*
+	 * When VHE is not in use, early init of EL2 and EL1 needs to be
+	 * done here.
+	 * When VHE _is_ in use, EL1 will not be used in the host and
+	 * requires no configuration, and all non-hyp-specific EL2 setup
+	 * will be done via the _EL1 system register aliases in __cpu_setup.
+	 */
+	cbnz	x2, 1f
+
 	/* sctlr_el1 */
 	mov	x0, #0x0800			// Set/clear RES{1,0} bits
 CPU_BE(	movk	x0, #0x33d0, lsl #16	)	// Set EE and E0E on BE systems
···
 	/* Coprocessor traps. */
 	mov	x0, #0x33ff
 	msr	cptr_el2, x0			// Disable copro. traps to EL2
+1:
 
 #ifdef CONFIG_COMPAT
 	msr	hstr_el2, xzr			// Disable CP15 traps to EL2
···
 
 	.macro	update_early_cpu_boot_status status, tmp1, tmp2
 	mov	\tmp2, #\status
-	str_l	\tmp2, __early_cpu_boot_status, \tmp1
+	adr_l	\tmp1, __early_cpu_boot_status
+	str	\tmp2, [\tmp1]
 	dmb	sy
 	dc	ivac, \tmp1			// Invalidate potentially stale cache line
 	.endm
arch/arm64/kernel/smp_spin_table.c | +6 -5
···
 static int smp_spin_table_cpu_init(unsigned int cpu)
 {
 	struct device_node *dn;
+	int ret;
 
 	dn = of_get_cpu_node(cpu, NULL);
 	if (!dn)
···
 	/*
 	 * Determine the address from which the CPU is polling.
 	 */
-	if (of_property_read_u64(dn, "cpu-release-addr",
-				 &cpu_release_addr[cpu])) {
+	ret = of_property_read_u64(dn, "cpu-release-addr",
+				   &cpu_release_addr[cpu]);
+	if (ret)
 		pr_err("CPU %d: missing or invalid cpu-release-addr property\n",
 		       cpu);
 
-		return -1;
-	}
+	of_node_put(dn);
 
-	return 0;
+	return ret;
 }
 
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
···
 #define PPC_FEATURE_PSERIES_PERFMON_COMPAT \
 					0x00000040
 
+/* Reserved - do not use		0x00000004 */
 #define PPC_FEATURE_TRUE_LE		0x00000002
 #define PPC_FEATURE_PPC_LE		0x00000001
 
···
 static inline void __tlb_flush_kernel(void)
 {
 	if (MACHINE_HAS_IDTE)
-		__tlb_flush_idte((unsigned long) init_mm.pgd |
-				 init_mm.context.asce_bits);
+		__tlb_flush_idte(init_mm.context.asce);
 	else
 		__tlb_flush_global();
 }
···
 static inline void __tlb_flush_kernel(void)
 {
 	if (MACHINE_HAS_TLB_LC)
-		__tlb_flush_idte_local((unsigned long) init_mm.pgd |
-				       init_mm.context.asce_bits);
+		__tlb_flush_idte_local(init_mm.context.asce);
 	else
 		__tlb_flush_local();
 }
···
 	 * only ran on the local cpu.
 	 */
 	if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list))
-		__tlb_flush_asce(mm, (unsigned long) mm->pgd |
-				 mm->context.asce_bits);
+		__tlb_flush_asce(mm, mm->context.asce);
 	else
 		__tlb_flush_full(mm);
 }
arch/s390/lib/spinlock.c | +1
···
 			if (_raw_compare_and_swap(&lp->lock, 0, cpu))
 				return;
 			local_irq_restore(flags);
+			continue;
 		}
 		/* Check if the lock owner is running. */
 		if (first_diag && cpu_is_preempted(~owner)) {
···
 		pr_cont("Knights Landing events, ");
 		break;
 
+	case 142: /* 14nm Kabylake Mobile */
+	case 158: /* 14nm Kabylake Desktop */
 	case 78: /* 14nm Skylake Mobile */
 	case 94: /* 14nm Skylake Desktop */
 	case 85: /* 14nm Skylake Server */
arch/x86/include/asm/hugetlb.h | +1
···
 #include <asm/page.h>
 #include <asm-generic/hugetlb.h>
 
+#define hugepages_supported() cpu_has_pse
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
 					 unsigned long addr,
arch/x86/kernel/apic/vector.c | +2 -1
···
 	struct irq_desc *desc;
 	int cpu, vector;
 
-	BUG_ON(!data->cfg.vector);
+	if (!data->cfg.vector)
+		return;
 
 	vector = data->cfg.vector;
 	for_each_cpu_and(cpu, data->domain, cpu_online_mask)
arch/x86/kernel/cpu/mshyperv.c | +12
···
 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
 };
 
+static unsigned char hv_get_nmi_reason(void)
+{
+	return 0;
+}
+
 static void __init ms_hyperv_init_platform(void)
 {
 	/*
···
 	machine_ops.crash_shutdown = hv_machine_crash_shutdown;
 #endif
 	mark_tsc_unstable("running on Hyper-V");
+
+	/*
+	 * Generation 2 instances don't support reading the NMI status from
+	 * 0x61 port.
+	 */
+	if (efi_enabled(EFI_BOOT))
+		x86_platform.get_nmi_reason = hv_get_nmi_reason;
 }
 
 const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
arch/x86/kernel/head_32.S | -6
···
 	/* Make changes effective */
 	wrmsr
 
-	/*
-	 * And make sure that all the mappings we set up have NX set from
-	 * the beginning.
-	 */
-	orl $(1 << (_PAGE_BIT_NX - 32)), pa(__supported_pte_mask + 4)
-
 enable_paging:
 
 /*
arch/x86/mm/setup_nx.c | +3 -2
···
 
 void x86_configure_nx(void)
 {
-	/* If disable_nx is set, clear NX on all new mappings going forward. */
-	if (disable_nx)
+	if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
+		__supported_pte_mask |= _PAGE_NX;
+	else
 		__supported_pte_mask &= ~_PAGE_NX;
 }
 
arch/x86/xen/spinlock.c | +6
···
 
 static void xen_qlock_kick(int cpu)
 {
+	int irq = per_cpu(lock_kicker_irq, cpu);
+
+	/* Don't kick if the target's kicker interrupt is not initialized. */
+	if (irq == -1)
+		return;
+
 	xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 }
 
···
 	return false;
 }
 
-#if defined(CONFIG_OF) && defined(CONFIG_OF_ADDRESS)
 static struct device_node *bcma_of_find_child_device(struct platform_device *parent,
 						     struct bcma_device *core)
 {
···
 	struct of_phandle_args out_irq;
 	int ret;
 
-	if (!parent || !parent->dev.of_node)
+	if (!IS_ENABLED(CONFIG_OF_IRQ) || !parent || !parent->dev.of_node)
 		return 0;
 
 	ret = bcma_of_irq_parse(parent, core, &out_irq, num);
···
 {
 	struct device_node *node;
 
+	if (!IS_ENABLED(CONFIG_OF_IRQ))
+		return;
+
 	node = bcma_of_find_child_device(parent, core);
 	if (node)
 		core->dev.of_node = node;
 
 	core->irq = bcma_of_get_irq(parent, core, 0);
 }
-#else
-static void bcma_of_fill_device(struct platform_device *parent,
-				struct bcma_device *core)
-{
-}
-static inline unsigned int bcma_of_get_irq(struct platform_device *parent,
-					   struct bcma_device *core, int num)
-{
-	return 0;
-}
-#endif /* CONFIG_OF */
 
 unsigned int bcma_core_irq(struct bcma_device *core, int num)
 {
drivers/block/rbd.c | +25 -27
···
 			u8 *order, u64 *snap_size);
 static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
 		u64 *snap_features);
-static u64 rbd_snap_id_by_name(struct rbd_device *rbd_dev, const char *name);
 
 static int rbd_open(struct block_device *bdev, fmode_t mode)
 {
···
 	struct rbd_device *rbd_dev = (struct rbd_device *)data;
 	int ret;
 
-	if (!rbd_dev)
-		return;
-
 	dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,
 		rbd_dev->header_name, (unsigned long long)notify_id,
 		(unsigned int)opcode);
···
 
 	ceph_osdc_cancel_event(rbd_dev->watch_event);
 	rbd_dev->watch_event = NULL;
+
+	dout("%s flushing notifies\n", __func__);
+	ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 }
 
 /*
···
 static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 {
 	sector_t size;
-	bool removing;
 
 	/*
-	 * Don't hold the lock while doing disk operations,
-	 * or lock ordering will conflict with the bdev mutex via:
-	 * rbd_add() -> blkdev_get() -> rbd_open()
+	 * If EXISTS is not set, rbd_dev->disk may be NULL, so don't
+	 * try to update its size.  If REMOVING is set, updating size
+	 * is just useless work since the device can't be opened.
 	 */
-	spin_lock_irq(&rbd_dev->lock);
-	removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
-	spin_unlock_irq(&rbd_dev->lock);
-	/*
-	 * If the device is being removed, rbd_dev->disk has
-	 * been destroyed, so don't try to update its size
-	 */
-	if (!removing) {
+	if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) &&
+	    !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
 		size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
 		dout("setting size to %llu sectors", (unsigned long long)size);
 		set_capacity(rbd_dev->disk, size);
···
 		__le64 features;
 		__le64 incompat;
 	} __attribute__ ((packed)) features_buf = { 0 };
-	u64 incompat;
+	u64 unsup;
 	int ret;
 
 	ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
···
 	if (ret < sizeof (features_buf))
 		return -ERANGE;
 
-	incompat = le64_to_cpu(features_buf.incompat);
-	if (incompat & ~RBD_FEATURES_SUPPORTED)
+	unsup = le64_to_cpu(features_buf.incompat) & ~RBD_FEATURES_SUPPORTED;
+	if (unsup) {
+		rbd_warn(rbd_dev, "image uses unsupported features: 0x%llx",
+			 unsup);
 		return -ENXIO;
+	}
 
 	*snap_features = le64_to_cpu(features_buf.features);
 
···
 	return ret;
 }
 
+/*
+ * rbd_dev->header_rwsem must be locked for write and will be unlocked
+ * upon return.
+ */
 static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
 {
 	int ret;
···
 
 	ret = rbd_dev_id_get(rbd_dev);
 	if (ret)
-		return ret;
+		goto err_out_unlock;
 
 	BUILD_BUG_ON(DEV_NAME_LEN
 			< sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH);
···
 	/* Everything's ready.  Announce the disk to the world. */
 
 	set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
-	add_disk(rbd_dev->disk);
+	up_write(&rbd_dev->header_rwsem);
 
+	add_disk(rbd_dev->disk);
 	pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name,
 		(unsigned long long) rbd_dev->mapping.size);
 
···
 	unregister_blkdev(rbd_dev->major, rbd_dev->name);
 err_out_id:
 	rbd_dev_id_put(rbd_dev);
+err_out_unlock:
+	up_write(&rbd_dev->header_rwsem);
 	return ret;
 }
 
···
 	spec = NULL;		/* rbd_dev now owns this */
 	rbd_opts = NULL;	/* rbd_dev now owns this */
 
+	down_write(&rbd_dev->header_rwsem);
 	rc = rbd_dev_image_probe(rbd_dev, 0);
 	if (rc < 0)
 		goto err_out_rbd_dev;
···
 	return rc;
 
 err_out_rbd_dev:
+	up_write(&rbd_dev->header_rwsem);
 	rbd_dev_destroy(rbd_dev);
 err_out_client:
 	rbd_put_client(rbdc);
···
 		return ret;
 
 	rbd_dev_header_unwatch_sync(rbd_dev);
-	/*
-	 * flush remaining watch callbacks - these must be complete
-	 * before the osd_client is shutdown
-	 */
-	dout("%s: flushing notifies", __func__);
-	ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 
 	/*
 	 * Don't free anything from rbd_dev->disk until after all
drivers/clocksource/tango_xtal.c | +1 -1
···
 
 	ret = clocksource_mmio_init(xtal_in_cnt, "tango-xtal", xtal_freq, 350,
 				    32, clocksource_mmio_readl_up);
-	if (!ret) {
+	if (ret) {
 		pr_err("%s: registration failed\n", np->full_name);
 		return;
 	}
···
 	{ NULL_GUID, "", NULL },
 };
 
+/*
+ * Check if @var_name matches the pattern given in @match_name.
+ *
+ * @var_name: an array of @len non-NUL characters.
+ * @match_name: a NUL-terminated pattern string, optionally ending in "*". A
+ *              final "*" character matches any trailing characters @var_name,
+ *              including the case when there are none left in @var_name.
+ * @match: on output, the number of non-wildcard characters in @match_name
+ *         that @var_name matches, regardless of the return value.
+ * @return: whether @var_name fully matches @match_name.
+ */
 static bool
 variable_matches(const char *var_name, size_t len, const char *match_name,
 		 int *match)
 {
 	for (*match = 0; ; (*match)++) {
 		char c = match_name[*match];
-		char u = var_name[*match];
 
-		/* Wildcard in the matching name means we've matched */
-		if (c == '*')
+		switch (c) {
+		case '*':
+			/* Wildcard in @match_name means we've matched. */
 			return true;
 
-		/* Case sensitive match */
-		if (!c && *match == len)
-			return true;
+		case '\0':
+			/* @match_name has ended.  Has @var_name too? */
+			return (*match == len);
 
-		if (c != u)
+		default:
+			/*
+			 * We've reached a non-wildcard char in @match_name.
+			 * Continue only if there's an identical character in
+			 * @var_name.
+			 */
+			if (*match < len && c == var_name[*match])
+				continue;
 			return false;
-
-		if (!c)
-			return true;
+		}
 	}
-	return true;
 }
 
 bool
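The rewritten matcher is self-contained enough to exercise on its own. The two behavioural fixes are that it never reads var_name past @len (the old code indexed it unconditionally on every iteration) and that a pattern without a trailing '*' only matches when both strings end together. A standalone copy for illustration, with a small hypothetical wrapper added for convenience:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Standalone copy of the rewritten matcher.  @var_name is @len chars and
 * need not be NUL-terminated; @match_name may end in '*' to match any
 * remaining tail of @var_name.  @match counts the compared chars. */
static bool variable_matches(const char *var_name, size_t len,
			     const char *match_name, int *match)
{
	for (*match = 0; ; (*match)++) {
		char c = match_name[*match];

		switch (c) {
		case '*':
			/* Wildcard in the pattern: everything matched. */
			return true;
		case '\0':
			/* Pattern ended; match only if the name did too. */
			return (size_t)*match == len;
		default:
			/* Continue only while within @len and chars agree. */
			if ((size_t)*match < len && c == var_name[*match])
				continue;
			return false;
		}
	}
}

/* Hypothetical convenience wrapper, not part of the kernel code. */
static bool matches(const char *name, size_t len, const char *pat)
{
	int m;

	return variable_matches(name, len, pat, &m);
}
```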
···
 	struct acp_pm_domain *apd;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	/* return early if no ACP */
+	if (!adev->acp.acp_genpd)
+		return 0;
+
 	/* SMU block will power on ACP irrespective of ACP runtime status.
 	 * Power off explicitly based on genpd ACP runtime status so that ACP
 	 * hw and ACP-genpd status are in sync.
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c | +7 -4
···
 	return amdgpu_atpx_priv.atpx_detected;
 }
 
-bool amdgpu_has_atpx_dgpu_power_cntl(void) {
-	return amdgpu_atpx_priv.atpx.functions.power_cntl;
-}
-
 /**
  * amdgpu_atpx_call - call an ATPX method
  *
···
  */
 static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
 {
+	/* make sure required functions are enabled */
+	/* dGPU power control is required */
+	if (atpx->functions.power_cntl == false) {
+		printk("ATPX dGPU power cntl not present, forcing\n");
+		atpx->functions.power_cntl = true;
+	}
+
 	if (atpx->functions.px_params) {
 		union acpi_object *info;
 		struct atpx_px_params output;
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | +1 -7
···
 	"LAST",
 };
 
-#if defined(CONFIG_VGA_SWITCHEROO)
-bool amdgpu_has_atpx_dgpu_power_cntl(void);
-#else
-static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; }
-#endif
-
 bool amdgpu_device_is_px(struct drm_device *dev)
 {
 	struct amdgpu_device *adev = dev->dev_private;
···
 
 	if (amdgpu_runtime_pm == 1)
 		runtime = true;
-	if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl())
+	if (amdgpu_device_is_px(ddev))
 		runtime = true;
 	vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime);
 	if (runtime)
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | +1 -1
···
 		fw_info.feature = adev->vce.fb_version;
 		break;
 	case AMDGPU_INFO_FW_UVD:
-		fw_info.ver = 0;
+		fw_info.ver = adev->uvd.fw_version;
 		fw_info.feature = 0;
 		break;
 	case AMDGPU_INFO_FW_GMC:
···
 	u8 sinks[DRM_DP_MAX_SDP_STREAMS];
 	int i;
 
+	port = drm_dp_get_validated_port_ref(mgr, port);
+	if (!port)
+		return -EINVAL;
+
 	port_num = port->port_num;
 	mstb = drm_dp_get_validated_mstb_ref(mgr, port->parent);
 	if (!mstb) {
 		mstb = drm_dp_get_last_connected_port_and_mstb(mgr, port->parent, &port_num);
 
-		if (!mstb)
+		if (!mstb) {
+			drm_dp_put_port(port);
 			return -EINVAL;
+		}
 	}
 
 	txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
···
 	kfree(txmsg);
 fail_put:
 	drm_dp_put_mst_branch_device(mstb);
+	drm_dp_put_port(port);
 	return ret;
 }
 
···
 		req_payload.start_slot = cur_slots;
 		if (mgr->proposed_vcpis[i]) {
 			port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
+			port = drm_dp_get_validated_port_ref(mgr, port);
+			if (!port) {
+				mutex_unlock(&mgr->payload_lock);
+				return -EINVAL;
+			}
 			req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;
 			req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi;
 		} else {
···
 			mgr->payloads[i].payload_state = req_payload.payload_state;
 		}
 		cur_slots += req_payload.num_slots;
+
+		if (port)
+			drm_dp_put_port(port);
 	}
 
 	for (i = 0; i < mgr->max_payloads; i++) {
···
 
 	if (mgr->mst_primary) {
 		int sret;
+		u8 guid[16];
+
 		sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);
 		if (sret != DP_RECEIVER_CAP_SIZE) {
 			DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
···
 			ret = -1;
 			goto out_unlock;
 		}
+
+		/* Some hubs forget their guids after they resume */
+		sret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16);
+		if (sret != 16) {
+			DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
+			ret = -1;
+			goto out_unlock;
+		}
+		drm_dp_check_mstb_guid(mgr->mst_primary, guid);
+
 		ret = 0;
 	} else
 		ret = -1;
+18-13
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
···572572 goto fail;573573 }574574575575+ /*576576+ * Set the GPU linear window to be at the end of the DMA window, where577577+ * the CMA area is likely to reside. This ensures that we are able to578578+ * map the command buffers while having the linear window overlap as579579+ * much RAM as possible, so we can optimize mappings for other buffers.580580+ *581581+ * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads582582+ * to different views of the memory on the individual engines.583583+ */584584+ if (!(gpu->identity.features & chipFeatures_PIPE_3D) ||585585+ (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) {586586+ u32 dma_mask = (u32)dma_get_required_mask(gpu->dev);587587+ if (dma_mask < PHYS_OFFSET + SZ_2G)588588+ gpu->memory_base = PHYS_OFFSET;589589+ else590590+ gpu->memory_base = dma_mask - SZ_2G + 1;591591+ }592592+575593 ret = etnaviv_hw_reset(gpu);576594 if (ret)577595 goto fail;···15841566{15851567 struct device *dev = &pdev->dev;15861568 struct etnaviv_gpu *gpu;15871587- u32 dma_mask;15881569 int err = 0;1589157015901571 gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL);···1592157515931576 gpu->dev = &pdev->dev;15941577 mutex_init(&gpu->lock);15951595-15961596- /*15971597- * Set the GPU linear window to be at the end of the DMA window, where15981598- * the CMA area is likely to reside. This ensures that we are able to15991599- * map the command buffers while having the linear window overlap as16001600- * much RAM as possible, so we can optimize mappings for other buffers.16011601- */16021602- dma_mask = (u32)dma_get_required_mask(dev);16031603- if (dma_mask < PHYS_OFFSET + SZ_2G)16041604- gpu->memory_base = PHYS_OFFSET;16051605- else16061606- gpu->memory_base = dma_mask - SZ_2G + 1;1607157816081579 /* Map registers: */16091580 gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev));
+3-2
drivers/gpu/drm/i915/i915_drv.h
···2634263426352635/* WaRsDisableCoarsePowerGating:skl,bxt */26362636#define NEEDS_WaRsDisableCoarsePowerGating(dev) (IS_BXT_REVID(dev, 0, BXT_REVID_A1) || \26372637- ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && \26382638- IS_SKL_REVID(dev, 0, SKL_REVID_F0)))26372637+ IS_SKL_GT3(dev) || \26382638+ IS_SKL_GT4(dev))26392639+26392640/*26402641 * dp aux and gmbus irq on gen4 seems to be able to generate legacy interrupts26412642 * even when in MSI mode. This results in spurious interrupt warnings if the
+16-11
drivers/gpu/drm/i915/i915_gem_userptr.c
···501501 if (pvec != NULL) {502502 struct mm_struct *mm = obj->userptr.mm->mm;503503504504- down_read(&mm->mmap_sem);505505- while (pinned < npages) {506506- ret = get_user_pages_remote(work->task, mm,507507- obj->userptr.ptr + pinned * PAGE_SIZE,508508- npages - pinned,509509- !obj->userptr.read_only, 0,510510- pvec + pinned, NULL);511511- if (ret < 0)512512- break;504504+ ret = -EFAULT;505505+ if (atomic_inc_not_zero(&mm->mm_users)) {506506+ down_read(&mm->mmap_sem);507507+ while (pinned < npages) {508508+ ret = get_user_pages_remote509509+ (work->task, mm,510510+ obj->userptr.ptr + pinned * PAGE_SIZE,511511+ npages - pinned,512512+ !obj->userptr.read_only, 0,513513+ pvec + pinned, NULL);514514+ if (ret < 0)515515+ break;513516514514- pinned += ret;517517+ pinned += ret;518518+ }519519+ up_read(&mm->mmap_sem);520520+ mmput(mm);515521 }516516- up_read(&mm->mmap_sem);517522 }518523519524 mutex_lock(&dev->struct_mutex);
+11-5
drivers/gpu/drm/i915/intel_lrc.c
···841841 if (unlikely(total_bytes > remain_usable)) {842842 /*843843 * The base request will fit but the reserved space844844- * falls off the end. So only need to to wait for the845845- * reserved size after flushing out the remainder.844844+ * falls off the end. So don't need an immediate wrap845845+ * and only need to effectively wait for the reserved846846+ * size space from the start of ringbuffer.846847 */847848 wait_bytes = remain_actual + ringbuf->reserved_size;848848- need_wrap = true;849849 } else if (total_bytes > ringbuf->space) {850850 /* No wrapping required, just waiting. */851851 wait_bytes = total_bytes;···19131913 struct intel_ringbuffer *ringbuf = request->ringbuf;19141914 int ret;1915191519161916- ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);19161916+ ret = intel_logical_ring_begin(request, 8 + WA_TAIL_DWORDS);19171917 if (ret)19181918 return ret;19191919+19201920+ /* We're using qword write, seqno should be aligned to 8 bytes. */19211921+ BUILD_BUG_ON(I915_GEM_HWS_INDEX & 1);1919192219201923 /* w/a for post sync ops following a GPGPU operation we19211924 * need a prior CS_STALL, which is emitted by the flush19221925 * following the batch.19231926 */19241924- intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(5));19271927+ intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));19251928 intel_logical_ring_emit(ringbuf,19261929 (PIPE_CONTROL_GLOBAL_GTT_IVB |19271930 PIPE_CONTROL_CS_STALL |···19321929 intel_logical_ring_emit(ringbuf, hws_seqno_address(request->ring));19331930 intel_logical_ring_emit(ringbuf, 0);19341931 intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));19321932+ /* We're thrashing one dword of HWS. */19331933+ intel_logical_ring_emit(ringbuf, 0);19351934 intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);19351935+ intel_logical_ring_emit(ringbuf, MI_NOOP);19361936 return intel_logical_ring_advance_and_submit(request);19371937}19381938
···968968969969	 /* WaForceContextSaveRestoreNonCoherent:skl,bxt */970970	 tmp = HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT;971971-	 if (IS_SKL_REVID(dev, SKL_REVID_F0, SKL_REVID_F0) ||971971+	 if (IS_SKL_REVID(dev, SKL_REVID_F0, REVID_FOREVER) ||972972	 IS_BXT_REVID(dev, BXT_REVID_B0, REVID_FOREVER))973973	 tmp |= HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE;974974	 WA_SET_BIT_MASKED(HDC_CHICKEN0, tmp);···10851085	 WA_SET_BIT_MASKED(HIZ_CHICKEN,10861086	 BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);1087108710881088-	 if (IS_SKL_REVID(dev, 0, SKL_REVID_F0)) {10881088+	 /* This is tied to WaForceContextSaveRestoreNonCoherent */10891089+	 if (IS_SKL_REVID(dev, 0, REVID_FOREVER)) {10891090	 /*10901091	 *Use Force Non-Coherent whenever executing a 3D context. This10911092	 * is a workaround for a possible hang in the unlikely event···20912090{20922091	 struct drm_i915_private *dev_priv = to_i915(dev);20932092	 struct drm_i915_gem_object *obj = ringbuf->obj;20932093+	 /* Ring wraparound at offset 0 sometimes hangs. No idea why. */20942094+	 unsigned flags = PIN_OFFSET_BIAS | 4096;20942095	 int ret;2095209620962097	 if (HAS_LLC(dev_priv) && !obj->stolen) {20972097-	 ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, 0);20982098+	 ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, flags);20982099	 if (ret)20992100	 return ret;21002101···21122109	 return -ENOMEM;21132110	 }21142111	 } else {21152115-	 ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, PIN_MAPPABLE);21122112+	 ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,21132113+	 flags | PIN_MAPPABLE);21162114	 if (ret)21172115	 return ret;21182116···24582454	 if (unlikely(total_bytes > remain_usable)) {24592455	 /*24602456	 * The base request will fit but the reserved space24612461-	 * falls off the end. So only need to to wait for the24622462-	 * reserved size after flushing out the remainder.24572457+	 * falls off the end. So don't need an immediate wrap24582458+	 * and only need to effectively wait for the reserved24592459+	 * size space from the start of ringbuffer.24632460	 */24642461	 wait_bytes = remain_actual + ringbuf->reserved_size;24652465-	 need_wrap = true;24662462	 } else if (total_bytes > ringbuf->space) {24672463	 /* No wrapping required, just waiting. */24682464	 wait_bytes = total_bytes;
···1832183218331833 gf100_gr_mmio(gr, gr->func->mmio);1834183418351835+ nvkm_mask(device, TPC_UNIT(0, 0, 0x05c), 0x00000001, 0x00000001);18361836+18351837 memcpy(tpcnr, gr->tpc_nr, sizeof(gr->tpc_nr));18361838 for (i = 0, gpc = -1; i < gr->tpc_total; i++) {18371839 do {
+153-1
drivers/gpu/drm/radeon/evergreen.c
···26082608	 WREG32(VM_CONTEXT1_CNTL, 0);26092609}2610261026112611+static const unsigned ni_dig_offsets[] =26122612+{26132613+	 NI_DIG0_REGISTER_OFFSET,26142614+	 NI_DIG1_REGISTER_OFFSET,26152615+	 NI_DIG2_REGISTER_OFFSET,26162616+	 NI_DIG3_REGISTER_OFFSET,26172617+	 NI_DIG4_REGISTER_OFFSET,26182618+	 NI_DIG5_REGISTER_OFFSET26192619+};26202620+26212621+static const unsigned ni_tx_offsets[] =26222622+{26232623+	 NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1,26242624+	 NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1,26252625+	 NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1,26262626+	 NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1,26272627+	 NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1,26282628+	 NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL126292629+};26302630+26312631+static const unsigned evergreen_dp_offsets[] =26322632+{26332633+	 EVERGREEN_DP0_REGISTER_OFFSET,26342634+	 EVERGREEN_DP1_REGISTER_OFFSET,26352635+	 EVERGREEN_DP2_REGISTER_OFFSET,26362636+	 EVERGREEN_DP3_REGISTER_OFFSET,26372637+	 EVERGREEN_DP4_REGISTER_OFFSET,26382638+	 EVERGREEN_DP5_REGISTER_OFFSET26392639+};26402640+26412641+26422642+/*26432643+ * Assumption is that EVERGREEN_CRTC_MASTER_EN is enabled for the requested crtc26442644+ * We go from crtc to connector and it is not reliable since it26452645+ * should be the opposite direction. If crtc is enabled then26462646+ * find the dig_fe which selects this crtc and ensure that it is enabled.26472647+ * if such dig_fe is found then find dig_be which selects the found dig_fe and26482648+ * ensure that it is enabled and in DP_SST mode.26492649+ * if UNIPHY_PLL_CONTROL1.enable then we should disconnect timing26502650+ * from dp symbol clocks.26512651+ */26522652+static bool evergreen_is_dp_sst_stream_enabled(struct radeon_device *rdev,26532653+	 unsigned crtc_id, unsigned *ret_dig_fe)26542654+{26552655+	 unsigned i;26562656+	 unsigned dig_fe;26572657+	 unsigned dig_be;26582658+	 unsigned dig_en_be;26592659+	 unsigned uniphy_pll;26602660+	 unsigned digs_fe_selected;26612661+	 unsigned dig_be_mode;26622662+	 unsigned dig_fe_mask;26632663+	 bool is_enabled = false;26642664+	 bool found_crtc = false;26652665+26662666+	 /* loop through all running dig_fe to find selected crtc */26672667+	 for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {26682668+	 dig_fe = RREG32(NI_DIG_FE_CNTL + ni_dig_offsets[i]);26692669+	 if (dig_fe & NI_DIG_FE_CNTL_SYMCLK_FE_ON &&26702670+	 crtc_id == NI_DIG_FE_CNTL_SOURCE_SELECT(dig_fe)) {26712671+	 /* found running pipe */26722672+	 found_crtc = true;26732673+	 dig_fe_mask = 1 << i;26742674+	 dig_fe = i;26752675+	 break;26762676+	 }26772677+	 }26782678+26792679+	 if (found_crtc) {26802680+	 /* loop through all running dig_be to find selected dig_fe */26812681+	 for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {26822682+	 dig_be = RREG32(NI_DIG_BE_CNTL + ni_dig_offsets[i]);26832683+	 /* if dig_fe_selected by dig_be? */26842684+	 digs_fe_selected = NI_DIG_BE_CNTL_FE_SOURCE_SELECT(dig_be);26852685+	 dig_be_mode = NI_DIG_FE_CNTL_MODE(dig_be);26862686+	 if (dig_fe_mask & digs_fe_selected &&26872687+	 /* if dig_be in sst mode? */26882688+	 dig_be_mode == NI_DIG_BE_DPSST) {26892689+	 dig_en_be = RREG32(NI_DIG_BE_EN_CNTL +26902690+	 ni_dig_offsets[i]);26912691+	 uniphy_pll = RREG32(NI_DCIO_UNIPHY0_PLL_CONTROL1 +26922692+	 ni_tx_offsets[i]);26932693+	 /* dig_be enabled and tx is running */26942694+	 if (dig_en_be & NI_DIG_BE_EN_CNTL_ENABLE &&26952695+	 dig_en_be & NI_DIG_BE_EN_CNTL_SYMBCLK_ON &&26962696+	 uniphy_pll & NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE) {26972697+	 is_enabled = true;26982698+	 *ret_dig_fe = dig_fe;26992699+	 break;27002700+	 }27012701+	 }27022702+	 }27032703+	 }27042704+27052705+	 return is_enabled;27062706+}27072707+27082708+/*27092709+ * Blank dig when in dp sst mode27102710+ * Dig ignores crtc timing27112711+ */27122712+static void evergreen_blank_dp_output(struct radeon_device *rdev,27132713+	 unsigned dig_fe)27142714+{27152715+	 unsigned stream_ctrl;27162716+	 unsigned fifo_ctrl;27172717+	 unsigned counter = 0;27182718+27192719+	 if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) {27202720+	 DRM_ERROR("invalid dig_fe %d\n", dig_fe);27212721+	 return;27222722+	 }27232723+27242724+	 stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27252725+	 evergreen_dp_offsets[dig_fe]);27262726+	 if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) {27272727+	 DRM_ERROR("dig %d, should be enabled\n", dig_fe);27282728+	 return;27292729+	 }27302730+27312731+	 stream_ctrl &= ~EVERGREEN_DP_VID_STREAM_CNTL_ENABLE;27322732+	 WREG32(EVERGREEN_DP_VID_STREAM_CNTL +27332733+	 evergreen_dp_offsets[dig_fe], stream_ctrl);27342734+27352735+	 stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27362736+	 evergreen_dp_offsets[dig_fe]);27372737+	 while (counter < 32 && stream_ctrl & EVERGREEN_DP_VID_STREAM_STATUS) {27382738+	 msleep(1);27392739+	 counter++;27402740+	 stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +27412741+	 evergreen_dp_offsets[dig_fe]);27422742+	 }27432743+	 if (counter >= 32)27442744+	 DRM_ERROR("counter exceeds %d\n", counter);27452745+27462746+	 fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]);27472747+	 fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET;27482748+	 WREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe], fifo_ctrl);27492749+27502750+}27512751+26112752void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)26122753{26132754	 u32 crtc_enabled, tmp, frame_count, blackout;26142755	 int i, j;27562756+	 unsigned dig_fe;2615275726162758	 if (!ASIC_IS_NODCE(rdev)) {26172759	 save->vga_render_control = RREG32(VGA_RENDER_CONTROL);···27932651	 break;27942652	 udelay(1);27952653	 }27962796-26542654+	 /* we should disable dig if it drives dp sst */26552655+	 /* but we are in radeon_device_init and the topology is unknown */26562656+	 /* and it is available after radeon_modeset_init */26572657+	 /* the following method radeon_atom_encoder_dpms_dig */26582658+	 /* does the job if we initialize it properly */26592659+	 /* for now we do it manually */26612661+	 if (ASIC_IS_DCE5(rdev) &&26622662+	 evergreen_is_dp_sst_stream_enabled(rdev, i, &dig_fe))26632663+	 evergreen_blank_dp_output(rdev, dig_fe);26642664+	 /* we could remove 6 lines below */27972665	 /* XXX this is a hack to avoid strange behavior with EFI on certain systems */27982666	 WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);27992667	 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
···6262 return radeon_atpx_priv.atpx_detected;6363}64646565-bool radeon_has_atpx_dgpu_power_cntl(void) {6666- return radeon_atpx_priv.atpx.functions.power_cntl;6767-}6868-6965/**7066 * radeon_atpx_call - call an ATPX method7167 *···141145 */142146static int radeon_atpx_validate(struct radeon_atpx *atpx)143147{148148+ /* make sure required functions are enabled */149149+ /* dGPU power control is required */150150+ if (atpx->functions.power_cntl == false) {151151+ printk("ATPX dGPU power cntl not present, forcing\n");152152+ atpx->functions.power_cntl = true;153153+ }154154+144155 if (atpx->functions.px_params) {145156 union acpi_object *info;146157 struct atpx_px_params output;
+6-1
drivers/gpu/drm/radeon/radeon_connectors.c
···20022002 rdev->mode_info.dither_property,20032003 RADEON_FMT_DITHER_DISABLE);2004200420052005- if (radeon_audio != 0)20052005+ if (radeon_audio != 0) {20062006 drm_object_attach_property(&radeon_connector->base.base,20072007 rdev->mode_info.audio_property,20082008 RADEON_AUDIO_AUTO);20092009+ radeon_connector->audio = RADEON_AUDIO_AUTO;20102010+ }20092011 if (ASIC_IS_DCE5(rdev))20102012 drm_object_attach_property(&radeon_connector->base.base,20112013 rdev->mode_info.output_csc_property,···21322130 drm_object_attach_property(&radeon_connector->base.base,21332131 rdev->mode_info.audio_property,21342132 RADEON_AUDIO_AUTO);21332133+ radeon_connector->audio = RADEON_AUDIO_AUTO;21352134 }21362135 if (connector_type == DRM_MODE_CONNECTOR_DVII) {21372136 radeon_connector->dac_load_detect = true;···21882185 drm_object_attach_property(&radeon_connector->base.base,21892186 rdev->mode_info.audio_property,21902187 RADEON_AUDIO_AUTO);21882188+ radeon_connector->audio = RADEON_AUDIO_AUTO;21912189 }21922190 if (ASIC_IS_DCE5(rdev))21932191 drm_object_attach_property(&radeon_connector->base.base,···22412237 drm_object_attach_property(&radeon_connector->base.base,22422238 rdev->mode_info.audio_property,22432239 RADEON_AUDIO_AUTO);22402240+ radeon_connector->audio = RADEON_AUDIO_AUTO;22442241 }22452242 if (ASIC_IS_DCE5(rdev))22462243 drm_object_attach_property(&radeon_connector->base.base,
+4-10
drivers/gpu/drm/radeon/radeon_device.c
···103103 "LAST",104104};105105106106-#if defined(CONFIG_VGA_SWITCHEROO)107107-bool radeon_has_atpx_dgpu_power_cntl(void);108108-#else109109-static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; }110110-#endif111111-112106#define RADEON_PX_QUIRK_DISABLE_PX (1 << 0)113107#define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1)114108···12991305 }13001306 rdev->fence_context = fence_context_alloc(RADEON_NUM_RINGS);1301130713021302- DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X).\n",13031303- radeon_family_name[rdev->family], pdev->vendor, pdev->device,13041304- pdev->subsystem_vendor, pdev->subsystem_device);13081308+ DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X 0x%02X).\n",13091309+ radeon_family_name[rdev->family], pdev->vendor, pdev->device,13101310+ pdev->subsystem_vendor, pdev->subsystem_device, pdev->revision);1305131113061312 /* mutex initialization are all done here so we13071313 * can recall function without having locking issues */···14331439 * ignore it */14341440 vga_client_register(rdev->pdev, rdev, NULL, radeon_vga_set_decode);1435144114361436- if ((rdev->flags & RADEON_IS_PX) && radeon_has_atpx_dgpu_power_cntl())14421442+ if (rdev->flags & RADEON_IS_PX)14371443 runtime = true;14381444 vga_switcheroo_register_client(rdev->pdev, &radeon_switcheroo_ops, runtime);14391445 if (runtime)
···975975976976config I2C_XLP9XX977977 tristate "XLP9XX I2C support"978978- depends on CPU_XLP || COMPILE_TEST978978+ depends on CPU_XLP || ARCH_VULCAN || COMPILE_TEST979979 help980980 This driver enables support for the on-chip I2C interface of981981- the Broadcom XLP9xx/XLP5xx MIPS processors.981981+ the Broadcom XLP9xx/XLP5xx MIPS and Vulcan ARM64 processors.982982983983 This driver can also be built as a module. If so, the module will984984 be called i2c-xlp9xx.
···858858 goto err_free_buf;859859 }860860861861+ /* Sanity check that a device has an endpoint */862862+ if (usbinterface->altsetting[0].desc.bNumEndpoints < 1) {863863+ dev_err(&usbinterface->dev,864864+ "Invalid number of endpoints\n");865865+ error = -EINVAL;866866+ goto err_free_urb;867867+ }868868+861869 /*862870 * The endpoint is always altsetting 0, we know this since we know863871 * this device only has one interrupt endpoint···887879 * HID report descriptor888880 */889881 if (usb_get_extra_descriptor(usbinterface->cur_altsetting,890890- HID_DEVICE_TYPE, &hid_desc) != 0){882882+ HID_DEVICE_TYPE, &hid_desc) != 0) {891883 dev_err(&usbinterface->dev,892884 "Can't retrieve exta USB descriptor to get hid report descriptor length\n");893885 error = -EIO;
+76-11
drivers/iommu/amd_iommu.c
···9292	 struct list_head dev_data_list; /* For global dev_data_list */9393	 struct protection_domain *domain; /* Domain the device is bound to */9494	 u16 devid; /* PCI Device ID */9595+	 u16 alias; /* Alias Device ID */9596	 bool iommu_v2; /* Device can make use of IOMMUv2 */9697	 bool passthrough; /* Device is identity mapped */9798	 struct {···167166	 return container_of(dom, struct protection_domain, domain);168167}169168169169+static inline u16 get_device_id(struct device *dev)170170+{171171+	 struct pci_dev *pdev = to_pci_dev(dev);172172+173173+	 return PCI_DEVID(pdev->bus->number, pdev->devfn);174174+}175175+170176static struct iommu_dev_data *alloc_dev_data(u16 devid)171177{172178	 struct iommu_dev_data *dev_data;···211203	 return dev_data;212204}213205206206+static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)207207+{208208+	 *(u16 *)data = alias;209209+	 return 0;210210+}211211+212212+static u16 get_alias(struct device *dev)213213+{214214+	 struct pci_dev *pdev = to_pci_dev(dev);215215+	 u16 devid, ivrs_alias, pci_alias;216216+217217+	 devid = get_device_id(dev);218218+	 ivrs_alias = amd_iommu_alias_table[devid];219219+	 pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);220220+221221+	 if (ivrs_alias == pci_alias)222222+	 return ivrs_alias;223223+224224+	 /*225225+	 * DMA alias showdown226226+	 *227227+	 * The IVRS is fairly reliable in telling us about aliases, but it228228+	 * can't know about every screwy device. If we don't have an IVRS229229+	 * reported alias, use the PCI reported alias. In that case we may230230+	 * still need to initialize the rlookup and dev_table entries if the231231+	 * alias is to a non-existent device.232232+	 */233233+	 if (ivrs_alias == devid) {234234+	 if (!amd_iommu_rlookup_table[pci_alias]) {235235+	 amd_iommu_rlookup_table[pci_alias] =236236+	 amd_iommu_rlookup_table[devid];237237+	 memcpy(amd_iommu_dev_table[pci_alias].data,238238+	 amd_iommu_dev_table[devid].data,239239+	 sizeof(amd_iommu_dev_table[pci_alias].data));240240+	 }241241+242242+	 return pci_alias;243243+	 }244244+245245+	 pr_info("AMD-Vi: Using IVRS reported alias %02x:%02x.%d "246246+	 "for device %s[%04x:%04x], kernel reported alias "247247+	 "%02x:%02x.%d\n", PCI_BUS_NUM(ivrs_alias), PCI_SLOT(ivrs_alias),248248+	 PCI_FUNC(ivrs_alias), dev_name(dev), pdev->vendor, pdev->device,249249+	 PCI_BUS_NUM(pci_alias), PCI_SLOT(pci_alias),250250+	 PCI_FUNC(pci_alias));251251+252252+	 /*253253+	 * If we don't have a PCI DMA alias and the IVRS alias is on the same254254+	 * bus, then the IVRS table may know about a quirk that we don't.255255+	 */256256+	 if (pci_alias == devid &&257257+	 PCI_BUS_NUM(ivrs_alias) == pdev->bus->number) {258258+	 pdev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN;259259+	 pdev->dma_alias_devfn = ivrs_alias & 0xff;260260+	 pr_info("AMD-Vi: Added PCI DMA alias %02x.%d for %s\n",261261+	 PCI_SLOT(ivrs_alias), PCI_FUNC(ivrs_alias),262262+	 dev_name(dev));263263+	 }264264+265265+	 return ivrs_alias;266266+}267267+214268static struct iommu_dev_data *find_dev_data(u16 devid)215269{216270	 struct iommu_dev_data *dev_data;···283213	 dev_data = alloc_dev_data(devid);284214285215	 return dev_data;286286-}287287-288288-static inline u16 get_device_id(struct device *dev)289289-{290290-	 struct pci_dev *pdev = to_pci_dev(dev);291291-292292-	 return PCI_DEVID(pdev->bus->number, pdev->devfn);293216}294217295218static struct iommu_dev_data *get_dev_data(struct device *dev)···412349	 if (!dev_data)413350	 return -ENOMEM;414351352352+	 dev_data->alias = get_alias(dev);353353+415354	 if (pci_iommuv2_capable(pdev)) {416355	 struct amd_iommu *iommu;417356···434369	 u16 devid, alias;435370436371	 devid = get_device_id(dev);437437-	 alias = amd_iommu_alias_table[devid];372372+	 alias = get_alias(dev);438373439374	 memset(&amd_iommu_dev_table[devid], 0, sizeof(struct dev_table_entry));440375	 memset(&amd_iommu_dev_table[alias], 0, sizeof(struct dev_table_entry));···11261061	 int ret;1127106211281063	 iommu = amd_iommu_rlookup_table[dev_data->devid];11291129-	 alias = amd_iommu_alias_table[dev_data->devid];10641064+	 alias = dev_data->alias;1130106511311066	 ret = iommu_flush_dte(iommu, dev_data->devid);11321067	 if (!ret && alias != dev_data->devid)···21042039	 bool ats;2105204021062041	 iommu = amd_iommu_rlookup_table[dev_data->devid];21072107-	 alias = amd_iommu_alias_table[dev_data->devid];20422042+	 alias = dev_data->alias;21082043	 ats = dev_data->ats.enabled;2109204421102045	 /* Update data structures */···21382073	 return;2139207421402075	 iommu = amd_iommu_rlookup_table[dev_data->devid];21412141-	 alias = amd_iommu_alias_table[dev_data->devid];20762076+	 alias = dev_data->alias;2142207721432078	 /* decrease reference counters */21442079	 dev_data->domain->dev_iommu[iommu->index] -= 1;
+16-8
drivers/iommu/arm-smmu.c
···826826	 if (smmu_domain->smmu)827827	 goto out_unlock;828828829829+	 /* We're bypassing these SIDs, so don't allocate an actual context */830830+	 if (domain->type == IOMMU_DOMAIN_DMA) {831831+	 smmu_domain->smmu = smmu;832832+	 goto out_unlock;833833+	 }834834+829835	 /*830836	 * Mapping the requested stage onto what we support is surprisingly831837	 * complicated, mainly because the spec allows S1+S2 SMMUs without···954948	 void __iomem *cb_base;955949	 int irq;956950957957-	 if (!smmu)951951+	 if (!smmu || domain->type == IOMMU_DOMAIN_DMA)958952	 return;959953960954	 /*···10951089	 struct arm_smmu_device *smmu = smmu_domain->smmu;10961090	 void __iomem *gr0_base = ARM_SMMU_GR0(smmu);1097109110921092+	 /*10931093+	 * FIXME: This won't be needed once we have IOMMU-backed DMA ops10941094+	 * for all devices behind the SMMU. Note that we need to take10951095+	 * care configuring SMRs for devices both a platform_device and10961096+	 * a PCI device (i.e. a PCI host controller)10971097+	 */10981098+	 if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)10991099+	 return 0;11001100+10981101	 /* Devices in an IOMMU group may already be configured */10991102	 ret = arm_smmu_master_configure_smrs(smmu, cfg);11001103	 if (ret)11011104	 return ret == -EEXIST ? 0 : ret;11021102-11031103-	 /*11041104-	 * FIXME: This won't be needed once we have IOMMU-backed DMA ops11051105-	 * for all devices behind the SMMU.11061106-	 */11071107-	 if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)11081108-	 return 0;1109110511101106	 for (i = 0; i < cfg->num_streamids; ++i) {11111107	 u32 idx, s2cr;
+2-2
drivers/irqchip/irq-mips-gic.c
···467467 gic_map_to_vpe(irq, mips_cm_vp_id(cpumask_first(&tmp)));468468469469 /* Update the pcpu_masks */470470- for (i = 0; i < gic_vpes; i++)470470+ for (i = 0; i < min(gic_vpes, NR_CPUS); i++)471471 clear_bit(irq, pcpu_masks[i].pcpu_mask);472472 set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);473473···707707 spin_lock_irqsave(&gic_lock, flags);708708 gic_map_to_pin(intr, gic_cpu_pin);709709 gic_map_to_vpe(intr, vpe);710710- for (i = 0; i < gic_vpes; i++)710710+ for (i = 0; i < min(gic_vpes, NR_CPUS); i++)711711 clear_bit(intr, pcpu_masks[i].pcpu_mask);712712 set_bit(intr, pcpu_masks[vpe].pcpu_mask);713713 spin_unlock_irqrestore(&gic_lock, flags);
+3
drivers/isdn/mISDN/socket.c
···715715 if (!maddr || maddr->family != AF_ISDN)716716 return -EINVAL;717717718718+ if (addr_len < sizeof(struct sockaddr_mISDN))719719+ return -EINVAL;720720+718721 lock_sock(sk);719722720723 if (_pms(sk)->dev) {
-7
drivers/media/usb/usbvision/usbvision-video.c
···14521452 printk(KERN_INFO "%s: %s found\n", __func__,14531453 usbvision_device_data[model].model_string);1454145414551455- /*14561456- * this is a security check.14571457- * an exploit using an incorrect bInterfaceNumber is known14581458- */14591459- if (ifnum >= USB_MAXINTERFACES || !dev->actconfig->interface[ifnum])14601460- return -ENODEV;14611461-14621455 if (usbvision_device_data[model].interface >= 0)14631456 interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0];14641457 else if (ifnum < dev->actconfig->desc.bNumInterfaces)
+15-5
drivers/media/v4l2-core/videobuf2-core.c
···16451645 * Will sleep if required for nonblocking == false.16461646 */16471647static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,16481648- int nonblocking)16481648+ void *pb, int nonblocking)16491649{16501650 unsigned long flags;16511651 int ret;···16661666 /*16671667 * Only remove the buffer from done_list if v4l2_buffer can handle all16681668 * the planes.16691669- * Verifying planes is NOT necessary since it already has been checked16701670- * before the buffer is queued/prepared. So it can never fail.16711669 */16721672- list_del(&(*vb)->done_entry);16701670+ ret = call_bufop(q, verify_planes_array, *vb, pb);16711671+ if (!ret)16721672+ list_del(&(*vb)->done_entry);16731673 spin_unlock_irqrestore(&q->done_lock, flags);1674167416751675 return ret;···17481748 struct vb2_buffer *vb = NULL;17491749 int ret;1750175017511751- ret = __vb2_get_done_vb(q, &vb, nonblocking);17511751+ ret = __vb2_get_done_vb(q, &vb, pb, nonblocking);17521752 if (ret < 0)17531753 return ret;17541754···22952295 * error flag is set.22962296 */22972297 if (!vb2_is_streaming(q) || q->error)22982298+ return POLLERR;22992299+23002300+ /*23012301+ * If this quirk is set and QBUF hasn't been called yet then23022302+ * return POLLERR as well. This only affects capture queues, output23032303+ * queues will always initialize waiting_for_buffers to false.23042304+ * This quirk is set by V4L2 for backwards compatibility reasons.23052305+ */23062306+ if (q->quirk_poll_must_check_waiting_for_buffers &&23072307+ q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))22982308 return POLLERR;2299230923002310 /*
+1-1
drivers/media/v4l2-core/videobuf2-memops.c
···4949 vec = frame_vector_create(nr);5050 if (!vec)5151 return ERR_PTR(-ENOMEM);5252- ret = get_vaddr_frames(start, nr, write, 1, vec);5252+ ret = get_vaddr_frames(start & PAGE_MASK, nr, write, true, vec);5353 if (ret < 0)5454 goto out_destroy;5555 /* We accept only complete set of PFNs */
+12-8
drivers/media/v4l2-core/videobuf2-v4l2.c
···7474 return 0;7575}76767777+static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb)7878+{7979+ return __verify_planes_array(vb, pb);8080+}8181+7782/**7883 * __verify_length() - Verify that the bytesused value for each plane fits in7984 * the plane length and that the data offset doesn't exceed the bytesused value.···442437}443438444439static const struct vb2_buf_ops v4l2_buf_ops = {440440+ .verify_planes_array = __verify_planes_array_core,445441 .fill_user_buffer = __fill_v4l2_buffer,446442 .fill_vb2_buffer = __fill_vb2_buffer,447443 .copy_timestamp = __copy_timestamp,···771765 q->is_output = V4L2_TYPE_IS_OUTPUT(q->type);772766 q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK)773767 == V4L2_BUF_FLAG_TIMESTAMP_COPY;768768+ /*769769+ * For compatibility with vb1: if QBUF hasn't been called yet, then770770+ * return POLLERR as well. This only affects capture queues, output771771+ * queues will always initialize waiting_for_buffers to false.772772+ */773773+ q->quirk_poll_must_check_waiting_for_buffers = true;774774775775 return vb2_core_queue_init(q);776776}···829817 else if (req_events & POLLPRI)830818 poll_wait(file, &fh->wait, wait);831819 }832832-833833- /*834834- * For compatibility with vb1: if QBUF hasn't been called yet, then835835- * return POLLERR as well. This only affects capture queues, output836836- * queues will always initialize waiting_for_buffers to false.837837- */838838- if (q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))839839- return POLLERR;840820841821 return res | vb2_core_poll(q, file, wait);842822}
+7
drivers/misc/cxl/context.c
···223223 cxl_ops->link_ok(ctx->afu->adapter, ctx->afu));224224 flush_work(&ctx->fault_work); /* Only needed for dedicated process */225225226226+ /*227227+ * Wait until no further interrupts are presented by the PSL228228+ * for this context.229229+ */230230+ if (cxl_ops->irq_wait)231231+ cxl_ops->irq_wait(ctx);232232+226233 /* release the reference to the group leader and mm handling pid */227234 put_pid(ctx->pid);228235 put_pid(ctx->glpid);
···1414#include <linux/mutex.h>1515#include <linux/mm.h>1616#include <linux/uaccess.h>1717+#include <linux/delay.h>1718#include <asm/synch.h>1819#include <misc/cxl-base.h>1920···798797 return fail_psl_irq(afu, &irq_info);799798}800799800800+void native_irq_wait(struct cxl_context *ctx)801801+{802802+ u64 dsisr;803803+ int timeout = 1000;804804+ int ph;805805+806806+ /*807807+ * Wait until no further interrupts are presented by the PSL808808+ * for this context.809809+ */810810+ while (timeout--) {811811+ ph = cxl_p2n_read(ctx->afu, CXL_PSL_PEHandle_An) & 0xffff;812812+ if (ph != ctx->pe)813813+ return;814814+ dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);815815+ if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)816816+ return;817817+ /*818818+ * We are waiting for the workqueue to process our819819+ * irq, so need to let that run here.820820+ */821821+ msleep(1);822822+ }823823+824824+ dev_warn(&ctx->afu->dev, "WARNING: waiting on DSI for PE %i"825825+ " DSISR %016llx!\n", ph, dsisr);826826+ return;827827+}828828+801829static irqreturn_t native_slice_irq_err(int irq, void *data)802830{803831 struct cxl_afu *afu = data;···11061076 .handle_psl_slice_error = native_handle_psl_slice_error,11071077 .psl_interrupt = NULL,11081078 .ack_irq = native_ack_irq,10791079+ .irq_wait = native_irq_wait,11091080 .attach_process = native_attach_process,11101081 .detach_process = native_detach_process,11111082 .support_attributes = native_support_attributes,
+1
drivers/mmc/host/Kconfig
···9797config MMC_SDHCI_ACPI9898 tristate "SDHCI support for ACPI enumerated SDHCI controllers"9999 depends on MMC_SDHCI && ACPI100100+ select IOSF_MBI if X86100101 help101102 This selects support for ACPI enumerated SDHCI controllers,102103 identified by ACPI Compatibility ID PNP0D40 or specific
···11291129 MMC_CAP_1_8V_DDR |11301130 MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ;1131113111321132+ /* TODO MMC DDR is not working on A80 */11331133+ if (of_device_is_compatible(pdev->dev.of_node,11341134+ "allwinner,sun9i-a80-mmc"))11351135+ mmc->caps &= ~MMC_CAP_1_8V_DDR;11361136+11321137 ret = mmc_of_parse(mmc);11331138 if (ret)11341139 goto error_free_dma;
+3-3
drivers/net/Kconfig
···6262 this device is consigned into oblivion) with a configurable IP6363 address. It is most commonly used in order to make your currently6464 inactive SLIP address seem like a real address for local programs.6565- If you use SLIP or PPP, you might want to say Y here. Since this6666- thing often comes in handy, the default is Y. It won't enlarge your6767- kernel either. What a deal. Read about it in the Network6565+ If you use SLIP or PPP, you might want to say Y here. It won't6666+ enlarge your kernel. What a deal. Read about it in the Network6867 Administrator's Guide, available from6968 <http://www.tldp.org/docs.html#guide>.7069···194195195196config MACSEC196197 tristate "IEEE 802.1AE MAC-level encryption (MACsec)"198198+ select CRYPTO197199 select CRYPTO_AES198200 select CRYPTO_GCM199201 ---help---
+5-29
drivers/net/dsa/mv88e6xxx.c
···21812181 struct net_device *bridge)21822182{21832183 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);21842184- u16 fid;21852184 int i, err;2186218521872186 mutex_lock(&ps->smi_mutex);21882188-21892189- /* Get or create the bridge FID and assign it to the port */21902190- for (i = 0; i < ps->num_ports; ++i)21912191- if (ps->ports[i].bridge_dev == bridge)21922192- break;21932193-21942194- if (i < ps->num_ports)21952195- err = _mv88e6xxx_port_fid_get(ds, i, &fid);21962196- else21972197- err = _mv88e6xxx_fid_new(ds, &fid);21982198- if (err)21992199- goto unlock;22002200-22012201- err = _mv88e6xxx_port_fid_set(ds, port, fid);22022202- if (err)22032203- goto unlock;2204218722052188 /* Assign the bridge and remap each port's VLANTable */22062189 ps->ports[port].bridge_dev = bridge;···21962213 }21972214 }2198221521992199-unlock:22002216 mutex_unlock(&ps->smi_mutex);2201221722022218 return err;···22052223{22062224 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);22072225 struct net_device *bridge = ps->ports[port].bridge_dev;22082208- u16 fid;22092226 int i;2210222722112228 mutex_lock(&ps->smi_mutex);22122212-22132213- /* Give the port a fresh Filtering Information Database */22142214- if (_mv88e6xxx_fid_new(ds, &fid) ||22152215- _mv88e6xxx_port_fid_set(ds, port, fid))22162216- netdev_warn(ds->ports[port], "failed to assign a new FID\n");2217222922182230 /* Unassign the bridge and remap each port's VLANTable */22192231 ps->ports[port].bridge_dev = NULL;···24522476 * the other bits clear.24532477 */24542478 reg = 1 << port;24552455- /* Disable learning for DSA and CPU ports */24562456- if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port))24572457- reg = PORT_ASSOC_VECTOR_LOCKED_PORT;24792479+ /* Disable learning for CPU port */24802480+ if (dsa_is_cpu_port(ds, port))24812481+ reg = 0;2458248224592483 ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_ASSOC_VECTOR, reg);24602484 if (ret)···25342558 if (ret)25352559 goto abort;2536256025372537- /* Port based VLAN map: give each port its own address25612561+ /* Port based VLAN map: give each port the same default address25382562 * database, and allow bidirectional communication between the25392563 * CPU and DSA port(s), and the other ports.25402564 */25412541- ret = _mv88e6xxx_port_fid_set(ds, port, port + 1);25652565+ ret = _mv88e6xxx_port_fid_set(ds, port, 0);25422566 if (ret)25432567 goto abort;25442568
+1-1
drivers/net/ethernet/atheros/atlx/atl2.c
···1412141214131413 err = -EIO;1414141414151415- netdev->hw_features = NETIF_F_SG | NETIF_F_HW_VLAN_CTAG_RX;14151415+ netdev->hw_features = NETIF_F_HW_VLAN_CTAG_RX;14161416 netdev->features |= (NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX);1417141714181418 /* Init PHY as early as possible due to power saving issue */
+5
drivers/net/ethernet/broadcom/bgmac.c
···15721572 dev_warn(&core->dev, "Using random MAC: %pM\n", mac);15731573 }1574157415751575+ /* This (reset &) enable is not preset in specs or reference driver but15761576+ * Broadcom does it in arch PCI code when enabling fake PCI device.15771577+ */15781578+ bcma_core_enable(core, 0);15791579+15751580 /* Allocation and references */15761581 net_dev = alloc_etherdev(sizeof(*bgmac));15771582 if (!net_dev)
+3-3
drivers/net/ethernet/broadcom/bgmac.h
···199199#define BGMAC_CMDCFG_TAI 0x00000200200200#define BGMAC_CMDCFG_HD 0x00000400 /* Set if in half duplex mode */201201#define BGMAC_CMDCFG_HD_SHIFT 10202202-#define BGMAC_CMDCFG_SR_REV0 0x00000800 /* Set to reset mode, for other revs */203203-#define BGMAC_CMDCFG_SR_REV4 0x00002000 /* Set to reset mode, only for core rev 4 */204204-#define BGMAC_CMDCFG_SR(rev) ((rev == 4) ? BGMAC_CMDCFG_SR_REV4 : BGMAC_CMDCFG_SR_REV0)202202+#define BGMAC_CMDCFG_SR_REV0 0x00000800 /* Set to reset mode, for core rev 0-3 */203203+#define BGMAC_CMDCFG_SR_REV4 0x00002000 /* Set to reset mode, for core rev >= 4 */204204+#define BGMAC_CMDCFG_SR(rev) ((rev >= 4) ? BGMAC_CMDCFG_SR_REV4 : BGMAC_CMDCFG_SR_REV0)205205#define BGMAC_CMDCFG_ML 0x00008000 /* Set to activate mac loopback mode */206206#define BGMAC_CMDCFG_AE 0x00400000207207#define BGMAC_CMDCFG_CFE 0x00800000
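The revised `BGMAC_CMDCFG_SR(rev)` selector can be exercised in isolation. The sketch below copies the two constants from the hunk above; the `bgmac_sr_bit()` wrapper is a hypothetical helper for demonstration only, not part of the driver. It shows why the comparison had to change: the old `rev == 4` test sent core revisions 5 and above to the rev 0-3 reset bit.

```c
#include <assert.h>
#include <stdint.h>

#define BGMAC_CMDCFG_SR_REV0	0x00000800	/* Set to reset mode, for core rev 0-3 */
#define BGMAC_CMDCFG_SR_REV4	0x00002000	/* Set to reset mode, for core rev >= 4 */
#define BGMAC_CMDCFG_SR(rev)	((rev >= 4) ? BGMAC_CMDCFG_SR_REV4 : BGMAC_CMDCFG_SR_REV0)

/* Hypothetical helper, for demonstration only: with the old
 * "rev == 4" comparison, bgmac_sr_bit(5) would have returned the
 * rev 0-3 bit. */
static uint32_t bgmac_sr_bit(int rev)
{
	return BGMAC_CMDCFG_SR(rev);
}
```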
+5-1
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···878878 else879879 p = (char *)priv;880880 p += s->stat_offset;881881- data[i] = *(u32 *)p;881881+ if (sizeof(unsigned long) != sizeof(u32) &&882882+ s->stat_sizeof == sizeof(unsigned long))883883+ data[i] = *(unsigned long *)p;884884+ else885885+ data[i] = *(u32 *)p;882886 }883887}884888
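The width check added above can be modeled outside the driver. In this sketch, `read_stat()` is a stand-in for the ethtool stats copy loop (not a driver function): a stat backed by an `unsigned long` is read at its full width on 64-bit builds instead of being truncated through a `u32` pointer.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the bcmgenet fix: when a stat field is an unsigned long
 * (8 bytes on 64-bit), reading it through a u32 pointer would return
 * only half of the value. memcpy stands in for the direct pointer
 * dereference to keep this portable. */
static uint64_t read_stat(const void *p, size_t stat_sizeof)
{
	if (sizeof(unsigned long) != sizeof(uint32_t) &&
	    stat_sizeof == sizeof(unsigned long)) {
		unsigned long v;

		memcpy(&v, p, sizeof(v));
		return v;
	} else {
		uint32_t v;

		memcpy(&v, p, sizeof(v));
		return v;
	}
}
```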
···14511451 unsigned int mmd, unsigned int reg, u16 *valp);14521452int t4_mdio_wr(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,14531453 unsigned int mmd, unsigned int reg, u16 val);14541454+int t4_iq_stop(struct adapter *adap, unsigned int mbox, unsigned int pf,14551455+ unsigned int vf, unsigned int iqtype, unsigned int iqid,14561456+ unsigned int fl0id, unsigned int fl1id);14541457int t4_iq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,14551458 unsigned int vf, unsigned int iqtype, unsigned int iqid,14561459 unsigned int fl0id, unsigned int fl1id);
+17-3
drivers/net/ethernet/chelsio/cxgb4/sge.c
···29812981void t4_free_sge_resources(struct adapter *adap)29822982{29832983 int i;29842984- struct sge_eth_rxq *eq = adap->sge.ethrxq;29852985- struct sge_eth_txq *etq = adap->sge.ethtxq;29842984+ struct sge_eth_rxq *eq;29852985+ struct sge_eth_txq *etq;29862986+29872987+ /* stop all Rx queues in order to start them draining */29882988+ for (i = 0; i < adap->sge.ethqsets; i++) {29892989+ eq = &adap->sge.ethrxq[i];29902990+ if (eq->rspq.desc)29912991+ t4_iq_stop(adap, adap->mbox, adap->pf, 0,29922992+ FW_IQ_TYPE_FL_INT_CAP,29932993+ eq->rspq.cntxt_id,29942994+ eq->fl.size ? eq->fl.cntxt_id : 0xffff,29952995+ 0xffff);29962996+ }2986299729872998 /* clean up Ethernet Tx/Rx queues */29882988- for (i = 0; i < adap->sge.ethqsets; i++, eq++, etq++) {29992999+ for (i = 0; i < adap->sge.ethqsets; i++) {30003000+ eq = &adap->sge.ethrxq[i];29893001 if (eq->rspq.desc)29903002 free_rspq_fl(adap, &eq->rspq,29913003 eq->fl.size ? &eq->fl : NULL);30043004+30053005+ etq = &adap->sge.ethtxq[i];29923006 if (etq->q.desc) {29933007 t4_eth_eq_free(adap, adap->mbox, adap->pf, 0,29943008 etq->q.cntxt_id);
+43
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
···25572557}2558255825592559#define EEPROM_STAT_ADDR 0x7bfc25602560+#define VPD_SIZE 0x80025602561#define VPD_BASE 0x40025612562#define VPD_BASE_OLD 025622563#define VPD_LEN 1024···25942593 vpd = vmalloc(VPD_LEN);25952594 if (!vpd)25962595 return -ENOMEM;25962596+25972597+ /* We have two VPD data structures stored in the adapter VPD area.25982598+ * By default, Linux calculates the size of the VPD area by traversing25992599+ * the first VPD area at offset 0x0, so we need to tell the OS what26002600+ * our real VPD size is.26012601+ */26022602+ ret = pci_set_vpd_size(adapter->pdev, VPD_SIZE);26032603+ if (ret < 0)26042604+ goto out;2597260525982606 /* Card information normally starts at VPD_BASE but early cards had25992607 * it at 0.···69466936 FW_VI_ENABLE_CMD_VIID_V(viid));69476937 c.ien_to_len16 = cpu_to_be32(FW_VI_ENABLE_CMD_LED_F | FW_LEN16(c));69486938 c.blinkdur = cpu_to_be16(nblinks);69396939+ return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);69406940+}69416941+69426942+/**69436943+ * t4_iq_stop - stop an ingress queue and its FLs69446944+ * @adap: the adapter69456945+ * @mbox: mailbox to use for the FW command69466946+ * @pf: the PF owning the queues69476947+ * @vf: the VF owning the queues69486948+ * @iqtype: the ingress queue type (FW_IQ_TYPE_FL_INT_CAP, etc.)69496949+ * @iqid: ingress queue id69506950+ * @fl0id: FL0 queue id or 0xffff if no attached FL069516951+ * @fl1id: FL1 queue id or 0xffff if no attached FL169526952+ *69536953+ * Stops an ingress queue and its associated FLs, if any. This causes69546954+ * any current or future data/messages destined for these queues to be69556955+ * tossed.69566956+ */69576957+int t4_iq_stop(struct adapter *adap, unsigned int mbox, unsigned int pf,69586958+ unsigned int vf, unsigned int iqtype, unsigned int iqid,69596959+ unsigned int fl0id, unsigned int fl1id)69606960+{69616961+ struct fw_iq_cmd c;69626962+69636963+ memset(&c, 0, sizeof(c));69646964+ c.op_to_vfn = cpu_to_be32(FW_CMD_OP_V(FW_IQ_CMD) | FW_CMD_REQUEST_F |69656965+ FW_CMD_EXEC_F | FW_IQ_CMD_PFN_V(pf) |69666966+ FW_IQ_CMD_VFN_V(vf));69676967+ c.alloc_to_len16 = cpu_to_be32(FW_IQ_CMD_IQSTOP_F | FW_LEN16(c));69686968+ c.type_to_iqandstindex = cpu_to_be32(FW_IQ_CMD_TYPE_V(iqtype));69696969+ c.iqid = cpu_to_be16(iqid);69706970+ c.fl0id = cpu_to_be16(fl0id);69716971+ c.fl1id = cpu_to_be16(fl1id);69496972 return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);69506973}69516974
+22-8
drivers/net/ethernet/intel/fm10k/fm10k_pf.c
···12231223 if (err)12241224 return err;1225122512261226- /* verify upper 16 bits are zero */12271227- if (vid >> 16)12281228- return FM10K_ERR_PARAM;12291229-12301226 set = !(vid & FM10K_VLAN_CLEAR);12311227 vid &= ~FM10K_VLAN_CLEAR;1232122812331233- err = fm10k_iov_select_vid(vf_info, (u16)vid);12341234- if (err < 0)12351235- return err;12291229+ /* if the length field has been set, this is a multi-bit12301230+ * update request. For multi-bit requests, simply disallow12311231+ * them when the pf_vid has been set. In this case, the PF12321232+ * should have already cleared the VLAN_TABLE, and if we12331233+ * allowed them, it could allow a rogue VF to receive traffic12341234+ * on a VLAN it was not assigned. In the single-bit case, we12351235+ * need to modify requests for VLAN 0 to use the default PF or12361236+ * SW vid when assigned.12371237+ */1236123812371237- vid = err;12391239+ if (vid >> 16) {12401240+ /* prevent multi-bit requests when PF has12411241+ * administratively set the VLAN for this VF12421242+ */12431243+ if (vf_info->pf_vid)12441244+ return FM10K_ERR_PARAM;12451245+ } else {12461246+ err = fm10k_iov_select_vid(vf_info, (u16)vid);12471247+ if (err < 0)12481248+ return err;12491249+12501250+ vid = err;12511251+ }1238125212391253 /* update VSI info for VF in regards to VLAN table */12401254 err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, set);
+24-25
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···25942594}2595259525962596/**25972597- * __i40e_chk_linearize - Check if there are more than 8 fragments per packet25972597+ * __i40e_chk_linearize - Check if there are more than 8 buffers per packet25982598 * @skb: send buffer25992599 *26002600- * Note: Our HW can't scatter-gather more than 8 fragments to build26012601- * a packet on the wire and so we need to figure out the cases where we26022602- * need to linearize the skb.26002600+ * Note: Our HW can't DMA more than 8 buffers to build a packet on the wire26012601+ * and so we need to figure out the cases where we need to linearize the skb.26022602+ *26032603+ * For TSO we need to count the TSO header and segment payload separately.26042604+ * As such we need to check cases where we have 7 fragments or more as we26052605+ * can potentially require 9 DMA transactions, 1 for the TSO header, 1 for26062606+ * the segment payload in the first descriptor, and another 7 for the26072607+ * fragments.26032608 **/26042609bool __i40e_chk_linearize(struct sk_buff *skb)26052610{26062611 const struct skb_frag_struct *frag, *stale;26072607- int gso_size, nr_frags, sum;26122612+ int nr_frags, sum;2608261326092609- /* check to see if TSO is enabled, if so we may get a repreive */26102610- gso_size = skb_shinfo(skb)->gso_size;26112611- if (unlikely(!gso_size))26122612- return true;26132613-26142614- /* no need to check if number of frags is less than 8 */26142614+ /* no need to check if number of frags is less than 7 */26152615 nr_frags = skb_shinfo(skb)->nr_frags;26162616- if (nr_frags < I40E_MAX_BUFFER_TXD)26162616+ if (nr_frags < (I40E_MAX_BUFFER_TXD - 1))26172617 return false;2618261826192619 /* We need to walk through the list and validate that each group26202620 * of 6 fragments totals at least gso_size. However we don't need26212621- * to perform such validation on the first or last 6 since the first26222622- * 6 cannot inherit any data from a descriptor before them, and the26232623- * last 6 cannot inherit any data from a descriptor after them.26212621+ * to perform such validation on the last 6 since the last 6 cannot26222622+ * inherit any data from a descriptor after them.26242623 */26252625- nr_frags -= I40E_MAX_BUFFER_TXD - 1;26242624+ nr_frags -= I40E_MAX_BUFFER_TXD - 2;26262625 frag = &skb_shinfo(skb)->frags[0];2627262626282627 /* Initialize size to the negative value of gso_size minus 1. We···26302631 * descriptors for a single transmit as the header and previous26312632 * fragment are already consuming 2 descriptors.26322633 */26332633- sum = 1 - gso_size;26342634+ sum = 1 - skb_shinfo(skb)->gso_size;2634263526352635- /* Add size of frags 1 through 5 to create our initial sum */26362636- sum += skb_frag_size(++frag);26372637- sum += skb_frag_size(++frag);26382638- sum += skb_frag_size(++frag);26392639- sum += skb_frag_size(++frag);26402640- sum += skb_frag_size(++frag);26362636+ /* Add size of frags 0 through 4 to create our initial sum */26372637+ sum += skb_frag_size(frag++);26382638+ sum += skb_frag_size(frag++);26392639+ sum += skb_frag_size(frag++);26402640+ sum += skb_frag_size(frag++);26412641+ sum += skb_frag_size(frag++);2641264226422643 /* Walk through fragments adding latest fragment, testing it, and26432644 * then removing stale fragments from the sum.26442645 */26452646 stale = &skb_shinfo(skb)->frags[0];26462647 for (;;) {26472647- sum += skb_frag_size(++frag);26482648+ sum += skb_frag_size(frag++);2648264926492650 /* if sum is negative we failed to make sufficient progress */26502651 if (sum < 0)···26542655 if (!--nr_frags)26552656 break;2656265726572657- sum -= skb_frag_size(++stale);26582658+ sum -= skb_frag_size(stale++);26582659 }2659266026602661 return false;
+7-3
drivers/net/ethernet/intel/i40e/i40e_txrx.h
···413413 **/414414static inline bool i40e_chk_linearize(struct sk_buff *skb, int count)415415{416416- /* we can only support up to 8 data buffers for a single send */417417- if (likely(count <= I40E_MAX_BUFFER_TXD))416416+ /* Both TSO and single send will work if count is less than 8 */417417+ if (likely(count < I40E_MAX_BUFFER_TXD))418418 return false;419419420420- return __i40e_chk_linearize(skb);420420+ if (skb_is_gso(skb))421421+ return __i40e_chk_linearize(skb);422422+423423+ /* we can support up to 8 data buffers for a single send */424424+ return count != I40E_MAX_BUFFER_TXD;421425}422426#endif /* _I40E_TXRX_H_ */
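The descriptor-limit walk changed above can be checked against plain arrays. The sketch below mirrors the patched `__i40e_chk_linearize()` logic with hypothetical stand-ins: `frag_sz[]` for `skb_shinfo(skb)->frags[]` and an explicit `gso_size` argument. It returns true when a sliding window of fragments cannot carry a full `gso_size` segment within the 8-buffer hardware limit.

```c
#include <assert.h>
#include <stdbool.h>

#define I40E_MAX_BUFFER_TXD	8

/* Standalone model of the fragment walk: frag_sz[] holds the size of
 * each skb fragment. Returns true if the skb would need linearizing. */
static bool needs_linearize(const int *frag_sz, int nr_frags, int gso_size)
{
	const int *frag, *stale;
	int sum;

	/* no need to check if number of frags is less than 7 */
	if (nr_frags < (I40E_MAX_BUFFER_TXD - 1))
		return false;

	nr_frags -= I40E_MAX_BUFFER_TXD - 2;
	frag = frag_sz;

	/* start at the negative value of gso_size minus 1: the header and
	 * previous fragment already consume 2 descriptors */
	sum = 1 - gso_size;

	/* add size of frags 0 through 4 to create the initial window */
	sum += *frag++;
	sum += *frag++;
	sum += *frag++;
	sum += *frag++;
	sum += *frag++;

	/* slide the window: add the newest fragment, drop the stalest */
	stale = frag_sz;
	for (;;) {
		sum += *frag++;
		if (sum < 0)
			return true;
		if (!--nr_frags)
			break;
		sum -= *stale++;
	}

	return false;
}
```

With seven 200-byte fragments, a 1000-byte segment fits in the first 6-fragment window, while a 2000-byte segment does not and forces linearization.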
+24-25
drivers/net/ethernet/intel/i40evf/i40e_txrx.c
···17961796}1797179717981798/**17991799- * __i40evf_chk_linearize - Check if there are more than 8 fragments per packet17991799+ * __i40evf_chk_linearize - Check if there are more than 8 buffers per packet18001800 * @skb: send buffer18011801 *18021802- * Note: Our HW can't scatter-gather more than 8 fragments to build18031803- * a packet on the wire and so we need to figure out the cases where we18041804- * need to linearize the skb.18021802+ * Note: Our HW can't DMA more than 8 buffers to build a packet on the wire18031803+ * and so we need to figure out the cases where we need to linearize the skb.18041804+ *18051805+ * For TSO we need to count the TSO header and segment payload separately.18061806+ * As such we need to check cases where we have 7 fragments or more as we18071807+ * can potentially require 9 DMA transactions, 1 for the TSO header, 1 for18081808+ * the segment payload in the first descriptor, and another 7 for the18091809+ * fragments.18051810 **/18061811bool __i40evf_chk_linearize(struct sk_buff *skb)18071812{18081813 const struct skb_frag_struct *frag, *stale;18091809- int gso_size, nr_frags, sum;18141814+ int nr_frags, sum;1810181518111811- /* check to see if TSO is enabled, if so we may get a repreive */18121812- gso_size = skb_shinfo(skb)->gso_size;18131813- if (unlikely(!gso_size))18141814- return true;18151815-18161816- /* no need to check if number of frags is less than 8 */18161816+ /* no need to check if number of frags is less than 7 */18171817 nr_frags = skb_shinfo(skb)->nr_frags;18181818- if (nr_frags < I40E_MAX_BUFFER_TXD)18181818+ if (nr_frags < (I40E_MAX_BUFFER_TXD - 1))18191819 return false;1820182018211821 /* We need to walk through the list and validate that each group18221822 * of 6 fragments totals at least gso_size. However we don't need18231823- * to perform such validation on the first or last 6 since the first18241824- * 6 cannot inherit any data from a descriptor before them, and the18251825- * last 6 cannot inherit any data from a descriptor after them.18231823+ * to perform such validation on the last 6 since the last 6 cannot18241824+ * inherit any data from a descriptor after them.18261825 */18271827- nr_frags -= I40E_MAX_BUFFER_TXD - 1;18261826+ nr_frags -= I40E_MAX_BUFFER_TXD - 2;18281827 frag = &skb_shinfo(skb)->frags[0];1829182818301829 /* Initialize size to the negative value of gso_size minus 1. We···18321833 * descriptors for a single transmit as the header and previous18331834 * fragment are already consuming 2 descriptors.18341835 */18351835- sum = 1 - gso_size;18361836+ sum = 1 - skb_shinfo(skb)->gso_size;1836183718371837- /* Add size of frags 1 through 5 to create our initial sum */18381838- sum += skb_frag_size(++frag);18391839- sum += skb_frag_size(++frag);18401840- sum += skb_frag_size(++frag);18411841- sum += skb_frag_size(++frag);18421842- sum += skb_frag_size(++frag);18381838+ /* Add size of frags 0 through 4 to create our initial sum */18391839+ sum += skb_frag_size(frag++);18401840+ sum += skb_frag_size(frag++);18411841+ sum += skb_frag_size(frag++);18421842+ sum += skb_frag_size(frag++);18431843+ sum += skb_frag_size(frag++);1843184418441845 /* Walk through fragments adding latest fragment, testing it, and18451846 * then removing stale fragments from the sum.18461847 */18471848 stale = &skb_shinfo(skb)->frags[0];18481849 for (;;) {18491849- sum += skb_frag_size(++frag);18501850+ sum += skb_frag_size(frag++);1850185118511852 /* if sum is negative we failed to make sufficient progress */18521853 if (sum < 0)···18561857 if (!--nr_frags)18571858 break;1858185918591859- sum -= skb_frag_size(++stale);18601860+ sum -= skb_frag_size(stale++);18601861 }1861186218621863 return false;
+7-3
drivers/net/ethernet/intel/i40evf/i40e_txrx.h
···395395 **/396396static inline bool i40e_chk_linearize(struct sk_buff *skb, int count)397397{398398- /* we can only support up to 8 data buffers for a single send */399399- if (likely(count <= I40E_MAX_BUFFER_TXD))398398+ /* Both TSO and single send will work if count is less than 8 */399399+ if (likely(count < I40E_MAX_BUFFER_TXD))400400 return false;401401402402- return __i40evf_chk_linearize(skb);402402+ if (skb_is_gso(skb))403403+ return __i40evf_chk_linearize(skb);404404+405405+ /* we can support up to 8 data buffers for a single send */406406+ return count != I40E_MAX_BUFFER_TXD;403407}404408#endif /* _I40E_TXRX_H_ */
···323323 unsigned long csum_ok;324324 unsigned long csum_none;325325 unsigned long csum_complete;326326+ unsigned long dropped;326327 int hwtstamp_rx_filter;327328 cpumask_var_t affinity_mask;328329};
+13
drivers/net/ethernet/mellanox/mlx4/port.c
···13171317 }1318131813191319 gen_context->mtu = cpu_to_be16(master->max_mtu[port]);13201320+ /* Slave cannot change Global Pause configuration */13211321+ if (slave != mlx4_master_func_num(dev) &&13221322+ ((gen_context->pptx != master->pptx) ||13231323+ (gen_context->pprx != master->pprx))) {13241324+ gen_context->pptx = master->pptx;13251325+ gen_context->pprx = master->pprx;13261326+ mlx4_warn(dev,13271327+ "denying Global Pause change for slave:%d\n",13281328+ slave);13291329+ } else {13301330+ master->pptx = gen_context->pptx;13311331+ master->pprx = gen_context->pprx;13321332+ }13201333 break;13211334 case MLX4_SET_PORT_GID_TABLE:13221335 /* change to MULTIPLE entries: number of guest's gids
···617617 { USB_VENDOR_AND_INTERFACE_INFO(0x0bdb, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),618618 .driver_info = (unsigned long)&cdc_mbim_info,619619 },620620- /* Huawei E3372 fails unless NDP comes after the IP packets */621621- { USB_DEVICE_AND_INTERFACE_INFO(0x12d1, 0x157d, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),620620+621621+ /* Some Huawei devices, ME906s-158 (12d1:15c1) and E3372622622+ * (12d1:157d), are known to fail unless the NDP is placed623623+ * after the IP packets. Applying the quirk to all Huawei624624+ * devices is broader than necessary, but harmless.625625+ */626626+ { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),622627 .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end,623628 },624629 /* default entry */
+8-4
drivers/net/vmxnet3/vmxnet3_drv.c
···11521152 union Vmxnet3_GenericDesc *gdesc)11531153{11541154 if (!gdesc->rcd.cnc && adapter->netdev->features & NETIF_F_RXCSUM) {11551155- /* typical case: TCP/UDP over IP and both csums are correct */11561156- if ((le32_to_cpu(gdesc->dword[3]) & VMXNET3_RCD_CSUM_OK) ==11571157- VMXNET3_RCD_CSUM_OK) {11551155+ if (gdesc->rcd.v4 &&11561156+ (le32_to_cpu(gdesc->dword[3]) &11571157+ VMXNET3_RCD_CSUM_OK) == VMXNET3_RCD_CSUM_OK) {11581158 skb->ip_summed = CHECKSUM_UNNECESSARY;11591159 BUG_ON(!(gdesc->rcd.tcp || gdesc->rcd.udp));11601160- BUG_ON(!(gdesc->rcd.v4 || gdesc->rcd.v6));11601160+ BUG_ON(gdesc->rcd.frg);11611161+ } else if (gdesc->rcd.v6 && (le32_to_cpu(gdesc->dword[3]) &11621162+ (1 << VMXNET3_RCD_TUC_SHIFT))) {11631163+ skb->ip_summed = CHECKSUM_UNNECESSARY;11641164+ BUG_ON(!(gdesc->rcd.tcp || gdesc->rcd.udp));11611165 BUG_ON(gdesc->rcd.frg);11621166 } else {11631167 if (gdesc->rcd.csum) {
+2-2
drivers/net/vmxnet3/vmxnet3_int.h
···6969/*7070 * Version numbers7171 */7272-#define VMXNET3_DRIVER_VERSION_STRING "1.4.6.0-k"7272+#define VMXNET3_DRIVER_VERSION_STRING "1.4.7.0-k"73737474/* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */7575-#define VMXNET3_DRIVER_VERSION_NUM 0x010406007575+#define VMXNET3_DRIVER_VERSION_NUM 0x0104070076767777#if defined(CONFIG_PCI_MSI)7878 /* RSS only makes sense if MSI-X is supported. */
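The two version constants bumped above encode the same `1.4.7.0` string, one byte per component, as the header comment describes. A quick sanity check; `vmxnet3_version_num()` is a hypothetical packing helper written for this sketch, not driver code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the header comment: "a 32-bit int,
 * each byte encode a version number in VMXNET3_DRIVER_VERSION". */
static uint32_t vmxnet3_version_num(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
	return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
	       ((uint32_t)c << 8) | (uint32_t)d;
}
```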
···11471147 /* the fw is stopped, the aux sta is dead: clean up driver state */11481148 iwl_mvm_del_aux_sta(mvm);1149114911501150+ iwl_free_fw_paging(mvm);11511151+11501152 /*11511153 * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()11521154 * won't be called in this case).
-2
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
···761761 for (i = 0; i < NVM_MAX_NUM_SECTIONS; i++)762762 kfree(mvm->nvm_sections[i].data);763763764764- iwl_free_fw_paging(mvm);765765-766764 iwl_mvm_tof_clean(mvm);767765768766 ieee80211_free_hw(mvm->hw);
+2-2
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
···732732 */733733 val = iwl_read_prph(trans, PREG_AUX_BUS_WPROT_0);734734 if (val & (BIT(1) | BIT(17))) {735735- IWL_INFO(trans,736736- "can't access the RSA semaphore it is write protected\n");735735+ IWL_DEBUG_INFO(trans,736736+ "can't access the RSA semaphore it is write protected\n");737737 return 0;738738 }739739
···737737 break;738738 case CPU_PM_EXIT:739739 case CPU_PM_ENTER_FAILED:740740- /* Restore and enable the counter */741741- armpmu_start(event, PERF_EF_RELOAD);740740+ /*741741+ * Restore and enable the counter.742742+ * armpmu_start() indirectly calls743743+ *744744+ * perf_event_update_userpage()745745+ *746746+ * that requires RCU read locking to be functional,747747+ * wrap the call within RCU_NONIDLE to make the748748+ * RCU subsystem aware this cpu is not idle from749749+ * an RCU perspective for the armpmu_start() call750750+ * duration.751751+ */752752+ RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));742753 break;743754 default:744755 break;
+5-2
drivers/phy/phy-rockchip-dp.c
···8686 if (!np)8787 return -ENODEV;88888989+ if (!dev->parent || !dev->parent->of_node)9090+ return -ENODEV;9191+8992 dp = devm_kzalloc(dev, sizeof(*dp), GFP_KERNEL);9093 if (IS_ERR(dp))9194 return -ENOMEM;···107104 return ret;108105 }109106110110- dp->grf = syscon_regmap_lookup_by_phandle(np, "rockchip,grf");107107+ dp->grf = syscon_node_to_regmap(dev->parent->of_node);111108 if (IS_ERR(dp->grf)) {112112- dev_err(dev, "rk3288-dp needs rockchip,grf property\n");109109+ dev_err(dev, "rk3288-dp needs the General Register Files syscon\n");113110 return PTR_ERR(dp->grf);114111 }115112
+4-1
drivers/phy/phy-rockchip-emmc.c
···176176 struct regmap *grf;177177 unsigned int reg_offset;178178179179- grf = syscon_regmap_lookup_by_phandle(dev->of_node, "rockchip,grf");179179+ if (!dev->parent || !dev->parent->of_node)180180+ return -ENODEV;181181+182182+ grf = syscon_node_to_regmap(dev->parent->of_node);180183 if (IS_ERR(grf)) {181184 dev_err(dev, "Missing rockchip,grf property\n");182185 return PTR_ERR(grf);
···687687 ipcdev.acpi_io_size = size;688688 dev_info(&pdev->dev, "io res: %pR\n", res);689689690690- /* This is index 0 to cover BIOS data register */691690 punit_res = punit_res_array;691691+ /* This is index 0 to cover BIOS data register */692692 res = platform_get_resource(pdev, IORESOURCE_MEM,693693 PLAT_RESOURCE_BIOS_DATA_INDEX);694694 if (!res) {···698698 *punit_res = *res;699699 dev_info(&pdev->dev, "punit BIOS data res: %pR\n", res);700700701701+ /* This is index 1 to cover BIOS interface register */701702 res = platform_get_resource(pdev, IORESOURCE_MEM,702703 PLAT_RESOURCE_BIOS_IFACE_INDEX);703704 if (!res) {704705 dev_err(&pdev->dev, "Failed to get res of punit BIOS iface\n");705706 return -ENXIO;706707 }707707- /* This is index 1 to cover BIOS interface register */708708 *++punit_res = *res;709709 dev_info(&pdev->dev, "punit BIOS interface res: %pR\n", res);710710711711+ /* This is index 2 to cover ISP data register, optional */711712 res = platform_get_resource(pdev, IORESOURCE_MEM,712713 PLAT_RESOURCE_ISP_DATA_INDEX);713713- if (!res) {714714- dev_err(&pdev->dev, "Failed to get res of punit ISP data\n");715715- return -ENXIO;714714+ ++punit_res;715715+ if (res) {716716+ *punit_res = *res;717717+ dev_info(&pdev->dev, "punit ISP data res: %pR\n", res);716718 }717717- /* This is index 2 to cover ISP data register */718718- *++punit_res = *res;719719- dev_info(&pdev->dev, "punit ISP data res: %pR\n", res);720719720720+ /* This is index 3 to cover ISP interface register, optional */721721 res = platform_get_resource(pdev, IORESOURCE_MEM,722722 PLAT_RESOURCE_ISP_IFACE_INDEX);723723- if (!res) {724724- dev_err(&pdev->dev, "Failed to get res of punit ISP iface\n");725725- return -ENXIO;723723+ ++punit_res;724724+ if (res) {725725+ *punit_res = *res;726726+ dev_info(&pdev->dev, "punit ISP interface res: %pR\n", res);726727 }727727- /* This is index 3 to cover ISP interface register */728728- *++punit_res = *res;729729- dev_info(&pdev->dev, "punit ISP interface res: %pR\n", res);730728729729+ /* This is index 4 to cover GTD data register, optional */731730 res = platform_get_resource(pdev, IORESOURCE_MEM,732731 PLAT_RESOURCE_GTD_DATA_INDEX);733733- if (!res) {734734- dev_err(&pdev->dev, "Failed to get res of punit GTD data\n");735735- return -ENXIO;732732+ ++punit_res;733733+ if (res) {734734+ *punit_res = *res;735735+ dev_info(&pdev->dev, "punit GTD data res: %pR\n", res);736736 }737737- /* This is index 4 to cover GTD data register */738738- *++punit_res = *res;739739- dev_info(&pdev->dev, "punit GTD data res: %pR\n", res);740737738738+ /* This is index 5 to cover GTD interface register, optional */741739 res = platform_get_resource(pdev, IORESOURCE_MEM,742740 PLAT_RESOURCE_GTD_IFACE_INDEX);743743- if (!res) {744744- dev_err(&pdev->dev, "Failed to get res of punit GTD iface\n");745745- return -ENXIO;741741+ ++punit_res;742742+ if (res) {743743+ *punit_res = *res;744744+ dev_info(&pdev->dev, "punit GTD interface res: %pR\n", res);746745 }747747- /* This is index 5 to cover GTD interface register */748748- *++punit_res = *res;749749- dev_info(&pdev->dev, "punit GTD interface res: %pR\n", res);750746751747 res = platform_get_resource(pdev, IORESOURCE_MEM,752748 PLAT_RESOURCE_IPC_INDEX);
+32-16
drivers/platform/x86/intel_punit_ipc.c
···227227 struct resource *res;228228 void __iomem *addr;229229230230+ /*231231+ * The following resources are required232232+ * - BIOS_IPC BASE_DATA233233+ * - BIOS_IPC BASE_IFACE234234+ */230235 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);231236 addr = devm_ioremap_resource(&pdev->dev, res);232237 if (IS_ERR(addr))···244239 return PTR_ERR(addr);245240 punit_ipcdev->base[BIOS_IPC][BASE_IFACE] = addr;246241242242+ /*243243+ * The following resources are optional244244+ * - ISPDRIVER_IPC BASE_DATA245245+ * - ISPDRIVER_IPC BASE_IFACE246246+ * - GTDRIVER_IPC BASE_DATA247247+ * - GTDRIVER_IPC BASE_IFACE248248+ */247249 res = platform_get_resource(pdev, IORESOURCE_MEM, 2);248248- addr = devm_ioremap_resource(&pdev->dev, res);249249- if (IS_ERR(addr))250250- return PTR_ERR(addr);251251- punit_ipcdev->base[ISPDRIVER_IPC][BASE_DATA] = addr;250250+ if (res) {251251+ addr = devm_ioremap_resource(&pdev->dev, res);252252+ if (!IS_ERR(addr))253253+ punit_ipcdev->base[ISPDRIVER_IPC][BASE_DATA] = addr;254254+ }252255253256 res = platform_get_resource(pdev, IORESOURCE_MEM, 3);254254- addr = devm_ioremap_resource(&pdev->dev, res);255255- if (IS_ERR(addr))256256- return PTR_ERR(addr);257257- punit_ipcdev->base[ISPDRIVER_IPC][BASE_IFACE] = addr;257257+ if (res) {258258+ addr = devm_ioremap_resource(&pdev->dev, res);259259+ if (!IS_ERR(addr))260260+ punit_ipcdev->base[ISPDRIVER_IPC][BASE_IFACE] = addr;261261+ }258262259263 res = platform_get_resource(pdev, IORESOURCE_MEM, 4);260260- addr = devm_ioremap_resource(&pdev->dev, res);261261- if (IS_ERR(addr))262262- return PTR_ERR(addr);263263- punit_ipcdev->base[GTDRIVER_IPC][BASE_DATA] = addr;264264+ if (res) {265265+ addr = devm_ioremap_resource(&pdev->dev, res);266266+ if (!IS_ERR(addr))267267+ punit_ipcdev->base[GTDRIVER_IPC][BASE_DATA] = addr;268268+ }264269265270 res = platform_get_resource(pdev, IORESOURCE_MEM, 5);266266- addr = devm_ioremap_resource(&pdev->dev, res);267267- if (IS_ERR(addr))268268- return PTR_ERR(addr);269269- punit_ipcdev->base[GTDRIVER_IPC][BASE_IFACE] = addr;271271+ if (res) {272272+ addr = devm_ioremap_resource(&pdev->dev, res);273273+ if (!IS_ERR(addr))274274+ punit_ipcdev->base[GTDRIVER_IPC][BASE_IFACE] = addr;275275+ }270276271277 return 0;272278}
+1-1
drivers/platform/x86/intel_telemetry_pltdrv.c
···659659static int telemetry_plt_set_sampling_period(u8 pss_period, u8 ioss_period)660660{661661 u32 telem_ctrl = 0;662662- int ret;662662+ int ret = 0;663663664664 mutex_lock(&(telm_conf->telem_lock));665665 if (ioss_period) {
···2669266926702670 /* Create device class needed by udev */26712671 dev_class = class_create(THIS_MODULE, DRV_NAME);26722672- if (!dev_class) {26722672+ if (IS_ERR(dev_class)) {26732673 rmcd_error("Unable to create " DRV_NAME " class");26742674- return -EINVAL;26742674+ return PTR_ERR(dev_class);26752675 }2676267626772677 ret = alloc_chrdev_region(&dev_number, 0, RIO_MAX_MPORTS, DRV_NAME);
+3-3
drivers/rtc/rtc-ds1307.c
···863863 * A user-initiated temperature conversion is not started by this function,864864 * so the temperature is updated once every 64 seconds.865865 */866866-static int ds3231_hwmon_read_temp(struct device *dev, s16 *mC)866866+static int ds3231_hwmon_read_temp(struct device *dev, s32 *mC)867867{868868 struct ds1307 *ds1307 = dev_get_drvdata(dev);869869 u8 temp_buf[2];···892892 struct device_attribute *attr, char *buf)893893{894894 int ret;895895- s16 temp;895895+ s32 temp;896896897897 ret = ds3231_hwmon_read_temp(dev, &temp);898898 if (ret)···15311531 return PTR_ERR(ds1307->rtc);15321532 }1533153315341534- if (ds1307_can_wakeup_device) {15341534+ if (ds1307_can_wakeup_device && ds1307->client->irq <= 0) {15351535 /* Disable request for an IRQ */15361536 want_irq = false;15371537 dev_info(&client->dev, "'wakeup-source' is set, request for an IRQ is disabled!\n");
···688688{689689 struct flowi6 fl;690690691691+ memset(&fl, 0, sizeof(fl));691692 if (saddr)692693 memcpy(&fl.saddr, saddr, sizeof(struct in6_addr));693694 if (daddr)
+6-5
drivers/soc/mediatek/mtk-scpsys.c
···491491 genpd->dev_ops.active_wakeup = scpsys_active_wakeup;492492493493 /*494494- * With CONFIG_PM disabled turn on all domains to make the495495- * hardware usable.494494+ * Initially turn on all domains to make the domains usable495495+ * with !CONFIG_PM and to get the hardware in sync with the496496+ * software. The unused domains will be switched off during497497+ * late_init time.496498 */497497- if (!IS_ENABLED(CONFIG_PM))498498- genpd->power_on(genpd);499499+ genpd->power_on(genpd);499500500500- pm_genpd_init(genpd, NULL, true);501501+ pm_genpd_init(genpd, NULL, false);501502 }502503503504 /*
+34-20
drivers/staging/media/davinci_vpfe/vpfe_video.c
···172172static int vpfe_update_pipe_state(struct vpfe_video_device *video)173173{174174 struct vpfe_pipeline *pipe = &video->pipe;175175+ int ret;175176176176- if (vpfe_prepare_pipeline(video))177177- return vpfe_prepare_pipeline(video);177177+ ret = vpfe_prepare_pipeline(video);178178+ if (ret)179179+ return ret;178180179181 /*180182 * Find out if there is any input video···184182 */185183 if (pipe->input_num == 0) {186184 pipe->state = VPFE_PIPELINE_STREAM_CONTINUOUS;187187- if (vpfe_update_current_ext_subdev(video)) {185185+ ret = vpfe_update_current_ext_subdev(video);186186+ if (ret) {188187 pr_err("Invalid external subdev\n");189189- return vpfe_update_current_ext_subdev(video);188188+ return ret;190189 }191190 } else {192191 pipe->state = VPFE_PIPELINE_STREAM_SINGLESHOT;···670667 struct v4l2_subdev *subdev;671668 struct v4l2_format format;672669 struct media_pad *remote;670670+ int ret;673671674672 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_enum_fmt\n");675673···699695 sd_fmt.pad = remote->index;700696 sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;701697 /* get output format of remote subdev */702702- if (v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt)) {698698+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);699699+ if (ret) {703700 v4l2_err(&vpfe_dev->v4l2_dev,704701 "invalid remote subdev for video node\n");705705- return v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);702702+ return ret;706703 }707704 /* convert to pix format */708705 mbus.code = sd_fmt.format.code;···730725 struct vpfe_video_device *video = video_drvdata(file);731726 struct vpfe_device *vpfe_dev = video->vpfe_dev;732727 struct v4l2_format format;728728+ int ret;733729734730 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_fmt\n");735731 /* If streaming is started, return error */···739733 return -EBUSY;740734 }741735 /* get adjacent subdev's output pad format */742742- if (__vpfe_video_get_format(video, &format))743743- return __vpfe_video_get_format(video, 
&format);736736+ ret = __vpfe_video_get_format(video, &format);737737+ if (ret)738738+ return ret;744739 *fmt = format;745740 video->fmt = *fmt;746741 return 0;···764757 struct vpfe_video_device *video = video_drvdata(file);765758 struct vpfe_device *vpfe_dev = video->vpfe_dev;766759 struct v4l2_format format;760760+ int ret;767761768762 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_try_fmt\n");769763 /* get adjacent subdev's output pad format */770770- if (__vpfe_video_get_format(video, &format))771771- return __vpfe_video_get_format(video, &format);764764+ ret = __vpfe_video_get_format(video, &format);765765+ if (ret)766766+ return ret;772767773768 *fmt = format;774769 return 0;···847838848839 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_input\n");849840850850- if (mutex_lock_interruptible(&video->lock))851851- return mutex_lock_interruptible(&video->lock);841841+ ret = mutex_lock_interruptible(&video->lock);842842+ if (ret)843843+ return ret;852844 /*853845 * If streaming is started return device busy854846 * error···950940 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_std\n");951941952942 /* Call decoder driver function to set the standard */953953- if (mutex_lock_interruptible(&video->lock))954954- return mutex_lock_interruptible(&video->lock);943943+ ret = mutex_lock_interruptible(&video->lock);944944+ if (ret)945945+ return ret;955946 sdinfo = video->current_ext_subdev;956947 /* If streaming is started, return device busy error */957948 if (video->started) {···13381327 return -EINVAL;13391328 }1340132913411341- if (mutex_lock_interruptible(&video->lock))13421342- return mutex_lock_interruptible(&video->lock);13301330+ ret = mutex_lock_interruptible(&video->lock);13311331+ if (ret)13321332+ return ret;1343133313441334 if (video->io_usrs != 0) {13451335 v4l2_err(&vpfe_dev->v4l2_dev, "Only one IO user allowed\n");···13661354 q->buf_struct_size = sizeof(struct vpfe_cap_buffer);13671355 q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;1368135613691369- 
if (vb2_queue_init(q)) {13571357+ ret = vb2_queue_init(q);13581358+ if (ret) {13701359 v4l2_err(&vpfe_dev->v4l2_dev, "vb2_queue_init() failed\n");13711360 vb2_dma_contig_cleanup_ctx(vpfe_dev->pdev);13721372- return vb2_queue_init(q);13611361+ return ret;13731362 }1374136313751364 fh->io_allowed = 1;···15461533 return -EINVAL;15471534 }1548153515491549- if (mutex_lock_interruptible(&video->lock))15501550- return mutex_lock_interruptible(&video->lock);15361536+ ret = mutex_lock_interruptible(&video->lock);15371537+ if (ret)15381538+ return ret;1551153915521540 vpfe_stop_capture(video);15531541 ret = vb2_streamoff(&video->buffer_queue, buf_type);
+1-1
drivers/staging/rdma/hfi1/TODO
···33- Remove unneeded file entries in sysfs44- Remove software processing of IB protocol and place in library for use55 by qib, ipath (if still present), hfi1, and eventually soft-roce66-66+- Replace incorrect uAPI
+35-56
drivers/staging/rdma/hfi1/file_ops.c
···4949#include <linux/vmalloc.h>5050#include <linux/io.h>51515252+#include <rdma/ib.h>5353+5254#include "hfi.h"5355#include "pio.h"5456#include "device.h"···191189 __u64 user_val = 0;192190 int uctxt_required = 1;193191 int must_be_root = 0;192192+193193+ /* FIXME: This interface cannot continue out of staging */194194+ if (WARN_ON_ONCE(!ib_safe_file_access(fp)))195195+ return -EACCES;194196195197 if (count < sizeof(cmd)) {196198 ret = -EINVAL;···797791 spin_unlock_irqrestore(&dd->uctxt_lock, flags);798792799793 dd->rcd[uctxt->ctxt] = NULL;794794+795795+ hfi1_user_exp_rcv_free(fdata);796796+ hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);797797+800798 uctxt->rcvwait_to = 0;801799 uctxt->piowait_to = 0;802800 uctxt->rcvnowait = 0;803801 uctxt->pionowait = 0;804802 uctxt->event_flags = 0;805805-806806- hfi1_user_exp_rcv_free(fdata);807807- hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);808803809804 hfi1_stats.sps_ctxts--;810805 if (++dd->freectxts == dd->num_user_contexts)···1134112711351128static int user_init(struct file *fp)11361129{11371137- int ret;11381130 unsigned int rcvctrl_ops = 0;11391131 struct hfi1_filedata *fd = fp->private_data;11401132 struct hfi1_ctxtdata *uctxt = fd->uctxt;1141113311421134 /* make sure that the context has already been setup */11431143- if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) {11441144- ret = -EFAULT;11451145- goto done;11461146- }11471147-11481148- /*11491149- * Subctxts don't need to initialize anything since master11501150- * has done it.11511151- */11521152- if (fd->subctxt) {11531153- ret = wait_event_interruptible(uctxt->wait, !test_bit(11541154- HFI1_CTXT_MASTER_UNINIT,11551155- &uctxt->event_flags));11561156- goto expected;11571157- }11351135+ if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))11361136+ return -EFAULT;1158113711591138 /* initialize poll variables... 
*/11601139 uctxt->urgent = 0;···11951202 wake_up(&uctxt->wait);11961203 }1197120411981198-expected:11991199- /*12001200- * Expected receive has to be setup for all processes (including12011201- * shared contexts). However, it has to be done after the master12021202- * context has been fully configured as it depends on the12031203- * eager/expected split of the RcvArray entries.12041204- * Setting it up here ensures that the subcontexts will be waiting12051205- * (due to the above wait_event_interruptible() until the master12061206- * is setup.12071207- */12081208- ret = hfi1_user_exp_rcv_init(fp);12091209-done:12101210- return ret;12051205+ return 0;12111206}1212120712131208static int get_ctxt_info(struct file *fp, void __user *ubase, __u32 len)···12421261 int ret = 0;1243126212441263 /*12451245- * Context should be set up only once (including allocation and12641264+ * Context should be set up only once, including allocation and12461265 * programming of eager buffers. This is done if context sharing12471266 * is not requested or by the master process.12481267 */···12631282 if (ret)12641283 goto done;12651284 }12851285+ } else {12861286+ ret = wait_event_interruptible(uctxt->wait, !test_bit(12871287+ HFI1_CTXT_MASTER_UNINIT,12881288+ &uctxt->event_flags));12891289+ if (ret)12901290+ goto done;12661291 }12921292+12671293 ret = hfi1_user_sdma_alloc_queues(uctxt, fp);12941294+ if (ret)12951295+ goto done;12961296+ /*12971297+ * Expected receive has to be setup for all processes (including12981298+ * shared contexts). 
However, it has to be done after the master12991299+ * context has been fully configured as it depends on the13001300+ * eager/expected split of the RcvArray entries.13011301+ * Setting it up here ensures that the subcontexts will be waiting13021302+ * (due to the above wait_event_interruptible() until the master13031303+ * is setup.13041304+ */13051305+ ret = hfi1_user_exp_rcv_init(fp);12681306 if (ret)12691307 goto done;12701308···15651565{15661566 struct hfi1_devdata *dd = filp->private_data;1567156715681568- switch (whence) {15691569- case SEEK_SET:15701570- break;15711571- case SEEK_CUR:15721572- offset += filp->f_pos;15731573- break;15741574- case SEEK_END:15751575- offset = ((dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE) -15761576- offset;15771577- break;15781578- default:15791579- return -EINVAL;15801580- }15811581-15821582- if (offset < 0)15831583- return -EINVAL;15841584-15851585- if (offset >= (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE)15861586- return -EINVAL;15871587-15881588- filp->f_pos = offset;15891589-15901590- return filp->f_pos;15681568+ return fixed_size_llseek(filp, offset, whence,15691569+ (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE);15911570}1592157115931572/* NOTE: assumes unsigned long is 8 bytes */
+25-15
drivers/staging/rdma/hfi1/mmu_rb.c
···7171 struct mm_struct *,7272 unsigned long, unsigned long);7373static void mmu_notifier_mem_invalidate(struct mmu_notifier *,7474+ struct mm_struct *,7475 unsigned long, unsigned long);7576static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,7677 unsigned long, unsigned long);···138137 rbnode = rb_entry(node, struct mmu_rb_node, node);139138 rb_erase(node, root);140139 if (handler->ops->remove)141141- handler->ops->remove(root, rbnode, false);140140+ handler->ops->remove(root, rbnode, NULL);142141 }143142 }144143···177176 return ret;178177}179178180180-/* Caller must host handler lock */179179+/* Caller must hold handler lock */181180static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,182181 unsigned long addr,183182 unsigned long len)···201200 return node;202201}203202203203+/* Caller must *not* hold handler lock. */204204static void __mmu_rb_remove(struct mmu_rb_handler *handler,205205- struct mmu_rb_node *node, bool arg)205205+ struct mmu_rb_node *node, struct mm_struct *mm)206206{207207+ unsigned long flags;208208+207209 /* Validity of handler and node pointers has been checked by caller. 
*/208210 hfi1_cdbg(MMU, "Removing node addr 0x%llx, len %u", node->addr,209211 node->len);212212+ spin_lock_irqsave(&handler->lock, flags);210213 __mmu_int_rb_remove(node, handler->root);214214+ spin_unlock_irqrestore(&handler->lock, flags);215215+211216 if (handler->ops->remove)212212- handler->ops->remove(handler->root, node, arg);217217+ handler->ops->remove(handler->root, node, mm);213218}214219215220struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr,···238231void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node)239232{240233 struct mmu_rb_handler *handler = find_mmu_handler(root);241241- unsigned long flags;242234243235 if (!handler || !node)244236 return;245237246246- spin_lock_irqsave(&handler->lock, flags);247247- __mmu_rb_remove(handler, node, false);248248- spin_unlock_irqrestore(&handler->lock, flags);238238+ __mmu_rb_remove(handler, node, NULL);249239}250240251241static struct mmu_rb_handler *find_mmu_handler(struct rb_root *root)···264260static inline void mmu_notifier_page(struct mmu_notifier *mn,265261 struct mm_struct *mm, unsigned long addr)266262{267267- mmu_notifier_mem_invalidate(mn, addr, addr + PAGE_SIZE);263263+ mmu_notifier_mem_invalidate(mn, mm, addr, addr + PAGE_SIZE);268264}269265270266static inline void mmu_notifier_range_start(struct mmu_notifier *mn,···272268 unsigned long start,273269 unsigned long end)274270{275275- mmu_notifier_mem_invalidate(mn, start, end);271271+ mmu_notifier_mem_invalidate(mn, mm, start, end);276272}277273278274static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn,275275+ struct mm_struct *mm,279276 unsigned long start, unsigned long end)280277{281278 struct mmu_rb_handler *handler =282279 container_of(mn, struct mmu_rb_handler, mn);283280 struct rb_root *root = handler->root;284284- struct mmu_rb_node *node;281281+ struct mmu_rb_node *node, *ptr = NULL;285282 unsigned long flags;286283287284 spin_lock_irqsave(&handler->lock, flags);288288- for (node = 
__mmu_int_rb_iter_first(root, start, end - 1); node;289289- node = __mmu_int_rb_iter_next(node, start, end - 1)) {285285+ for (node = __mmu_int_rb_iter_first(root, start, end - 1);286286+ node; node = ptr) {287287+ /* Guard against node removal. */288288+ ptr = __mmu_int_rb_iter_next(node, start, end - 1);290289 hfi1_cdbg(MMU, "Invalidating node addr 0x%llx, len %u",291290 node->addr, node->len);292292- if (handler->ops->invalidate(root, node))293293- __mmu_rb_remove(handler, node, true);291291+ if (handler->ops->invalidate(root, node)) {292292+ spin_unlock_irqrestore(&handler->lock, flags);293293+ __mmu_rb_remove(handler, node, mm);294294+ spin_lock_irqsave(&handler->lock, flags);295295+ }294296 }295297 spin_unlock_irqrestore(&handler->lock, flags);296298}
···519519 * do the flush work until that QP's520520 * sdma work has finished.521521 */522522+ spin_lock(&qp->s_lock);522523 if (qp->s_flags & RVT_S_WAIT_DMA) {523524 qp->s_flags &= ~RVT_S_WAIT_DMA;524525 hfi1_schedule_send(qp);525526 }527527+ spin_unlock(&qp->s_lock);526528}527529528530/**
+7-4
drivers/staging/rdma/hfi1/user_exp_rcv.c
···8787static int set_rcvarray_entry(struct file *, unsigned long, u32,8888 struct tid_group *, struct page **, unsigned);8989static int mmu_rb_insert(struct rb_root *, struct mmu_rb_node *);9090-static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);9090+static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *,9191+ struct mm_struct *);9192static int mmu_rb_invalidate(struct rb_root *, struct mmu_rb_node *);9293static int program_rcvarray(struct file *, unsigned long, struct tid_group *,9394 struct tid_pageset *, unsigned, u16, struct page **,···255254 struct hfi1_ctxtdata *uctxt = fd->uctxt;256255 struct tid_group *grp, *gptr;257256257257+ if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))258258+ return 0;258259 /*259260 * The notifier would have been removed when the process'es mm260261 * was freed.···902899 if (!node || node->rcventry != (uctxt->expected_base + rcventry))903900 return -EBADF;904901 if (HFI1_CAP_IS_USET(TID_UNMAP))905905- mmu_rb_remove(&fd->tid_rb_root, &node->mmu, false);902902+ mmu_rb_remove(&fd->tid_rb_root, &node->mmu, NULL);906903 else907904 hfi1_mmu_rb_remove(&fd->tid_rb_root, &node->mmu);908905···968965 continue;969966 if (HFI1_CAP_IS_USET(TID_UNMAP))970967 mmu_rb_remove(&fd->tid_rb_root,971971- &node->mmu, false);968968+ &node->mmu, NULL);972969 else973970 hfi1_mmu_rb_remove(&fd->tid_rb_root,974971 &node->mmu);···10351032}1036103310371034static void mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node,10381038- bool notifier)10351035+ struct mm_struct *mm)10391036{10401037 struct hfi1_filedata *fdata =10411038 container_of(root, struct hfi1_filedata, tid_rb_root);
+22-11
drivers/staging/rdma/hfi1/user_sdma.c
···278278static void user_sdma_free_request(struct user_sdma_request *, bool);279279static int pin_vector_pages(struct user_sdma_request *,280280 struct user_sdma_iovec *);281281-static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned);281281+static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned,282282+ unsigned);282283static int check_header_template(struct user_sdma_request *,283284 struct hfi1_pkt_header *, u32, u32);284285static int set_txreq_header(struct user_sdma_request *,···300299static void activate_packet_queue(struct iowait *, int);301300static bool sdma_rb_filter(struct mmu_rb_node *, unsigned long, unsigned long);302301static int sdma_rb_insert(struct rb_root *, struct mmu_rb_node *);303303-static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);302302+static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *,303303+ struct mm_struct *);304304static int sdma_rb_invalidate(struct rb_root *, struct mmu_rb_node *);305305306306static struct mmu_rb_ops sdma_rb_ops = {···10651063 rb_node = hfi1_mmu_rb_search(&pq->sdma_rb_root,10661064 (unsigned long)iovec->iov.iov_base,10671065 iovec->iov.iov_len);10681068- if (rb_node)10661066+ if (rb_node && !IS_ERR(rb_node))10691067 node = container_of(rb_node, struct sdma_mmu_node, rb);10681068+ else10691069+ rb_node = NULL;1070107010711071 if (!node) {10721072 node = kzalloc(sizeof(*node), GFP_KERNEL);···11111107 goto bail;11121108 }11131109 if (pinned != npages) {11141114- unpin_vector_pages(current->mm, pages, pinned);11101110+ unpin_vector_pages(current->mm, pages, node->npages,11111111+ pinned);11151112 ret = -EFAULT;11161113 goto bail;11171114 }···11521147}1153114811541149static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,11551155- unsigned npages)11501150+ unsigned start, unsigned npages)11561151{11571157- hfi1_release_user_pages(mm, pages, npages, 0);11521152+ hfi1_release_user_pages(mm, pages + start, npages, 0);11581153 
kfree(pages);11591154}11601155···15071502 &req->pq->sdma_rb_root,15081503 (unsigned long)req->iovs[i].iov.iov_base,15091504 req->iovs[i].iov.iov_len);15101510- if (!mnode)15051505+ if (!mnode || IS_ERR(mnode))15111506 continue;1512150715131508 node = container_of(mnode, struct sdma_mmu_node, rb);···15521547}1553154815541549static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode,15551555- bool notifier)15501550+ struct mm_struct *mm)15561551{15571552 struct sdma_mmu_node *node =15581553 container_of(mnode, struct sdma_mmu_node, rb);···15621557 node->pq->n_locked -= node->npages;15631558 spin_unlock(&node->pq->evict_lock);1564155915651565- unpin_vector_pages(notifier ? NULL : current->mm, node->pages,15601560+ /*15611561+ * If mm is set, we are being called by the MMU notifier and we15621562+ * should not pass a mm_struct to unpin_vector_page(). This is to15631563+ * prevent a deadlock when hfi1_release_user_pages() attempts to15641564+ * take the mmap_sem, which the MMU notifier has already taken.15651565+ */15661566+ unpin_vector_pages(mm ? NULL : current->mm, node->pages, 0,15661567 node->npages);15671568 /*15681569 * If called by the MMU notifier, we have to adjust the pinned15691570 * page count ourselves.15701571 */15711571- if (notifier)15721572- current->mm->pinned_vm -= node->npages;15721572+ if (mm)15731573+ mm->pinned_vm -= node->npages;15731574 kfree(node);15741575}15751576
+2
drivers/thermal/Kconfig
···376376 tristate "Temperature sensor driver for mediatek SoCs"377377 depends on ARCH_MEDIATEK || COMPILE_TEST378378 depends on HAS_IOMEM379379+ depends on NVMEM || NVMEM=n380380+ depends on RESET_CONTROLLER379381 default y380382 help381383 Enable this option if you want to have support for thermal management
···688688{689689 struct thermal_zone_device *tz = to_thermal_zone(dev);690690 int trip, ret;691691- unsigned long temperature;691691+ int temperature;692692693693 if (!tz->ops->set_trip_temp)694694 return -EPERM;···696696 if (!sscanf(attr->attr.name, "trip_point_%d_temp", &trip))697697 return -EINVAL;698698699699- if (kstrtoul(buf, 10, &temperature))699699+ if (kstrtoint(buf, 10, &temperature))700700 return -EINVAL;701701702702 ret = tz->ops->set_trip_temp(tz, trip, temperature);···899899{900900 struct thermal_zone_device *tz = to_thermal_zone(dev);901901 int ret = 0;902902- unsigned long temperature;902902+ int temperature;903903904904- if (kstrtoul(buf, 10, &temperature))904904+ if (kstrtoint(buf, 10, &temperature))905905 return -EINVAL;906906907907 if (!tz->ops->set_emul_temp) {···959959 struct thermal_zone_device *tz = to_thermal_zone(dev); \960960 \961961 if (tz->tzp) \962962- return sprintf(buf, "%u\n", tz->tzp->name); \962962+ return sprintf(buf, "%d\n", tz->tzp->name); \963963 else \964964 return -EIO; \965965 } \
+37-42
drivers/tty/pty.c
···626626 */627627628628static struct tty_struct *ptm_unix98_lookup(struct tty_driver *driver,629629- struct inode *ptm_inode, int idx)629629+ struct file *file, int idx)630630{631631 /* Master must be open via /dev/ptmx */632632 return ERR_PTR(-EIO);···642642 */643643644644static struct tty_struct *pts_unix98_lookup(struct tty_driver *driver,645645- struct inode *pts_inode, int idx)645645+ struct file *file, int idx)646646{647647 struct tty_struct *tty;648648649649 mutex_lock(&devpts_mutex);650650- tty = devpts_get_priv(pts_inode);650650+ tty = devpts_get_priv(file->f_path.dentry);651651 mutex_unlock(&devpts_mutex);652652 /* Master must be open before slave */653653 if (!tty)···663663/* this is called once with whichever end is closed last */664664static void pty_unix98_remove(struct tty_driver *driver, struct tty_struct *tty)665665{666666- struct inode *ptmx_inode;666666+ struct pts_fs_info *fsi;667667668668 if (tty->driver->subtype == PTY_TYPE_MASTER)669669- ptmx_inode = tty->driver_data;669669+ fsi = tty->driver_data;670670 else671671- ptmx_inode = tty->link->driver_data;672672- devpts_kill_index(ptmx_inode, tty->index);673673- devpts_del_ref(ptmx_inode);671671+ fsi = tty->link->driver_data;672672+ devpts_kill_index(fsi, tty->index);673673+ devpts_put_ref(fsi);674674}675675676676static const struct tty_operations ptm_unix98_ops = {···720720721721static int ptmx_open(struct inode *inode, struct file *filp)722722{723723+ struct pts_fs_info *fsi;723724 struct tty_struct *tty;724724- struct inode *slave_inode;725725+ struct dentry *dentry;725726 int retval;726727 int index;727728···735734 if (retval)736735 return retval;737736737737+ fsi = devpts_get_ref(inode, filp);738738+ retval = -ENODEV;739739+ if (!fsi)740740+ goto out_free_file;741741+738742 /* find a device that is not in use. 
*/739743 mutex_lock(&devpts_mutex);740740- index = devpts_new_index(inode);741741- if (index < 0) {742742- retval = index;743743- mutex_unlock(&devpts_mutex);744744- goto err_file;745745- }746746-744744+ index = devpts_new_index(fsi);747745 mutex_unlock(&devpts_mutex);746746+747747+ retval = index;748748+ if (index < 0)749749+ goto out_put_ref;750750+748751749752 mutex_lock(&tty_mutex);750753 tty = tty_init_dev(ptm_driver, index);751751-752752- if (IS_ERR(tty)) {753753- retval = PTR_ERR(tty);754754- goto out;755755- }756756-757754 /* The tty returned here is locked so we can safely758755 drop the mutex */759756 mutex_unlock(&tty_mutex);760757761761- set_bit(TTY_PTY_LOCK, &tty->flags); /* LOCK THE SLAVE */762762- tty->driver_data = inode;758758+ retval = PTR_ERR(tty);759759+ if (IS_ERR(tty))760760+ goto out;763761764762 /*765765- * In the case where all references to ptmx inode are dropped and we766766- * still have /dev/tty opened pointing to the master/slave pair (ptmx767767- * is closed/released before /dev/tty), we must make sure that the inode768768- * is still valid when we call the final pty_unix98_shutdown, thus we769769- * hold an additional reference to the ptmx inode. 
For the same /dev/tty770770- * last close case, we also need to make sure the super_block isn't771771- * destroyed (devpts instance unmounted), before /dev/tty is closed and772772- * on its release devpts_kill_index is called.763763+ * From here on out, the tty is "live", and the index and764764+ * fsi will be killed/put by the tty_release()773765 */774774- devpts_add_ref(inode);766766+ set_bit(TTY_PTY_LOCK, &tty->flags); /* LOCK THE SLAVE */767767+ tty->driver_data = fsi;775768776769 tty_add_file(tty, filp);777770778778- slave_inode = devpts_pty_new(inode,779779- MKDEV(UNIX98_PTY_SLAVE_MAJOR, index), index,780780- tty->link);781781- if (IS_ERR(slave_inode)) {782782- retval = PTR_ERR(slave_inode);771771+ dentry = devpts_pty_new(fsi, index, tty->link);772772+ if (IS_ERR(dentry)) {773773+ retval = PTR_ERR(dentry);783774 goto err_release;784775 }785785- tty->link->driver_data = slave_inode;776776+ tty->link->driver_data = dentry;786777787778 retval = ptm_driver->ops->open(tty, filp);788779 if (retval)···786793 return 0;787794err_release:788795 tty_unlock(tty);796796+ // This will also put-ref the fsi789797 tty_release(inode, filp);790798 return retval;791799out:792792- mutex_unlock(&tty_mutex);793793- devpts_kill_index(inode, index);794794-err_file:800800+ devpts_kill_index(fsi, index);801801+out_put_ref:802802+ devpts_put_ref(fsi);803803+out_free_file:795804 tty_free_file(filp);796805 return retval;797806}
+10-1
drivers/tty/serial/8250/8250_port.c
···14031403 /*14041404 * Empty the RX FIFO, we are not interested in anything14051405 * received during the half-duplex transmission.14061406+ * Enable previously disabled RX interrupts.14061407 */14071407- if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX))14081408+ if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX)) {14081409 serial8250_clear_fifos(p);14101410+14111411+ serial8250_rpm_get(p);14121412+14131413+ p->ier |= UART_IER_RLSI | UART_IER_RDI;14141414+ serial_port_out(&p->port, UART_IER, p->ier);14151415+14161416+ serial8250_rpm_put(p);14171417+ }14091418}1410141914111420static void serial8250_em485_handle_stop_tx(unsigned long arg)
-1
drivers/tty/serial/8250/Kconfig
···324324config SERIAL_8250_RT288X325325 bool "Ralink RT288x/RT305x/RT3662/RT3883 serial port support"326326 depends on SERIAL_8250327327- depends on MIPS || COMPILE_TEST328327 default y if MIPS_ALCHEMY || SOC_RT288X || SOC_RT305X || SOC_RT3883 || SOC_MT7620329328 help330329 Selecting this option will add support for the alternate register
···196196 * We record lock dependency chains, so that we can cache them:197197 */198198struct lock_chain {199199- u8 irq_context;200200- u8 depth;201201- u16 base;199199+ /* see BUILD_BUG_ON()s in lookup_chain_cache() */200200+ unsigned int irq_context : 2,201201+ depth : 6,202202+ base : 24;203203+ /* 4 byte hole */202204 struct hlist_node entry;203205 u64 chain_key;204206};
···9898 if (!is_a_nulls(first))9999 first->pprev = &n->next;100100}101101+102102+/**103103+ * hlist_nulls_add_tail_rcu104104+ * @n: the element to add to the hash list.105105+ * @h: the list to add to.106106+ *107107+ * Description:108108+ * Adds the specified element to the end of the specified hlist_nulls,109109+ * while permitting racing traversals. NOTE: tail insertion requires110110+ * list traversal.111111+ *112112+ * The caller must take whatever precautions are necessary113113+ * (such as holding appropriate locks) to avoid racing114114+ * with another list-mutation primitive, such as hlist_nulls_add_head_rcu()115115+ * or hlist_nulls_del_rcu(), running on this same list.116116+ * However, it is perfectly legal to run concurrently with117117+ * the _rcu list-traversal primitives, such as118118+ * hlist_nulls_for_each_entry_rcu(), used to prevent memory-consistency119119+ * problems on Alpha CPUs. Regardless of the type of CPU, the120120+ * list-traversal primitive must be guarded by rcu_read_lock().121121+ */122122+static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,123123+ struct hlist_nulls_head *h)124124+{125125+ struct hlist_nulls_node *i, *last = NULL;126126+127127+ for (i = hlist_nulls_first_rcu(h); !is_a_nulls(i);128128+ i = hlist_nulls_next_rcu(i))129129+ last = i;130130+131131+ if (last) {132132+ n->next = last->next;133133+ n->pprev = &last->next;134134+ rcu_assign_pointer(hlist_nulls_next_rcu(last), n);135135+ } else {136136+ hlist_nulls_add_head_rcu(n, h);137137+ }138138+}139139+101140/**102141 * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type103142 * @tpos: the type * to use as a loop cursor.
+2-2
include/linux/thermal.h
···352352353353struct thermal_trip {354354 struct device_node *np;355355- unsigned long int temperature;356356- unsigned long int hysteresis;355355+ int temperature;356356+ int hysteresis;357357 enum thermal_trip_type type;358358};359359
+2-2
include/linux/tty_driver.h
···77 * defined; unless noted otherwise, they are optional, and can be88 * filled in with a null pointer.99 *1010- * struct tty_struct * (*lookup)(struct tty_driver *self, int idx)1010+ * struct tty_struct * (*lookup)(struct tty_driver *self, struct file *, int idx)1111 *1212 * Return the tty device corresponding to idx, NULL if there is not1313 * one currently in use and an ERR_PTR value on error. Called under···250250251251struct tty_operations {252252 struct tty_struct * (*lookup)(struct tty_driver *driver,253253- struct inode *inode, int idx);253253+ struct file *filp, int idx);254254 int (*install)(struct tty_driver *driver, struct tty_struct *tty);255255 void (*remove)(struct tty_driver *driver, struct tty_struct *tty);256256 int (*open)(struct tty_struct * tty, struct file * filp);
+8
include/media/videobuf2-core.h
···375375/**376376 * struct vb2_ops - driver-specific callbacks377377 *378378+ * @verify_planes_array: Verify that a given user space structure contains379379+ * enough planes for the buffer. This is called380380+ * for each dequeued buffer.378381 * @fill_user_buffer: given a vb2_buffer fill in the userspace structure.379382 * For V4L2 this is a struct v4l2_buffer.380383 * @fill_vb2_buffer: given a userspace structure, fill in the vb2_buffer.···387384 * the vb2_buffer struct.388385 */389386struct vb2_buf_ops {387387+ int (*verify_planes_array)(struct vb2_buffer *vb, const void *pb);390388 void (*fill_user_buffer)(struct vb2_buffer *vb, void *pb);391389 int (*fill_vb2_buffer)(struct vb2_buffer *vb, const void *pb,392390 struct vb2_plane *planes);···404400 * @fileio_read_once: report EOF after reading the first buffer405401 * @fileio_write_immediately: queue buffer after each write() call406402 * @allow_zero_bytesused: allow bytesused == 0 to be passed to the driver403403+ * @quirk_poll_must_check_waiting_for_buffers: Return POLLERR at poll when QBUF404404+ * has not been called. This is a vb1 idiom that has been adopted405405+ * also by vb2.407406 * @lock: pointer to a mutex that protects the vb2_queue struct. The408407 * driver can set this to a mutex to let the v4l2 core serialize409408 * the queuing ioctls. If the driver wants to handle locking···470463 unsigned fileio_read_once:1;471464 unsigned fileio_write_immediately:1;472465 unsigned allow_zero_bytesused:1;466466+ unsigned quirk_poll_must_check_waiting_for_buffers:1;473467474468 struct mutex *lock;475469 void *owner;
+5-2
include/net/cls_cgroup.h
···1717#include <linux/hardirq.h>1818#include <linux/rcupdate.h>1919#include <net/sock.h>2020+#include <net/inet_sock.h>20212122#ifdef CONFIG_CGROUP_NET_CLASSID2223struct cgroup_cls_state {···6463 * softirqs always disables bh.6564 */6665 if (in_serving_softirq()) {6666+ struct sock *sk = skb_to_full_sk(skb);6767+6768 /* If there is an sock_cgroup_classid we'll use that. */6868- if (!skb->sk)6969+ if (!sk || !sk_fullsock(sk))6970 return 0;70717171- classid = sock_cgroup_classid(&skb->sk->sk_cgrp_data);7272+ classid = sock_cgroup_classid(&sk->sk_cgrp_data);7273 }73747475 return classid;
+3
include/net/ip6_route.h
···101101struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev,102102 const struct in6_addr *addr, bool anycast);103103104104+struct rt6_info *ip6_dst_alloc(struct net *net, struct net_device *dev,105105+ int flags);106106+104107/*105108 * support functions for ND106109 *
+2
include/net/ipv6.h
···959959int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len);960960int ip6_datagram_connect_v6_only(struct sock *sk, struct sockaddr *addr,961961 int addr_len);962962+int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr);963963+void ip6_datagram_release_cb(struct sock *sk);962964963965int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,964966 int *addr_len);
include/net/sctp/structs.h
···847847 */848848 ktime_t last_time_heard;849849850850+ /* When was the last time that we sent a chunk using this851851+ * transport? We use this to check for idle transports852852+ */853853+ unsigned long last_time_sent;854854+850855 /* Last time(in jiffies) when cwnd is reduced due to the congestion851856 * indication based on ECNE chunk.852857 */···957952 struct sctp_sock *);958953void sctp_transport_pmtu(struct sctp_transport *, struct sock *sk);959954void sctp_transport_free(struct sctp_transport *);960960-void sctp_transport_reset_timers(struct sctp_transport *);955955+void sctp_transport_reset_t3_rtx(struct sctp_transport *);956956+void sctp_transport_reset_hb_timer(struct sctp_transport *);961957int sctp_transport_hold(struct sctp_transport *);962958void sctp_transport_put(struct sctp_transport *);963959void sctp_transport_update_rto(struct sctp_transport *, __u32);
include/rdma/ib.h
···3434#define _RDMA_IB_H35353636#include <linux/types.h>3737+#include <linux/sched.h>37383839struct ib_addr {3940 union {···8685 __be64 sib_sid_mask;8786 __u64 sib_scope_id;8887};8888+8989+/*9090+ * The IB interfaces that use write() as bi-directional ioctl() are9191+ * fundamentally unsafe, since there are lots of ways to trigger "write()"9292+ * calls from various contexts with elevated privileges. That includes the9393+ * traditional suid executable error message writes, but also various kernel9494+ * interfaces that can write to file descriptors.9595+ *9696+ * This function provides protection for the legacy API by restricting the9797+ * calling context.9898+ */9999+static inline bool ib_safe_file_access(struct file *filp)100100+{101101+ return filp->f_cred == current_cred() && segment_eq(get_fs(), USER_DS);102102+}8910390104#endif /* _RDMA_IB_H */
include/sound/hdaudio.h
···1717 unsigned int verb);1818int snd_hdac_regmap_read_raw(struct hdac_device *codec, unsigned int reg,1919 unsigned int *val);2020+int snd_hdac_regmap_read_raw_uncached(struct hdac_device *codec,2121+ unsigned int reg, unsigned int *val);2022int snd_hdac_regmap_write_raw(struct hdac_device *codec, unsigned int reg,2123 unsigned int val);2224int snd_hdac_regmap_update_raw(struct hdac_device *codec, unsigned int reg,
+5-1
include/uapi/asm-generic/unistd.h
···717717__SYSCALL(__NR_mlock2, sys_mlock2)718718#define __NR_copy_file_range 285719719__SYSCALL(__NR_copy_file_range, sys_copy_file_range)720720+#define __NR_preadv2 286721721+__SYSCALL(__NR_preadv2, sys_preadv2)722722+#define __NR_pwritev2 287723723+__SYSCALL(__NR_pwritev2, sys_pwritev2)720724721725#undef __NR_syscalls722722-#define __NR_syscalls 286726726+#define __NR_syscalls 288723727724728/*725729 * All syscalls below here should go away really,
kernel/bpf/verifier.c
···13741374 }1375137513761376 if (insn->dst_reg != BPF_REG_0 || insn->off != 0 ||13771377+ BPF_SIZE(insn->code) == BPF_DW ||13771378 (mode == BPF_ABS && insn->src_reg != BPF_REG_0)) {13781379 verbose("BPF_LD_ABS uses reserved fields\n");13791380 return -EINVAL;···20302029 if (IS_ERR(map)) {20312030 verbose("fd %d is not pointing to valid bpf_map\n",20322031 insn->imm);20332033- fdput(f);20342032 return PTR_ERR(map);20352033 }20362034
+5-2
kernel/cgroup.c
···28252825 size_t nbytes, loff_t off, bool threadgroup)28262826{28272827 struct task_struct *tsk;28282828+ struct cgroup_subsys *ss;28282829 struct cgroup *cgrp;28292830 pid_t pid;28302830- int ret;28312831+ int ssid, ret;2831283228322833 if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0)28332834 return -EINVAL;···28762875 rcu_read_unlock();28772876out_unlock_threadgroup:28782877 percpu_up_write(&cgroup_threadgroup_rwsem);28782878+ for_each_subsys(ss, ssid)28792879+ if (ss->post_attach)28802880+ ss->post_attach();28792881 cgroup_kn_unlock(of->kn);28802880- cpuset_post_attach_flush();28812882 return ret ?: nbytes;28822883}28832884
+26-7
kernel/cpu.c
···3636 * @target: The target state3737 * @thread: Pointer to the hotplug thread3838 * @should_run: Thread should execute3939+ * @rollback: Perform a rollback3940 * @cb_stat: The state for a single callback (install/uninstall)4041 * @cb: Single callback function (install/uninstall)4142 * @result: Result of the operation···4847#ifdef CONFIG_SMP4948 struct task_struct *thread;5049 bool should_run;5050+ bool rollback;5151 enum cpuhp_state cb_state;5252 int (*cb)(unsigned int cpu);5353 int result;···303301 return __cpu_notify(val, cpu, -1, NULL);304302}305303304304+static void cpu_notify_nofail(unsigned long val, unsigned int cpu)305305+{306306+ BUG_ON(cpu_notify(val, cpu));307307+}308308+306309/* Notifier wrappers for transitioning to state machine */307310static int notify_prepare(unsigned int cpu)308311{···484477 } else {485478 ret = cpuhp_invoke_callback(cpu, st->cb_state, st->cb);486479 }480480+ } else if (st->rollback) {481481+ BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);482482+483483+ undo_cpu_down(cpu, st, cpuhp_ap_states);484484+ /*485485+ * This is a momentary workaround to keep the notifier users486486+ * happy. Will go away once we got rid of the notifiers.487487+ */488488+ cpu_notify_nofail(CPU_DOWN_FAILED, cpu);489489+ st->rollback = false;487490 } else {488491 /* Cannot happen .... */489492 BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);···653636 read_unlock(&tasklist_lock);654637}655638656656-static void cpu_notify_nofail(unsigned long val, unsigned int cpu)657657-{658658- BUG_ON(cpu_notify(val, cpu));659659-}660660-661639static int notify_down_prepare(unsigned int cpu)662640{663641 int err, nr_calls = 0;···733721 */734722 err = stop_machine(take_cpu_down, NULL, cpumask_of(cpu));735723 if (err) {736736- /* CPU didn't die: tell everyone. Can't complain. */737737- cpu_notify_nofail(CPU_DOWN_FAILED, cpu);724724+ /* CPU refused to die */738725 irq_unlock_sparse();726726+ /* Unpark the hotplug thread so we can rollback there */727727+ kthread_unpark(per_cpu_ptr(&cpuhp_state, cpu)->thread);739728 return err;740729 }741730 BUG_ON(cpu_online(cpu));···845832 * to do the further cleanups.846833 */847834 ret = cpuhp_down_callbacks(cpu, st, cpuhp_bp_states, target);835835+ if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {836836+ st->target = prev_state;837837+ st->rollback = true;838838+ cpuhp_kick_ap_work(cpu);839839+ }848840849841 hasdied = prev_state != st->state && st->state == CPUHP_OFFLINE;850842out:···12671249 .name = "notify:online",12681250 .startup = notify_online,12691251 .teardown = notify_down_prepare,12521252+ .skip_onerr = true,12701253 },12711254#endif12721255 /*
kernel/futex.c
···12951295 if (unlikely(should_fail_futex(true)))12961296 ret = -EFAULT;1297129712981298- if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))12981298+ if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {12991299 ret = -EFAULT;13001300- else if (curval != uval)13011301- ret = -EINVAL;13001300+ } else if (curval != uval) {13011301+ /*13021302+ * If a unconditional UNLOCK_PI operation (user space did not13031303+ * try the TID->0 transition) raced with a waiter setting the13041304+ * FUTEX_WAITERS flag between get_user() and locking the hash13051305+ * bucket lock, retry the operation.13061306+ */13071307+ if ((FUTEX_TID_MASK & curval) == uval)13081308+ ret = -EAGAIN;13091309+ else13101310+ ret = -EINVAL;13111311+ }13021312 if (ret) {13031313 raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);13041314 return ret;···15351525 if (likely(&hb1->chain != &hb2->chain)) {15361526 plist_del(&q->list, &hb1->chain);15371527 hb_waiters_dec(hb1);15381538- plist_add(&q->list, &hb2->chain);15391528 hb_waiters_inc(hb2);15291529+ plist_add(&q->list, &hb2->chain);15401530 q->lock_ptr = &hb2->lock;15411531 }15421532 get_futex_key_refs(key2);···26322622 */26332623 if (ret == -EFAULT)26342624 goto pi_faulted;26252625+ /*26262626+ * A unconditional UNLOCK_PI op raced against a waiter26272627+ * setting the FUTEX_WAITERS bit. Try again.26282628+ */26292629+ if (ret == -EAGAIN) {26302630+ spin_unlock(&hb->lock);26312631+ put_futex_key(&key);26322632+ goto retry;26332633+ }26352634 /*26362635 * wake_futex_pi has detected invalid state. Tell user26372636 * space.
+1
kernel/irq/ipi.c
···9494 data = irq_get_irq_data(virq + i);9595 cpumask_copy(data->common->affinity, dest);9696 data->common->ipi_offset = offset;9797+ irq_set_status_flags(virq + i, IRQ_NO_BALANCING);9798 }9899 return virq;99100
+2-1
kernel/kcov.c
···11#define pr_fmt(fmt) "kcov: " fmt2233+#define DISABLE_BRANCH_PROFILING34#include <linux/compiler.h>45#include <linux/types.h>56#include <linux/file.h>···4443 * Entry point from instrumented code.4544 * This is called once per basic-block/edge.4645 */4747-void __sanitizer_cov_trace_pc(void)4646+void notrace __sanitizer_cov_trace_pc(void)4847{4948 struct task_struct *t;5049 enum kcov_mode mode;
kernel/locking/lockdep.c
···21762176 chain->irq_context = hlock->irq_context;21772177 i = get_first_held_lock(curr, hlock);21782178 chain->depth = curr->lockdep_depth + 1 - i;21792179+21802180+ BUILD_BUG_ON((1UL << 24) <= ARRAY_SIZE(chain_hlocks));21812181+ BUILD_BUG_ON((1UL << 6) <= ARRAY_SIZE(curr->held_locks));21822182+ BUILD_BUG_ON((1UL << 8*sizeof(chain_hlocks[0])) <= ARRAY_SIZE(lock_classes));21832183+21792184 if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {21802185 chain->base = nr_chain_hlocks;21812181- nr_chain_hlocks += chain->depth;21822186 for (j = 0; j < chain->depth - 1; j++, i++) {21832187 int lock_id = curr->held_locks[i].class_idx - 1;21842188 chain_hlocks[chain->base + j] = lock_id;21852189 }21862190 chain_hlocks[chain->base + j] = class - lock_classes;21872191 }21922192+21932193+ if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)21942194+ nr_chain_hlocks += chain->depth;21952195+21962196+#ifdef CONFIG_DEBUG_LOCKDEP21972197+ /*21982198+ * Important for check_no_collision().21992199+ */22002200+ if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) {22012201+ if (debug_locks_off_graph_unlock())22022202+ return 0;22032203+22042204+ print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!");22052205+ dump_stack();22062206+ return 0;22072207+ }22082208+#endif22092209+21882210 hlist_add_head_rcu(&chain->entry, hash_head);21892211 debug_atomic_inc(chain_lookup_misses);21902212 inc_chains();···29542932 return 1;29552933}2956293429352935+static inline unsigned int task_irq_context(struct task_struct *task)29362936+{29372937+ return 2 * !!task->hardirq_context + !!task->softirq_context;29382938+}29392939+29572940static int separate_irq_context(struct task_struct *curr,29582941 struct held_lock *hlock)29592942{···29672940 /*29682941 * Keep track of points where we cross into an interrupt context:29692942 */29702970- hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +29712971- curr->softirq_context;29722943 if (depth) {29732944 struct held_lock *prev_hlock;29742945···29962971 struct held_lock *hlock)29972972{29982973 return 1;29742974+}29752975+29762976+static inline unsigned int task_irq_context(struct task_struct *task)29772977+{29782978+ return 0;29992979}3000298030012981static inline int separate_irq_context(struct task_struct *curr,···32713241 hlock->acquire_ip = ip;32723242 hlock->instance = lock;32733243 hlock->nest_lock = nest_lock;32443244+ hlock->irq_context = task_irq_context(curr);32743245 hlock->trylock = trylock;32753246 hlock->read = read;32763247 hlock->check = check;
+2
kernel/locking/lockdep_proc.c
···141141 int i;142142143143 if (v == SEQ_START_TOKEN) {144144+ if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)145145+ seq_printf(m, "(buggered) ");144146 seq_printf(m, "all lock chains:\n");145147 return 0;146148 }
kernel/workqueue.c
···666666 */667667 smp_wmb();668668 set_work_data(work, (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT, 0);669669+ /*670670+ * The following mb guarantees that previous clear of a PENDING bit671671+ * will not be reordered with any speculative LOADS or STORES from672672+ * work->current_func, which is executed afterwards. This possible673673+ * reordering can lead to a missed execution on attempt to qeueue674674+ * the same @work. E.g. consider this case:675675+ *676676+ * CPU#0 CPU#1677677+ * ---------------------------- --------------------------------678678+ *679679+ * 1 STORE event_indicated680680+ * 2 queue_work_on() {681681+ * 3 test_and_set_bit(PENDING)682682+ * 4 } set_..._and_clear_pending() {683683+ * 5 set_work_data() # clear bit684684+ * 6 smp_mb()685685+ * 7 work->current_func() {686686+ * 8 LOAD event_indicated687687+ * }688688+ *689689+ * Without an explicit full barrier speculative LOAD on line 8 can690690+ * be executed before CPU#0 does STORE on line 1. If that happens,691691+ * CPU#0 observes the PENDING bit is still set and new execution of692692+ * a @work is not queued in a hope, that CPU#1 will eventually693693+ * finish the queued @work. Meanwhile CPU#1 does not see694694+ * event_indicated is set, because speculative LOAD was executed695695+ * before actual STORE.696696+ */697697+ smp_mb();669698}670699671700static void clear_work_data(struct work_struct *work)
-4
lib/stackdepot.c
···210210 goto fast_exit;211211212212 hash = hash_stack(trace->entries, trace->nr_entries);213213- /* Bad luck, we won't store this stack. */214214- if (hash == 0)215215- goto exit;216216-217213 bucket = &stack_table[hash & STACK_HASH_MASK];218214219215 /*
+5-7
mm/huge_memory.c
···232232 return READ_ONCE(huge_zero_page);233233}234234235235-static void put_huge_zero_page(void)235235+void put_huge_zero_page(void)236236{237237 /*238238 * Counter should never go to zero here. Only shrinker can put···16841684 if (vma_is_dax(vma)) {16851685 spin_unlock(ptl);16861686 if (is_huge_zero_pmd(orig_pmd))16871687- put_huge_zero_page();16871687+ tlb_remove_page(tlb, pmd_page(orig_pmd));16881688 } else if (is_huge_zero_pmd(orig_pmd)) {16891689 pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd));16901690 atomic_long_dec(&tlb->mm->nr_ptes);16911691 spin_unlock(ptl);16921692- put_huge_zero_page();16921692+ tlb_remove_page(tlb, pmd_page(orig_pmd));16931693 } else {16941694 struct page *page = pmd_page(orig_pmd);16951695 page_remove_rmap(page, true);···19601960 * page fault if needed.19611961 */19621962 return 0;19631963- if (vma->vm_ops)19631963+ if (vma->vm_ops || (vm_flags & VM_NO_THP))19641964 /* khugepaged not yet working on file or special mappings */19651965 return 0;19661966- VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma);19671966 hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;19681967 hend = vma->vm_end & HPAGE_PMD_MASK;19691968 if (hstart < hend)···23512352 return false;23522353 if (is_vma_temporary_stack(vma))23532354 return false;23542354- VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma);23552355- return true;23552355+ return !(vma->vm_flags & VM_NO_THP);23562356}2357235723582358static void collapse_huge_page(struct mm_struct *mm,
+19-18
mm/memcontrol.c
···207207/* "mc" and its members are protected by cgroup_mutex */208208static struct move_charge_struct {209209 spinlock_t lock; /* for from, to */210210+ struct mm_struct *mm;210211 struct mem_cgroup *from;211212 struct mem_cgroup *to;212213 unsigned long flags;···4668466746694668static void mem_cgroup_clear_mc(void)46704669{46704670+ struct mm_struct *mm = mc.mm;46714671+46714672 /*46724673 * we must clear moving_task before waking up waiters at the end of46734674 * task migration.···46794676 spin_lock(&mc.lock);46804677 mc.from = NULL;46814678 mc.to = NULL;46794679+ mc.mm = NULL;46824680 spin_unlock(&mc.lock);46814681+46824682+ mmput(mm);46834683}4684468446854685static int mem_cgroup_can_attach(struct cgroup_taskset *tset)···47394733 VM_BUG_ON(mc.moved_swap);4740473447414735 spin_lock(&mc.lock);47364736+ mc.mm = mm;47424737 mc.from = from;47434738 mc.to = memcg;47444739 mc.flags = move_flags;···47494742 ret = mem_cgroup_precharge_mc(mm);47504743 if (ret)47514744 mem_cgroup_clear_mc();47454745+ } else {47464746+ mmput(mm);47524747 }47534753- mmput(mm);47544748 return ret;47554749}47564750···48604852 return ret;48614853}4862485448634863-static void mem_cgroup_move_charge(struct mm_struct *mm)48554855+static void mem_cgroup_move_charge(void)48644856{48654857 struct mm_walk mem_cgroup_move_charge_walk = {48664858 .pmd_entry = mem_cgroup_move_charge_pte_range,48674867- .mm = mm,48594859+ .mm = mc.mm,48684860 };4869486148704862 lru_add_drain_all();···48764868 atomic_inc(&mc.from->moving_account);48774869 synchronize_rcu();48784870retry:48794879- if (unlikely(!down_read_trylock(&mm->mmap_sem))) {48714871+ if (unlikely(!down_read_trylock(&mc.mm->mmap_sem))) {48804872 /*48814873 * Someone who are holding the mmap_sem might be waiting in48824874 * waitq. So we cancel all extra charges, wake up all waiters,···48934885 * additional charge, the page walk just aborts.48944886 */48954887 walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk);48964896- up_read(&mm->mmap_sem);48884888+ up_read(&mc.mm->mmap_sem);48974889 atomic_dec(&mc.from->moving_account);48984890}4899489149004900-static void mem_cgroup_move_task(struct cgroup_taskset *tset)48924892+static void mem_cgroup_move_task(void)49014893{49024902- struct cgroup_subsys_state *css;49034903- struct task_struct *p = cgroup_taskset_first(tset, &css);49044904- struct mm_struct *mm = get_task_mm(p);49054905-49064906- if (mm) {49074907- if (mc.to)49084908- mem_cgroup_move_charge(mm);49094909- mmput(mm);49104910- }49114911- if (mc.to)48944894+ if (mc.to) {48954895+ mem_cgroup_move_charge();49124896 mem_cgroup_clear_mc();48974897+ }49134898}49144899#else /* !CONFIG_MMU */49154900static int mem_cgroup_can_attach(struct cgroup_taskset *tset)···49124911static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)49134912{49144913}49154915-static void mem_cgroup_move_task(struct cgroup_taskset *tset)49144914+static void mem_cgroup_move_task(void)49164915{49174916}49184917#endif···51965195 .css_reset = mem_cgroup_css_reset,51975196 .can_attach = mem_cgroup_can_attach,51985197 .cancel_attach = mem_cgroup_cancel_attach,51995199- .attach = mem_cgroup_move_task,51985198+ .post_attach = mem_cgroup_move_task,52005199 .bind = mem_cgroup_bind,52015200 .dfl_cftypes = memory_files,52025201 .legacy_cftypes = mem_cgroup_legacy_files,
mm/memory.c
···789789 return pfn_to_page(pfn);790790}791791792792+#ifdef CONFIG_TRANSPARENT_HUGEPAGE793793+struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,794794+ pmd_t pmd)795795+{796796+ unsigned long pfn = pmd_pfn(pmd);797797+798798+ /*799799+ * There is no pmd_special() but there may be special pmds, e.g.800800+ * in a direct-access (dax) mapping, so let's just replicate the801801+ * !HAVE_PTE_SPECIAL case from vm_normal_page() here.802802+ */803803+ if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {804804+ if (vma->vm_flags & VM_MIXEDMAP) {805805+ if (!pfn_valid(pfn))806806+ return NULL;807807+ goto out;808808+ } else {809809+ unsigned long off;810810+ off = (addr - vma->vm_start) >> PAGE_SHIFT;811811+ if (pfn == vma->vm_pgoff + off)812812+ return NULL;813813+ if (!is_cow_mapping(vma->vm_flags))814814+ return NULL;815815+ }816816+ }817817+818818+ if (is_zero_pfn(pfn))819819+ return NULL;820820+ if (unlikely(pfn > highest_memmap_pfn))821821+ return NULL;822822+823823+ /*824824+ * NOTE! We still have PageReserved() pages in the page tables.825825+ * eg. VDSO mappings can cause them to exist.826826+ */827827+out:828828+ return pfn_to_page(pfn);829829+}830830+#endif831831+792832/*793833 * copy one vm_area from one task to the other. Assumes the page tables794834 * already present in the new task to be cleared in the whole range
+7-1
mm/migrate.c
···975975 dec_zone_page_state(page, NR_ISOLATED_ANON +976976 page_is_file_cache(page));977977 /* Soft-offlined page shouldn't go through lru cache list */978978- if (reason == MR_MEMORY_FAILURE) {978978+ if (reason == MR_MEMORY_FAILURE && rc == MIGRATEPAGE_SUCCESS) {979979+ /*980980+ * With this release, we free successfully migrated981981+ * page and set PG_HWPoison on just freed page982982+ * intentionally. Although it's rather weird, it's how983983+ * HWPoison flag works at the moment.984984+ */979985 put_page(page);980986 if (!test_set_page_hwpoison(page))981987 num_poisoned_pages_inc();
+5-1
mm/page_io.c
···353353354354 ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);355355 if (!ret) {356356- swap_slot_free_notify(page);356356+ if (trylock_page(page)) {357357+ swap_slot_free_notify(page);358358+ unlock_page(page);359359+ }360360+357361 count_vm_event(PSWPIN);358362 return 0;359363 }
+5
mm/swap.c
···728728 zone = NULL;729729 }730730731731+ if (is_huge_zero_page(page)) {732732+ put_huge_zero_page();733733+ continue;734734+ }735735+731736 page = compound_head(page);732737 if (!put_page_testzero(page))733738 continue;
+15-15
mm/vmscan.c
···25532553 sc->gfp_mask |= __GFP_HIGHMEM;2554255425552555 for_each_zone_zonelist_nodemask(zone, z, zonelist,25562556- requested_highidx, sc->nodemask) {25562556+ gfp_zone(sc->gfp_mask), sc->nodemask) {25572557 enum zone_type classzone_idx;2558255825592559 if (!populated_zone(zone))···33183318 /* Try to sleep for a short interval */33193319 if (prepare_kswapd_sleep(pgdat, order, remaining,33203320 balanced_classzone_idx)) {33213321+ /*33223322+ * Compaction records what page blocks it recently failed to33233323+ * isolate pages from and skips them in the future scanning.33243324+ * When kswapd is going to sleep, it is reasonable to assume33253325+ * that pages and compaction may succeed so reset the cache.33263326+ */33273327+ reset_isolation_suitable(pgdat);33283328+33293329+ /*33303330+ * We have freed the memory, now we should compact it to make33313331+ * allocation of the requested order possible.33323332+ */33333333+ wakeup_kcompactd(pgdat, order, classzone_idx);33343334+33213335 remaining = schedule_timeout(HZ/10);33223336 finish_wait(&pgdat->kswapd_wait, &wait);33233337 prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);···33543340 * them before going back to sleep.33553341 */33563342 set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold);33573357-33583358- /*33593359- * Compaction records what page blocks it recently failed to33603360- * isolate pages from and skips them in the future scanning.33613361- * When kswapd is going to sleep, it is reasonable to assume33623362- * that pages and compaction may succeed so reset the cache.33633363- */33643364- reset_isolation_suitable(pgdat);33653365-33663366- /*33673367- * We have freed the memory, now we should compact it to make33683368- * allocation of the requested order possible.33693369- */33703370- wakeup_kcompactd(pgdat, order, classzone_idx);3371334333723344 if (!kthread_should_stop())33733345 schedule();
net/decnet/dn_route.c
···10341034 if (!fld.daddr) {10351035 fld.daddr = fld.saddr;1036103610371037- err = -EADDRNOTAVAIL;10381037 if (dev_out)10391038 dev_put(dev_out);10391039+ err = -EINVAL;10401040 dev_out = init_net.loopback_dev;10411041+ if (!dev_out->dn_ptr)10421042+ goto out;10431043+ err = -EADDRNOTAVAIL;10411044 dev_hold(dev_out);10421045 if (!fld.daddr) {10431046 fld.daddr =···11131110 if (dev_out == NULL)11141111 goto out;11151112 dn_db = rcu_dereference_raw(dev_out->dn_ptr);11131113+ if (!dn_db)11141114+ goto e_inval;11161115 /* Possible improvement - check all devices for local addr */11171116 if (dn_dev_islocal(dev_out, fld.daddr)) {11181117 dev_put(dev_out);···11561151 dev_put(dev_out);11571152 dev_out = init_net.loopback_dev;11581153 dev_hold(dev_out);11541154+ if (!dev_out->dn_ptr)11551155+ goto e_inval;11591156 fld.flowidn_oif = dev_out->ifindex;11601157 if (res.fi)11611158 dn_fib_info_put(res.fi);
+5-1
net/ipv4/fib_frontend.c
···904904 if (ifa->ifa_flags & IFA_F_SECONDARY) {905905 prim = inet_ifa_byprefix(in_dev, any, ifa->ifa_mask);906906 if (!prim) {907907- pr_warn("%s: bug: prim == NULL\n", __func__);907907+ /* if the device has been deleted, we don't perform908908+ * address promotion909909+ */910910+ if (!in_dev->dead)911911+ pr_warn("%s: bug: prim == NULL\n", __func__);908912 return;909913 }910914 if (iprim && iprim != prim) {
+6
net/ipv4/netfilter/arptable_filter.c
···8181 return ret;8282 }83838484+ ret = arptable_filter_table_init(&init_net);8585+ if (ret) {8686+ unregister_pernet_subsys(&arptable_filter_net_ops);8787+ kfree(arpfilter_ops);8888+ }8989+8490 return ret;8591}8692
+16-3
net/ipv4/route.c
···14381438#endif14391439}1440144014411441-static struct rtable *rt_dst_alloc(struct net_device *dev,14421442- unsigned int flags, u16 type,14431443- bool nopolicy, bool noxfrm, bool will_cache)14411441+struct rtable *rt_dst_alloc(struct net_device *dev,14421442+ unsigned int flags, u16 type,14431443+ bool nopolicy, bool noxfrm, bool will_cache)14441444{14451445 struct rtable *rt;14461446···1468146814691469 return rt;14701470}14711471+EXPORT_SYMBOL(rt_dst_alloc);1471147214721473/* called in rcu_read_lock() section */14731474static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,···20462045 */20472046 if (fi && res->prefixlen < 4)20482047 fi = NULL;20482048+ } else if ((type == RTN_LOCAL) && (orig_oif != 0) &&20492049+ (orig_oif != dev_out->ifindex)) {20502050+ /* For local routes that require a particular output interface20512051+ * we do not want to cache the result. Caching the result20522052+ * causes incorrect behaviour when there are multiple source20532053+ * addresses on the interface, the end result being that if the20542054+ * intended recipient is waiting on that interface for the20552055+ * packet he won't receive it because it will be delivered on20562056+ * the loopback interface and the IP_PKTINFO ipi_ifindex will20572057+ * be set to the loopback interface as well.20582058+ */20592059+ fi = NULL;20492060 }2050206120512062 fnhe = NULL;
net/sched/sch_generic.c
···159159 if (validate)160160 skb = validate_xmit_skb_list(skb, dev);161161162162- if (skb) {162162+ if (likely(skb)) {163163 HARD_TX_LOCK(dev, txq, smp_processor_id());164164 if (!netif_xmit_frozen_or_stopped(txq))165165 skb = dev_hard_start_xmit(skb, dev, txq, &ret);166166167167 HARD_TX_UNLOCK(dev, txq);168168+ } else {169169+ spin_lock(root_lock);170170+ return qdisc_qlen(q);168171 }169172 spin_lock(root_lock);170173
+10-5
net/sctp/outqueue.c
···866866 * sender MUST assure that at least one T3-rtx867867 * timer is running.868868 */869869- if (chunk->chunk_hdr->type == SCTP_CID_FWD_TSN)870870- sctp_transport_reset_timers(transport);869869+ if (chunk->chunk_hdr->type == SCTP_CID_FWD_TSN) {870870+ sctp_transport_reset_t3_rtx(transport);871871+ transport->last_time_sent = jiffies;872872+ }871873 }872874 break;873875···926924 error = sctp_outq_flush_rtx(q, packet,927925 rtx_timeout, &start_timer);928926929929- if (start_timer)930930- sctp_transport_reset_timers(transport);927927+ if (start_timer) {928928+ sctp_transport_reset_t3_rtx(transport);929929+ transport->last_time_sent = jiffies;930930+ }931931932932 /* This can happen on COOKIE-ECHO resend. Only933933 * one chunk can get bundled with a COOKIE-ECHO.···10661062 list_add_tail(&chunk->transmitted_list,10671063 &transport->transmitted);1068106410691069- sctp_transport_reset_timers(transport);10651065+ sctp_transport_reset_t3_rtx(transport);10661066+ transport->last_time_sent = jiffies;1070106710711068 /* Only let one DATA chunk get bundled with a10721069 * COOKIE-ECHO chunk.
+1-2
net/sctp/sm_make_chunk.c
···30803080 return SCTP_ERROR_RSRC_LOW;3081308130823082 /* Start the heartbeat timer. */30833083- if (!mod_timer(&peer->hb_timer, sctp_transport_timeout(peer)))30843084- sctp_transport_hold(peer);30833083+ sctp_transport_reset_hb_timer(peer);30853084 asoc->new_transport = peer;30863085 break;30873086 case SCTP_PARAM_DEL_IP:
+16-20
net/sctp/sm_sideeffect.c
···6969 sctp_cmd_seq_t *commands,7070 gfp_t gfp);71717272-static void sctp_cmd_hb_timer_update(sctp_cmd_seq_t *cmds,7373- struct sctp_transport *t);7472/********************************************************************7573 * Helper functions7674 ********************************************************************/···365367 struct sctp_association *asoc = transport->asoc;366368 struct sock *sk = asoc->base.sk;367369 struct net *net = sock_net(sk);370370+ u32 elapsed, timeout;368371369372 bh_lock_sock(sk);370373 if (sock_owned_by_user(sk)) {···373374374375 /* Try again later. */375376 if (!mod_timer(&transport->hb_timer, jiffies + (HZ/20)))377377+ sctp_transport_hold(transport);378378+ goto out_unlock;379379+ }380380+381381+ /* Check if we should still send the heartbeat or reschedule */382382+ elapsed = jiffies - transport->last_time_sent;383383+ timeout = sctp_transport_timeout(transport);384384+ if (elapsed < timeout) {385385+ elapsed = timeout - elapsed;386386+ if (!mod_timer(&transport->hb_timer, jiffies + elapsed))376387 sctp_transport_hold(transport);377388 goto out_unlock;378389 }···516507 0);517508518509 /* Update the hb timer to resend a heartbeat every rto */519519- sctp_cmd_hb_timer_update(commands, transport);510510+ sctp_transport_reset_hb_timer(transport);520511 }521512522513 if (transport->state != SCTP_INACTIVE &&···643634 * hold a reference on the transport to make sure none of644635 * the needed data structures go away.645636 */646646- list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) {647647-648648- if (!mod_timer(&t->hb_timer, sctp_transport_timeout(t)))649649- sctp_transport_hold(t);650650- }637637+ list_for_each_entry(t, &asoc->peer.transport_addr_list, transports)638638+ sctp_transport_reset_hb_timer(t);651639}652640653641static void sctp_cmd_hb_timers_stop(sctp_cmd_seq_t *cmds,···674668 }675669}676670677677-678678-/* Helper function to update the heartbeat timer. */679679-static void sctp_cmd_hb_timer_update(sctp_cmd_seq_t *cmds,680680- struct sctp_transport *t)681681-{682682- /* Update the heartbeat timer. */683683- if (!mod_timer(&t->hb_timer, sctp_transport_timeout(t)))684684- sctp_transport_hold(t);685685-}686671687672/* Helper function to handle the reception of an HEARTBEAT ACK. */688673static void sctp_cmd_transport_on(sctp_cmd_seq_t *cmds,···739742 sctp_transport_update_rto(t, (jiffies - hbinfo->sent_at));740743741744 /* Update the heartbeat timer. */742742- if (!mod_timer(&t->hb_timer, sctp_transport_timeout(t)))743743- sctp_transport_hold(t);745745+ sctp_transport_reset_hb_timer(t);744746745747 if (was_unconfirmed && asoc->peer.transport_count == 1)746748 sctp_transport_immediate_rtx(t);···1610161416111615 case SCTP_CMD_HB_TIMER_UPDATE:16121616 t = cmd->obj.transport;16131613- sctp_cmd_hb_timer_update(commands, t);16171617+ sctp_transport_reset_hb_timer(t);16141618 break;1615161916161620 case SCTP_CMD_HB_TIMERS_STOP:
+13-6
net/sctp/transport.c
···183183/* Start T3_rtx timer if it is not already running and update the heartbeat184184 * timer. This routine is called every time a DATA chunk is sent.185185 */186186-void sctp_transport_reset_timers(struct sctp_transport *transport)186186+void sctp_transport_reset_t3_rtx(struct sctp_transport *transport)187187{188188 /* RFC 2960 6.3.2 Retransmission Timer Rules189189 *···197197 if (!mod_timer(&transport->T3_rtx_timer,198198 jiffies + transport->rto))199199 sctp_transport_hold(transport);200200+}201201+202202+void sctp_transport_reset_hb_timer(struct sctp_transport *transport)203203+{204204+ unsigned long expires;200205201206 /* When a data chunk is sent, reset the heartbeat interval. */202202- if (!mod_timer(&transport->hb_timer,203203- sctp_transport_timeout(transport)))204204- sctp_transport_hold(transport);207207+ expires = jiffies + sctp_transport_timeout(transport);208208+ if (time_before(transport->hb_timer.expires, expires) &&209209+ !mod_timer(&transport->hb_timer,210210+ expires + prandom_u32_max(transport->rto)))211211+ sctp_transport_hold(transport);205212}206213207214/* This transport has been assigned to an association.···602595unsigned long sctp_transport_timeout(struct sctp_transport *trans)603596{604597 /* RTO + timer slack +/- 50% of RTO */605605- unsigned long timeout = (trans->rto >> 1) + prandom_u32_max(trans->rto);598598+ unsigned long timeout = trans->rto >> 1;606599607600 if (trans->state != SCTP_UNCONFIRMED &&608601 trans->state != SCTP_PF)609602 timeout += trans->hbinterval;610603611611- return timeout + jiffies;604604+ return timeout;612605}613606614607/* Reset transport variables to their initial values */
+6
net/switchdev/switchdev.c
···305305 if (err && err != -EOPNOTSUPP)306306 netdev_err(dev, "failed (err=%d) to set attribute (id=%d)\n",307307 err, attr->id);308308+ if (attr->complete)309309+ attr->complete(dev, err, attr->complete_priv);308310}309311310312static int switchdev_port_attr_set_defer(struct net_device *dev,···436434 if (err && err != -EOPNOTSUPP)437435 netdev_err(dev, "failed (err=%d) to add object (id=%d)\n",438436 err, obj->id);437437+ if (obj->complete)438438+ obj->complete(dev, err, obj->complete_priv);439439}440440441441static int switchdev_port_obj_add_defer(struct net_device *dev,···506502 if (err && err != -EOPNOTSUPP)507503 netdev_err(dev, "failed (err=%d) to del object (id=%d)\n",508504 err, obj->id);505505+ if (obj->complete)506506+ obj->complete(dev, err, obj->complete_priv);509507}510508511509static int switchdev_port_obj_del_defer(struct net_device *dev,
+1
net/tipc/core.c
···6969 if (err)7070 goto out_nametbl;71717272+ INIT_LIST_HEAD(&tn->dist_queue);7273 err = tipc_topsrv_start(net);7374 if (err)7475 goto out_subscr;
···40404141int sysctl_tipc_named_timeout __read_mostly = 2000;42424343-/**4444- * struct tipc_dist_queue - queue holding deferred name table updates4545- */4646-static struct list_head tipc_dist_queue = LIST_HEAD_INIT(tipc_dist_queue);4747-4843struct distr_queue_item {4944 struct distr_item i;5045 u32 dtype;···224229 kfree_rcu(p, rcu);225230}226231232232+/**233233+ * tipc_dist_queue_purge - remove deferred updates from a node that went down234234+ */235235+static void tipc_dist_queue_purge(struct net *net, u32 addr)236236+{237237+ struct tipc_net *tn = net_generic(net, tipc_net_id);238238+ struct distr_queue_item *e, *tmp;239239+240240+ spin_lock_bh(&tn->nametbl_lock);241241+ list_for_each_entry_safe(e, tmp, &tn->dist_queue, next) {242242+ if (e->node != addr)243243+ continue;244244+ list_del(&e->next);245245+ kfree(e);246246+ }247247+ spin_unlock_bh(&tn->nametbl_lock);248248+}249249+227250void tipc_publ_notify(struct net *net, struct list_head *nsub_list, u32 addr)228251{229252 struct publication *publ, *tmp;230253231254 list_for_each_entry_safe(publ, tmp, nsub_list, nodesub_list)232255 tipc_publ_purge(net, publ, addr);256256+ tipc_dist_queue_purge(net, addr);233257}234258235259/**···293279 * tipc_named_add_backlog - add a failed name table update to the backlog294280 *295281 */296296-static void tipc_named_add_backlog(struct distr_item *i, u32 type, u32 node)282282+static void tipc_named_add_backlog(struct net *net, struct distr_item *i,283283+ u32 type, u32 node)297284{298285 struct distr_queue_item *e;286286+ struct tipc_net *tn = net_generic(net, tipc_net_id);299287 unsigned long now = get_jiffies_64();300288301289 e = kzalloc(sizeof(*e), GFP_ATOMIC);···307291 e->node = node;308292 e->expires = now + msecs_to_jiffies(sysctl_tipc_named_timeout);309293 memcpy(e, i, sizeof(*i));310310- list_add_tail(&e->next, &tipc_dist_queue);294294+ list_add_tail(&e->next, &tn->dist_queue);311295}312296313297/**···317301void tipc_named_process_backlog(struct net 
*net)318302{319303 struct distr_queue_item *e, *tmp;304304+ struct tipc_net *tn = net_generic(net, tipc_net_id);320305 char addr[16];321306 unsigned long now = get_jiffies_64();322307323323- list_for_each_entry_safe(e, tmp, &tipc_dist_queue, next) {308308+ list_for_each_entry_safe(e, tmp, &tn->dist_queue, next) {324309 if (time_after(e->expires, now)) {325310 if (!tipc_update_nametbl(net, &e->i, e->node, e->dtype))326311 continue;···361344 node = msg_orignode(msg);362345 while (count--) {363346 if (!tipc_update_nametbl(net, item, node, mtype))364364- tipc_named_add_backlog(item, mtype, node);347347+ tipc_named_add_backlog(net, item, mtype, node);365348 item++;366349 }367350 kfree_skb(skb);
+2-5
net/vmw_vsock/vmci_transport.c
···17351735 /* Retrieve the head sk_buff from the socket's receive queue. */17361736 err = 0;17371737 skb = skb_recv_datagram(&vsk->sk, flags, noblock, &err);17381738- if (err)17391739- return err;17401740-17411738 if (!skb)17421742- return -EAGAIN;17391739+ return err;1743174017441741 dg = (struct vmci_datagram *)skb->data;17451742 if (!dg)···2151215421522155MODULE_AUTHOR("VMware, Inc.");21532156MODULE_DESCRIPTION("VMCI transport for Virtual Sockets");21542154-MODULE_VERSION("1.0.3.0-k");21572157+MODULE_VERSION("1.0.4.0-k");21552158MODULE_LICENSE("GPL v2");21562159MODULE_ALIAS("vmware_vsock");21572160MODULE_ALIAS_NETPROTO(PF_VSOCK);
···299299int snd_hdac_read_parm_uncached(struct hdac_device *codec, hda_nid_t nid,300300 int parm)301301{302302- int val;302302+ unsigned int cmd, val;303303304304- if (codec->regmap)305305- regcache_cache_bypass(codec->regmap, true);306306- val = snd_hdac_read_parm(codec, nid, parm);307307- if (codec->regmap)308308- regcache_cache_bypass(codec->regmap, false);304304+ cmd = snd_hdac_regmap_encode_verb(nid, AC_VERB_PARAMETERS) | parm;305305+ if (snd_hdac_regmap_read_raw_uncached(codec, cmd, &val) < 0)306306+ return -1;309307 return val;310308}311309EXPORT_SYMBOL_GPL(snd_hdac_read_parm_uncached);
+50-10
sound/hda/hdac_i915.c
···2020#include <sound/core.h>2121#include <sound/hdaudio.h>2222#include <sound/hda_i915.h>2323+#include <sound/hda_register.h>23242425static struct i915_audio_component *hdac_acomp;2526···9897}9998EXPORT_SYMBOL_GPL(snd_hdac_display_power);10099100100+#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \101101+ ((pci)->device == 0x0c0c) || \102102+ ((pci)->device == 0x0d0c) || \103103+ ((pci)->device == 0x160c))104104+101105/**102102- * snd_hdac_get_display_clk - Get CDCLK in kHz106106+ * snd_hdac_i915_set_bclk - Reprogram BCLK for HSW/BDW103107 * @bus: HDA core bus104108 *105105- * This function is supposed to be used only by a HD-audio controller106106- * driver that needs the interaction with i915 graphics.109109+ * Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK110110+ * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value)111111+ * are used to convert CDClk (Core Display Clock) to 24MHz BCLK:112112+ * BCLK = CDCLK * M / N113113+ * The values will be lost when the display power well is disabled and need to114114+ * be restored to avoid abnormal playback speed.107115 *108108- * This function queries CDCLK value in kHz from the graphics driver and109109- * returns the value. 
A negative code is returned in error.116116+ * Call this function at initializing and changing power well, as well as117117+ * at ELD notifier for the hotplug.110118 */111111-int snd_hdac_get_display_clk(struct hdac_bus *bus)119119+void snd_hdac_i915_set_bclk(struct hdac_bus *bus)112120{113121 struct i915_audio_component *acomp = bus->audio_component;122122+ struct pci_dev *pci = to_pci_dev(bus->dev);123123+ int cdclk_freq;124124+ unsigned int bclk_m, bclk_n;114125115115- if (!acomp || !acomp->ops)116116- return -ENODEV;126126+ if (!acomp || !acomp->ops || !acomp->ops->get_cdclk_freq)127127+ return; /* only for i915 binding */128128+ if (!CONTROLLER_IN_GPU(pci))129129+ return; /* only HSW/BDW */117130118118- return acomp->ops->get_cdclk_freq(acomp->dev);131131+ cdclk_freq = acomp->ops->get_cdclk_freq(acomp->dev);132132+ switch (cdclk_freq) {133133+ case 337500:134134+ bclk_m = 16;135135+ bclk_n = 225;136136+ break;137137+138138+ case 450000:139139+ default: /* default CDCLK 450MHz */140140+ bclk_m = 4;141141+ bclk_n = 75;142142+ break;143143+144144+ case 540000:145145+ bclk_m = 4;146146+ bclk_n = 90;147147+ break;148148+149149+ case 675000:150150+ bclk_m = 8;151151+ bclk_n = 225;152152+ break;153153+ }154154+155155+ snd_hdac_chip_writew(bus, HSW_EM4, bclk_m);156156+ snd_hdac_chip_writew(bus, HSW_EM5, bclk_n);119157}120120-EXPORT_SYMBOL_GPL(snd_hdac_get_display_clk);158158+EXPORT_SYMBOL_GPL(snd_hdac_i915_set_bclk);121159122160/* There is a fixed mapping between audio pin node and display port123161 * on current Intel platforms:
+28-12
sound/hda/hdac_regmap.c
···453453EXPORT_SYMBOL_GPL(snd_hdac_regmap_write_raw);454454455455static int reg_raw_read(struct hdac_device *codec, unsigned int reg,456456- unsigned int *val)456456+ unsigned int *val, bool uncached)457457{458458- if (!codec->regmap)458458+ if (uncached || !codec->regmap)459459 return hda_reg_read(codec, reg, val);460460 else461461 return regmap_read(codec->regmap, reg, val);462462+}463463+464464+static int __snd_hdac_regmap_read_raw(struct hdac_device *codec,465465+ unsigned int reg, unsigned int *val,466466+ bool uncached)467467+{468468+ int err;469469+470470+ err = reg_raw_read(codec, reg, val, uncached);471471+ if (err == -EAGAIN) {472472+ err = snd_hdac_power_up_pm(codec);473473+ if (!err)474474+ err = reg_raw_read(codec, reg, val, uncached);475475+ snd_hdac_power_down_pm(codec);476476+ }477477+ return err;462478}463479464480/**···488472int snd_hdac_regmap_read_raw(struct hdac_device *codec, unsigned int reg,489473 unsigned int *val)490474{491491- int err;492492-493493- err = reg_raw_read(codec, reg, val);494494- if (err == -EAGAIN) {495495- err = snd_hdac_power_up_pm(codec);496496- if (!err)497497- err = reg_raw_read(codec, reg, val);498498- snd_hdac_power_down_pm(codec);499499- }500500- return err;475475+ return __snd_hdac_regmap_read_raw(codec, reg, val, false);501476}502477EXPORT_SYMBOL_GPL(snd_hdac_regmap_read_raw);478478+479479+/* Works like snd_hdac_regmap_read_raw(), but this doesn't read from the480480+ * cache but always via hda verbs.481481+ */482482+int snd_hdac_regmap_read_raw_uncached(struct hdac_device *codec,483483+ unsigned int reg, unsigned int *val)484484+{485485+ return __snd_hdac_regmap_read_raw(codec, reg, val, true);486486+}503487504488/**505489 * snd_hdac_regmap_update_raw - update a pseudo register with power mgmt
+4-2
sound/pci/hda/hda_generic.c
···826826 bool allow_powerdown)827827{828828 hda_nid_t nid, changed = 0;829829- int i, state;829829+ int i, state, power;830830831831 for (i = 0; i < path->depth; i++) {832832 nid = path->path[i];···838838 state = AC_PWRST_D0;839839 else840840 state = AC_PWRST_D3;841841- if (!snd_hda_check_power_state(codec, nid, state)) {841841+ power = snd_hda_codec_read(codec, nid, 0,842842+ AC_VERB_GET_POWER_STATE, 0);843843+ if (power != (state | (state << 4))) {842844 snd_hda_codec_write(codec, nid, 0,843845 AC_VERB_SET_POWER_STATE, state);844846 changed = nid;
+7-52
sound/pci/hda/hda_intel.c
···857857#define azx_del_card_list(chip) /* NOP */858858#endif /* CONFIG_PM */859859860860-/* Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK861861- * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value)862862- * are used to convert CDClk (Core Display Clock) to 24MHz BCLK:863863- * BCLK = CDCLK * M / N864864- * The values will be lost when the display power well is disabled and need to865865- * be restored to avoid abnormal playback speed.866866- */867867-static void haswell_set_bclk(struct hda_intel *hda)868868-{869869- struct azx *chip = &hda->chip;870870- int cdclk_freq;871871- unsigned int bclk_m, bclk_n;872872-873873- if (!hda->need_i915_power)874874- return;875875-876876- cdclk_freq = snd_hdac_get_display_clk(azx_bus(chip));877877- switch (cdclk_freq) {878878- case 337500:879879- bclk_m = 16;880880- bclk_n = 225;881881- break;882882-883883- case 450000:884884- default: /* default CDCLK 450MHz */885885- bclk_m = 4;886886- bclk_n = 75;887887- break;888888-889889- case 540000:890890- bclk_m = 4;891891- bclk_n = 90;892892- break;893893-894894- case 675000:895895- bclk_m = 8;896896- bclk_n = 225;897897- break;898898- }899899-900900- azx_writew(chip, HSW_EM4, bclk_m);901901- azx_writew(chip, HSW_EM5, bclk_n);902902-}903903-904860#if defined(CONFIG_PM_SLEEP) || defined(SUPPORT_VGA_SWITCHEROO)905861/*906862 * power management···914958 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL915959 && hda->need_i915_power) {916960 snd_hdac_display_power(azx_bus(chip), true);917917- haswell_set_bclk(hda);961961+ snd_hdac_i915_set_bclk(azx_bus(chip));918962 }919963 if (chip->msi)920964 if (pci_enable_msi(pci) < 0)···10141058 bus = azx_bus(chip);10151059 if (hda->need_i915_power) {10161060 snd_hdac_display_power(bus, true);10171017- haswell_set_bclk(hda);10611061+ snd_hdac_i915_set_bclk(bus);10181062 } else {10191063 /* toggle codec wakeup bit for STATESTS read */10201064 snd_hdac_set_codec_wakeup(bus, true);···17521796 /* 
initialize chip */17531797 azx_init_pci(chip);1754179817551755- if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) {17561756- struct hda_intel *hda;17571757-17581758- hda = container_of(chip, struct hda_intel, chip);17591759- haswell_set_bclk(hda);17601760- }17991799+ if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)18001800+ snd_hdac_i915_set_bclk(bus);1761180117621802 hda_intel_init_chip(chip, (probe_only[dev] & 2) == 0);17631803···21832231 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },21842232 /* Broxton-P(Apollolake) */21852233 { PCI_DEVICE(0x8086, 0x5a98),22342234+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_BROXTON },22352235+ /* Broxton-T */22362236+ { PCI_DEVICE(0x8086, 0x1a98),21862237 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_BROXTON },21872238 /* Haswell */21882239 { PCI_DEVICE(0x8086, 0x0a0c),
+14
sound/pci/hda/patch_cirrus.c
···361361{362362 struct cs_spec *spec = codec->spec;363363 int err;364364+ int i;364365365366 err = snd_hda_parse_pin_defcfg(codec, &spec->gen.autocfg, NULL, 0);366367 if (err < 0)···370369 err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg);371370 if (err < 0)372371 return err;372372+373373+ /* keep the ADCs powered up when it's dynamically switchable */374374+ if (spec->gen.dyn_adc_switch) {375375+ unsigned int done = 0;376376+ for (i = 0; i < spec->gen.input_mux.num_items; i++) {377377+ int idx = spec->gen.dyn_adc_idx[i];378378+ if (done & (1 << idx))379379+ continue;380380+ snd_hda_gen_fix_pin_power(codec,381381+ spec->gen.adc_nids[idx]);382382+ done |= 1 << idx;383383+ }384384+ }373385374386 return 0;375387}
···307307extern int arizona_init_gpio(struct snd_soc_codec *codec);308308extern int arizona_init_mono(struct snd_soc_codec *codec);309309310310+extern int arizona_free_spk(struct snd_soc_codec *codec);311311+310312extern int arizona_init_dai(struct arizona_priv *priv, int dai);311313312314int arizona_set_output_mode(struct snd_soc_codec *codec, int output,
+13-4
sound/soc/codecs/cs35l32.c
···274274 if (of_property_read_u32(np, "cirrus,sdout-share", &val) >= 0)275275 pdata->sdout_share = val;276276277277- of_property_read_u32(np, "cirrus,boost-manager", &val);277277+ if (of_property_read_u32(np, "cirrus,boost-manager", &val))278278+ val = -1u;279279+278280 switch (val) {279281 case CS35L32_BOOST_MGR_AUTO:280282 case CS35L32_BOOST_MGR_AUTO_AUDIO:···284282 case CS35L32_BOOST_MGR_FIXED:285283 pdata->boost_mng = val;286284 break;285285+ case -1u:287286 default:288287 dev_err(&i2c_client->dev,289288 "Wrong cirrus,boost-manager DT value %d\n", val);290289 pdata->boost_mng = CS35L32_BOOST_MGR_BYPASS;291290 }292291293293- of_property_read_u32(np, "cirrus,sdout-datacfg", &val);292292+ if (of_property_read_u32(np, "cirrus,sdout-datacfg", &val))293293+ val = -1u;294294 switch (val) {295295 case CS35L32_DATA_CFG_LR_VP:296296 case CS35L32_DATA_CFG_LR_STAT:···300296 case CS35L32_DATA_CFG_LR_VPSTAT:301297 pdata->sdout_datacfg = val;302298 break;299299+ case -1u:303300 default:304301 dev_err(&i2c_client->dev,305302 "Wrong cirrus,sdout-datacfg DT value %d\n", val);306303 pdata->sdout_datacfg = CS35L32_DATA_CFG_LR;307304 }308305309309- of_property_read_u32(np, "cirrus,battery-threshold", &val);306306+ if (of_property_read_u32(np, "cirrus,battery-threshold", &val))307307+ val = -1u;310308 switch (val) {311309 case CS35L32_BATT_THRESH_3_1V:312310 case CS35L32_BATT_THRESH_3_2V:···316310 case CS35L32_BATT_THRESH_3_4V:317311 pdata->batt_thresh = val;318312 break;313313+ case -1u:319314 default:320315 dev_err(&i2c_client->dev,321316 "Wrong cirrus,battery-threshold DT value %d\n", val);322317 pdata->batt_thresh = CS35L32_BATT_THRESH_3_3V;323318 }324319325325- of_property_read_u32(np, "cirrus,battery-recovery", &val);320320+ if (of_property_read_u32(np, "cirrus,battery-recovery", &val))321321+ val = -1u;326322 switch (val) {327323 case CS35L32_BATT_RECOV_3_1V:328324 case CS35L32_BATT_RECOV_3_2V:···334326 case CS35L32_BATT_RECOV_3_6V:335327 pdata->batt_recov = val;336328 
break;329329+ case -1u:337330 default:338331 dev_err(&i2c_client->dev,339332 "Wrong cirrus,battery-recovery DT value %d\n", val);
···14201420}1421142114221422#ifdef CONFIG_PM14231423-static int hdmi_codec_resume(struct snd_soc_codec *codec)14231423+static int hdmi_codec_prepare(struct device *dev)14241424{14251425- struct hdac_ext_device *edev = snd_soc_codec_get_drvdata(codec);14251425+ struct hdac_ext_device *edev = to_hda_ext_device(dev);14261426+ struct hdac_device *hdac = &edev->hdac;14271427+14281428+ pm_runtime_get_sync(&edev->hdac.dev);14291429+14301430+ /*14311431+ * Power down afg.14321432+ * codec_read is preferred over codec_write to set the power state.14331433+ * This way verb is send to set the power state and response14341434+ * is received. So setting power state is ensured without using loop14351435+ * to read the state.14361436+ */14371437+ snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,14381438+ AC_PWRST_D3);14391439+14401440+ return 0;14411441+}14421442+14431443+static void hdmi_codec_complete(struct device *dev)14441444+{14451445+ struct hdac_ext_device *edev = to_hda_ext_device(dev);14261446 struct hdac_hdmi_priv *hdmi = edev->private_data;14271447 struct hdac_hdmi_pin *pin;14281448 struct hdac_device *hdac = &edev->hdac;14291429- struct hdac_bus *bus = hdac->bus;14301430- int err;14311431- unsigned long timeout;14491449+14501450+ /* Power up afg */14511451+ snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,14521452+ AC_PWRST_D0);1432145314331454 hdac_hdmi_skl_enable_all_pins(&edev->hdac);14341455 hdac_hdmi_skl_enable_dp12(&edev->hdac);14351435-14361436- /* Power up afg */14371437- if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)) {14381438-14391439- snd_hdac_codec_write(hdac, hdac->afg, 0,14401440- AC_VERB_SET_POWER_STATE, AC_PWRST_D0);14411441-14421442- /* Wait till power state is set to D0 */14431443- timeout = jiffies + msecs_to_jiffies(1000);14441444- while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)14451445- && time_before(jiffies, timeout)) {14461446- msleep(50);14471447- }14481448- }1449145614501457 
/*14511458 * As the ELD notify callback request is not entertained while the···14621455 list_for_each_entry(pin, &hdmi->pin_list, head)14631456 hdac_hdmi_present_sense(pin, 1);1464145714651465- /*14661466- * Codec power is turned ON during controller resume.14671467- * Turn it OFF here14681468- */14691469- err = snd_hdac_display_power(bus, false);14701470- if (err < 0) {14711471- dev_err(bus->dev,14721472- "Cannot turn OFF display power on i915, err: %d\n",14731473- err);14741474- return err;14751475- }14761476-14771477- return 0;14581458+ pm_runtime_put_sync(&edev->hdac.dev);14781459}14791460#else14801480-#define hdmi_codec_resume NULL14611461+#define hdmi_codec_prepare NULL14621462+#define hdmi_codec_complete NULL14811463#endif1482146414831465static struct snd_soc_codec_driver hdmi_hda_codec = {14841466 .probe = hdmi_codec_probe,14851467 .remove = hdmi_codec_remove,14861486- .resume = hdmi_codec_resume,14871468 .idle_bias_off = true,14881469};14891470···15561561 struct hdac_ext_device *edev = to_hda_ext_device(dev);15571562 struct hdac_device *hdac = &edev->hdac;15581563 struct hdac_bus *bus = hdac->bus;15591559- unsigned long timeout;15601564 int err;1561156515621566 dev_dbg(dev, "Enter: %s\n", __func__);···15641570 if (!bus)15651571 return 0;1566157215671567- /* Power down afg */15681568- if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)) {15691569- snd_hdac_codec_write(hdac, hdac->afg, 0,15701570- AC_VERB_SET_POWER_STATE, AC_PWRST_D3);15711571-15721572- /* Wait till power state is set to D3 */15731573- timeout = jiffies + msecs_to_jiffies(1000);15741574- while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)15751575- && time_before(jiffies, timeout)) {15761576-15771577- msleep(50);15781578- }15791579- }15801580-15731573+ /*15741574+ * Power down afg.15751575+ * codec_read is preferred over codec_write to set the power state.15761576+ * This way verb is send to set the power state and response15771577+ * is received. 
So setting power state is ensured without using loop15781578+ * to read the state.15791579+ */15801580+ snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,15811581+ AC_PWRST_D3);15811582 err = snd_hdac_display_power(bus, false);15821583 if (err < 0) {15831584 dev_err(bus->dev, "Cannot turn on display power on i915\n");···16051616 hdac_hdmi_skl_enable_dp12(&edev->hdac);1606161716071618 /* Power up afg */16081608- if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0))16091609- snd_hdac_codec_write(hdac, hdac->afg, 0,16101610- AC_VERB_SET_POWER_STATE, AC_PWRST_D0);16191619+ snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,16201620+ AC_PWRST_D0);1611162116121622 return 0;16131623}···1617162916181630static const struct dev_pm_ops hdac_hdmi_pm = {16191631 SET_RUNTIME_PM_OPS(hdac_hdmi_runtime_suspend, hdac_hdmi_runtime_resume, NULL)16321632+ .prepare = hdmi_codec_prepare,16331633+ .complete = hdmi_codec_complete,16201634};1621163516221636static const struct hda_device_id hdmi_list[] = {
+71-55
sound/soc/codecs/nau8825.c
···343343 SND_SOC_DAPM_SUPPLY("ADC Power", NAU8825_REG_ANALOG_ADC_2, 6, 0, NULL,344344 0),345345346346- /* ADC for button press detection */347347- SND_SOC_DAPM_ADC("SAR", NULL, NAU8825_REG_SAR_CTRL,348348- NAU8825_SAR_ADC_EN_SFT, 0),346346+ /* ADC for button press detection. A dapm supply widget is used to347347+ * prevent dapm_power_widgets keeping the codec at SND_SOC_BIAS_ON348348+ * during suspend.349349+ */350350+ SND_SOC_DAPM_SUPPLY("SAR", NAU8825_REG_SAR_CTRL,351351+ NAU8825_SAR_ADC_EN_SFT, 0, NULL, 0),349352350353 SND_SOC_DAPM_PGA_S("ADACL", 2, NAU8825_REG_RDAC, 12, 0, NULL, 0),351354 SND_SOC_DAPM_PGA_S("ADACR", 2, NAU8825_REG_RDAC, 13, 0, NULL, 0),···610607611608static void nau8825_restart_jack_detection(struct regmap *regmap)612609{610610+ /* Chip needs one FSCLK cycle in order to generate interrupts,611611+ * as we cannot guarantee one will be provided by the system. Turning612612+ * master mode on then off enables us to generate that FSCLK cycle613613+ * with a minimum of contention on the clock bus.614614+ */615615+ regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,616616+ NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);617617+ regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,618618+ NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);619619+613620 /* this will restart the entire jack detection process including MIC/GND614621 * switching and create interrupts. 
We have to go from 0 to 1 and back615622 * to 0 to restart.···741728 struct regmap *regmap = nau8825->regmap;742729 int active_irq, clear_irq = 0, event = 0, event_mask = 0;743730744744- regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq);731731+ if (regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq)) {732732+ dev_err(nau8825->dev, "failed to read irq status\n");733733+ return IRQ_NONE;734734+ }745735746736 if ((active_irq & NAU8825_JACK_EJECTION_IRQ_MASK) ==747737 NAU8825_JACK_EJECTION_DETECTED) {···11571141 return ret;11581142 }11591143 }11601160-11611161- ret = regcache_sync(nau8825->regmap);11621162- if (ret) {11631163- dev_err(codec->dev,11641164- "Failed to sync cache: %d\n", ret);11651165- return ret;11661166- }11671144 }11681168-11691145 break;1170114611711147 case SND_SOC_BIAS_OFF:11721148 if (nau8825->mclk_freq)11731149 clk_disable_unprepare(nau8825->mclk);11741174-11751175- regcache_mark_dirty(nau8825->regmap);11761150 break;11771151 }11781152 return 0;11791153}11541154+11551155+#ifdef CONFIG_PM11561156+static int nau8825_suspend(struct snd_soc_codec *codec)11571157+{11581158+ struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);11591159+11601160+ disable_irq(nau8825->irq);11611161+ regcache_cache_only(nau8825->regmap, true);11621162+ regcache_mark_dirty(nau8825->regmap);11631163+11641164+ return 0;11651165+}11661166+11671167+static int nau8825_resume(struct snd_soc_codec *codec)11681168+{11691169+ struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);11701170+11711171+ /* The chip may lose power and reset in S3. regcache_sync restores11721172+ * register values including configurations for sysclk, irq, and11731173+ * jack/button detection.11741174+ */11751175+ regcache_cache_only(nau8825->regmap, false);11761176+ regcache_sync(nau8825->regmap);11771177+11781178+ /* Check the jack plug status directly. 
If the headset is unplugged11791179+ * during S3 when the chip has no power, there will be no jack11801180+ * detection irq even after the nau8825_restart_jack_detection below,11811181+ * because the chip just thinks no headset has ever been plugged in.11821182+ */11831183+ if (!nau8825_is_jack_inserted(nau8825->regmap)) {11841184+ nau8825_eject_jack(nau8825);11851185+ snd_soc_jack_report(nau8825->jack, 0, SND_JACK_HEADSET);11861186+ }11871187+11881188+ enable_irq(nau8825->irq);11891189+11901190+ /* Run jack detection to check the type (OMTP or CTIA) of the headset11911191+ * if there is one. This handles the case where a different type of11921192+ * headset is plugged in during S3. This triggers an IRQ iff a headset11931193+ * is already plugged in.11941194+ */11951195+ nau8825_restart_jack_detection(nau8825->regmap);11961196+11971197+ return 0;11981198+}11991199+#else12001200+#define nau8825_suspend NULL12011201+#define nau8825_resume NULL12021202+#endif1180120311811204static struct snd_soc_codec_driver nau8825_codec_driver = {11821205 .probe = nau8825_codec_probe,···12231168 .set_pll = nau8825_set_pll,12241169 .set_bias_level = nau8825_set_bias_level,12251170 .suspend_bias_off = true,11711171+ .suspend = nau8825_suspend,11721172+ .resume = nau8825_resume,1226117312271174 .controls = nau8825_controls,12281175 .num_controls = ARRAY_SIZE(nau8825_controls),···13341277 regmap_update_bits(regmap, NAU8825_REG_ENA_CTRL,13351278 NAU8825_ENABLE_DACR, NAU8825_ENABLE_DACR);1336127913371337- /* Chip needs one FSCLK cycle in order to generate interrupts,13381338- * as we cannot guarantee one will be provided by the system. 
Turning13391339- * master mode on then off enables us to generate that FSCLK cycle13401340- * with a minimum of contention on the clock bus.13411341- */13421342- regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,13431343- NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);13441344- regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,13451345- NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);13461346-13471280 ret = devm_request_threaded_irq(nau8825->dev, nau8825->irq, NULL,13481281 nau8825_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT,13491282 "nau8825", nau8825);···14011354 return 0;14021355}1403135614041404-#ifdef CONFIG_PM_SLEEP14051405-static int nau8825_suspend(struct device *dev)14061406-{14071407- struct i2c_client *client = to_i2c_client(dev);14081408- struct nau8825 *nau8825 = dev_get_drvdata(dev);14091409-14101410- disable_irq(client->irq);14111411- regcache_cache_only(nau8825->regmap, true);14121412- regcache_mark_dirty(nau8825->regmap);14131413-14141414- return 0;14151415-}14161416-14171417-static int nau8825_resume(struct device *dev)14181418-{14191419- struct i2c_client *client = to_i2c_client(dev);14201420- struct nau8825 *nau8825 = dev_get_drvdata(dev);14211421-14221422- regcache_cache_only(nau8825->regmap, false);14231423- regcache_sync(nau8825->regmap);14241424- enable_irq(client->irq);14251425-14261426- return 0;14271427-}14281428-#endif14291429-14301430-static const struct dev_pm_ops nau8825_pm = {14311431- SET_SYSTEM_SLEEP_PM_OPS(nau8825_suspend, nau8825_resume)14321432-};14331433-14341357static const struct i2c_device_id nau8825_i2c_ids[] = {14351358 { "nau8825", 0 },14361359 { }···14271410 .name = "nau8825",14281411 .of_match_table = of_match_ptr(nau8825_of_ids),14291412 .acpi_match_table = ACPI_PTR(nau8825_acpi_match),14301430- .pm = &nau8825_pm,14311413 },14321414 .probe = nau8825_i2c_probe,14331415 .remove = nau8825_i2c_remove,
+1-1
sound/soc/codecs/rt5640.c
···359359360360/* Interface data select */361361static const char * const rt5640_data_select[] = {362362- "Normal", "left copy to right", "right copy to left", "Swap"};362362+ "Normal", "Swap", "left copy to right", "right copy to left"};363363364364static SOC_ENUM_SINGLE_DECL(rt5640_if1_dac_enum, RT5640_DIG_INF_DATA,365365 RT5640_IF1_DAC_SEL_SFT, rt5640_data_select);
···13451345 return 0;1346134613471347 /* wait for pause to complete before we reset the stream */13481348- while (stream->running && tries--)13481348+ while (stream->running && --tries)13491349 msleep(1);13501350 if (!tries) {13511351 dev_err(hsw->dev, "error: reset stream %d still running\n",
···222222 struct hdac_ext_bus *ebus = pci_get_drvdata(pci);223223 struct skl *skl = ebus_to_skl(ebus);224224 struct hdac_bus *bus = ebus_to_hbus(ebus);225225+ int ret = 0;225226226227 /*227228 * Do not suspend if streams which are marked ignore suspend are···233232 enable_irq_wake(bus->irq);234233 pci_save_state(pci);235234 pci_disable_device(pci);236236- return 0;237235 } else {238238- return _skl_suspend(ebus);236236+ ret = _skl_suspend(ebus);237237+ if (ret < 0)238238+ return ret;239239 }240240+241241+ if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {242242+ ret = snd_hdac_display_power(bus, false);243243+ if (ret < 0)244244+ dev_err(bus->dev,245245+ "Cannot turn OFF display power on i915\n");246246+ }247247+248248+ return ret;240249}241250242251static int skl_resume(struct device *dev)···327316328317 if (bus->irq >= 0)329318 free_irq(bus->irq, (void *)bus);330330- if (bus->remap_addr)331331- iounmap(bus->remap_addr);332332-333319 snd_hdac_bus_free_stream_pages(bus);334320 snd_hdac_stream_free_all(ebus);335321 snd_hdac_link_free_all(ebus);322322+323323+ if (bus->remap_addr)324324+ iounmap(bus->remap_addr);325325+336326 pci_release_regions(skl->pci);337327 pci_disable_device(skl->pci);338328339329 snd_hdac_ext_bus_exit(ebus);340330331331+ if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))332332+ snd_hdac_i915_exit(&ebus->bus);341333 return 0;342334}343335···733719 if (skl->tplg)734720 release_firmware(skl->tplg);735721736736- if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))737737- snd_hdac_i915_exit(&ebus->bus);738738-739722 if (pci_dev_run_wake(pci))740723 pm_runtime_get_noresume(&pci->dev);741741- pci_dev_put(pci);724724+725725+ /* codec removal, invoke bus_device_remove */726726+ snd_hdac_ext_bus_device_remove(ebus);727727+742728 skl_platform_unregister(&pci->dev);743729 skl_free_dsp(skl);744730 skl_machine_device_unregister(skl);
+7
sound/soc/soc-dapm.c
···21882188 int count = 0;21892189 char *state = "not set";2190219021912191+ /* card won't be set for the dummy component, as a spot fix21922192+ * we're checking for that case specifically here but in future21932193+ * we will ensure that the dummy component looks like others.21942194+ */21952195+ if (!cmpnt->card)21962196+ return 0;21972197+21912198 list_for_each_entry(w, &cmpnt->card->widgets, list) {21922199 if (w->dapm != dapm)21932200 continue;
+29-9
tools/objtool/Documentation/stack-validation.txt
···299299Errors in .c files300300------------------301301302302-If you're getting an objtool error in a compiled .c file, chances are303303-the file uses an asm() statement which has a "call" instruction. An304304-asm() statement with a call instruction must declare the use of the305305-stack pointer in its output operand. For example, on x86_64:302302+1. c_file.o: warning: objtool: funcA() falls through to next function funcB()306303307307- register void *__sp asm("rsp");308308- asm volatile("call func" : "+r" (__sp));304304+ This means that funcA() doesn't end with a return instruction or an305305+ unconditional jump, and that objtool has determined that the function306306+ can fall through into the next function. There could be different307307+ reasons for this:309308310310-Otherwise the stack frame may not get created before the call.309309+ 1) funcA()'s last instruction is a call to a "noreturn" function like310310+ panic(). In this case the noreturn function needs to be added to311311+ objtool's hard-coded global_noreturns array. Feel free to bug the312312+ objtool maintainer, or you can submit a patch.311313312312-Another possible cause for errors in C code is if the Makefile removes313313--fno-omit-frame-pointer or adds -fomit-frame-pointer to the gcc options.314314+ 2) funcA() uses the unreachable() annotation in a section of code315315+ that is actually reachable.316316+317317+ 3) If funcA() calls an inline function, the object code for funcA()318318+ might be corrupt due to a gcc bug. For more details, see:319319+ https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70646320320+321321+2. If you're getting any other objtool error in a compiled .c file, it322322+ may be because the file uses an asm() statement which has a "call"323323+ instruction. An asm() statement with a call instruction must declare324324+ the use of the stack pointer in its output operand. For example, on325325+ x86_64:326326+327327+ register void *__sp asm("rsp");328328+ asm volatile("call func" : "+r" (__sp));329329+330330+ Otherwise the stack frame may not get created before the call.331331+332332+3. Another possible cause for errors in C code is if the Makefile removes333333+ -fno-omit-frame-pointer or adds -fomit-frame-pointer to the gcc options.314334315335Also see the above section for .S file errors for more information on what316336the individual error messages mean.
+72-25
tools/objtool/builtin-check.c
···5454 struct symbol *call_dest;5555 struct instruction *jump_dest;5656 struct list_head alts;5757+ struct symbol *func;5758};58595960struct alternative {···6766 struct list_head insn_list;6867 DECLARE_HASHTABLE(insn_hash, 16);6968 struct section *rodata, *whitelist;6969+ bool ignore_unreachables, c_file;7070};71717272const char *objname;···230228 }231229 }232230233233- if (insn->type == INSN_JUMP_DYNAMIC)231231+ if (insn->type == INSN_JUMP_DYNAMIC && list_empty(&insn->alts))234232 /* sibling call */235233 return 0;236234 }···250248static int decode_instructions(struct objtool_file *file)251249{252250 struct section *sec;251251+ struct symbol *func;253252 unsigned long offset;254253 struct instruction *insn;255254 int ret;···283280284281 hash_add(file->insn_hash, &insn->hash, insn->offset);285282 list_add_tail(&insn->list, &file->insn_list);283283+ }284284+285285+ list_for_each_entry(func, &sec->symbol_list, list) {286286+ if (func->type != STT_FUNC)287287+ continue;288288+289289+ if (!find_insn(file, sec, func->offset)) {290290+ WARN("%s(): can't find starting instruction",291291+ func->name);292292+ return -1;293293+ }294294+295295+ func_for_each_insn(file, func, insn)296296+ if (!insn->func)297297+ insn->func = func;286298 }287299 }288300···682664 text_rela->addend);683665684666 /*685685- * TODO: Document where this is needed, or get rid of it.686686- *687667 * rare case: jmpq *[addr](%rip)668668+ *669669+ * This check is for a rare gcc quirk, currently only seen in670670+ * three driver functions in the kernel, only with certain671671+ * obscure non-distro configs.672672+ *673673+ * As part of an optimization, gcc makes a copy of an existing674674+ * switch jump table, modifies it, and then hard-codes the jump675675+ * (albeit with an indirect jump) to use a single entry in the676676+ * table. The rest of the jump table and some of its jump677677+ * targets remain as dead code.678678+ *679679+ * In such a case we can just crudely ignore all unreachable680680+ * instruction warnings for the entire object file. Ideally we681681+ * would just ignore them for the function, but that would682682+ * require redesigning the code quite a bit. And honestly683683+ * that's just not worth doing: unreachable instruction684684+ * warnings are of questionable value anyway, and this is such685685+ * a rare issue.686686+ *687687+ * kbuild reports:688688+ * - https://lkml.kernel.org/r/201603231906.LWcVUpxm%25fengguang.wu@intel.com689689+ * - https://lkml.kernel.org/r/201603271114.K9i45biy%25fengguang.wu@intel.com690690+ * - https://lkml.kernel.org/r/201603291058.zuJ6ben1%25fengguang.wu@intel.com691691+ *692692+ * gcc bug:693693+ * - https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70604688694 */689689- if (!rodata_rela)695695+ if (!rodata_rela) {690696 rodata_rela = find_rela_by_dest(file->rodata,691697 text_rela->addend + 4);698698+ if (rodata_rela)699699+ file->ignore_unreachables = true;700700+ }692701693702 if (!rodata_rela)694703 continue;···776731static int decode_sections(struct objtool_file *file)777732{778733 int ret;779779-780780- file->whitelist = find_section_by_name(file->elf, "__func_stack_frame_non_standard");781781- file->rodata = find_section_by_name(file->elf, ".rodata");782734783735 ret = decode_instructions(file);784736 if (ret)···841799 struct alternative *alt;842800 struct instruction *insn;843801 struct section *sec;802802+ struct symbol *func = NULL;844803 unsigned char state;845804 int ret;846805···856813 }857814858815 while (1) {816816+ if (file->c_file && insn->func) {817817+ if (func && func != insn->func) {818818+ WARN("%s() falls through to next function %s()",819819+ func->name, insn->func->name);820820+ return 1;821821+ }822822+823823+ func = insn->func;824824+ }825825+859826 if (insn->visited) {860827 if (frame_state(insn->state) != frame_state(state)) {861828 WARN_FUNC("frame pointer state mismatch",···875822876823 return 0;877824 }878878-879879- /*880880- * Catch a rare case where a noreturn function falls through to881881- * the next function.882882- */883883- if (is_fentry_call(insn) && (state & STATE_FENTRY))884884- return 0;885825886826 insn->visited = true;887827 insn->state = state;···10811035 continue;1082103610831037 insn = find_insn(file, sec, func->offset);10841084- if (!insn) {10851085- WARN("%s(): can't find starting instruction",10861086- func->name);10871087- warnings++;10381038+ if (!insn)10881039 continue;10891089- }1090104010911041 ret = validate_branch(file, insn, 0);10921042 warnings += ret;···10981056 if (insn->visited)10991057 continue;1100105811011101- if (!ignore_unreachable_insn(func, insn) &&11021102- !warnings) {11031103- WARN_FUNC("function has unreachable instruction", insn->sec, insn->offset);11041104- warnings++;11051105- }11061106-11071059 insn->visited = true;10601060+10611061+ if (file->ignore_unreachables || warnings ||10621062+ ignore_unreachable_insn(func, insn))10631063+ continue;10641064+10651065+ WARN_FUNC("function has unreachable instruction", insn->sec, insn->offset);10661066+ warnings++;11081067 }11091068 }11101069 }···1176113311771134 INIT_LIST_HEAD(&file.insn_list);11781135 hash_init(file.insn_hash);11361136+ file.whitelist = find_section_by_name(file.elf, "__func_stack_frame_non_standard");11371137+ file.rodata = find_section_by_name(file.elf, ".rodata");11381138+ file.ignore_unreachables = false;11391139+ file.c_file = find_section_by_name(file.elf, ".comment");1179114011801141 ret = decode_sections(&file);11811142 if (ret < 0)