···
 Optional properties:
 - ti,hwmods: Name of the hwmods associated to the eDMA CC
 - ti,edma-memcpy-channels: List of channels allocated to be used for memcpy, iow
-		these channels will be SW triggered channels. The list must
-		contain 16 bits numbers, see example.
+		these channels will be SW triggered channels. See example.
 - ti,edma-reserved-slot-ranges: PaRAM slot ranges which should not be used by
 		the driver, they are allocated to be used by for example the
 		DSP. See example.
···
 	ti,tptcs = <&edma_tptc0 7>, <&edma_tptc1 7>, <&edma_tptc2 0>;

 	/* Channel 20 and 21 is allocated for memcpy */
-	ti,edma-memcpy-channels = /bits/ 16 <20 21>;
-	/* The following PaRAM slots are reserved: 35-45 and 100-110 */
-	ti,edma-reserved-slot-ranges = /bits/ 16 <35 10>,
-				       /bits/ 16 <100 10>;
+	ti,edma-memcpy-channels = <20 21>;
+	/* The following PaRAM slots are reserved: 35-44 and 100-109 */
+	ti,edma-reserved-slot-ranges = <35 10>, <100 10>;
 };

 edma_tptc0: tptc@49800000 {
···
 		0 = active high
 		1 = active low

+Optional properties:
+- little-endian : GPIO registers are used as little endian. If not
+		  present registers are used as big endian by default.
+
 Example:

 gpio0: gpio@1100 {
···
 Required subnode-properties:
 	- label: Descriptive name of the key.
 	- linux,code: Keycode to emit.
-	- channel: Channel this key is attached to, mut be 0 or 1.
+	- channel: Channel this key is attached to, must be 0 or 1.
 	- voltage: Voltage in µV at lradc input when this key is pressed.

 Example:
···
 as RedBoot.

 The partition table should be a subnode of the mtd node and should be named
-'partitions'. Partitions are defined in subnodes of the partitions node.
+'partitions'. This node should have the following property:
+- compatible : (required) must be "fixed-partitions"
+Partitions are then defined in subnodes of the partitions node.

 For backwards compatibility partitions as direct subnodes of the mtd device are
 supported. This use is discouraged.
···
 flash@0 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <1>;
 		#size-cells = <1>;
···
 flash@1 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <1>;
 		#size-cells = <2>;
···
 flash@2 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <2>;
 		#size-cells = <2>;
Documentation/networking/e100.txt (-14)
···
 If an issue is identified with the released source code on the supported
 kernel with a supported adapter, email the specific information related to the
 issue to e1000-devel@lists.sourceforge.net.
-
-
-License
-=======
-
-This software program is released under the terms of a license agreement
-between you ('Licensee') and Intel. Do not use or load this software or any
-associated materials (collectively, the 'Software') until you have carefully
-read the full terms and conditions of the file COPYING located in this software
-package. By loading or using the Software, you agree to the terms of this
-Agreement. If you do not agree with the terms of this Agreement, do not install
-or use the Software.
-
-* Other names and brands may be claimed as the property of others.
MAINTAINERS (+18 -2)
···
 CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
 M:	Johannes Weiner <hannes@cmpxchg.org>
 M:	Michal Hocko <mhocko@kernel.org>
+M:	Vladimir Davydov <vdavydov@virtuozzo.com>
 L:	cgroups@vger.kernel.org
 L:	linux-mm@kvack.org
 S:	Maintained
···
 R:	Shannon Nelson <shannon.nelson@intel.com>
 R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
 R:	Don Skidmore <donald.c.skidmore@intel.com>
-R:	Matthew Vick <matthew.vick@intel.com>
+R:	Bruce Allan <bruce.w.allan@intel.com>
 R:	John Ronciak <john.ronciak@intel.com>
 R:	Mitch Williams <mitch.a.williams@intel.com>
 L:	intel-wired-lan@lists.osuosl.org
···
 F:	kernel/delayacct.c

 PERFORMANCE EVENTS SUBSYSTEM
-M:	Peter Zijlstra <a.p.zijlstra@chello.nl>
+M:	Peter Zijlstra <peterz@infradead.org>
 M:	Ingo Molnar <mingo@redhat.com>
 M:	Arnaldo Carvalho de Melo <acme@kernel.org>
 L:	linux-kernel@vger.kernel.org
···
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/pinctrl/samsung/
+
+PIN CONTROLLER - SINGLE
+M:	Tony Lindgren <tony@atomide.com>
+M:	Haojian Zhuang <haojian.zhuang@linaro.org>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	linux-omap@vger.kernel.org
+S:	Maintained
+F:	drivers/pinctrl/pinctrl-single.c

 PIN CONTROLLER - ST SPEAR
 M:	Viresh Kumar <vireshk@kernel.org>
···
 F:	drivers/rpmsg/
 F:	Documentation/rpmsg.txt
 F:	include/linux/rpmsg.h
+
+RENESAS ETHERNET DRIVERS
+R:	Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
+L:	netdev@vger.kernel.org
+L:	linux-sh@vger.kernel.org
+F:	drivers/net/ethernet/renesas/
+F:	include/linux/sh_eth.h

 RESET CONTROLLER FRAMEWORK
 M:	Philipp Zabel <p.zabel@pengutronix.de>
···
 	  However some customers have peripherals mapped at this addr, so
 	  Linux needs to be scooted a bit.
 	  If you don't know what the above means, leave this setting alone.
+	  This needs to match memory start address specified in Device Tree

 config HIGHMEM
 	bool "High Memory Support"
···

 	memory {
 		device_type = "memory";
-		reg = <0x0 0x80000000 0x0 0x40000000	/* 1 GB low mem */
+		/* CONFIG_LINUX_LINK_BASE needs to match low mem start */
+		reg = <0x0 0x80000000 0x0 0x20000000	/* 512 MB low mem */
 		       0x1 0x00000000 0x0 0x40000000>;	/* 1 GB highmem */
 	};

arch/arc/include/asm/mach_desc.h (+2 -2)
···
  * @dt_compat:		Array of device tree 'compatible' strings
  * 			(XXX: although only 1st entry is looked at)
  * @init_early:		Very early callback [called from setup_arch()]
- * @init_cpu_smp:	for each CPU as it is coming up (SMP as well as UP)
+ * @init_per_cpu:	for each CPU as it is coming up (SMP as well as UP)
  * 			[(M):init_IRQ(), (o):start_kernel_secondary()]
  * @init_machine:	arch initcall level callback (e.g. populate static
  * 			platform devices or parse Devicetree)
···
 	const char		**dt_compat;
 	void			(*init_early)(void);
 #ifdef CONFIG_SMP
-	void			(*init_cpu_smp)(unsigned int);
+	void			(*init_per_cpu)(unsigned int);
 #endif
 	void			(*init_machine)(void);
 	void			(*init_late)(void);
arch/arc/include/asm/smp.h (+2 -2)
···
  * @init_early_smp:	A SMP specific h/w block can init itself
  * 			Could be common across platforms so not covered by
  * 			mach_desc->init_early()
- * @init_irq_cpu:	Called for each core so SMP h/w block driver can do
+ * @init_per_cpu:	Called for each core so SMP h/w block driver can do
  * 			any needed setup per cpu (e.g. IPI request)
  * @cpu_kick:		For Master to kickstart a cpu (optionally at a PC)
  * @ipi_send:		To send IPI to a @cpu
···
 struct plat_smp_ops {
 	const char	*info;
 	void		(*init_early_smp)(void);
-	void		(*init_irq_cpu)(int cpu);
+	void		(*init_per_cpu)(int cpu);
 	void		(*cpu_kick)(int cpu, unsigned long pc);
 	void		(*ipi_send)(int cpu);
 	void		(*ipi_clear)(int irq);
···
 static int arcv2_irq_map(struct irq_domain *d, unsigned int irq,
 			 irq_hw_number_t hw)
 {
-	if (irq == TIMER0_IRQ || irq == IPI_IRQ)
+	/*
+	 * core intc IRQs [16, 23]:
+	 * Statically assigned always private-per-core (Timers, WDT, IPI, PCT)
+	 */
+	if (hw < 24) {
+		/*
+		 * A subsequent request_percpu_irq() fails if percpu_devid is
+		 * not set. That in turns sets NOAUTOEN, meaning each core needs
+		 * to call enable_percpu_irq()
+		 */
+		irq_set_percpu_devid(irq);
 		irq_set_chip_and_handler(irq, &arcv2_irq_chip, handle_percpu_irq);
-	else
+	} else {
 		irq_set_chip_and_handler(irq, &arcv2_irq_chip, handle_level_irq);
+	}

 	return 0;
 }
arch/arc/kernel/irq.c (+24 -9)
···

 #ifdef CONFIG_SMP
 	/* a SMP H/w block could do IPI IRQ request here */
-	if (plat_smp_ops.init_irq_cpu)
-		plat_smp_ops.init_irq_cpu(smp_processor_id());
+	if (plat_smp_ops.init_per_cpu)
+		plat_smp_ops.init_per_cpu(smp_processor_id());

-	if (machine_desc->init_cpu_smp)
-		machine_desc->init_cpu_smp(smp_processor_id());
+	if (machine_desc->init_per_cpu)
+		machine_desc->init_per_cpu(smp_processor_id());
 #endif
 }
···
 	set_irq_regs(old_regs);
 }

+/*
+ * API called for requesting percpu interrupts - called by each CPU
+ *  - For boot CPU, actually request the IRQ with genirq core + enables
+ *  - For subsequent callers only enable called locally
+ *
+ * Relies on being called by boot cpu first (i.e. request called ahead) of
+ * any enable as expected by genirq. Hence Suitable only for TIMER, IPI
+ * which are guaranteed to be setup on boot core first.
+ * Late probed peripherals such as perf can't use this as there no guarantee
+ * of being called on boot CPU first.
+ */
+
 void arc_request_percpu_irq(int irq, int cpu,
 			    irqreturn_t (*isr)(int irq, void *dev),
 			    const char *irq_nm,
···
 	if (!cpu) {
 		int rc;

+#ifdef CONFIG_ISA_ARCOMPACT
 		/*
-		 * These 2 calls are essential to making percpu IRQ APIs work
-		 * Ideally these details could be hidden in irq chip map function
-		 * but the issue is IPIs IRQs being static (non-DT) and platform
-		 * specific, so we can't identify them there.
+		 * A subsequent request_percpu_irq() fails if percpu_devid is
+		 * not set. That in turns sets NOAUTOEN, meaning each core needs
+		 * to call enable_percpu_irq()
+		 *
+		 * For ARCv2, this is done in irq map function since we know
+		 * which irqs are strictly per cpu
 		 */
 		irq_set_percpu_devid(irq);
-		irq_modify_status(irq, IRQ_NOAUTOEN, 0);	/* @irq, @clr, @set */
+#endif

 		rc = request_percpu_irq(irq, isr, irq_nm, percpu_dev);
 		if (rc)
···

 #endif	/* CONFIG_ISA_ARCV2 */

-void arc_cpu_pmu_irq_init(void)
+static void arc_cpu_pmu_irq_init(void *data)
 {
-	struct arc_pmu_cpu *pmu_cpu = this_cpu_ptr(&arc_pmu_cpu);
+	int irq = *(int *)data;

-	arc_request_percpu_irq(arc_pmu->irq, smp_processor_id(), arc_pmu_intr,
-			       "ARC perf counters", pmu_cpu);
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);

 	/* Clear all pending interrupt flags */
 	write_aux_reg(ARC_REG_PCT_INT_ACT, 0xffffffff);
···

 	if (has_interrupts) {
 		int irq = platform_get_irq(pdev, 0);
-		unsigned long flags;

 		if (irq < 0) {
 			pr_err("Cannot get IRQ number for the platform\n");
···

 		arc_pmu->irq = irq;

-		/*
-		 * arc_cpu_pmu_irq_init() needs to be called on all cores for
-		 * their respective local PMU.
-		 * However we use opencoded on_each_cpu() to ensure it is called
-		 * on core0 first, so that arc_request_percpu_irq() sets up
-		 * AUTOEN etc. Otherwise enable_percpu_irq() fails to enable
-		 * perf IRQ on non master cores.
-		 * see arc_request_percpu_irq()
-		 */
-		preempt_disable();
-		local_irq_save(flags);
-		arc_cpu_pmu_irq_init();
-		local_irq_restore(flags);
-		smp_call_function((smp_call_func_t)arc_cpu_pmu_irq_init, 0, 1);
-		preempt_enable();
+		/* intc map function ensures irq_set_percpu_devid() called */
+		request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
+				   this_cpu_ptr(&arc_pmu_cpu));

-		/* Clean all pending interrupt flags */
-		write_aux_reg(ARC_REG_PCT_INT_ACT, 0xffffffff);
+		on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
+
 	} else
 		arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;

···
 	pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);

 	/* Some SMP H/w setup - for each cpu */
-	if (plat_smp_ops.init_irq_cpu)
-		plat_smp_ops.init_irq_cpu(cpu);
+	if (plat_smp_ops.init_per_cpu)
+		plat_smp_ops.init_per_cpu(cpu);

-	if (machine_desc->init_cpu_smp)
-		machine_desc->init_cpu_smp(cpu);
+	if (machine_desc->init_per_cpu)
+		machine_desc->init_per_cpu(cpu);

 	arc_local_timer_setup();

arch/arc/kernel/unwind.c (+35 -18)
···

 static unsigned long read_pointer(const u8 **pLoc,
 				  const void *end, signed ptrType);
+static void init_unwind_hdr(struct unwind_table *table,
+			    void *(*alloc) (unsigned long));
+
+/*
+ * wrappers for header alloc (vs. calling one vs. other at call site)
+ * to elide section mismatches warnings
+ */
+static void *__init unw_hdr_alloc_early(unsigned long sz)
+{
+	return __alloc_bootmem_nopanic(sz, sizeof(unsigned int),
+				       MAX_DMA_ADDRESS);
+}
+
+static void *unw_hdr_alloc(unsigned long sz)
+{
+	return kmalloc(sz, GFP_KERNEL);
+}

 static void init_unwind_table(struct unwind_table *table, const char *name,
 			      const void *core_start, unsigned long core_size,
···
 			  __start_unwind, __end_unwind - __start_unwind,
 			  NULL, 0);
 	/*__start_unwind_hdr, __end_unwind_hdr - __start_unwind_hdr);*/
+
+	init_unwind_hdr(&root_table, unw_hdr_alloc_early);
 }

 static const u32 bad_cie, not_fde;
···
 	e2->fde = v;
 }

-static void __init setup_unwind_table(struct unwind_table *table,
-				      void *(*alloc) (unsigned long))
+static void init_unwind_hdr(struct unwind_table *table,
+			    void *(*alloc) (unsigned long))
 {
 	const u8 *ptr;
 	unsigned long tableSize = table->size, hdrSize;
···
 		const u32 *cie = cie_for_fde(fde, table);
 		signed ptrType;

-		if (cie == &not_fde)
+		if (cie == &not_fde)	/* only process FDE here */
 			continue;
 		if (cie == NULL || cie == &bad_cie)
-			return;
+			continue;	/* say FDE->CIE.version != 1 */
 		ptrType = fde_pointer_type(cie);
 		if (ptrType < 0)
-			return;
+			continue;

 		ptr = (const u8 *)(fde + 2);
 		if (!read_pointer(&ptr, (const u8 *)(fde + 1) + *fde,
···

 	hdrSize = 4 + sizeof(unsigned long) + sizeof(unsigned int)
 		+ 2 * n * sizeof(unsigned long);
+
 	header = alloc(hdrSize);
 	if (!header)
 		return;
+
 	header->version = 1;
 	header->eh_frame_ptr_enc = DW_EH_PE_abs | DW_EH_PE_native;
 	header->fde_count_enc = DW_EH_PE_abs | DW_EH_PE_data4;
···

 		if (fde[1] == 0xffffffff)
 			continue;	/* this is a CIE */
+
+		if (*(u8 *)(cie + 2) != 1)
+			continue;	/* FDE->CIE.version not supported */
+
 		ptr = (const u8 *)(fde + 2);
 		header->table[n].start = read_pointer(&ptr,
 						      (const u8 *)(fde + 1) +
···
 	table->hdrsz = hdrSize;
 	smp_wmb();
 	table->header = (const void *)header;
-}
-
-static void *__init balloc(unsigned long sz)
-{
-	return __alloc_bootmem_nopanic(sz,
-				       sizeof(unsigned int),
-				       __pa(MAX_DMA_ADDRESS));
-}
-
-void __init arc_unwind_setup(void)
-{
-	setup_unwind_table(&root_table, balloc);
 }

 #ifdef CONFIG_MODULES
···
 			  module->module_init, module->init_size,
 			  table_start, table_size,
 			  NULL, 0);
+
+	init_unwind_hdr(table, unw_hdr_alloc);

 #ifdef UNWIND_DEBUG
 	unw_debug("Table added for  [%s] %lx %lx\n",
···
 	info.init_only = init_only;

 	unlink_table(&info); /* XXX: SMP */
+	kfree(table->header);
 	kfree(table);
 }
···

 	if (*cie <= sizeof(*cie) + 4 || *cie >= fde[1] - sizeof(*fde)
 	    || (*cie & (sizeof(*cie) - 1))
-	    || (cie[1] != 0xffffffff))
+	    || (cie[1] != 0xffffffff)
+	    || ( *(u8 *)(cie + 2) != 1))   /* version 1 supported */
 		return NULL;	/* this is not a (valid) CIE */
 	return cie;
 }
arch/arc/mm/init.c (+3 -1)
···
 	int in_use = 0;

 	if (!low_mem_sz) {
-		BUG_ON(base != low_mem_start);
+		if (base != low_mem_start)
+			panic("CONFIG_LINUX_LINK_BASE != DT memory { }");
+
 		low_mem_sz = size;
 		in_use = 1;
 	} else {
···
 static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+#ifndef CONFIG_UACCESS_WITH_MEMCPY
 	unsigned int __ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
+#else
+	return arm_copy_to_user(to, from, n);
+#endif
 }

 extern unsigned long __must_check
arch/arm/kernel/process.c (+18 -15)
···
 {
 	unsigned long flags;
 	char buf[64];
+#ifndef CONFIG_CPU_V7M
+	unsigned int domain;
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+	/*
+	 * Get the domain register for the parent context. In user
+	 * mode, we don't save the DACR, so lets use what it should
+	 * be. For other modes, we place it after the pt_regs struct.
+	 */
+	if (user_mode(regs))
+		domain = DACR_UACCESS_ENABLE;
+	else
+		domain = *(unsigned int *)(regs + 1);
+#else
+	domain = get_domain();
+#endif
+#endif

 	show_regs_print_info(KERN_DEFAULT);
···

 #ifndef CONFIG_CPU_V7M
 	{
-		unsigned int domain = get_domain();
 		const char *segment;
-
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
-		/*
-		 * Get the domain register for the parent context. In user
-		 * mode, we don't save the DACR, so lets use what it should
-		 * be. For other modes, we place it after the pt_regs struct.
-		 */
-		if (user_mode(regs))
-			domain = DACR_UACCESS_ENABLE;
-		else
-			domain = *(unsigned int *)(regs + 1);
-#endif

 		if ((domain & domain_mask(DOMAIN_USER)) ==
 		    domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
···
 	buf[0] = '\0';
 #ifdef CONFIG_CPU_CP15_MMU
 	{
-		unsigned int transbase, dac = get_domain();
+		unsigned int transbase;
 		asm("mrc p15, 0, %0, c2, c0\n\t"
 		    : "=r" (transbase));
 		snprintf(buf, sizeof(buf), "  Table: %08x  DAC: %08x",
-			 transbase, dac);
+			 transbase, domain);
 	}
 #endif
 	asm("mrc p15, 0, %0, c1, c0\n" : "=r" (ctrl));
···
 static unsigned long noinline
 __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long ua_flags;
 	int atomic;

 	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
···
 		if (tocopy > n)
 			tocopy = n;

+		ua_flags = uaccess_save_and_enable();
 		memcpy((void *)to, from, tocopy);
+		uaccess_restore(ua_flags);
 		to += tocopy;
 		from += tocopy;
 		n -= tocopy;
···
 	 * With frame pointer disabled, tail call optimization kicks in
 	 * as well making this test almost invisible.
 	 */
-	if (n < 64)
-		return __copy_to_user_std(to, from, n);
-	return __copy_to_user_memcpy(to, from, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __copy_to_user_std(to, from, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __copy_to_user_memcpy(to, from, n);
+	}
+	return n;
 }

 static unsigned long noinline
 __clear_user_memset(void __user *addr, unsigned long n)
 {
+	unsigned long ua_flags;
+
 	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
 		memset((void *)addr, 0, n);
 		return 0;
···
 		if (tocopy > n)
 			tocopy = n;

+		ua_flags = uaccess_save_and_enable();
 		memset((void *)addr, 0, tocopy);
+		uaccess_restore(ua_flags);
 		addr += tocopy;
 		n -= tocopy;

···
 unsigned long arm_clear_user(void __user *addr, unsigned long n)
 {
 	/* See rational for this in __copy_to_user() above. */
-	if (n < 64)
-		return __clear_user_std(addr, n);
-	return __clear_user_memset(addr, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __clear_user_std(addr, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __clear_user_memset(addr, n);
+	}
+	return n;
 }

 #if 0
arch/arm/mach-at91/Kconfig (+5 -1)
···
 	select ARCH_REQUIRE_GPIOLIB
 	select COMMON_CLK_AT91
 	select PINCTRL
-	select PINCTRL_AT91
 	select SOC_BUS

 if ARCH_AT91
···
 	select HAVE_AT91_USB_CLK
 	select HAVE_AT91_H32MX
 	select HAVE_AT91_GENERATED_CLK
+	select PINCTRL_AT91PIO4
 	help
 	  Select this if ou are using one of Atmel's SAMA5D2 family SoC.
···
 	select HAVE_AT91_UTMI
 	select HAVE_AT91_SMD
 	select HAVE_AT91_USB_CLK
+	select PINCTRL_AT91
 	help
 	  Select this if you are using one of Atmel's SAMA5D3 family SoC.
 	  This support covers SAMA5D31, SAMA5D33, SAMA5D34, SAMA5D35, SAMA5D36.
···
 	select HAVE_AT91_SMD
 	select HAVE_AT91_USB_CLK
 	select HAVE_AT91_H32MX
+	select PINCTRL_AT91
 	help
 	  Select this if you are using one of Atmel's SAMA5D4 family SoC.

···
 	select CPU_ARM920T
 	select HAVE_AT91_USB_CLK
 	select MIGHT_HAVE_PCI
+	select PINCTRL_AT91
 	select SOC_SAM_V4_V5
 	select SRAM if PM
 	help
···
 	select HAVE_AT91_UTMI
 	select HAVE_FB_ATMEL
 	select MEMORY
+	select PINCTRL_AT91
 	select SOC_SAM_V4_V5
 	select SRAM if PM
 	help
arch/arm/mach-at91/pm.c (+6 -1)
···
  * implementation should be moved down into the pinctrl driver and get
  * called as part of the generic suspend/resume path.
  */
+#ifdef CONFIG_PINCTRL_AT91
 extern void at91_pinctrl_gpio_suspend(void);
 extern void at91_pinctrl_gpio_resume(void);
+#endif

 static struct {
 	unsigned long uhp_udp_mask;
···

 static int at91_pm_enter(suspend_state_t state)
 {
+#ifdef CONFIG_PINCTRL_AT91
 	at91_pinctrl_gpio_suspend();
-
+#endif
 	switch (state) {
 	/*
 	 * Suspend-to-RAM is like STANDBY plus slow clock mode, so
···
 error:
 	target_state = PM_SUSPEND_ON;

+#ifdef CONFIG_PINCTRL_AT91
 	at91_pinctrl_gpio_resume();
+#endif
 	return 0;
 }

arch/arm/mach-exynos/pmu.c (+5 -1)
···
 void exynos_sys_powerdown_conf(enum sys_powerdown mode)
 {
 	unsigned int i;
+	const struct exynos_pmu_data *pmu_data;

-	const struct exynos_pmu_data *pmu_data = pmu_context->pmu_data;
+	if (!pmu_context)
+		return;
+
+	pmu_data = pmu_context->pmu_data;

 	if (pmu_data->powerdown_conf)
 		pmu_data->powerdown_conf(mode);
arch/arm/mach-ixp4xx/include/mach/io.h (+6 -6)
···
 		writel(*vaddr++, bus_addr);
 }

-static inline unsigned char __indirect_readb(const volatile void __iomem *p)
+static inline u8 __indirect_readb(const volatile void __iomem *p)
 {
 	u32 addr = (u32)p;
 	u32 n, byte_enables, data;
···
 		*vaddr++ = readb(bus_addr);
 }

-static inline unsigned short __indirect_readw(const volatile void __iomem *p)
+static inline u16 __indirect_readw(const volatile void __iomem *p)
 {
 	u32 addr = (u32)p;
 	u32 n, byte_enables, data;
···
 		*vaddr++ = readw(bus_addr);
 }

-static inline unsigned long __indirect_readl(const volatile void __iomem *p)
+static inline u32 __indirect_readl(const volatile void __iomem *p)
 {
 	u32 addr = (__force u32)p;
 	u32 data;
···
 	((unsigned long)p <= (PIO_MASK + PIO_OFFSET)))

 #define	ioread8(p)			ioread8(p)
-static inline unsigned int ioread8(const void __iomem *addr)
+static inline u8 ioread8(const void __iomem *addr)
 {
 	unsigned long port = (unsigned long __force)addr;
 	if (__is_io_address(port))
···
 }

 #define	ioread16(p)			ioread16(p)
-static inline unsigned int ioread16(const void __iomem *addr)
+static inline u16 ioread16(const void __iomem *addr)
 {
 	unsigned long port = (unsigned long __force)addr;
 	if (__is_io_address(port))
···
 }

 #define	ioread32(p)			ioread32(p)
-static inline unsigned int ioread32(const void __iomem *addr)
+static inline u32 ioread32(const void __iomem *addr)
 {
 	unsigned long port = (unsigned long __force)addr;
 	if (__is_io_address(port))
arch/arm/mach-omap2/Kconfig (+1 -1)
···
 	select NEON if CPU_V7
 	select PM
 	select REGULATOR
+	select REGULATOR_FIXED_VOLTAGE
 	select TWL4030_CORE if ARCH_OMAP3 || ARCH_OMAP4
 	select TWL4030_POWER if ARCH_OMAP3 || ARCH_OMAP4
 	select VFP
···
 	depends on ARCH_OMAP3
 	default y
 	select OMAP_PACKAGE_CBB
-	select REGULATOR_FIXED_VOLTAGE if REGULATOR

 config MACH_NOKIA_N810
 	bool
···
 	__flush_icache_all();
 }

-static int is_reserved_asid(u64 asid)
+static bool check_update_reserved_asid(u64 asid, u64 newasid)
 {
 	int cpu;
-	for_each_possible_cpu(cpu)
-		if (per_cpu(reserved_asids, cpu) == asid)
-			return 1;
-	return 0;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(reserved_asids, cpu) == asid) {
+			hit = true;
+			per_cpu(reserved_asids, cpu) = newasid;
+		}
+	}
+
+	return hit;
 }

 static u64 new_context(struct mm_struct *mm, unsigned int cpu)
···
 	u64 generation = atomic64_read(&asid_generation);

 	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK);
+
 		/*
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
-		if (is_reserved_asid(asid))
-			return generation | (asid & ~ASID_MASK);
+		if (check_update_reserved_asid(asid, newasid))
+			return newasid;

 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
···
 		 */
 		asid &= ~ASID_MASK;
 		if (!__test_and_set_bit(asid, asid_map))
-			goto bump_gen;
+			return newasid;
 	}

 	/*
···

 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-
-bump_gen:
-	asid |= generation;
 	cpumask_clear(mm_cpumask(mm));
-	return asid;
+	return asid | generation;
 }

 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
arch/arm/mm/dma-mapping.c (+1 -1)
···
 		return -ENOMEM;

 	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
-		phys_addr_t phys = sg_phys(s) & PAGE_MASK;
+		phys_addr_t phys = page_to_phys(sg_page(s));
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);

 		if (!is_coherent &&
arch/arm/mm/init.c (+62 -30)
···
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
+#include <linux/stop_machine.h>

 #include <asm/cp15.h>
 #include <asm/mach-types.h>
···
  * safe to be called with preemption disabled, as under stop_machine().
  */
 static inline void section_update(unsigned long addr, pmdval_t mask,
-				  pmdval_t prot)
+				  pmdval_t prot, struct mm_struct *mm)
 {
-	struct mm_struct *mm;
 	pmd_t *pmd;

-	mm = current->active_mm;
 	pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr);

 #ifdef CONFIG_ARM_LPAE
···
 	return !!(get_cr() & CR_XP);
 }

-#define set_section_perms(perms, field)	{				\
-	size_t i;							\
-	unsigned long addr;						\
-									\
-	if (!arch_has_strict_perms())					\
-		return;							\
-									\
-	for (i = 0; i < ARRAY_SIZE(perms); i++) {			\
-		if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) ||	\
-		    !IS_ALIGNED(perms[i].end, SECTION_SIZE)) {		\
-			pr_err("BUG: section %lx-%lx not aligned to %lx\n", \
-				perms[i].start, perms[i].end,		\
-				SECTION_SIZE);				\
-			continue;					\
-		}							\
-									\
-		for (addr = perms[i].start;				\
-		     addr < perms[i].end;				\
-		     addr += SECTION_SIZE)				\
-			section_update(addr, perms[i].mask,		\
-				       perms[i].field);			\
-	}								\
+void set_section_perms(struct section_perm *perms, int n, bool set,
+		       struct mm_struct *mm)
+{
+	size_t i;
+	unsigned long addr;
+
+	if (!arch_has_strict_perms())
+		return;
+
+	for (i = 0; i < n; i++) {
+		if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) ||
+		    !IS_ALIGNED(perms[i].end, SECTION_SIZE)) {
+			pr_err("BUG: section %lx-%lx not aligned to %lx\n",
+			       perms[i].start, perms[i].end,
+			       SECTION_SIZE);
+			continue;
+		}
+
+		for (addr = perms[i].start;
+		     addr < perms[i].end;
+		     addr += SECTION_SIZE)
+			section_update(addr, perms[i].mask,
+				       set ? perms[i].prot : perms[i].clear, mm);
+	}
+
 }

-static inline void fix_kernmem_perms(void)
+static void update_sections_early(struct section_perm perms[], int n)
 {
-	set_section_perms(nx_perms, prot);
+	struct task_struct *t, *s;
+
+	read_lock(&tasklist_lock);
+	for_each_process(t) {
+		if (t->flags & PF_KTHREAD)
+			continue;
+		for_each_thread(t, s)
+			set_section_perms(perms, n, true, s->mm);
+	}
+	read_unlock(&tasklist_lock);
+	set_section_perms(perms, n, true, current->active_mm);
+	set_section_perms(perms, n, true, &init_mm);
+}
+
+int __fix_kernmem_perms(void *unused)
+{
+	update_sections_early(nx_perms, ARRAY_SIZE(nx_perms));
+	return 0;
+}
+
+void fix_kernmem_perms(void)
+{
+	stop_machine(__fix_kernmem_perms, NULL, NULL);
 }

 #ifdef CONFIG_DEBUG_RODATA
+int __mark_rodata_ro(void *unused)
+{
+	update_sections_early(ro_perms, ARRAY_SIZE(ro_perms));
+	return 0;
+}
+
 void mark_rodata_ro(void)
 {
-	set_section_perms(ro_perms, prot);
+	stop_machine(__mark_rodata_ro, NULL, NULL);
 }

 void set_kernel_text_rw(void)
 {
-	set_section_perms(ro_perms, clear);
+	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
+			  current->active_mm);
 }

 void set_kernel_text_ro(void)
 {
-	set_section_perms(ro_perms, prot);
+	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true,
+			  current->active_mm);
 }
 #endif /* CONFIG_DEBUG_RODATA */

···
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
- * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>
+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
  * Copyright (C) 2009 Intel Corporation, <markus.t.metzger@intel.com>
  *
  * ppc:
arch/ia64/include/asm/unistd.h (+1 -1)
···



-#define NR_syscalls		322 /* length of syscall table */
+#define NR_syscalls		323 /* length of syscall table */

 /*
  * The following defines stop scripts/checksyscalls.sh from complaining about
···
 }


-void __init pcibios_init_bus(struct pci_bus *bus)
-{
-	struct pci_dev *dev = bus->self;
-	unsigned short bridge_ctl;
-
-	/* We deal only with pci controllers and pci-pci bridges. */
-	if (!dev || (dev->class >> 8) != PCI_CLASS_BRIDGE_PCI)
-		return;
-
-	/* PCI-PCI bridge - set the cache line and default latency
-	   (32) for primary and secondary buses. */
-	pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER, 32);
-
-	pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &bridge_ctl);
-	bridge_ctl |= PCI_BRIDGE_CTL_PARITY | PCI_BRIDGE_CTL_SERR;
-	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, bridge_ctl);
-}
-
 /*
  * pcibios align resources() is called every time generic PCI code
  * wants to generate a new address. The process of looking for
···590590 eeh_ops->configure_bridge(pe);591591 eeh_pe_restore_bars(pe);592592593593- /*594594- * If it's PHB PE, the frozen state on all available PEs should have595595- * been cleared by the PHB reset. Otherwise, we unfreeze the PE and its596596- * child PEs because they might be in frozen state.597597- */598598- if (!(pe->type & EEH_PE_PHB)) {599599- rc = eeh_clear_pe_frozen_state(pe, false);600600- if (rc)601601- return rc;602602- }593593+ /* Clear frozen state */594594+ rc = eeh_clear_pe_frozen_state(pe, false);595595+ if (rc)596596+ return rc;603597604598 /* Give the system 5 seconds to finish running the user-space605599 * hotplug shutdown scripts, e.g. ifdown for ethernet. Yes,
+6
arch/powerpc/kvm/book3s_hv.c
···224224225225static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr)226226{227227+ /*228228+ * Check for illegal transactional state bit combination229229+ * and if we find it, force the TS field to a safe state.230230+ */231231+ if ((msr & MSR_TS_MASK) == MSR_TS_MASK)232232+ msr &= ~MSR_TS_MASK;227233 vcpu->arch.shregs.msr = msr;228234 kvmppc_end_cede(vcpu);229235}
+38-26
arch/powerpc/platforms/powernv/opal-irqchip.c
···4343static unsigned int *opal_irqs;44444545static void opal_handle_irq_work(struct irq_work *work);4646-static __be64 last_outstanding_events;4646+static u64 last_outstanding_events;4747static struct irq_work opal_event_irq_work = {4848 .func = opal_handle_irq_work,4949};5050+5151+void opal_handle_events(uint64_t events)5252+{5353+ int virq, hwirq = 0;5454+ u64 mask = opal_event_irqchip.mask;5555+5656+ if (!in_irq() && (events & mask)) {5757+ last_outstanding_events = events;5858+ irq_work_queue(&opal_event_irq_work);5959+ return;6060+ }6161+6262+ while (events & mask) {6363+ hwirq = fls64(events) - 1;6464+ if (BIT_ULL(hwirq) & mask) {6565+ virq = irq_find_mapping(opal_event_irqchip.domain,6666+ hwirq);6767+ if (virq)6868+ generic_handle_irq(virq);6969+ }7070+ events &= ~BIT_ULL(hwirq);7171+ }7272+}50735174static void opal_event_mask(struct irq_data *d)5275{···78557956static void opal_event_unmask(struct irq_data *d)8057{5858+ __be64 events;5959+8160 set_bit(d->hwirq, &opal_event_irqchip.mask);82618383- opal_poll_events(&last_outstanding_events);6262+ opal_poll_events(&events);6363+ last_outstanding_events = be64_to_cpu(events);6464+6565+ /*6666+ * We can't just handle the events now with opal_handle_events().6767+ * If we did we would deadlock when opal_event_unmask() is called from6868+ * handle_level_irq() with the irq descriptor lock held, because6969+ * calling opal_handle_events() would call generic_handle_irq() and7070+ * then handle_level_irq() which would try to take the descriptor lock7171+ * again. 
Instead queue the events for later.7272+ */8473 if (last_outstanding_events & opal_event_irqchip.mask)8574 /* Need to retrigger the interrupt */8675 irq_work_queue(&opal_event_irq_work);···13196 return 0;13297}13398134134-void opal_handle_events(uint64_t events)135135-{136136- int virq, hwirq = 0;137137- u64 mask = opal_event_irqchip.mask;138138-139139- if (!in_irq() && (events & mask)) {140140- last_outstanding_events = events;141141- irq_work_queue(&opal_event_irq_work);142142- return;143143- }144144-145145- while (events & mask) {146146- hwirq = fls64(events) - 1;147147- if (BIT_ULL(hwirq) & mask) {148148- virq = irq_find_mapping(opal_event_irqchip.domain,149149- hwirq);150150- if (virq)151151- generic_handle_irq(virq);152152- }153153- events &= ~BIT_ULL(hwirq);154154- }155155-}156156-15799static irqreturn_t opal_interrupt(int irq, void *data)158100{159101 __be64 events;···143131144132static void opal_handle_irq_work(struct irq_work *work)145133{146146- opal_handle_events(be64_to_cpu(last_outstanding_events));134134+ opal_handle_events(last_outstanding_events);147135}148136149137static int opal_event_match(struct irq_domain *h, struct device_node *node,
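Editor's note: the dispatch loop that the relocated `opal_handle_events()` uses above walks the pending-event word from the highest set bit down. A standalone sketch of that loop, with a counter standing in for `generic_handle_irq()` and `__builtin_clzll` standing in for the kernel's `fls64()` (helper name here is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the fls64-style dispatch loop used by opal_handle_events():
 * take the highest pending bit, handle it if it is masked in, clear it,
 * repeat until no masked events remain. */
static int dispatch_events(uint64_t events, uint64_t mask)
{
    int handled = 0;

    while (events & mask) {
        int hwirq = 63 - __builtin_clzll(events); /* fls64(events) - 1 */

        if ((1ULL << hwirq) & mask)
            handled++;                  /* stands in for generic_handle_irq() */
        events &= ~(1ULL << hwirq);
    }
    return handled;
}
```

Note that unmasked high bits are still cleared each pass, so the loop always terminates.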
···1010 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar1111 * Copyright (C) 2009 Jaswinder Singh Rajput1212 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter1313- * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>1313+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra1414 * Copyright (C) 2009 Intel Corporation, <markus.t.metzger@intel.com>1515 *1616 * ppc:
+1-1
arch/sparc/kernel/perf_event.c
···99 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar1010 * Copyright (C) 2009 Jaswinder Singh Rajput1111 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter1212- * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>1212+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra1313 */14141515#include <linux/perf_event.h>
+1-1
arch/tile/kernel/perf_event.c
···2121 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar2222 * Copyright (C) 2009 Jaswinder Singh Rajput2323 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter2424- * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>2424+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra2525 * Copyright (C) 2009 Intel Corporation, <markus.t.metzger@intel.com>2626 * Copyright (C) 2009 Google, Inc., Stephane Eranian2727 */
+1-1
arch/um/Makefile
···131131# The wrappers will select whether using "malloc" or the kernel allocator.132132LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc133133134134-LD_FLAGS_CMDLINE = $(foreach opt,$(LDFLAGS),-Wl,$(opt)) -lrt134134+LD_FLAGS_CMDLINE = $(foreach opt,$(LDFLAGS),-Wl,$(opt))135135136136# Used by link-vmlinux.sh which has special support for um link137137export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
···239239 }240240241241 mask = x86_pmu.lbr_nr - 1;242242- tos = intel_pmu_lbr_tos();242242+ tos = task_ctx->tos;243243 for (i = 0; i < tos; i++) {244244 lbr_idx = (tos - i) & mask;245245 wrmsrl(x86_pmu.lbr_from + lbr_idx, task_ctx->lbr_from[i]);···247247 if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)248248 wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);249249 }250250+ wrmsrl(x86_pmu.lbr_tos, tos);250251 task_ctx->lbr_stack_state = LBR_NONE;251252}252253···271270 if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)272271 rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);273272 }273273+ task_ctx->tos = tos;274274 task_ctx->lbr_stack_state = LBR_VALID;275275}276276
+1-1
arch/x86/kernel/irq_work.c
···11/*22 * x86 specific code for irq_work33 *44- * Copyright (C) 2010 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>44+ * Copyright (C) 2010 Red Hat, Inc., Peter Zijlstra55 */6677#include <linux/kernel.h>
···120120 return mtrr_state->deftype & IA32_MTRR_DEF_TYPE_TYPE_MASK;121121}122122123123-static u8 mtrr_disabled_type(void)123123+static u8 mtrr_disabled_type(struct kvm_vcpu *vcpu)124124{125125 /*126126 * Intel SDM 11.11.2.2: all MTRRs are disabled when127127 * IA32_MTRR_DEF_TYPE.E bit is cleared, and the UC128128 * memory type is applied to all of physical memory.129129+ *130130+ * However, virtual machines can be run with CPUID such that131131+ * there are no MTRRs. In that case, the firmware will never132132+ * enable MTRRs and it is obviously undesirable to run the133133+ * guest entirely with UC memory and we use WB.129134 */130130- return MTRR_TYPE_UNCACHABLE;135135+ if (guest_cpuid_has_mtrr(vcpu))136136+ return MTRR_TYPE_UNCACHABLE;137137+ else138138+ return MTRR_TYPE_WRBACK;131139}132140133141/*···275267276268 for (seg = 0; seg < seg_num; seg++) {277269 mtrr_seg = &fixed_seg_table[seg];278278- if (mtrr_seg->start >= addr && addr < mtrr_seg->end)270270+ if (mtrr_seg->start <= addr && addr < mtrr_seg->end)279271 return seg;280272 }281273···308300 *start = range->base & PAGE_MASK;309301310302 mask = range->mask & PAGE_MASK;311311- mask |= ~0ULL << boot_cpu_data.x86_phys_bits;312303313304 /* This cannot overflow because writing to the reserved bits of314305 * variable MTRRs causes a #GP.···363356 if (var_mtrr_range_is_valid(cur))364357 list_del(&mtrr_state->var_ranges[index].node);365358359359+ /* Extend the mask with all 1 bits to the left, since those360360+ * bits must implicitly be 0. The bits are then cleared361361+ * when reading them.362362+ */366363 if (!is_mtrr_mask)367364 cur->base = data;368365 else369369- cur->mask = data;366366+ cur->mask = data | (-1LL << cpuid_maxphyaddr(vcpu));370367371368 /* add it to the list if it's enabled. 
*/372369 if (var_mtrr_range_is_valid(cur)) {···437426 *pdata = vcpu->arch.mtrr_state.var_ranges[index].base;438427 else439428 *pdata = vcpu->arch.mtrr_state.var_ranges[index].mask;429429+430430+ *pdata &= (1ULL << cpuid_maxphyaddr(vcpu)) - 1;440431 }441432442433 return 0;···683670 }684671685672 if (iter.mtrr_disabled)686686- return mtrr_disabled_type();673673+ return mtrr_disabled_type(vcpu);687674688675 /* not contained in any MTRRs. */689676 if (type == -1)
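Editor's note: one hunk above widens a variable-MTRR mask on write by filling the bits above the guest's physical-address width with 1s (`data | (-1LL << cpuid_maxphyaddr(vcpu))`) and masks them back off on read. The round trip in isolation (the 36-bit address width below is a hypothetical example value, not taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* On write: bits above maxphyaddr must be treated as set, so extend
 * the stored mask with 1s on the left, as the patch does. */
static uint64_t mtrr_mask_store(uint64_t data, unsigned int maxphyaddr)
{
    return data | (~0ULL << maxphyaddr);
}

/* On read: the guest only ever sees the bits below maxphyaddr. */
static uint64_t mtrr_mask_load(uint64_t stored, unsigned int maxphyaddr)
{
    return stored & ((1ULL << maxphyaddr) - 1);
}
```

(`~0ULL << n` is valid for the address widths that occur here, i.e. n < 64.)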
···28032803 msr_info->data = vcpu->arch.ia32_xss;28042804 break;28052805 case MSR_TSC_AUX:28062806- if (!guest_cpuid_has_rdtscp(vcpu))28062806+ if (!guest_cpuid_has_rdtscp(vcpu) && !msr_info->host_initiated)28072807 return 1;28082808 /* Otherwise falls through */28092809 default:···29092909 clear_atomic_switch_msr(vmx, MSR_IA32_XSS);29102910 break;29112911 case MSR_TSC_AUX:29122912- if (!guest_cpuid_has_rdtscp(vcpu))29122912+ if (!guest_cpuid_has_rdtscp(vcpu) && !msr_info->host_initiated)29132913 return 1;29142914 /* Check reserved bit, higher 32 bits should be zero */29152915 if ((data >> 32) != 0)···80428042 u32 exit_reason = vmx->exit_reason;80438043 u32 vectoring_info = vmx->idt_vectoring_info;8044804480458045+ trace_kvm_exit(exit_reason, vcpu, KVM_ISA_VMX);80468046+80458047 /*80468048 * Flush logged GPAs PML buffer, this will make dirty_bitmap more80478049 * updated. Another good is, in kvm_vm_ioctl_get_dirty_log, before···86708668 vmx->loaded_vmcs->launched = 1;8671866986728670 vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);86738673- trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);8674867186758672 /*86768673 * the KVM_REQ_EVENT optimization bit is only on for one entry, and if
+8-4
arch/x86/kvm/x86.c
···3572357235733573static int kvm_vm_ioctl_set_pit(struct kvm *kvm, struct kvm_pit_state *ps)35743574{35753575+ int i;35753576 mutex_lock(&kvm->arch.vpit->pit_state.lock);35763577 memcpy(&kvm->arch.vpit->pit_state, ps, sizeof(struct kvm_pit_state));35773577- kvm_pit_load_count(kvm, 0, ps->channels[0].count, 0);35783578+ for (i = 0; i < 3; i++)35793579+ kvm_pit_load_count(kvm, i, ps->channels[i].count, 0);35783580 mutex_unlock(&kvm->arch.vpit->pit_state.lock);35793581 return 0;35803582}···35953593static int kvm_vm_ioctl_set_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps)35963594{35973595 int start = 0;35963596+ int i;35983597 u32 prev_legacy, cur_legacy;35993598 mutex_lock(&kvm->arch.vpit->pit_state.lock);36003599 prev_legacy = kvm->arch.vpit->pit_state.flags & KVM_PIT_FLAGS_HPET_LEGACY;···36053602 memcpy(&kvm->arch.vpit->pit_state.channels, &ps->channels,36063603 sizeof(kvm->arch.vpit->pit_state.channels));36073604 kvm->arch.vpit->pit_state.flags = ps->flags;36083608- kvm_pit_load_count(kvm, 0, kvm->arch.vpit->pit_state.channels[0].count, start);36053605+ for (i = 0; i < 3; i++)36063606+ kvm_pit_load_count(kvm, i, kvm->arch.vpit->pit_state.channels[i].count, start);36093607 mutex_unlock(&kvm->arch.vpit->pit_state.lock);36103608 return 0;36113609}···65196515 if (req_immediate_exit)65206516 smp_send_reschedule(vcpu->cpu);6521651765186518+ trace_kvm_entry(vcpu->vcpu_id);65196519+ wait_lapic_expire(vcpu);65226520 __kvm_guest_enter();6523652165246522 if (unlikely(vcpu->arch.switch_db_regs)) {···65336527 vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;65346528 }6535652965366536- trace_kvm_entry(vcpu->vcpu_id);65376537- wait_lapic_expire(vcpu);65386530 kvm_x86_ops->run(vcpu);6539653165406532 /*
···24952495{24962496 x86_init.paging.pagetable_init = xen_pagetable_init;2497249724982498- /* Optimization - we can use the HVM one but it has no idea which24992499- * VCPUs are descheduled - which means that it will needlessly IPI25002500- * them. Xen knows so let it do the job.25012501- */25022502- if (xen_feature(XENFEAT_auto_translated_physmap)) {25032503- pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;24982498+ if (xen_feature(XENFEAT_auto_translated_physmap))25042499 return;25052505- }25002500+25062501 pv_mmu_ops = xen_mmu_ops;2507250225082503 memset(dummy_mapping, 0xff, PAGE_SIZE);
+10-10
arch/x86/xen/suspend.c
···68686969void xen_arch_pre_suspend(void)7070{7171- int cpu;7272-7373- for_each_online_cpu(cpu)7474- xen_pmu_finish(cpu);7575-7671 if (xen_pv_domain())7772 xen_pv_pre_suspend();7873}79748075void xen_arch_post_suspend(int cancelled)8176{8282- int cpu;8383-8477 if (xen_pv_domain())8578 xen_pv_post_suspend(cancelled);8679 else8780 xen_hvm_post_suspend(cancelled);8888-8989- for_each_online_cpu(cpu)9090- xen_pmu_init(cpu);9181}92829383static void xen_vcpu_notify_restore(void *data)···9610697107void xen_arch_resume(void)98108{109109+ int cpu;110110+99111 on_each_cpu(xen_vcpu_notify_restore, NULL, 1);112112+113113+ for_each_online_cpu(cpu)114114+ xen_pmu_init(cpu);100115}101116102117void xen_arch_suspend(void)103118{119119+ int cpu;120120+121121+ for_each_online_cpu(cpu)122122+ xen_pmu_finish(cpu);123123+104124 on_each_cpu(xen_vcpu_notify_suspend, NULL, 1);105125}
+3-3
block/blk-cgroup.c
···11271127 * of the main cic data structures. For now we allow a task to change11281128 * its cgroup only if it's the only owner of its ioc.11291129 */11301130-static int blkcg_can_attach(struct cgroup_subsys_state *css,11311131- struct cgroup_taskset *tset)11301130+static int blkcg_can_attach(struct cgroup_taskset *tset)11321131{11331132 struct task_struct *task;11331133+ struct cgroup_subsys_state *dst_css;11341134 struct io_context *ioc;11351135 int ret = 0;1136113611371137 /* task_lock() is needed to avoid races with exit_io_context() */11381138- cgroup_taskset_for_each(task, tset) {11381138+ cgroup_taskset_for_each(task, dst_css, tset) {11391139 task_lock(task);11401140 ioc = task->io_context;11411141 if (ioc && atomic_read(&ioc->nr_tasks) > 1)
+14-2
block/blk-core.c
···16891689 struct request *req;16901690 unsigned int request_count = 0;1691169116921692- blk_queue_split(q, &bio, q->bio_split);16931693-16941692 /*16951693 * low level driver can indicate that it wants pages above a16961694 * certain limit bounced to low memory (ie for highmem, or even16971695 * ISA dma in theory)16981696 */16991697 blk_queue_bounce(q, &bio);16981698+16991699+ blk_queue_split(q, &bio, q->bio_split);1700170017011701 if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {17021702 bio->bi_error = -EIO;···34053405{34063406 int ret = 0;3407340734083408+ if (!q->dev)34093409+ return ret;34103410+34083411 spin_lock_irq(q->queue_lock);34093412 if (q->nr_pending) {34103413 ret = -EBUSY;···34353432 */34363433void blk_post_runtime_suspend(struct request_queue *q, int err)34373434{34353435+ if (!q->dev)34363436+ return;34373437+34383438 spin_lock_irq(q->queue_lock);34393439 if (!err) {34403440 q->rpm_status = RPM_SUSPENDED;···34623456 */34633457void blk_pre_runtime_resume(struct request_queue *q)34643458{34593459+ if (!q->dev)34603460+ return;34613461+34653462 spin_lock_irq(q->queue_lock);34663463 q->rpm_status = RPM_RESUMING;34673464 spin_unlock_irq(q->queue_lock);···34873478 */34883479void blk_post_runtime_resume(struct request_queue *q, int err)34893480{34813481+ if (!q->dev)34823482+ return;34833483+34903484 spin_lock_irq(q->queue_lock);34913485 if (!err) {34923486 q->rpm_status = RPM_ACTIVE;
+1-1
crypto/ablkcipher.c
···277277 if (WARN_ON_ONCE(in_irq()))278278 return -EDEADLK;279279280280+ walk->iv = req->info;280281 walk->nbytes = walk->total;281282 if (unlikely(!walk->total))282283 return 0;283284284285 walk->iv_buffer = NULL;285285- walk->iv = req->info;286286 if (unlikely(((unsigned long)walk->iv & alignmask))) {287287 int err = ablkcipher_copy_iv(walk, tfm, alignmask);288288
+1-1
crypto/blkcipher.c
···326326 if (WARN_ON_ONCE(in_irq()))327327 return -EDEADLK;328328329329+ walk->iv = desc->info;329330 walk->nbytes = walk->total;330331 if (unlikely(!walk->total))331332 return 0;332333333334 walk->buffer = NULL;334334- walk->iv = desc->info;335335 if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {336336 int err = blkcipher_copy_iv(walk);337337 if (err)
+1-1
drivers/acpi/nfit.c
···18101810 if (!dev->driver) {18111811 /* dev->driver may be null if we're being removed */18121812 dev_dbg(dev, "%s: no driver found for dev\n", __func__);18131813- return;18131813+ goto out_unlock;18141814 }1815181518161816 if (!acpi_desc) {
···408408 struct blkif_x86_32_request *src)409409{410410 int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;411411- dst->operation = src->operation;412412- switch (src->operation) {411411+ dst->operation = READ_ONCE(src->operation);412412+ switch (dst->operation) {413413 case BLKIF_OP_READ:414414 case BLKIF_OP_WRITE:415415 case BLKIF_OP_WRITE_BARRIER:···456456 struct blkif_x86_64_request *src)457457{458458 int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;459459- dst->operation = src->operation;460460- switch (src->operation) {459459+ dst->operation = READ_ONCE(src->operation);460460+ switch (dst->operation) {461461 case BLKIF_OP_READ:462462 case BLKIF_OP_WRITE:463463 case BLKIF_OP_WRITE_BARRIER:
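Editor's note: the `READ_ONCE()` change above is the standard defense against a double-fetch race — the request lives in memory shared with a frontend that can rewrite it between reads, so the operation field must be fetched exactly once and every later decision made on the private copy. A reduced sketch, with a volatile access standing in for `READ_ONCE()` and a hypothetical struct:

```c
#include <assert.h>
#include <stdint.h>

struct request { uint8_t operation; };

/* Fetch src->operation exactly once into the private copy, then
 * switch on the copy - never re-read the shared field. */
static int copy_request(struct request *dst,
                        const volatile struct request *src)
{
    dst->operation = src->operation;   /* single fetch */

    switch (dst->operation) {          /* decide on the copy */
    case 0: /* read */
    case 1: /* write */
        return 0;
    default:
        return -1;                     /* unknown op from the peer */
    }
}
```

Switching on `src->operation` instead would compile to a second load, letting the peer change the value between validation and use.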
+4-4
drivers/char/ipmi/ipmi_si_intf.c
···1230123012311231 new_smi->intf = intf;1232123212331233- /* Try to claim any interrupts. */12341234- if (new_smi->irq_setup)12351235- new_smi->irq_setup(new_smi);12361236-12371233 /* Set up the timer that drives the interface. */12381234 setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);12391235 smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES);12361236+12371237+ /* Try to claim any interrupts. */12381238+ if (new_smi->irq_setup)12391239+ new_smi->irq_setup(new_smi);1240124012411241 /*12421242 * Check if the user forcefully enabled the daemon.
+17-16
drivers/clk/clk-gpio.c
···209209210210struct clk_gpio_delayed_register_data {211211 const char *gpio_name;212212+ int num_parents;213213+ const char **parent_names;212214 struct device_node *node;213215 struct mutex lock;214216 struct clk *clk;···224222{225223 struct clk_gpio_delayed_register_data *data = _data;226224 struct clk *clk;227227- const char **parent_names;228228- int i, num_parents;229225 int gpio;230226 enum of_gpio_flags of_flags;231227···248248 return ERR_PTR(gpio);249249 }250250251251- num_parents = of_clk_get_parent_count(data->node);252252-253253- parent_names = kcalloc(num_parents, sizeof(char *), GFP_KERNEL);254254- if (!parent_names) {255255- clk = ERR_PTR(-ENOMEM);256256- goto out;257257- }258258-259259- for (i = 0; i < num_parents; i++)260260- parent_names[i] = of_clk_get_parent_name(data->node, i);261261-262262- clk = data->clk_register_get(data->node->name, parent_names,263263- num_parents, gpio, of_flags & OF_GPIO_ACTIVE_LOW);251251+ clk = data->clk_register_get(data->node->name, data->parent_names,252252+ data->num_parents, gpio, of_flags & OF_GPIO_ACTIVE_LOW);264253 if (IS_ERR(clk))265254 goto out;266255267256 data->clk = clk;268257out:269258 mutex_unlock(&data->lock);270270- kfree(parent_names);271259272260 return clk;273261}···284296 unsigned gpio, bool active_low))285297{286298 struct clk_gpio_delayed_register_data *data;299299+ const char **parent_names;300300+ int i, num_parents;287301288302 data = kzalloc(sizeof(*data), GFP_KERNEL);289303 if (!data)290304 return;291305306306+ num_parents = of_clk_get_parent_count(node);307307+308308+ parent_names = kcalloc(num_parents, sizeof(char *), GFP_KERNEL);309309+ if (!parent_names)310310+ return;311311+312312+ for (i = 0; i < num_parents; i++)313313+ parent_names[i] = of_clk_get_parent_name(node, i);314314+315315+ data->num_parents = num_parents;316316+ data->parent_names = parent_names;292317 data->node = node;293318 data->gpio_name = gpio_name;294319 data->clk_register_get = clk_register_get;
+3-1
drivers/clk/clk-qoriq.c
···778778 */779779 clksel = (cg_in(cg, hwc->reg) & CLKSEL_MASK) >> CLKSEL_SHIFT;780780 div = get_pll_div(cg, hwc, clksel);781781- if (!div)781781+ if (!div) {782782+ kfree(hwc);782783 return NULL;784784+ }783785784786 pct80_rate = clk_get_rate(div->clk);785787 pct80_rate *= 8;
+1
drivers/clk/clk-scpi.c
···292292 ret = scpi_clk_add(dev, child, match);293293 if (ret) {294294 scpi_clocks_remove(pdev);295295+ of_node_put(child);295296 return ret;296297 }297298 }
+7-7
drivers/clk/imx/clk-pllv1.c
···5252 unsigned long parent_rate)5353{5454 struct clk_pllv1 *pll = to_clk_pllv1(hw);5555- long long ll;5555+ unsigned long long ull;5656 int mfn_abs;5757 unsigned int mfi, mfn, mfd, pd;5858 u32 reg;···9494 rate = parent_rate * 2;9595 rate /= pd + 1;96969797- ll = (unsigned long long)rate * mfn_abs;9797+ ull = (unsigned long long)rate * mfn_abs;98989999- do_div(ll, mfd + 1);9999+ do_div(ull, mfd + 1);100100101101 if (mfn_is_negative(pll, mfn))102102- ll = -ll;102102+ ull = (rate * mfi) - ull;103103+ else104104+ ull = (rate * mfi) + ull;103105104104- ll = (rate * mfi) + ll;105105-106106- return ll;106106+ return ull;107107}108108109109static struct clk_ops clk_pllv1_ops = {
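Editor's note: the clk-pllv1 fix above keeps the MFN math in unsigned arithmetic by branching on the sign instead of negating — the fractional contribution rate * |mfn| / (mfd + 1) is explicitly added to or subtracted from rate * mfi. The arithmetic in isolation (the register values in the test below are made up for illustration, not real i.MX settings):

```c
#include <assert.h>
#include <stdint.h>

/* PLL output as computed above: integer part rate * mfi, plus or
 * minus the fractional part rate * |mfn| / (mfd + 1), all unsigned. */
static uint64_t pllv1_output(uint64_t rate, unsigned int mfi,
                             unsigned int mfn_abs, unsigned int mfd,
                             int mfn_negative)
{
    uint64_t frac = rate * mfn_abs / (mfd + 1);

    return mfn_negative ? rate * mfi - frac : rate * mfi + frac;
}
```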
···99 * warranty of any kind, whether express or implied.1010 */11111212+#include <linux/clk.h>1213#include <linux/module.h>1314#include <linux/kernel.h>1415#include <linux/spinlock.h>
+1
drivers/clk/mmp/clk-pxa168.c
···99 * warranty of any kind, whether express or implied.1010 */11111212+#include <linux/clk.h>1213#include <linux/module.h>1314#include <linux/kernel.h>1415#include <linux/spinlock.h>
+1
drivers/clk/mmp/clk-pxa910.c
···99 * warranty of any kind, whether express or implied.1010 */11111212+#include <linux/clk.h>1213#include <linux/module.h>1314#include <linux/kernel.h>1415#include <linux/spinlock.h>
···168168{169169 struct fapll_data *fd = to_fapll(hw);170170 u32 fapll_n, fapll_p, v;171171- long long rate;171171+ u64 rate;172172173173 if (ti_fapll_clock_is_bypass(fd))174174 return parent_rate;···314314{315315 struct fapll_synth *synth = to_synth(hw);316316 u32 synth_div_m;317317- long long rate;317317+ u64 rate;318318319319 /* The audio_pll_clk1 is hardwired to produce 32.768KiHz clock */320320 if (!synth->div)
···226226227227config ARM_TEGRA124_CPUFREQ228228 tristate "Tegra124 CPUFreq support"229229- depends on ARCH_TEGRA && CPUFREQ_DT229229+ depends on ARCH_TEGRA && CPUFREQ_DT && REGULATOR230230 default y231231 help232232 This adds the CPUFreq driver support for Tegra124 SOCs.
···648648 *649649 * Register the given set of PLLs with the system.650650 */651651-int __init s3c_plltab_register(struct cpufreq_frequency_table *plls,651651+int s3c_plltab_register(struct cpufreq_frequency_table *plls,652652 unsigned int plls_no)653653{654654 struct cpufreq_frequency_table *vals;
+6-3
drivers/dma/at_xdmac.c
···156156#define AT_XDMAC_CC_WRIP (0x1 << 23) /* Write in Progress (read only) */157157#define AT_XDMAC_CC_WRIP_DONE (0x0 << 23)158158#define AT_XDMAC_CC_WRIP_IN_PROGRESS (0x1 << 23)159159-#define AT_XDMAC_CC_PERID(i) (0x7f & (h) << 24) /* Channel Peripheral Identifier */159159+#define AT_XDMAC_CC_PERID(i) (0x7f & (i) << 24) /* Channel Peripheral Identifier */160160#define AT_XDMAC_CDS_MSP 0x2C /* Channel Data Stride Memory Set Pattern */161161#define AT_XDMAC_CSUS 0x30 /* Channel Source Microblock Stride */162162#define AT_XDMAC_CDUS 0x34 /* Channel Destination Microblock Stride */···965965 NULL,966966 src_addr, dst_addr,967967 xt, xt->sgl);968968- for (i = 0; i < xt->numf; i++)968968+969969+ /* Length of the block is (BLEN+1) microblocks. */970970+ for (i = 0; i < xt->numf - 1; i++)969971 at_xdmac_increment_block_count(chan, first);970972971973 dev_dbg(chan2dev(chan), "%s: add desc 0x%p to descs_list 0x%p\n",···10881086 /* Check remaining length and change data width if needed. */10891087 dwidth = at_xdmac_align_width(chan,10901088 src_addr | dst_addr | xfer_size);10891089+ chan_cc &= ~AT_XDMAC_CC_DWIDTH_MASK;10911090 chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);1092109110931092 ublen = xfer_size >> dwidth;···13361333 * since we don't care about the stride anymore.13371334 */13381335 if ((i == (sg_len - 1)) &&13391339- sg_dma_len(ppsg) == sg_dma_len(psg)) {13361336+ sg_dma_len(psg) == sg_dma_len(sg)) {13401337 dev_dbg(chan2dev(chan),13411338 "%s: desc 0x%p can be merged with desc 0x%p\n",13421339 __func__, desc, pdesc);
+54-24
drivers/dma/bcm2835-dma.c
···3131 */3232#include <linux/dmaengine.h>3333#include <linux/dma-mapping.h>3434+#include <linux/dmapool.h>3435#include <linux/err.h>3536#include <linux/init.h>3637#include <linux/interrupt.h>···6362 uint32_t pad[2];6463};65646565+struct bcm2835_cb_entry {6666+ struct bcm2835_dma_cb *cb;6767+ dma_addr_t paddr;6868+};6969+6670struct bcm2835_chan {6771 struct virt_dma_chan vc;6872 struct list_head node;···78727973 int ch;8074 struct bcm2835_desc *desc;7575+ struct dma_pool *cb_pool;81768277 void __iomem *chan_base;8378 int irq_number;8479};85808681struct bcm2835_desc {8282+ struct bcm2835_chan *c;8783 struct virt_dma_desc vd;8884 enum dma_transfer_direction dir;89859090- unsigned int control_block_size;9191- struct bcm2835_dma_cb *control_block_base;9292- dma_addr_t control_block_base_phys;8686+ struct bcm2835_cb_entry *cb_list;93879488 unsigned int frames;9589 size_t size;···149143static void bcm2835_dma_desc_free(struct virt_dma_desc *vd)150144{151145 struct bcm2835_desc *desc = container_of(vd, struct bcm2835_desc, vd);152152- dma_free_coherent(desc->vd.tx.chan->device->dev,153153- desc->control_block_size,154154- desc->control_block_base,155155- desc->control_block_base_phys);146146+ int i;147147+148148+ for (i = 0; i < desc->frames; i++)149149+ dma_pool_free(desc->c->cb_pool, desc->cb_list[i].cb,150150+ desc->cb_list[i].paddr);151151+152152+ kfree(desc->cb_list);156153 kfree(desc);157154}158155···208199209200 c->desc = d = to_bcm2835_dma_desc(&vd->tx);210201211211- writel(d->control_block_base_phys, c->chan_base + BCM2835_DMA_ADDR);202202+ writel(d->cb_list[0].paddr, c->chan_base + BCM2835_DMA_ADDR);212203 writel(BCM2835_DMA_ACTIVE, c->chan_base + BCM2835_DMA_CS);213204}214205···241232static int bcm2835_dma_alloc_chan_resources(struct dma_chan *chan)242233{243234 struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);235235+ struct device *dev = c->vc.chan.device->dev;244236245245- dev_dbg(c->vc.chan.device->dev,246246- "Allocating DMA channel %d\n", c->ch);237237+ 
dev_dbg(dev, "Allocating DMA channel %d\n", c->ch);238238+239239+ c->cb_pool = dma_pool_create(dev_name(dev), dev,240240+ sizeof(struct bcm2835_dma_cb), 0, 0);241241+ if (!c->cb_pool) {242242+ dev_err(dev, "unable to allocate descriptor pool\n");243243+ return -ENOMEM;244244+ }247245248246 return request_irq(c->irq_number,249247 bcm2835_dma_callback, 0, "DMA IRQ", c);···262246263247 vchan_free_chan_resources(&c->vc);264248 free_irq(c->irq_number, c);249249+ dma_pool_destroy(c->cb_pool);265250266251 dev_dbg(c->vc.chan.device->dev, "Freeing DMA channel %u\n", c->ch);267252}···278261 size_t size;279262280263 for (size = i = 0; i < d->frames; i++) {281281- struct bcm2835_dma_cb *control_block =282282- &d->control_block_base[i];264264+ struct bcm2835_dma_cb *control_block = d->cb_list[i].cb;283265 size_t this_size = control_block->length;284266 dma_addr_t dma;285267···359343 dma_addr_t dev_addr;360344 unsigned int es, sync_type;361345 unsigned int frame;346346+ int i;362347363348 /* Grab configuration */364349 if (!is_slave_direction(direction)) {···391374 if (!d)392375 return NULL;393376377377+ d->c = c;394378 d->dir = direction;395379 d->frames = buf_len / period_len;396380397397- /* Allocate memory for control blocks */398398- d->control_block_size = d->frames * sizeof(struct bcm2835_dma_cb);399399- d->control_block_base = dma_zalloc_coherent(chan->device->dev,400400- d->control_block_size, &d->control_block_base_phys,401401- GFP_NOWAIT);402402-403403- if (!d->control_block_base) {381381+ d->cb_list = kcalloc(d->frames, sizeof(*d->cb_list), GFP_KERNEL);382382+ if (!d->cb_list) {404383 kfree(d);405384 return NULL;385385+ }386386+ /* Allocate memory for control blocks */387387+ for (i = 0; i < d->frames; i++) {388388+ struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];389389+390390+ cb_entry->cb = dma_pool_zalloc(c->cb_pool, GFP_ATOMIC,391391+ &cb_entry->paddr);392392+ if (!cb_entry->cb)393393+ goto error_cb;406394 }407395408396 /*···415393 * for each frame and link 
them together.416394 */417395 for (frame = 0; frame < d->frames; frame++) {418418- struct bcm2835_dma_cb *control_block =419419- &d->control_block_base[frame];396396+ struct bcm2835_dma_cb *control_block = d->cb_list[frame].cb;420397421398 /* Setup adresses */422399 if (d->dir == DMA_DEV_TO_MEM) {···449428 * This DMA engine driver currently only supports cyclic DMA.450429 * Therefore, wrap around at number of frames.451430 */452452- control_block->next = d->control_block_base_phys +453453- sizeof(struct bcm2835_dma_cb)454454- * ((frame + 1) % d->frames);431431+ control_block->next = d->cb_list[((frame + 1) % d->frames)].paddr;455432 }456433457434 return vchan_tx_prep(&c->vc, &d->vd, flags);435435+error_cb:436436+ i--;437437+ for (; i >= 0; i--) {438438+ struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];439439+440440+ dma_pool_free(c->cb_pool, cb_entry->cb, cb_entry->paddr);441441+ }442442+443443+ kfree(d->cb_list);444444+ kfree(d);445445+ return NULL;458446}459447460448static int bcm2835_dma_slave_config(struct dma_chan *chan,
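Editor's note: the cyclic setup above links each control block's `next` field to the pool address of the following frame, with the modulo wrapping the last frame back to the first. The linkage pattern on its own, using plain pointers instead of DMA bus addresses:

```c
#include <assert.h>
#include <stddef.h>

enum { FRAMES = 4 };

struct cb { struct cb *next; };

/* Frame n points at frame (n + 1) % FRAMES, so the final block wraps
 * around to the first and the transfer cycles indefinitely. */
static void link_cyclic(struct cb blocks[FRAMES])
{
    for (size_t frame = 0; frame < FRAMES; frame++)
        blocks[frame].next = &blocks[(frame + 1) % FRAMES];
}
```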
···141141 unsigned long pinmask = bgc->pin2mask(bgc, gpio);142142143143 if (bgc->dir & pinmask)144144- return bgc->read_reg(bgc->reg_set) & pinmask;144144+ return !!(bgc->read_reg(bgc->reg_set) & pinmask);145145 else146146- return bgc->read_reg(bgc->reg_dat) & pinmask;146146+ return !!(bgc->read_reg(bgc->reg_dat) & pinmask);147147}148148149149static int bgpio_get(struct gpio_chip *gc, unsigned int gpio)
+7-1
drivers/gpio/gpiolib.c
···12791279 chip = desc->chip;12801280 offset = gpio_chip_hwgpio(desc);12811281 value = chip->get ? chip->get(chip, offset) : -EIO;12821282- value = value < 0 ? value : !!value;12821282+ /*12831283+ * FIXME: fix all drivers to clamp to [0,1] or return negative,12841284+ * then change this to:12851285+ * value = value < 0 ? value : !!value;12861286+ * so we can properly propagate error codes.12871287+ */12881288+ value = !!value;12831289 trace_gpio_value(desc_to_gpio(desc), 1, value);12841290 return value;12851291}
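Editor's note: the FIXME above leans on C's double negation — `!!x` collapses any nonzero value, including a negative error code, to 1, which is exactly why error returns currently get lost. The idiom in isolation:

```c
#include <assert.h>

/* Clamp an int to the [0, 1] range the GPIO API expects. Note that a
 * negative error code also becomes 1 - the reason the FIXME exists. */
static int clamp_bool(int value)
{
    return !!value;
}
```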
···127127 return 0;128128}129129130130+static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,131131+ struct drm_amdgpu_cs_chunk_fence *fence_data)132132+{133133+ struct drm_gem_object *gobj;134134+ uint32_t handle;135135+136136+ handle = fence_data->handle;137137+ gobj = drm_gem_object_lookup(p->adev->ddev, p->filp,138138+ fence_data->handle);139139+ if (gobj == NULL)140140+ return -EINVAL;141141+142142+ p->uf.bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj));143143+ p->uf.offset = fence_data->offset;144144+145145+ if (amdgpu_ttm_tt_has_userptr(p->uf.bo->tbo.ttm)) {146146+ drm_gem_object_unreference_unlocked(gobj);147147+ return -EINVAL;148148+ }149149+150150+ p->uf_entry.robj = amdgpu_bo_ref(p->uf.bo);151151+ p->uf_entry.prefered_domains = AMDGPU_GEM_DOMAIN_GTT;152152+ p->uf_entry.allowed_domains = AMDGPU_GEM_DOMAIN_GTT;153153+ p->uf_entry.priority = 0;154154+ p->uf_entry.tv.bo = &p->uf_entry.robj->tbo;155155+ p->uf_entry.tv.shared = true;156156+157157+ drm_gem_object_unreference_unlocked(gobj);158158+ return 0;159159+}160160+130161int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)131162{132163 union drm_amdgpu_cs *cs = data;···238207239208 case AMDGPU_CHUNK_ID_FENCE:240209 size = sizeof(struct drm_amdgpu_cs_chunk_fence);241241- if (p->chunks[i].length_dw * sizeof(uint32_t) >= size) {242242- uint32_t handle;243243- struct drm_gem_object *gobj;244244- struct drm_amdgpu_cs_chunk_fence *fence_data;245245-246246- fence_data = (void *)p->chunks[i].kdata;247247- handle = fence_data->handle;248248- gobj = drm_gem_object_lookup(p->adev->ddev,249249- p->filp, handle);250250- if (gobj == NULL) {251251- ret = -EINVAL;252252- goto free_partial_kdata;253253- }254254-255255- p->uf.bo = gem_to_amdgpu_bo(gobj);256256- amdgpu_bo_ref(p->uf.bo);257257- drm_gem_object_unreference_unlocked(gobj);258258- p->uf.offset = fence_data->offset;259259- } else {210210+ if (p->chunks[i].length_dw * sizeof(uint32_t) < size) {260211 ret = -EINVAL;261212 goto 
free_partial_kdata;262213 }214214+215215+ ret = amdgpu_cs_user_fence_chunk(p, (void *)p->chunks[i].kdata);216216+ if (ret)217217+ goto free_partial_kdata;218218+263219 break;264220265221 case AMDGPU_CHUNK_ID_DEPENDENCIES:···409391 INIT_LIST_HEAD(&duplicates);410392 amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);411393394394+ if (p->uf.bo)395395+ list_add(&p->uf_entry.tv.head, &p->validated);396396+412397 if (need_mmap_lock)413398 down_read(¤t->mm->mmap_sem);414399···509488 for (i = 0; i < parser->num_ibs; i++)510489 amdgpu_ib_free(parser->adev, &parser->ibs[i]);511490 kfree(parser->ibs);512512- if (parser->uf.bo)513513- amdgpu_bo_unref(&parser->uf.bo);491491+ amdgpu_bo_unref(&parser->uf.bo);492492+ amdgpu_bo_unref(&parser->uf_entry.robj);514493}515494516495static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
···476476 if (domain == AMDGPU_GEM_DOMAIN_CPU)477477 goto error_unreserve;478478 }479479+ list_for_each_entry(entry, &duplicates, head) {480480+ domain = amdgpu_mem_type_to_domain(entry->bo->mem.mem_type);481481+ /* if anything is swapped out don't swap it in here,482482+ just abort and wait for the next CS */483483+ if (domain == AMDGPU_GEM_DOMAIN_CPU)484484+ goto error_unreserve;485485+ }486486+479487 r = amdgpu_vm_update_page_directory(adev, bo_va->vm);480488 if (r)481489 goto error_unreserve;
+3
drivers/gpu/drm/exynos/exynos_drm_crtc.c
···5555{5656 struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);57575858+ if (!state->enable)5959+ return 0;6060+5861 if (exynos_crtc->ops->atomic_check)5962 return exynos_crtc->ops->atomic_check(exynos_crtc, state);6063
+19-1
drivers/gpu/drm/i915/intel_display.c
···1405914059 struct drm_crtc *crtc = crtc_state->base.crtc;1406014060 struct drm_framebuffer *fb = state->base.fb;1406114061 struct drm_i915_gem_object *obj = intel_fb_obj(fb);1406214062+ enum pipe pipe = to_intel_plane(plane)->pipe;1406214063 unsigned stride;1406314064 int ret;1406414065···14090140891409114090 if (fb->modifier[0] != DRM_FORMAT_MOD_NONE) {1409214091 DRM_DEBUG_KMS("cursor cannot be tiled\n");1409214092+ return -EINVAL;1409314093+ }1409414094+1409514095+ /*1409614096+ * There's something wrong with the cursor on CHV pipe C.1409714097+ * If it straddles the left edge of the screen then1409814098+ * moving it away from the edge or disabling it often1409914099+ * results in a pipe underrun, and often that can lead to1410014100+ * dead pipe (constant underrun reported, and it scans1410114101+ * out just a solid color). To recover from that, the1410214102+ * display power well must be turned off and on again.1410314103+ * Refuse the put the cursor into that compromised position.1410414104+ */1410514105+ if (IS_CHERRYVIEW(plane->dev) && pipe == PIPE_C &&1410614106+ state->visible && state->base.crtc_x < 0) {1410714107+ DRM_DEBUG_KMS("CHV cursor C not allowed to straddle the left screen edge\n");1409314108 return -EINVAL;1409414109 }1409514110···14141141241414214125 intel_crtc->cursor_addr = addr;14143141261414414144- intel_crtc_update_cursor(crtc, state->visible);1412714127+ if (crtc->state->active)1412814128+ intel_crtc_update_cursor(crtc, state->visible);1414514129}14146141301414714131static struct drm_plane *intel_cursor_plane_create(struct drm_device *dev,
+4-3
drivers/gpu/drm/i915/intel_hdmi.c
···13851385 struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);13861386 struct drm_i915_private *dev_priv = to_i915(connector->dev);13871387 bool live_status = false;13881388- unsigned int retry = 3;13881388+ unsigned int try;1389138913901390 DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",13911391 connector->base.id, connector->name);1392139213931393 intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);1394139413951395- while (!live_status && --retry) {13951395+ for (try = 0; !live_status && try < 4; try++) {13961396+ if (try)13971397+ msleep(10);13961398 live_status = intel_digital_port_connected(dev_priv,13971399 hdmi_to_dig_port(intel_hdmi));13981398- msleep(10);13991400 }1400140114011402 if (!live_status)
+2-3
drivers/gpu/drm/i915/intel_pm.c
···47134713 * 3b: Enable Coarse Power Gating only when RC6 is enabled.47144714 * WaRsDisableCoarsePowerGating:skl,bxt - Render/Media PG need to be disabled with RC6.47154715 */47164716- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1) ||47174717- ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) &&47184718- IS_SKL_REVID(dev, 0, SKL_REVID_F0)))47164716+ if ((IS_BROXTON(dev) && (INTEL_REVID(dev) < BXT_REVID_B0)) ||47174717+ ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && (INTEL_REVID(dev) <= SKL_REVID_F0)))47194718 I915_WRITE(GEN9_PG_ENABLE, 0);47204719 else47214720 I915_WRITE(GEN9_PG_ENABLE, (rc6_mask & GEN6_RC_CTL_RC6_ENABLE) ?
···112112 dma_addr_t paddr;113113 int ret;114114115115- /* only doing ARGB32 since this is what is needed to alpha-blend116116- * with video overlays:117117- */118115 sizes->surface_bpp = 32;119119- sizes->surface_depth = 32;116116+ sizes->surface_depth = 24;120117121118 DBG("create fbdev: %dx%d@%d (%dx%d)", sizes->surface_width,122119 sizes->surface_height, sizes->surface_bpp,
···390390 else if (ctx_id == SVGA3D_INVALID_ID)391391 ret = vmw_local_fifo_reserve(dev_priv, bytes);392392 else {393393- WARN_ON("Command buffer has not been allocated.\n");393393+ WARN(1, "Command buffer has not been allocated.\n");394394 ret = NULL;395395 }396396 if (IS_ERR_OR_NULL(ret)) {
···12171217config SENSORS_SHT1512181218 tristate "Sensiron humidity and temperature sensors. SHT15 and compat."12191219 depends on GPIOLIB || COMPILE_TEST12201220+ select BITREVERSE12201221 help12211222 If you say yes here you get support for the Sensiron SHT10, SHT11,12221223 SHT15, SHT71, SHT75 humidity and temperature sensors.
+15-1
drivers/hwmon/tmp102.c
···5858 u16 config_orig;5959 unsigned long last_update;6060 int temp[3];6161+ bool first_time;6162};62636364/* convert left adjusted 13-bit TMP102 register value to milliCelsius */···9493 tmp102->temp[i] = tmp102_reg_to_mC(status);9594 }9695 tmp102->last_update = jiffies;9696+ tmp102->first_time = false;9797 }9898 mutex_unlock(&tmp102->lock);9999 return tmp102;···103101static int tmp102_read_temp(void *dev, int *temp)104102{105103 struct tmp102 *tmp102 = tmp102_update_device(dev);104104+105105+ /* Is it too early even to return a conversion? */106106+ if (tmp102->first_time) {107107+ dev_dbg(dev, "%s: Conversion not ready yet..\n", __func__);108108+ return -EAGAIN;109109+ }106110107111 *temp = tmp102->temp[0];108112···121113{122114 struct sensor_device_attribute *sda = to_sensor_dev_attr(attr);123115 struct tmp102 *tmp102 = tmp102_update_device(dev);116116+117117+ /* Is it too early even to return a read? */118118+ if (tmp102->first_time)119119+ return -EAGAIN;124120125121 return sprintf(buf, "%d\n", tmp102->temp[sda->index]);126122}···219207 status = -ENODEV;220208 goto fail_restore_config;221209 }222222- tmp102->last_update = jiffies - HZ;210210+ tmp102->last_update = jiffies;211211+ /* Mark that we are not ready with data until conversion is complete */212212+ tmp102->first_time = true;223213 mutex_init(&tmp102->lock);224214225215 hwmon_dev = hwmon_device_register_with_groups(dev, client->name,
+9-2
drivers/i2c/busses/i2c-davinci.c
···202202 * d is always 6 on Keystone I2C controller203203 */204204205205- /* get minimum of 7 MHz clock, but max of 12 MHz */206206- psc = (input_clock / 7000000) - 1;205205+ /*206206+ * Both Davinci and current Keystone User Guides recommend a value207207+ * between 7MHz and 12MHz. In reality 7MHz module clock doesn't208208+ * always produce enough margin between SDA and SCL transitions.209209+ * Measurements show that the higher the module clock is, the210210+ * bigger is the margin, providing more reliable communication.211211+ * So we better target for 12MHz.212212+ */213213+ psc = (input_clock / 12000000) - 1;207214 if ((input_clock / (psc + 1)) > 12000000)208215 psc++; /* better to run under spec than over */209216 d = (psc >= 2) ? 5 : 7 - psc;
+6
drivers/i2c/busses/i2c-designware-core.c
···813813tx_aborted:814814 if ((stat & (DW_IC_INTR_TX_ABRT | DW_IC_INTR_STOP_DET)) || dev->msg_err)815815 complete(&dev->cmd_complete);816816+ else if (unlikely(dev->accessor_flags & ACCESS_INTR_MASK)) {817817+ /* workaround to trigger pending interrupt */818818+ stat = dw_readl(dev, DW_IC_INTR_MASK);819819+ i2c_dw_disable_int(dev);820820+ dw_writel(dev, stat, DW_IC_INTR_MASK);821821+ }816822817823 return IRQ_HANDLED;818824}
···839839840840 for_each_available_child_of_node(node, child) {841841 ret = vadc_get_dt_channel_data(vadc->dev, &prop, child);842842- if (ret)842842+ if (ret) {843843+ of_node_put(child);843844 return ret;845845+ }844846845847 vadc->chan_props[index] = prop;846848
+1-1
drivers/iio/industrialio-buffer.c
···302302 if (trialmask == NULL)303303 return -ENOMEM;304304 if (!indio_dev->masklength) {305305- WARN_ON("Trying to set scanmask prior to registering buffer\n");305305+ WARN(1, "Trying to set scanmask prior to registering buffer\n");306306 goto err_invalid_mask;307307 }308308 bitmap_copy(trialmask, buffer->scan_mask, indio_dev->masklength);
+1-1
drivers/iio/industrialio-core.c
···655655 break;656656 case IIO_SEPARATE:657657 if (!chan->indexed) {658658- WARN_ON("Differential channels must be indexed\n");658658+ WARN(1, "Differential channels must be indexed\n");659659 ret = -EINVAL;660660 goto error_free_full_postfix;661661 }
···130130 if (ret < 0)131131 break;132132133133- /* return 0 since laser is likely pointed out of range */133133+ /* return -EINVAL since laser is likely pointed out of range */134134 if (ret & LIDAR_REG_STATUS_INVALID) {135135 *reg = 0;136136- ret = 0;136136+ ret = -EINVAL;137137 break;138138 }139139···197197 if (!ret) {198198 iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,199199 iio_get_time_ns());200200- } else {200200+ } else if (ret != -EINVAL) {201201 dev_err(&data->client->dev, "cannot read LIDAR measurement");202202 }203203
+1-4
drivers/infiniband/core/cma.c
···1126112611271127 rcu_read_lock();11281128 err = fib_lookup(dev_net(net_dev), &fl4, &res, 0);11291129- if (err)11301130- return false;11311131-11321132- ret = FIB_RES_DEV(res) == net_dev;11291129+ ret = err == 0 && FIB_RES_DEV(res) == net_dev;11331130 rcu_read_unlock();1134113111351132 return ret;
+5
drivers/infiniband/core/mad.c
···18111811 if (qp_num == 0)18121812 valid = 1;18131813 } else {18141814+ /* CM attributes other than ClassPortInfo only use Send method */18151815+ if ((mad_hdr->mgmt_class == IB_MGMT_CLASS_CM) &&18161816+ (mad_hdr->attr_id != IB_MGMT_CLASSPORTINFO_ATTR_ID) &&18171817+ (mad_hdr->method != IB_MGMT_METHOD_SEND))18181818+ goto out;18141819 /* Filter GSI packets sent to QP0 */18151820 if (qp_num != 0)18161821 valid = 1;
+17-15
drivers/infiniband/core/sa_query.c
···512512 return len;513513}514514515515-static int ib_nl_send_msg(struct ib_sa_query *query)515515+static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)516516{517517 struct sk_buff *skb = NULL;518518 struct nlmsghdr *nlh;···526526 if (len <= 0)527527 return -EMSGSIZE;528528529529- skb = nlmsg_new(len, GFP_KERNEL);529529+ skb = nlmsg_new(len, gfp_mask);530530 if (!skb)531531 return -ENOMEM;532532···544544 /* Repair the nlmsg header length */545545 nlmsg_end(skb, nlh);546546547547- ret = ibnl_multicast(skb, nlh, RDMA_NL_GROUP_LS, GFP_KERNEL);547547+ ret = ibnl_multicast(skb, nlh, RDMA_NL_GROUP_LS, gfp_mask);548548 if (!ret)549549 ret = len;550550 else···553553 return ret;554554}555555556556-static int ib_nl_make_request(struct ib_sa_query *query)556556+static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)557557{558558 unsigned long flags;559559 unsigned long delay;···562562 INIT_LIST_HEAD(&query->list);563563 query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);564564565565+ /* Put the request on the list first.*/565566 spin_lock_irqsave(&ib_nl_request_lock, flags);566566- ret = ib_nl_send_msg(query);567567- if (ret <= 0) {568568- ret = -EIO;569569- goto request_out;570570- } else {571571- ret = 0;572572- }573573-574567 delay = msecs_to_jiffies(sa_local_svc_timeout_ms);575568 query->timeout = delay + jiffies;576569 list_add_tail(&query->list, &ib_nl_request_list);577570 /* Start the timeout if this is the only request */578571 if (ib_nl_request_list.next == &query->list)579572 queue_delayed_work(ib_nl_wq, &ib_nl_timed_work, delay);580580-581581-request_out:582573 spin_unlock_irqrestore(&ib_nl_request_lock, flags);574574+575575+ ret = ib_nl_send_msg(query, gfp_mask);576576+ if (ret <= 0) {577577+ ret = -EIO;578578+ /* Remove the request */579579+ spin_lock_irqsave(&ib_nl_request_lock, flags);580580+ list_del(&query->list);581581+ spin_unlock_irqrestore(&ib_nl_request_lock, flags);582582+ } else {583583+ ret = 0;584584+ 
}583585584586 return ret;585587}···1110110811111109 if (query->flags & IB_SA_ENABLE_LOCAL_SERVICE) {11121110 if (!ibnl_chk_listeners(RDMA_NL_GROUP_LS)) {11131113- if (!ib_nl_make_request(query))11111111+ if (!ib_nl_make_request(query, gfp_mask))11141112 return id;11151113 }11161114 ib_sa_disable_local_svc(query);
+17-10
drivers/infiniband/core/uverbs_cmd.c
···6262 * The ib_uobject locking scheme is as follows:6363 *6464 * - ib_uverbs_idr_lock protects the uverbs idrs themselves, so it6565- * needs to be held during all idr operations. When an object is6565+ * needs to be held during all idr write operations. When an object is6666 * looked up, a reference must be taken on the object's kref before6767- * dropping this lock.6767+ * dropping this lock. For read operations, the rcu_read_lock()6868+ * and rcu_write_lock() but similarly the kref reference is grabbed6969+ * before the rcu_read_unlock().6870 *6971 * - Each object also has an rwsem. This rwsem must be held for7072 * reading while an operation that uses the object is performed.···98969997static void release_uobj(struct kref *kref)10098{101101- kfree(container_of(kref, struct ib_uobject, ref));9999+ kfree_rcu(container_of(kref, struct ib_uobject, ref), rcu);102100}103101104102static void put_uobj(struct ib_uobject *uobj)···147145{148146 struct ib_uobject *uobj;149147150150- spin_lock(&ib_uverbs_idr_lock);148148+ rcu_read_lock();151149 uobj = idr_find(idr, id);152150 if (uobj) {153151 if (uobj->context == context)···155153 else156154 uobj = NULL;157155 }158158- spin_unlock(&ib_uverbs_idr_lock);156156+ rcu_read_unlock();159157160158 return uobj;161159}···24482446 int i, sg_ind;24492447 int is_ud;24502448 ssize_t ret = -EINVAL;24492449+ size_t next_size;2451245024522451 if (copy_from_user(&cmd, buf, sizeof cmd))24532452 return -EFAULT;···24932490 goto out_put;24942491 }2495249224962496- ud = alloc_wr(sizeof(*ud), user_wr->num_sge);24932493+ next_size = sizeof(*ud);24942494+ ud = alloc_wr(next_size, user_wr->num_sge);24972495 if (!ud) {24982496 ret = -ENOMEM;24992497 goto out_put;···25152511 user_wr->opcode == IB_WR_RDMA_READ) {25162512 struct ib_rdma_wr *rdma;2517251325182518- rdma = alloc_wr(sizeof(*rdma), user_wr->num_sge);25142514+ next_size = sizeof(*rdma);25152515+ rdma = alloc_wr(next_size, user_wr->num_sge);25192516 if (!rdma) {25202517 ret = 
-ENOMEM;25212518 goto out_put;···25302525 user_wr->opcode == IB_WR_ATOMIC_FETCH_AND_ADD) {25312526 struct ib_atomic_wr *atomic;2532252725332533- atomic = alloc_wr(sizeof(*atomic), user_wr->num_sge);25282528+ next_size = sizeof(*atomic);25292529+ atomic = alloc_wr(next_size, user_wr->num_sge);25342530 if (!atomic) {25352531 ret = -ENOMEM;25362532 goto out_put;···25462540 } else if (user_wr->opcode == IB_WR_SEND ||25472541 user_wr->opcode == IB_WR_SEND_WITH_IMM ||25482542 user_wr->opcode == IB_WR_SEND_WITH_INV) {25492549- next = alloc_wr(sizeof(*next), user_wr->num_sge);25432543+ next_size = sizeof(*next);25442544+ next = alloc_wr(next_size, user_wr->num_sge);25502545 if (!next) {25512546 ret = -ENOMEM;25522547 goto out_put;···2579257225802573 if (next->num_sge) {25812574 next->sg_list = (void *) next +25822582- ALIGN(sizeof *next, sizeof (struct ib_sge));25752575+ ALIGN(next_size, sizeof(struct ib_sge));25832576 if (copy_from_user(next->sg_list,25842577 buf + sizeof cmd +25852578 cmd.wr_count * cmd.wqe_size +
+21-20
drivers/infiniband/core/verbs.c
···15161516 * @sg_nents: number of entries in sg15171517 * @set_page: driver page assignment function pointer15181518 *15191519- * Core service helper for drivers to covert the largest15191519+ * Core service helper for drivers to convert the largest15201520 * prefix of given sg list to a page vector. The sg list15211521 * prefix converted is the prefix that meet the requirements15221522 * of ib_map_mr_sg.···15331533 u64 last_end_dma_addr = 0, last_page_addr = 0;15341534 unsigned int last_page_off = 0;15351535 u64 page_mask = ~((u64)mr->page_size - 1);15361536- int i;15361536+ int i, ret;1537153715381538 mr->iova = sg_dma_address(&sgl[0]);15391539 mr->length = 0;···15441544 u64 end_dma_addr = dma_addr + dma_len;15451545 u64 page_addr = dma_addr & page_mask;1546154615471547- if (i && page_addr != dma_addr) {15481548- if (last_end_dma_addr != dma_addr) {15491549- /* gap */15501550- goto done;15471547+ /*15481548+ * For the second and later elements, check whether either the15491549+ * end of element i-1 or the start of element i is not aligned15501550+ * on a page boundary.15511551+ */15521552+ if (i && (last_page_off != 0 || page_addr != dma_addr)) {15531553+ /* Stop mapping if there is a gap. */15541554+ if (last_end_dma_addr != dma_addr)15551555+ break;1551155615521552- } else if (last_page_off + dma_len <= mr->page_size) {15531553- /* chunk this fragment with the last */15541554- mr->length += dma_len;15551555- last_end_dma_addr += dma_len;15561556- last_page_off += dma_len;15571557- continue;15581558- } else {15591559- /* map starting from the next page */15601560- page_addr = last_page_addr + mr->page_size;15611561- dma_len -= mr->page_size - last_page_off;15621562- }15571557+ /*15581558+ * Coalesce this element with the last. If it is small15591559+ * enough just update mr->length. 
Otherwise start15601560+ * mapping from the next page.15611561+ */15621562+ goto next_page;15631563 }1564156415651565 do {15661566- if (unlikely(set_page(mr, page_addr)))15671567- goto done;15661566+ ret = set_page(mr, page_addr);15671567+ if (unlikely(ret < 0))15681568+ return i ? : ret;15691569+next_page:15681570 page_addr += mr->page_size;15691571 } while (page_addr < end_dma_addr);15701572···15761574 last_page_off = end_dma_addr & ~page_mask;15771575 }1578157615791579-done:15801577 return i;15811578}15821579EXPORT_SYMBOL(ib_sg_to_pages);
···381381 }382382 }383383 } else if (ent->cur > 2 * ent->limit) {384384- if (!someone_adding(cache) &&384384+ /*385385+ * The remove_keys() logic is performed as garbage collection386386+ * task. Such task is intended to be run when no other active387387+ * processes are running.388388+ *389389+ * The need_resched() will return TRUE if there are user tasks390390+ * to be activated in near future.391391+ *392392+ * In such case, we don't execute remove_keys() and postpone393393+ * the garbage collection work to try to run in next cycle,394394+ * in order to free CPU resources to other tasks.395395+ */396396+ if (!need_resched() && !someone_adding(cache) &&385397 time_after(jiffies, cache->last_add + 300 * HZ)) {386398 remove_keys(dev, i, 1);387399 if (ent->cur > ent->limit)
+2-2
drivers/infiniband/hw/qib/qib_qsfp.c
···292292 qib_dev_porterr(ppd->dd, ppd->port,293293 "QSFP byte0 is 0x%02X, S/B 0x0C/D\n", peek[0]);294294295295- if ((peek[2] & 2) == 0) {295295+ if ((peek[2] & 4) == 0) {296296 /*297297 * If cable is paged, rather than "flat memory", we need to298298 * set the page to zero, Even if it already appears to be zero.···538538 sofar += scnprintf(buf + sofar, len - sofar, "Date:%.*s\n",539539 QSFP_DATE_LEN, cd.date);540540 sofar += scnprintf(buf + sofar, len - sofar, "Lot:%.*s\n",541541- QSFP_LOT_LEN, cd.date);541541+ QSFP_LOT_LEN, cd.lot);542542543543 while (bidx < QSFP_DEFAULT_HDR_CNT) {544544 int iidx;
+1-1
drivers/infiniband/hw/qib/qib_verbs.h
···329329struct qib_mr {330330 struct ib_mr ibmr;331331 struct ib_umem *umem;332332- struct qib_mregion mr; /* must be last */333332 u64 *pages;334333 u32 npages;334334+ struct qib_mregion mr; /* must be last */335335};336336337337/*
+1-1
drivers/infiniband/ulp/iser/iser_verbs.c
···12931293 if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {12941294 sector_t sector_off = mr_status.sig_err.sig_err_offset;1295129512961296- do_div(sector_off, sector_size + 8);12961296+ sector_div(sector_off, sector_size + 8);12971297 *sector = scsi_get_lba(iser_task->sc) + sector_off;1298129812991299 pr_err("PI error found type %d at sector %llx "
+3-10
drivers/infiniband/ulp/isert/ib_isert.c
···157157 attr.recv_cq = comp->cq;158158 attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS;159159 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1;160160- /*161161- * FIXME: Use devattr.max_sge - 2 for max_send_sge as162162- * work-around for RDMA_READs with ConnectX-2.163163- *164164- * Also, still make sure to have at least two SGEs for165165- * outgoing control PDU responses.166166- */167167- attr.cap.max_send_sge = max(2, device->dev_attr.max_sge - 2);168168- isert_conn->max_sge = attr.cap.max_send_sge;169169-160160+ attr.cap.max_send_sge = device->dev_attr.max_sge;161161+ isert_conn->max_sge = min(device->dev_attr.max_sge,162162+ device->dev_attr.max_sge_rd);170163 attr.cap.max_recv_sge = 1;171164 attr.sq_sig_type = IB_SIGNAL_REQ_WR;172165 attr.qp_type = IB_QPT_RC;
+26-22
drivers/infiniband/ulp/srp/ib_srp.c
···488488 struct ib_qp *qp;489489 struct ib_fmr_pool *fmr_pool = NULL;490490 struct srp_fr_pool *fr_pool = NULL;491491- const int m = 1 + dev->use_fast_reg;491491+ const int m = dev->use_fast_reg ? 3 : 1;492492 struct ib_cq_init_attr cq_attr = {};493493 int ret;494494···994994995995 ret = srp_lookup_path(ch);996996 if (ret)997997- return ret;997997+ goto out;998998999999 while (1) {10001000 init_completion(&ch->done);10011001 ret = srp_send_req(ch, multich);10021002 if (ret)10031003- return ret;10031003+ goto out;10041004 ret = wait_for_completion_interruptible(&ch->done);10051005 if (ret < 0)10061006- return ret;10061006+ goto out;1007100710081008 /*10091009 * The CM event handling code will set status to···10111011 * back, or SRP_DLID_REDIRECT if we get a lid/qp10121012 * redirect REJ back.10131013 */10141014- switch (ch->status) {10141014+ ret = ch->status;10151015+ switch (ret) {10151016 case 0:10161017 ch->connected = true;10171017- return 0;10181018+ goto out;1018101910191020 case SRP_PORT_REDIRECT:10201021 ret = srp_lookup_path(ch);10211022 if (ret)10221022- return ret;10231023+ goto out;10231024 break;1024102510251026 case SRP_DLID_REDIRECT:···10291028 case SRP_STALE_CONN:10301029 shost_printk(KERN_ERR, target->scsi_host, PFX10311030 "giving up on stale connection\n");10321032- ch->status = -ECONNRESET;10331033- return ch->status;10311031+ ret = -ECONNRESET;10321032+ goto out;1034103310351034 default:10361036- return ch->status;10351035+ goto out;10371036 }10381037 }10381038+10391039+out:10401040+ return ret <= 0 ? 
ret : -ENODEV;10391041}1040104210411043static int srp_inv_rkey(struct srp_rdma_ch *ch, u32 rkey)···13131309}1314131013151311static int srp_map_finish_fr(struct srp_map_state *state,13161316- struct srp_rdma_ch *ch)13121312+ struct srp_rdma_ch *ch, int sg_nents)13171313{13181314 struct srp_target_port *target = ch->target;13191315 struct srp_device *dev = target->srp_host->srp_dev;···1328132413291325 WARN_ON_ONCE(!dev->use_fast_reg);1330132613311331- if (state->sg_nents == 0)13271327+ if (sg_nents == 0)13321328 return 0;1333132913341334- if (state->sg_nents == 1 && target->global_mr) {13301330+ if (sg_nents == 1 && target->global_mr) {13351331 srp_map_desc(state, sg_dma_address(state->sg),13361332 sg_dma_len(state->sg),13371333 target->global_mr->rkey);···13451341 rkey = ib_inc_rkey(desc->mr->rkey);13461342 ib_update_fast_reg_key(desc->mr, rkey);1347134313481348- n = ib_map_mr_sg(desc->mr, state->sg, state->sg_nents,13491349- dev->mr_page_size);13441344+ n = ib_map_mr_sg(desc->mr, state->sg, sg_nents, dev->mr_page_size);13501345 if (unlikely(n < 0))13511346 return n;13521347···14511448 state->fr.next = req->fr_list;14521449 state->fr.end = req->fr_list + ch->target->cmd_sg_cnt;14531450 state->sg = scat;14541454- state->sg_nents = scsi_sg_count(req->scmnd);1455145114561456- while (state->sg_nents) {14521452+ while (count) {14571453 int i, n;1458145414591459- n = srp_map_finish_fr(state, ch);14551455+ n = srp_map_finish_fr(state, ch, count);14601456 if (unlikely(n < 0))14611457 return n;1462145814631463- state->sg_nents -= n;14591459+ count -= n;14641460 for (i = 0; i < n; i++)14651461 state->sg = sg_next(state->sg);14661462 }···1519151715201518 if (dev->use_fast_reg) {15211519 state.sg = idb_sg;15221522- state.sg_nents = 1;15231520 sg_set_buf(idb_sg, req->indirect_desc, idb_len);15241521 idb_sg->dma_address = req->indirect_dma_addr; /* hack! 
*/15251525- ret = srp_map_finish_fr(&state, ch);15221522+#ifdef CONFIG_NEED_SG_DMA_LENGTH15231523+ idb_sg->dma_length = idb_sg->length; /* hack^2 */15241524+#endif15251525+ ret = srp_map_finish_fr(&state, ch, 1);15261526 if (ret < 0)15271527 return ret;15281528 } else if (dev->use_fmr) {···16591655 return ret;16601656 req->nmdesc++;16611657 } else {16621662- idb_rkey = target->global_mr->rkey;16581658+ idb_rkey = cpu_to_be32(target->global_mr->rkey);16631659 }1664166016651661 indirect_hdr->table_desc.va = cpu_to_be64(req->indirect_dma_addr);
+1-4
drivers/infiniband/ulp/srp/ib_srp.h
···300300 dma_addr_t base_dma_addr;301301 u32 dma_len;302302 u32 total_len;303303- union {304304- unsigned int npages;305305- int sg_nents;306306- };303303+ unsigned int npages;307304 unsigned int nmdesc;308305 unsigned int ndesc;309306};
···18191819 input_set_abs_params(inputdev, ABS_TILT_Y, AIPTEK_TILT_MIN, AIPTEK_TILT_MAX, 0, 0);18201820 input_set_abs_params(inputdev, ABS_WHEEL, AIPTEK_WHEEL_MIN, AIPTEK_WHEEL_MAX - 1, 0, 0);1821182118221822+ /* Verify that a device really has an endpoint */18231823+ if (intf->altsetting[0].desc.bNumEndpoints < 1) {18241824+ dev_err(&intf->dev,18251825+ "interface has %d endpoints, but must have minimum 1\n",18261826+ intf->altsetting[0].desc.bNumEndpoints);18271827+ err = -EINVAL;18281828+ goto fail3;18291829+ }18221830 endpoint = &intf->altsetting[0].endpoint[0].desc;1823183118241832 /* Go set up our URB, which is called when the tablet receives···18691861 if (i == ARRAY_SIZE(speeds)) {18701862 dev_info(&intf->dev,18711863 "Aiptek tried all speeds, no sane response\n");18641864+ err = -EINVAL;18721865 goto fail3;18731866 }18741867
···6363 return bsearch(n, key, 0);6464}65656666+static int upper_bound(struct btree_node *n, uint64_t key)6767+{6868+ return bsearch(n, key, 1);6969+}7070+6671void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,6772 struct dm_btree_value_type *vt)6873{···257252 dm_tm_unlock(s->tm, f->b);258253}259254255255+static void unlock_all_frames(struct del_stack *s)256256+{257257+ struct frame *f;258258+259259+ while (unprocessed_frames(s)) {260260+ f = s->spine + s->top--;261261+ dm_tm_unlock(s->tm, f->b);262262+ }263263+}264264+260265int dm_btree_del(struct dm_btree_info *info, dm_block_t root)261266{262267 int r;···323308 pop_frame(s);324309 }325310 }326326-327311out:312312+ if (r) {313313+ /* cleanup all frames of del_stack */314314+ unlock_all_frames(s);315315+ }328316 kfree(s);317317+329318 return r;330319}331320EXPORT_SYMBOL_GPL(dm_btree_del);···410391 return r;411392}412393EXPORT_SYMBOL_GPL(dm_btree_lookup);394394+395395+static int dm_btree_lookup_next_single(struct dm_btree_info *info, dm_block_t root,396396+ uint64_t key, uint64_t *rkey, void *value_le)397397+{398398+ int r, i;399399+ uint32_t flags, nr_entries;400400+ struct dm_block *node;401401+ struct btree_node *n;402402+403403+ r = bn_read_lock(info, root, &node);404404+ if (r)405405+ return r;406406+407407+ n = dm_block_data(node);408408+ flags = le32_to_cpu(n->header.flags);409409+ nr_entries = le32_to_cpu(n->header.nr_entries);410410+411411+ if (flags & INTERNAL_NODE) {412412+ i = lower_bound(n, key);413413+ if (i < 0 || i >= nr_entries) {414414+ r = -ENODATA;415415+ goto out;416416+ }417417+418418+ r = dm_btree_lookup_next_single(info, value64(n, i), key, rkey, value_le);419419+ if (r == -ENODATA && i < (nr_entries - 1)) {420420+ i++;421421+ r = dm_btree_lookup_next_single(info, value64(n, i), key, rkey, value_le);422422+ }423423+424424+ } else {425425+ i = upper_bound(n, key);426426+ if (i < 0 || i >= nr_entries) {427427+ r = -ENODATA;428428+ goto out;429429+ }430430+431431+ *rkey = 
le64_to_cpu(n->keys[i]);432432+ memcpy(value_le, value_ptr(n, i), info->value_type.size);433433+ }434434+out:435435+ dm_tm_unlock(info->tm, node);436436+ return r;437437+}438438+439439+int dm_btree_lookup_next(struct dm_btree_info *info, dm_block_t root,440440+ uint64_t *keys, uint64_t *rkey, void *value_le)441441+{442442+ unsigned level;443443+ int r = -ENODATA;444444+ __le64 internal_value_le;445445+ struct ro_spine spine;446446+447447+ init_ro_spine(&spine, info);448448+ for (level = 0; level < info->levels - 1u; level++) {449449+ r = btree_lookup_raw(&spine, root, keys[level],450450+ lower_bound, rkey,451451+ &internal_value_le, sizeof(uint64_t));452452+ if (r)453453+ goto out;454454+455455+ if (*rkey != keys[level]) {456456+ r = -ENODATA;457457+ goto out;458458+ }459459+460460+ root = le64_to_cpu(internal_value_le);461461+ }462462+463463+ r = dm_btree_lookup_next_single(info, root, keys[level], rkey, value_le);464464+out:465465+ exit_ro_spine(&spine);466466+ return r;467467+}468468+469469+EXPORT_SYMBOL_GPL(dm_btree_lookup_next);413470414471/*415472 * Splits a node by creating a sibling node and shifting half the nodes···568473569474 r = insert_at(sizeof(__le64), pn, parent_index + 1,570475 le64_to_cpu(rn->keys[0]), &location);571571- if (r)476476+ if (r) {477477+ unlock_block(s->info, right);572478 return r;479479+ }573480574481 if (key < le64_to_cpu(rn->keys[0])) {575482 unlock_block(s->info, right);
+11-3
drivers/md/persistent-data/dm-btree.h
···110110 uint64_t *keys, void *value_le);111111112112/*113113+ * Tries to find the first key where the bottom level key is >= to that114114+ * given. Useful for skipping empty sections of the btree.115115+ */116116+int dm_btree_lookup_next(struct dm_btree_info *info, dm_block_t root,117117+ uint64_t *keys, uint64_t *rkey, void *value_le);118118+119119+/*113120 * Insertion (or overwrite an existing value). O(ln(n))114121 */115122int dm_btree_insert(struct dm_btree_info *info, dm_block_t root,···142135 uint64_t *keys, dm_block_t *new_root);143136144137/*145145- * Removes values between 'keys' and keys2, where keys2 is keys with the146146- * final key replaced with 'end_key'. 'end_key' is the one-past-the-end147147- * value. 'keys' may be altered.138138+ * Removes a _contiguous_ run of values starting from 'keys' and not139139+ * reaching keys2 (where keys2 is keys with the final key replaced with140140+ * 'end_key'). 'end_key' is the one-past-the-end value. 'keys' may be141141+ * altered.148142 */149143int dm_btree_remove_leaves(struct dm_btree_info *info, dm_block_t root,150144 uint64_t *keys, uint64_t end_key,
···805805{806806 int i;807807808808- for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS - 1; i++)808808+ for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS; i++)809809 if (itv->card->video_inputs[i].video_type == 0)810810 break;811811 itv->nof_inputs = i;812812- for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS - 1; i++)812812+ for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS; i++)813813 if (itv->card->audio_inputs[i].audio_type == 0)814814 break;815815 itv->nof_audio_inputs = i;
+1-1
drivers/media/usb/airspy/airspy.c
···134134 int urbs_submitted;135135136136 /* USB control message buffer */137137- #define BUF_SIZE 24137137+ #define BUF_SIZE 128138138 u8 buf[BUF_SIZE];139139140140 /* Current configuration */
+12-1
drivers/media/usb/hackrf/hackrf.c
···2424#include <media/videobuf2-v4l2.h>2525#include <media/videobuf2-vmalloc.h>26262727+/*2828+ * Used Avago MGA-81563 RF amplifier could be destroyed pretty easily with too2929+ * strong signal or transmitting to bad antenna.3030+ * Set RF gain control to 'grabbed' state by default for sure.3131+ */3232+static bool hackrf_enable_rf_gain_ctrl;3333+module_param_named(enable_rf_gain_ctrl, hackrf_enable_rf_gain_ctrl, bool, 0644);3434+MODULE_PARM_DESC(enable_rf_gain_ctrl, "enable RX/TX RF amplifier control (warn: could damage amplifier)");3535+2736/* HackRF USB API commands (from HackRF Library) */2837enum {2938 CMD_SET_TRANSCEIVER_MODE = 0x01,···14601451 dev_err(dev->dev, "Could not initialize controls\n");14611452 goto err_v4l2_ctrl_handler_free_rx;14621453 }14541454+ v4l2_ctrl_grab(dev->rx_rf_gain, !hackrf_enable_rf_gain_ctrl);14631455 v4l2_ctrl_handler_setup(&dev->rx_ctrl_handler);1464145614651457 /* Register controls for transmitter */···14811471 dev_err(dev->dev, "Could not initialize controls\n");14821472 goto err_v4l2_ctrl_handler_free_tx;14831473 }14741474+ v4l2_ctrl_grab(dev->tx_rf_gain, !hackrf_enable_rf_gain_ctrl);14841475 v4l2_ctrl_handler_setup(&dev->tx_ctrl_handler);1485147614861477 /* Register the v4l2_device structure */···15411530err_kfree:15421531 kfree(dev);15431532err:15441544- dev_dbg(dev->dev, "failed=%d\n", ret);15331533+ dev_dbg(&intf->dev, "failed=%d\n", ret);15451534 return ret;15461535}15471536
+1-1
drivers/misc/cxl/native.c
···
 {
    u64 sr = 0;

+   set_endian(sr);
    if (ctx->master)
        sr |= CXL_PSL_SR_An_MP;
    if (mfspr(SPRN_LPCR) & LPCR_TC)
···
        sr |= CXL_PSL_SR_An_HV;
    } else {
        sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
-       set_endian(sr);
        sr &= ~(CXL_PSL_SR_An_HV);
        if (!test_tsk_thread_flag(current, TIF_32BIT))
            sr |= CXL_PSL_SR_An_SF;
+10-2
drivers/mtd/ofpart.c
···

    ofpart_node = of_get_child_by_name(mtd_node, "partitions");
    if (!ofpart_node) {
-       pr_warn("%s: 'partitions' subnode not found on %s. Trying to parse direct subnodes as partitions.\n",
-           master->name, mtd_node->full_name);
+       /*
+        * We might get here even when ofpart isn't used at all (e.g.,
+        * when using another parser), so don't be louder than
+        * KERN_DEBUG
+        */
+       pr_debug("%s: 'partitions' subnode not found on %s. Trying to parse direct subnodes as partitions.\n",
+            master->name, mtd_node->full_name);
        ofpart_node = mtd_node;
        dedicated = false;
+   } else if (!of_device_is_compatible(ofpart_node, "fixed-partitions")) {
+       /* The 'partitions' subnode might be used by another parser */
+       return 0;
    }

    /* First count the subnodes */
+2-2
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
···
    usleep_range(10, 15);

    /* Poll Until Poll Condition */
-   while (count-- && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
+   while (--count && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
        usleep_range(500, 600);

    if (!count)
···
    /* Poll Until Poll Condition */
    for (i = 0; i < pdata->tx_q_count; i++) {
        count = 2000;
-       while (count-- && XGMAC_MTL_IOREAD_BITS(pdata, i,
+       while (--count && XGMAC_MTL_IOREAD_BITS(pdata, i,
                            MTL_Q_TQOMR, FTQ))
            usleep_range(500, 600);
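The `count--` to `--count` change above matters because the post-decrement form still decrements on the final, failing test: on timeout the counter ends at -1, so the driver's `if (!count)` timeout check never fires. A minimal userspace sketch of the two polling loops (the `hw_busy()` helper is made up to model hardware that never becomes ready):

```c
#include <assert.h>

/* Simulated "hardware" that stays busy forever, to model a timeout. */
static int hw_busy(void)
{
    return 1;
}

/* Buggy form: count-- also runs on the final test, so on timeout the
 * counter ends at -1 and a later !count check misses the timeout. */
static int poll_postdec(int count)
{
    while (count-- && hw_busy())
        ;
    return count;   /* -1 on timeout */
}

/* Fixed form: --count stops the loop with the counter exactly at 0,
 * so !count correctly reports the timeout. */
static int poll_predec(int count)
{
    while (--count && hw_busy())
        ;
    return count;   /* 0 on timeout */
}
```

The same off-by-one fix appears again below in the qlcnic and qlge hunks.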
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
···
        sizeof(struct atl1c_recv_ret_status) * rx_desc_count +
        8 * 4;

-   ring_header->desc = pci_alloc_consistent(pdev, ring_header->size,
-                        &ring_header->dma);
+   ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,
+                       &ring_header->dma, GFP_KERNEL);
    if (unlikely(!ring_header->desc)) {
-       dev_err(&pdev->dev, "pci_alloc_consistend failed\n");
+       dev_err(&pdev->dev, "could not get memory for DMA buffer\n");
        goto err_nomem;
    }
-   memset(ring_header->desc, 0, ring_header->size);
    /* init TPD ring */

    tpd_ring[0].dma = roundup(ring_header->dma, 8);
+1
drivers/net/ethernet/aurora/Kconfig
···

 config AURORA_NB8800
    tristate "Aurora AU-NB8800 support"
+   depends on HAS_DMA
    select PHYLIB
    help
      Support for the AU-NB8800 gigabit Ethernet controller.
+32-14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
    req.ver_upd = DRV_VER_UPD;

    if (BNXT_PF(bp)) {
-       unsigned long vf_req_snif_bmap[4];
+       DECLARE_BITMAP(vf_req_snif_bmap, 256);
        u32 *data = (u32 *)vf_req_snif_bmap;

-       memset(vf_req_snif_bmap, 0, 32);
+       memset(vf_req_snif_bmap, 0, sizeof(vf_req_snif_bmap));
        for (i = 0; i < ARRAY_SIZE(bnxt_vf_req_snif); i++)
            __set_bit(bnxt_vf_req_snif[i], vf_req_snif_bmap);

-       for (i = 0; i < 8; i++) {
-           req.vf_req_fwd[i] = cpu_to_le32(*data);
-           data++;
-       }
+       for (i = 0; i < 8; i++)
+           req.vf_req_fwd[i] = cpu_to_le32(data[i]);
+
        req.enables |=
            cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_VF_REQ_FWD);
    }
···
        bp->nge_port_cnt = 1;
    }

-   bp->state = BNXT_STATE_OPEN;
+   set_bit(BNXT_STATE_OPEN, &bp->state);
    bnxt_enable_int(bp);
    /* Enable TX queues */
    bnxt_tx_enable(bp);
···
    /* Change device state to avoid TX queue wake up's */
    bnxt_tx_disable(bp);

-   bp->state = BNXT_STATE_CLOSED;
-   cancel_work_sync(&bp->sp_task);
+   clear_bit(BNXT_STATE_OPEN, &bp->state);
+   smp_mb__after_atomic();
+   while (test_bit(BNXT_STATE_IN_SP_TASK, &bp->state))
+       msleep(20);

    /* Flush rings before disabling interrupts */
    bnxt_shutdown_nic(bp, irq_re_init);
···
 static void bnxt_reset_task(struct bnxt *bp)
 {
    bnxt_dbg_dump_states(bp);
-   if (netif_running(bp->dev))
-       bnxt_tx_disable(bp); /* prevent tx timout again */
+   if (netif_running(bp->dev)) {
+       bnxt_close_nic(bp, false, false);
+       bnxt_open_nic(bp, false, false);
+   }
 }

 static void bnxt_tx_timeout(struct net_device *dev)
···
    struct bnxt *bp = container_of(work, struct bnxt, sp_task);
    int rc;

-   if (bp->state != BNXT_STATE_OPEN)
+   set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+   smp_mb__after_atomic();
+   if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
+       clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
        return;
+   }

    if (test_and_clear_bit(BNXT_RX_MASK_SP_EVENT, &bp->sp_event))
        bnxt_cfg_rx_mode(bp);
···
        bnxt_hwrm_tunnel_dst_port_free(
            bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
    }
-   if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event))
+   if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event)) {
+       /* bnxt_reset_task() calls bnxt_close_nic() which waits
+        * for BNXT_STATE_IN_SP_TASK to clear.
+        */
+       clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+       rtnl_lock();
        bnxt_reset_task(bp);
+       set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+       rtnl_unlock();
+   }
+
+   smp_mb__before_atomic();
+   clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
 }

 static int bnxt_init_board(struct pci_dev *pdev, struct net_device *dev)
···
    bp->timer.function = bnxt_timer;
    bp->current_interval = BNXT_TIMER_INTERVAL;

-   bp->state = BNXT_STATE_CLOSED;
+   clear_bit(BNXT_STATE_OPEN, &bp->state);

    return 0;
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
···
 #ifdef CONFIG_BNXT_SRIOV
 static int bnxt_vf_ndo_prep(struct bnxt *bp, int vf_id)
 {
-   if (bp->state != BNXT_STATE_OPEN) {
+   if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
        netdev_err(bp->dev, "vf ndo called though PF is down\n");
        return -EINVAL;
    }
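The bnxt patch above replaces a plain `bp->state` variable with atomic bit operations so the slow-path worker and the ndo callbacks can flip OPEN/IN_SP_TASK concurrently without lost updates. A rough userspace analogue of the kernel's `set_bit()`/`clear_bit()`/`test_bit()` on a flags word, sketched with C11 atomics (the bit names mirror the patch; the helpers themselves are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

#define STATE_OPEN       0
#define STATE_IN_SP_TASK 1

/* Atomically set bit nr in *addr (read-modify-write, no lost updates). */
static void set_bit_ul(int nr, atomic_ulong *addr)
{
    atomic_fetch_or(addr, 1UL << nr);
}

/* Atomically clear bit nr in *addr. */
static void clear_bit_ul(int nr, atomic_ulong *addr)
{
    atomic_fetch_and(addr, ~(1UL << nr));
}

/* Non-destructive test of bit nr. */
static int test_bit_ul(int nr, atomic_ulong *addr)
{
    return (atomic_load(addr) >> nr) & 1;
}
```

With separate bits, clearing OPEN cannot clobber a concurrent IN_SP_TASK update, which is exactly what the plain assignment `bp->state = BNXT_STATE_CLOSED` could do.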
+18-21
drivers/net/ethernet/cavium/thunder/nic_main.c
···
 #define NIC_GET_BGX_FROM_VF_LMAC_MAP(map)  ((map >> 4) & 0xF)
 #define NIC_GET_LMAC_FROM_VF_LMAC_MAP(map) (map & 0xF)
    u8          vf_lmac_map[MAX_LMAC];
-   u8          lmac_cnt;
    struct delayed_work dwork;
    struct workqueue_struct *check_link;
    u8          link[MAX_LMAC];
···
    u64 lmac_credit;

    nic->num_vf_en = 0;
-   nic->lmac_cnt = 0;

    for (bgx = 0; bgx < NIC_MAX_BGX; bgx++) {
        if (!(bgx_map & (1 << bgx)))
···
        nic->vf_lmac_map[next_bgx_lmac++] =
            NIC_SET_VF_LMAC_MAP(bgx, lmac);
        nic->num_vf_en += lmac_cnt;
-       nic->lmac_cnt += lmac_cnt;

        /* Program LMAC credits */
        lmac_credit = (1ull << 1); /* channel credit enable */
···
    return 0;
 }

+static void nic_enable_vf(struct nicpf *nic, int vf, bool enable)
+{
+   int bgx, lmac;
+
+   nic->vf_enabled[vf] = enable;
+
+   if (vf >= nic->num_vf_en)
+       return;
+
+   bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+   lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+   bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, enable);
+}
+
 /* Interrupt handler to handle mailbox messages from VFs */
 static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 {
···
        break;
    case NIC_MBOX_MSG_CFG_DONE:
        /* Last message of VF config msg sequence */
-       nic->vf_enabled[vf] = true;
-       if (vf >= nic->lmac_cnt)
-           goto unlock;
-
-       bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-       lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-       bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, true);
+       nic_enable_vf(nic, vf, true);
        goto unlock;
    case NIC_MBOX_MSG_SHUTDOWN:
        /* First msg in VF teardown sequence */
-       nic->vf_enabled[vf] = false;
        if (vf >= nic->num_vf_en)
            nic->sqs_used[vf - nic->num_vf_en] = false;
        nic->pqs_vf[vf] = 0;
-
-       if (vf >= nic->lmac_cnt)
-           break;
-
-       bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-       lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-       bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, false);
+       nic_enable_vf(nic, vf, false);
        break;
    case NIC_MBOX_MSG_ALLOC_SQS:
        nic_alloc_sqs(nic, &mbx.sqs_alloc);
···

    mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;

-   for (vf = 0; vf < nic->lmac_cnt; vf++) {
+   for (vf = 0; vf < nic->num_vf_en; vf++) {
        /* Poll only if VF is UP */
        if (!nic->vf_enabled[vf])
            continue;
+8-20
drivers/net/ethernet/ezchip/nps_enet.c
···
        *reg = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
    else { /* !dst_is_aligned */
        for (i = 0; i < len; i++, reg++) {
-           u32 buf =
-               nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-
-           /* to accommodate word-unaligned address of "reg"
-            * we have to do memcpy_toio() instead of simple "=".
-            */
-           memcpy_toio((void __iomem *)reg, &buf, sizeof(buf));
+           u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
+           put_unaligned(buf, reg);
        }
    }

    /* copy last bytes (if any) */
    if (last) {
        u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-
-       memcpy_toio((void __iomem *)reg, &buf, last);
+       memcpy((u8 *)reg, &buf, last);
    }
 }
···
    struct nps_enet_tx_ctl tx_ctrl;
    short length = skb->len;
    u32 i, len = DIV_ROUND_UP(length, sizeof(u32));
-   u32 *src = (u32 *)virt_to_phys(skb->data);
+   u32 *src = (void *)skb->data;
    bool src_is_aligned = IS_ALIGNED((unsigned long)src, sizeof(u32));

    tx_ctrl.value = 0;
···
    if (src_is_aligned)
        for (i = 0; i < len; i++, src++)
            nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, *src);
-   else { /* !src_is_aligned */
-       for (i = 0; i < len; i++, src++) {
-           u32 buf;
-
-           /* to accommodate word-unaligned address of "src"
-            * we have to do memcpy_fromio() instead of simple "="
-            */
-           memcpy_fromio(&buf, (void __iomem *)src, sizeof(buf));
-           nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, buf);
-       }
-   }
+   else /* !src_is_aligned */
+       for (i = 0; i < len; i++, src++)
+           nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF,
+                    get_unaligned(src));

    /* Write the length of the Frame */
    tx_ctrl.nt = length;
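The nps_enet fix swaps the misused `memcpy_toio()`/`memcpy_fromio()` (meant for MMIO, not ordinary memory) for the kernel's `get_unaligned()`/`put_unaligned()` accessors. Outside the kernel the same idea is expressed with `memcpy` through a byte pointer, which is legal at any alignment, unlike a direct pointer dereference. A small sketch (helper names are my own, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable equivalents of get_unaligned()/put_unaligned(): memcpy is
 * defined for any alignment, whereas *(uint32_t *)p on a misaligned p
 * is undefined behaviour and faults on some CPUs (e.g. classic ARC/ARM). */
static uint32_t get_unaligned_u32(const void *p)
{
    uint32_t v;

    memcpy(&v, p, sizeof(v));
    return v;
}

static void put_unaligned_u32(uint32_t v, void *p)
{
    memcpy(p, &v, sizeof(v));
}
```

Compilers turn these `memcpy` calls into single loads/stores on architectures that support unaligned access, so there is no penalty in the aligned case.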
+1-1
drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
···
    cbd_t __iomem *prev_bd;
    cbd_t __iomem *last_tx_bd;

-   last_tx_bd = fep->tx_bd_base + (fpi->tx_ring * sizeof(cbd_t));
+   last_tx_bd = fep->tx_bd_base + ((fpi->tx_ring - 1) * sizeof(cbd_t));

    /* get the current bd held in TBPTR and scan back from this point */
    recheck_bd = curr_tbptr = (cbd_t __iomem *)
+1-1
drivers/net/ethernet/freescale/fsl_pq_mdio.c
···
     * address).  Print error message but continue anyway.
     */
    if ((void *)tbipa > priv->map + resource_size(&res) - 4)
-       dev_err(&pdev->dev, "invalid register map (should be at least 0x%04x to contain TBI address)\n",
+       dev_err(&pdev->dev, "invalid register map (should be at least 0x%04zx to contain TBI address)\n",
            ((void *)tbipa - priv->map) + 4);

    iowrite32be(be32_to_cpup(prop), tbipa);
drivers/net/ethernet/intel/i40e/i40e_adminq.c
···
        goto init_adminq_exit;
    }

-   /* initialize locks */
-   mutex_init(&hw->aq.asq_mutex);
-   mutex_init(&hw->aq.arq_mutex);
-
    /* Set up register offsets */
    i40e_adminq_init_regs(hw);
···

    i40e_shutdown_asq(hw);
    i40e_shutdown_arq(hw);
-
-   /* destroy the locks */

    if (hw->nvm_buff.va)
        i40e_free_virt_mem(hw, &hw->nvm_buff);
+10-1
drivers/net/ethernet/intel/i40e/i40e_main.c
···
    /* set up a default setting for link flow control */
    pf->hw.fc.requested_mode = I40E_FC_NONE;

+   /* set up the locks for the AQ, do this only once in probe
+    * and destroy them only once in remove
+    */
+   mutex_init(&hw->aq.asq_mutex);
+   mutex_init(&hw->aq.arq_mutex);
+
    err = i40e_init_adminq(hw);

    /* provide nvm, fw, api versions */
···
    set_bit(__I40E_DOWN, &pf->state);
    del_timer_sync(&pf->service_timer);
    cancel_work_sync(&pf->service_task);
-   i40e_fdir_teardown(pf);

    if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
        i40e_free_vfs(pf);
···
        dev_warn(&pdev->dev,
             "Failed to destroy the Admin Queue resources: %d\n",
             ret_code);
+
+   /* destroy the locks only once, here */
+   mutex_destroy(&hw->aq.arq_mutex);
+   mutex_destroy(&hw->aq.asq_mutex);

    /* Clear all dynamic memory lists of rings, q_vectors, and VSIs */
    i40e_clear_interrupt_scheme(pf);
-6
drivers/net/ethernet/intel/i40evf/i40e_adminq.c
···
        goto init_adminq_exit;
    }

-   /* initialize locks */
-   mutex_init(&hw->aq.asq_mutex);
-   mutex_init(&hw->aq.arq_mutex);
-
    /* Set up register offsets */
    i40e_adminq_init_regs(hw);
···

    i40e_shutdown_asq(hw);
    i40e_shutdown_arq(hw);
-
-   /* destroy the locks */

    if (hw->nvm_buff.va)
        i40e_free_virt_mem(hw, &hw->nvm_buff);
+10
drivers/net/ethernet/intel/i40evf/i40evf_main.c
···
    hw->bus.device = PCI_SLOT(pdev->devfn);
    hw->bus.func = PCI_FUNC(pdev->devfn);

+   /* set up the locks for the AQ, do this only once in probe
+    * and destroy them only once in remove
+    */
+   mutex_init(&hw->aq.asq_mutex);
+   mutex_init(&hw->aq.arq_mutex);
+
    INIT_LIST_HEAD(&adapter->mac_filter_list);
    INIT_LIST_HEAD(&adapter->vlan_filter_list);
···

    if (hw->aq.asq.count)
        i40evf_shutdown_adminq(hw);
+
+   /* destroy the locks only once, here */
+   mutex_destroy(&hw->aq.arq_mutex);
+   mutex_destroy(&hw->aq.asq_mutex);

    iounmap(hw->hw_addr);
    pci_release_regions(pdev);
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_vnic.c
···
    u32 state;

    state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
-   while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit--) {
+   while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit) {
+       idc->vnic_wait_limit--;
        msleep(1000);
        state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
    }
+3-2
drivers/net/ethernet/qlogic/qlge/qlge_main.c
···

    /* Wait for an outstanding reset to complete. */
    if (!test_bit(QL_ADAPTER_UP, &qdev->flags)) {
-       int i = 3;
-       while (i-- && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
+       int i = 4;
+
+       while (--i && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
            netif_err(qdev, ifup, qdev->ndev,
                  "Waiting for adapter UP...\n");
            ssleep(1);
+2-3
drivers/net/ethernet/qualcomm/qca_spi.c
···
    netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
            jiffies, jiffies - dev->trans_start);
    qca->net_dev->stats.tx_errors++;
-   /* wake the queue if there is room */
-   if (qcaspi_tx_ring_has_space(&qca->txr))
-       netif_wake_queue(dev);
+   /* Trigger tx queue flush and QCA7000 reset */
+   qca->sync = QCASPI_SYNC_UNKNOWN;
 }

 static int
+4-1
drivers/net/ethernet/renesas/ravb_main.c
···
        netdev_info(ndev, "limited PHY to 100Mbit/s\n");
    }

+   /* 10BASE is not supported */
+   phydev->supported &= ~PHY_10BT_FEATURES;
+
    netdev_info(ndev, "attached PHY %d (IRQ %d) to driver %s\n",
            phydev->addr, phydev->irq, phydev->drv->name);
···
    "rx_queue_1_mcast_packets",
    "rx_queue_1_errors",
    "rx_queue_1_crc_errors",
-   "rx_queue_1_frame_errors_",
+   "rx_queue_1_frame_errors",
    "rx_queue_1_length_errors",
    "rx_queue_1_missed_errors",
    "rx_queue_1_over_errors",
+41-14
drivers/net/ethernet/renesas/sh_eth.c
···
                  NETIF_MSG_RX_ERR| \
                  NETIF_MSG_TX_ERR)

+#define SH_ETH_OFFSET_INVALID  ((u16)~0)
+
 #define SH_ETH_OFFSET_DEFAULTS         \
    [0 ... SH_ETH_MAX_REGISTER_OFFSET - 1] = SH_ETH_OFFSET_INVALID
···

 static void sh_eth_rcv_snd_disable(struct net_device *ndev);
 static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev);
+
+static void sh_eth_write(struct net_device *ndev, u32 data, int enum_index)
+{
+   struct sh_eth_private *mdp = netdev_priv(ndev);
+   u16 offset = mdp->reg_offset[enum_index];
+
+   if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+       return;
+
+   iowrite32(data, mdp->addr + offset);
+}
+
+static u32 sh_eth_read(struct net_device *ndev, int enum_index)
+{
+   struct sh_eth_private *mdp = netdev_priv(ndev);
+   u16 offset = mdp->reg_offset[enum_index];
+
+   if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+       return ~0U;
+
+   return ioread32(mdp->addr + offset);
+}

 static bool sh_eth_is_gether(struct sh_eth_private *mdp)
 {
···
            break;
        }
        mdp->rx_skbuff[i] = skb;
-       rxdesc->addr = dma_addr;
+       rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
        rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP);

        /* Rx descriptor address set */
···
               entry, edmac_to_cpu(mdp, txdesc->status));
        /* Free the original skb. */
        if (mdp->tx_skbuff[entry]) {
-           dma_unmap_single(&ndev->dev, txdesc->addr,
+           dma_unmap_single(&ndev->dev,
+                    edmac_to_cpu(mdp, txdesc->addr),
                     txdesc->buffer_length, DMA_TO_DEVICE);
            dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
            mdp->tx_skbuff[entry] = NULL;
···
        if (mdp->cd->shift_rd0)
            desc_status >>= 16;

+       skb = mdp->rx_skbuff[entry];
        if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 |
                   RD_RFS5 | RD_RFS6 | RD_RFS10)) {
            ndev->stats.rx_errors++;
···
                ndev->stats.rx_missed_errors++;
            if (desc_status & RD_RFS10)
                ndev->stats.rx_over_errors++;
-       } else {
+       } else if (skb) {
+           dma_addr = edmac_to_cpu(mdp, rxdesc->addr);
            if (!mdp->cd->hw_swap)
                sh_eth_soft_swap(
-                   phys_to_virt(ALIGN(rxdesc->addr, 4)),
+                   phys_to_virt(ALIGN(dma_addr, 4)),
                    pkt_len + 2);
-           skb = mdp->rx_skbuff[entry];
            mdp->rx_skbuff[entry] = NULL;
            if (mdp->cd->rpadir)
                skb_reserve(skb, NET_IP_ALIGN);
-           dma_unmap_single(&ndev->dev, rxdesc->addr,
+           dma_unmap_single(&ndev->dev, dma_addr,
                     ALIGN(mdp->rx_buf_sz, 32),
                     DMA_FROM_DEVICE);
            skb_put(skb, pkt_len);
···
            mdp->rx_skbuff[entry] = skb;

            skb_checksum_none_assert(skb);
-           rxdesc->addr = dma_addr;
+           rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
        }
        dma_wmb(); /* RACT bit must be set after all the above writes */
        if (entry >= mdp->num_rx_ring - 1)
···
    /* Free all the skbuffs in the Rx queue. */
    for (i = 0; i < mdp->num_rx_ring; i++) {
        rxdesc = &mdp->rx_ring[i];
-       rxdesc->status = 0;
-       rxdesc->addr = 0xBADF00D0;
+       rxdesc->status = cpu_to_edmac(mdp, 0);
+       rxdesc->addr = cpu_to_edmac(mdp, 0xBADF00D0);
        dev_kfree_skb(mdp->rx_skbuff[i]);
        mdp->rx_skbuff[i] = NULL;
    }
···
 {
    struct sh_eth_private *mdp = netdev_priv(ndev);
    struct sh_eth_txdesc *txdesc;
+   dma_addr_t dma_addr;
    u32 entry;
    unsigned long flags;
···
    txdesc = &mdp->tx_ring[entry];
    /* soft swap. */
    if (!mdp->cd->hw_swap)
-       sh_eth_soft_swap(phys_to_virt(ALIGN(txdesc->addr, 4)),
-                skb->len + 2);
-   txdesc->addr = dma_map_single(&ndev->dev, skb->data, skb->len,
-                     DMA_TO_DEVICE);
-   if (dma_mapping_error(&ndev->dev, txdesc->addr)) {
+       sh_eth_soft_swap(PTR_ALIGN(skb->data, 4), skb->len + 2);
+   dma_addr = dma_map_single(&ndev->dev, skb->data, skb->len,
+                 DMA_TO_DEVICE);
+   if (dma_mapping_error(&ndev->dev, dma_addr)) {
        kfree_skb(skb);
        return NETDEV_TX_OK;
    }
+   txdesc->addr = cpu_to_edmac(mdp, dma_addr);
    txdesc->buffer_length = skb->len;

    dma_wmb(); /* TACT bit must be set after all the above writes */
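The sh_eth changes above route every descriptor-field access through `cpu_to_edmac()`/`edmac_to_cpu()` because the E-DMAC reads descriptors in a fixed byte order that need not match the CPU's. The principle can be sketched in plain C: pack a CPU value into an explicit byte layout and unpack it the same way, so the in-memory representation is identical on any host. The helpers below are illustrative stand-ins (assuming, for the example, a big-endian descriptor layout), not the driver's API:

```c
#include <assert.h>
#include <stdint.h>

/* Store a CPU-order value into a big-endian 4-byte field. */
static void cpu_to_be32_bytes(uint32_t v, uint8_t out[4])
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

/* Load a big-endian 4-byte field back into CPU order. */
static uint32_t be32_bytes_to_cpu(const uint8_t in[4])
{
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
           ((uint32_t)in[2] << 8) | (uint32_t)in[3];
}
```

A plain assignment like the old `rxdesc->addr = dma_addr` skips this conversion and silently works only when CPU and device endianness happen to agree.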
drivers/net/phy/micrel.c
···
 {
    const struct device *dev = &phydev->dev;
    const struct device_node *of_node = dev->of_node;
+   const struct device *dev_walker;

-   if (!of_node && dev->parent->of_node)
-       of_node = dev->parent->of_node;
+   /* The Micrel driver has a deprecated option to place phy OF
+    * properties in the MAC node. Walk up the tree of devices to
+    * find a device with an OF node.
+    */
+   dev_walker = &phydev->dev;
+   do {
+       of_node = dev_walker->of_node;
+       dev_walker = dev_walker->parent;
+   } while (!of_node && dev_walker);

    if (of_node) {
        ksz9021_load_values_from_of(phydev, of_node,
drivers/net/ppp/pptp.c
···
    struct pptp_opt *opt = &po->proto.pptp;
    int error = 0;

+   if (sockaddr_len < sizeof(struct sockaddr_pppox))
+       return -EINVAL;
+
    lock_sock(sk);

    opt->src_addr = sp->sa_addr.pptp;
···
    struct rtable *rt;
    struct flowi4 fl4;
    int error = 0;

+   if (sockaddr_len < sizeof(struct sockaddr_pppox))
+       return -EINVAL;
+
    if (sp->sa_protocol != PX_PROTO_PPTP)
        return -EINVAL;
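The pptp fix above validates the caller-supplied `sockaddr_len` before the handlers read fields of the `sockaddr_pppox` structure, so a short buffer from userspace can no longer be over-read. The shape of the check can be shown in isolation (the structure and error value here are illustrative stand-ins, not the kernel's definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative miniature of the pattern: reject a bind/connect whose
 * declared address length is smaller than the structure the handler is
 * about to read, instead of reading past the caller's buffer. */
struct fake_sockaddr {
    unsigned short family;
    unsigned char  data[14];
};

static int check_addr_len(size_t sockaddr_len)
{
    if (sockaddr_len < sizeof(struct fake_sockaddr))
        return -22;   /* mirrors -EINVAL */
    return 0;
}
```

The check must come before any field access, which is why the patch places it ahead of `lock_sock()` and the protocol test.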
+25-1
drivers/net/usb/cdc_mbim.c
···
    if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
        goto err;

-   ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
+   ret = cdc_ncm_bind_common(dev, intf, data_altsetting, dev->driver_info->data);
    if (ret)
        goto err;
···
    .tx_fixup = cdc_mbim_tx_fixup,
 };

+/* The specification explicitly allows NDPs to be placed anywhere in the
+ * frame, but some devices fail unless the NDP is placed after the IP
+ * packets.  Using the CDC_NCM_FLAG_NDP_TO_END flag to force this
+ * behaviour.
+ *
+ * Note: The current implementation of this feature restricts each NTB
+ * to a single NDP, implying that multiplexed sessions cannot share an
+ * NTB. This might affect performance for multiplexed sessions.
+ */
+static const struct driver_info cdc_mbim_info_ndp_to_end = {
+   .description = "CDC MBIM",
+   .flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN,
+   .bind = cdc_mbim_bind,
+   .unbind = cdc_mbim_unbind,
+   .manage_power = cdc_mbim_manage_power,
+   .rx_fixup = cdc_mbim_rx_fixup,
+   .tx_fixup = cdc_mbim_tx_fixup,
+   .data = CDC_NCM_FLAG_NDP_TO_END,
+};
+
 static const struct usb_device_id mbim_devs[] = {
    /* This duplicate NCM entry is intentional. MBIM devices can
     * be disguised as NCM by default, and this is necessary to
···
    /* ZLP conformance whitelist: All Ericsson MBIM devices */
    { USB_VENDOR_AND_INTERFACE_INFO(0x0bdb, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
      .driver_info = (unsigned long)&cdc_mbim_info,
+   },
+   /* Huawei E3372 fails unless NDP comes after the IP packets */
+   { USB_DEVICE_AND_INTERFACE_INFO(0x12d1, 0x157d, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+     .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end,
    },
    /* default entry */
    { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+9-1
drivers/net/usb/cdc_ncm.c
···
 * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and
 * the wNdpIndex field in the header is actually not consistent with reality. It will be later.
 */
-   if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+   if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
        if (ctx->delayed_ndp16->dwSignature == sign)
            return ctx->delayed_ndp16;
+
+       /* We can only push a single NDP to the end. Return
+        * NULL to send what we've already got and queue this
+        * skb for later.
+        */
+       else if (ctx->delayed_ndp16->dwSignature)
+           return NULL;
+   }

    /* follow the chain of NDPs, looking for a match */
    while (ndpoffset) {
+3-18
drivers/net/usb/r8152.c
···

    mutex_lock(&tp->control);

-   /* The WORK_ENABLE may be set when autoresume occurs */
-   if (test_bit(WORK_ENABLE, &tp->flags)) {
-       clear_bit(WORK_ENABLE, &tp->flags);
-       usb_kill_urb(tp->intr_urb);
-       cancel_delayed_work_sync(&tp->schedule);
-
-       /* disable the tx/rx, if the workqueue has enabled them. */
-       if (netif_carrier_ok(netdev))
-           tp->rtl_ops.disable(tp);
-   }
-
    tp->rtl_ops.up(tp);

    rtl8152_set_speed(tp, AUTONEG_ENABLE,
···
        rtl_stop_rx(tp);
    } else {
        mutex_lock(&tp->control);
-
-       /* The autosuspend may have been enabled and wouldn't
-        * be disable when autoresume occurs, because the
-        * netif_running() would be false.
-        */
-       rtl_runtime_suspend_enable(tp, false);

        tp->rtl_ops.down(tp);
···
        netif_device_attach(tp->netdev);
    }

-   if (netif_running(tp->netdev)) {
+   if (netif_running(tp->netdev) && tp->netdev->flags & IFF_UP) {
        if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
            rtl_runtime_suspend_enable(tp, false);
            clear_bit(SELECTIVE_SUSPEND, &tp->flags);
···
        }
        usb_submit_urb(tp->intr_urb, GFP_KERNEL);
    } else if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
+       if (tp->netdev->flags & IFF_UP)
+           rtl_runtime_suspend_enable(tp, false);
        clear_bit(SELECTIVE_SUSPEND, &tp->flags);
    }
+19-15
drivers/net/virtio_net.c
···

    /* CPU hot plug notifier */
    struct notifier_block nb;
+
+   /* Control VQ buffers: protected by the rtnl lock */
+   struct virtio_net_ctrl_hdr ctrl_hdr;
+   virtio_net_ctrl_ack ctrl_status;
+   u8 ctrl_promisc;
+   u8 ctrl_allmulti;
 };

 struct padded_vnet_hdr {
···
                 struct scatterlist *out)
 {
    struct scatterlist *sgs[4], hdr, stat;
-   struct virtio_net_ctrl_hdr ctrl;
-   virtio_net_ctrl_ack status = ~0;
    unsigned out_num = 0, tmp;

    /* Caller should know better */
    BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ));

-   ctrl.class = class;
-   ctrl.cmd = cmd;
+   vi->ctrl_status = ~0;
+   vi->ctrl_hdr.class = class;
+   vi->ctrl_hdr.cmd = cmd;
    /* Add header */
-   sg_init_one(&hdr, &ctrl, sizeof(ctrl));
+   sg_init_one(&hdr, &vi->ctrl_hdr, sizeof(vi->ctrl_hdr));
    sgs[out_num++] = &hdr;

    if (out)
        sgs[out_num++] = out;

    /* Add return status. */
-   sg_init_one(&stat, &status, sizeof(status));
+   sg_init_one(&stat, &vi->ctrl_status, sizeof(vi->ctrl_status));
    sgs[out_num] = &stat;

    BUG_ON(out_num + 1 > ARRAY_SIZE(sgs));
    virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC);

    if (unlikely(!virtqueue_kick(vi->cvq)))
-       return status == VIRTIO_NET_OK;
+       return vi->ctrl_status == VIRTIO_NET_OK;

    /* Spin for a response, the kick causes an ioport write, trapping
     * into the hypervisor, so the request should be handled immediately.
···
           !virtqueue_is_broken(vi->cvq))
        cpu_relax();

-   return status == VIRTIO_NET_OK;
+   return vi->ctrl_status == VIRTIO_NET_OK;
 }

 static int virtnet_set_mac_address(struct net_device *dev, void *p)
···
 {
    struct virtnet_info *vi = netdev_priv(dev);
    struct scatterlist sg[2];
-   u8 promisc, allmulti;
    struct virtio_net_ctrl_mac *mac_data;
    struct netdev_hw_addr *ha;
    int uc_count;
···
    if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_RX))
        return;

-   promisc = ((dev->flags & IFF_PROMISC) != 0);
-   allmulti = ((dev->flags & IFF_ALLMULTI) != 0);
+   vi->ctrl_promisc = ((dev->flags & IFF_PROMISC) != 0);
+   vi->ctrl_allmulti = ((dev->flags & IFF_ALLMULTI) != 0);

-   sg_init_one(sg, &promisc, sizeof(promisc));
+   sg_init_one(sg, &vi->ctrl_promisc, sizeof(vi->ctrl_promisc));

    if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX,
                  VIRTIO_NET_CTRL_RX_PROMISC, sg))
        dev_warn(&dev->dev, "Failed to %sable promisc mode.\n",
-            promisc ? "en" : "dis");
+            vi->ctrl_promisc ? "en" : "dis");

-   sg_init_one(sg, &allmulti, sizeof(allmulti));
+   sg_init_one(sg, &vi->ctrl_allmulti, sizeof(vi->ctrl_allmulti));

    if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX,
                  VIRTIO_NET_CTRL_RX_ALLMULTI, sg))
        dev_warn(&dev->dev, "Failed to %sable allmulti mode.\n",
-            allmulti ? "en" : "dis");
+            vi->ctrl_allmulti ? "en" : "dis");

    uc_count = netdev_uc_count(dev);
    mc_count = netdev_mc_count(dev);
+59-16
drivers/net/vxlan.c
···
    struct pcpu_sw_netstats *stats;
    union vxlan_addr saddr;
    int err = 0;
-   union vxlan_addr *remote_ip;

    /* For flow based devices, map all packets to VNI 0 */
    if (vs->flags & VXLAN_F_COLLECT_METADATA)
···
    if (!vxlan)
        goto drop;

-   remote_ip = &vxlan->default_dst.remote_ip;
    skb_reset_mac_header(skb);
    skb_scrub_packet(skb, !net_eq(vxlan->net, dev_net(vxlan->dev)));
    skb->protocol = eth_type_trans(skb, vxlan->dev);
···
    if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
        goto drop;

-   /* Re-examine inner Ethernet packet */
-   if (remote_ip->sa.sa_family == AF_INET) {
+   /* Get data from the outer IP header */
+   if (vxlan_get_sk_family(vs) == AF_INET) {
        oip = ip_hdr(skb);
        saddr.sin.sin_addr.s_addr = oip->saddr;
        saddr.sa.sa_family = AF_INET;
···
                 !(vxflags & VXLAN_F_UDP_CSUM));
 }

+#if IS_ENABLED(CONFIG_IPV6)
+static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+                     struct sk_buff *skb, int oif,
+                     const struct in6_addr *daddr,
+                     struct in6_addr *saddr)
+{
+   struct dst_entry *ndst;
+   struct flowi6 fl6;
+   int err;
+
+   memset(&fl6, 0, sizeof(fl6));
+   fl6.flowi6_oif = oif;
+   fl6.daddr = *daddr;
+   fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;
+   fl6.flowi6_mark = skb->mark;
+   fl6.flowi6_proto = IPPROTO_UDP;
+
+   err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
+                    vxlan->vn6_sock->sock->sk,
+                    &ndst, &fl6);
+   if (err < 0)
+       return ERR_PTR(err);
+
+   *saddr = fl6.saddr;
+   return ndst;
+}
+#endif
+
 /* Bypass encapsulation if the destination is local */
 static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
                   struct vxlan_dev *dst_vxlan)
···
 #if IS_ENABLED(CONFIG_IPV6)
    } else {
        struct dst_entry *ndst;
-       struct flowi6 fl6;
+       struct in6_addr saddr;
        u32 rt6i_flags;

        if (!vxlan->vn6_sock)
            goto drop;
        sk = vxlan->vn6_sock->sock->sk;

-       memset(&fl6, 0, sizeof(fl6));
-       fl6.flowi6_oif = rdst ? rdst->remote_ifindex : 0;
-       fl6.daddr = dst->sin6.sin6_addr;
-       fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;
-       fl6.flowi6_mark = skb->mark;
-       fl6.flowi6_proto = IPPROTO_UDP;
-
-       if (ipv6_stub->ipv6_dst_lookup(vxlan->net, sk, &ndst, &fl6)) {
+       ndst = vxlan6_get_route(vxlan, skb,
+                   rdst ? rdst->remote_ifindex : 0,
+                   &dst->sin6.sin6_addr, &saddr);
+       if (IS_ERR(ndst)) {
            netdev_dbg(dev, "no route to %pI6\n",
                   &dst->sin6.sin6_addr);
            dev->stats.tx_carrier_errors++;
···
        }

        ttl = ttl ? : ip6_dst_hoplimit(ndst);
-       err = vxlan6_xmit_skb(ndst, sk, skb, dev, &fl6.saddr, &fl6.daddr,
+       err = vxlan6_xmit_skb(ndst, sk, skb, dev, &saddr, &dst->sin6.sin6_addr,
                      0, ttl, src_port, dst_port, htonl(vni << 8), md,
                      !net_eq(vxlan->net, dev_net(vxlan->dev)),
                      flags);
···
                     vxlan->cfg.port_max, true);
    dport = info->key.tp_dst ? : vxlan->cfg.dst_port;

-   if (ip_tunnel_info_af(info) == AF_INET)
+   if (ip_tunnel_info_af(info) == AF_INET) {
+       if (!vxlan->vn4_sock)
+           return -EINVAL;
        return egress_ipv4_tun_info(dev, skb, info, sport, dport);
-   return -EINVAL;
+   } else {
+#if IS_ENABLED(CONFIG_IPV6)
+       struct dst_entry *ndst;
+
+       if (!vxlan->vn6_sock)
+           return -EINVAL;
+       ndst = vxlan6_get_route(vxlan, skb, 0,
+                   &info->key.u.ipv6.dst,
+                   &info->key.u.ipv6.src);
+       if (IS_ERR(ndst))
+           return PTR_ERR(ndst);
+       dst_release(ndst);
+
+       info->key.tp_src = sport;
+       info->key.tp_dst = dport;
+#else /* !CONFIG_IPV6 */
+       return -EPFNOSUPPORT;
+#endif
+   }
+   return 0;
 }

 static const struct net_device_ops vxlan_netdev_ops = {
+15-19
drivers/net/xen-netback/netback.c
···258258 struct netrx_pending_operations *npo)259259{260260 struct xenvif_rx_meta *meta;261261- struct xen_netif_rx_request *req;261261+ struct xen_netif_rx_request req;262262263263- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);263263+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);264264265265 meta = npo->meta + npo->meta_prod++;266266 meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;267267 meta->gso_size = 0;268268 meta->size = 0;269269- meta->id = req->id;269269+ meta->id = req.id;270270271271 npo->copy_off = 0;272272- npo->copy_gref = req->gref;272272+ npo->copy_gref = req.gref;273273274274 return meta;275275}···424424 struct xenvif *vif = netdev_priv(skb->dev);425425 int nr_frags = skb_shinfo(skb)->nr_frags;426426 int i;427427- struct xen_netif_rx_request *req;427427+ struct xen_netif_rx_request req;428428 struct xenvif_rx_meta *meta;429429 unsigned char *data;430430 int head = 1;···443443444444 /* Set up a GSO prefix descriptor, if necessary */445445 if ((1 << gso_type) & vif->gso_prefix_mask) {446446- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);446446+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);447447 meta = npo->meta + npo->meta_prod++;448448 meta->gso_type = gso_type;449449 meta->gso_size = skb_shinfo(skb)->gso_size;450450 meta->size = 0;451451- meta->id = req->id;451451+ meta->id = req.id;452452 }453453454454- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);454454+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);455455 meta = npo->meta + npo->meta_prod++;456456457457 if ((1 << gso_type) & vif->gso_mask) {···463463 }464464465465 meta->size = 0;466466- meta->id = req->id;466466+ meta->id = req.id;467467 npo->copy_off = 0;468468- npo->copy_gref = req->gref;468468+ npo->copy_gref = req.gref;469469470470 data = skb->data;471471 while (data < skb_tail_pointer(skb)) {···679679 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.680680 * Otherwise the interface can seize up due to 
insufficient credit.681681 */682682- max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;683683- max_burst = min(max_burst, 131072UL);684684- max_burst = max(max_burst, queue->credit_bytes);682682+ max_burst = max(131072UL, queue->credit_bytes);685683686684 /* Take care that adding a new chunk of credit doesn't wrap to zero. */687685 max_credit = queue->remaining_credit + queue->credit_bytes;···709711 spin_unlock_irqrestore(&queue->response_lock, flags);710712 if (cons == end)711713 break;712712- txp = RING_GET_REQUEST(&queue->tx, cons++);714714+ RING_COPY_REQUEST(&queue->tx, cons++, txp);713715 } while (1);714716 queue->tx.req_cons = cons;715717}···776778 if (drop_err)777779 txp = &dropped_tx;778780779779- memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),780780- sizeof(*txp));781781+ RING_COPY_REQUEST(&queue->tx, cons + slots, txp);781782782783 /* If the guest submitted a frame >= 64 KiB then783784 * first->size overflowed and following slots will···11091112 return -EBADR;11101113 }1111111411121112- memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),11131113- sizeof(extra));11151115+ RING_COPY_REQUEST(&queue->tx, cons, &extra);11141116 if (unlikely(!extra.type ||11151117 extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {11161118 queue->tx.req_cons = ++cons;···1318132213191323 idx = queue->tx.req_cons;13201324 rmb(); /* Ensure that we see the request before we copy it. */13211321- memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));13251325+ RING_COPY_REQUEST(&queue->tx, idx, &txreq);1322132613231327 /* Credit-based scheduling. */13241328 if (txreq.size > queue->remaining_credit &&
···25402540{25412541 bool kill = nvme_io_incapable(ns->dev) && !blk_queue_dying(ns->queue);2542254225432543- if (kill)25432543+ if (kill) {25442544 blk_set_queue_dying(ns->queue);25452545+25462546+ /*25472547+ * The controller was shutdown first if we got here through25482548+ * device removal. The shutdown may requeue outstanding25492549+ * requests. These need to be aborted immediately so25502550+ * del_gendisk doesn't block indefinitely for their completion.25512551+ */25522552+ blk_mq_abort_requeue_list(ns->queue);25532553+ }25452554 if (ns->disk->flags & GENHD_FL_UP)25462555 del_gendisk(ns->disk);25472556 if (kill || !blk_queue_dying(ns->queue)) {···29862977{29872978 struct nvme_ns *ns, *next;2988297929802980+ if (nvme_io_incapable(dev)) {29812981+ /*29822982+ * If the device is not capable of IO (surprise hot-removal,29832983+ * for example), we need to quiesce prior to deleting the29842984+ * namespaces. This will end outstanding requests and prevent29852985+ * attempts to sync dirty data.29862986+ */29872987+ nvme_dev_shutdown(dev);29882988+ }29892989 list_for_each_entry_safe(ns, next, &dev->namespaces, list)29902990 nvme_ns_remove(ns);29912991}
+3-2
drivers/of/address.c
···485485 int rone;486486 u64 offset = OF_BAD_ADDR;487487488488- /* Normally, an absence of a "ranges" property means we are488488+ /*489489+ * Normally, an absence of a "ranges" property means we are489490 * crossing a non-translatable boundary, and thus the addresses490490- * below the current not cannot be converted to CPU physical ones.491491+ * below the current cannot be converted to CPU physical ones.491492 * Unfortunately, while this is very clear in the spec, it's not492493 * what Apple understood, and they do have things like /uni-n or493494 * /ht nodes with no "ranges" property and a lot of perfectly
+6-1
drivers/of/fdt.c
···1313#include <linux/kernel.h>1414#include <linux/initrd.h>1515#include <linux/memblock.h>1616+#include <linux/mutex.h>1617#include <linux/of.h>1718#include <linux/of_fdt.h>1819#include <linux/of_reserved_mem.h>···437436 return kzalloc(size, GFP_KERNEL);438437}439438439439+static DEFINE_MUTEX(of_fdt_unflatten_mutex);440440+440441/**441442 * of_fdt_unflatten_tree - create tree of device_nodes from flat blob442443 *···450447void of_fdt_unflatten_tree(const unsigned long *blob,451448 struct device_node **mynodes)452449{450450+ mutex_lock(&of_fdt_unflatten_mutex);453451 __unflatten_device_tree(blob, mynodes, &kernel_tree_alloc);452452+ mutex_unlock(&of_fdt_unflatten_mutex);454453}455454EXPORT_SYMBOL_GPL(of_fdt_unflatten_tree);456455···10461041int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base,10471042 phys_addr_t size, bool nomap)10481043{10491049- pr_err("Reserved memory not supported, ignoring range 0x%pa - 0x%pa%s\n",10441044+ pr_err("Reserved memory not supported, ignoring range %pa - %pa%s\n",10501045 &base, &size, nomap ? " (nomap)" : "");10511046 return -ENOSYS;10521047}
+2-1
drivers/of/irq.c
···5353 * Returns a pointer to the interrupt parent node, or NULL if the interrupt5454 * parent could not be determined.5555 */5656-static struct device_node *of_irq_find_parent(struct device_node *child)5656+struct device_node *of_irq_find_parent(struct device_node *child)5757{5858 struct device_node *p;5959 const __be32 *parp;···77777878 return p;7979}8080+EXPORT_SYMBOL_GPL(of_irq_find_parent);80818182/**8283 * of_irq_parse_raw - Low level interrupt tree parsing
···5454 struct irq_domain *domain;55555656 domain = pci_msi_get_domain(dev);5757- if (domain)5757+ if (domain && irq_domain_is_hierarchy(domain))5858 return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);59596060 return arch_setup_msi_irqs(dev, nvec, type);···6565 struct irq_domain *domain;66666767 domain = pci_msi_get_domain(dev);6868- if (domain)6868+ if (domain && irq_domain_is_hierarchy(domain))6969 pci_msi_domain_free_irqs(domain, dev);7070 else7171 arch_teardown_msi_irqs(dev);
+1
drivers/phy/Kconfig
···233233 tristate "Allwinner sun9i SoC USB PHY driver"234234 depends on ARCH_SUNXI && HAS_IOMEM && OF235235 depends on RESET_CONTROLLER236236+ depends on USB_COMMON236237 select GENERIC_PHY237238 help238239 Enable this to support the transceiver that is part of Allwinner
+12-4
drivers/phy/phy-bcm-cygnus-pcie.c
···128128 struct phy_provider *provider;129129 struct resource *res;130130 unsigned cnt = 0;131131+ int ret;131132132133 if (of_get_child_count(node) == 0) {133134 dev_err(dev, "PHY no child node\n");···155154 if (of_property_read_u32(child, "reg", &id)) {156155 dev_err(dev, "missing reg property for %s\n",157156 child->name);158158- return -EINVAL;157157+ ret = -EINVAL;158158+ goto put_child;159159 }160160161161 if (id >= MAX_NUM_PHYS) {162162 dev_err(dev, "invalid PHY id: %u\n", id);163163- return -EINVAL;163163+ ret = -EINVAL;164164+ goto put_child;164165 }165166166167 if (core->phys[id].phy) {167168 dev_err(dev, "duplicated PHY id: %u\n", id);168168- return -EINVAL;169169+ ret = -EINVAL;170170+ goto put_child;169171 }170172171173 p = &core->phys[id];172174 p->phy = devm_phy_create(dev, child, &cygnus_pcie_phy_ops);173175 if (IS_ERR(p->phy)) {174176 dev_err(dev, "failed to create PHY\n");175175- return PTR_ERR(p->phy);177177+ ret = PTR_ERR(p->phy);178178+ goto put_child;176179 }177180178181 p->core = core;···196191 dev_dbg(dev, "registered %u PCIe PHY(s)\n", cnt);197192198193 return 0;194194+put_child:195195+ of_node_put(child);196196+ return ret;199197}200198201199static const struct of_device_id cygnus_pcie_phy_match_table[] = {
+14-6
drivers/phy/phy-berlin-sata.c
···195195 struct phy_provider *phy_provider;196196 struct phy_berlin_priv *priv;197197 struct resource *res;198198- int i = 0;198198+ int ret, i = 0;199199 u32 phy_id;200200201201 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);···237237 if (of_property_read_u32(child, "reg", &phy_id)) {238238 dev_err(dev, "missing reg property in node %s\n",239239 child->name);240240- return -EINVAL;240240+ ret = -EINVAL;241241+ goto put_child;241242 }242243243244 if (phy_id >= ARRAY_SIZE(phy_berlin_power_down_bits)) {244245 dev_err(dev, "invalid reg in node %s\n", child->name);245245- return -EINVAL;246246+ ret = -EINVAL;247247+ goto put_child;246248 }247249248250 phy_desc = devm_kzalloc(dev, sizeof(*phy_desc), GFP_KERNEL);249249- if (!phy_desc)250250- return -ENOMEM;251251+ if (!phy_desc) {252252+ ret = -ENOMEM;253253+ goto put_child;254254+ }251255252256 phy = devm_phy_create(dev, NULL, &phy_berlin_sata_ops);253257 if (IS_ERR(phy)) {254258 dev_err(dev, "failed to create PHY %d\n", phy_id);255255- return PTR_ERR(phy);259259+ ret = PTR_ERR(phy);260260+ goto put_child;256261 }257262258263 phy_desc->phy = phy;···274269 phy_provider =275270 devm_of_phy_provider_register(dev, phy_berlin_sata_phy_xlate);276271 return PTR_ERR_OR_ZERO(phy_provider);272272+put_child:273273+ of_node_put(child);274274+ return ret;277275}278276279277static const struct of_device_id phy_berlin_sata_of_match[] = {
+12-5
drivers/phy/phy-brcmstb-sata.c
···140140 struct brcm_sata_phy *priv;141141 struct resource *res;142142 struct phy_provider *provider;143143- int count = 0;143143+ int ret, count = 0;144144145145 if (of_get_child_count(dn) == 0)146146 return -ENODEV;···163163 if (of_property_read_u32(child, "reg", &id)) {164164 dev_err(dev, "missing reg property in node %s\n",165165 child->name);166166- return -EINVAL;166166+ ret = -EINVAL;167167+ goto put_child;167168 }168169169170 if (id >= MAX_PORTS) {170171 dev_err(dev, "invalid reg: %u\n", id);171171- return -EINVAL;172172+ ret = -EINVAL;173173+ goto put_child;172174 }173175 if (priv->phys[id].phy) {174176 dev_err(dev, "already registered port %u\n", id);175175- return -EINVAL;177177+ ret = -EINVAL;178178+ goto put_child;176179 }177180178181 port = &priv->phys[id];···185182 port->ssc_en = of_property_read_bool(child, "brcm,enable-ssc");186183 if (IS_ERR(port->phy)) {187184 dev_err(dev, "failed to create PHY\n");188188- return PTR_ERR(port->phy);185185+ ret = PTR_ERR(port->phy);186186+ goto put_child;189187 }190188191189 phy_set_drvdata(port->phy, port);···202198 dev_info(dev, "registered %d port(s)\n", count);203199204200 return 0;201201+put_child:202202+ of_node_put(child);203203+ return ret;205204}206205207206static struct platform_driver brcm_sata_phy_driver = {
+15-6
drivers/phy/phy-core.c
···636636 * @np: node containing the phy637637 * @index: index of the phy638638 *639639- * Gets the phy using _of_phy_get(), and associates a device with it using640640- * devres. On driver detach, release function is invoked on the devres data,639639+ * Gets the phy using _of_phy_get(), then gets a refcount to it,640640+ * and associates a device with it using devres. On driver detach,641641+ * release function is invoked on the devres data,641642 * then, devres data is freed.642643 *643644 */···652651 return ERR_PTR(-ENOMEM);653652654653 phy = _of_phy_get(np, index);655655- if (!IS_ERR(phy)) {656656- *ptr = phy;657657- devres_add(dev, ptr);658658- } else {654654+ if (IS_ERR(phy)) {659655 devres_free(ptr);656656+ return phy;660657 }658658+659659+ if (!try_module_get(phy->ops->owner)) {660660+ devres_free(ptr);661661+ return ERR_PTR(-EPROBE_DEFER);662662+ }663663+664664+ get_device(&phy->dev);665665+666666+ *ptr = phy;667667+ devres_add(dev, ptr);661668662669 return phy;663670}
+11-5
drivers/phy/phy-miphy28lp.c
···1226122612271227 miphy_phy = devm_kzalloc(&pdev->dev, sizeof(*miphy_phy),12281228 GFP_KERNEL);12291229- if (!miphy_phy)12301230- return -ENOMEM;12291229+ if (!miphy_phy) {12301230+ ret = -ENOMEM;12311231+ goto put_child;12321232+ }1231123312321234 miphy_dev->phys[port] = miphy_phy;1233123512341236 phy = devm_phy_create(&pdev->dev, child, &miphy28lp_ops);12351237 if (IS_ERR(phy)) {12361238 dev_err(&pdev->dev, "failed to create PHY\n");12371237- return PTR_ERR(phy);12391239+ ret = PTR_ERR(phy);12401240+ goto put_child;12381241 }1239124212401243 miphy_dev->phys[port]->phy = phy;···1245124212461243 ret = miphy28lp_of_probe(child, miphy_phy);12471244 if (ret)12481248- return ret;12451245+ goto put_child;1249124612501247 ret = miphy28lp_probe_resets(child, miphy_dev->phys[port]);12511248 if (ret)12521252- return ret;12491249+ goto put_child;1253125012541251 phy_set_drvdata(phy, miphy_dev->phys[port]);12551252 port++;···1258125512591256 provider = devm_of_phy_provider_register(&pdev->dev, miphy28lp_xlate);12601257 return PTR_ERR_OR_ZERO(provider);12581258+put_child:12591259+ of_node_put(child);12601260+ return ret;12611261}1262126212631263static const struct of_device_id miphy28lp_of_match[] = {
+11-5
drivers/phy/phy-miphy365x.c
···566566567567 miphy_phy = devm_kzalloc(&pdev->dev, sizeof(*miphy_phy),568568 GFP_KERNEL);569569- if (!miphy_phy)570570- return -ENOMEM;569569+ if (!miphy_phy) {570570+ ret = -ENOMEM;571571+ goto put_child;572572+ }571573572574 miphy_dev->phys[port] = miphy_phy;573575574576 phy = devm_phy_create(&pdev->dev, child, &miphy365x_ops);575577 if (IS_ERR(phy)) {576578 dev_err(&pdev->dev, "failed to create PHY\n");577577- return PTR_ERR(phy);579579+ ret = PTR_ERR(phy);580580+ goto put_child;578581 }579582580583 miphy_dev->phys[port]->phy = phy;581584582585 ret = miphy365x_of_probe(child, miphy_phy);583586 if (ret)584584- return ret;587587+ goto put_child;585588586589 phy_set_drvdata(phy, miphy_dev->phys[port]);587590···594591 &miphy_phy->ctrlreg);595592 if (ret) {596593 dev_err(&pdev->dev, "No sysconfig offset found\n");597597- return ret;594594+ goto put_child;598595 }599596 }600597601598 provider = devm_of_phy_provider_register(&pdev->dev, miphy365x_xlate);602599 return PTR_ERR_OR_ZERO(provider);600600+put_child:601601+ of_node_put(child);602602+ return ret;603603}604604605605static const struct of_device_id miphy365x_of_match[] = {
···5555 * ACPI).5656 * @ie_offset: Register offset of GPI_IE from @regs.5757 * @pin_base: Starting pin of pins in this community5858+ * @gpp_size: Maximum number of pads in each group, such as PADCFGLOCK,5959+ * HOSTSW_OWN, GPI_IS, GPI_IE, etc.5860 * @npins: Number of pins in this community5961 * @regs: Community specific common registers (reserved for core driver)6062 * @pad_regs: Community specific pad registers (reserved for core driver)···7068 unsigned hostown_offset;7169 unsigned ie_offset;7270 unsigned pin_base;7171+ unsigned gpp_size;7372 size_t npins;7473 void __iomem *regs;7574 void __iomem *pad_regs;
···5656 int irq;5757};58585959+/*6060+ * The Rockchip calendar used by the RK808 counts November with 31 days. We use6161+ * these translation functions to convert its dates to/from the Gregorian6262+ * calendar used by the rest of the world. We arbitrarily define Jan 1st, 20166363+ * as the day when both calendars were in sync, and treat all other dates6464+ * relative to that.6565+ * NOTE: Other system software (e.g. firmware) that reads the same hardware must6666+ * implement this exact same conversion algorithm, with the same anchor date.6767+ */6868+static time64_t nov2dec_transitions(struct rtc_time *tm)6969+{7070+ return (tm->tm_year + 1900) - 2016 + (tm->tm_mon + 1 > 11 ? 1 : 0);7171+}7272+7373+static void rockchip_to_gregorian(struct rtc_time *tm)7474+{7575+ /* If it's Nov 31st, rtc_tm_to_time64() will count that like Dec 1st */7676+ time64_t time = rtc_tm_to_time64(tm);7777+ rtc_time64_to_tm(time + nov2dec_transitions(tm) * 86400, tm);7878+}7979+8080+static void gregorian_to_rockchip(struct rtc_time *tm)8181+{8282+ time64_t extra_days = nov2dec_transitions(tm);8383+ time64_t time = rtc_tm_to_time64(tm);8484+ rtc_time64_to_tm(time - extra_days * 86400, tm);8585+8686+ /* Compensate if we went back over Nov 31st (will work up to 2381) */8787+ if (nov2dec_transitions(tm) < extra_days) {8888+ if (tm->tm_mon + 1 == 11)8989+ tm->tm_mday++; /* This may result in 31! 
*/9090+ else9191+ rtc_time64_to_tm(time - (extra_days - 1) * 86400, tm);9292+ }9393+}9494+5995/* Read current time and date in RTC */6096static int rk808_rtc_readtime(struct device *dev, struct rtc_time *tm)6197{···137101 tm->tm_mon = (bcd2bin(rtc_data[4] & MONTHS_REG_MSK)) - 1;138102 tm->tm_year = (bcd2bin(rtc_data[5] & YEARS_REG_MSK)) + 100;139103 tm->tm_wday = bcd2bin(rtc_data[6] & WEEKS_REG_MSK);104104+ rockchip_to_gregorian(tm);140105 dev_dbg(dev, "RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",141106 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,142142- tm->tm_wday, tm->tm_hour , tm->tm_min, tm->tm_sec);107107+ tm->tm_wday, tm->tm_hour, tm->tm_min, tm->tm_sec);143108144109 return ret;145110}···153116 u8 rtc_data[NUM_TIME_REGS];154117 int ret;155118119119+ dev_dbg(dev, "set RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",120120+ 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,121121+ tm->tm_wday, tm->tm_hour, tm->tm_min, tm->tm_sec);122122+ gregorian_to_rockchip(tm);156123 rtc_data[0] = bin2bcd(tm->tm_sec);157124 rtc_data[1] = bin2bcd(tm->tm_min);158125 rtc_data[2] = bin2bcd(tm->tm_hour);···164123 rtc_data[4] = bin2bcd(tm->tm_mon + 1);165124 rtc_data[5] = bin2bcd(tm->tm_year - 100);166125 rtc_data[6] = bin2bcd(tm->tm_wday);167167- dev_dbg(dev, "set RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",168168- 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,169169- tm->tm_wday, tm->tm_hour , tm->tm_min, tm->tm_sec);170126171127 /* Stop RTC while updating the RTC registers */172128 ret = regmap_update_bits(rk808->regmap, RK808_RTC_CTRL_REG,···208170 alrm->time.tm_mday = bcd2bin(alrm_data[3] & DAYS_REG_MSK);209171 alrm->time.tm_mon = (bcd2bin(alrm_data[4] & MONTHS_REG_MSK)) - 1;210172 alrm->time.tm_year = (bcd2bin(alrm_data[5] & YEARS_REG_MSK)) + 100;173173+ rockchip_to_gregorian(&alrm->time);211174212175 ret = regmap_read(rk808->regmap, RK808_RTC_INT_REG, &int_reg);213176 if (ret) {···266227 alrm->time.tm_mday, alrm->time.tm_wday, alrm->time.tm_hour,267228 
alrm->time.tm_min, alrm->time.tm_sec);268229230230+ gregorian_to_rockchip(&alrm->time);269231 alrm_data[0] = bin2bcd(alrm->time.tm_sec);270232 alrm_data[1] = bin2bcd(alrm->time.tm_min);271233 alrm_data[2] = bin2bcd(alrm->time.tm_hour);
+3-1
drivers/s390/crypto/ap_bus.c
···599599 status = ap_sm_recv(ap_dev);600600 switch (status.response_code) {601601 case AP_RESPONSE_NORMAL:602602- if (ap_dev->queue_count > 0)602602+ if (ap_dev->queue_count > 0) {603603+ ap_dev->state = AP_STATE_WORKING;603604 return AP_WAIT_AGAIN;605605+ }604606 ap_dev->state = AP_STATE_IDLE;605607 return AP_WAIT_NONE;606608 case AP_RESPONSE_NO_PENDING_REPLY:
+39-27
drivers/s390/virtio/virtio_ccw.c
···984984 return vq;985985}986986987987-static void virtio_ccw_int_handler(struct ccw_device *cdev,988988- unsigned long intparm,989989- struct irb *irb)987987+static void virtio_ccw_check_activity(struct virtio_ccw_device *vcdev,988988+ __u32 activity)990989{991991- __u32 activity = intparm & VIRTIO_CCW_INTPARM_MASK;992992- struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev);993993- int i;994994- struct virtqueue *vq;995995-996996- if (!vcdev)997997- return;998998- /* Check if it's a notification from the host. */999999- if ((intparm == 0) &&10001000- (scsw_stctl(&irb->scsw) ==10011001- (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND))) {10021002- /* OK */10031003- }10041004- if (irb_is_error(irb)) {10051005- /* Command reject? */10061006- if ((scsw_dstat(&irb->scsw) & DEV_STAT_UNIT_CHECK) &&10071007- (irb->ecw[0] & SNS0_CMD_REJECT))10081008- vcdev->err = -EOPNOTSUPP;10091009- else10101010- /* Map everything else to -EIO. */10111011- vcdev->err = -EIO;10121012- }1013990 if (vcdev->curr_io & activity) {1014991 switch (activity) {1015992 case VIRTIO_CCW_DOING_READ_FEAT:···10061029 break;10071030 default:10081031 /* don't know what to do... */10091009- dev_warn(&cdev->dev, "Suspicious activity '%08x'\n",10101010- activity);10321032+ dev_warn(&vcdev->cdev->dev,10331033+ "Suspicious activity '%08x'\n", activity);10111034 WARN_ON(1);10121035 break;10131036 }10141037 }10381038+}10391039+10401040+static void virtio_ccw_int_handler(struct ccw_device *cdev,10411041+ unsigned long intparm,10421042+ struct irb *irb)10431043+{10441044+ __u32 activity = intparm & VIRTIO_CCW_INTPARM_MASK;10451045+ struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev);10461046+ int i;10471047+ struct virtqueue *vq;10481048+10491049+ if (!vcdev)10501050+ return;10511051+ if (IS_ERR(irb)) {10521052+ vcdev->err = PTR_ERR(irb);10531053+ virtio_ccw_check_activity(vcdev, activity);10541054+ /* Don't poke around indicators, something's wrong. 
*/10551055+ return;10561056+ }10571057+ /* Check if it's a notification from the host. */10581058+ if ((intparm == 0) &&10591059+ (scsw_stctl(&irb->scsw) ==10601060+ (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND))) {10611061+ /* OK */10621062+ }10631063+ if (irb_is_error(irb)) {10641064+ /* Command reject? */10651065+ if ((scsw_dstat(&irb->scsw) & DEV_STAT_UNIT_CHECK) &&10661066+ (irb->ecw[0] & SNS0_CMD_REJECT))10671067+ vcdev->err = -EOPNOTSUPP;10681068+ else10691069+ /* Map everything else to -EIO. */10701070+ vcdev->err = -EIO;10711071+ }10721072+ virtio_ccw_check_activity(vcdev, activity);10151073 for_each_set_bit(i, &vcdev->indicators,10161074 sizeof(vcdev->indicators) * BITS_PER_BYTE) {10171075 /* The bit clear must happen before the vring kick. */
+10-10
drivers/scsi/scsi_pm.c
···219219 struct scsi_device *sdev = to_scsi_device(dev);220220 int err = 0;221221222222- if (pm && pm->runtime_suspend) {223223- err = blk_pre_runtime_suspend(sdev->request_queue);224224- if (err)225225- return err;222222+ err = blk_pre_runtime_suspend(sdev->request_queue);223223+ if (err)224224+ return err;225225+ if (pm && pm->runtime_suspend)226226 err = pm->runtime_suspend(dev);227227- blk_post_runtime_suspend(sdev->request_queue, err);228228- }227227+ blk_post_runtime_suspend(sdev->request_queue, err);228228+229229 return err;230230}231231···248248 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;249249 int err = 0;250250251251- if (pm && pm->runtime_resume) {252252- blk_pre_runtime_resume(sdev->request_queue);251251+ blk_pre_runtime_resume(sdev->request_queue);252252+ if (pm && pm->runtime_resume)253253 err = pm->runtime_resume(dev);254254- blk_post_runtime_resume(sdev->request_queue, err);255255- }254254+ blk_post_runtime_resume(sdev->request_queue, err);255255+256256 return err;257257}258258
+28-2
drivers/scsi/ses.c
···8484static int ses_recv_diag(struct scsi_device *sdev, int page_code,8585 void *buf, int bufflen)8686{8787+ int ret;8788 unsigned char cmd[] = {8889 RECEIVE_DIAGNOSTIC,8990 1, /* Set PCV bit */···9392 bufflen & 0xff,9493 09594 };9595+ unsigned char recv_page_code;96969797- return scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,9797+ ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,9898 NULL, SES_TIMEOUT, SES_RETRIES, NULL);9999+ if (unlikely(ret))100100+ return ret;101101+102102+ recv_page_code = ((unsigned char *)buf)[0];103103+104104+ if (likely(recv_page_code == page_code))105105+ return ret;106106+107107+ /* successful diagnostic but wrong page code. This happens to some108108+ * USB devices, just print a message and pretend there was an error */109109+110110+ sdev_printk(KERN_ERR, sdev,111111+ "Wrong diagnostic page; asked for %d got %u\n",112112+ page_code, recv_page_code);113113+114114+ return -EINVAL;99115}100116101117static int ses_send_diag(struct scsi_device *sdev, int page_code,···559541 if (desc_ptr)560542 desc_ptr += len;561543562562- if (addl_desc_ptr)544544+ if (addl_desc_ptr &&545545+ /* only find additional descriptions for specific devices */546546+ (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||547547+ type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE ||548548+ type_ptr[0] == ENCLOSURE_COMPONENT_SAS_EXPANDER ||549549+ /* these elements are optional */550550+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||551551+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||552552+ type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))563553 addl_desc_ptr += addl_desc_ptr[1] + 2;564554565555 }
···18381838 },18391839#endif1840184018411841+ /* Exclude Infineon Flash Loader utility */18421842+ { USB_DEVICE(0x058b, 0x0041),18431843+ .driver_info = IGNORE_DEVICE,18441844+ },18451845+18411846 /* control interfaces without any protocol set */18421847 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,18431848 USB_CDC_PROTO_NONE) },
+2-1
drivers/usb/core/config.c
···115115 USB_SS_MULT(desc->bmAttributes) > 3) {116116 dev_warn(ddev, "Isoc endpoint has Mult of %d in "117117 "config %d interface %d altsetting %d ep %d: "118118- "setting to 3\n", desc->bmAttributes + 1,118118+ "setting to 3\n",119119+ USB_SS_MULT(desc->bmAttributes),119120 cfgno, inum, asnum, ep->desc.bEndpointAddress);120121 ep->ss_ep_comp.bmAttributes = 2;121122 }
+33-11
drivers/usb/core/hub.c
···124124125125int usb_device_supports_lpm(struct usb_device *udev)126126{127127+ /* Some devices have trouble with LPM */128128+ if (udev->quirks & USB_QUIRK_NO_LPM)129129+ return 0;130130+127131 /* USB 2.1 (and greater) devices indicate LPM support through128132 * their USB 2.0 Extended Capabilities BOS descriptor.129133 */···10351031 unsigned delay;1036103210371033 /* Continue a partial initialization */10381038- if (type == HUB_INIT2)10391039- goto init2;10401040- if (type == HUB_INIT3)10341034+ if (type == HUB_INIT2 || type == HUB_INIT3) {10351035+ device_lock(hub->intfdev);10361036+10371037+ /* Was the hub disconnected while we were waiting? */10381038+ if (hub->disconnected) {10391039+ device_unlock(hub->intfdev);10401040+ kref_put(&hub->kref, hub_release);10411041+ return;10421042+ }10431043+ if (type == HUB_INIT2)10441044+ goto init2;10411045 goto init3;10461046+ }10471047+ kref_get(&hub->kref);1042104810431049 /* The superspeed hub except for root hub has to use Hub Depth10441050 * value as an offset into the route string to locate the bits···12461232 queue_delayed_work(system_power_efficient_wq,12471233 &hub->init_work,12481234 msecs_to_jiffies(delay));12351235+ device_unlock(hub->intfdev);12491236 return; /* Continues at init3: below */12501237 } else {12511238 msleep(delay);···12681253 /* Allow autosuspend if it was suppressed */12691254 if (type <= HUB_INIT3)12701255 usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));12561256+12571257+ if (type == HUB_INIT2 || type == HUB_INIT3)12581258+ device_unlock(hub->intfdev);12591259+12601260+ kref_put(&hub->kref, hub_release);12711261}1272126212731263/* Implement the continuations for the delays above */···45324512 goto fail;45334513 }4534451445154515+ usb_detect_quirks(udev);45164516+45354517 if (udev->wusb == 0 && le16_to_cpu(udev->descriptor.bcdUSB) >= 0x0201) {45364518 retval = usb_get_bos_descriptor(udev);45374519 if (!retval) {···47324710 if (status < 0)47334711 goto loop;4734471247354735- 
usb_detect_quirks(udev);47364713 if (udev->quirks & USB_QUIRK_DELAY_INIT)47374714 msleep(1000);47384715···53475326 if (udev->usb2_hw_lpm_enabled == 1)53485327 usb_set_usb2_hardware_lpm(udev, 0);5349532853505350- bos = udev->bos;53515351- udev->bos = NULL;53525352-53535329 /* Disable LPM and LTM while we reset the device and reinstall the alt53545330 * settings. Device-initiated LPM settings, and system exit latency53555331 * settings are cleared when the device is reset, so we have to set···53555337 ret = usb_unlocked_disable_lpm(udev);53565338 if (ret) {53575339 dev_err(&udev->dev, "%s Failed to disable LPM\n.", __func__);53585358- goto re_enumerate;53405340+ goto re_enumerate_no_bos;53595341 }53605342 ret = usb_disable_ltm(udev);53615343 if (ret) {53625344 dev_err(&udev->dev, "%s Failed to disable LTM\n.",53635345 __func__);53645364- goto re_enumerate;53465346+ goto re_enumerate_no_bos;53655347 }53485348+53495349+ bos = udev->bos;53505350+ udev->bos = NULL;5366535153675352 for (i = 0; i < SET_CONFIG_TRIES; ++i) {53685353···54635442 return 0;5464544354655444re_enumerate:54665466- /* LPM state doesn't matter when we're about to destroy the device. */54675467- hub_port_logical_disconnect(parent_hub, port1);54685445 usb_release_bos_descriptor(udev);54695446 udev->bos = bos;54475447+re_enumerate_no_bos:54485448+ /* LPM state doesn't matter when we're about to destroy the device. */54495449+ hub_port_logical_disconnect(parent_hub, port1);54705450 return -ENODEV;54715451}54725452
+2-2
drivers/usb/core/port.c
···206206 else207207 method = "default";208208209209- pr_warn("usb: failed to peer %s and %s by %s (%s:%s) (%s:%s)\n",209209+ pr_debug("usb: failed to peer %s and %s by %s (%s:%s) (%s:%s)\n",210210 dev_name(&left->dev), dev_name(&right->dev), method,211211 dev_name(&left->dev),212212 lpeer ? dev_name(&lpeer->dev) : "none",···265265 if (rc == 0) {266266 dev_dbg(&left->dev, "peered to %s\n", dev_name(&right->dev));267267 } else {268268- dev_warn(&left->dev, "failed to peer to %s (%d)\n",268268+ dev_dbg(&left->dev, "failed to peer to %s (%d)\n",269269 dev_name(&right->dev), rc);270270 pr_warn_once("usb: port power management may be unreliable\n");271271 usb_port_block_power_off = 1;
···473473 if (!pdata)474474 return -ENOMEM;475475476476+ pdev->dev.platform_data = pdata;477477+476478 if (!of_property_read_u32(np, "num-ports", &ports))477479 pdata->ports = ports;478480···485483 */486484 if (i >= pdata->ports) {487485 pdata->vbus_pin[i] = -EINVAL;486486+ pdata->overcurrent_pin[i] = -EINVAL;488487 continue;489488 }490489···516513 }517514518515 at91_for_each_port(i) {519519- if (i >= pdata->ports) {520520- pdata->overcurrent_pin[i] = -EINVAL;521521- continue;522522- }516516+ if (i >= pdata->ports)517517+ break;523518524519 pdata->overcurrent_pin[i] =525520 of_get_named_gpio_flags(np, "atmel,oc-gpio", i, &flags);···552551 "can't get gpio IRQ for overcurrent\n");553552 }554553 }555555-556556- pdev->dev.platform_data = pdata;557554558555 device_init_wakeup(&pdev->dev, 1);559556 return usb_hcd_at91_probe(&ohci_at91_hc_driver, pdev);
+4
drivers/usb/host/whci/qset.c
···377377 if (std->pl_virt == NULL)378378 return -ENOMEM;379379 std->dma_addr = dma_map_single(whc->wusbhc.dev, std->pl_virt, pl_len, DMA_TO_DEVICE);380380+ if (dma_mapping_error(whc->wusbhc.dev, std->dma_addr)) {381381+ kfree(std->pl_virt);382382+ return -EFAULT;383383+ }380384381385 for (p = 0; p < std->num_pointers; p++) {382386 std->pl_virt[p].buf_ptr = cpu_to_le64(dma_addr);
+42-5
drivers/usb/host/xhci-hub.c
···733733	 if ((raw_port_status & PORT_RESET) ||734734	 !(raw_port_status & PORT_PE))735735	 return 0xffffffff;736736-	 if (time_after_eq(jiffies,737737-	 bus_state->resume_done[wIndex])) {736736+	 /* did port event handler already start resume timing? */737737+	 if (!bus_state->resume_done[wIndex]) {738738+	 /* If not, maybe we are in a host initiated resume? */739739+	 if (test_bit(wIndex, &bus_state->resuming_ports)) {740740+	 /* Host initiated resume doesn't time the resume741741+	 * signalling using resume_done[].742742+	 * It manually sets RESUME state, sleeps 20ms743743+	 * and sets U0 state. This should probably be744744+	 * changed, but not right now.745745+	 */746746+	 } else {747747+	 /* port resume was discovered now and here,748748+	 * start resume timing749749+	 */750750+	 unsigned long timeout = jiffies +751751+	 msecs_to_jiffies(USB_RESUME_TIMEOUT);752752+753753+	 set_bit(wIndex, &bus_state->resuming_ports);754754+	 bus_state->resume_done[wIndex] = timeout;755755+	 mod_timer(&hcd->rh_timer, timeout);756756+	 }757757+	 /* Has resume been signalled for USB_RESUME_TIME yet? */758758+	 } else if (time_after_eq(jiffies,759759+	 bus_state->resume_done[wIndex])) {738760	 int time_left;739761740762	 xhci_dbg(xhci, "Resume USB2 port %d\n",···797775	 } else {798776	 /*799777	 * The resume has been signaling for less than800800-	 * 20ms. Report the port status as SUSPEND,801801-	 * let the usbcore check port status again802802-	 * and clear resume signaling later.778778+	 * USB_RESUME_TIME. Report the port status as SUSPEND,779779+	 * let the usbcore check port status again and clear780780+	 * resume signaling later.803781	 */804782	 status |= USB_PORT_STAT_SUSPEND;805783	 }806784	 }785785+	 /*786786+	 * Clear stale usb2 resume signalling variables in case port changed787787+	 * state during resume signalling. For example on error788788+	 */789789+	 if ((bus_state->resume_done[wIndex] ||790790+	 test_bit(wIndex, &bus_state->resuming_ports)) &&791791+	 (raw_port_status & PORT_PLS_MASK) != XDEV_U3 &&792792+	 (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {793793+	 bus_state->resume_done[wIndex] = 0;794794+	 clear_bit(wIndex, &bus_state->resuming_ports);795795+	 }796796+797797+807798	 if ((raw_port_status & PORT_PLS_MASK) == XDEV_U0 &&808799	 (raw_port_status & PORT_POWER)) {809800	 if (bus_state->suspended_ports & (1 << wIndex)) {···11501115	 if ((temp & PORT_PE) == 0)11511116	 goto error;1152111711181118+	 set_bit(wIndex, &bus_state->resuming_ports);11531119	 xhci_set_link_state(xhci, port_array, wIndex,11541120	 XDEV_RESUME);11551121	 spin_unlock_irqrestore(&xhci->lock, flags);···11581122	 spin_lock_irqsave(&xhci->lock, flags);11591123	 xhci_set_link_state(xhci, port_array, wIndex,11601124	 XDEV_U0);11251125+	 clear_bit(wIndex, &bus_state->resuming_ports);11611126	 }11621127	 bus_state->port_c_suspend |= 1 << wIndex;11631128
···47784778	 ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);47794779	 slot_ctx = xhci_get_slot_ctx(xhci, config_cmd->in_ctx);47804780	 slot_ctx->dev_info |= cpu_to_le32(DEV_HUB);47814781+	 /*47824782+	 * refer to section 6.2.2: MTT should be 0 for full speed hub,47834783+	 * but it may already be set to 1 when setting up an xHCI47844784+	 * virtual device, so clear it anyway.47854785+	 */47814786	 if (tt->multi)47824787	 slot_ctx->dev_info |= cpu_to_le32(DEV_MTT);47884788+	 else if (hdev->speed == USB_SPEED_FULL)47894789+	 slot_ctx->dev_info &= cpu_to_le32(~DEV_MTT);47904790+47834791	 if (xhci->hci_version > 0x95) {47844792	 xhci_dbg(xhci, "xHCI version %x needs hub "47854793	 "TT think time and number of ports\n",
+1-1
drivers/usb/musb/Kconfig
···159159160160config USB_TI_CPPI41_DMA161161 bool 'TI CPPI 4.1 (AM335x)'162162- depends on ARCH_OMAP162162+ depends on ARCH_OMAP && DMADEVICES163163 select TI_CPPI41164164165165config USB_TUSB_OMAP_DMA
+7-1
drivers/usb/musb/musb_core.c
···20172017 /* We need musb_read/write functions initialized for PM */20182018 pm_runtime_use_autosuspend(musb->controller);20192019 pm_runtime_set_autosuspend_delay(musb->controller, 200);20202020- pm_runtime_irq_safe(musb->controller);20212020 pm_runtime_enable(musb->controller);2022202120232022 /* The musb_platform_init() call:···20942095#ifndef CONFIG_MUSB_PIO_ONLY20952096 if (!musb->ops->dma_init || !musb->ops->dma_exit) {20962097 dev_err(dev, "DMA controller not set\n");20982098+ status = -ENODEV;20972099 goto fail2;20982100 }20992101 musb_dma_controller_create = musb->ops->dma_init;···22172217 goto fail5;2218221822192219 pm_runtime_put(musb->controller);22202220+22212221+ /*22222222+ * For why this is currently needed, see commit 3e43a072563722232223+ * ("usb: musb: core: add pm_runtime_irq_safe()")22242224+ */22252225+ pm_runtime_irq_safe(musb->controller);2220222622212227 return 0;22222228
···531531 * through. Since this has a reasonably high failure rate, we retry532532 * several times.533533 */534534- while (retries--) {534534+ while (retries) {535535+ retries--;535536 result = usb_control_msg(serial->dev,536537 usb_sndctrlpipe(serial->dev, 0), 0x22, 0x21,537538 0x1, 0, NULL, 0, 100);
···796796 if (devinfo->flags & US_FL_NO_REPORT_OPCODES)797797 sdev->no_report_opcodes = 1;798798799799+ /* A few buggy USB-ATA bridges don't understand FUA */800800+ if (devinfo->flags & US_FL_BROKEN_FUA)801801+ sdev->broken_fua = 1;802802+799803 scsi_change_queue_depth(sdev, devinfo->qdepth - 2);800804 return 0;801805}
+1-1
drivers/usb/storage/unusual_devs.h
···19871987 US_FL_IGNORE_RESIDUE ),1988198819891989/* Reported by Michael Büsch <m@bues.ch> */19901990-UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0114,19901990+UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0116,19911991 "JMicron",19921992 "USB to ATA/ATAPI Bridge",19931993 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+1-1
drivers/usb/storage/unusual_uas.h
···132132 "JMicron",133133 "JMS567",134134 USB_SC_DEVICE, USB_PR_DEVICE, NULL,135135- US_FL_NO_REPORT_OPCODES),135135+ US_FL_BROKEN_FUA | US_FL_NO_REPORT_OPCODES),136136137137/* Reported-by: Hans de Goede <hdegoede@redhat.com> */138138UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
-15
drivers/vfio/Kconfig
···31313232 If you don't know what to do here, say N.33333434-menuconfig VFIO_NOIOMMU3535- bool "VFIO No-IOMMU support"3636- depends on VFIO3737- help3838- VFIO is built on the ability to isolate devices using the IOMMU.3939- Only with an IOMMU can userspace access to DMA capable devices be4040- considered secure. VFIO No-IOMMU mode enables IOMMU groups for4141- devices without IOMMU backing for the purpose of re-using the VFIO4242- infrastructure in a non-secure mode. Use of this mode will result4343- in an unsupportable kernel and will therefore taint the kernel.4444- Device assignment to virtual machines is also not possible with4545- this mode since there is no IOMMU to provide DMA translation.4646-4747- If you don't know what to do here, say N.4848-4934source "drivers/vfio/pci/Kconfig"5035source "drivers/vfio/platform/Kconfig"5136source "virt/lib/Kconfig"
+5-5
drivers/vfio/pci/vfio_pci.c
···940940 if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)941941 return -EINVAL;942942943943- group = vfio_iommu_group_get(&pdev->dev);943943+ group = iommu_group_get(&pdev->dev);944944 if (!group)945945 return -EINVAL;946946947947 vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);948948 if (!vdev) {949949- vfio_iommu_group_put(group, &pdev->dev);949949+ iommu_group_put(group);950950 return -ENOMEM;951951 }952952···957957958958 ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);959959 if (ret) {960960- vfio_iommu_group_put(group, &pdev->dev);960960+ iommu_group_put(group);961961 kfree(vdev);962962 return ret;963963 }···993993 if (!vdev)994994 return;995995996996- vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);996996+ iommu_group_put(pdev->dev.iommu_group);997997 kfree(vdev);998998999999 if (vfio_pci_is_vga(pdev)) {···10351035 return PCI_ERS_RESULT_CAN_RECOVER;10361036}1037103710381038-static struct pci_error_handlers vfio_err_handlers = {10381038+static const struct pci_error_handlers vfio_err_handlers = {10391039 .error_detected = vfio_pci_aer_err_detected,10401040};10411041
···6262 struct rw_semaphore group_lock;6363 struct vfio_iommu_driver *iommu_driver;6464 void *iommu_data;6565- bool noiommu;6665};67666867struct vfio_unbound_dev {···8485 struct list_head unbound_list;8586 struct mutex unbound_lock;8687 atomic_t opened;8787- bool noiommu;8888};89899090struct vfio_device {···9496 struct list_head group_next;9597 void *device_data;9698};9797-9898-#ifdef CONFIG_VFIO_NOIOMMU9999-static bool noiommu __read_mostly;100100-module_param_named(enable_unsafe_noiommu_support,101101- noiommu, bool, S_IRUGO | S_IWUSR);102102-MODULE_PARM_DESC(enable_unsafe_noiommu_mode, "Enable UNSAFE, no-IOMMU mode. This mode provides no device isolation, no DMA translation, no host kernel protection, cannot be used for device assignment to virtual machines, requires RAWIO permissions, and will taint the kernel. If you do not know what this is for, step away. (default: false)");103103-#endif104104-105105-/*106106- * vfio_iommu_group_{get,put} are only intended for VFIO bus driver probe107107- * and remove functions, any use cases other than acquiring the first108108- * reference for the purpose of calling vfio_add_group_dev() or removing109109- * that symmetric reference after vfio_del_group_dev() should use the raw110110- * iommu_group_{get,put} functions. In particular, vfio_iommu_group_put()111111- * removes the device from the dummy group and cannot be nested.112112- */113113-struct iommu_group *vfio_iommu_group_get(struct device *dev)114114-{115115- struct iommu_group *group;116116- int __maybe_unused ret;117117-118118- group = iommu_group_get(dev);119119-120120-#ifdef CONFIG_VFIO_NOIOMMU121121- /*122122- * With noiommu enabled, an IOMMU group will be created for a device123123- * that doesn't already have one and doesn't have an iommu_ops on their124124- * bus. 
We use iommu_present() again in the main code to detect these125125- * fake groups.126126- */127127- if (group || !noiommu || iommu_present(dev->bus))128128- return group;129129-130130- group = iommu_group_alloc();131131- if (IS_ERR(group))132132- return NULL;133133-134134- iommu_group_set_name(group, "vfio-noiommu");135135- ret = iommu_group_add_device(group, dev);136136- iommu_group_put(group);137137- if (ret)138138- return NULL;139139-140140- /*141141- * Where to taint? At this point we've added an IOMMU group for a142142- * device that is not backed by iommu_ops, therefore any iommu_143143- * callback using iommu_ops can legitimately Oops. So, while we may144144- * be about to give a DMA capable device to a user without IOMMU145145- * protection, which is clearly taint-worthy, let's go ahead and do146146- * it here.147147- */148148- add_taint(TAINT_USER, LOCKDEP_STILL_OK);149149- dev_warn(dev, "Adding kernel taint for vfio-noiommu group on device\n");150150-#endif151151-152152- return group;153153-}154154-EXPORT_SYMBOL_GPL(vfio_iommu_group_get);155155-156156-void vfio_iommu_group_put(struct iommu_group *group, struct device *dev)157157-{158158-#ifdef CONFIG_VFIO_NOIOMMU159159- if (!iommu_present(dev->bus))160160- iommu_group_remove_device(dev);161161-#endif162162-163163- iommu_group_put(group);164164-}165165-EXPORT_SYMBOL_GPL(vfio_iommu_group_put);166166-167167-#ifdef CONFIG_VFIO_NOIOMMU168168-static void *vfio_noiommu_open(unsigned long arg)169169-{170170- if (arg != VFIO_NOIOMMU_IOMMU)171171- return ERR_PTR(-EINVAL);172172- if (!capable(CAP_SYS_RAWIO))173173- return ERR_PTR(-EPERM);174174-175175- return NULL;176176-}177177-178178-static void vfio_noiommu_release(void *iommu_data)179179-{180180-}181181-182182-static long vfio_noiommu_ioctl(void *iommu_data,183183- unsigned int cmd, unsigned long arg)184184-{185185- if (cmd == VFIO_CHECK_EXTENSION)186186- return arg == VFIO_NOIOMMU_IOMMU ? 
1 : 0;187187-188188- return -ENOTTY;189189-}190190-191191-static int vfio_iommu_present(struct device *dev, void *unused)192192-{193193- return iommu_present(dev->bus) ? 1 : 0;194194-}195195-196196-static int vfio_noiommu_attach_group(void *iommu_data,197197- struct iommu_group *iommu_group)198198-{199199- return iommu_group_for_each_dev(iommu_group, NULL,200200- vfio_iommu_present) ? -EINVAL : 0;201201-}202202-203203-static void vfio_noiommu_detach_group(void *iommu_data,204204- struct iommu_group *iommu_group)205205-{206206-}207207-208208-static struct vfio_iommu_driver_ops vfio_noiommu_ops = {209209- .name = "vfio-noiommu",210210- .owner = THIS_MODULE,211211- .open = vfio_noiommu_open,212212- .release = vfio_noiommu_release,213213- .ioctl = vfio_noiommu_ioctl,214214- .attach_group = vfio_noiommu_attach_group,215215- .detach_group = vfio_noiommu_detach_group,216216-};217217-218218-static struct vfio_iommu_driver vfio_noiommu_driver = {219219- .ops = &vfio_noiommu_ops,220220-};221221-222222-/*223223- * Wrap IOMMU drivers, the noiommu driver is the one and only driver for224224- * noiommu groups (and thus containers) and not available for normal groups.225225- */226226-#define vfio_for_each_iommu_driver(con, pos) \227227- for (pos = con->noiommu ? &vfio_noiommu_driver : \228228- list_first_entry(&vfio.iommu_drivers_list, \229229- struct vfio_iommu_driver, vfio_next); \230230- (con->noiommu ? pos != NULL : \231231- &pos->vfio_next != &vfio.iommu_drivers_list); \232232- pos = con->noiommu ? 
NULL : list_next_entry(pos, vfio_next))233233-#else234234-#define vfio_for_each_iommu_driver(con, pos) \235235- list_for_each_entry(pos, &vfio.iommu_drivers_list, vfio_next)236236-#endif237237-23899239100/**240101 * IOMMU driver registration···199342/**200343 * Group objects - create, release, get, put, search201344 */202202-static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,203203- bool noiommu)345345+static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group)204346{205347 struct vfio_group *group, *tmp;206348 struct device *dev;···217361 atomic_set(&group->container_users, 0);218362 atomic_set(&group->opened, 0);219363 group->iommu_group = iommu_group;220220- group->noiommu = noiommu;221364222365 group->nb.notifier_call = vfio_iommu_group_notifier;223366···252397253398 dev = device_create(vfio.class, NULL,254399 MKDEV(MAJOR(vfio.group_devt), minor),255255- group, "%s%d", noiommu ? "noiommu-" : "",256256- iommu_group_id(iommu_group));400400+ group, "%d", iommu_group_id(iommu_group));257401 if (IS_ERR(dev)) {258402 vfio_free_group_minor(minor);259403 vfio_group_unlock_and_free(group);···536682 return 0;537683538684 /* TODO Prevent device auto probing */539539- WARN("Device %s added to live group %d!\n", dev_name(dev),685685+ WARN(1, "Device %s added to live group %d!\n", dev_name(dev),540686 iommu_group_id(group->iommu_group));541687542688 return 0;···640786641787 group = vfio_group_get_from_iommu(iommu_group);642788 if (!group) {643643- group = vfio_create_group(iommu_group,644644- !iommu_present(dev->bus));789789+ group = vfio_create_group(iommu_group);645790 if (IS_ERR(group)) {646791 iommu_group_put(iommu_group);647792 return PTR_ERR(group);···852999 */8531000 if (!driver) {8541001 mutex_lock(&vfio.iommu_drivers_lock);855855- vfio_for_each_iommu_driver(container, driver) {10021002+ list_for_each_entry(driver, &vfio.iommu_drivers_list,10031003+ vfio_next) {8561004 if (!try_module_get(driver->ops->owner))8571005 
continue;8581006···9221068 }92310699241070 mutex_lock(&vfio.iommu_drivers_lock);925925- vfio_for_each_iommu_driver(container, driver) {10711071+ list_for_each_entry(driver, &vfio.iommu_drivers_list, vfio_next) {9261072 void *data;92710739281074 if (!try_module_get(driver->ops->owner))···11871333 if (atomic_read(&group->container_users))11881334 return -EINVAL;1189133511901190- if (group->noiommu && !capable(CAP_SYS_RAWIO))11911191- return -EPERM;11921192-11931336 f = fdget(container_fd);11941337 if (!f.file)11951338 return -EBADF;···1202135112031352 down_write(&container->group_lock);1204135312051205- /* Real groups and fake groups cannot mix */12061206- if (!list_empty(&container->group_list) &&12071207- container->noiommu != group->noiommu) {12081208- ret = -EPERM;12091209- goto unlock_out;12101210- }12111211-12121354 driver = container->iommu_driver;12131355 if (driver) {12141356 ret = driver->ops->attach_group(container->iommu_data,···12111367 }1212136812131369 group->container = container;12141214- container->noiommu = group->noiommu;12151370 list_add(&group->container_next, &container->group_list);1216137112171372 /* Get a reference on the container and mark a user within the group */···12401397 if (0 == atomic_read(&group->container_users) ||12411398 !group->container->iommu_driver || !vfio_group_viable(group))12421399 return -EINVAL;12431243-12441244- if (group->noiommu && !capable(CAP_SYS_RAWIO))12451245- return -EPERM;1246140012471401 device = vfio_device_get_from_name(group, buf);12481402 if (!device)···12821442 atomic_inc(&group->container_users);1283144312841444 fd_install(ret, filep);12851285-12861286- if (group->noiommu)12871287- dev_warn(device->dev, "vfio-noiommu device opened by user "12881288- "(%s:%d)\n", current->comm, task_pid_nr(current));1289144512901446 return ret;12911447}···13701534 group = vfio_group_get_from_minor(iminor(inode));13711535 if (!group)13721536 return -ENODEV;13731373-13741374- if (group->noiommu && !capable(CAP_SYS_RAWIO)) 
{13751375- vfio_group_put(group);13761376- return -EPERM;13771377- }1378153713791538 /* Do we need multiple instances of the group open? Seems not. */13801539 opened = atomic_cmpxchg(&group->opened, 0, 1);···1532170115331702 if (!atomic_inc_not_zero(&group->container_users))15341703 return ERR_PTR(-EINVAL);15351535-15361536- if (group->noiommu) {15371537- atomic_dec(&group->container_users);15381538- return ERR_PTR(-EPERM);15391539- }1540170415411705 if (!group->container->iommu_driver ||15421706 !vfio_group_viable(group)) {
+4-4
drivers/vhost/vhost.c
···819819 BUILD_BUG_ON(__alignof__ *vq->used > VRING_USED_ALIGN_SIZE);820820 if ((a.avail_user_addr & (VRING_AVAIL_ALIGN_SIZE - 1)) ||821821 (a.used_user_addr & (VRING_USED_ALIGN_SIZE - 1)) ||822822- (a.log_guest_addr & (sizeof(u64) - 1))) {822822+ (a.log_guest_addr & (VRING_USED_ALIGN_SIZE - 1))) {823823 r = -EINVAL;824824 break;825825 }···13691369 /* Grab the next descriptor number they're advertising, and increment13701370 * the index we've seen. */13711371 if (unlikely(__get_user(ring_head,13721372- &vq->avail->ring[last_avail_idx % vq->num]))) {13721372+ &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {13731373 vq_err(vq, "Failed to read head: idx %d address %p\n",13741374 last_avail_idx,13751375 &vq->avail->ring[last_avail_idx % vq->num]);···14891489 u16 old, new;14901490 int start;1491149114921492- start = vq->last_used_idx % vq->num;14921492+ start = vq->last_used_idx & (vq->num - 1);14931493 used = vq->used->ring + start;14941494 if (count == 1) {14951495 if (__put_user(heads[0].id, &used->id)) {···15311531{15321532 int start, n, r;1533153315341534- start = vq->last_used_idx % vq->num;15341534+ start = vq->last_used_idx & (vq->num - 1);15351535 n = vq->num - start;15361536 if (n < count) {15371537 r = __vhost_add_used_n(vq, heads, n);
+12-1
drivers/video/fbdev/fsl-diu-fb.c
···479479	 port = FSL_DIU_PORT_DLVDS;480480	 }481481482482-	return diu_ops.valid_monitor_port(port);482482+	if (diu_ops.valid_monitor_port)483483+	 port = diu_ops.valid_monitor_port(port);484484+485485+	return port;483486}484487485488/*···19181915#else19191916	monitor_port = fsl_diu_name_to_port(monitor_string);19201917#endif19181918+19191919+	/*19201920+	 * We must verify set_pixel_clock. If it is not implemented on the19211921+	 * platform, then there is no platform support for the DIU.19221922+	 */19231923+	if (!diu_ops.set_pixel_clock)19241924+	 return -ENODEV;19251925+19211926	pr_info("Freescale Display Interface Unit (DIU) framebuffer driver\n");1922192719231928#ifdef CONFIG_NOT_COHERENT_CACHE
···8080 /* Last used index we've seen. */8181 u16 last_used_idx;82828383+ /* Last written value to avail->flags */8484+ u16 avail_flags_shadow;8585+8686+ /* Last written value to avail->idx in guest byte order */8787+ u16 avail_idx_shadow;8888+8389 /* How to notify other side. FIXME: commonalize hcalls! */8490 bool (*notify)(struct virtqueue *vq);8591···115109 * otherwise virt_to_phys will give us bogus addresses in the116110 * virtqueue.117111 */118118- gfp &= ~(__GFP_HIGHMEM | __GFP_HIGH);112112+ gfp &= ~__GFP_HIGHMEM;119113120114 desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);121115 if (!desc)···241235242236 /* Put entry in available array (but don't update avail->idx until they243237 * do sync). */244244- avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1);238238+ avail = vq->avail_idx_shadow & (vq->vring.num - 1);245239 vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);246240247241 /* Descriptors and available array need to be set before we expose the248242 * new available array entries. */249243 virtio_wmb(vq->weak_barriers);250250- vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) + 1);244244+ vq->avail_idx_shadow++;245245+ vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);251246 vq->num_added++;252247253248 pr_debug("Added buffer head %i to %p\n", head, vq);···361354 * event. */362355 virtio_mb(vq->weak_barriers);363356364364- old = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) - vq->num_added;365365- new = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx);357357+ old = vq->avail_idx_shadow - vq->num_added;358358+ new = vq->avail_idx_shadow;366359 vq->num_added = 0;367360368361#ifdef DEBUG···517510 /* If we expect an interrupt for the next entry, tell host518511 * by writing event index and flush out the write before519512 * the read in the next get_buf call. 
*/520520- if (!(vq->vring.avail->flags & cpu_to_virtio16(_vq->vdev, VRING_AVAIL_F_NO_INTERRUPT))) {513513+ if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {521514 vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx);522515 virtio_mb(vq->weak_barriers);523516 }···544537{545538 struct vring_virtqueue *vq = to_vvq(_vq);546539547547- vq->vring.avail->flags |= cpu_to_virtio16(_vq->vdev, VRING_AVAIL_F_NO_INTERRUPT);540540+ if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {541541+ vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;542542+ vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);543543+ }544544+548545}549546EXPORT_SYMBOL_GPL(virtqueue_disable_cb);550547···576565 /* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to577566 * either clear the flags bit or point the event index at the next578567 * entry. Always do both to keep code simple. */579579- vq->vring.avail->flags &= cpu_to_virtio16(_vq->vdev, ~VRING_AVAIL_F_NO_INTERRUPT);568568+ if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {569569+ vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;570570+ vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);571571+ }580572 vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);581573 END_USE(vq);582574 return last_used_idx;···647633 /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to648634 * either clear the flags bit or point the event index at the next649635 * entry. Always do both to keep code simple. 
*/650650- vq->vring.avail->flags &= cpu_to_virtio16(_vq->vdev, ~VRING_AVAIL_F_NO_INTERRUPT);636636+ if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {637637+ vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;638638+ vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);639639+ }651640 /* TODO: tune this threshold */652652- bufs = (u16)(virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) - vq->last_used_idx) * 3 / 4;641641+ bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;653642 vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs);654643 virtio_mb(vq->weak_barriers);655644 if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {···687670 /* detach_buf clears data, so grab it now. */688671 buf = vq->data[i];689672 detach_buf(vq, i);690690- vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) - 1);673673+ vq->avail_idx_shadow--;674674+ vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);691675 END_USE(vq);692676 return buf;693677 }···753735 vq->weak_barriers = weak_barriers;754736 vq->broken = false;755737 vq->last_used_idx = 0;738738+ vq->avail_flags_shadow = 0;739739+ vq->avail_idx_shadow = 0;756740 vq->num_added = 0;757741 list_add_tail(&vq->vq.list, &vdev->vqs);758742#ifdef DEBUG···766746 vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);767747768748 /* No callback? Tell other side not to bother us. */769769- if (!callback)770770- vq->vring.avail->flags |= cpu_to_virtio16(vdev, VRING_AVAIL_F_NO_INTERRUPT);749749+ if (!callback) {750750+ vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;751751+ vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);752752+ }771753772754 /* Put everything in free lists. */773755 vq->free_head = 0;
+18-5
drivers/xen/events/events_fifo.c
···281281282282static void consume_one_event(unsigned cpu,283283 struct evtchn_fifo_control_block *control_block,284284- unsigned priority, unsigned long *ready)284284+ unsigned priority, unsigned long *ready,285285+ bool drop)285286{286287 struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu);287288 uint32_t head;···314313 if (head == 0)315314 clear_bit(priority, ready);316315317317- if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port))318318- handle_irq_for_port(port);316316+ if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {317317+ if (unlikely(drop))318318+ pr_warn("Dropping pending event for port %u\n", port);319319+ else320320+ handle_irq_for_port(port);321321+ }319322320323 q->head[priority] = head;321324}322325323323-static void evtchn_fifo_handle_events(unsigned cpu)326326+static void __evtchn_fifo_handle_events(unsigned cpu, bool drop)324327{325328 struct evtchn_fifo_control_block *control_block;326329 unsigned long ready;···336331337332 while (ready) {338333 q = find_first_bit(&ready, EVTCHN_FIFO_MAX_QUEUES);339339- consume_one_event(cpu, control_block, q, &ready);334334+ consume_one_event(cpu, control_block, q, &ready, drop);340335 ready |= xchg(&control_block->ready, 0);341336 }337337+}338338+339339+static void evtchn_fifo_handle_events(unsigned cpu)340340+{341341+ __evtchn_fifo_handle_events(cpu, false);342342}343343344344static void evtchn_fifo_resume(void)···429419 case CPU_UP_PREPARE:430420 if (!per_cpu(cpu_control_block, cpu))431421 ret = evtchn_fifo_alloc_control_block(cpu);422422+ break;423423+ case CPU_DEAD:424424+ __evtchn_fifo_handle_events(cpu, true);432425 break;433426 default:434427 break;
···451451{452452 struct v9fs_inode *v9inode = V9FS_I(inode);453453454454- truncate_inode_pages_final(inode->i_mapping);454454+ truncate_inode_pages_final(&inode->i_data);455455 clear_inode(inode);456456- filemap_fdatawrite(inode->i_mapping);456456+ filemap_fdatawrite(&inode->i_data);457457458458 v9fs_cache_inode_put_cookie(inode);459459 /* clunk the fid stashed in writeback_fid */
+7-4
fs/block_dev.c
···15231523 WARN_ON_ONCE(bdev->bd_holders);15241524 sync_blockdev(bdev);15251525 kill_bdev(bdev);15261526- /*15271527- * ->release can cause the queue to disappear, so flush all15281528- * dirty data before.15291529- */15261526+15301527 bdev_write_inode(bdev);15281528+ /*15291529+ * Detaching bdev inode from its wb in __destroy_inode()15301530+ * is too late: the queue which embeds its bdi (along with15311531+ * root wb) can be gone as soon as we put_disk() below.15321532+ */15331533+ inode_detach_wb(bdev->bd_inode);15311534 }15321535 if (bdev->bd_contains == bdev) {15331536 if (disk->fops->release)
+7-3
fs/btrfs/extent-tree.c
···1048010480 * until transaction commit to do the actual discard.1048110481 */1048210482 if (trimming) {1048310483- WARN_ON(!list_empty(&block_group->bg_list));1048410484- spin_lock(&trans->transaction->deleted_bgs_lock);1048310483+ spin_lock(&fs_info->unused_bgs_lock);1048410484+ /*1048510485+ * A concurrent scrub might have added us to the list1048610486+ * fs_info->unused_bgs, so use a list_move operation1048710487+ * to add the block group to the deleted_bgs list.1048810488+ */1048510489 list_move(&block_group->bg_list,1048610490 &trans->transaction->deleted_bgs);1048710487- spin_unlock(&trans->transaction->deleted_bgs_lock);1049110491+ spin_unlock(&fs_info->unused_bgs_lock);1048810492 btrfs_get_block_group(block_group);1048910493 }1049010494end_trans:
+14-4
fs/btrfs/file.c
···12911291 * on error we return an unlocked page and the error value12921292 * on success we return a locked page and 012931293 */12941294-static int prepare_uptodate_page(struct page *page, u64 pos,12941294+static int prepare_uptodate_page(struct inode *inode,12951295+ struct page *page, u64 pos,12951296 bool force_uptodate)12961297{12971298 int ret = 0;···13061305 if (!PageUptodate(page)) {13071306 unlock_page(page);13081307 return -EIO;13081308+ }13091309+ if (page->mapping != inode->i_mapping) {13101310+ unlock_page(page);13111311+ return -EAGAIN;13091312 }13101313 }13111314 return 0;···13291324 int faili;1330132513311326 for (i = 0; i < num_pages; i++) {13271327+again:13321328 pages[i] = find_or_create_page(inode->i_mapping, index + i,13331329 mask | __GFP_WRITE);13341330 if (!pages[i]) {···13391333 }1340133413411335 if (i == 0)13421342- err = prepare_uptodate_page(pages[i], pos,13361336+ err = prepare_uptodate_page(inode, pages[i], pos,13431337 force_uptodate);13441344- if (i == num_pages - 1)13451345- err = prepare_uptodate_page(pages[i],13381338+ if (!err && i == num_pages - 1)13391339+ err = prepare_uptodate_page(inode, pages[i],13461340 pos + write_bytes, false);13471341 if (err) {13481342 page_cache_release(pages[i]);13431343+ if (err == -EAGAIN) {13441344+ err = 0;13451345+ goto again;13461346+ }13491347 faili = i - 1;13501348 goto fail;13511349 }
+6-4
fs/btrfs/free-space-cache.c
···891891 spin_unlock(&block_group->lock);892892 ret = 0;893893894894- btrfs_warn(fs_info, "failed to load free space cache for block group %llu, rebuild it now",894894+ btrfs_warn(fs_info, "failed to load free space cache for block group %llu, rebuilding it now",895895 block_group->key.objectid);896896 }897897···29722972 u64 cont1_bytes, u64 min_bytes)29732973{29742974 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;29752975- struct btrfs_free_space *entry;29752975+ struct btrfs_free_space *entry = NULL;29762976 int ret = -ENOSPC;29772977 u64 bitmap_offset = offset_to_bitmap(ctl, offset);29782978···29832983 * The bitmap that covers offset won't be in the list unless offset29842984 * is just its start offset.29852985 */29862986- entry = list_first_entry(bitmaps, struct btrfs_free_space, list);29872987- if (entry->offset != bitmap_offset) {29862986+ if (!list_empty(bitmaps))29872987+ entry = list_first_entry(bitmaps, struct btrfs_free_space, list);29882988+29892989+ if (!entry || entry->offset != bitmap_offset) {29882990 entry = tree_search_offset(ctl, bitmap_offset, 1, 0);29892991 if (entry && list_empty(&entry->list))29902992 list_add(&entry->list, bitmaps);
···18311831 * @word: long word containing the bit lock18321832 */18331833static int18341834-cifs_wait_bit_killable(struct wait_bit_key *key)18341834+cifs_wait_bit_killable(struct wait_bit_key *key, int mode)18351835{18361836- if (fatal_signal_pending(current))18371837- return -ERESTARTSYS;18381836 freezable_schedule_unsafe();18371837+ if (signal_pending_state(mode, current))18381838+ return -ERESTARTSYS;18391839 return 0;18401840}18411841
···549549 unregister_chrdev_region(cc->cdev->dev, 1);550550 cdev_del(cc->cdev);551551 }552552+ /* Base reference is now owned by "fud" */553553+ fuse_conn_put(&cc->fc);552554553555 rc = fuse_dev_release(inode, file); /* puts the base reference */554556
···10091009}1010101010111011/* Fast check whether buffer is already attached to the required transaction */10121012-static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh)10121012+static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh,10131013+ bool undo)10131014{10141015 struct journal_head *jh;10151016 bool ret = false;···10361035 /* This should be bh2jh() but that doesn't work with inline functions */10371036 jh = READ_ONCE(bh->b_private);10381037 if (!jh)10381038+ goto out;10391039+ /* For undo access buffer must have data copied */10401040+ if (undo && !jh->b_committed_data)10391041 goto out;10401042 if (jh->b_transaction != handle->h_transaction &&10411043 jh->b_next_transaction != handle->h_transaction)···10771073 struct journal_head *jh;10781074 int rc;1079107510801080- if (jbd2_write_access_granted(handle, bh))10761076+ if (jbd2_write_access_granted(handle, bh, false))10811077 return 0;1082107810831079 jh = jbd2_journal_add_journal_head(bh);···12141210 char *committed_data = NULL;1215121112161212 JBUFFER_TRACE(jh, "entry");12171217- if (jbd2_write_access_granted(handle, bh))12131213+ if (jbd2_write_access_granted(handle, bh, true))12181214 return 0;1219121512201216 jh = jbd2_journal_add_journal_head(bh);···2156215221572153 if (!buffer_dirty(bh)) {21582154 /* bdflush has written it. We can drop it now */21552155+ __jbd2_journal_remove_checkpoint(jh);21592156 goto zap_buffer;21602157 }21612158···21862181 /* The orphan record's transaction has21872182 * committed. We can cleanse this buffer */21882183 clear_buffer_jbddirty(bh);21842184+ __jbd2_journal_remove_checkpoint(jh);21892185 goto zap_buffer;21902186 }21912187 }
···7575 * nfs_wait_bit_killable - helper for functions that are sleeping on bit locks7676 * @word: long word containing the bit lock7777 */7878-int nfs_wait_bit_killable(struct wait_bit_key *key)7878+int nfs_wait_bit_killable(struct wait_bit_key *key, int mode)7979{8080- if (fatal_signal_pending(current))8181- return -ERESTARTSYS;8280 freezable_schedule_unsafe();8181+ if (signal_pending_state(mode, current))8282+ return -ERESTARTSYS;8383 return 0;8484}8585EXPORT_SYMBOL_GPL(nfs_wait_bit_killable);
+1-1
fs/nfs/internal.h
···379379extern void nfs_clear_inode(struct inode *);380380extern void nfs_evict_inode(struct inode *);381381void nfs_zap_acl_cache(struct inode *inode);382382-extern int nfs_wait_bit_killable(struct wait_bit_key *key);382382+extern int nfs_wait_bit_killable(struct wait_bit_key *key, int mode);383383384384/* super.c */385385extern const struct super_operations nfs_sops;
···55 * Copyright 2001 Red Hat, Inc.66 * Based on code from mm/memory.c Copyright Linus Torvalds and others.77 *88- * Copyright 2011 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>88+ * Copyright 2011 Red Hat, Inc., Peter Zijlstra99 *1010 * This program is free software; you can redistribute it and/or1111 * modify it under the terms of the GNU General Public License
···9090 */9191struct cgroup_file {9292 /* do not access any fields from outside cgroup core */9393- struct list_head node; /* anchored at css->files */9493 struct kernfs_node *kn;9594};9695···132133 * used to allow interrupting and resuming iterations.133134 */134135 u64 serial_nr;135135-136136- /* all cgroup_files associated with this css */137137- struct list_head files;138136139137 /* percpu_ref killing and RCU release */140138 struct rcu_head rcu_head;···422426 void (*css_reset)(struct cgroup_subsys_state *css);423427 void (*css_e_css_changed)(struct cgroup_subsys_state *css);424428425425- int (*can_attach)(struct cgroup_subsys_state *css,426426- struct cgroup_taskset *tset);427427- void (*cancel_attach)(struct cgroup_subsys_state *css,428428- struct cgroup_taskset *tset);429429- void (*attach)(struct cgroup_subsys_state *css,430430- struct cgroup_taskset *tset);429429+ int (*can_attach)(struct cgroup_taskset *tset);430430+ void (*cancel_attach)(struct cgroup_taskset *tset);431431+ void (*attach)(struct cgroup_taskset *tset);431432 int (*can_fork)(struct task_struct *task, void **priv_p);432433 void (*cancel_fork)(struct task_struct *task, void *priv);433434 void (*fork)(struct task_struct *task, void *priv);
+23-24
include/linux/cgroup.h
···8888int cgroup_add_dfl_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);8989int cgroup_add_legacy_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);9090int cgroup_rm_cftypes(struct cftype *cfts);9191+void cgroup_file_notify(struct cgroup_file *cfile);91929293char *task_cgroup_path(struct task_struct *task, char *buf, size_t buflen);9394int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry);···120119struct cgroup_subsys_state *css_next_descendant_post(struct cgroup_subsys_state *pos,121120 struct cgroup_subsys_state *css);122121123123-struct task_struct *cgroup_taskset_first(struct cgroup_taskset *tset);124124-struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset);122122+struct task_struct *cgroup_taskset_first(struct cgroup_taskset *tset,123123+ struct cgroup_subsys_state **dst_cssp);124124+struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset,125125+ struct cgroup_subsys_state **dst_cssp);125126126127void css_task_iter_start(struct cgroup_subsys_state *css,127128 struct css_task_iter *it);···238235/**239236 * cgroup_taskset_for_each - iterate cgroup_taskset240237 * @task: the loop cursor238238+ * @dst_css: the destination css241239 * @tset: taskset to iterate242240 *243241 * @tset may contain multiple tasks and they may belong to multiple244244- * processes. When there are multiple tasks in @tset, if a task of a245245- * process is in @tset, all tasks of the process are in @tset. 
Also, all246246- * are guaranteed to share the same source and destination csses.242242+ * processes.243243+ *244244+ * On the v2 hierarchy, there may be tasks from multiple processes and they245245+ * may not share the source or destination csses.246246+ *247247+ * On traditional hierarchies, when there are multiple tasks in @tset, if a248248+ * task of a process is in @tset, all tasks of the process are in @tset.249249+ * Also, all are guaranteed to share the same source and destination csses.247250 *248251 * Iteration is not in any specific order.249252 */250250-#define cgroup_taskset_for_each(task, tset) \251251- for ((task) = cgroup_taskset_first((tset)); (task); \252252- (task) = cgroup_taskset_next((tset)))253253+#define cgroup_taskset_for_each(task, dst_css, tset) \254254+ for ((task) = cgroup_taskset_first((tset), &(dst_css)); \255255+ (task); \256256+ (task) = cgroup_taskset_next((tset), &(dst_css)))253257254258/**255259 * cgroup_taskset_for_each_leader - iterate group leaders in a cgroup_taskset256260 * @leader: the loop cursor261261+ * @dst_css: the destination css257262 * @tset: takset to iterate258263 *259264 * Iterate threadgroup leaders of @tset. 
For single-task migrations, @tset260265 * may not contain any.261266 */262262-#define cgroup_taskset_for_each_leader(leader, tset) \263263- for ((leader) = cgroup_taskset_first((tset)); (leader); \264264- (leader) = cgroup_taskset_next((tset))) \267267+#define cgroup_taskset_for_each_leader(leader, dst_css, tset) \268268+ for ((leader) = cgroup_taskset_first((tset), &(dst_css)); \269269+ (leader); \270270+ (leader) = cgroup_taskset_next((tset), &(dst_css))) \265271 if ((leader) != (leader)->group_leader) \266272 ; \267273 else···526514static inline void pr_cont_cgroup_path(struct cgroup *cgrp)527515{528516 pr_cont_kernfs_path(cgrp->kn);529529-}530530-531531-/**532532- * cgroup_file_notify - generate a file modified event for a cgroup_file533533- * @cfile: target cgroup_file534534- *535535- * @cfile must have been obtained by setting cftype->file_offset.536536- */537537-static inline void cgroup_file_notify(struct cgroup_file *cfile)538538-{539539- /* might not have been created due to one of the CFTYPE selector flags */540540- if (cfile->kn)541541- kernfs_notify(cfile->kn);542517}543518544519#else /* !CONFIG_CGROUPS */
+4
include/linux/enclosure.h
···2929/* A few generic types ... taken from ses-2 */3030enum enclosure_component_type {3131 ENCLOSURE_COMPONENT_DEVICE = 0x01,3232+ ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS = 0x07,3333+ ENCLOSURE_COMPONENT_SCSI_TARGET_PORT = 0x14,3434+ ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT = 0x15,3235 ENCLOSURE_COMPONENT_ARRAY_DEVICE = 0x17,3636+ ENCLOSURE_COMPONENT_SAS_EXPANDER = 0x18,3337};34383539/* ses-2 common element status */
···22 * Runtime locking correctness validator33 *44 * Copyright (C) 2006,2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>55- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>55+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra66 *77 * see Documentation/locking/lockdep-design.txt for more details.88 */
+11
include/linux/mlx4/device.h
···427427};428428429429enum {430430+ /*431431+ * Max wqe size for rdma read is 512 bytes, so this432432+ * limits our max_sge_rd as the wqe needs to fit:433433+ * - ctrl segment (16 bytes)434434+ * - rdma segment (16 bytes)435435+ * - scatter elements (16 bytes each)436436+ */437437+ MLX4_MAX_SGE_RD = (512 - 16 - 16) / 16438438+};439439+440440+enum {430441 MLX4_DEV_PMC_SUBTYPE_GUID_INFO = 0x14,431442 MLX4_DEV_PMC_SUBTYPE_PORT_INFO = 0x15,432443 MLX4_DEV_PMC_SUBTYPE_PKEY_TABLE = 0x16,
···697697 * if there is no cgroup event for the current CPU context.698698 */699699static inline struct perf_cgroup *700700-perf_cgroup_from_task(struct task_struct *task)700700+perf_cgroup_from_task(struct task_struct *task, struct perf_event_context *ctx)701701{702702- return container_of(task_css(task, perf_event_cgrp_id),702702+ return container_of(task_css_check(task, perf_event_cgrp_id,703703+ ctx ? lockdep_is_held(&ctx->lock)704704+ : true),703705 struct perf_cgroup, css);704706}705707#endif /* CONFIG_CGROUP_PERF */
+1-1
include/linux/platform_data/edma.h
···7272 struct edma_rsv_info *rsv;73737474 /* List of channels allocated for memcpy, terminated with -1 */7575- s16 *memcpy_channels;7575+ s32 *memcpy_channels;76767777 s8 (*queue_priority_mapping)[2];7878 const s16 (*xbar_chans)[2];
+1-1
include/linux/proportions.h
···11/*22 * FLoating proportions33 *44- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>44+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra55 *66 * This file contains the public data structure and API definitions.77 */
···9999 * grabbing every spinlock (and more). So the "read" side to such a100100 * lock is anything which disables preemption.101101 */102102-#if defined(CONFIG_STOP_MACHINE) && defined(CONFIG_SMP)102102+#if defined(CONFIG_SMP) || defined(CONFIG_HOTPLUG_CPU)103103104104/**105105 * stop_machine: freeze the machine on all CPUs and run this function···118118119119int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,120120 const struct cpumask *cpus);121121-#else /* CONFIG_STOP_MACHINE && CONFIG_SMP */121121+#else /* CONFIG_SMP || CONFIG_HOTPLUG_CPU */122122123123static inline int stop_machine(cpu_stop_fn_t fn, void *data,124124 const struct cpumask *cpus)···137137 return stop_machine(fn, data, cpus);138138}139139140140-#endif /* CONFIG_STOP_MACHINE && CONFIG_SMP */140140+#endif /* CONFIG_SMP || CONFIG_HOTPLUG_CPU */141141#endif /* _LINUX_STOP_MACHINE */
+1-1
include/linux/uprobes.h
···2121 * Authors:2222 * Srikar Dronamraju2323 * Jim Keniston2424- * Copyright (C) 2011-2012 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>2424+ * Copyright (C) 2011-2012 Red Hat, Inc., Peter Zijlstra2525 */26262727#include <linux/errno.h>
···145145 list_del(&old->task_list);146146}147147148148-typedef int wait_bit_action_f(struct wait_bit_key *);148148+typedef int wait_bit_action_f(struct wait_bit_key *, int mode);149149void __wake_up(wait_queue_head_t *q, unsigned int mode, int nr, void *key);150150void __wake_up_locked_key(wait_queue_head_t *q, unsigned int mode, void *key);151151void __wake_up_sync_key(wait_queue_head_t *q, unsigned int mode, int nr, void *key);···960960 } while (0)961961962962963963-extern int bit_wait(struct wait_bit_key *);964964-extern int bit_wait_io(struct wait_bit_key *);965965-extern int bit_wait_timeout(struct wait_bit_key *);966966-extern int bit_wait_io_timeout(struct wait_bit_key *);963963+extern int bit_wait(struct wait_bit_key *, int);964964+extern int bit_wait_io(struct wait_bit_key *, int);965965+extern int bit_wait_timeout(struct wait_bit_key *, int);966966+extern int bit_wait_io_timeout(struct wait_bit_key *, int);967967968968/**969969 * wait_on_bit - wait for a bit to be cleared
+33
include/net/dst.h
···322322 }323323}324324325325+/**326326+ * dst_hold_safe - Take a reference on a dst if possible327327+ * @dst: pointer to dst entry328328+ *329329+ * This helper returns false if it could not safely330330+ * take a reference on a dst.331331+ */332332+static inline bool dst_hold_safe(struct dst_entry *dst)333333+{334334+ if (dst->flags & DST_NOCACHE)335335+ return atomic_inc_not_zero(&dst->__refcnt);336336+ dst_hold(dst);337337+ return true;338338+}339339+340340+/**341341+ * skb_dst_force_safe - makes sure skb dst is refcounted342342+ * @skb: buffer343343+ *344344+ * If dst is not yet refcounted and not destroyed, grab a ref on it.345345+ */346346+static inline void skb_dst_force_safe(struct sk_buff *skb)347347+{348348+ if (skb_dst_is_noref(skb)) {349349+ struct dst_entry *dst = skb_dst(skb);350350+351351+ if (!dst_hold_safe(dst))352352+ dst = NULL;353353+354354+ skb->_skb_refdst = (unsigned long)dst;355355+ }356356+}357357+325358326359/**327360 * __skb_tunnel_rx - prepare skb for rx reinsert
+23-4
include/net/inet_sock.h
···210210#define IP_CMSG_ORIGDSTADDR BIT(6)211211#define IP_CMSG_CHECKSUM BIT(7)212212213213-/* SYNACK messages might be attached to request sockets.213213+/**214214+ * sk_to_full_sk - Access to a full socket215215+ * @sk: pointer to a socket216216+ *217217+ * SYNACK messages might be attached to request sockets.214218 * Some places want to reach the listener in this case.215219 */216216-static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)220220+static inline struct sock *sk_to_full_sk(struct sock *sk)217221{218218- struct sock *sk = skb->sk;219219-222222+#ifdef CONFIG_INET220223 if (sk && sk->sk_state == TCP_NEW_SYN_RECV)221224 sk = inet_reqsk(sk)->rsk_listener;225225+#endif222226 return sk;227227+}228228+229229+/* sk_to_full_sk() variant with a const argument */230230+static inline const struct sock *sk_const_to_full_sk(const struct sock *sk)231231+{232232+#ifdef CONFIG_INET233233+ if (sk && sk->sk_state == TCP_NEW_SYN_RECV)234234+ sk = ((const struct request_sock *)sk)->rsk_listener;235235+#endif236236+ return sk;237237+}238238+239239+static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)240240+{241241+ return sk_to_full_sk(skb->sk);223242}224243225244static inline struct inet_sock *inet_sk(const struct sock *sk)
···14931493 * : SACK's are not delayed (see Section 6).14941494 */14951495 __u8 sack_needed:1, /* Do we need to sack the peer? */14961496- sack_generation:1;14961496+ sack_generation:1,14971497+ zero_window_announced:1;14971498 __u32 sack_cnt;1498149914991500 __u32 adaptation_ind; /* Adaptation Code point. */
+5-2
include/net/sock.h
···388388 struct socket_wq *sk_wq_raw;389389 };390390#ifdef CONFIG_XFRM391391- struct xfrm_policy *sk_policy[2];391391+ struct xfrm_policy __rcu *sk_policy[2];392392#endif393393 struct dst_entry *sk_rx_dst;394394 struct dst_entry __rcu *sk_dst_cache;···404404 sk_userlocks : 4,405405 sk_protocol : 8,406406 sk_type : 16;407407+#define SK_PROTOCOL_MAX U8_MAX407408 kmemcheck_bitfield_end(flags);408409 int sk_wmem_queued;409410 gfp_t sk_allocation;···741740 SOCK_SELECT_ERR_QUEUE, /* Wake select on error queue */742741};743742743743+#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))744744+744745static inline void sock_copy_flags(struct sock *nsk, struct sock *osk)745746{746747 nsk->sk_flags = osk->sk_flags;···817814static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)818815{819816 /* dont let skb dst not refcounted, we are going to leave rcu lock */820820- skb_dst_force(skb);817817+ skb_dst_force_safe(skb);821818822819 if (!sk->sk_backlog.tail)823820 sk->sk_backlog.head = skb;
···628628 * @OVS_CT_ATTR_MARK: u32 value followed by u32 mask. For each bit set in the629629 * mask, the corresponding bit in the value is copied to the connection630630 * tracking mark field in the connection.631631- * @OVS_CT_ATTR_LABEL: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN631631+ * @OVS_CT_ATTR_LABELS: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN632632 * mask. For each bit set in the mask, the corresponding bit in the value is633633 * copied to the connection tracking label field in the connection.634634 * @OVS_CT_ATTR_HELPER: variable length string defining conntrack ALG.
-7
include/uapi/linux/vfio.h
···3939#define VFIO_SPAPR_TCE_v2_IOMMU 740404141/*4242- * The No-IOMMU IOMMU offers no translation or isolation for devices and4343- * supports no ioctls outside of VFIO_CHECK_EXTENSION. Use of VFIO's No-IOMMU4444- * code will taint the host kernel and should be used with extreme caution.4545- */4646-#define VFIO_NOIOMMU_IOMMU 84747-4848-/*4942 * The IOCTL interface is designed for extensibility by embedding the5043 * structure length (argsz) and flags into structures passed between5144 * kernel and userspace. We therefore use the _IO() macro for these
+14
include/xen/interface/io/ring.h
···181181#define RING_GET_REQUEST(_r, _idx) \182182 (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))183183184184+/*185185+ * Get a local copy of a request.186186+ *187187+ * Use this in preference to RING_GET_REQUEST() so all processing is188188+ * done on a local copy that cannot be modified by the other end.189189+ *190190+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this191191+ * to be ineffective where _req is a struct which consists of only bitfields.192192+ */193193+#define RING_COPY_REQUEST(_r, _idx, _req) do { \194194+ /* Use volatile to force the copy into _req. */ \195195+ *(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx); \196196+} while (0)197197+184198#define RING_GET_RESPONSE(_r, _idx) \185199 (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))186200
-7
init/Kconfig
···20302030 it was better to provide this option than to break all the archs20312031 and have several arch maintainers pursuing me down dark alleys.2032203220332033-config STOP_MACHINE20342034- bool20352035- default y20362036- depends on (SMP && MODULE_UNLOAD) || HOTPLUG_CPU20372037- help20382038- Need stop_machine() primitive.20392039-20402033source "block/Kconfig"2041203420422035config PREEMPT_NOTIFIERS
+78-21
kernel/cgroup.c
···9898static DEFINE_SPINLOCK(cgroup_idr_lock);9999100100/*101101+ * Protects cgroup_file->kn for !self csses. It synchronizes notifications102102+ * against file removal/re-creation across css hiding.103103+ */104104+static DEFINE_SPINLOCK(cgroup_file_kn_lock);105105+106106+/*101107 * Protects cgroup_subsys->release_agent_path. Modifying it also requires102108 * cgroup_mutex. Reading requires either cgroup_mutex or this spinlock.103109 */···760754 if (!atomic_dec_and_test(&cset->refcount))761755 return;762756763763- /* This css_set is dead. unlink it and release cgroup refcounts */764764- for_each_subsys(ss, ssid)757757+ /* This css_set is dead. unlink it and release cgroup and css refs */758758+ for_each_subsys(ss, ssid) {765759 list_del(&cset->e_cset_node[ssid]);760760+ css_put(cset->subsys[ssid]);761761+ }766762 hash_del(&cset->hlist);767763 css_set_count--;768764···10641056 key = css_set_hash(cset->subsys);10651057 hash_add(css_set_table, &cset->hlist, key);1066105810671067- for_each_subsys(ss, ssid)10591059+ for_each_subsys(ss, ssid) {10601060+ struct cgroup_subsys_state *css = cset->subsys[ssid];10611061+10681062 list_add_tail(&cset->e_cset_node[ssid],10691069- &cset->subsys[ssid]->cgroup->e_csets[ssid]);10631063+ &css->cgroup->e_csets[ssid]);10641064+ css_get(css);10651065+ }1070106610711067 spin_unlock_bh(&css_set_lock);10721068···14051393 char name[CGROUP_FILE_NAME_MAX];1406139414071395 lockdep_assert_held(&cgroup_mutex);13961396+13971397+ if (cft->file_offset) {13981398+ struct cgroup_subsys_state *css = cgroup_css(cgrp, cft->ss);13991399+ struct cgroup_file *cfile = (void *)css + cft->file_offset;14001400+14011401+ spin_lock_irq(&cgroup_file_kn_lock);14021402+ cfile->kn = NULL;14031403+ spin_unlock_irq(&cgroup_file_kn_lock);14041404+ }14051405+14081406 kernfs_remove_by_name(cgrp->kn, cgroup_file_name(cgrp, cft, name));14091407}14101408···1878185618791857 INIT_LIST_HEAD(&cgrp->self.sibling);18801858 INIT_LIST_HEAD(&cgrp->self.children);18811881- 
INIT_LIST_HEAD(&cgrp->self.files);18821859 INIT_LIST_HEAD(&cgrp->cset_links);18831860 INIT_LIST_HEAD(&cgrp->pidlists);18841861 mutex_init(&cgrp->pidlist_mutex);···22372216 struct list_head src_csets;22382217 struct list_head dst_csets;2239221822192219+ /* the subsys currently being processed */22202220+ int ssid;22212221+22402222 /*22412223 * Fields for cgroup_taskset_*() iteration.22422224 *···23022278/**23032279 * cgroup_taskset_first - reset taskset and return the first task23042280 * @tset: taskset of interest22812281+ * @dst_cssp: output variable for the destination css23052282 *23062283 * @tset iteration is initialized and the first task is returned.23072284 */23082308-struct task_struct *cgroup_taskset_first(struct cgroup_taskset *tset)22852285+struct task_struct *cgroup_taskset_first(struct cgroup_taskset *tset,22862286+ struct cgroup_subsys_state **dst_cssp)23092287{23102288 tset->cur_cset = list_first_entry(tset->csets, struct css_set, mg_node);23112289 tset->cur_task = NULL;2312229023132313- return cgroup_taskset_next(tset);22912291+ return cgroup_taskset_next(tset, dst_cssp);23142292}2315229323162294/**23172295 * cgroup_taskset_next - iterate to the next task in taskset23182296 * @tset: taskset of interest22972297+ * @dst_cssp: output variable for the destination css23192298 *23202299 * Return the next task in @tset. Iteration must have been initialized23212300 * with cgroup_taskset_first().23222301 */23232323-struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset)23022302+struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset,23032303+ struct cgroup_subsys_state **dst_cssp)23242304{23252305 struct css_set *cset = tset->cur_cset;23262306 struct task_struct *task = tset->cur_task;···23392311 if (&task->cg_list != &cset->mg_tasks) {23402312 tset->cur_cset = cset;23412313 tset->cur_task = task;23142314+23152315+ /*23162316+ * This function may be called both before and23172317+ * after cgroup_taskset_migrate(). 
The two cases23182318+ * can be distinguished by looking at whether @cset23192319+ * has its ->mg_dst_cset set.23202320+ */23212321+ if (cset->mg_dst_cset)23222322+ *dst_cssp = cset->mg_dst_cset->subsys[tset->ssid];23232323+ else23242324+ *dst_cssp = cset->subsys[tset->ssid];23252325+23422326 return task;23432327 }23442328···23862346 /* check that we can legitimately attach to the cgroup */23872347 for_each_e_css(css, i, dst_cgrp) {23882348 if (css->ss->can_attach) {23892389- ret = css->ss->can_attach(css, tset);23492349+ tset->ssid = i;23502350+ ret = css->ss->can_attach(tset);23902351 if (ret) {23912352 failed_css = css;23922353 goto out_cancel_attach;···24202379 */24212380 tset->csets = &tset->dst_csets;2422238124232423- for_each_e_css(css, i, dst_cgrp)24242424- if (css->ss->attach)24252425- css->ss->attach(css, tset);23822382+ for_each_e_css(css, i, dst_cgrp) {23832383+ if (css->ss->attach) {23842384+ tset->ssid = i;23852385+ css->ss->attach(tset);23862386+ }23872387+ }2426238824272389 ret = 0;24282390 goto out_release_tset;···24342390 for_each_e_css(css, i, dst_cgrp) {24352391 if (css == failed_css)24362392 break;24372437- if (css->ss->cancel_attach)24382438- css->ss->cancel_attach(css, tset);23932393+ if (css->ss->cancel_attach) {23942394+ tset->ssid = i;23952395+ css->ss->cancel_attach(tset);23962396+ }24392397 }24402398out_release_tset:24412399 spin_lock_bh(&css_set_lock);···33593313 if (cft->file_offset) {33603314 struct cgroup_file *cfile = (void *)css + cft->file_offset;3361331533623362- kernfs_get(kn);33163316+ spin_lock_irq(&cgroup_file_kn_lock);33633317 cfile->kn = kn;33643364- list_add(&cfile->node, &css->files);33183318+ spin_unlock_irq(&cgroup_file_kn_lock);33653319 }3366332033673321 return 0;···35963550 for (cft = cfts; cft && cft->name[0] != '\0'; cft++)35973551 cft->flags |= __CFTYPE_NOT_ON_DFL;35983552 return cgroup_add_cftypes(ss, cfts);35533553+}35543554+35553555+/**35563556+ * cgroup_file_notify - generate a file modified event for a 
cgroup_file35573557+ * @cfile: target cgroup_file35583558+ *35593559+ * @cfile must have been obtained by setting cftype->file_offset.35603560+ */35613561+void cgroup_file_notify(struct cgroup_file *cfile)35623562+{35633563+ unsigned long flags;35643564+35653565+ spin_lock_irqsave(&cgroup_file_kn_lock, flags);35663566+ if (cfile->kn)35673567+ kernfs_notify(cfile->kn);35683568+ spin_unlock_irqrestore(&cgroup_file_kn_lock, flags);35993569}3600357036013571/**···46754613 container_of(work, struct cgroup_subsys_state, destroy_work);46764614 struct cgroup_subsys *ss = css->ss;46774615 struct cgroup *cgrp = css->cgroup;46784678- struct cgroup_file *cfile;4679461646804617 percpu_ref_exit(&css->refcnt);46814681-46824682- list_for_each_entry(cfile, &css->files, node)46834683- kernfs_put(cfile->kn);4684461846854619 if (ss) {46864620 /* css free path */···47824724 css->ss = ss;47834725 INIT_LIST_HEAD(&css->sibling);47844726 INIT_LIST_HEAD(&css->children);47854785- INIT_LIST_HEAD(&css->files);47864727 css->serial_nr = css_serial_nr_next++;4787472847884729 if (cgroup_parent(cgrp)) {
+10-13
kernel/cgroup_freezer.c
···155155 * @freezer->lock. freezer_attach() makes the new tasks conform to the156156 * current state and all following state changes can see the new tasks.157157 */158158-static void freezer_attach(struct cgroup_subsys_state *new_css,159159- struct cgroup_taskset *tset)158158+static void freezer_attach(struct cgroup_taskset *tset)160159{161161- struct freezer *freezer = css_freezer(new_css);162160 struct task_struct *task;163163- bool clear_frozen = false;161161+ struct cgroup_subsys_state *new_css;164162165163 mutex_lock(&freezer_mutex);166164···172174 * current state before executing the following - !frozen tasks may173175 * be visible in a FROZEN cgroup and frozen tasks in a THAWED one.174176 */175175- cgroup_taskset_for_each(task, tset) {177177+ cgroup_taskset_for_each(task, new_css, tset) {178178+ struct freezer *freezer = css_freezer(new_css);179179+176180 if (!(freezer->state & CGROUP_FREEZING)) {177181 __thaw_task(task);178182 } else {179183 freeze_task(task);180180- freezer->state &= ~CGROUP_FROZEN;181181- clear_frozen = true;184184+ /* clear FROZEN and propagate upwards */185185+ while (freezer && (freezer->state & CGROUP_FROZEN)) {186186+ freezer->state &= ~CGROUP_FROZEN;187187+ freezer = parent_freezer(freezer);188188+ }182189 }183183- }184184-185185- /* propagate FROZEN clearing upwards */186186- while (clear_frozen && (freezer = parent_freezer(freezer))) {187187- freezer->state &= ~CGROUP_FROZEN;188188- clear_frozen = freezer->state & CGROUP_FREEZING;189190 }190191191192 mutex_unlock(&freezer_mutex);
+20-57
kernel/cgroup_pids.c
···106106{107107 struct pids_cgroup *p;108108109109- for (p = pids; p; p = parent_pids(p))109109+ for (p = pids; parent_pids(p); p = parent_pids(p))110110 pids_cancel(p, num);111111}112112···123123{124124 struct pids_cgroup *p;125125126126- for (p = pids; p; p = parent_pids(p))126126+ for (p = pids; parent_pids(p); p = parent_pids(p))127127 atomic64_add(num, &p->counter);128128}129129···140140{141141 struct pids_cgroup *p, *q;142142143143- for (p = pids; p; p = parent_pids(p)) {143143+ for (p = pids; parent_pids(p); p = parent_pids(p)) {144144 int64_t new = atomic64_add_return(num, &p->counter);145145146146 /*···162162 return -EAGAIN;163163}164164165165-static int pids_can_attach(struct cgroup_subsys_state *css,166166- struct cgroup_taskset *tset)165165+static int pids_can_attach(struct cgroup_taskset *tset)167166{168168- struct pids_cgroup *pids = css_pids(css);169167 struct task_struct *task;168168+ struct cgroup_subsys_state *dst_css;170169171171- cgroup_taskset_for_each(task, tset) {170170+ cgroup_taskset_for_each(task, dst_css, tset) {171171+ struct pids_cgroup *pids = css_pids(dst_css);172172 struct cgroup_subsys_state *old_css;173173 struct pids_cgroup *old_pids;174174···187187 return 0;188188}189189190190-static void pids_cancel_attach(struct cgroup_subsys_state *css,191191- struct cgroup_taskset *tset)190190+static void pids_cancel_attach(struct cgroup_taskset *tset)192191{193193- struct pids_cgroup *pids = css_pids(css);194192 struct task_struct *task;193193+ struct cgroup_subsys_state *dst_css;195194196196- cgroup_taskset_for_each(task, tset) {195195+ cgroup_taskset_for_each(task, dst_css, tset) {196196+ struct pids_cgroup *pids = css_pids(dst_css);197197 struct cgroup_subsys_state *old_css;198198 struct pids_cgroup *old_pids;199199···205205 }206206}207207208208+/*209209+ * task_css_check(true) in pids_can_fork() and pids_cancel_fork() relies210210+ * on threadgroup_change_begin() held by the copy_process().211211+ */208212static int pids_can_fork(struct 
task_struct *task, void **priv_p)209213{210214 struct cgroup_subsys_state *css;211215 struct pids_cgroup *pids;212212- int err;213216214214- /*215215- * Use the "current" task_css for the pids subsystem as the tentative216216- * css. It is possible we will charge the wrong hierarchy, in which217217- * case we will forcefully revert/reapply the charge on the right218218- * hierarchy after it is committed to the task proper.219219- */220220- css = task_get_css(current, pids_cgrp_id);217217+ css = task_css_check(current, pids_cgrp_id, true);221218 pids = css_pids(css);222222-223223- err = pids_try_charge(pids, 1);224224- if (err)225225- goto err_css_put;226226-227227- *priv_p = css;228228- return 0;229229-230230-err_css_put:231231- css_put(css);232232- return err;219219+ return pids_try_charge(pids, 1);233220}234221235222static void pids_cancel_fork(struct task_struct *task, void *priv)236223{237237- struct cgroup_subsys_state *css = priv;238238- struct pids_cgroup *pids = css_pids(css);239239-240240- pids_uncharge(pids, 1);241241- css_put(css);242242-}243243-244244-static void pids_fork(struct task_struct *task, void *priv)245245-{246224 struct cgroup_subsys_state *css;247247- struct cgroup_subsys_state *old_css = priv;248225 struct pids_cgroup *pids;249249- struct pids_cgroup *old_pids = css_pids(old_css);250226251251- css = task_get_css(task, pids_cgrp_id);227227+ css = task_css_check(current, pids_cgrp_id, true);252228 pids = css_pids(css);253253-254254- /*255255- * If the association has changed, we have to revert and reapply the256256- * charge/uncharge on the wrong hierarchy to the current one. 
Since257257- * the association can only change due to an organisation event, its258258- * okay for us to ignore the limit in this case.259259- */260260- if (pids != old_pids) {261261- pids_uncharge(old_pids, 1);262262- pids_charge(pids, 1);263263- }264264-265265- css_put(css);266266- css_put(old_css);229229+ pids_uncharge(pids, 1);267230}268231269232static void pids_free(struct task_struct *task)···298335 {299336 .name = "current",300337 .read_s64 = pids_current_read,338338+ .flags = CFTYPE_NOT_ON_ROOT,301339 },302340 { } /* terminate */303341};···310346 .cancel_attach = pids_cancel_attach,311347 .can_fork = pids_can_fork,312348 .cancel_fork = pids_cancel_fork,313313- .fork = pids_fork,314349 .free = pids_free,315350 .legacy_cftypes = pids_files,316351 .dfl_cftypes = pids_files,
+21-12
kernel/cpuset.c
···14291429static struct cpuset *cpuset_attach_old_cs;1430143014311431/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */14321432-static int cpuset_can_attach(struct cgroup_subsys_state *css,14331433- struct cgroup_taskset *tset)14321432+static int cpuset_can_attach(struct cgroup_taskset *tset)14341433{14351435- struct cpuset *cs = css_cs(css);14341434+ struct cgroup_subsys_state *css;14351435+ struct cpuset *cs;14361436 struct task_struct *task;14371437 int ret;1438143814391439 /* used later by cpuset_attach() */14401440- cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset));14401440+ cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));14411441+ cs = css_cs(css);1441144214421443 mutex_lock(&cpuset_mutex);14431444···14481447 (cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed)))14491448 goto out_unlock;1450144914511451- cgroup_taskset_for_each(task, tset) {14501450+ cgroup_taskset_for_each(task, css, tset) {14521451 ret = task_can_attach(task, cs->cpus_allowed);14531452 if (ret)14541453 goto out_unlock;···14681467 return ret;14691468}1470146914711471-static void cpuset_cancel_attach(struct cgroup_subsys_state *css,14721472- struct cgroup_taskset *tset)14701470+static void cpuset_cancel_attach(struct cgroup_taskset *tset)14731471{14721472+ struct cgroup_subsys_state *css;14731473+ struct cpuset *cs;14741474+14751475+ cgroup_taskset_first(tset, &css);14761476+ cs = css_cs(css);14771477+14741478 mutex_lock(&cpuset_mutex);14751479 css_cs(css)->attach_in_progress--;14761480 mutex_unlock(&cpuset_mutex);···14881482 */14891483static cpumask_var_t cpus_attach;1490148414911491-static void cpuset_attach(struct cgroup_subsys_state *css,14921492- struct cgroup_taskset *tset)14851485+static void cpuset_attach(struct cgroup_taskset *tset)14931486{14941487 /* static buf protected by cpuset_mutex */14951488 static nodemask_t cpuset_attach_nodemask_to;14961489 struct task_struct *task;14971490 struct task_struct *leader;
14981498- struct cpuset *cs = css_cs(css);14911491+ struct cgroup_subsys_state *css;14921492+ struct cpuset *cs;14991493 struct cpuset *oldcs = cpuset_attach_old_cs;14941494+14951495+ cgroup_taskset_first(tset, &css);14961496+ cs = css_cs(css);1500149715011498 mutex_lock(&cpuset_mutex);15021499···1511150215121503 guarantee_online_mems(cs, &cpuset_attach_nodemask_to);1513150415141514- cgroup_taskset_for_each(task, tset) {15051505+ cgroup_taskset_for_each(task, css, tset) {15151506 /*15161507 * can_attach beforehand should guarantee that this doesn't15171508 * fail. TODO: have a better way to handle failure here···15271518 * sleep and should be moved outside migration path proper.15281519 */15291520 cpuset_attach_nodemask_to = cs->effective_mems;15301530- cgroup_taskset_for_each_leader(leader, tset) {15211521+ cgroup_taskset_for_each_leader(leader, css, tset) {15311522 struct mm_struct *mm = get_task_mm(leader);1532152315331524 if (mm) {
+1-1
kernel/events/callchain.c
···33 *44 * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>55 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar66- * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>66+ * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra77 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>88 *99 * For licensing details see kernel-base/COPYING
···1919 * Authors:2020 * Srikar Dronamraju2121 * Jim Keniston2222- * Copyright (C) 2011-2012 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>2222+ * Copyright (C) 2011-2012 Red Hat, Inc., Peter Zijlstra2323 */24242525#include <linux/kernel.h>
···11/*22- * Copyright (C) 2010 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>22+ * Copyright (C) 2010 Red Hat, Inc., Peter Zijlstra33 *44 * Provides a framework for enqueueing and running callbacks from hardirq55 * context. The enqueueing is NMI-safe.
+1-1
kernel/jump_label.c
···22 * jump label support33 *44 * Copyright (C) 2009 Jason Baron <jbaron@redhat.com>55- * Copyright (C) 2011 Peter Zijlstra <pzijlstr@redhat.com>55+ * Copyright (C) 2011 Peter Zijlstra66 *77 */88#include <linux/memory.h>
+1-1
kernel/locking/lockdep.c
···66 * Started by Ingo Molnar:77 *88 * Copyright (C) 2006,2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>99- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>99+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra1010 *1111 * this code maps all the lock dependencies as they occur in a live kernel1212 * and will warn about the following classes of locking bugs:
+1-1
kernel/locking/lockdep_proc.c
···66 * Started by Ingo Molnar:77 *88 * Copyright (C) 2006,2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>99- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>99+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra1010 *1111 * Code for /proc/lockdep and /proc/lockdep_stats:1212 *
+5-3
kernel/locking/osq_lock.c
···9393 node->cpu = curr;94949595 /*9696- * ACQUIRE semantics, pairs with corresponding RELEASE9797- * in unlock() uncontended, or fastpath.9696+ * We need both ACQUIRE (pairs with corresponding RELEASE in9797+ * unlock() uncontended, or fastpath) and RELEASE (to publish9898+ * the node fields we just initialised) semantics when updating9999+ * the lock tail.98100 */9999- old = atomic_xchg_acquire(&lock->tail, curr);101101+ old = atomic_xchg(&lock->tail, curr);100102 if (old == OSQ_UNLOCKED_VAL)101103 return true;102104
+1-1
kernel/sched/clock.c
···11/*22 * sched_clock for unstable cpu clocks33 *44- * Copyright (C) 2008 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>44+ * Copyright (C) 2008 Red Hat, Inc., Peter Zijlstra55 *66 * Updates and enhancements:77 * Copyright (C) 2008 Red Hat, Inc. Steven Rostedt <srostedt@redhat.com>
···1717 * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de>1818 *1919 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra2020- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>2020+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra2121 */22222323#include <linux/latencytop.h>
+10-10
kernel/sched/wait.c
···392392 do {393393 prepare_to_wait(wq, &q->wait, mode);394394 if (test_bit(q->key.bit_nr, q->key.flags))395395- ret = (*action)(&q->key);395395+ ret = (*action)(&q->key, mode);396396 } while (test_bit(q->key.bit_nr, q->key.flags) && !ret);397397 finish_wait(wq, &q->wait);398398 return ret;···431431 prepare_to_wait_exclusive(wq, &q->wait, mode);432432 if (!test_bit(q->key.bit_nr, q->key.flags))433433 continue;434434- ret = action(&q->key);434434+ ret = action(&q->key, mode);435435 if (!ret)436436 continue;437437 abort_exclusive_wait(wq, &q->wait, mode, &q->key);···581581}582582EXPORT_SYMBOL(wake_up_atomic_t);583583584584-__sched int bit_wait(struct wait_bit_key *word)584584+__sched int bit_wait(struct wait_bit_key *word, int mode)585585{586586 schedule();587587- if (signal_pending(current))587587+ if (signal_pending_state(mode, current))588588 return -EINTR;589589 return 0;590590}591591EXPORT_SYMBOL(bit_wait);592592593593-__sched int bit_wait_io(struct wait_bit_key *word)593593+__sched int bit_wait_io(struct wait_bit_key *word, int mode)594594{595595 io_schedule();596596- if (signal_pending(current))596596+ if (signal_pending_state(mode, current))597597 return -EINTR;598598 return 0;599599}600600EXPORT_SYMBOL(bit_wait_io);601601602602-__sched int bit_wait_timeout(struct wait_bit_key *word)602602+__sched int bit_wait_timeout(struct wait_bit_key *word, int mode)603603{604604 unsigned long now = READ_ONCE(jiffies);605605 if (time_after_eq(now, word->timeout))606606 return -EAGAIN;607607 schedule_timeout(word->timeout - now);608608- if (signal_pending(current))608608+ if (signal_pending_state(mode, current))609609 return -EINTR;610610 return 0;611611}612612EXPORT_SYMBOL_GPL(bit_wait_timeout);613613614614-__sched int bit_wait_io_timeout(struct wait_bit_key *word)614614+__sched int bit_wait_io_timeout(struct wait_bit_key *word, int mode)615615{616616 unsigned long now = READ_ONCE(jiffies);617617 if (time_after_eq(now, word->timeout))618618 return -EAGAIN;619619 io_schedule_timeout(word->timeout - now);
620620- if (signal_pending(current))620620+ if (signal_pending_state(mode, current))621621 return -EINTR;622622 return 0;623623}
···11/*22 * trace event based perf event profiling/tracing33 *44- * Copyright (C) 2009 Red Hat Inc, Peter Zijlstra <pzijlstr@redhat.com>44+ * Copyright (C) 2009 Red Hat Inc, Peter Zijlstra55 * Copyright (C) 2009-2010 Frederic Weisbecker <fweisbec@gmail.com>66 */77
+1-1
lib/btree.c
···55 *66 * Copyright (c) 2007-2008 Joern Engel <joern@logfs.org>77 * Bits and pieces stolen from Peter Zijlstra's code, which is88- * Copyright 2007, Red Hat Inc. Peter Zijlstra <pzijlstr@redhat.com>88+ * Copyright 2007, Red Hat Inc. Peter Zijlstra99 * GPLv21010 *1111 * see http://programming.kicks-ass.net/kernel-patches/vma_lookup/btree.patch
···11/*22 * Floating proportions33 *44- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>44+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra55 *66 * Description:77 *
+41-28
lib/rhashtable.c
···389389 return false;390390}391391392392-int rhashtable_insert_rehash(struct rhashtable *ht)392392+int rhashtable_insert_rehash(struct rhashtable *ht,393393+ struct bucket_table *tbl)393394{394395 struct bucket_table *old_tbl;395396 struct bucket_table *new_tbl;396396- struct bucket_table *tbl;397397 unsigned int size;398398 int err;399399400400 old_tbl = rht_dereference_rcu(ht->tbl, ht);401401- tbl = rhashtable_last_table(ht, old_tbl);402401403402 size = tbl->size;403403+404404+ err = -EBUSY;404405405406 if (rht_grow_above_75(ht, tbl))406407 size *= 2;407408 /* Do not schedule more than one rehash */408409 else if (old_tbl != tbl)409409- return -EBUSY;410410+ goto fail;411411+412412+ err = -ENOMEM;410413411414 new_tbl = bucket_table_alloc(ht, size, GFP_ATOMIC);412412- if (new_tbl == NULL) {413413- /* Schedule async resize/rehash to try allocation414414- * non-atomic context.415415- */416416- schedule_work(&ht->run_work);417417- return -ENOMEM;418418- }415415+ if (new_tbl == NULL)416416+ goto fail;419417420418 err = rhashtable_rehash_attach(ht, tbl, new_tbl);421419 if (err) {···424426 schedule_work(&ht->run_work);425427426428 return err;429429+430430+fail:431431+ /* Do not fail the insert if someone else did a rehash. */432432+ if (likely(rcu_dereference_raw(tbl->future_tbl)))433433+ return 0;434434+435435+ /* Schedule async rehash to retry allocation in process context. */
436436+ if (err == -ENOMEM)437437+ schedule_work(&ht->run_work);438438+439439+ return err;427440}428441EXPORT_SYMBOL_GPL(rhashtable_insert_rehash);429442430430-int rhashtable_insert_slow(struct rhashtable *ht, const void *key,431431- struct rhash_head *obj,432432- struct bucket_table *tbl)443443+struct bucket_table *rhashtable_insert_slow(struct rhashtable *ht,444444+ const void *key,445445+ struct rhash_head *obj,446446+ struct bucket_table *tbl)433447{434448 struct rhash_head *head;435449 unsigned int hash;···477467exit:478468 spin_unlock(rht_bucket_lock(tbl, hash));479469480480- return err;470470+ if (err == 0)471471+ return NULL;472472+ else if (err == -EAGAIN)473473+ return tbl;474474+ else475475+ return ERR_PTR(err);481476}482477EXPORT_SYMBOL_GPL(rhashtable_insert_slow);483478···518503 if (!iter->walker)519504 return -ENOMEM;520505521521- mutex_lock(&ht->mutex);506506+ spin_lock(&ht->lock);522507 iter->walker->tbl = rht_dereference(ht->tbl, ht);523508 list_add(&iter->walker->list, &iter->walker->tbl->walkers);524524- mutex_unlock(&ht->mutex);509509+ spin_unlock(&ht->lock);525510526511 return 0;527512}···535520 */536521void rhashtable_walk_exit(struct rhashtable_iter *iter)537522{538538- mutex_lock(&iter->ht->mutex);523523+ spin_lock(&iter->ht->lock);539524 if (iter->walker->tbl)540525 list_del(&iter->walker->list);541541- mutex_unlock(&iter->ht->mutex);526526+ spin_unlock(&iter->ht->lock);542527 kfree(iter->walker);543528}544529EXPORT_SYMBOL_GPL(rhashtable_walk_exit);···562547{563548 struct rhashtable *ht = iter->ht;564549565565- mutex_lock(&ht->mutex);566566-567567- if (iter->walker->tbl)568568- list_del(&iter->walker->list);569569-570550 rcu_read_lock();571551572572- mutex_unlock(&ht->mutex);552552+ spin_lock(&ht->lock);553553+ if (iter->walker->tbl)554554+ list_del(&iter->walker->list);555555+ spin_unlock(&ht->lock);573556574557 if (!iter->walker->tbl) {575558 iter->walker->tbl = rht_dereference_rcu(ht->tbl, ht);···736723 if (params->nulls_base && params->nulls_base < (1U << RHT_BASE_SHIFT))
737724 return -EINVAL;738725739739- if (params->nelem_hint)740740- size = rounded_hashtable_size(params);741741-742726 memset(ht, 0, sizeof(*ht));743727 mutex_init(&ht->mutex);744728 spin_lock_init(&ht->lock);···754744 ht->p.insecure_max_entries = ht->p.max_size * 2;755745756746 ht->p.min_size = max(ht->p.min_size, HASH_MIN_SIZE);747747+748748+ if (params->nelem_hint)749749+ size = rounded_hashtable_size(&ht->p);757750758751 /* The maximum (not average) chain length grows with the759752 * size of the hash table, at a rate of (log N)/(log log N).
+16-3
mm/backing-dev.c
···957957 * jiffies for either a BDI to exit congestion of the given @sync queue958958 * or a write to complete.959959 *960960- * In the absence of zone congestion, cond_resched() is called to yield961961- * the processor if necessary but otherwise does not sleep.960960+ * In the absence of zone congestion, a short sleep or a cond_resched is961961+ * performed to yield the processor and to allow other subsystems to make962962+ * a forward progress.962963 *963964 * The return value is 0 if the sleep is for the full timeout. Otherwise,964965 * it is the number of jiffies that were still remaining when the function···979978 */980979 if (atomic_read(&nr_wb_congested[sync]) == 0 ||981980 !test_bit(ZONE_CONGESTED, &zone->flags)) {982982- cond_resched();981981+982982+ /*983983+ * Memory allocation/reclaim might be called from a WQ984984+ * context and the current implementation of the WQ985985+ * concurrency control doesn't recognize that a particular986986+ * WQ is congested if the worker thread is looping without987987+ * ever sleeping. Therefore we have to do a short sleep988988+ * here rather than calling cond_resched().989989+ */990990+ if (current->flags & PF_WQ_WORKER)991991+ schedule_timeout(1);992992+ else993993+ cond_resched();983994984995 /* In case we scheduled, work out time remaining */985996 ret = timeout - (jiffies - start);
+20-7
mm/hugetlb.c
···372372 spin_unlock(&resv->lock);373373374374 trg = kmalloc(sizeof(*trg), GFP_KERNEL);375375- if (!trg)375375+ if (!trg) {376376+ kfree(nrg);376377 return -ENOMEM;378378+ }377379378380 spin_lock(&resv->lock);379381 list_add(&trg->link, &resv->region_cache);···485483retry:486484 spin_lock(&resv->lock);487485 list_for_each_entry_safe(rg, trg, head, link) {488488- if (rg->to <= f)486486+ /*487487+ * Skip regions before the range to be deleted. file_region488488+ * ranges are normally of the form [from, to). However, there489489+ * may be a "placeholder" entry in the map which is of the form490490+ * (from, to) with from == to. Check for placeholder entries491491+ * at the beginning of the range to be deleted.492492+ */493493+ if (rg->to <= f && (rg->to != rg->from || rg->to != f))489494 continue;495495+490496 if (rg->from >= t)491497 break;492498···18961886 page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);18971887 if (!page)18981888 goto out_uncharge_cgroup;18991899-18891889+ if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {18901890+ SetPagePrivate(page);18911891+ h->resv_huge_pages--;18921892+ }19001893 spin_lock(&hugetlb_lock);19011894 list_move(&page->lru, &h->hugepage_activelist);19021895 /* Fall through */···37063693 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))37073694 return VM_FAULT_HWPOISON_LARGE |37083695 VM_FAULT_SET_HINDEX(hstate_index(h));36963696+ } else {36973697+ ptep = huge_pte_alloc(mm, address, huge_page_size(h));36983698+ if (!ptep)36993699+ return VM_FAULT_OOM;37093700 }37103710-37113711- ptep = huge_pte_alloc(mm, address, huge_page_size(h));37123712- if (!ptep)37133713- return VM_FAULT_OOM;3714370137153702 mapping = vma->vm_file->f_mapping;37163703 idx = vma_hugecache_offset(h, vma, address);
+25-24
mm/memcontrol.c
···21282128 */21292129 do {21302130 if (page_counter_read(&memcg->memory) > memcg->high) {21312131- current->memcg_nr_pages_over_high += nr_pages;21312131+ current->memcg_nr_pages_over_high += batch;21322132 set_notify_resume(current);21332133 break;21342134 }···47794779 spin_unlock(&mc.lock);47804780}4781478147824782-static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,47834783- struct cgroup_taskset *tset)47824782+static int mem_cgroup_can_attach(struct cgroup_taskset *tset)47844783{47854785- struct mem_cgroup *memcg = mem_cgroup_from_css(css);47844784+ struct cgroup_subsys_state *css;47854785+ struct mem_cgroup *memcg;47864786 struct mem_cgroup *from;47874787 struct task_struct *leader, *p;47884788 struct mm_struct *mm;47894789 unsigned long move_flags;47904790 int ret = 0;4791479147924792- /*47934793- * We are now commited to this value whatever it is. Changes in this47944794- * tunable will only affect upcoming migrations, not the current one.47954795- * So we need to save it, and keep it going.47964796- */47974797- move_flags = READ_ONCE(memcg->move_charge_at_immigrate);47984798- if (!move_flags)47924792+ /* charge immigration isn't supported on the default hierarchy */47934793+ if (cgroup_subsys_on_dfl(memory_cgrp_subsys))47994794 return 0;4800479548014796 /*···48004805 * multiple.48014806 */48024807 p = NULL;48034803- cgroup_taskset_for_each_leader(leader, tset) {48084808+ cgroup_taskset_for_each_leader(leader, css, tset) {48044809 WARN_ON_ONCE(p);48054810 p = leader;48114811+ memcg = mem_cgroup_from_css(css);48064812 }48074813 if (!p)48144814+ return 0;48154815+48164816+ /*48174817+ * We are now commited to this value whatever it is. Changes in this
48184818+ * tunable will only affect upcoming migrations, not the current one.48194819+ * So we need to save it, and keep it going.48204820+ */48214821+ move_flags = READ_ONCE(memcg->move_charge_at_immigrate);48224822+ if (!move_flags)48084823 return 0;4809482448104825 from = mem_cgroup_from_task(p);···48474842 return ret;48484843}4849484448504850-static void mem_cgroup_cancel_attach(struct cgroup_subsys_state *css,48514851- struct cgroup_taskset *tset)48454845+static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)48524846{48534847 if (mc.to)48544848 mem_cgroup_clear_mc();···49894985 atomic_dec(&mc.from->moving_account);49904986}4991498749924992-static void mem_cgroup_move_task(struct cgroup_subsys_state *css,49934993- struct cgroup_taskset *tset)49884988+static void mem_cgroup_move_task(struct cgroup_taskset *tset)49944989{49954995- struct task_struct *p = cgroup_taskset_first(tset);49904990+ struct cgroup_subsys_state *css;49914991+ struct task_struct *p = cgroup_taskset_first(tset, &css);49964992 struct mm_struct *mm = get_task_mm(p);4997499349984994 if (mm) {···50045000 mem_cgroup_clear_mc();50055001}50065002#else /* !CONFIG_MMU */50075007-static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,50085008- struct cgroup_taskset *tset)50035003+static int mem_cgroup_can_attach(struct cgroup_taskset *tset)50095004{50105005 return 0;50115006}50125012-static void mem_cgroup_cancel_attach(struct cgroup_subsys_state *css,50135013- struct cgroup_taskset *tset)50075007+static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)50145008{50155009}50165016-static void mem_cgroup_move_task(struct cgroup_subsys_state *css,50175017- struct cgroup_taskset *tset)50105010+static void mem_cgroup_move_task(struct cgroup_taskset *tset)50185011{50195012}50205013#endif···55125511 * mem_cgroup_replace_page - migrate a charge to another page55135512 * @oldpage: currently charged page55145513 * @newpage: page to transfer the charge to55155515- * @lrucare: either or both pages might be on the LRU already
55165514 *55175515 * Migrate the charge from @oldpage to @newpage.55185516 *55195517 * Both pages must be locked, @newpage->mapping must be set up.55185518+ * Either or both pages might be on the LRU already.55205519void mem_cgroup_replace_page(struct page *oldpage, struct page *newpage)55225521{
+2
mm/oom_kill.c
···608608 continue;609609 if (unlikely(p->flags & PF_KTHREAD))610610 continue;611611+ if (is_global_init(p))612612+ continue;611613 if (p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)612614 continue;613615
+1-1
mm/page-writeback.c
···22 * mm/page-writeback.c33 *44 * Copyright (C) 2002, Linus Torvalds.55- * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>55+ * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra66 *77 * Contains functions related to writing back dirty pages at the88 * address_space level.
···541541 return last;542542}543543544544+/* type and compressor must be null-terminated */544545static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)545546{546547 struct zswap_pool *pool;···549548 assert_spin_locked(&zswap_pools_lock);550549551550 list_for_each_entry_rcu(pool, &zswap_pools, list) {552552- if (strncmp(pool->tfm_name, compressor, sizeof(pool->tfm_name)))551551+ if (strcmp(pool->tfm_name, compressor))553552 continue;554554- if (strncmp(zpool_get_type(pool->zpool), type,555555- sizeof(zswap_zpool_type)))553553+ if (strcmp(zpool_get_type(pool->zpool), type))556554 continue;557555 /* if we can't get it, it's about to be destroyed */558556 if (!zswap_pool_get(pool))
···836836 u8 *orig_addr;837837 struct batadv_orig_node *orig_node = NULL;838838 int check, hdr_size = sizeof(*unicast_packet);839839+ enum batadv_subtype subtype;839840 bool is4addr;840841841842 unicast_packet = (struct batadv_unicast_packet *)skb->data;···864863 /* packet for me */865864 if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {866865 if (is4addr) {867867- batadv_dat_inc_counter(bat_priv,868868- unicast_4addr_packet->subtype);869869- orig_addr = unicast_4addr_packet->src;870870- orig_node = batadv_orig_hash_find(bat_priv, orig_addr);866866+ subtype = unicast_4addr_packet->subtype;867867+ batadv_dat_inc_counter(bat_priv, subtype);868868+869869+ /* Only payload data should be considered for speedy870870+ * join. For example, DAT also uses unicast 4addr871871+ * types, but those packets should not be considered872872+ * for speedy join, since the clients do not actually873873+ * reside at the sending originator.874874+ */875875+ if (subtype == BATADV_P_DATA) {876876+ orig_addr = unicast_4addr_packet->src;877877+ orig_node = batadv_orig_hash_find(bat_priv,878878+ orig_addr);879879+ }871880 }872881873882 if (batadv_dat_snoop_incoming_arp_request(bat_priv, skb,
+12-4
net/batman-adv/translation-table.c
···6868 unsigned short vid, const char *message,6969 bool roaming);70707171-/* returns 1 if they are the same mac addr */7171+/* returns 1 if they are the same mac addr and vid */7272static int batadv_compare_tt(const struct hlist_node *node, const void *data2)7373{7474 const void *data1 = container_of(node, struct batadv_tt_common_entry,7575 hash_entry);7676+ const struct batadv_tt_common_entry *tt1 = data1;7777+ const struct batadv_tt_common_entry *tt2 = data2;76787777- return batadv_compare_eth(data1, data2);7979+ return (tt1->vid == tt2->vid) && batadv_compare_eth(data1, data2);7880}79818082/**···14291427 }1430142814311429 /* if the client was temporary added before receiving the first14321432- * OGM announcing it, we have to clear the TEMP flag14301430+ * OGM announcing it, we have to clear the TEMP flag. Also,14311431+ * remove the previous temporary orig node and re-add it14321432+ * if required. If the orig entry changed, the new one which14331433+ * is a non-temporary entry is preferred.14331434 */14341434- common->flags &= ~BATADV_TT_CLIENT_TEMP;14351435+ if (common->flags & BATADV_TT_CLIENT_TEMP) {14361436+ batadv_tt_global_del_orig_list(tt_global_entry);14371437+ common->flags &= ~BATADV_TT_CLIENT_TEMP;14381438+ }1435143914361440 /* the change can carry possible "attribute" flags like the14371441 * TT_CLIENT_WIFI, therefore they have to be copied in the
+3
net/bluetooth/sco.c
···526526 if (!addr || addr->sa_family != AF_BLUETOOTH)527527 return -EINVAL;528528529529+ if (addr_len < sizeof(struct sockaddr_sco))530530+ return -EINVAL;531531+529532 lock_sock(sk);530533531534 if (sk->sk_state != BT_OPEN) {
···433433 }434434}435435436436-#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))437437-438436static void sock_disable_timestamp(struct sock *sk, unsigned long flags)439437{440438 if (sk->sk_flags & flags) {···872874873875 if (val & SOF_TIMESTAMPING_OPT_ID &&874876 !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) {875875- if (sk->sk_protocol == IPPROTO_TCP) {877877+ if (sk->sk_protocol == IPPROTO_TCP &&878878+ sk->sk_type == SOCK_STREAM) {876879 if (sk->sk_state != TCP_ESTABLISHED) {877880 ret = -EINVAL;878881 break;···15511552 */15521553 is_charged = sk_filter_charge(newsk, filter);1553155415541554- if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk))) {15551555+ if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {15551556 /* It is still raw copy of parent, so invalidate15561557 * destructor and make plain sk_free() */15571558 newsk->sk_destruct = NULL;
+3
net/decnet/af_decnet.c
···678678{679679 struct sock *sk;680680681681+ if (protocol < 0 || protocol > SK_PROTOCOL_MAX)682682+ return -EINVAL;683683+681684 if (!net_eq(net, &init_net))682685 return -EAFNOSUPPORT;683686
+3
net/ipv4/af_inet.c
···257257 int try_loading_module = 0;258258 int err;259259260260+ if (protocol < 0 || protocol >= IPPROTO_MAX)261261+ return -EINVAL;262262+260263 sock->state = SS_UNCONNECTED;261264262265 /* Look for the requested type/protocol pair. */
+9
net/ipv4/fib_frontend.c
···11551155static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)11561156{11571157 struct net_device *dev = netdev_notifier_info_to_dev(ptr);11581158+ struct netdev_notifier_changeupper_info *info;11581159 struct in_device *in_dev;11591160 struct net *net = dev_net(dev);11601161 unsigned int flags;···11931192 /* fall through */11941193 case NETDEV_CHANGEMTU:11951194 rt_cache_flush(net);11951195+ break;11961196+ case NETDEV_CHANGEUPPER:11971197+ info = ptr;11981198+ /* flush all routes if dev is linked to or unlinked from11991199+ * an L3 master device (e.g., VRF)12001200+ */12011201+ if (info->upper_dev && netif_is_l3_master(info->upper_dev))12021202+ fib_disable_ip(dev, NETDEV_DOWN, true);11961203 break;11971204 }11981205 return NOTIFY_DONE;
···16411641 drv_stop(local);16421642}1643164316441644+static void ieee80211_flush_completed_scan(struct ieee80211_local *local,16451645+ bool aborted)16461646+{16471647+ /* It's possible that we don't handle the scan completion in16481648+ * time during suspend, so if it's still marked as completed16491649+ * here, queue the work and flush it to clean things up.16501650+ * Instead of calling the worker function directly here, we16511651+ * really queue it to avoid potential races with other flows16521652+ * scheduling the same work.16531653+ */16541654+ if (test_bit(SCAN_COMPLETED, &local->scanning)) {16551655+ /* If coming from reconfiguration failure, abort the scan so16561656+ * we don't attempt to continue a partial HW scan - which is16571657+ * possible otherwise if (e.g.) the 2.4 GHz portion was the16581658+ * completed scan, and a 5 GHz portion is still pending.16591659+ */16601660+ if (aborted)16611661+ set_bit(SCAN_ABORTED, &local->scanning);16621662+ ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);16631663+ flush_delayed_work(&local->scan_work);16641664+ }16651665+}16661666+16441667static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)16451668{16461669 struct ieee80211_sub_if_data *sdata;···16821659 local->resuming = false;16831660 local->suspended = false;16841661 local->in_reconfig = false;16621662+16631663+ ieee80211_flush_completed_scan(local, true);1685166416861665 /* scheduled scan clearly can't be running any more, but tell16871666 * cfg80211 and clear local state···17211696 drv_assign_vif_chanctx(local, sdata, ctx);17221697 }17231698 mutex_unlock(&local->chanctx_mtx);16991699+}17001700+17011701+static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)17021702+{17031703+ struct ieee80211_local *local = sdata->local;17041704+ struct sta_info *sta;17051705+17061706+ /* add STAs back */17071707+ mutex_lock(&local->sta_mtx);17081708+ list_for_each_entry(sta, &local->sta_list, list) {17091709+ enum ieee80211_sta_state state;
17101710+17111711+ if (!sta->uploaded || sta->sdata != sdata)17121712+ continue;17131713+17141714+ for (state = IEEE80211_STA_NOTEXIST;17151715+ state < sta->sta_state; state++)17161716+ WARN_ON(drv_sta_state(local, sta->sdata, sta, state,17171717+ state + 1));17181718+ }17191719+ mutex_unlock(&local->sta_mtx);17241720}1725172117261722int ieee80211_reconfig(struct ieee80211_local *local)···18791833 WARN_ON(drv_add_chanctx(local, ctx));18801834 mutex_unlock(&local->chanctx_mtx);1881183518821882- list_for_each_entry(sdata, &local->interfaces, list) {18831883- if (!ieee80211_sdata_running(sdata))18841884- continue;18851885- ieee80211_assign_chanctx(local, sdata);18861886- }18871887-18881836 sdata = rtnl_dereference(local->monitor_sdata);18891837 if (sdata && ieee80211_sdata_running(sdata))18901838 ieee80211_assign_chanctx(local, sdata);18911891- }18921892-18931893- /* add STAs back */18941894- mutex_lock(&local->sta_mtx);18951895- list_for_each_entry(sta, &local->sta_list, list) {18961896- enum ieee80211_sta_state state;18971897-18981898- if (!sta->uploaded)18991899- continue;19001900-19011901- /* AP-mode stations will be added later */19021902- if (sta->sdata->vif.type == NL80211_IFTYPE_AP)19031903- continue;19041904-19051905- for (state = IEEE80211_STA_NOTEXIST;19061906- state < sta->sta_state; state++)19071907- WARN_ON(drv_sta_state(local, sta->sdata, sta, state,19081908- state + 1));19091909- }19101910- mutex_unlock(&local->sta_mtx);19111911-19121912- /* reconfigure tx conf */19131913- if (hw->queues >= IEEE80211_NUM_ACS) {19141914- list_for_each_entry(sdata, &local->interfaces, list) {19151915- if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||19161916- sdata->vif.type == NL80211_IFTYPE_MONITOR ||19171917- !ieee80211_sdata_running(sdata))19181918- continue;19191919-19201920- for (i = 0; i < IEEE80211_NUM_ACS; i++)19211921- drv_conf_tx(local, sdata, i,19221922- &sdata->tx_conf[i]);19231923- }19241839 }1925184019261841 /* reconfigure hardware */
···1895188818961889 if (!ieee80211_sdata_running(sdata))18971890 continue;18911891+18921892+ ieee80211_assign_chanctx(local, sdata);18931893+18941894+ switch (sdata->vif.type) {18951895+ case NL80211_IFTYPE_AP_VLAN:18961896+ case NL80211_IFTYPE_MONITOR:18971897+ break;18981898+ default:18991899+ ieee80211_reconfig_stations(sdata);19001900+ /* fall through */19011901+ case NL80211_IFTYPE_AP: /* AP stations are handled later */19021902+ for (i = 0; i < IEEE80211_NUM_ACS; i++)19031903+ drv_conf_tx(local, sdata, i,19041904+ &sdata->tx_conf[i]);19051905+ break;19061906+ }1898190718991908 /* common change flags for all interface types */19001909 changed = BSS_CHANGED_ERP_CTS_PROT |···20972074 mb();20982075 local->resuming = false;2099207621002100- /* It's possible that we don't handle the scan completion in21012101- * time during suspend, so if it's still marked as completed21022102- * here, queue the work and flush it to clean things up.21032103- * Instead of calling the worker function directly here, we21042104- * really queue it to avoid potential races with other flows21052105- * scheduling the same work.21062106- */21072107- if (test_bit(SCAN_COMPLETED, &local->scanning)) {21082108- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);21092109- flush_delayed_work(&local->scan_work);21102110- }20772077+ ieee80211_flush_completed_scan(local, false);2111207821122079 if (local->open_count && !reconfig_due_to_wowlan)21132080 drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_SUSPEND);
···
 
 	/* Set an expiration time for the cookie.  */
 	cookie->c.expiration = ktime_add(asoc->cookie_life,
-					 ktime_get());
+					 ktime_get_real());
 
 	/* Copy the peer's init packet.  */
 	memcpy(&cookie->c.peer_init[0], init_chunk->chunk_hdr,
···
 	if (sock_flag(ep->base.sk, SOCK_TIMESTAMP))
 		kt = skb_get_ktime(skb);
 	else
-		kt = ktime_get();
+		kt = ktime_get_real();
 
 	if (!asoc && ktime_before(bear_cookie->expiration, kt)) {
 		/*
net/sctp/sm_statefuns.c (+2, -1)
···
 	SCTP_INC_STATS(net, SCTP_MIB_T3_RTX_EXPIREDS);
 
 	if (asoc->overall_error_count >= asoc->max_retrans) {
-		if (asoc->state == SCTP_STATE_SHUTDOWN_PENDING) {
+		if (asoc->peer.zero_window_announced &&
+		    asoc->state == SCTP_STATE_SHUTDOWN_PENDING) {
 			/*
 			 * We are here likely because the receiver had its rwnd
 			 * closed for a while and we have not been able to
net/sctp/socket.c (+6, -6)
···
 
 	/* Now send the (possibly) fragmented message. */
 	list_for_each_entry(chunk, &datamsg->chunks, frag_list) {
-		sctp_chunk_hold(chunk);
-
 		/* Do accounting for the write space.  */
 		sctp_set_owner_w(chunk);
···
 	 * breaks.
 	 */
 	err = sctp_primitive_SEND(net, asoc, datamsg);
+	sctp_datamsg_put(datamsg);
 	/* Did the lower layer accept the chunk? */
-	if (err) {
-		sctp_datamsg_free(datamsg);
+	if (err)
 		goto out_free;
-	}
 
 	pr_debug("%s: we sent primitively\n", __func__);
 
-	sctp_datamsg_put(datamsg);
 	err = msg_len;
 
 	if (unlikely(wait_connect)) {
···
 	newsk->sk_type = sk->sk_type;
 	newsk->sk_bound_dev_if = sk->sk_bound_dev_if;
 	newsk->sk_flags = sk->sk_flags;
+	newsk->sk_tsflags = sk->sk_tsflags;
 	newsk->sk_no_check_tx = sk->sk_no_check_tx;
 	newsk->sk_no_check_rx = sk->sk_no_check_rx;
 	newsk->sk_reuse = sk->sk_reuse;
···
 	newinet->mc_ttl = 1;
 	newinet->mc_index = 0;
 	newinet->mc_list = NULL;
+
+	if (newsk->sk_flags & SK_FLAGS_TIMESTAMP)
+		net_enable_timestamp();
 }
 
 static inline void sctp_copy_descendant(struct sock *sk_to,
net/socket.c (+1)
···
 	msg.msg_name = addr ? (struct sockaddr *)&address : NULL;
 	/* We assume all kernel code knows the size of sockaddr_storage */
 	msg.msg_namelen = 0;
+	msg.msg_iocb = NULL;
 	if (sock->file->f_flags & O_NONBLOCK)
 		flags |= MSG_DONTWAIT;
 	err = sock_recvmsg(sock, &msg, iov_iter_count(&msg.msg_iter), flags);
···
 }
 EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
 
-static int rpc_wait_bit_killable(struct wait_bit_key *key)
+static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode)
 {
-	if (fatal_signal_pending(current))
-		return -ERESTARTSYS;
 	freezable_schedule_unsafe();
+	if (signal_pending_state(mode, current))
+		return -ERESTARTSYS;
 	return 0;
 }
 
···
 		break;
 	default:
 		WARN(1, "invalid initiator %d\n", lr->initiator);
+		kfree(rd);
 		return -EINVAL;
 	}
···
 	/* We always try to get an update for the static regdomain */
 	err = regulatory_hint_core(cfg80211_world_regdom->alpha2);
 	if (err) {
-		if (err == -ENOMEM)
+		if (err == -ENOMEM) {
+			platform_device_unregister(reg_pdev);
 			return err;
+		}
 		/*
 		 * N.B. kobject_uevent_env() can fail mainly for when we're out
 		 * memory which is handled and propagated appropriately above
net/xfrm/xfrm_policy.c (+36, -14)
···
 }
 EXPORT_SYMBOL(xfrm_policy_alloc);
 
+static void xfrm_policy_destroy_rcu(struct rcu_head *head)
+{
+	struct xfrm_policy *policy = container_of(head, struct xfrm_policy, rcu);
+
+	security_xfrm_policy_free(policy->security);
+	kfree(policy);
+}
+
 /* Destroy xfrm_policy: descendant resources must be released to this moment. */
 
 void xfrm_policy_destroy(struct xfrm_policy *policy)
···
 	if (del_timer(&policy->timer) || del_timer(&policy->polq.hold_timer))
 		BUG();
 
-	security_xfrm_policy_free(policy->security);
-	kfree(policy);
+	call_rcu(&policy->rcu, xfrm_policy_destroy_rcu);
 }
 EXPORT_SYMBOL(xfrm_policy_destroy);
···
 	struct xfrm_policy *pol;
 	struct net *net = sock_net(sk);
 
+	rcu_read_lock();
 	read_lock_bh(&net->xfrm.xfrm_policy_lock);
-	if ((pol = sk->sk_policy[dir]) != NULL) {
+	pol = rcu_dereference(sk->sk_policy[dir]);
+	if (pol != NULL) {
 		bool match = xfrm_selector_match(&pol->selector, fl,
 						 sk->sk_family);
 		int err = 0;
···
 	}
 out:
 	read_unlock_bh(&net->xfrm.xfrm_policy_lock);
+	rcu_read_unlock();
 	return pol;
 }
···
 #endif
 
 	write_lock_bh(&net->xfrm.xfrm_policy_lock);
-	old_pol = sk->sk_policy[dir];
-	sk->sk_policy[dir] = pol;
+	old_pol = rcu_dereference_protected(sk->sk_policy[dir],
+				lockdep_is_held(&net->xfrm.xfrm_policy_lock));
 	if (pol) {
 		pol->curlft.add_time = get_seconds();
 		pol->index = xfrm_gen_index(net, XFRM_POLICY_MAX+dir, 0);
 		xfrm_sk_policy_link(pol, dir);
 	}
+	rcu_assign_pointer(sk->sk_policy[dir], pol);
 	if (old_pol) {
 		if (pol)
 			xfrm_policy_requeue(old_pol, pol);
···
 	return newp;
 }
 
-int __xfrm_sk_clone_policy(struct sock *sk)
+int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
 {
-	struct xfrm_policy *p0 = sk->sk_policy[0],
-			   *p1 = sk->sk_policy[1];
+	const struct xfrm_policy *p;
+	struct xfrm_policy *np;
+	int i, ret = 0;
 
-	sk->sk_policy[0] = sk->sk_policy[1] = NULL;
-	if (p0 && (sk->sk_policy[0] = clone_policy(p0, 0)) == NULL)
-		return -ENOMEM;
-	if (p1 && (sk->sk_policy[1] = clone_policy(p1, 1)) == NULL)
-		return -ENOMEM;
-	return 0;
+	rcu_read_lock();
+	for (i = 0; i < 2; i++) {
+		p = rcu_dereference(osk->sk_policy[i]);
+		if (p) {
+			np = clone_policy(p, i);
+			if (unlikely(!np)) {
+				ret = -ENOMEM;
+				break;
+			}
+			rcu_assign_pointer(sk->sk_policy[i], np);
+		}
+	}
+	rcu_read_unlock();
+	return ret;
 }
 
 static int
···
 	xdst = NULL;
 	route = NULL;
 
+	sk = sk_const_to_full_sk(sk);
 	if (sk && sk->sk_policy[XFRM_POLICY_OUT]) {
 		num_pols = 1;
 		pols[0] = xfrm_sk_policy_lookup(sk, XFRM_POLICY_OUT, fl);
···
 	}
 
 	pol = NULL;
+	sk = sk_to_full_sk(sk);
 	if (sk && sk->sk_policy[dir]) {
 		pol = xfrm_sk_policy_lookup(sk, dir, &fl);
 		if (IS_ERR(pol)) {
···
 		}
 	}
 
+	snd_usb_mixer_fu_apply_quirk(state->mixer, cval, unitid, kctl);
+
 	range = (cval->max - cval->min) / cval->res;
 	/*
 	 * Are there devices with volume range more than 255? I use a bit more
···
 #include <sound/control.h>
 #include <sound/hwdep.h>
 #include <sound/info.h>
+#include <sound/tlv.h>
 
 #include "usbaudio.h"
 #include "mixer.h"
···
 		break;
 	default:
 		usb_audio_dbg(mixer->chip, "memory change in unknown unit %d\n", unitid);
+		break;
+	}
+}
+
+static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+					 struct snd_kcontrol *kctl)
+{
+	/* Approximation using 10 ranges based on output measurement on hw v1.2.
+	 * This seems close to the cubic mapping e.g. alsamixer uses. */
+	static const DECLARE_TLV_DB_RANGE(scale,
+		 0,  1, TLV_DB_MINMAX_ITEM(-5300, -4970),
+		 2,  5, TLV_DB_MINMAX_ITEM(-4710, -4160),
+		 6,  7, TLV_DB_MINMAX_ITEM(-3884, -3710),
+		 8, 14, TLV_DB_MINMAX_ITEM(-3443, -2560),
+		15, 16, TLV_DB_MINMAX_ITEM(-2475, -2324),
+		17, 19, TLV_DB_MINMAX_ITEM(-2228, -2031),
+		20, 26, TLV_DB_MINMAX_ITEM(-1910, -1393),
+		27, 31, TLV_DB_MINMAX_ITEM(-1322, -1032),
+		32, 40, TLV_DB_MINMAX_ITEM(-968, -490),
+		41, 50, TLV_DB_MINMAX_ITEM(-441, 0),
+	);
+
+	usb_audio_info(mixer->chip, "applying DragonFly dB scale quirk\n");
+	kctl->tlv.p = scale;
+	kctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_TLV_READ;
+	kctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK;
+}
+
+void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+				  struct usb_mixer_elem_info *cval, int unitid,
+				  struct snd_kcontrol *kctl)
+{
+	switch (mixer->chip->usb_id) {
+	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
+		if (unitid == 7 && cval->min == 0 && cval->max == 50)
+			snd_dragonfly_quirk_db_scale(mixer, kctl);
 		break;
 	}
 }
sound/usb/mixer_quirks.h (+4)
···
 void snd_usb_mixer_rc_memory_change(struct usb_mixer_interface *mixer,
 				    int unitid);
 
+void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+				  struct usb_mixer_elem_info *cval, int unitid,
+				  struct snd_kcontrol *kctl);
+
 #endif /* SND_USB_MIXER_QUIRKS_H */
sound/usb/quirks.c (+1)
···
 	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
 	case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
+	case USB_ID(0x21B4, 0x0081): /* AudioQuest DragonFly */
 		return true;
 	}
 	return false;
tools/virtio/linux/kernel.h (+6)
···
 	(void) (&_min1 == &_min2);		\
 	_min1 < _min2 ? _min1 : _min2; })
 
+/* TODO: empty stubs for now. Broken but enough for virtio_ring.c */
+#define list_add_tail(a, b) do {} while (0)
+#define list_del(a) do {} while (0)
+#define list_for_each_entry(a, b, c) while (0)
+/* end of stubs */
+
 #endif /* KERNEL_H */
tools/virtio/linux/virtio.h (-6)
···
 #include <linux/scatterlist.h>
 #include <linux/kernel.h>
 
-/* TODO: empty stubs for now. Broken but enough for virtio_ring.c */
-#define list_add_tail(a, b) do {} while (0)
-#define list_del(a) do {} while (0)
-#define list_for_each_entry(a, b, c) while (0)
-/* end of stubs */
-
 struct virtio_device {
 	void *dev;
 	u64 features;