···
 Optional properties:
 - ti,hwmods: Name of the hwmods associated to the eDMA CC
 - ti,edma-memcpy-channels: List of channels allocated to be used for memcpy, iow
-                           these channels will be SW triggered channels. The list must
-                           contain 16 bits numbers, see example.
+                           these channels will be SW triggered channels. See example.
 - ti,edma-reserved-slot-ranges: PaRAM slot ranges which should not be used by
                                 the driver, they are allocated to be used by for example the
                                 DSP. See example.
···
 	ti,tptcs = <&edma_tptc0 7>, <&edma_tptc1 7>, <&edma_tptc2 0>;
 
 	/* Channel 20 and 21 is allocated for memcpy */
-	ti,edma-memcpy-channels = /bits/ 16 <20 21>;
-	/* The following PaRAM slots are reserved: 35-45 and 100-110 */
-	ti,edma-reserved-slot-ranges = /bits/ 16 <35 10>,
-				       /bits/ 16 <100 10>;
+	ti,edma-memcpy-channels = <20 21>;
+	/* The following PaRAM slots are reserved: 35-44 and 100-109 */
+	ti,edma-reserved-slot-ranges = <35 10>, <100 10>;
 };
 
 edma_tptc0: tptc@49800000 {
···
 Required subnode-properties:
 	- label: Descriptive name of the key.
 	- linux,code: Keycode to emit.
-	- channel: Channel this key is attached to, mut be 0 or 1.
+	- channel: Channel this key is attached to, must be 0 or 1.
 	- voltage: Voltage in µV at lradc input when this key is pressed.
 
 Example:
···
 as RedBoot.
 
 The partition table should be a subnode of the mtd node and should be named
-'partitions'. Partitions are defined in subnodes of the partitions node.
+'partitions'. This node should have the following property:
+- compatible : (required) must be "fixed-partitions"
+Partitions are then defined in subnodes of the partitions node.
 
 For backwards compatibility partitions as direct subnodes of the mtd device are
 supported. This use is discouraged.
···
 
 flash@0 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <1>;
 		#size-cells = <1>;
···
 
 flash@1 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <1>;
 		#size-cells = <2>;
···
 
 flash@2 {
 	partitions {
+		compatible = "fixed-partitions";
 		#address-cells = <2>;
 		#size-cells = <2>;
-14
Documentation/networking/e100.txt
···
 If an issue is identified with the released source code on the supported
 kernel with a supported adapter, email the specific information related to the
 issue to e1000-devel@lists.sourceforge.net.
-
-
-License
-=======
-
-This software program is released under the terms of a license agreement
-between you ('Licensee') and Intel. Do not use or load this software or any
-associated materials (collectively, the 'Software') until you have carefully
-read the full terms and conditions of the file COPYING located in this software
-package. By loading or using the Software, you agree to the terms of this
-Agreement. If you do not agree with the terms of this Agreement, do not install
-or use the Software.
-
-* Other names and brands may be claimed as the property of others.
+16-1
MAINTAINERS
···
 R: Shannon Nelson <shannon.nelson@intel.com>
 R: Carolyn Wyborny <carolyn.wyborny@intel.com>
 R: Don Skidmore <donald.c.skidmore@intel.com>
-R: Matthew Vick <matthew.vick@intel.com>
+R: Bruce Allan <bruce.w.allan@intel.com>
 R: John Ronciak <john.ronciak@intel.com>
 R: Mitch Williams <mitch.a.williams@intel.com>
 L: intel-wired-lan@lists.osuosl.org
···
 S: Maintained
 F: drivers/pinctrl/samsung/
 
+PIN CONTROLLER - SINGLE
+M: Tony Lindgren <tony@atomide.com>
+M: Haojian Zhuang <haojian.zhuang@linaro.org>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-omap@vger.kernel.org
+S: Maintained
+F: drivers/pinctrl/pinctrl-single.c
+
 PIN CONTROLLER - ST SPEAR
 M: Viresh Kumar <vireshk@kernel.org>
 L: spear-devel@list.st.com
···
 F: drivers/rpmsg/
 F: Documentation/rpmsg.txt
 F: include/linux/rpmsg.h
+
+RENESAS ETHERNET DRIVERS
+R: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
+L: netdev@vger.kernel.org
+L: linux-sh@vger.kernel.org
+F: drivers/net/ethernet/renesas/
+F: include/linux/sh_eth.h
 
 RESET CONTROLLER FRAMEWORK
 M: Philipp Zabel <p.zabel@pengutronix.de>
···
	  However some customers have peripherals mapped at this addr, so
	  Linux needs to be scooted a bit.
	  If you don't know what the above means, leave this setting alone.
+	  This needs to match memory start address specified in Device Tree
 
 config HIGHMEM
 	bool "High Memory Support"
···
 
 	memory {
 		device_type = "memory";
-		reg = <0x0 0x80000000 0x0 0x40000000	/* 1 GB low mem */
+		/* CONFIG_LINUX_LINK_BASE needs to match low mem start */
+		reg = <0x0 0x80000000 0x0 0x20000000	/* 512 MB low mem */
 		       0x1 0x00000000 0x0 0x40000000>;	/* 1 GB highmem */
 	};
 
+2-2
arch/arc/include/asm/mach_desc.h
···
  * @dt_compat:		Array of device tree 'compatible' strings
  *			(XXX: although only 1st entry is looked at)
  * @init_early:		Very early callback [called from setup_arch()]
- * @init_cpu_smp:	for each CPU as it is coming up (SMP as well as UP)
+ * @init_per_cpu:	for each CPU as it is coming up (SMP as well as UP)
  *			[(M):init_IRQ(), (o):start_kernel_secondary()]
  * @init_machine:	arch initcall level callback (e.g. populate static
  *			platform devices or parse Devicetree)
···
 	const char	**dt_compat;
 	void		(*init_early)(void);
 #ifdef CONFIG_SMP
-	void		(*init_cpu_smp)(unsigned int);
+	void		(*init_per_cpu)(unsigned int);
 #endif
 	void		(*init_machine)(void);
 	void		(*init_late)(void);
+2-2
arch/arc/include/asm/smp.h
···
  * @init_early_smp:	A SMP specific h/w block can init itself
  *			Could be common across platforms so not covered by
  *			mach_desc->init_early()
- * @init_irq_cpu:	Called for each core so SMP h/w block driver can do
+ * @init_per_cpu:	Called for each core so SMP h/w block driver can do
  *			any needed setup per cpu (e.g. IPI request)
  * @cpu_kick:		For Master to kickstart a cpu (optionally at a PC)
  * @ipi_send:		To send IPI to a @cpu
···
 struct plat_smp_ops {
 	const char	*info;
 	void		(*init_early_smp)(void);
-	void		(*init_irq_cpu)(int cpu);
+	void		(*init_per_cpu)(int cpu);
 	void		(*cpu_kick)(int cpu, unsigned long pc);
 	void		(*ipi_send)(int cpu);
 	void		(*ipi_clear)(int irq);
···
 static int arcv2_irq_map(struct irq_domain *d, unsigned int irq,
 			 irq_hw_number_t hw)
 {
-	if (irq == TIMER0_IRQ || irq == IPI_IRQ)
+	/*
+	 * core intc IRQs [16, 23]:
+	 * Statically assigned always private-per-core (Timers, WDT, IPI, PCT)
+	 */
+	if (hw < 24) {
+		/*
+		 * A subsequent request_percpu_irq() fails if percpu_devid is
+		 * not set. That in turn sets NOAUTOEN, meaning each core needs
+		 * to call enable_percpu_irq()
+		 */
+		irq_set_percpu_devid(irq);
 		irq_set_chip_and_handler(irq, &arcv2_irq_chip, handle_percpu_irq);
-	else
+	} else {
 		irq_set_chip_and_handler(irq, &arcv2_irq_chip, handle_level_irq);
+	}
 
 	return 0;
 }
+24-9
arch/arc/kernel/irq.c
···
 
 #ifdef CONFIG_SMP
 	/* a SMP H/w block could do IPI IRQ request here */
-	if (plat_smp_ops.init_irq_cpu)
-		plat_smp_ops.init_irq_cpu(smp_processor_id());
+	if (plat_smp_ops.init_per_cpu)
+		plat_smp_ops.init_per_cpu(smp_processor_id());
 
-	if (machine_desc->init_cpu_smp)
-		machine_desc->init_cpu_smp(smp_processor_id());
+	if (machine_desc->init_per_cpu)
+		machine_desc->init_per_cpu(smp_processor_id());
 #endif
 }
 
···
 	set_irq_regs(old_regs);
 }
 
+/*
+ * API called for requesting percpu interrupts - called by each CPU
+ *  - For boot CPU, actually request the IRQ with genirq core + enables
+ *  - For subsequent callers only enable called locally
+ *
+ * Relies on being called by boot cpu first (i.e. request called ahead of
+ * any enable) as expected by genirq. Hence suitable only for TIMER, IPI
+ * which are guaranteed to be setup on boot core first.
+ * Late probed peripherals such as perf can't use this as there is no guarantee
+ * of being called on boot CPU first.
+ */
+
 void arc_request_percpu_irq(int irq, int cpu,
 			    irqreturn_t (*isr)(int irq, void *dev),
 			    const char *irq_nm,
···
 	if (!cpu) {
 		int rc;
 
+#ifdef CONFIG_ISA_ARCOMPACT
 		/*
-		 * These 2 calls are essential to making percpu IRQ APIs work
-		 * Ideally these details could be hidden in irq chip map function
-		 * but the issue is IPIs IRQs being static (non-DT) and platform
-		 * specific, so we can't identify them there.
+		 * A subsequent request_percpu_irq() fails if percpu_devid is
+		 * not set. That in turn sets NOAUTOEN, meaning each core needs
+		 * to call enable_percpu_irq()
+		 *
+		 * For ARCv2, this is done in irq map function since we know
+		 * which irqs are strictly per cpu
 		 */
 		irq_set_percpu_devid(irq);
-		irq_modify_status(irq, IRQ_NOAUTOEN, 0);	/* @irq, @clr, @set */
+#endif
 
 		rc = request_percpu_irq(irq, isr, irq_nm, percpu_dev);
 		if (rc)
···
 
 #endif	/* CONFIG_ISA_ARCV2 */
 
-void arc_cpu_pmu_irq_init(void)
+static void arc_cpu_pmu_irq_init(void *data)
 {
-	struct arc_pmu_cpu *pmu_cpu = this_cpu_ptr(&arc_pmu_cpu);
+	int irq = *(int *)data;
 
-	arc_request_percpu_irq(arc_pmu->irq, smp_processor_id(), arc_pmu_intr,
-			       "ARC perf counters", pmu_cpu);
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);
 
 	/* Clear all pending interrupt flags */
 	write_aux_reg(ARC_REG_PCT_INT_ACT, 0xffffffff);
···
 
 	if (has_interrupts) {
 		int irq = platform_get_irq(pdev, 0);
-		unsigned long flags;
 
 		if (irq < 0) {
 			pr_err("Cannot get IRQ number for the platform\n");
···
 
 		arc_pmu->irq = irq;
 
-		/*
-		 * arc_cpu_pmu_irq_init() needs to be called on all cores for
-		 * their respective local PMU.
-		 * However we use opencoded on_each_cpu() to ensure it is called
-		 * on core0 first, so that arc_request_percpu_irq() sets up
-		 * AUTOEN etc. Otherwise enable_percpu_irq() fails to enable
-		 * perf IRQ on non master cores.
-		 * see arc_request_percpu_irq()
-		 */
-		preempt_disable();
-		local_irq_save(flags);
-		arc_cpu_pmu_irq_init();
-		local_irq_restore(flags);
-		smp_call_function((smp_call_func_t)arc_cpu_pmu_irq_init, 0, 1);
-		preempt_enable();
+		/* intc map function ensures irq_set_percpu_devid() called */
+		request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
+				   this_cpu_ptr(&arc_pmu_cpu));
 
-		/* Clean all pending interrupt flags */
-		write_aux_reg(ARC_REG_PCT_INT_ACT, 0xffffffff);
+		on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
+
 	} else
 		arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
 
···
 	pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
 
 	/* Some SMP H/w setup - for each cpu */
-	if (plat_smp_ops.init_irq_cpu)
-		plat_smp_ops.init_irq_cpu(cpu);
+	if (plat_smp_ops.init_per_cpu)
+		plat_smp_ops.init_per_cpu(cpu);
 
-	if (machine_desc->init_cpu_smp)
-		machine_desc->init_cpu_smp(cpu);
+	if (machine_desc->init_per_cpu)
+		machine_desc->init_per_cpu(cpu);
 
 	arc_local_timer_setup();
 
+35-18
arch/arc/kernel/unwind.c
···
 
 static unsigned long read_pointer(const u8 **pLoc,
 				  const void *end, signed ptrType);
+static void init_unwind_hdr(struct unwind_table *table,
+			    void *(*alloc) (unsigned long));
+
+/*
+ * wrappers for header alloc (vs. calling one vs. other at call site)
+ * to elide section mismatches warnings
+ */
+static void *__init unw_hdr_alloc_early(unsigned long sz)
+{
+	return __alloc_bootmem_nopanic(sz, sizeof(unsigned int),
+				       MAX_DMA_ADDRESS);
+}
+
+static void *unw_hdr_alloc(unsigned long sz)
+{
+	return kmalloc(sz, GFP_KERNEL);
+}
 
 static void init_unwind_table(struct unwind_table *table, const char *name,
 			      const void *core_start, unsigned long core_size,
···
 			  __start_unwind, __end_unwind - __start_unwind,
 			  NULL, 0);
 	/*__start_unwind_hdr, __end_unwind_hdr - __start_unwind_hdr);*/
+
+	init_unwind_hdr(&root_table, unw_hdr_alloc_early);
 }
 
 static const u32 bad_cie, not_fde;
···
 	e2->fde = v;
 }
 
-static void __init setup_unwind_table(struct unwind_table *table,
-				      void *(*alloc) (unsigned long))
+static void init_unwind_hdr(struct unwind_table *table,
+			    void *(*alloc) (unsigned long))
 {
 	const u8 *ptr;
 	unsigned long tableSize = table->size, hdrSize;
···
 		const u32 *cie = cie_for_fde(fde, table);
 		signed ptrType;
 
-		if (cie == &not_fde)
+		if (cie == &not_fde)	/* only process FDE here */
 			continue;
 		if (cie == NULL || cie == &bad_cie)
-			return;
+			continue;	/* say FDE->CIE.version != 1 */
 		ptrType = fde_pointer_type(cie);
 		if (ptrType < 0)
-			return;
+			continue;
 
 		ptr = (const u8 *)(fde + 2);
 		if (!read_pointer(&ptr, (const u8 *)(fde + 1) + *fde,
···
 
 	hdrSize = 4 + sizeof(unsigned long) + sizeof(unsigned int)
 		+ 2 * n * sizeof(unsigned long);
+
 	header = alloc(hdrSize);
 	if (!header)
 		return;
+
 	header->version = 1;
 	header->eh_frame_ptr_enc = DW_EH_PE_abs | DW_EH_PE_native;
 	header->fde_count_enc = DW_EH_PE_abs | DW_EH_PE_data4;
···
 
 		if (fde[1] == 0xffffffff)
 			continue;	/* this is a CIE */
+
+		if (*(u8 *)(cie + 2) != 1)
+			continue;	/* FDE->CIE.version not supported */
+
 		ptr = (const u8 *)(fde + 2);
 		header->table[n].start = read_pointer(&ptr,
 						      (const u8 *)(fde + 1) +
···
 	table->hdrsz = hdrSize;
 	smp_wmb();
 	table->header = (const void *)header;
-}
-
-static void *__init balloc(unsigned long sz)
-{
-	return __alloc_bootmem_nopanic(sz,
-				       sizeof(unsigned int),
-				       __pa(MAX_DMA_ADDRESS));
-}
-
-void __init arc_unwind_setup(void)
-{
-	setup_unwind_table(&root_table, balloc);
 }
 
 #ifdef CONFIG_MODULES
···
 				  module->module_init, module->init_size,
 				  table_start, table_size,
 				  NULL, 0);
+
+	init_unwind_hdr(table, unw_hdr_alloc);
 
 #ifdef UNWIND_DEBUG
 	unw_debug("Table added for [%s] %lx %lx\n",
···
 	info.init_only = init_only;
 
 	unlink_table(&info); /* XXX: SMP */
+	kfree(table->header);
 	kfree(table);
 }
···
 
 	if (*cie <= sizeof(*cie) + 4 || *cie >= fde[1] - sizeof(*fde)
 	    || (*cie & (sizeof(*cie) - 1))
-	    || (cie[1] != 0xffffffff))
+	    || (cie[1] != 0xffffffff)
+	    || ( *(u8 *)(cie + 2) != 1))	/* version 1 supported */
 		return NULL;	/* this is not a (valid) CIE */
 	return cie;
 }
+3-1
arch/arc/mm/init.c
···
 	int in_use = 0;
 
 	if (!low_mem_sz) {
-		BUG_ON(base != low_mem_start);
+		if (base != low_mem_start)
+			panic("CONFIG_LINUX_LINK_BASE != DT memory { }");
+
 		low_mem_sz = size;
 		in_use = 1;
 	} else {
+4
arch/arm/include/asm/uaccess.h
···
 static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+#ifndef CONFIG_UACCESS_WITH_MEMCPY
 	unsigned int __ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
+#else
+	return arm_copy_to_user(to, from, n);
+#endif
 }
 
 extern unsigned long __must_check
+18-15
arch/arm/kernel/process.c
···
 {
 	unsigned long flags;
 	char buf[64];
+#ifndef CONFIG_CPU_V7M
+	unsigned int domain;
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+	/*
+	 * Get the domain register for the parent context. In user
+	 * mode, we don't save the DACR, so lets use what it should
+	 * be. For other modes, we place it after the pt_regs struct.
+	 */
+	if (user_mode(regs))
+		domain = DACR_UACCESS_ENABLE;
+	else
+		domain = *(unsigned int *)(regs + 1);
+#else
+	domain = get_domain();
+#endif
+#endif
 
 	show_regs_print_info(KERN_DEFAULT);
···
 
 #ifndef CONFIG_CPU_V7M
 	{
-		unsigned int domain = get_domain();
 		const char *segment;
-
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
-		/*
-		 * Get the domain register for the parent context. In user
-		 * mode, we don't save the DACR, so lets use what it should
-		 * be. For other modes, we place it after the pt_regs struct.
-		 */
-		if (user_mode(regs))
-			domain = DACR_UACCESS_ENABLE;
-		else
-			domain = *(unsigned int *)(regs + 1);
-#endif
 
 		if ((domain & domain_mask(DOMAIN_USER)) ==
 		    domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
···
 	buf[0] = '\0';
 #ifdef CONFIG_CPU_CP15_MMU
 	{
-		unsigned int transbase, dac = get_domain();
+		unsigned int transbase;
 		asm("mrc p15, 0, %0, c2, c0\n\t"
 		    : "=r" (transbase));
 		snprintf(buf, sizeof(buf), "  Table: %08x  DAC: %08x",
-			 transbase, dac);
+			 transbase, domain);
 	}
 #endif
 	asm("mrc p15, 0, %0, c1, c0\n" : "=r" (ctrl));
···
 static unsigned long noinline
 __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long ua_flags;
 	int atomic;
 
 	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
···
 		if (tocopy > n)
 			tocopy = n;
 
+		ua_flags = uaccess_save_and_enable();
 		memcpy((void *)to, from, tocopy);
+		uaccess_restore(ua_flags);
 		to += tocopy;
 		from += tocopy;
 		n -= tocopy;
···
 	 * With frame pointer disabled, tail call optimization kicks in
 	 * as well making this test almost invisible.
 	 */
-	if (n < 64)
-		return __copy_to_user_std(to, from, n);
-	return __copy_to_user_memcpy(to, from, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __copy_to_user_std(to, from, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __copy_to_user_memcpy(to, from, n);
+	}
+	return n;
 }
 
 static unsigned long noinline
 __clear_user_memset(void __user *addr, unsigned long n)
 {
+	unsigned long ua_flags;
+
 	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
 		memset((void *)addr, 0, n);
 		return 0;
···
 		if (tocopy > n)
 			tocopy = n;
 
+		ua_flags = uaccess_save_and_enable();
 		memset((void *)addr, 0, tocopy);
+		uaccess_restore(ua_flags);
 		addr += tocopy;
 		n -= tocopy;
 
···
 unsigned long arm_clear_user(void __user *addr, unsigned long n)
 {
 	/* See rational for this in __copy_to_user() above. */
-	if (n < 64)
-		return __clear_user_std(addr, n);
-	return __clear_user_memset(addr, n);
+	if (n < 64) {
+		unsigned long ua_flags = uaccess_save_and_enable();
+		n = __clear_user_std(addr, n);
+		uaccess_restore(ua_flags);
+	} else {
+		n = __clear_user_memset(addr, n);
+	}
+	return n;
 }
 
 #if 0
+26-12
arch/arm/mm/context.c
···
 	__flush_icache_all();
 }
 
-static int is_reserved_asid(u64 asid)
+static bool check_update_reserved_asid(u64 asid, u64 newasid)
 {
 	int cpu;
-	for_each_possible_cpu(cpu)
-		if (per_cpu(reserved_asids, cpu) == asid)
-			return 1;
-	return 0;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(reserved_asids, cpu) == asid) {
+			hit = true;
+			per_cpu(reserved_asids, cpu) = newasid;
+		}
+	}
+
+	return hit;
 }
 
 static u64 new_context(struct mm_struct *mm, unsigned int cpu)
···
 	u64 generation = atomic64_read(&asid_generation);
 
 	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK);
+
 		/*
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
-		if (is_reserved_asid(asid))
-			return generation | (asid & ~ASID_MASK);
+		if (check_update_reserved_asid(asid, newasid))
+			return newasid;
 
 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
···
 		 */
 		asid &= ~ASID_MASK;
 		if (!__test_and_set_bit(asid, asid_map))
-			goto bump_gen;
+			return newasid;
 	}
 
 	/*
···
 
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-
-bump_gen:
-	asid |= generation;
 	cpumask_clear(mm_cpumask(mm));
-	return asid;
+	return asid | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
+1-1
arch/arm/mm/dma-mapping.c
···
 		return -ENOMEM;
 
 	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
-		phys_addr_t phys = sg_phys(s) & PAGE_MASK;
+		phys_addr_t phys = page_to_phys(sg_page(s));
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);
 
 		if (!is_coherent &&
+62-30
arch/arm/mm/init.c
···
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
+#include <linux/stop_machine.h>
 
 #include <asm/cp15.h>
 #include <asm/mach-types.h>
···
  * safe to be called with preemption disabled, as under stop_machine().
  */
 static inline void section_update(unsigned long addr, pmdval_t mask,
-				  pmdval_t prot)
+				  pmdval_t prot, struct mm_struct *mm)
 {
-	struct mm_struct *mm;
 	pmd_t *pmd;
 
-	mm = current->active_mm;
 	pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr);
 
 #ifdef CONFIG_ARM_LPAE
···
 	return !!(get_cr() & CR_XP);
 }
 
-#define set_section_perms(perms, field)	{				\
-	size_t i;							\
-	unsigned long addr;						\
-									\
-	if (!arch_has_strict_perms())					\
-		return;							\
-									\
-	for (i = 0; i < ARRAY_SIZE(perms); i++) {			\
-		if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) ||	\
-		    !IS_ALIGNED(perms[i].end, SECTION_SIZE)) {		\
-			pr_err("BUG: section %lx-%lx not aligned to %lx\n", \
-				perms[i].start, perms[i].end,		\
-				SECTION_SIZE);				\
-			continue;					\
-		}							\
-									\
-		for (addr = perms[i].start;				\
-		     addr < perms[i].end;				\
-		     addr += SECTION_SIZE)				\
-			section_update(addr, perms[i].mask,		\
-				       perms[i].field);			\
-	}								\
+void set_section_perms(struct section_perm *perms, int n, bool set,
+		       struct mm_struct *mm)
+{
+	size_t i;
+	unsigned long addr;
+
+	if (!arch_has_strict_perms())
+		return;
+
+	for (i = 0; i < n; i++) {
+		if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) ||
+		    !IS_ALIGNED(perms[i].end, SECTION_SIZE)) {
+			pr_err("BUG: section %lx-%lx not aligned to %lx\n",
+			       perms[i].start, perms[i].end,
+			       SECTION_SIZE);
+			continue;
+		}
+
+		for (addr = perms[i].start;
+		     addr < perms[i].end;
+		     addr += SECTION_SIZE)
+			section_update(addr, perms[i].mask,
+				       set ? perms[i].prot : perms[i].clear, mm);
+	}
+
 }
 
-static inline void fix_kernmem_perms(void)
+static void update_sections_early(struct section_perm perms[], int n)
 {
-	set_section_perms(nx_perms, prot);
+	struct task_struct *t, *s;
+
+	read_lock(&tasklist_lock);
+	for_each_process(t) {
+		if (t->flags & PF_KTHREAD)
+			continue;
+		for_each_thread(t, s)
+			set_section_perms(perms, n, true, s->mm);
+	}
+	read_unlock(&tasklist_lock);
+	set_section_perms(perms, n, true, current->active_mm);
+	set_section_perms(perms, n, true, &init_mm);
+}
+
+int __fix_kernmem_perms(void *unused)
+{
+	update_sections_early(nx_perms, ARRAY_SIZE(nx_perms));
+	return 0;
+}
+
+void fix_kernmem_perms(void)
+{
+	stop_machine(__fix_kernmem_perms, NULL, NULL);
 }
 
 #ifdef CONFIG_DEBUG_RODATA
+int __mark_rodata_ro(void *unused)
+{
+	update_sections_early(ro_perms, ARRAY_SIZE(ro_perms));
+	return 0;
+}
+
 void mark_rodata_ro(void)
 {
-	set_section_perms(ro_perms, prot);
+	stop_machine(__mark_rodata_ro, NULL, NULL);
 }
 
 void set_kernel_text_rw(void)
 {
-	set_section_perms(ro_perms, clear);
+	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
+			  current->active_mm);
 }
 
 void set_kernel_text_ro(void)
 {
-	set_section_perms(ro_perms, prot);
+	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true,
+			  current->active_mm);
 }
 #endif /* CONFIG_DEBUG_RODATA */
 
···
 
 
 
-#define NR_syscalls			322 /* length of syscall table */
+#define NR_syscalls			323 /* length of syscall table */
 
 /*
  * The following defines stop scripts/checksyscalls.sh from complaining about
···
 	set_bit(d->hwirq, &opal_event_irqchip.mask);
 
 	opal_poll_events(&events);
-	opal_handle_events(be64_to_cpu(events));
+	last_outstanding_events = be64_to_cpu(events);
+
+	/*
+	 * We can't just handle the events now with opal_handle_events().
+	 * If we did we would deadlock when opal_event_unmask() is called from
+	 * handle_level_irq() with the irq descriptor lock held, because
+	 * calling opal_handle_events() would call generic_handle_irq() and
+	 * then handle_level_irq() which would try to take the descriptor lock
+	 * again. Instead queue the events for later.
+	 */
+	if (last_outstanding_events & opal_event_irqchip.mask)
+		/* Need to retrigger the interrupt */
+		irq_work_queue(&opal_event_irq_work);
 }
 
 static int opal_event_set_type(struct irq_data *d, unsigned int flow_type)
···
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;
 
-	/* Optimization - we can use the HVM one but it has no idea which
-	 * VCPUs are descheduled - which means that it will needlessly IPI
-	 * them. Xen knows so let it do the job.
-	 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return;
-	}
+
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
+10-10
arch/x86/xen/suspend.c
···
 
 void xen_arch_pre_suspend(void)
 {
-	int cpu;
-
-	for_each_online_cpu(cpu)
-		xen_pmu_finish(cpu);
-
 	if (xen_pv_domain())
 		xen_pv_pre_suspend();
 }
 
 void xen_arch_post_suspend(int cancelled)
 {
-	int cpu;
-
 	if (xen_pv_domain())
 		xen_pv_post_suspend(cancelled);
 	else
 		xen_hvm_post_suspend(cancelled);
-
-	for_each_online_cpu(cpu)
-		xen_pmu_init(cpu);
 }
 
 static void xen_vcpu_notify_restore(void *data)
···
 
 void xen_arch_resume(void)
 {
+	int cpu;
+
 	on_each_cpu(xen_vcpu_notify_restore, NULL, 1);
+
+	for_each_online_cpu(cpu)
+		xen_pmu_init(cpu);
 }
 
 void xen_arch_suspend(void)
 {
+	int cpu;
+
+	for_each_online_cpu(cpu)
+		xen_pmu_finish(cpu);
+
 	on_each_cpu(xen_vcpu_notify_suspend, NULL, 1);
 }
+1-1
crypto/ablkcipher.c
···
 	if (WARN_ON_ONCE(in_irq()))
 		return -EDEADLK;
 
+	walk->iv = req->info;
 	walk->nbytes = walk->total;
 	if (unlikely(!walk->total))
 		return 0;
 
 	walk->iv_buffer = NULL;
-	walk->iv = req->info;
 	if (unlikely(((unsigned long)walk->iv & alignmask))) {
 		int err = ablkcipher_copy_iv(walk, tfm, alignmask);
 
+1-1
crypto/blkcipher.c
···
 	if (WARN_ON_ONCE(in_irq()))
 		return -EDEADLK;
 
+	walk->iv = desc->info;
 	walk->nbytes = walk->total;
 	if (unlikely(!walk->total))
 		return 0;
 
 	walk->buffer = NULL;
-	walk->iv = desc->info;
 	if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
 		int err = blkcipher_copy_iv(walk);
 		if (err)
+1-1
drivers/acpi/nfit.c
···
 	if (!dev->driver) {
 		/* dev->driver may be null if we're being removed */
 		dev_dbg(dev, "%s: no driver found for dev\n", __func__);
-		return;
+		goto out_unlock;
 	}
 
 	if (!acpi_desc) {
+22-11
drivers/base/power/domain.c
···
 	struct generic_pm_domain *genpd;
 	bool (*stop_ok)(struct device *__dev);
 	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+	bool runtime_pm = pm_runtime_enabled(dev);
 	ktime_t time_start;
 	s64 elapsed_ns;
 	int ret;
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
+	/*
+	 * A runtime PM centric subsystem/driver may re-use the runtime PM
+	 * callbacks for other purposes than runtime PM. In those scenarios
+	 * runtime PM is disabled. Under these circumstances, we shall skip
+	 * validating/measuring the PM QoS latency.
+	 */
 	stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
-	if (stop_ok && !stop_ok(dev))
+	if (runtime_pm && stop_ok && !stop_ok(dev))
 		return -EBUSY;
 
 	/* Measure suspend latency. */
-	time_start = ktime_get();
+	if (runtime_pm)
+		time_start = ktime_get();
 
 	ret = genpd_save_dev(genpd, dev);
 	if (ret)
···
 	}
 
 	/* Update suspend latency value if the measured time exceeds it. */
-	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
-	if (elapsed_ns > td->suspend_latency_ns) {
-		td->suspend_latency_ns = elapsed_ns;
-		dev_dbg(dev, "suspend latency exceeded, %lld ns\n",
-			elapsed_ns);
-		genpd->max_off_time_changed = true;
-		td->constraint_changed = true;
+	if (runtime_pm) {
+		elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
+		if (elapsed_ns > td->suspend_latency_ns) {
+			td->suspend_latency_ns = elapsed_ns;
+			dev_dbg(dev, "suspend latency exceeded, %lld ns\n",
+				elapsed_ns);
+			genpd->max_off_time_changed = true;
+			td->constraint_changed = true;
+		}
 	}
 
 	/*
···
 {
 	struct generic_pm_domain *genpd;
 	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+	bool runtime_pm = pm_runtime_enabled(dev);
 	ktime_t time_start;
 	s64 elapsed_ns;
 	int ret;
···
 
  out:
 	/* Measure resume latency. */
-	if (timed)
+	if (timed && runtime_pm)
 		time_start = ktime_get();
 
 	genpd_start_dev(genpd, dev);
 	genpd_restore_dev(genpd, dev);
 
 	/* Update resume latency value if the measured time exceeds it. */
-	if (timed) {
+	if (timed && runtime_pm) {
 		elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
 		if (elapsed_ns > td->resume_latency_ns) {
 			td->resume_latency_ns = elapsed_ns;
···
 					struct blkif_x86_32_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
···
 					struct blkif_x86_64_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
+1-1
drivers/cpufreq/Kconfig.arm
···
 
 config ARM_TEGRA124_CPUFREQ
 	tristate "Tegra124 CPUFreq support"
-	depends on ARCH_TEGRA && CPUFREQ_DT
+	depends on ARCH_TEGRA && CPUFREQ_DT && REGULATOR
 	default y
 	help
 	  This adds the CPUFreq driver support for Tegra124 SOCs.
···
 	unsigned long pinmask = bgc->pin2mask(bgc, gpio);
 
 	if (bgc->dir & pinmask)
-		return bgc->read_reg(bgc->reg_set) & pinmask;
+		return !!(bgc->read_reg(bgc->reg_set) & pinmask);
 	else
-		return bgc->read_reg(bgc->reg_dat) & pinmask;
+		return !!(bgc->read_reg(bgc->reg_dat) & pinmask);
 }
 
 static int bgpio_get(struct gpio_chip *gc, unsigned int gpio)
+7-1
drivers/gpio/gpiolib.c
···
 	chip = desc->chip;
 	offset = gpio_chip_hwgpio(desc);
 	value = chip->get ? chip->get(chip, offset) : -EIO;
-	value = value < 0 ? value : !!value;
+	/*
+	 * FIXME: fix all drivers to clamp to [0,1] or return negative,
+	 * then change this to:
+	 * value = value < 0 ? value : !!value;
+	 * so we can properly propagate error codes.
+	 */
+	value = !!value;
 	trace_gpio_value(desc_to_gpio(desc), 1, value);
 	return value;
 }
···
 	/* 2b: Program RC6 thresholds.*/
 
 	/* WaRsDoubleRc6WrlWithCoarsePowerGating: Doubling WRL only when CPG is enabled */
-	if (IS_SKYLAKE(dev) && !((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) &&
-				 (INTEL_REVID(dev) <= SKL_REVID_E0)))
+	if (IS_SKYLAKE(dev))
 		I915_WRITE(GEN6_RC6_WAKE_RATE_LIMIT, 108 << 16);
 	else
 		I915_WRITE(GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16);
···
 	 * WaRsDisableCoarsePowerGating:skl,bxt - Render/Media PG need to be disabled with RC6.
 	 */
 	if ((IS_BROXTON(dev) && (INTEL_REVID(dev) < BXT_REVID_B0)) ||
-	    ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && (INTEL_REVID(dev) <= SKL_REVID_E0)))
+	    ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && (INTEL_REVID(dev) <= SKL_REVID_F0)))
 		I915_WRITE(GEN9_PG_ENABLE, 0);
 	else
 		I915_WRITE(GEN9_PG_ENABLE, (rc6_mask & GEN6_RC_CTL_RC6_ENABLE) ?
+1-4
drivers/gpu/drm/omapdrm/omap_fbdev.c
···
 	dma_addr_t paddr;
 	int ret;
 
-	/* only doing ARGB32 since this is what is needed to alpha-blend
-	 * with video overlays:
-	 */
 	sizes->surface_bpp = 32;
-	sizes->surface_depth = 32;
+	sizes->surface_depth = 24;
 
 	DBG("create fbdev: %dx%d@%d (%dx%d)", sizes->surface_width,
 			sizes->surface_height, sizes->surface_bpp,
+1
drivers/hwmon/Kconfig
···
 config SENSORS_SHT15
 	tristate "Sensiron humidity and temperature sensors. SHT15 and compat."
 	depends on GPIOLIB || COMPILE_TEST
+	select BITREVERSE
 	help
 	  If you say yes here you get support for the Sensiron SHT10, SHT11,
 	  SHT15, SHT71, SHT75 humidity and temperature sensors.
+15-1
drivers/hwmon/tmp102.c
···
 	u16 config_orig;
 	unsigned long last_update;
 	int temp[3];
+	bool first_time;
 };
 
 /* convert left adjusted 13-bit TMP102 register value to milliCelsius */
···
 			tmp102->temp[i] = tmp102_reg_to_mC(status);
 		}
 		tmp102->last_update = jiffies;
+		tmp102->first_time = false;
 	}
 	mutex_unlock(&tmp102->lock);
 	return tmp102;
···
 static int tmp102_read_temp(void *dev, int *temp)
 {
 	struct tmp102 *tmp102 = tmp102_update_device(dev);
+
+	/* Is it too early even to return a conversion? */
+	if (tmp102->first_time) {
+		dev_dbg(dev, "%s: Conversion not ready yet..\n", __func__);
+		return -EAGAIN;
+	}
 
 	*temp = tmp102->temp[0];
···
 {
 	struct sensor_device_attribute *sda = to_sensor_dev_attr(attr);
 	struct tmp102 *tmp102 = tmp102_update_device(dev);
+
+	/* Is it too early even to return a read? */
+	if (tmp102->first_time)
+		return -EAGAIN;
 
 	return sprintf(buf, "%d\n", tmp102->temp[sda->index]);
 }
···
 		status = -ENODEV;
 		goto fail_restore_config;
 	}
-	tmp102->last_update = jiffies - HZ;
+	tmp102->last_update = jiffies;
+	/* Mark that we are not ready with data until conversion is complete */
+	tmp102->first_time = true;
 	mutex_init(&tmp102->lock);
 
 	hwmon_dev = hwmon_device_register_with_groups(dev, client->name,
+9-2
drivers/i2c/busses/i2c-davinci.c
···
 	 * d is always 6 on Keystone I2C controller
 	 */
 
-	/* get minimum of 7 MHz clock, but max of 12 MHz */
-	psc = (input_clock / 7000000) - 1;
+	/*
+	 * Both Davinci and current Keystone User Guides recommend a value
+	 * between 7MHz and 12MHz. In reality 7MHz module clock doesn't
+	 * always produce enough margin between SDA and SCL transitions.
+	 * Measurements show that the higher the module clock is, the
+	 * bigger is the margin, providing more reliable communication.
+	 * So we better target for 12MHz.
+	 */
+	psc = (input_clock / 12000000) - 1;
 	if ((input_clock / (psc + 1)) > 12000000)
 		psc++;	/* better to run under spec than over */
 	d = (psc >= 2) ? 5 : 7 - psc;
+6
drivers/i2c/busses/i2c-designware-core.c
···
 tx_aborted:
 	if ((stat & (DW_IC_INTR_TX_ABRT | DW_IC_INTR_STOP_DET)) || dev->msg_err)
 		complete(&dev->cmd_complete);
+	else if (unlikely(dev->accessor_flags & ACCESS_INTR_MASK)) {
+		/* workaround to trigger pending interrupt */
+		stat = dw_readl(dev, DW_IC_INTR_MASK);
+		i2c_dw_disable_int(dev);
+		dw_writel(dev, stat, DW_IC_INTR_MASK);
+	}
 
 	return IRQ_HANDLED;
 }
···
 	input_set_abs_params(inputdev, ABS_TILT_Y, AIPTEK_TILT_MIN, AIPTEK_TILT_MAX, 0, 0);
 	input_set_abs_params(inputdev, ABS_WHEEL, AIPTEK_WHEEL_MIN, AIPTEK_WHEEL_MAX - 1, 0, 0);
 
+	/* Verify that a device really has an endpoint */
+	if (intf->altsetting[0].desc.bNumEndpoints < 1) {
+		dev_err(&intf->dev,
+			"interface has %d endpoints, but must have minimum 1\n",
+			intf->altsetting[0].desc.bNumEndpoints);
+		err = -EINVAL;
+		goto fail3;
+	}
 	endpoint = &intf->altsetting[0].endpoint[0].desc;
 
 	/* Go set up our URB, which is called when the tablet receives
···
 	if (i == ARRAY_SIZE(speeds)) {
 		dev_info(&intf->dev,
 			 "Aiptek tried all speeds, no sane response\n");
+		err = -EINVAL;
 		goto fail3;
 	}
 
···
 {
 	int i;
 
-	for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS - 1; i++)
+	for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS; i++)
 		if (itv->card->video_inputs[i].video_type == 0)
 			break;
 	itv->nof_inputs = i;
-	for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS - 1; i++)
+	for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS; i++)
 		if (itv->card->audio_inputs[i].audio_type == 0)
 			break;
 	itv->nof_audio_inputs = i;
+1-1
drivers/media/usb/airspy/airspy.c
···
 	int urbs_submitted;
 
 	/* USB control message buffer */
-	#define BUF_SIZE 24
+	#define BUF_SIZE 128
 	u8 buf[BUF_SIZE];
 
 	/* Current configuration */
+12-1
drivers/media/usb/hackrf/hackrf.c
···
 #include <media/videobuf2-v4l2.h>
 #include <media/videobuf2-vmalloc.h>
 
+/*
+ * Used Avago MGA-81563 RF amplifier could be destroyed pretty easily with too
+ * strong signal or transmitting to bad antenna.
+ * Set RF gain control to 'grabbed' state by default for sure.
+ */
+static bool hackrf_enable_rf_gain_ctrl;
+module_param_named(enable_rf_gain_ctrl, hackrf_enable_rf_gain_ctrl, bool, 0644);
+MODULE_PARM_DESC(enable_rf_gain_ctrl, "enable RX/TX RF amplifier control (warn: could damage amplifier)");
+
 /* HackRF USB API commands (from HackRF Library) */
 enum {
 	CMD_SET_TRANSCEIVER_MODE           = 0x01,
···
 		dev_err(dev->dev, "Could not initialize controls\n");
 		goto err_v4l2_ctrl_handler_free_rx;
 	}
+	v4l2_ctrl_grab(dev->rx_rf_gain, !hackrf_enable_rf_gain_ctrl);
 	v4l2_ctrl_handler_setup(&dev->rx_ctrl_handler);
 
 	/* Register controls for transmitter */
···
 		dev_err(dev->dev, "Could not initialize controls\n");
 		goto err_v4l2_ctrl_handler_free_tx;
 	}
+	v4l2_ctrl_grab(dev->tx_rf_gain, !hackrf_enable_rf_gain_ctrl);
 	v4l2_ctrl_handler_setup(&dev->tx_ctrl_handler);
 
 	/* Register the v4l2_device structure */
···
 err_kfree:
 	kfree(dev);
 err:
-	dev_dbg(dev->dev, "failed=%d\n", ret);
+	dev_dbg(&intf->dev, "failed=%d\n", ret);
 	return ret;
 }
+10-2
drivers/mtd/ofpart.c
···
 
 	ofpart_node = of_get_child_by_name(mtd_node, "partitions");
 	if (!ofpart_node) {
-		pr_warn("%s: 'partitions' subnode not found on %s. Trying to parse direct subnodes as partitions.\n",
-			master->name, mtd_node->full_name);
+		/*
+		 * We might get here even when ofpart isn't used at all (e.g.,
+		 * when using another parser), so don't be louder than
+		 * KERN_DEBUG
+		 */
+		pr_debug("%s: 'partitions' subnode not found on %s. Trying to parse direct subnodes as partitions.\n",
+			 master->name, mtd_node->full_name);
 		ofpart_node = mtd_node;
 		dedicated = false;
+	} else if (!of_device_is_compatible(ofpart_node, "fixed-partitions")) {
+		/* The 'partitions' subnode might be used by another parser */
+		return 0;
 	}
 
 	/* First count the subnodes */
+2-2
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
···
 	usleep_range(10, 15);
 
 	/* Poll Until Poll Condition */
-	while (count-- && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
+	while (--count && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
 		usleep_range(500, 600);
 
 	if (!count)
···
 	/* Poll Until Poll Condition */
 	for (i = 0; i < pdata->tx_q_count; i++) {
 		count = 2000;
-		while (count-- && XGMAC_MTL_IOREAD_BITS(pdata, i,
+		while (--count && XGMAC_MTL_IOREAD_BITS(pdata, i,
 							MTL_Q_TQOMR, FTQ))
 			usleep_range(500, 600);
 
···
 		sizeof(struct atl1c_recv_ret_status) * rx_desc_count +
 		8 * 4;
 
-	ring_header->desc = pci_alloc_consistent(pdev, ring_header->size,
-				&ring_header->dma);
+	ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,
+				&ring_header->dma, GFP_KERNEL);
 	if (unlikely(!ring_header->desc)) {
-		dev_err(&pdev->dev, "pci_alloc_consistend failed\n");
+		dev_err(&pdev->dev, "could not get memory for DMA buffer\n");
 		goto err_nomem;
 	}
-	memset(ring_header->desc, 0, ring_header->size);
 	/* init TPD ring */
 
 	tpd_ring[0].dma = roundup(ring_header->dma, 8);
+1
drivers/net/ethernet/aurora/Kconfig
···
 
 config AURORA_NB8800
 	tristate "Aurora AU-NB8800 support"
+	depends on HAS_DMA
 	select PHYLIB
 	help
 	 Support for the AU-NB8800 gigabit Ethernet controller.
+32-14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	req.ver_upd = DRV_VER_UPD;
 
 	if (BNXT_PF(bp)) {
-		unsigned long vf_req_snif_bmap[4];
+		DECLARE_BITMAP(vf_req_snif_bmap, 256);
 		u32 *data = (u32 *)vf_req_snif_bmap;
 
-		memset(vf_req_snif_bmap, 0, 32);
+		memset(vf_req_snif_bmap, 0, sizeof(vf_req_snif_bmap));
 		for (i = 0; i < ARRAY_SIZE(bnxt_vf_req_snif); i++)
 			__set_bit(bnxt_vf_req_snif[i], vf_req_snif_bmap);
 
-		for (i = 0; i < 8; i++) {
-			req.vf_req_fwd[i] = cpu_to_le32(*data);
-			data++;
-		}
+		for (i = 0; i < 8; i++)
+			req.vf_req_fwd[i] = cpu_to_le32(data[i]);
+
 		req.enables |=
 			cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_VF_REQ_FWD);
 	}
···
 		bp->nge_port_cnt = 1;
 	}
 
-	bp->state = BNXT_STATE_OPEN;
+	set_bit(BNXT_STATE_OPEN, &bp->state);
 	bnxt_enable_int(bp);
 	/* Enable TX queues */
 	bnxt_tx_enable(bp);
···
 	/* Change device state to avoid TX queue wake up's */
 	bnxt_tx_disable(bp);
 
-	bp->state = BNXT_STATE_CLOSED;
-	cancel_work_sync(&bp->sp_task);
+	clear_bit(BNXT_STATE_OPEN, &bp->state);
+	smp_mb__after_atomic();
+	while (test_bit(BNXT_STATE_IN_SP_TASK, &bp->state))
+		msleep(20);
 
 	/* Flush rings before disabling interrupts */
 	bnxt_shutdown_nic(bp, irq_re_init);
···
 static void bnxt_reset_task(struct bnxt *bp)
 {
 	bnxt_dbg_dump_states(bp);
-	if (netif_running(bp->dev))
-		bnxt_tx_disable(bp); /* prevent tx timout again */
+	if (netif_running(bp->dev)) {
+		bnxt_close_nic(bp, false, false);
+		bnxt_open_nic(bp, false, false);
+	}
 }
 
 static void bnxt_tx_timeout(struct net_device *dev)
···
 	struct bnxt *bp = container_of(work, struct bnxt, sp_task);
 	int rc;
 
-	if (bp->state != BNXT_STATE_OPEN)
+	set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+	smp_mb__after_atomic();
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
+		clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
 		return;
+	}
 
 	if (test_and_clear_bit(BNXT_RX_MASK_SP_EVENT, &bp->sp_event))
 		bnxt_cfg_rx_mode(bp);
···
 		bnxt_hwrm_tunnel_dst_port_free(
 			bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
 	}
-	if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event))
+	if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event)) {
+		/* bnxt_reset_task() calls bnxt_close_nic() which waits
+		 * for BNXT_STATE_IN_SP_TASK to clear.
+		 */
+		clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+		rtnl_lock();
 		bnxt_reset_task(bp);
+		set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+		rtnl_unlock();
+	}
+
+	smp_mb__before_atomic();
+	clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
 }
 
 static int bnxt_init_board(struct pci_dev *pdev, struct net_device *dev)
···
 	bp->timer.function = bnxt_timer;
 	bp->current_interval = BNXT_TIMER_INTERVAL;
 
-	bp->state = BNXT_STATE_CLOSED;
+	clear_bit(BNXT_STATE_OPEN, &bp->state);
 
 	return 0;
 
···
 #ifdef CONFIG_BNXT_SRIOV
 static int bnxt_vf_ndo_prep(struct bnxt *bp, int vf_id)
 {
-	if (bp->state != BNXT_STATE_OPEN) {
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
 		netdev_err(bp->dev, "vf ndo called though PF is down\n");
 		return -EINVAL;
 	}
+18-21
drivers/net/ethernet/cavium/thunder/nic_main.c
···
 #define NIC_GET_BGX_FROM_VF_LMAC_MAP(map)	((map >> 4) & 0xF)
 #define NIC_GET_LMAC_FROM_VF_LMAC_MAP(map)	(map & 0xF)
 	u8			vf_lmac_map[MAX_LMAC];
-	u8			lmac_cnt;
 	struct delayed_work	dwork;
 	struct workqueue_struct	*check_link;
 	u8			link[MAX_LMAC];
···
 	u64 lmac_credit;
 
 	nic->num_vf_en = 0;
-	nic->lmac_cnt = 0;
 
 	for (bgx = 0; bgx < NIC_MAX_BGX; bgx++) {
 		if (!(bgx_map & (1 << bgx)))
···
 		nic->vf_lmac_map[next_bgx_lmac++] =
 				NIC_SET_VF_LMAC_MAP(bgx, lmac);
 		nic->num_vf_en += lmac_cnt;
-		nic->lmac_cnt += lmac_cnt;
 
 		/* Program LMAC credits */
 		lmac_credit = (1ull << 1); /* channel credit enable */
···
 	return 0;
 }
 
+static void nic_enable_vf(struct nicpf *nic, int vf, bool enable)
+{
+	int bgx, lmac;
+
+	nic->vf_enabled[vf] = enable;
+
+	if (vf >= nic->num_vf_en)
+		return;
+
+	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+	bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, enable);
+}
+
 /* Interrupt handler to handle mailbox messages from VFs */
 static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 {
···
 		break;
 	case NIC_MBOX_MSG_CFG_DONE:
 		/* Last message of VF config msg sequence */
-		nic->vf_enabled[vf] = true;
-		if (vf >= nic->lmac_cnt)
-			goto unlock;
-
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, true);
+		nic_enable_vf(nic, vf, true);
 		goto unlock;
 	case NIC_MBOX_MSG_SHUTDOWN:
 		/* First msg in VF teardown sequence */
-		nic->vf_enabled[vf] = false;
 		if (vf >= nic->num_vf_en)
 			nic->sqs_used[vf - nic->num_vf_en] = false;
 		nic->pqs_vf[vf] = 0;
-
-		if (vf >= nic->lmac_cnt)
-			break;
-
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, false);
+		nic_enable_vf(nic, vf, false);
 		break;
 	case NIC_MBOX_MSG_ALLOC_SQS:
 		nic_alloc_sqs(nic, &mbx.sqs_alloc);
···
 
 	mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
 
-	for (vf = 0; vf < nic->lmac_cnt; vf++) {
+	for (vf = 0; vf < nic->num_vf_en; vf++) {
 		/* Poll only if VF is UP */
 		if (!nic->vf_enabled[vf])
 			continue;
+8-20
drivers/net/ethernet/ezchip/nps_enet.c
···
 		*reg = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
 	else { /* !dst_is_aligned */
 		for (i = 0; i < len; i++, reg++) {
-			u32 buf =
-				nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-
-			/* to accommodate word-unaligned address of "reg"
-			 * we have to do memcpy_toio() instead of simple "=".
-			 */
-			memcpy_toio((void __iomem *)reg, &buf, sizeof(buf));
+			u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
+			put_unaligned(buf, reg);
 		}
 	}
 
 	/* copy last bytes (if any) */
 	if (last) {
 		u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-
-		memcpy_toio((void __iomem *)reg, &buf, last);
+		memcpy((u8 *)reg, &buf, last);
 	}
 }
···
 	struct nps_enet_tx_ctl tx_ctrl;
 	short length = skb->len;
 	u32 i, len = DIV_ROUND_UP(length, sizeof(u32));
-	u32 *src = (u32 *)virt_to_phys(skb->data);
+	u32 *src = (void *)skb->data;
 	bool src_is_aligned = IS_ALIGNED((unsigned long)src, sizeof(u32));
 
 	tx_ctrl.value = 0;
···
 	if (src_is_aligned)
 		for (i = 0; i < len; i++, src++)
 			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, *src);
-	else { /* !src_is_aligned */
-		for (i = 0; i < len; i++, src++) {
-			u32 buf;
-
-			/* to accommodate word-unaligned address of "src"
-			 * we have to do memcpy_fromio() instead of simple "="
-			 */
-			memcpy_fromio(&buf, (void __iomem *)src, sizeof(buf));
-			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, buf);
-		}
-	}
+	else /* !src_is_aligned */
+		for (i = 0; i < len; i++, src++)
+			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF,
+					 get_unaligned(src));
+
 	/* Write the length of the Frame */
 	tx_ctrl.nt = length;
 
+1-1
drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
···
 	cbd_t __iomem *prev_bd;
 	cbd_t __iomem *last_tx_bd;
 
-	last_tx_bd = fep->tx_bd_base + (fpi->tx_ring * sizeof(cbd_t));
+	last_tx_bd = fep->tx_bd_base + ((fpi->tx_ring - 1) * sizeof(cbd_t));
 
 	/* get the current bd held in TBPTR and scan back from this point */
 	recheck_bd = curr_tbptr = (cbd_t __iomem *)
+1-1
drivers/net/ethernet/freescale/fsl_pq_mdio.c
···
 	 * address). Print error message but continue anyway.
 	 */
 	if ((void *)tbipa > priv->map + resource_size(&res) - 4)
-		dev_err(&pdev->dev, "invalid register map (should be at least 0x%04x to contain TBI address)\n",
+		dev_err(&pdev->dev, "invalid register map (should be at least 0x%04zx to contain TBI address)\n",
 			((void *)tbipa - priv->map) + 4);
 
 	iowrite32be(be32_to_cpup(prop), tbipa);
···
 		goto init_adminq_exit;
 	}
 
-	/* initialize locks */
-	mutex_init(&hw->aq.asq_mutex);
-	mutex_init(&hw->aq.arq_mutex);
-
 	/* Set up register offsets */
 	i40e_adminq_init_regs(hw);
···
 
 	i40e_shutdown_asq(hw);
 	i40e_shutdown_arq(hw);
-
-	/* destroy the locks */
 
 	if (hw->nvm_buff.va)
 		i40e_free_virt_mem(hw, &hw->nvm_buff);
+10-1
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	/* set up a default setting for link flow control */
 	pf->hw.fc.requested_mode = I40E_FC_NONE;
 
+	/* set up the locks for the AQ, do this only once in probe
+	 * and destroy them only once in remove
+	 */
+	mutex_init(&hw->aq.asq_mutex);
+	mutex_init(&hw->aq.arq_mutex);
+
 	err = i40e_init_adminq(hw);
 
 	/* provide nvm, fw, api versions */
···
 	set_bit(__I40E_DOWN, &pf->state);
 	del_timer_sync(&pf->service_timer);
 	cancel_work_sync(&pf->service_task);
-	i40e_fdir_teardown(pf);
 
 	if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
 		i40e_free_vfs(pf);
···
 		dev_warn(&pdev->dev,
 			 "Failed to destroy the Admin Queue resources: %d\n",
 			 ret_code);
+
+	/* destroy the locks only once, here */
+	mutex_destroy(&hw->aq.arq_mutex);
+	mutex_destroy(&hw->aq.asq_mutex);
 
 	/* Clear all dynamic memory lists of rings, q_vectors, and VSIs */
 	i40e_clear_interrupt_scheme(pf);
-6
drivers/net/ethernet/intel/i40evf/i40e_adminq.c
···
 		goto init_adminq_exit;
 	}
 
-	/* initialize locks */
-	mutex_init(&hw->aq.asq_mutex);
-	mutex_init(&hw->aq.arq_mutex);
-
 	/* Set up register offsets */
 	i40e_adminq_init_regs(hw);
···
 
 	i40e_shutdown_asq(hw);
 	i40e_shutdown_arq(hw);
-
-	/* destroy the locks */
 
 	if (hw->nvm_buff.va)
 		i40e_free_virt_mem(hw, &hw->nvm_buff);
+10
drivers/net/ethernet/intel/i40evf/i40evf_main.c
···
 	hw->bus.device = PCI_SLOT(pdev->devfn);
 	hw->bus.func = PCI_FUNC(pdev->devfn);
 
+	/* set up the locks for the AQ, do this only once in probe
+	 * and destroy them only once in remove
+	 */
+	mutex_init(&hw->aq.asq_mutex);
+	mutex_init(&hw->aq.arq_mutex);
+
 	INIT_LIST_HEAD(&adapter->mac_filter_list);
 	INIT_LIST_HEAD(&adapter->vlan_filter_list);
···
 
 	if (hw->aq.asq.count)
 		i40evf_shutdown_adminq(hw);
+
+	/* destroy the locks only once, here */
+	mutex_destroy(&hw->aq.arq_mutex);
+	mutex_destroy(&hw->aq.asq_mutex);
 
 	iounmap(hw->hw_addr);
 	pci_release_regions(pdev);
···
 	u32 state;
 
 	state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
-	while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit--) {
+	while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit) {
+		idc->vnic_wait_limit--;
 		msleep(1000);
 		state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
 	}
+3-2
drivers/net/ethernet/qlogic/qlge/qlge_main.c
···
 
 	/* Wait for an outstanding reset to complete. */
 	if (!test_bit(QL_ADAPTER_UP, &qdev->flags)) {
-		int i = 3;
-		while (i-- && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
+		int i = 4;
+
+		while (--i && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
 			netif_err(qdev, ifup, qdev->ndev,
 				  "Waiting for adapter UP...\n");
 			ssleep(1);
+2-3
drivers/net/ethernet/qualcomm/qca_spi.c
···
 	netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
 		    jiffies, jiffies - dev->trans_start);
 	qca->net_dev->stats.tx_errors++;
-	/* wake the queue if there is room */
-	if (qcaspi_tx_ring_has_space(&qca->txr))
-		netif_wake_queue(dev);
+	/* Trigger tx queue flush and QCA7000 reset */
+	qca->sync = QCASPI_SYNC_UNKNOWN;
 }
 
 static int
+4-1
drivers/net/ethernet/renesas/ravb_main.c
···
 		netdev_info(ndev, "limited PHY to 100Mbit/s\n");
 	}
 
+	/* 10BASE is not supported */
+	phydev->supported &= ~PHY_10BT_FEATURES;
+
 	netdev_info(ndev, "attached PHY %d (IRQ %d) to driver %s\n",
 		    phydev->addr, phydev->irq, phydev->drv->name);
···
 	"rx_queue_1_mcast_packets",
 	"rx_queue_1_errors",
 	"rx_queue_1_crc_errors",
-	"rx_queue_1_frame_errors_",
+	"rx_queue_1_frame_errors",
 	"rx_queue_1_length_errors",
 	"rx_queue_1_missed_errors",
 	"rx_queue_1_over_errors",
+41-14
drivers/net/ethernet/renesas/sh_eth.c
···
 		NETIF_MSG_RX_ERR| \
 		NETIF_MSG_TX_ERR)
 
+#define SH_ETH_OFFSET_INVALID	((u16)~0)
+
 #define SH_ETH_OFFSET_DEFAULTS			\
 	[0 ... SH_ETH_MAX_REGISTER_OFFSET - 1] = SH_ETH_OFFSET_INVALID
···
 
 static void sh_eth_rcv_snd_disable(struct net_device *ndev);
 static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev);
+
+static void sh_eth_write(struct net_device *ndev, u32 data, int enum_index)
+{
+	struct sh_eth_private *mdp = netdev_priv(ndev);
+	u16 offset = mdp->reg_offset[enum_index];
+
+	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+		return;
+
+	iowrite32(data, mdp->addr + offset);
+}
+
+static u32 sh_eth_read(struct net_device *ndev, int enum_index)
+{
+	struct sh_eth_private *mdp = netdev_priv(ndev);
+	u16 offset = mdp->reg_offset[enum_index];
+
+	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+		return ~0U;
+
+	return ioread32(mdp->addr + offset);
+}
 
 static bool sh_eth_is_gether(struct sh_eth_private *mdp)
 {
···
 			break;
 		}
 		mdp->rx_skbuff[i] = skb;
-		rxdesc->addr = dma_addr;
+		rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
 		rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP);
 
 		/* Rx descriptor address set */
···
 			   entry, edmac_to_cpu(mdp, txdesc->status));
 		/* Free the original skb. */
 		if (mdp->tx_skbuff[entry]) {
-			dma_unmap_single(&ndev->dev, txdesc->addr,
+			dma_unmap_single(&ndev->dev,
+					 edmac_to_cpu(mdp, txdesc->addr),
 					 txdesc->buffer_length, DMA_TO_DEVICE);
 			dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
 			mdp->tx_skbuff[entry] = NULL;
···
 		if (mdp->cd->shift_rd0)
 			desc_status >>= 16;
 
+		skb = mdp->rx_skbuff[entry];
 		if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 |
 				   RD_RFS5 | RD_RFS6 | RD_RFS10)) {
 			ndev->stats.rx_errors++;
···
 				ndev->stats.rx_missed_errors++;
 			if (desc_status & RD_RFS10)
 				ndev->stats.rx_over_errors++;
-		} else {
+		} else if (skb) {
+			dma_addr = edmac_to_cpu(mdp, rxdesc->addr);
 			if (!mdp->cd->hw_swap)
 				sh_eth_soft_swap(
-					phys_to_virt(ALIGN(rxdesc->addr, 4)),
+					phys_to_virt(ALIGN(dma_addr, 4)),
 					pkt_len + 2);
-			skb = mdp->rx_skbuff[entry];
 			mdp->rx_skbuff[entry] = NULL;
 			if (mdp->cd->rpadir)
 				skb_reserve(skb, NET_IP_ALIGN);
-			dma_unmap_single(&ndev->dev, rxdesc->addr,
+			dma_unmap_single(&ndev->dev, dma_addr,
 					 ALIGN(mdp->rx_buf_sz, 32),
 					 DMA_FROM_DEVICE);
 			skb_put(skb, pkt_len);
···
 			mdp->rx_skbuff[entry] = skb;
 
 			skb_checksum_none_assert(skb);
-			rxdesc->addr = dma_addr;
+			rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
 		}
 		dma_wmb(); /* RACT bit must be set after all the above writes */
 		if (entry >= mdp->num_rx_ring - 1)
···
 	/* Free all the skbuffs in the Rx queue. */
 	for (i = 0; i < mdp->num_rx_ring; i++) {
 		rxdesc = &mdp->rx_ring[i];
-		rxdesc->status = 0;
-		rxdesc->addr = 0xBADF00D0;
+		rxdesc->status = cpu_to_edmac(mdp, 0);
+		rxdesc->addr = cpu_to_edmac(mdp, 0xBADF00D0);
 		dev_kfree_skb(mdp->rx_skbuff[i]);
 		mdp->rx_skbuff[i] = NULL;
 	}
···
 {
 	struct sh_eth_private *mdp = netdev_priv(ndev);
 	struct sh_eth_txdesc *txdesc;
+	dma_addr_t dma_addr;
 	u32 entry;
 	unsigned long flags;
 
···
 	txdesc = &mdp->tx_ring[entry];
 	/* soft swap. */
 	if (!mdp->cd->hw_swap)
-		sh_eth_soft_swap(phys_to_virt(ALIGN(txdesc->addr, 4)),
-				 skb->len + 2);
-	txdesc->addr = dma_map_single(&ndev->dev, skb->data, skb->len,
-				      DMA_TO_DEVICE);
-	if (dma_mapping_error(&ndev->dev, txdesc->addr)) {
+		sh_eth_soft_swap(PTR_ALIGN(skb->data, 4), skb->len + 2);
+	dma_addr = dma_map_single(&ndev->dev, skb->data, skb->len,
+				  DMA_TO_DEVICE);
+	if (dma_mapping_error(&ndev->dev, dma_addr)) {
 		kfree_skb(skb);
 		return NETDEV_TX_OK;
 	}
+	txdesc->addr = cpu_to_edmac(mdp, dma_addr);
 	txdesc->buffer_length = skb->len;
 
 	dma_wmb(); /* TACT bit must be set after all the above writes */
···
 {
 	const struct device *dev = &phydev->dev;
 	const struct device_node *of_node = dev->of_node;
+	const struct device *dev_walker;
 
-	if (!of_node && dev->parent->of_node)
-		of_node = dev->parent->of_node;
+	/* The Micrel driver has a deprecated option to place phy OF
+	 * properties in the MAC node. Walk up the tree of devices to
+	 * find a device with an OF node.
+	 */
+	dev_walker = &phydev->dev;
+	do {
+		of_node = dev_walker->of_node;
+		dev_walker = dev_walker->parent;
+	} while (!of_node && dev_walker);
 
 	if (of_node) {
 		ksz9021_load_values_from_of(phydev, of_node,
···
 	struct pptp_opt *opt = &po->proto.pptp;
 	int error = 0;
 
+	if (sockaddr_len < sizeof(struct sockaddr_pppox))
+		return -EINVAL;
+
 	lock_sock(sk);
 
 	opt->src_addr = sp->sa_addr.pptp;
···
 	struct rtable *rt;
 	struct flowi4 fl4;
 	int error = 0;
+
+	if (sockaddr_len < sizeof(struct sockaddr_pppox))
+		return -EINVAL;
 
 	if (sp->sa_protocol != PX_PROTO_PPTP)
 		return -EINVAL;
+25-1
drivers/net/usb/cdc_mbim.c
···
 	if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
 		goto err;
 
-	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
+	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, dev->driver_info->data);
 	if (ret)
 		goto err;
 
···
 	.tx_fixup = cdc_mbim_tx_fixup,
 };
 
+/* The specification explicitly allows NDPs to be placed anywhere in the
+ * frame, but some devices fail unless the NDP is placed after the IP
+ * packets.  Using the CDC_NCM_FLAG_NDP_TO_END flags to force this
+ * behaviour.
+ *
+ * Note: The current implementation of this feature restricts each NTB
+ * to a single NDP, implying that multiplexed sessions cannot share an
+ * NTB. This might affect performance for multiplexed sessions.
+ */
+static const struct driver_info cdc_mbim_info_ndp_to_end = {
+	.description = "CDC MBIM",
+	.flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN,
+	.bind = cdc_mbim_bind,
+	.unbind = cdc_mbim_unbind,
+	.manage_power = cdc_mbim_manage_power,
+	.rx_fixup = cdc_mbim_rx_fixup,
+	.tx_fixup = cdc_mbim_tx_fixup,
+	.data = CDC_NCM_FLAG_NDP_TO_END,
+};
+
 static const struct usb_device_id mbim_devs[] = {
 	/* This duplicate NCM entry is intentional. MBIM devices can
 	 * be disguised as NCM by default, and this is necessary to
···
 	/* ZLP conformance whitelist: All Ericsson MBIM devices */
 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bdb, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
 	  .driver_info = (unsigned long)&cdc_mbim_info,
 	},
+	/* Huawei E3372 fails unless NDP comes after the IP packets */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x12d1, 0x157d, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+	  .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end,
+	},
 	/* default entry */
 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+9-1
drivers/net/usb/cdc_ncm.c
···955955 * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and956956 * the wNdpIndex field in the header is actually not consistent with reality. It will be later.957957 */958958- if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)958958+ if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {959959 if (ctx->delayed_ndp16->dwSignature == sign)960960 return ctx->delayed_ndp16;961961+962962+ /* We can only push a single NDP to the end. Return963963+ * NULL to send what we've already got and queue this964964+ * skb for later.965965+ */966966+ else if (ctx->delayed_ndp16->dwSignature)967967+ return NULL;968968+ }961969962970 /* follow the chain of NDPs, looking for a match */963971 while (ndpoffset) {
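The delayed-NDP lookup above enforces a simple policy when CDC_NCM_FLAG_NDP_TO_END is set: an NTB carries exactly one NDP, so a second session must wait for the next NTB. The decision can be sketched standalone; the struct, the helper name, and the explicit "claim the free slot" branch are simplifications for illustration, not the driver's real types or flow:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's delayed NDP16 slot. */
struct delayed_ndp {
	uint32_t signature;	/* 0 while the slot is unused */
};

/*
 * Reuse the queued NDP when the signature matches; if a different NDP
 * is already queued, return NULL so the caller transmits the current
 * NTB and queues the skb for later; otherwise claim the free slot for
 * this session.
 */
static struct delayed_ndp *ndp_to_end_lookup(struct delayed_ndp *slot,
					     uint32_t sign)
{
	if (slot->signature == sign)
		return slot;		/* same session: append here */
	if (slot->signature)
		return NULL;		/* NTB already owned: flush first */
	slot->signature = sign;		/* claim the free slot */
	return slot;
}
```

With one slot per NTB, two distinct signatures can never share a frame, which is exactly the multiplexing restriction the comment in the hunk above warns about.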
+3-18
drivers/net/usb/r8152.c
···3067306730683068 mutex_lock(&tp->control);3069306930703070- /* The WORK_ENABLE may be set when autoresume occurs */30713071- if (test_bit(WORK_ENABLE, &tp->flags)) {30723072- clear_bit(WORK_ENABLE, &tp->flags);30733073- usb_kill_urb(tp->intr_urb);30743074- cancel_delayed_work_sync(&tp->schedule);30753075-30763076- /* disable the tx/rx, if the workqueue has enabled them. */30773077- if (netif_carrier_ok(netdev))30783078- tp->rtl_ops.disable(tp);30793079- }30803080-30813070 tp->rtl_ops.up(tp);3082307130833072 rtl8152_set_speed(tp, AUTONEG_ENABLE,···31123123 rtl_stop_rx(tp);31133124 } else {31143125 mutex_lock(&tp->control);31153115-31163116- /* The autosuspend may have been enabled and wouldn't31173117- * be disable when autoresume occurs, because the31183118- * netif_running() would be false.31193119- */31203120- rtl_runtime_suspend_enable(tp, false);3121312631223127 tp->rtl_ops.down(tp);31233128···34953512 netif_device_attach(tp->netdev);34963513 }3497351434983498- if (netif_running(tp->netdev)) {35153515+ if (netif_running(tp->netdev) && tp->netdev->flags & IFF_UP) {34993516 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {35003517 rtl_runtime_suspend_enable(tp, false);35013518 clear_bit(SELECTIVE_SUSPEND, &tp->flags);···35153532 }35163533 usb_submit_urb(tp->intr_urb, GFP_KERNEL);35173534 } else if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {35353535+ if (tp->netdev->flags & IFF_UP)35363536+ rtl_runtime_suspend_enable(tp, false);35183537 clear_bit(SELECTIVE_SUSPEND, &tp->flags);35193538 }35203539
+59-16
drivers/net/vxlan.c
···11581158 struct pcpu_sw_netstats *stats;11591159 union vxlan_addr saddr;11601160 int err = 0;11611161- union vxlan_addr *remote_ip;1162116111631162 /* For flow based devices, map all packets to VNI 0 */11641163 if (vs->flags & VXLAN_F_COLLECT_METADATA)···11681169 if (!vxlan)11691170 goto drop;1170117111711171- remote_ip = &vxlan->default_dst.remote_ip;11721172 skb_reset_mac_header(skb);11731173 skb_scrub_packet(skb, !net_eq(vxlan->net, dev_net(vxlan->dev)));11741174 skb->protocol = eth_type_trans(skb, vxlan->dev);···11771179 if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))11781180 goto drop;1179118111801180- /* Re-examine inner Ethernet packet */11811181- if (remote_ip->sa.sa_family == AF_INET) {11821182+ /* Get data from the outer IP header */11831183+ if (vxlan_get_sk_family(vs) == AF_INET) {11821184 oip = ip_hdr(skb);11831185 saddr.sin.sin_addr.s_addr = oip->saddr;11841186 saddr.sa.sa_family = AF_INET;···18461848 !(vxflags & VXLAN_F_UDP_CSUM));18471849}1848185018511851+#if IS_ENABLED(CONFIG_IPV6)18521852+static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,18531853+ struct sk_buff *skb, int oif,18541854+ const struct in6_addr *daddr,18551855+ struct in6_addr *saddr)18561856+{18571857+ struct dst_entry *ndst;18581858+ struct flowi6 fl6;18591859+ int err;18601860+18611861+ memset(&fl6, 0, sizeof(fl6));18621862+ fl6.flowi6_oif = oif;18631863+ fl6.daddr = *daddr;18641864+ fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;18651865+ fl6.flowi6_mark = skb->mark;18661866+ fl6.flowi6_proto = IPPROTO_UDP;18671867+18681868+ err = ipv6_stub->ipv6_dst_lookup(vxlan->net,18691869+ vxlan->vn6_sock->sock->sk,18701870+ &ndst, &fl6);18711871+ if (err < 0)18721872+ return ERR_PTR(err);18731873+18741874+ *saddr = fl6.saddr;18751875+ return ndst;18761876+}18771877+#endif18781878+18491879/* Bypass encapsulation if the destination is local */18501880static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,18511881 struct vxlan_dev 
*dst_vxlan)···20612035#if IS_ENABLED(CONFIG_IPV6)20622036 } else {20632037 struct dst_entry *ndst;20642064- struct flowi6 fl6;20382038+ struct in6_addr saddr;20652039 u32 rt6i_flags;2066204020672041 if (!vxlan->vn6_sock)20682042 goto drop;20692043 sk = vxlan->vn6_sock->sock->sk;2070204420712071- memset(&fl6, 0, sizeof(fl6));20722072- fl6.flowi6_oif = rdst ? rdst->remote_ifindex : 0;20732073- fl6.daddr = dst->sin6.sin6_addr;20742074- fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;20752075- fl6.flowi6_mark = skb->mark;20762076- fl6.flowi6_proto = IPPROTO_UDP;20772077-20782078- if (ipv6_stub->ipv6_dst_lookup(vxlan->net, sk, &ndst, &fl6)) {20452045+ ndst = vxlan6_get_route(vxlan, skb,20462046+ rdst ? rdst->remote_ifindex : 0,20472047+ &dst->sin6.sin6_addr, &saddr);20482048+ if (IS_ERR(ndst)) {20792049 netdev_dbg(dev, "no route to %pI6\n",20802050 &dst->sin6.sin6_addr);20812051 dev->stats.tx_carrier_errors++;···21032081 }2104208221052083 ttl = ttl ? : ip6_dst_hoplimit(ndst);21062106- err = vxlan6_xmit_skb(ndst, sk, skb, dev, &fl6.saddr, &fl6.daddr,20842084+ err = vxlan6_xmit_skb(ndst, sk, skb, dev, &saddr, &dst->sin6.sin6_addr,21072085 0, ttl, src_port, dst_port, htonl(vni << 8), md,21082086 !net_eq(vxlan->net, dev_net(vxlan->dev)),21092087 flags);···24172395 vxlan->cfg.port_max, true);24182396 dport = info->key.tp_dst ? 
: vxlan->cfg.dst_port;2419239724202420- if (ip_tunnel_info_af(info) == AF_INET)23982398+ if (ip_tunnel_info_af(info) == AF_INET) {23992399+ if (!vxlan->vn4_sock)24002400+ return -EINVAL;24212401 return egress_ipv4_tun_info(dev, skb, info, sport, dport);24222422- return -EINVAL;24022402+ } else {24032403+#if IS_ENABLED(CONFIG_IPV6)24042404+ struct dst_entry *ndst;24052405+24062406+ if (!vxlan->vn6_sock)24072407+ return -EINVAL;24082408+ ndst = vxlan6_get_route(vxlan, skb, 0,24092409+ &info->key.u.ipv6.dst,24102410+ &info->key.u.ipv6.src);24112411+ if (IS_ERR(ndst))24122412+ return PTR_ERR(ndst);24132413+ dst_release(ndst);24142414+24152415+ info->key.tp_src = sport;24162416+ info->key.tp_dst = dport;24172417+#else /* !CONFIG_IPV6 */24182418+ return -EPFNOSUPPORT;24192419+#endif24202420+ }24212421+ return 0;24232422}2424242324252424static const struct net_device_ops vxlan_netdev_ops = {
+15-19
drivers/net/xen-netback/netback.c
···258258 struct netrx_pending_operations *npo)259259{260260 struct xenvif_rx_meta *meta;261261- struct xen_netif_rx_request *req;261261+ struct xen_netif_rx_request req;262262263263- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);263263+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);264264265265 meta = npo->meta + npo->meta_prod++;266266 meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;267267 meta->gso_size = 0;268268 meta->size = 0;269269- meta->id = req->id;269269+ meta->id = req.id;270270271271 npo->copy_off = 0;272272- npo->copy_gref = req->gref;272272+ npo->copy_gref = req.gref;273273274274 return meta;275275}···424424 struct xenvif *vif = netdev_priv(skb->dev);425425 int nr_frags = skb_shinfo(skb)->nr_frags;426426 int i;427427- struct xen_netif_rx_request *req;427427+ struct xen_netif_rx_request req;428428 struct xenvif_rx_meta *meta;429429 unsigned char *data;430430 int head = 1;···443443444444 /* Set up a GSO prefix descriptor, if necessary */445445 if ((1 << gso_type) & vif->gso_prefix_mask) {446446- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);446446+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);447447 meta = npo->meta + npo->meta_prod++;448448 meta->gso_type = gso_type;449449 meta->gso_size = skb_shinfo(skb)->gso_size;450450 meta->size = 0;451451- meta->id = req->id;451451+ meta->id = req.id;452452 }453453454454- req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);454454+ RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);455455 meta = npo->meta + npo->meta_prod++;456456457457 if ((1 << gso_type) & vif->gso_mask) {···463463 }464464465465 meta->size = 0;466466- meta->id = req->id;466466+ meta->id = req.id;467467 npo->copy_off = 0;468468- npo->copy_gref = req->gref;468468+ npo->copy_gref = req.gref;469469470470 data = skb->data;471471 while (data < skb_tail_pointer(skb)) {···679679 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.680680 * Otherwise the interface can seize up due to 
insufficient credit.681681 */682682- max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;683683- max_burst = min(max_burst, 131072UL);684684- max_burst = max(max_burst, queue->credit_bytes);682682+ max_burst = max(131072UL, queue->credit_bytes);685683686684 /* Take care that adding a new chunk of credit doesn't wrap to zero. */687685 max_credit = queue->remaining_credit + queue->credit_bytes;···709711 spin_unlock_irqrestore(&queue->response_lock, flags);710712 if (cons == end)711713 break;712712- txp = RING_GET_REQUEST(&queue->tx, cons++);714714+ RING_COPY_REQUEST(&queue->tx, cons++, txp);713715 } while (1);714716 queue->tx.req_cons = cons;715717}···776778 if (drop_err)777779 txp = &dropped_tx;778780779779- memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),780780- sizeof(*txp));781781+ RING_COPY_REQUEST(&queue->tx, cons + slots, txp);781782782783 /* If the guest submitted a frame >= 64 KiB then783784 * first->size overflowed and following slots will···11091112 return -EBADR;11101113 }1111111411121112- memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),11131113- sizeof(extra));11151115+ RING_COPY_REQUEST(&queue->tx, cons, &extra);11141116 if (unlikely(!extra.type ||11151117 extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {11161118 queue->tx.req_cons = ++cons;···1318132213191323 idx = queue->tx.req_cons;13201324 rmb(); /* Ensure that we see the request before we copy it. */13211321- memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));13251325+ RING_COPY_REQUEST(&queue->tx, idx, &txreq);1322132613231327 /* Credit-based scheduling. */13241328 if (txreq.size > queue->remaining_credit &&
+1
drivers/phy/Kconfig
···233233 tristate "Allwinner sun9i SoC USB PHY driver"234234 depends on ARCH_SUNXI && HAS_IOMEM && OF235235 depends on RESET_CONTROLLER236236+ depends on USB_COMMON236237 select GENERIC_PHY237238 help238239 Enable this to support the transceiver that is part of Allwinner
+12-4
drivers/phy/phy-bcm-cygnus-pcie.c
···128128 struct phy_provider *provider;129129 struct resource *res;130130 unsigned cnt = 0;131131+ int ret;131132132133 if (of_get_child_count(node) == 0) {133134 dev_err(dev, "PHY no child node\n");···155154 if (of_property_read_u32(child, "reg", &id)) {156155 dev_err(dev, "missing reg property for %s\n",157156 child->name);158158- return -EINVAL;157157+ ret = -EINVAL;158158+ goto put_child;159159 }160160161161 if (id >= MAX_NUM_PHYS) {162162 dev_err(dev, "invalid PHY id: %u\n", id);163163- return -EINVAL;163163+ ret = -EINVAL;164164+ goto put_child;164165 }165166166167 if (core->phys[id].phy) {167168 dev_err(dev, "duplicated PHY id: %u\n", id);168168- return -EINVAL;169169+ ret = -EINVAL;170170+ goto put_child;169171 }170172171173 p = &core->phys[id];172174 p->phy = devm_phy_create(dev, child, &cygnus_pcie_phy_ops);173175 if (IS_ERR(p->phy)) {174176 dev_err(dev, "failed to create PHY\n");175175- return PTR_ERR(p->phy);177177+ ret = PTR_ERR(p->phy);178178+ goto put_child;176179 }177180178181 p->core = core;···196191 dev_dbg(dev, "registered %u PCIe PHY(s)\n", cnt);197192198193 return 0;194194+put_child:195195+ of_node_put(child);196196+ return ret;199197}200198201199static const struct of_device_id cygnus_pcie_phy_match_table[] = {
+14-6
drivers/phy/phy-berlin-sata.c
···195195 struct phy_provider *phy_provider;196196 struct phy_berlin_priv *priv;197197 struct resource *res;198198- int i = 0;198198+ int ret, i = 0;199199 u32 phy_id;200200201201 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);···237237 if (of_property_read_u32(child, "reg", &phy_id)) {238238 dev_err(dev, "missing reg property in node %s\n",239239 child->name);240240- return -EINVAL;240240+ ret = -EINVAL;241241+ goto put_child;241242 }242243243244 if (phy_id >= ARRAY_SIZE(phy_berlin_power_down_bits)) {244245 dev_err(dev, "invalid reg in node %s\n", child->name);245245- return -EINVAL;246246+ ret = -EINVAL;247247+ goto put_child;246248 }247249248250 phy_desc = devm_kzalloc(dev, sizeof(*phy_desc), GFP_KERNEL);249249- if (!phy_desc)250250- return -ENOMEM;251251+ if (!phy_desc) {252252+ ret = -ENOMEM;253253+ goto put_child;254254+ }251255252256 phy = devm_phy_create(dev, NULL, &phy_berlin_sata_ops);253257 if (IS_ERR(phy)) {254258 dev_err(dev, "failed to create PHY %d\n", phy_id);255255- return PTR_ERR(phy);259259+ ret = PTR_ERR(phy);260260+ goto put_child;256261 }257262258263 phy_desc->phy = phy;···274269 phy_provider =275270 devm_of_phy_provider_register(dev, phy_berlin_sata_phy_xlate);276271 return PTR_ERR_OR_ZERO(phy_provider);272272+put_child:273273+ of_node_put(child);274274+ return ret;277275}278276279277static const struct of_device_id phy_berlin_sata_of_match[] = {
+12-5
drivers/phy/phy-brcmstb-sata.c
···140140 struct brcm_sata_phy *priv;141141 struct resource *res;142142 struct phy_provider *provider;143143- int count = 0;143143+ int ret, count = 0;144144145145 if (of_get_child_count(dn) == 0)146146 return -ENODEV;···163163 if (of_property_read_u32(child, "reg", &id)) {164164 dev_err(dev, "missing reg property in node %s\n",165165 child->name);166166- return -EINVAL;166166+ ret = -EINVAL;167167+ goto put_child;167168 }168169169170 if (id >= MAX_PORTS) {170171 dev_err(dev, "invalid reg: %u\n", id);171171- return -EINVAL;172172+ ret = -EINVAL;173173+ goto put_child;172174 }173175 if (priv->phys[id].phy) {174176 dev_err(dev, "already registered port %u\n", id);175175- return -EINVAL;177177+ ret = -EINVAL;178178+ goto put_child;176179 }177180178181 port = &priv->phys[id];···185182 port->ssc_en = of_property_read_bool(child, "brcm,enable-ssc");186183 if (IS_ERR(port->phy)) {187184 dev_err(dev, "failed to create PHY\n");188188- return PTR_ERR(port->phy);185185+ ret = PTR_ERR(port->phy);186186+ goto put_child;189187 }190188191189 phy_set_drvdata(port->phy, port);···202198 dev_info(dev, "registered %d port(s)\n", count);203199204200 return 0;201201+put_child:202202+ of_node_put(child);203203+ return ret;205204}206205207206static struct platform_driver brcm_sata_phy_driver = {
+15-6
drivers/phy/phy-core.c
···636636 * @np: node containing the phy637637 * @index: index of the phy638638 *639639- * Gets the phy using _of_phy_get(), and associates a device with it using640640- * devres. On driver detach, release function is invoked on the devres data,639639+ * Gets the phy using _of_phy_get(), then gets a refcount to it,640640+ * and associates a device with it using devres. On driver detach,641641+ * release function is invoked on the devres data,641642 * then, devres data is freed.642643 *643644 */···652651 return ERR_PTR(-ENOMEM);653652654653 phy = _of_phy_get(np, index);655655- if (!IS_ERR(phy)) {656656- *ptr = phy;657657- devres_add(dev, ptr);658658- } else {654654+ if (IS_ERR(phy)) {659655 devres_free(ptr);656656+ return phy;660657 }658658+659659+ if (!try_module_get(phy->ops->owner)) {660660+ devres_free(ptr);661661+ return ERR_PTR(-EPROBE_DEFER);662662+ }663663+664664+ get_device(&phy->dev);665665+666666+ *ptr = phy;667667+ devres_add(dev, ptr);661668662669 return phy;663670}
+11-5
drivers/phy/phy-miphy28lp.c
···1226122612271227 miphy_phy = devm_kzalloc(&pdev->dev, sizeof(*miphy_phy),12281228 GFP_KERNEL);12291229- if (!miphy_phy)12301230- return -ENOMEM;12291229+ if (!miphy_phy) {12301230+ ret = -ENOMEM;12311231+ goto put_child;12321232+ }1231123312321234 miphy_dev->phys[port] = miphy_phy;1233123512341236 phy = devm_phy_create(&pdev->dev, child, &miphy28lp_ops);12351237 if (IS_ERR(phy)) {12361238 dev_err(&pdev->dev, "failed to create PHY\n");12371237- return PTR_ERR(phy);12391239+ ret = PTR_ERR(phy);12401240+ goto put_child;12381241 }1239124212401243 miphy_dev->phys[port]->phy = phy;···1245124212461243 ret = miphy28lp_of_probe(child, miphy_phy);12471244 if (ret)12481248- return ret;12451245+ goto put_child;1249124612501247 ret = miphy28lp_probe_resets(child, miphy_dev->phys[port]);12511248 if (ret)12521252- return ret;12491249+ goto put_child;1253125012541251 phy_set_drvdata(phy, miphy_dev->phys[port]);12551252 port++;···1258125512591256 provider = devm_of_phy_provider_register(&pdev->dev, miphy28lp_xlate);12601257 return PTR_ERR_OR_ZERO(provider);12581258+put_child:12591259+ of_node_put(child);12601260+ return ret;12611261}1262126212631263static const struct of_device_id miphy28lp_of_match[] = {
+11-5
drivers/phy/phy-miphy365x.c
···566566567567 miphy_phy = devm_kzalloc(&pdev->dev, sizeof(*miphy_phy),568568 GFP_KERNEL);569569- if (!miphy_phy)570570- return -ENOMEM;569569+ if (!miphy_phy) {570570+ ret = -ENOMEM;571571+ goto put_child;572572+ }571573572574 miphy_dev->phys[port] = miphy_phy;573575574576 phy = devm_phy_create(&pdev->dev, child, &miphy365x_ops);575577 if (IS_ERR(phy)) {576578 dev_err(&pdev->dev, "failed to create PHY\n");577577- return PTR_ERR(phy);579579+ ret = PTR_ERR(phy);580580+ goto put_child;578581 }579582580583 miphy_dev->phys[port]->phy = phy;581584582585 ret = miphy365x_of_probe(child, miphy_phy);583586 if (ret)584584- return ret;587587+ goto put_child;585588586589 phy_set_drvdata(phy, miphy_dev->phys[port]);587590···594591 &miphy_phy->ctrlreg);595592 if (ret) {596593 dev_err(&pdev->dev, "No sysconfig offset found\n");597597- return ret;594594+ goto put_child;598595 }599596 }600597601598 provider = devm_of_phy_provider_register(&pdev->dev, miphy365x_xlate);602599 return PTR_ERR_OR_ZERO(provider);600600+put_child:601601+ of_node_put(child);602602+ return ret;603603}604604605605static const struct of_device_id miphy365x_of_match[] = {
···5555 * ACPI).5656 * @ie_offset: Register offset of GPI_IE from @regs.5757 * @pin_base: Starting pin of pins in this community5858+ * @gpp_size: Maximum number of pads in each group, such as PADCFGLOCK,5959+ * HOSTSW_OWN, GPI_IS, GPI_IE, etc.5860 * @npins: Number of pins in this community5961 * @regs: Community specific common registers (reserved for core driver)6062 * @pad_regs: Community specific pad registers (reserved for core driver)···7068 unsigned hostown_offset;7169 unsigned ie_offset;7270 unsigned pin_base;7171+ unsigned gpp_size;7372 size_t npins;7473 void __iomem *regs;7574 void __iomem *pad_regs;
···5656 int irq;5757};58585959+/*6060+ * The Rockchip calendar used by the RK808 counts November with 31 days. We use6161+ * these translation functions to convert its dates to/from the Gregorian6262+ * calendar used by the rest of the world. We arbitrarily define Jan 1st, 20166363+ * as the day when both calendars were in sync, and treat all other dates6464+ * relative to that.6565+ * NOTE: Other system software (e.g. firmware) that reads the same hardware must6666+ * implement this exact same conversion algorithm, with the same anchor date.6767+ */6868+static time64_t nov2dec_transitions(struct rtc_time *tm)6969+{7070+ return (tm->tm_year + 1900) - 2016 + (tm->tm_mon + 1 > 11 ? 1 : 0);7171+}7272+7373+static void rockchip_to_gregorian(struct rtc_time *tm)7474+{7575+ /* If it's Nov 31st, rtc_tm_to_time64() will count that like Dec 1st */7676+ time64_t time = rtc_tm_to_time64(tm);7777+ rtc_time64_to_tm(time + nov2dec_transitions(tm) * 86400, tm);7878+}7979+8080+static void gregorian_to_rockchip(struct rtc_time *tm)8181+{8282+ time64_t extra_days = nov2dec_transitions(tm);8383+ time64_t time = rtc_tm_to_time64(tm);8484+ rtc_time64_to_tm(time - extra_days * 86400, tm);8585+8686+ /* Compensate if we went back over Nov 31st (will work up to 2381) */8787+ if (nov2dec_transitions(tm) < extra_days) {8888+ if (tm->tm_mon + 1 == 11)8989+ tm->tm_mday++; /* This may result in 31! 
*/9090+ else9191+ rtc_time64_to_tm(time - (extra_days - 1) * 86400, tm);9292+ }9393+}9494+5995/* Read current time and date in RTC */6096static int rk808_rtc_readtime(struct device *dev, struct rtc_time *tm)6197{···137101 tm->tm_mon = (bcd2bin(rtc_data[4] & MONTHS_REG_MSK)) - 1;138102 tm->tm_year = (bcd2bin(rtc_data[5] & YEARS_REG_MSK)) + 100;139103 tm->tm_wday = bcd2bin(rtc_data[6] & WEEKS_REG_MSK);104104+ rockchip_to_gregorian(tm);140105 dev_dbg(dev, "RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",141106 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,142142- tm->tm_wday, tm->tm_hour , tm->tm_min, tm->tm_sec);107107+ tm->tm_wday, tm->tm_hour, tm->tm_min, tm->tm_sec);143108144109 return ret;145110}···153116 u8 rtc_data[NUM_TIME_REGS];154117 int ret;155118119119+ dev_dbg(dev, "set RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",120120+ 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,121121+ tm->tm_wday, tm->tm_hour, tm->tm_min, tm->tm_sec);122122+ gregorian_to_rockchip(tm);156123 rtc_data[0] = bin2bcd(tm->tm_sec);157124 rtc_data[1] = bin2bcd(tm->tm_min);158125 rtc_data[2] = bin2bcd(tm->tm_hour);···164123 rtc_data[4] = bin2bcd(tm->tm_mon + 1);165124 rtc_data[5] = bin2bcd(tm->tm_year - 100);166125 rtc_data[6] = bin2bcd(tm->tm_wday);167167- dev_dbg(dev, "set RTC date/time %4d-%02d-%02d(%d) %02d:%02d:%02d\n",168168- 1900 + tm->tm_year, tm->tm_mon + 1, tm->tm_mday,169169- tm->tm_wday, tm->tm_hour , tm->tm_min, tm->tm_sec);170126171127 /* Stop RTC while updating the RTC registers */172128 ret = regmap_update_bits(rk808->regmap, RK808_RTC_CTRL_REG,···208170 alrm->time.tm_mday = bcd2bin(alrm_data[3] & DAYS_REG_MSK);209171 alrm->time.tm_mon = (bcd2bin(alrm_data[4] & MONTHS_REG_MSK)) - 1;210172 alrm->time.tm_year = (bcd2bin(alrm_data[5] & YEARS_REG_MSK)) + 100;173173+ rockchip_to_gregorian(&alrm->time);211174212175 ret = regmap_read(rk808->regmap, RK808_RTC_INT_REG, &int_reg);213176 if (ret) {···266227 alrm->time.tm_mday, alrm->time.tm_wday, alrm->time.tm_hour,267228 
alrm->time.tm_min, alrm->time.tm_sec);268229230230+ gregorian_to_rockchip(&alrm->time);269231 alrm_data[0] = bin2bcd(alrm->time.tm_sec);270232 alrm_data[1] = bin2bcd(alrm->time.tm_min);271233 alrm_data[2] = bin2bcd(alrm->time.tm_hour);
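The conversion helpers above hinge on counting Nov-to-Dec transitions since the Jan 1st 2016 anchor; each transition contributes one extra Rockchip day (Nov 31st) to add or subtract. Here is the counter in isolation, taking a plain Gregorian year and 1-based month instead of the offset-encoded struct rtc_time fields:

```c
#include <assert.h>

/*
 * Number of Nov 31sts (extra Rockchip calendar days) between the
 * Jan 1st 2016 anchor and the given date; month is 1..12.
 * Mirrors nov2dec_transitions() above without struct rtc_time.
 */
static long nov2dec_transitions(int year, int month)
{
	return (year - 2016) + (month > 11 ? 1 : 0);
}
```

Note the count goes negative for pre-2016 dates, matching the kernel helper's behaviour of treating all dates relative to the anchor.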
+10-10
drivers/scsi/scsi_pm.c
···219219 struct scsi_device *sdev = to_scsi_device(dev);220220 int err = 0;221221222222- if (pm && pm->runtime_suspend) {223223- err = blk_pre_runtime_suspend(sdev->request_queue);224224- if (err)225225- return err;222222+ err = blk_pre_runtime_suspend(sdev->request_queue);223223+ if (err)224224+ return err;225225+ if (pm && pm->runtime_suspend)226226 err = pm->runtime_suspend(dev);227227- blk_post_runtime_suspend(sdev->request_queue, err);228228- }227227+ blk_post_runtime_suspend(sdev->request_queue, err);228228+229229 return err;230230}231231···248248 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;249249 int err = 0;250250251251- if (pm && pm->runtime_resume) {252252- blk_pre_runtime_resume(sdev->request_queue);251251+ blk_pre_runtime_resume(sdev->request_queue);252252+ if (pm && pm->runtime_resume)253253 err = pm->runtime_resume(dev);254254- blk_post_runtime_resume(sdev->request_queue, err);255255- }254254+ blk_post_runtime_resume(sdev->request_queue, err);255255+256256 return err;257257}258258
+28-2
drivers/scsi/ses.c
···8484static int ses_recv_diag(struct scsi_device *sdev, int page_code,8585			 void *buf, int bufflen)8686{8787+	int ret;8788	unsigned char cmd[] = {8889		RECEIVE_DIAGNOSTIC,8990		1,		/* Set PCV bit */···9392		bufflen & 0xff,9493		09594	};9595+	unsigned char recv_page_code;96969797-	return scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,9797+	ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,9898				NULL, SES_TIMEOUT, SES_RETRIES, NULL);9999+	if (unlikely(ret))100100+		return ret;101101+102102+	recv_page_code = ((unsigned char *)buf)[0];103103+104104+	if (likely(recv_page_code == page_code))105105+		return ret;106106+107107+	/* Successful diagnostic but wrong page code. This happens with some108108+	 * USB devices; just print a message and pretend there was an error */109109+110110+	sdev_printk(KERN_ERR, sdev,111111+		    "Wrong diagnostic page; asked for %d got %u\n",112112+		    page_code, recv_page_code);113113+114114+	return -EINVAL;99115}100116101117static int ses_send_diag(struct scsi_device *sdev, int page_code,···559541		if (desc_ptr)560542			desc_ptr += len;561543562562-		if (addl_desc_ptr)544544+		if (addl_desc_ptr &&545545+		    /* only find additional descriptions for specific devices */546546+		    (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||547547+		     type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE ||548548+		     type_ptr[0] == ENCLOSURE_COMPONENT_SAS_EXPANDER ||549549+		     /* these elements are optional */550550+		     type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||551551+		     type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||552552+		     type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))563553			addl_desc_ptr += addl_desc_ptr[1] + 2;564554565555	}
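The intent of the validation added to ses_recv_diag() reduces to three cases: a transport error passes through unchanged, a matching first byte (the returned page code) is success, and anything else is squashed to -EINVAL. A sketch of that decision with the SCSI plumbing stripped out (the helper name is invented):

```c
#include <assert.h>
#include <errno.h>

/*
 * Model of the page-code check: 'result' stands in for the
 * scsi_execute_req() return value, buf[0] is the received page code.
 */
static int check_recv_page(int result, const unsigned char *buf, int want)
{
	if (result)
		return result;		/* transport error wins */
	if (buf[0] == want)
		return 0;		/* correct diagnostic page */
	return -EINVAL;			/* wrong page: pretend it failed */
}
```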
···10351035 unsigned delay;1036103610371037 /* Continue a partial initialization */10381038- if (type == HUB_INIT2)10391039- goto init2;10401040- if (type == HUB_INIT3)10381038+ if (type == HUB_INIT2 || type == HUB_INIT3) {10391039+ device_lock(hub->intfdev);10401040+10411041+ /* Was the hub disconnected while we were waiting? */10421042+ if (hub->disconnected) {10431043+ device_unlock(hub->intfdev);10441044+ kref_put(&hub->kref, hub_release);10451045+ return;10461046+ }10471047+ if (type == HUB_INIT2)10481048+ goto init2;10411049 goto init3;10501050+ }10511051+ kref_get(&hub->kref);1042105210431053 /* The superspeed hub except for root hub has to use Hub Depth10441054 * value as an offset into the route string to locate the bits···12461236 queue_delayed_work(system_power_efficient_wq,12471237 &hub->init_work,12481238 msecs_to_jiffies(delay));12391239+ device_unlock(hub->intfdev);12491240 return; /* Continues at init3: below */12501241 } else {12511242 msleep(delay);···12681257 /* Allow autosuspend if it was suppressed */12691258 if (type <= HUB_INIT3)12701259 usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));12601260+12611261+ if (type == HUB_INIT2 || type == HUB_INIT3)12621262+ device_unlock(hub->intfdev);12631263+12641264+ kref_put(&hub->kref, hub_release);12711265}1272126612731267/* Implement the continuations for the delays above */
+2-1
drivers/usb/serial/ipaq.c
···531531 * through. Since this has a reasonably high failure rate, we retry532532 * several times.533533 */534534- while (retries--) {534534+ while (retries) {535535+ retries--;535536 result = usb_control_msg(serial->dev,536537 usb_sndctrlpipe(serial->dev, 0), 0x22, 0x21,537538 0x1, 0, NULL, 0, 100);
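The point of the one-line ipaq change: `while (retries--)` leaves the counter at -1 once all attempts are used, so any later `if (!retries)` "did we give up?" test never fires; the rewritten loop exits with the counter at exactly 0. A minimal reproduction:

```c
#include <assert.h>

/* Post-decrement form: counter underflows to -1 on exhaustion. */
static int exhaust_postdec(int retries)
{
	while (retries--)
		;		/* every attempt fails */
	return retries;
}

/* Fixed form from the patch: counter stops at exactly 0. */
static int exhaust_fixed(int retries)
{
	while (retries) {
		retries--;
		;		/* every attempt fails */
	}
	return retries;
}
```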
+12-1
drivers/video/fbdev/fsl-diu-fb.c
···479479		port = FSL_DIU_PORT_DLVDS;480480	}481481482482-	return diu_ops.valid_monitor_port(port);482482+	if (diu_ops.valid_monitor_port)483483+		port = diu_ops.valid_monitor_port(port);484484+485485+	return port;483486}484487485488/*···19181915#else19191916	monitor_port = fsl_diu_name_to_port(monitor_string);19201917#endif19181918+19191919+	/*19201920+	 * We must verify set_pixel_clock. If it is not implemented on the19211921+	 * platform, there is no platform support for the DIU.19221922+	 */19231923+	if (!diu_ops.set_pixel_clock)19241924+		return -ENODEV;19251925+19211926	pr_info("Freescale Display Interface Unit (DIU) framebuffer driver\n");1922192719231928#ifdef CONFIG_NOT_COHERENT_CACHE
···1048010480 * until transaction commit to do the actual discard.1048110481 */1048210482 if (trimming) {1048310483- WARN_ON(!list_empty(&block_group->bg_list));1048410484- spin_lock(&trans->transaction->deleted_bgs_lock);1048310483+ spin_lock(&fs_info->unused_bgs_lock);1048410484+ /*1048510485+ * A concurrent scrub might have added us to the list1048610486+ * fs_info->unused_bgs, so use a list_move operation1048710487+ * to add the block group to the deleted_bgs list.1048810488+ */1048510489 list_move(&block_group->bg_list,1048610490 &trans->transaction->deleted_bgs);1048710487- spin_unlock(&trans->transaction->deleted_bgs_lock);1049110491+ spin_unlock(&fs_info->unused_bgs_lock);1048810492 btrfs_get_block_group(block_group);1048910493 }1049010494end_trans:
+14-4
fs/btrfs/file.c
···12911291 * on error we return an unlocked page and the error value12921292 * on success we return a locked page and 012931293 */12941294-static int prepare_uptodate_page(struct page *page, u64 pos,12941294+static int prepare_uptodate_page(struct inode *inode,12951295+ struct page *page, u64 pos,12951296 bool force_uptodate)12961297{12971298 int ret = 0;···13061305 if (!PageUptodate(page)) {13071306 unlock_page(page);13081307 return -EIO;13081308+ }13091309+ if (page->mapping != inode->i_mapping) {13101310+ unlock_page(page);13111311+ return -EAGAIN;13091312 }13101313 }13111314 return 0;···13291324 int faili;1330132513311326 for (i = 0; i < num_pages; i++) {13271327+again:13321328 pages[i] = find_or_create_page(inode->i_mapping, index + i,13331329 mask | __GFP_WRITE);13341330 if (!pages[i]) {···13391333 }1340133413411335 if (i == 0)13421342- err = prepare_uptodate_page(pages[i], pos,13361336+ err = prepare_uptodate_page(inode, pages[i], pos,13431337 force_uptodate);13441344- if (i == num_pages - 1)13451345- err = prepare_uptodate_page(pages[i],13381338+ if (!err && i == num_pages - 1)13391339+ err = prepare_uptodate_page(inode, pages[i],13461340 pos + write_bytes, false);13471341 if (err) {13481342 page_cache_release(pages[i]);13431343+ if (err == -EAGAIN) {13441344+ err = 0;13451345+ goto again;13461346+ }13491347 faili = i - 1;13501348 goto fail;13511349 }
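The btrfs hunk above turns "page stolen from under us" from a hard failure into a retry: prepare_uptodate_page() now reports -EAGAIN when page->mapping changed, and the caller re-finds the page. The retry shape modelled standalone; the names and the staleness counter are invented for illustration:

```c
#include <assert.h>

#define MODEL_EAGAIN 11		/* stand-in for -EAGAIN */

/* Preparation fails while the modelled page is still stale. */
static int prepare_model(int *stale_rounds)
{
	if (*stale_rounds > 0) {
		(*stale_rounds)--;
		return -MODEL_EAGAIN;	/* mapping changed under us */
	}
	return 0;			/* page is usable now */
}

/* Caller loop: retry on -EAGAIN instead of failing the write. */
static int lock_page_model(int stale_rounds, int *attempts)
{
	int err;

	*attempts = 0;
again:
	(*attempts)++;
	err = prepare_model(&stale_rounds);
	if (err == -MODEL_EAGAIN)
		goto again;		/* re-find the page, as the patch does */
	return err;
}
```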
+6-4
fs/btrfs/free-space-cache.c
···891891 spin_unlock(&block_group->lock);892892 ret = 0;893893894894- btrfs_warn(fs_info, "failed to load free space cache for block group %llu, rebuild it now",894894+ btrfs_warn(fs_info, "failed to load free space cache for block group %llu, rebuilding it now",895895 block_group->key.objectid);896896 }897897···29722972 u64 cont1_bytes, u64 min_bytes)29732973{29742974 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;29752975- struct btrfs_free_space *entry;29752975+ struct btrfs_free_space *entry = NULL;29762976 int ret = -ENOSPC;29772977 u64 bitmap_offset = offset_to_bitmap(ctl, offset);29782978···29832983 * The bitmap that covers offset won't be in the list unless offset29842984 * is just its start offset.29852985 */29862986- entry = list_first_entry(bitmaps, struct btrfs_free_space, list);29872987- if (entry->offset != bitmap_offset) {29862986+ if (!list_empty(bitmaps))29872987+ entry = list_first_entry(bitmaps, struct btrfs_free_space, list);29882988+29892989+ if (!entry || entry->offset != bitmap_offset) {29882990 entry = tree_search_offset(ctl, bitmap_offset, 1, 0);29892991 if (entry && list_empty(&entry->list))29902992 list_add(&entry->list, bitmaps);
···2929/* A few generic types ... taken from ses-2 */3030enum enclosure_component_type {3131 ENCLOSURE_COMPONENT_DEVICE = 0x01,3232+ ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS = 0x07,3333+ ENCLOSURE_COMPONENT_SCSI_TARGET_PORT = 0x14,3434+ ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT = 0x15,3235 ENCLOSURE_COMPONENT_ARRAY_DEVICE = 0x17,3636+ ENCLOSURE_COMPONENT_SAS_EXPANDER = 0x18,3337};34383539/* ses-2 common element status */
···322322 }323323}324324325325+/**326326+ * dst_hold_safe - Take a reference on a dst if possible327327+ * @dst: pointer to dst entry328328+ *329329+ * This helper returns false if it could not safely330330+ * take a reference on a dst.331331+ */332332+static inline bool dst_hold_safe(struct dst_entry *dst)333333+{334334+ if (dst->flags & DST_NOCACHE)335335+ return atomic_inc_not_zero(&dst->__refcnt);336336+ dst_hold(dst);337337+ return true;338338+}339339+340340+/**341341+ * skb_dst_force_safe - makes sure skb dst is refcounted342342+ * @skb: buffer343343+ *344344+ * If dst is not yet refcounted and not destroyed, grab a ref on it.345345+ */346346+static inline void skb_dst_force_safe(struct sk_buff *skb)347347+{348348+ if (skb_dst_is_noref(skb)) {349349+ struct dst_entry *dst = skb_dst(skb);350350+351351+ if (!dst_hold_safe(dst))352352+ dst = NULL;353353+354354+ skb->_skb_refdst = (unsigned long)dst;355355+ }356356+}357357+325358326359/**327360 * __skb_tunnel_rx - prepare skb for rx reinsert
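dst_hold_safe() leans on atomic_inc_not_zero(): the reference is taken only if the count is still nonzero, so a DST_NOCACHE entry whose refcount has already dropped to zero is never resurrected. The primitive expressed with C11 atomics (a sketch, not the kernel's implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Increment *refcnt unless it is zero; true means we hold a reference. */
static bool inc_not_zero(atomic_int *refcnt)
{
	int v = atomic_load(refcnt);

	while (v != 0) {
		/* On failure the CAS reloads v with the current count. */
		if (atomic_compare_exchange_weak(refcnt, &v, v + 1))
			return true;
	}
	return false;	/* object is dying: caller must not touch it */
}
```

This is why skb_dst_force_safe() can clear skb->_skb_refdst instead of holding a dst that is mid-teardown.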
+23-4
include/net/inet_sock.h
···210210#define IP_CMSG_ORIGDSTADDR BIT(6)211211#define IP_CMSG_CHECKSUM BIT(7)212212213213-/* SYNACK messages might be attached to request sockets.213213+/**214214+ * sk_to_full_sk - Access to a full socket215215+ * @sk: pointer to a socket216216+ *217217+ * SYNACK messages might be attached to request sockets.214218 * Some places want to reach the listener in this case.215219 */216216-static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)220220+static inline struct sock *sk_to_full_sk(struct sock *sk)217221{218218- struct sock *sk = skb->sk;219219-222222+#ifdef CONFIG_INET220223 if (sk && sk->sk_state == TCP_NEW_SYN_RECV)221224 sk = inet_reqsk(sk)->rsk_listener;225225+#endif222226 return sk;227227+}228228+229229+/* sk_to_full_sk() variant with a const argument */230230+static inline const struct sock *sk_const_to_full_sk(const struct sock *sk)231231+{232232+#ifdef CONFIG_INET233233+ if (sk && sk->sk_state == TCP_NEW_SYN_RECV)234234+ sk = ((const struct request_sock *)sk)->rsk_listener;235235+#endif236236+ return sk;237237+}238238+239239+static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)240240+{241241+ return sk_to_full_sk(skb->sk);223242}224243225244static inline struct inet_sock *inet_sk(const struct sock *sk)
···14931493 * : SACK's are not delayed (see Section 6).14941494 */14951495 __u8 sack_needed:1, /* Do we need to sack the peer? */14961496- sack_generation:1;14961496+ sack_generation:1,14971497+ zero_window_announced:1;14971498 __u32 sack_cnt;1498149914991500 __u32 adaptation_ind; /* Adaptation Code point. */
+5-2
include/net/sock.h
···388388 struct socket_wq *sk_wq_raw;389389 };390390#ifdef CONFIG_XFRM391391- struct xfrm_policy *sk_policy[2];391391+ struct xfrm_policy __rcu *sk_policy[2];392392#endif393393 struct dst_entry *sk_rx_dst;394394 struct dst_entry __rcu *sk_dst_cache;···404404 sk_userlocks : 4,405405 sk_protocol : 8,406406 sk_type : 16;407407+#define SK_PROTOCOL_MAX U8_MAX407408 kmemcheck_bitfield_end(flags);408409 int sk_wmem_queued;409410 gfp_t sk_allocation;···741740 SOCK_SELECT_ERR_QUEUE, /* Wake select on error queue */742741};743742743743+#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))744744+744745static inline void sock_copy_flags(struct sock *nsk, struct sock *osk)745746{746747 nsk->sk_flags = osk->sk_flags;···817814static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)818815{819816 /* dont let skb dst not refcounted, we are going to leave rcu lock */820820- skb_dst_force(skb);817817+ skb_dst_force_safe(skb);821818822819 if (!sk->sk_backlog.tail)823820 sk->sk_backlog.head = skb;
···628628 * @OVS_CT_ATTR_MARK: u32 value followed by u32 mask. For each bit set in the629629 * mask, the corresponding bit in the value is copied to the connection630630 * tracking mark field in the connection.631631- * @OVS_CT_ATTR_LABEL: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN631631+ * @OVS_CT_ATTR_LABELS: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN632632 * mask. For each bit set in the mask, the corresponding bit in the value is633633 * copied to the connection tracking label field in the connection.634634 * @OVS_CT_ATTR_HELPER: variable length string defining conntrack ALG.
+14
include/xen/interface/io/ring.h
···181181#define RING_GET_REQUEST(_r, _idx) \182182 (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))183183184184+/*185185+ * Get a local copy of a request.186186+ *187187+ * Use this in preference to RING_GET_REQUEST() so all processing is188188+ * done on a local copy that cannot be modified by the other end.189189+ *190190+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this191191+ * to be ineffective where _req is a struct which consists of only bitfields.192192+ */193193+#define RING_COPY_REQUEST(_r, _idx, _req) do { \194194+ /* Use volatile to force the copy into _req. */ \195195+ *(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx); \196196+} while (0)197197+184198#define RING_GET_RESPONSE(_r, _idx) \185199 (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))186200
+5-3
kernel/locking/osq_lock.c
···9393 node->cpu = curr;94949595 /*9696- * ACQUIRE semantics, pairs with corresponding RELEASE9797- * in unlock() uncontended, or fastpath.9696+ * We need both ACQUIRE (pairs with corresponding RELEASE in9797+ * unlock() uncontended, or fastpath) and RELEASE (to publish9898+ * the node fields we just initialised) semantics when updating9999+ * the lock tail.98100 */9999- old = atomic_xchg_acquire(&lock->tail, curr);101101+ old = atomic_xchg(&lock->tail, curr);100102 if (old == OSQ_UNLOCKED_VAL)101103 return true;102104
···389389 return false;390390}391391392392-int rhashtable_insert_rehash(struct rhashtable *ht)392392+int rhashtable_insert_rehash(struct rhashtable *ht,393393+ struct bucket_table *tbl)393394{394395 struct bucket_table *old_tbl;395396 struct bucket_table *new_tbl;396396- struct bucket_table *tbl;397397 unsigned int size;398398 int err;399399400400 old_tbl = rht_dereference_rcu(ht->tbl, ht);401401- tbl = rhashtable_last_table(ht, old_tbl);402401403402 size = tbl->size;403403+404404+ err = -EBUSY;404405405406 if (rht_grow_above_75(ht, tbl))406407 size *= 2;407408 /* Do not schedule more than one rehash */408409 else if (old_tbl != tbl)409409- return -EBUSY;410410+ goto fail;411411+412412+ err = -ENOMEM;410413411414 new_tbl = bucket_table_alloc(ht, size, GFP_ATOMIC);412412- if (new_tbl == NULL) {413413- /* Schedule async resize/rehash to try allocation414414- * non-atomic context.415415- */416416- schedule_work(&ht->run_work);417417- return -ENOMEM;418418- }415415+ if (new_tbl == NULL)416416+ goto fail;419417420418 err = rhashtable_rehash_attach(ht, tbl, new_tbl);421419 if (err) {···424426 schedule_work(&ht->run_work);425427426428 return err;429429+430430+fail:431431+ /* Do not fail the insert if someone else did a rehash. */432432+ if (likely(rcu_dereference_raw(tbl->future_tbl)))433433+ return 0;434434+435435+ /* Schedule async rehash to retry allocation in process context. 
*/436436+ if (err == -ENOMEM)437437+ schedule_work(&ht->run_work);438438+439439+ return err;427440}428441EXPORT_SYMBOL_GPL(rhashtable_insert_rehash);429442430430-int rhashtable_insert_slow(struct rhashtable *ht, const void *key,431431- struct rhash_head *obj,432432- struct bucket_table *tbl)443443+struct bucket_table *rhashtable_insert_slow(struct rhashtable *ht,444444+ const void *key,445445+ struct rhash_head *obj,446446+ struct bucket_table *tbl)433447{434448 struct rhash_head *head;435449 unsigned int hash;···477467exit:478468 spin_unlock(rht_bucket_lock(tbl, hash));479469480480- return err;470470+ if (err == 0)471471+ return NULL;472472+ else if (err == -EAGAIN)473473+ return tbl;474474+ else475475+ return ERR_PTR(err);481476}482477EXPORT_SYMBOL_GPL(rhashtable_insert_slow);483478···518503 if (!iter->walker)519504 return -ENOMEM;520505521521- mutex_lock(&ht->mutex);506506+ spin_lock(&ht->lock);522507 iter->walker->tbl = rht_dereference(ht->tbl, ht);523508 list_add(&iter->walker->list, &iter->walker->tbl->walkers);524524- mutex_unlock(&ht->mutex);509509+ spin_unlock(&ht->lock);525510526511 return 0;527512}···535520 */536521void rhashtable_walk_exit(struct rhashtable_iter *iter)537522{538538- mutex_lock(&iter->ht->mutex);523523+ spin_lock(&iter->ht->lock);539524 if (iter->walker->tbl)540525 list_del(&iter->walker->list);541541- mutex_unlock(&iter->ht->mutex);526526+ spin_unlock(&iter->ht->lock);542527 kfree(iter->walker);543528}544529EXPORT_SYMBOL_GPL(rhashtable_walk_exit);···562547{563548 struct rhashtable *ht = iter->ht;564549565565- mutex_lock(&ht->mutex);566566-567567- if (iter->walker->tbl)568568- list_del(&iter->walker->list);569569-570550 rcu_read_lock();571551572572- mutex_unlock(&ht->mutex);552552+ spin_lock(&ht->lock);553553+ if (iter->walker->tbl)554554+ list_del(&iter->walker->list);555555+ spin_unlock(&ht->lock);573556574557 if (!iter->walker->tbl) {575558 iter->walker->tbl = rht_dereference_rcu(ht->tbl, ht);···736723 if (params->nulls_base && 
params->nulls_base < (1U << RHT_BASE_SHIFT))737724 return -EINVAL;738725739739- if (params->nelem_hint)740740- size = rounded_hashtable_size(params);741741-742726 memset(ht, 0, sizeof(*ht));743727 mutex_init(&ht->mutex);744728 spin_lock_init(&ht->lock);···754744 ht->p.insecure_max_entries = ht->p.max_size * 2;755745756746 ht->p.min_size = max(ht->p.min_size, HASH_MIN_SIZE);747747+748748+ if (params->nelem_hint)749749+ size = rounded_hashtable_size(&ht->p);757750758751 /* The maximum (not average) chain length grows with the759752 * size of the hash table, at a rate of (log N)/(log log N).
+3-3
mm/zswap.c
···541541 return last;542542}543543544544+/* type and compressor must be null-terminated */544545static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)545546{546547 struct zswap_pool *pool;···549548 assert_spin_locked(&zswap_pools_lock);550549551550 list_for_each_entry_rcu(pool, &zswap_pools, list) {552552- if (strncmp(pool->tfm_name, compressor, sizeof(pool->tfm_name)))551551+ if (strcmp(pool->tfm_name, compressor))553552 continue;554554- if (strncmp(zpool_get_type(pool->zpool), type,555555- sizeof(zswap_zpool_type)))553553+ if (strcmp(zpool_get_type(pool->zpool), type))556554 continue;557555 /* if we can't get it, it's about to be destroyed */558556 if (!zswap_pool_get(pool))
···836836 u8 *orig_addr;837837 struct batadv_orig_node *orig_node = NULL;838838 int check, hdr_size = sizeof(*unicast_packet);839839+ enum batadv_subtype subtype;839840 bool is4addr;840841841842 unicast_packet = (struct batadv_unicast_packet *)skb->data;···864863 /* packet for me */865864 if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {866865 if (is4addr) {867867- batadv_dat_inc_counter(bat_priv,868868- unicast_4addr_packet->subtype);869869- orig_addr = unicast_4addr_packet->src;870870- orig_node = batadv_orig_hash_find(bat_priv, orig_addr);866866+ subtype = unicast_4addr_packet->subtype;867867+ batadv_dat_inc_counter(bat_priv, subtype);868868+869869+ /* Only payload data should be considered for speedy870870+ * join. For example, DAT also uses unicast 4addr871871+ * types, but those packets should not be considered872872+ * for speedy join, since the clients do not actually873873+ * reside at the sending originator.874874+ */875875+ if (subtype == BATADV_P_DATA) {876876+ orig_addr = unicast_4addr_packet->src;877877+ orig_node = batadv_orig_hash_find(bat_priv,878878+ orig_addr);879879+ }871880 }872881873882 if (batadv_dat_snoop_incoming_arp_request(bat_priv, skb,
+12-4
net/batman-adv/translation-table.c
···6868 unsigned short vid, const char *message,6969 bool roaming);70707171-/* returns 1 if they are the same mac addr */7171+/* returns 1 if they are the same mac addr and vid */7272static int batadv_compare_tt(const struct hlist_node *node, const void *data2)7373{7474 const void *data1 = container_of(node, struct batadv_tt_common_entry,7575 hash_entry);7676+ const struct batadv_tt_common_entry *tt1 = data1;7777+ const struct batadv_tt_common_entry *tt2 = data2;76787777- return batadv_compare_eth(data1, data2);7979+ return (tt1->vid == tt2->vid) && batadv_compare_eth(data1, data2);7880}79818082/**···14291427 }1430142814311429 /* if the client was temporary added before receiving the first14321432- * OGM announcing it, we have to clear the TEMP flag14301430+ * OGM announcing it, we have to clear the TEMP flag. Also,14311431+ * remove the previous temporary orig node and re-add it14321432+ * if required. If the orig entry changed, the new one which14331433+ * is a non-temporary entry is preferred.14331434 */14341434- common->flags &= ~BATADV_TT_CLIENT_TEMP;14351435+ if (common->flags & BATADV_TT_CLIENT_TEMP) {14361436+ batadv_tt_global_del_orig_list(tt_global_entry);14371437+ common->flags &= ~BATADV_TT_CLIENT_TEMP;14381438+ }1435143914361440 /* the change can carry possible "attribute" flags like the14371441 * TT_CLIENT_WIFI, therefore they have to be copied in the
+3
net/bluetooth/sco.c
···526526 if (!addr || addr->sa_family != AF_BLUETOOTH)527527 return -EINVAL;528528529529+ if (addr_len < sizeof(struct sockaddr_sco))530530+ return -EINVAL;531531+529532 lock_sock(sk);530533531534 if (sk->sk_state != BT_OPEN) {
···433433 }434434}435435436436-#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))437437-438436static void sock_disable_timestamp(struct sock *sk, unsigned long flags)439437{440438 if (sk->sk_flags & flags) {···872874873875 if (val & SOF_TIMESTAMPING_OPT_ID &&874876 !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) {875875- if (sk->sk_protocol == IPPROTO_TCP) {877877+ if (sk->sk_protocol == IPPROTO_TCP &&878878+ sk->sk_type == SOCK_STREAM) {876879 if (sk->sk_state != TCP_ESTABLISHED) {877880 ret = -EINVAL;878881 break;···15511552 */15521553 is_charged = sk_filter_charge(newsk, filter);1553155415541554- if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk))) {15551555+ if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {15551556 /* It is still raw copy of parent, so invalidate15561557 * destructor and make plain sk_free() */15571558 newsk->sk_destruct = NULL;
+3
net/decnet/af_decnet.c
···678678{679679 struct sock *sk;680680681681+ if (protocol < 0 || protocol > SK_PROTOCOL_MAX)682682+ return -EINVAL;683683+681684 if (!net_eq(net, &init_net))682685 return -EAFNOSUPPORT;683686
+3
net/ipv4/af_inet.c
···257257 int try_loading_module = 0;258258 int err;259259260260+ if (protocol < 0 || protocol >= IPPROTO_MAX)261261+ return -EINVAL;262262+260263 sock->state = SS_UNCONNECTED;261264262265 /* Look for the requested type/protocol pair. */
+9
net/ipv4/fib_frontend.c
···11551155static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)11561156{11571157 struct net_device *dev = netdev_notifier_info_to_dev(ptr);11581158+ struct netdev_notifier_changeupper_info *info;11581159 struct in_device *in_dev;11591160 struct net *net = dev_net(dev);11601161 unsigned int flags;···11931192 /* fall through */11941193 case NETDEV_CHANGEMTU:11951194 rt_cache_flush(net);11951195+ break;11961196+ case NETDEV_CHANGEUPPER:11971197+ info = ptr;11981198+ /* flush all routes if dev is linked to or unlinked from11991199+ * an L3 master device (e.g., VRF)12001200+ */12011201+ if (info->upper_dev && netif_is_l3_master(info->upper_dev))12021202+ fib_disable_ip(dev, NETDEV_DOWN, true);11961203 break;11971204 }11981205 return NOTIFY_DONE;
···16411641 drv_stop(local);16421642}1643164316441644+static void ieee80211_flush_completed_scan(struct ieee80211_local *local,16451645+ bool aborted)16461646+{16471647+ /* It's possible that we don't handle the scan completion in16481648+ * time during suspend, so if it's still marked as completed16491649+ * here, queue the work and flush it to clean things up.16501650+ * Instead of calling the worker function directly here, we16511651+ * really queue it to avoid potential races with other flows16521652+ * scheduling the same work.16531653+ */16541654+ if (test_bit(SCAN_COMPLETED, &local->scanning)) {16551655+ /* If coming from reconfiguration failure, abort the scan so16561656+ * we don't attempt to continue a partial HW scan - which is16571657+ * possible otherwise if (e.g.) the 2.4 GHz portion was the16581658+ * completed scan, and a 5 GHz portion is still pending.16591659+ */16601660+ if (aborted)16611661+ set_bit(SCAN_ABORTED, &local->scanning);16621662+ ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);16631663+ flush_delayed_work(&local->scan_work);16641664+ }16651665+}16661666+16441667static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)16451668{16461669 struct ieee80211_sub_if_data *sdata;···16821659 local->resuming = false;16831660 local->suspended = false;16841661 local->in_reconfig = false;16621662+16631663+ ieee80211_flush_completed_scan(local, true);1685166416861665 /* scheduled scan clearly can't be running any more, but tell16871666 * cfg80211 and clear local state···17211696 drv_assign_vif_chanctx(local, sdata, ctx);17221697 }17231698 mutex_unlock(&local->chanctx_mtx);16991699+}17001700+17011701+static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)17021702+{17031703+ struct ieee80211_local *local = sdata->local;17041704+ struct sta_info *sta;17051705+17061706+ /* add STAs back */17071707+ mutex_lock(&local->sta_mtx);17081708+ list_for_each_entry(sta, &local->sta_list, list) {17091709+ enum 
ieee80211_sta_state state;17101710+17111711+ if (!sta->uploaded || sta->sdata != sdata)17121712+ continue;17131713+17141714+ for (state = IEEE80211_STA_NOTEXIST;17151715+ state < sta->sta_state; state++)17161716+ WARN_ON(drv_sta_state(local, sta->sdata, sta, state,17171717+ state + 1));17181718+ }17191719+ mutex_unlock(&local->sta_mtx);17241720}1725172117261722int ieee80211_reconfig(struct ieee80211_local *local)···18791833 WARN_ON(drv_add_chanctx(local, ctx));18801834 mutex_unlock(&local->chanctx_mtx);1881183518821882- list_for_each_entry(sdata, &local->interfaces, list) {18831883- if (!ieee80211_sdata_running(sdata))18841884- continue;18851885- ieee80211_assign_chanctx(local, sdata);18861886- }18871887-18881836 sdata = rtnl_dereference(local->monitor_sdata);18891837 if (sdata && ieee80211_sdata_running(sdata))18901838 ieee80211_assign_chanctx(local, sdata);18911891- }18921892-18931893- /* add STAs back */18941894- mutex_lock(&local->sta_mtx);18951895- list_for_each_entry(sta, &local->sta_list, list) {18961896- enum ieee80211_sta_state state;18971897-18981898- if (!sta->uploaded)18991899- continue;19001900-19011901- /* AP-mode stations will be added later */19021902- if (sta->sdata->vif.type == NL80211_IFTYPE_AP)19031903- continue;19041904-19051905- for (state = IEEE80211_STA_NOTEXIST;19061906- state < sta->sta_state; state++)19071907- WARN_ON(drv_sta_state(local, sta->sdata, sta, state,19081908- state + 1));19091909- }19101910- mutex_unlock(&local->sta_mtx);19111911-19121912- /* reconfigure tx conf */19131913- if (hw->queues >= IEEE80211_NUM_ACS) {19141914- list_for_each_entry(sdata, &local->interfaces, list) {19151915- if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||19161916- sdata->vif.type == NL80211_IFTYPE_MONITOR ||19171917- !ieee80211_sdata_running(sdata))19181918- continue;19191919-19201920- for (i = 0; i < IEEE80211_NUM_ACS; i++)19211921- drv_conf_tx(local, sdata, i,19221922- &sdata->tx_conf[i]);19231923- }19241839 }1925184019261841 /* reconfigure 
hardware */···1895188818961889 if (!ieee80211_sdata_running(sdata))18971890 continue;18911891+18921892+ ieee80211_assign_chanctx(local, sdata);18931893+18941894+ switch (sdata->vif.type) {18951895+ case NL80211_IFTYPE_AP_VLAN:18961896+ case NL80211_IFTYPE_MONITOR:18971897+ break;18981898+ default:18991899+ ieee80211_reconfig_stations(sdata);19001900+ /* fall through */19011901+ case NL80211_IFTYPE_AP: /* AP stations are handled later */19021902+ for (i = 0; i < IEEE80211_NUM_ACS; i++)19031903+ drv_conf_tx(local, sdata, i,19041904+ &sdata->tx_conf[i]);19051905+ break;19061906+ }1898190718991908 /* common change flags for all interface types */19001909 changed = BSS_CHANGED_ERP_CTS_PROT |···20972074 mb();20982075 local->resuming = false;2099207621002100- /* It's possible that we don't handle the scan completion in21012101- * time during suspend, so if it's still marked as completed21022102- * here, queue the work and flush it to clean things up.21032103- * Instead of calling the worker function directly here, we21042104- * really queue it to avoid potential races with other flows21052105- * scheduling the same work.21062106- */21072107- if (test_bit(SCAN_COMPLETED, &local->scanning)) {21082108- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);21092109- flush_delayed_work(&local->scan_work);21102110- }20772077+ ieee80211_flush_completed_scan(local, false);2111207821122079 if (local->open_count && !reconfig_due_to_wowlan)21132080 drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_SUSPEND);
···1652165216531653 /* Set an expiration time for the cookie. */16541654 cookie->c.expiration = ktime_add(asoc->cookie_life,16551655- ktime_get());16551655+ ktime_get_real());1656165616571657 /* Copy the peer's init packet. */16581658 memcpy(&cookie->c.peer_init[0], init_chunk->chunk_hdr,···17801780 if (sock_flag(ep->base.sk, SOCK_TIMESTAMP))17811781 kt = skb_get_ktime(skb);17821782 else17831783- kt = ktime_get();17831783+ kt = ktime_get_real();1784178417851785 if (!asoc && ktime_before(bear_cookie->expiration, kt)) {17861786 /*
+2-1
net/sctp/sm_statefuns.c
···54125412 SCTP_INC_STATS(net, SCTP_MIB_T3_RTX_EXPIREDS);5413541354145414 if (asoc->overall_error_count >= asoc->max_retrans) {54155415- if (asoc->state == SCTP_STATE_SHUTDOWN_PENDING) {54155415+ if (asoc->peer.zero_window_announced &&54165416+ asoc->state == SCTP_STATE_SHUTDOWN_PENDING) {54165417 /*54175418 * We are here likely because the receiver had its rwnd54185419 * closed for a while and we have not been able to
+6-6
net/sctp/socket.c
···1952195219531953 /* Now send the (possibly) fragmented message. */19541954 list_for_each_entry(chunk, &datamsg->chunks, frag_list) {19551955- sctp_chunk_hold(chunk);19561956-19571955 /* Do accounting for the write space. */19581956 sctp_set_owner_w(chunk);19591957···19641966 * breaks.19651967 */19661968 err = sctp_primitive_SEND(net, asoc, datamsg);19691969+ sctp_datamsg_put(datamsg);19671970 /* Did the lower layer accept the chunk? */19681968- if (err) {19691969- sctp_datamsg_free(datamsg);19711971+ if (err)19701972 goto out_free;19711971- }1972197319731974 pr_debug("%s: we sent primitively\n", __func__);1974197519751975- sctp_datamsg_put(datamsg);19761976 err = msg_len;1977197719781978 if (unlikely(wait_connect)) {···71637167 newsk->sk_type = sk->sk_type;71647168 newsk->sk_bound_dev_if = sk->sk_bound_dev_if;71657169 newsk->sk_flags = sk->sk_flags;71707170+ newsk->sk_tsflags = sk->sk_tsflags;71667171 newsk->sk_no_check_tx = sk->sk_no_check_tx;71677172 newsk->sk_no_check_rx = sk->sk_no_check_rx;71687173 newsk->sk_reuse = sk->sk_reuse;···71967199 newinet->mc_ttl = 1;71977200 newinet->mc_index = 0;71987201 newinet->mc_list = NULL;72027202+72037203+ if (newsk->sk_flags & SK_FLAGS_TIMESTAMP)72047204+ net_enable_timestamp();71997205}7200720672017207static inline void sctp_copy_descendant(struct sock *sk_to,
+1
net/socket.c
···16951695 msg.msg_name = addr ? (struct sockaddr *)&address : NULL;16961696 /* We assume all kernel code knows the size of sockaddr_storage */16971697 msg.msg_namelen = 0;16981698+ msg.msg_iocb = NULL;16981699 if (sock->file->f_flags & O_NONBLOCK)16991700 flags |= MSG_DONTWAIT;17001701 err = sock_recvmsg(sock, &msg, iov_iter_count(&msg.msg_iter), flags);
+3-10
net/unix/af_unix.c
···22562256 /* Lock the socket to prevent queue disordering22572257 * while sleeps in memcpy_tomsg22582258 */22592259- err = mutex_lock_interruptible(&u->readlock);22602260- if (unlikely(err)) {22612261- /* recvmsg() in non blocking mode is supposed to return -EAGAIN22622262- * sk_rcvtimeo is not honored by mutex_lock_interruptible()22632263- */22642264- err = noblock ? -EAGAIN : -ERESTARTSYS;22652265- goto out;22662266- }22592259+ mutex_lock(&u->readlock);2267226022682261 if (flags & MSG_PEEK)22692262 skip = sk_peek_offset(sk, flags);···23002307 timeo = unix_stream_data_wait(sk, timeo, last,23012308 last_len);2302230923032303- if (signal_pending(current) ||23042304- mutex_lock_interruptible(&u->readlock)) {23102310+ if (signal_pending(current)) {23052311 err = sock_intr_errno(timeo);23062312 goto out;23072313 }2308231423152315+ mutex_lock(&u->readlock);23092316 continue;23102317unlock:23112318 unix_state_unlock(sk);
···30293029 break;30303030 default:30313031 WARN(1, "invalid initiator %d\n", lr->initiator);30323032+ kfree(rd);30323033 return -EINVAL;30333034 }30343035···32223221 /* We always try to get an update for the static regdomain */32233222 err = regulatory_hint_core(cfg80211_world_regdom->alpha2);32243223 if (err) {32253225- if (err == -ENOMEM)32243224+ if (err == -ENOMEM) {32253225+ platform_device_unregister(reg_pdev);32263226 return err;32273227+ }32273228 /*32283229 * N.B. kobject_uevent_env() can fail mainly for when we're out32293230 * memory which is handled and propagated appropriately above
+36-14
net/xfrm/xfrm_policy.c
···303303}304304EXPORT_SYMBOL(xfrm_policy_alloc);305305306306+static void xfrm_policy_destroy_rcu(struct rcu_head *head)307307+{308308+ struct xfrm_policy *policy = container_of(head, struct xfrm_policy, rcu);309309+310310+ security_xfrm_policy_free(policy->security);311311+ kfree(policy);312312+}313313+306314/* Destroy xfrm_policy: descendant resources must be released to this moment. */307315308316void xfrm_policy_destroy(struct xfrm_policy *policy)···320312 if (del_timer(&policy->timer) || del_timer(&policy->polq.hold_timer))321313 BUG();322314323323- security_xfrm_policy_free(policy->security);324324- kfree(policy);315315+ call_rcu(&policy->rcu, xfrm_policy_destroy_rcu);325316}326317EXPORT_SYMBOL(xfrm_policy_destroy);327318···12211214 struct xfrm_policy *pol;12221215 struct net *net = sock_net(sk);1223121612171217+ rcu_read_lock();12241218 read_lock_bh(&net->xfrm.xfrm_policy_lock);12251225- if ((pol = sk->sk_policy[dir]) != NULL) {12191219+ pol = rcu_dereference(sk->sk_policy[dir]);12201220+ if (pol != NULL) {12261221 bool match = xfrm_selector_match(&pol->selector, fl,12271222 sk->sk_family);12281223 int err = 0;···12481239 }12491240out:12501241 read_unlock_bh(&net->xfrm.xfrm_policy_lock);12421242+ rcu_read_unlock();12511243 return pol;12521244}12531245···13171307#endif1318130813191309 write_lock_bh(&net->xfrm.xfrm_policy_lock);13201320- old_pol = sk->sk_policy[dir];13211321- sk->sk_policy[dir] = pol;13101310+ old_pol = rcu_dereference_protected(sk->sk_policy[dir],13111311+ lockdep_is_held(&net->xfrm.xfrm_policy_lock));13221312 if (pol) {13231313 pol->curlft.add_time = get_seconds();13241314 pol->index = xfrm_gen_index(net, XFRM_POLICY_MAX+dir, 0);13251315 xfrm_sk_policy_link(pol, dir);13261316 }13171317+ rcu_assign_pointer(sk->sk_policy[dir], pol);13271318 if (old_pol) {13281319 if (pol)13291320 xfrm_policy_requeue(old_pol, pol);···13721361 return newp;13731362}1374136313751375-int __xfrm_sk_clone_policy(struct sock *sk)13641364+int 
__xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)13761365{13771377- struct xfrm_policy *p0 = sk->sk_policy[0],13781378- *p1 = sk->sk_policy[1];13661366+ const struct xfrm_policy *p;13671367+ struct xfrm_policy *np;13681368+ int i, ret = 0;1379136913801380- sk->sk_policy[0] = sk->sk_policy[1] = NULL;13811381- if (p0 && (sk->sk_policy[0] = clone_policy(p0, 0)) == NULL)13821382- return -ENOMEM;13831383- if (p1 && (sk->sk_policy[1] = clone_policy(p1, 1)) == NULL)13841384- return -ENOMEM;13851385- return 0;13701370+ rcu_read_lock();13711371+ for (i = 0; i < 2; i++) {13721372+ p = rcu_dereference(osk->sk_policy[i]);13731373+ if (p) {13741374+ np = clone_policy(p, i);13751375+ if (unlikely(!np)) {13761376+ ret = -ENOMEM;13771377+ break;13781378+ }13791379+ rcu_assign_pointer(sk->sk_policy[i], np);13801380+ }13811381+ }13821382+ rcu_read_unlock();13831383+ return ret;13861384}1387138513881386static int···22182198 xdst = NULL;22192199 route = NULL;2220220022012201+ sk = sk_const_to_full_sk(sk);22212202 if (sk && sk->sk_policy[XFRM_POLICY_OUT]) {22222203 num_pols = 1;22232204 pols[0] = xfrm_sk_policy_lookup(sk, XFRM_POLICY_OUT, fl);···24982477 }2499247825002479 pol = NULL;24802480+ sk = sk_to_full_sk(sk);25012481 if (sk && sk->sk_policy[dir]) {25022482 pol = xfrm_sk_policy_lookup(sk, dir, &fl);25032483 if (IS_ERR(pol)) {
···13541354 }13551355 }1356135613571357+ snd_usb_mixer_fu_apply_quirk(state->mixer, cval, unitid, kctl);13581358+13571359 range = (cval->max - cval->min) / cval->res;13581360 /*13591361 * Are there devices with volume range more than 255? I use a bit more