···
   <title>Codec Interface</title>

-  <note>
-    <title>Suspended</title>
-
-    <para>This interface has been be suspended from the V4L2 API
-implemented in Linux 2.6 until we have more experience with codec
-device interfaces.</para>
-  </note>
-
   <para>A V4L2 codec can compress, decompress, transform, or otherwise
-convert video data from one format into another format, in memory.
-Applications send data to be converted to the driver through a
-&func-write; call, and receive the converted data through a
-&func-read; call. For efficiency a driver may also support streaming
-I/O.</para>
+convert video data from one format into another format, in memory. Typically
+such devices are memory-to-memory devices (i.e. devices with the
+<constant>V4L2_CAP_VIDEO_M2M</constant> or <constant>V4L2_CAP_VIDEO_M2M_MPLANE</constant>
+capability set).
+</para>

-  <para>[to do]</para>
+  <para>A memory-to-memory video node acts just like a normal video node, but it
+supports both output (sending frames from memory to the codec hardware) and
+capture (receiving the processed frames from the codec hardware into memory)
+stream I/O. An application will have to setup the stream
+I/O for both sides and finally call &VIDIOC-STREAMON; for both capture and output
+to start the codec.</para>
+
+  <para>Video compression codecs use the MPEG controls to setup their codec parameters
+(note that the MPEG controls actually support many more codecs than just MPEG).
+See <xref linkend="mpeg-controls"></xref>.</para>
+
+  <para>Memory-to-memory devices can often be used as a shared resource: you can
+open the video node multiple times, each application setting up their own codec properties
+that are local to the file handle, and each can use it independently from the others.
+The driver will arbitrate access to the codec and reprogram it whenever another file
+handler gets access. This is different from the usual video node behavior where the video
+properties are global to the device (i.e. changing something through one file handle is
+visible through another file handle).</para>
+1-1
Documentation/DocBook/media/v4l/v4l2.xml
···
 </partinfo>

 <title>Video for Linux Two API Specification</title>
- <subtitle>Revision 3.9</subtitle>
+ <subtitle>Revision 3.10</subtitle>

  <chapter id="common">
    &sub-common;
···

 Required properties:

-- compatible : should be "samsung,exynos4212-fimc" for Exynos4212 and
+- compatible : should be "samsung,exynos4212-fimc-lite" for Exynos4212 and
 		  Exynos4412 SoCs;
 - reg : physical base address and size of the device memory mapped
 		  registers;
+3
Documentation/sound/alsa/HD-Audio-Models.txt
···
   alc271-dmic		Enable ALC271X digital mic workaround
   inv-dmic		Inverted internal mic workaround
   lenovo-dock		Enables docking station I/O for some Lenovos
+  dell-headset-multi	Headset jack, which can also be used as mic-in
+  dell-headset-dock	Headset jack (without mic-in), and also dock I/O

 ALC662/663/272
 ==============
···
   asus-mode7		ASUS
   asus-mode8		ASUS
   inv-dmic		Inverted internal mic workaround
+  dell-headset-multi	Headset jack, which can also be used as mic-in

 ALC680
 ======
···
 	  is not correctly implemented in PL310 as clean lines are not
 	  invalidated as a result of these operations.

+config ARM_ERRATA_643719
+	bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
+	depends on CPU_V7 && SMP
+	help
+	  This option enables the workaround for the 643719 Cortex-A9 (prior to
+	  r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
+	  register returns zero when it should return one. The workaround
+	  corrects this value, ensuring cache maintenance operations which use
+	  it behave as intended and avoiding data corruption.
+
 config ARM_ERRATA_720789
 	bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
 	depends on CPU_V7
···

 config KEXEC
 	bool "Kexec system call (EXPERIMENTAL)"
-	depends on (!SMP || HOTPLUG_CPU)
+	depends on (!SMP || PM_SLEEP_SMP)
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel. It is like a reboot
+2-1
arch/arm/boot/compressed/Makefile
···

 # Make sure files are removed during clean
 extra-y       += piggy.gzip piggy.lzo piggy.lzma piggy.xzkern \
-		 lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs)
+		 lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs) \
+		 hyp-stub.S

 ifeq ($(CONFIG_FUNCTION_TRACER),y)
 ORIG_CFLAGS := $(KBUILD_CFLAGS)
···
 	unsigned long reboot_code_buffer_phys;
 	void *reboot_code_buffer;

+	if (num_online_cpus() > 1) {
+		pr_err("kexec: error: multiple CPUs still online\n");
+		return;
+	}

 	page_list = image->head & PAGE_MASK;

+37-6
arch/arm/kernel/process.c
···

 __setup("reboot=", reboot_setup);

+/*
+ * Called by kexec, immediately prior to machine_kexec().
+ *
+ * This must completely disable all secondary CPUs; simply causing those CPUs
+ * to execute e.g. a RAM-based pin loop is not sufficient. This allows the
+ * kexec'd kernel to use any and all RAM as it sees fit, without having to
+ * avoid any code or data used by any SW CPU pin loop. The CPU hotplug
+ * functionality embodied in disable_nonboot_cpus() to achieve this.
+ */
 void machine_shutdown(void)
 {
-#ifdef CONFIG_SMP
-	smp_send_stop();
-#endif
+	disable_nonboot_cpus();
 }

+/*
+ * Halting simply requires that the secondary CPUs stop performing any
+ * activity (executing tasks, handling interrupts). smp_send_stop()
+ * achieves this.
+ */
 void machine_halt(void)
 {
-	machine_shutdown();
+	smp_send_stop();
+
 	local_irq_disable();
 	while (1);
 }

+/*
+ * Power-off simply requires that the secondary CPUs stop performing any
+ * activity (executing tasks, handling interrupts). smp_send_stop()
+ * achieves this. When the system power is turned off, it will take all CPUs
+ * with it.
+ */
 void machine_power_off(void)
 {
-	machine_shutdown();
+	smp_send_stop();
+
 	if (pm_power_off)
 		pm_power_off();
 }

+/*
+ * Restart requires that the secondary CPUs stop performing any activity
+ * while the primary CPU resets the system. Systems with a single CPU can
+ * use soft_restart() as their machine descriptor's .restart hook, since that
+ * will cause the only available CPU to reset. Systems with multiple CPUs must
+ * provide a HW restart implementation, to ensure that all CPUs reset at once.
+ * This is required so that any code running after reset on the primary CPU
+ * doesn't have to co-ordinate with other CPUs to ensure they aren't still
+ * executing pre-reset code, and using RAM that the primary CPU's code wishes
+ * to use. Implementing such co-ordination would be essentially impossible.
+ */
 void machine_restart(char *cmd)
 {
-	machine_shutdown();
+	smp_send_stop();

 	arm_pm_restart(reboot_mode, cmd);

-13
arch/arm/kernel/smp.c
···
 	smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
 }

-#ifdef CONFIG_HOTPLUG_CPU
-static void smp_kill_cpus(cpumask_t *mask)
-{
-	unsigned int cpu;
-	for_each_cpu(cpu, mask)
-		platform_cpu_kill(cpu);
-}
-#else
-static void smp_kill_cpus(cpumask_t *mask) { }
-#endif
-
 void smp_send_stop(void)
 {
 	unsigned long timeout;
···

 	if (num_online_cpus() > 1)
 		pr_warning("SMP: failed to stop secondary CPUs\n");
-
-	smp_kill_cpus(&mask);
 }

 /*
+8
arch/arm/mm/cache-v7.S
···
 	mrc	p15, 1, r0, c0, c0, 1		@ read clidr, r0 = clidr
 	ALT_SMP(ands	r3, r0, #(7 << 21))	@ extract LoUIS from clidr
 	ALT_UP(ands	r3, r0, #(7 << 27))	@ extract LoUU from clidr
+#ifdef CONFIG_ARM_ERRATA_643719
+	ALT_SMP(mrceq	p15, 0, r2, c0, c0, 0)	@ read main ID register
+	ALT_UP(moveq	pc, lr)			@ LoUU is zero, so nothing to do
+	ldreq	r1, =0x410fc090			@ ID of ARM Cortex A9 r0p?
+	biceq	r2, r2, #0x0000000f		@ clear minor revision number
+	teqeq	r2, r1				@ test for errata affected core and if so...
+	orreqs	r3, #(1 << 21)			@ fix LoUIS value (and set flags state to 'ne')
+#endif
 	ALT_SMP(mov	r3, r3, lsr #20)	@ r3 = LoUIS * 2
 	ALT_UP(mov	r3, r3, lsr #26)	@ r3 = LoUU * 2
 	moveq	pc, lr				@ return if level == 0
+33
arch/arm/mm/flush.c
···
 EXPORT_SYMBOL(flush_dcache_page);

 /*
+ * Ensure cache coherency for the kernel mapping of this page. We can
+ * assume that the page is pinned via kmap.
+ *
+ * If the page only exists in the page cache and there are no user
+ * space mappings, this is a no-op since the page was already marked
+ * dirty at creation. Otherwise, we need to flush the dirty kernel
+ * cache lines directly.
+ */
+void flush_kernel_dcache_page(struct page *page)
+{
+	if (cache_is_vivt() || cache_is_vipt_aliasing()) {
+		struct address_space *mapping;
+
+		mapping = page_mapping(page);
+
+		if (!mapping || mapping_mapped(mapping)) {
+			void *addr;
+
+			addr = page_address(page);
+			/*
+			 * kmap_atomic() doesn't set the page virtual
+			 * address for highmem pages, and
+			 * kunmap_atomic() takes care of cache
+			 * flushing already.
+			 */
+			if (!IS_ENABLED(CONFIG_HIGHMEM) || addr)
+				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
+		}
+	}
+}
+EXPORT_SYMBOL(flush_kernel_dcache_page);
+
+/*
  * Flush an anonymous page so that users of get_user_pages()
  * can safely access the data. The expected sequence is:
  *
+5-3
arch/arm/mm/mmu.c
···
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 }

-static void __init map_init_section(pmd_t *pmd, unsigned long addr,
+static void __init __map_init_section(pmd_t *pmd, unsigned long addr,
 			unsigned long end, phys_addr_t phys,
 			const struct mem_type *type)
 {
+	pmd_t *p = pmd;
+
 #ifndef CONFIG_ARM_LPAE
 	/*
 	 * In classic MMU format, puds and pmds are folded in to
···
 		phys += SECTION_SIZE;
 	} while (pmd++, addr += SECTION_SIZE, addr != end);

-	flush_pmd_entry(pmd);
+	flush_pmd_entry(p);
 }

 static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
···
 	 */
 	if (type->prot_sect &&
 			((addr | next | phys) & ~SECTION_MASK) == 0) {
-		map_init_section(pmd, addr, next, phys, type);
+		__map_init_section(pmd, addr, next, phys, type);
 	} else {
 		alloc_init_pte(pmd, addr, next,
 				__phys_to_pfn(phys), type);
···
 }


+int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
+			enum pci_mmap_state mmap_state, int write_combine)
+{
+	unsigned long prot;
+
+	/*
+	 * I/O space can be accessed via normal processor loads and stores on
+	 * this platform but for now we elect not to do this and portable
+	 * drivers should not do this anyway.
+	 */
+	if (mmap_state == pci_mmap_io)
+		return -EINVAL;
+
+	if (write_combine)
+		return -EINVAL;
+
+	/*
+	 * Ignore write-combine; for now only return uncached mappings.
+	 */
+	prot = pgprot_val(vma->vm_page_prot);
+	prot |= _PAGE_NO_CACHE;
+	vma->vm_page_prot = __pgprot(prot);
+
+	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
+}
+
 /*
  * A driver is enabling the device. We make sure that all the appropriate
  * bits are set to allow the device to operate as the driver is expecting.
···
 	do {
 		pmd = pmd_offset(pud, addr);
 		next = pmd_addr_end(addr, end);
-		if (pmd_none_or_clear_bad(pmd))
+		if (!is_hugepd(pmd)) {
+			/*
+			 * if it is not hugepd pointer, we should already find
+			 * it cleared.
+			 */
+			WARN_ON(!pmd_none_or_clear_bad(pmd));
 			continue;
+		}
 #ifdef CONFIG_PPC_FSL_BOOK3E
 		/*
 		 * Increment next by the size of the huge mapping since
···
-#ifndef __ASM_LINKAGE_H
-#define __ASM_LINKAGE_H
-
-/* Nothing to see here... */
-
-#endif
+2-1
arch/sparc/kernel/ds.c
···
 	unsigned long len;

 	strcpy(full_boot_str, "boot ");
-	strcpy(full_boot_str + strlen("boot "), boot_command);
+	strlcpy(full_boot_str + strlen("boot "), boot_command,
+		sizeof(full_boot_str + strlen("boot ")));
 	len = strlen(full_boot_str);

 	if (reboot_data_supported) {
+24-44
arch/sparc/kernel/leon_kernel.c
···

 unsigned long leon3_gptimer_irq; /* interrupt controller irq number */
 unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */
-int leon3_ticker_irq; /* Timer ticker IRQ */
 unsigned int sparc_leon_eirq;
 #define LEON_IMASK(cpu) (&leon3_irqctrl_regs->mask[cpu])
 #define LEON_IACK (&leon3_irqctrl_regs->iclear)
···

 	leon_clear_profile_irq(cpu);

+	if (cpu == boot_cpu_id)
+		timer_interrupt(irq, NULL);
+
 	ce = &per_cpu(sparc32_clockevent, cpu);

 	irq_enter();
···
 	int icsel;
 	int ampopts;
 	int err;
+	u32 config;

 	sparc_config.get_cycles_offset = leon_cycles_offset;
 	sparc_config.cs_period = 1000000 / HZ;
···
 	LEON3_BYPASS_STORE_PA(
 		&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 0);

-#ifdef CONFIG_SMP
-	leon3_ticker_irq = leon3_gptimer_irq + 1 + leon3_gptimer_idx;
-
-	if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) &
-	      (1<<LEON3_GPTIMER_SEPIRQ))) {
-		printk(KERN_ERR "timer not configured with separate irqs\n");
-		BUG();
-	}
-
-	LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].val,
-				0);
-	LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].rld,
-				(((1000000/HZ) - 1)));
-	LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl,
-				0);
-#endif
-
 	/*
 	 * The IRQ controller may (if implemented) consist of multiple
 	 * IRQ controllers, each mapped on a 4Kb boundary.
···
 	if (eirq != 0)
 		leon_eirq_setup(eirq);

-	irq = _leon_build_device_irq(NULL, leon3_gptimer_irq+leon3_gptimer_idx);
-	err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL);
-	if (err) {
-		printk(KERN_ERR "unable to attach timer IRQ%d\n", irq);
-		prom_halt();
-	}
-
 #ifdef CONFIG_SMP
 	{
 		unsigned long flags;
···
 	}
 #endif

+	config = LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config);
+	if (config & (1 << LEON3_GPTIMER_SEPIRQ))
+		leon3_gptimer_irq += leon3_gptimer_idx;
+	else if ((config & LEON3_GPTIMER_TIMERS) > 1)
+		pr_warn("GPTIMER uses shared irqs, using other timers of the same core will fail.\n");
+
+#ifdef CONFIG_SMP
+	/* Install per-cpu IRQ handler for broadcasted ticker */
+	irq = leon_build_device_irq(leon3_gptimer_irq, handle_percpu_irq,
+				    "per-cpu", 0);
+	err = request_irq(irq, leon_percpu_timer_ce_interrupt,
+			  IRQF_PERCPU | IRQF_TIMER, "timer", NULL);
+#else
+	irq = _leon_build_device_irq(NULL, leon3_gptimer_irq);
+	err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL);
+#endif
+	if (err) {
+		pr_err("Unable to attach timer IRQ%d\n", irq);
+		prom_halt();
+	}
 	LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl,
 			      LEON3_GPTIMER_EN |
 			      LEON3_GPTIMER_RL |
 			      LEON3_GPTIMER_LD |
 			      LEON3_GPTIMER_IRQEN);
-
-#ifdef CONFIG_SMP
-	/* Install per-cpu IRQ handler for broadcasted ticker */
-	irq = leon_build_device_irq(leon3_ticker_irq, handle_percpu_irq,
-				    "per-cpu", 0);
-	err = request_irq(irq, leon_percpu_timer_ce_interrupt,
-			  IRQF_PERCPU | IRQF_TIMER, "ticker",
-			  NULL);
-	if (err) {
-		printk(KERN_ERR "unable to attach ticker IRQ%d\n", irq);
-		prom_halt();
-	}
-
-	LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl,
-			      LEON3_GPTIMER_EN |
-			      LEON3_GPTIMER_RL |
-			      LEON3_GPTIMER_LD |
-			      LEON3_GPTIMER_IRQEN);
-#endif
 	return;
 bad:
 	printk(KERN_ERR "No Timer/irqctrl found\n");
+3-5
arch/sparc/kernel/leon_pci_grpci1.c
···

 	/* find device register base address */
 	res = platform_get_resource(ofdev, IORESOURCE_MEM, 0);
-	regs = devm_request_and_ioremap(&ofdev->dev, res);
-	if (!regs) {
-		dev_err(&ofdev->dev, "io-regs mapping failed\n");
-		return -EADDRNOTAVAIL;
-	}
+	regs = devm_ioremap_resource(&ofdev->dev, res);
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);

 	/*
 	 * check that we're in Host Slot and that we can act as a Host Bridge
+7
arch/sparc/kernel/leon_pmc.c
···
 	 * MMU does not get a TLB miss here by using the MMU BYPASS ASI.
 	 */
 	register unsigned int address = (unsigned int)leon3_irqctrl_regs;
+
+	/* Interrupts need to be enabled to not hang the CPU */
+	local_irq_enable();
+
 	__asm__ __volatile__ (
 		"wr	%%g0, %%asr19\n"
 		"lda	[%0] %1, %%g0\n"
···
 	 */
 void pmc_leon_idle(void)
 {
+	/* Interrupts need to be enabled to not hang the CPU */
+	local_irq_enable();
+
 	/* For systems without power-down, this will be no-op */
 	__asm__ __volatile__ ("wr	%g0, %asr19\n\t");
 }
···
 		return barg_buf;
 	}

-	switch(prom_vers) {
+	switch (prom_vers) {
 	case PROM_V0:
 		cp = barg_buf;
 		/* Start from 1 and go over fd(0,0,0)kernel */
-		for(iter = 1; iter < 8; iter++) {
+		for (iter = 1; iter < 8; iter++) {
 			arg = (*(romvec->pv_v0bootargs))->argv[iter];
 			if (arg == NULL)
 				break;
-			while(*arg != 0) {
+			while (*arg != 0) {
 				/* Leave place for space and null. */
-				if(cp >= barg_buf + BARG_LEN-2){
+				if (cp >= barg_buf + BARG_LEN - 2)
 					/* We might issue a warning here. */
 					break;
-				}
 				*cp++ = *arg++;
 			}
 			*cp++ = ' ';
+			if (cp >= barg_buf + BARG_LEN - 1)
+				/* We might issue a warning here. */
+				break;
 		}
 		*cp = 0;
 		break;
+8-8
arch/sparc/prom/tree_64.c
···
 	return prom_node_to_node("child", node);
 }

-inline phandle prom_getchild(phandle node)
+phandle prom_getchild(phandle node)
 {
 	phandle cnode;

···
 	return prom_node_to_node(prom_peer_name, node);
 }

-inline phandle prom_getsibling(phandle node)
+phandle prom_getsibling(phandle node)
 {
 	phandle sibnode;

···
 /* Return the length in bytes of property 'prop' at node 'node'.
  * Return -1 on error.
  */
-inline int prom_getproplen(phandle node, const char *prop)
+int prom_getproplen(phandle node, const char *prop)
 {
 	unsigned long args[6];

···
  * 'buffer' which has a size of 'bufsize'. If the acquisition
  * was successful the length will be returned, else -1 is returned.
  */
-inline int prom_getproperty(phandle node, const char *prop,
-			    char *buffer, int bufsize)
+int prom_getproperty(phandle node, const char *prop,
+		     char *buffer, int bufsize)
 {
 	unsigned long args[8];
 	int plen;
···
 /* Acquire an integer property and return its value. Returns -1
  * on failure.
  */
-inline int prom_getint(phandle node, const char *prop)
+int prom_getint(phandle node, const char *prop)
 {
 	int intprop;

···
 /* Return the first property type for node 'node'.
  * buffer should be at least 32B in length
  */
-inline char *prom_firstprop(phandle node, char *buffer)
+char *prom_firstprop(phandle node, char *buffer)
 {
 	unsigned long args[7];

···
  * at node 'node' . Returns NULL string if no more
  * property types for this node.
  */
-inline char *prom_nextprop(phandle node, const char *oprop, char *buffer)
+char *prom_nextprop(phandle node, const char *oprop, char *buffer)
 {
 	unsigned long args[7];
 	char buf[32];
···
 	/* struct user */
 	DUMP_WRITE(&dump, sizeof(dump));
 	/* Now dump all of the user data. Include malloced stuff as well */
-	DUMP_SEEK(PAGE_SIZE);
+	DUMP_SEEK(PAGE_SIZE - sizeof(dump));
 	/* now we start writing out the user space info */
 	set_fs(USER_DS);
 	/* Dump the data area */
···
 	if (!mem)
 		return;
 	hv_clock = __va(mem);
+	memset(hv_clock, 0, size);

 	if (kvm_register_clock("boot clock")) {
 		hv_clock = NULL;
-12
arch/x86/kernel/process.c
···
 }
 #endif

-void arch_cpu_idle_prepare(void)
-{
-	/*
-	 * If we're the non-boot CPU, nothing set the stack canary up
-	 * for us. CPU0 already has it initialized but no harm in
-	 * doing it again. This is a good place for updating it, as
-	 * we wont ever return from this function (so the invalid
-	 * canaries already on the stack wont ever trigger).
-	 */
-	boot_init_stack_canary();
-}
-
 void arch_cpu_idle_enter(void)
 {
 	local_touch_nmi();
···
 	if (index != XCR_XFEATURE_ENABLED_MASK)
 		return 1;
 	xcr0 = xcr;
-	if (kvm_x86_ops->get_cpl(vcpu) != 0)
-		return 1;
 	if (!(xcr0 & XSTATE_FP))
 		return 1;
 	if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE))
···

 int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
-	if (__kvm_set_xcr(vcpu, index, xcr)) {
+	if (kvm_x86_ops->get_cpl(vcpu) != 0 ||
+	    __kvm_set_xcr(vcpu, index, xcr)) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
+6-1
arch/x86/platform/efi/efi.c
···
 		 * that by attempting to use more space than is available.
 		 */
 		unsigned long dummy_size = remaining_size + 1024;
-		void *dummy = kmalloc(dummy_size, GFP_ATOMIC);
+		void *dummy = kzalloc(dummy_size, GFP_ATOMIC);
+
+		if (!dummy)
+			return EFI_OUT_OF_RESOURCES;

 		status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID,
 					  EFI_VARIABLE_NON_VOLATILE |
···
 					 EFI_VARIABLE_RUNTIME_ACCESS,
 					 0, dummy);
 		}
+
+		kfree(dummy);

 		/*
 		 * The runtime code may now have triggered a garbage collection
+15-6
drivers/acpi/acpi_lpss.c
···
 	if (dev_desc->clk_required) {
 		ret = register_device_clock(adev, pdata);
 		if (ret) {
-			/*
-			 * Skip the device, but don't terminate the namespace
-			 * scan.
-			 */
-			kfree(pdata);
-			return 0;
+			/* Skip the device, but continue the namespace scan. */
+			ret = 0;
+			goto err_out;
 		}
+	}
+
+	/*
+	 * This works around a known issue in ACPI tables where LPSS devices
+	 * have _PS0 and _PS3 without _PSC (and no power resources), so
+	 * acpi_bus_init_power() will assume that the BIOS has put them into D0.
+	 */
+	ret = acpi_device_fix_up_power(adev);
+	if (ret) {
+		/* Skip the device, but continue the namespace scan. */
+		ret = 0;
+		goto err_out;
 	}

 	adev->driver_data = pdata;
+20
drivers/acpi/device_pm.c
···
 	return 0;
 }

+/**
+ * acpi_device_fix_up_power - Force device with missing _PSC into D0.
+ * @device: Device object whose power state is to be fixed up.
+ *
+ * Devices without power resources and _PSC, but having _PS0 and _PS3 defined,
+ * are assumed to be put into D0 by the BIOS. However, in some cases that may
+ * not be the case and this function should be used then.
+ */
+int acpi_device_fix_up_power(struct acpi_device *device)
+{
+	int ret = 0;
+
+	if (!device->power.flags.power_resources
+	    && !device->power.flags.explicit_get
+	    && device->power.state == ACPI_STATE_D0)
+		ret = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0);
+
+	return ret;
+}
+
 int acpi_bus_update_power(acpi_handle handle, int *state_p)
 {
 	struct acpi_device *device;
+2
drivers/acpi/dock.c
···
 	if (!count)
 		return -EINVAL;

+	acpi_scan_lock_acquire();
 	begin_undock(dock_station);
 	ret = handle_eject_request(dock_station, ACPI_NOTIFY_EJECT_REQUEST);
+	acpi_scan_lock_release();
 	return ret ? ret: count;
 }
 static DEVICE_ATTR(undock, S_IWUSR, NULL, write_undock);
···
 }

 static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
-				     u8 triggering, u8 polarity, u8 shareable)
+				     u8 triggering, u8 polarity, u8 shareable,
+				     bool legacy)
 {
 	int irq, p, t;

···
 	 * In IO-APIC mode, use overrided attribute. Two reasons:
 	 * 1. BIOS bug in DSDT
 	 * 2. BIOS uses IO-APIC mode Interrupt Source Override
+	 *
+	 * We do this only if we are dealing with IRQ() or IRQNoFlags()
+	 * resource (the legacy ISA resources). With modern ACPI 5 devices
+	 * using extended IRQ descriptors we take the IRQ configuration
+	 * from _CRS directly.
 	 */
-	if (!acpi_get_override_irq(gsi, &t, &p)) {
+	if (legacy && !acpi_get_override_irq(gsi, &t, &p)) {
 		u8 trig = t ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE;
 		u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;

 		if (triggering != trig || polarity != pol) {
 			pr_warning("ACPI: IRQ %d override to %s, %s\n", gsi,
-				   t ? "edge" : "level", p ? "low" : "high");
+				   t ? "level" : "edge", p ? "low" : "high");
 			triggering = trig;
 			polarity = pol;
 		}
···
 		}
 		acpi_dev_get_irqresource(res, irq->interrupts[index],
 					 irq->triggering, irq->polarity,
-					 irq->sharable);
+					 irq->sharable, true);
 		break;
 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
 		ext_irq = &ares->data.extended_irq;
···
 		}
 		acpi_dev_get_irqresource(res, ext_irq->interrupts[index],
 					 ext_irq->triggering, ext_irq->polarity,
-					 ext_irq->sharable);
+					 ext_irq->sharable, false);
 		break;
 	default:
 		return false;
+5-1
drivers/block/rbd.c
···
 	char *name;
 	u64 segment;
 	int ret;
+	char *name_format;

 	name = kmem_cache_alloc(rbd_segment_name_cache, GFP_NOIO);
 	if (!name)
 		return NULL;
 	segment = offset >> rbd_dev->header.obj_order;
-	ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, "%s.%012llx",
+	name_format = "%s.%012llx";
+	if (rbd_dev->image_format == 2)
+		name_format = "%s.%016llx";
+	ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, name_format,
 			rbd_dev->header.object_prefix, segment);
 	if (ret < 0 || ret > MAX_OBJ_NAME_SIZE) {
 		pr_err("error formatting segment name for #%llu (%d)\n",
+1
drivers/clk/clk.c
···
 	/* XXX the notifier code should handle this better */
 	if (!cn->notifier_head.head) {
 		srcu_cleanup_notifier_head(&cn->notifier_head);
+		list_del(&cn->node);
 		kfree(cn);
 	}

···
 static int __cpuinit gic_secondary_init(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	if (action == CPU_STARTING)
+	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
 		gic_cpu_init(&gic_data[0]);
 	return NOTIFY_OK;
 }
+9-3
drivers/media/Kconfig
···

 # This Kconfig option is used by both PCI and USB drivers
 config TTPCI_EEPROM
-        tristate
-        depends on I2C
-        default n
+	tristate
+	depends on I2C
+	default n

 source "drivers/media/dvb-core/Kconfig"

···
 	  the needed demodulators).

 	  If unsure say Y.
+
+config MEDIA_ATTACH
+	bool
+	depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT
+	depends on MODULES
+	default MODULES

 source "drivers/media/i2c/Kconfig"
 source "drivers/media/tuners/Kconfig"
+1-1
drivers/media/i2c/s5c73m3/s5c73m3-core.c
···

 	if (fie->pad != OIF_SOURCE_PAD)
 		return -EINVAL;
-	if (fie->index > ARRAY_SIZE(s5c73m3_intervals))
+	if (fie->index >= ARRAY_SIZE(s5c73m3_intervals))
 		return -EINVAL;

 	mutex_lock(&state->lock);
+3-4
drivers/media/pci/cx88/cx88-alsa.c
···
 	int changed = 0;
 	u32 old;

-	if (core->board.audio_chip == V4L2_IDENT_WM8775)
+	if (core->sd_wm8775)
 		snd_cx88_wm8775_volume_put(kcontrol, value);

 	left = value->value.integer.value[0] & 0x3f;
···
 		vol ^= bit;
 		cx_swrite(SHADOW_AUD_VOL_CTL, AUD_VOL_CTL, vol);
 		/* Pass mute onto any WM8775 */
-		if ((core->board.audio_chip == V4L2_IDENT_WM8775) &&
-		    ((1<<6) == bit))
+		if (core->sd_wm8775 && ((1<<6) == bit))
 			wm8775_s_ctrl(core, V4L2_CID_AUDIO_MUTE, 0 != (vol & bit));
 		ret = 1;
 	}
···
 		goto error;

 	/* If there's a wm8775 then add a Line-In ALC switch */
-	if (core->board.audio_chip == V4L2_IDENT_WM8775)
+	if (core->sd_wm8775)
 		snd_ctl_add(card, snd_ctl_new1(&snd_cx88_alc_switch, chip));

 	strcpy (card->driver, "CX88x");
+3-5
drivers/media/pci/cx88/cx88-video.c
···
 	/* The wm8775 module has the "2" route hardwired into
 	   the initialization. Some boards may use different
 	   routes for different inputs. HVR-1300 surely does */
-	if (core->board.audio_chip &&
-	    core->board.audio_chip == V4L2_IDENT_WM8775) {
+	if (core->sd_wm8775) {
 		call_all(core, audio, s_routing,
 			 INPUT(input).audioroute, 0, 0);
 	}
···
 		cx_write(MO_GP1_IO, core->board.radio.gpio1);
 		cx_write(MO_GP2_IO, core->board.radio.gpio2);
 		if (core->board.radio.audioroute) {
-			if(core->board.audio_chip &&
-				core->board.audio_chip == V4L2_IDENT_WM8775) {
+			if (core->sd_wm8775) {
 				call_all(core, audio, s_routing,
 					 core->board.radio.audioroute, 0, 0);
 			}
···
 	u32 value,mask;

 	/* Pass changes onto any WM8775 */
-	if (core->board.audio_chip == V4L2_IDENT_WM8775) {
+	if (core->sd_wm8775) {
 		switch (ctrl->id) {
 		case V4L2_CID_AUDIO_MUTE:
 			wm8775_s_ctrl(core, ctrl->id, ctrl->val);
···
 	node = v4l2_of_get_next_endpoint(node, NULL);
 	if (!node) {
 		dev_err(&pdev->dev, "No port node at %s\n",
-				node->full_name);
+				pdev->dev.of_node->full_name);
 		return -EINVAL;
 	}
 	/* Get port node and validate MIPI-CSI channel id. */
···

 		if (ici->ops->init_videobuf2)
 			vb2_queue_release(&icd->vb2_vidq);
-		ici->ops->remove(icd);
-
 		__soc_camera_power_off(icd);
+
+		ici->ops->remove(icd);
 	}

 	if (icd->streamer == file)
+1
drivers/media/radio/Kconfig
···
 	tristate "Silicon Laboratories Si476x I2C FM Radio"
 	depends on I2C && VIDEO_V4L2
 	depends on MFD_SI476X_CORE
+	depends on SND_SOC
 	select SND_SOC_SI476X
 	---help---
 	  Choose Y here if you have this FM radio chip.
···
-config MEDIA_ATTACH
-	bool "Load and attach frontend and tuner driver modules as needed"
-	depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT
-	depends on MODULES
-	default y if !EXPERT
-	help
-	  Remove the static dependency of DVB card drivers on all
-	  frontend modules for all possible card variants. Instead,
-	  allow the card drivers to only load the frontend modules
-	  they require.
-
-	  Also, tuner module will automatically load a tuner driver
-	  when needed, for analog mode.
-
-	  This saves several KBytes of memory.
-
-	  Note: You will need module-init-tools v3.2 or later for this feature.
-
-	  If unsure say Y.
-
 # Analog TV tuners, auto-loaded via tuner.ko
 config MEDIA_TUNER
 	tristate
@@ -1159 +1159 @@
         regs[0x01] = 0x44; /* Select 24 Mhz clock */
         regs[0x12] = 0x02; /* Set hstart to 2 */
     }
+        break;
+    case SENSOR_PAS202:
+        /* For some unknown reason we need to increase hstart by 1 on
+           the sn9c103, otherwise we get wrong colors (bayer shift). */
+        if (sd->bridge == BRIDGE_103)
+            regs[0x12] += 1;
+        break;
     }
     /* Disable compression when the raw bayer format has been selected */
     if (cam->cam_mode[gspca_dev->curr_mode].priv & MODE_RAW)
drivers/media/usb/pwc/pwc.h | +1 -1

@@ -226 +226 @@
     struct list_head queued_bufs;
     spinlock_t queued_bufs_lock; /* Protects queued_bufs */

-    /* Note if taking both locks v4l2_lock must always be locked first! */
+    /* If taking both locks vb_queue_lock must always be locked first! */
     struct mutex v4l2_lock; /* Protects everything else */
     struct mutex vb_queue_lock; /* Protects vb_queue and capt_file */
drivers/media/v4l2-core/v4l2-ctrls.c | +2

@@ -1835 +1835 @@
 {
     if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_TX)
         return true;
+    if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_RX)
+        return true;
     switch (ctrl->id) {
     case V4L2_CID_AUDIO_MUTE:
     case V4L2_CID_AUDIO_VOLUME:
@@ -205 +205 @@
 static void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx)
 {
     struct v4l2_m2m_dev *m2m_dev;
-    unsigned long flags_job, flags;
+    unsigned long flags_job, flags_out, flags_cap;

     m2m_dev = m2m_ctx->m2m_dev;
     dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx);
@@ -223 +223 @@
         return;
     }

-    spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+    spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out);
     if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)) {
-        spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+        spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock,
+                               flags_out);
         spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
         dprintk("No input buffers available\n");
         return;
     }
-    spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
+    spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap);
     if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)) {
-        spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
-        spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+        spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock,
+                               flags_cap);
+        spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock,
+                               flags_out);
         spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job);
         dprintk("No output buffers available\n");
         return;
     }
-    spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags);
-    spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags);
+    spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap);
+    spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out);

     if (m2m_dev->m2m_ops->job_ready
         && (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) {
@@ -375 +372 @@
 EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);

 /**
+ * v4l2_m2m_create_bufs() - create a source or destination buffer, depending
+ * on the type
+ */
+int v4l2_m2m_create_bufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+                         struct v4l2_create_buffers *create)
+{
+    struct vb2_queue *vq;
+
+    vq = v4l2_m2m_get_vq(m2m_ctx, create->format.type);
+    return vb2_create_bufs(vq, create);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_create_bufs);
+
+/**
  * v4l2_m2m_expbuf() - export a source or destination buffer, depending on
  * the type
  */
@@ -503 +486 @@
     if (m2m_ctx->m2m_dev->m2m_ops->unlock)
         m2m_ctx->m2m_dev->m2m_ops->unlock(m2m_ctx->priv);

-    poll_wait(file, &src_q->done_wq, wait);
-    poll_wait(file, &dst_q->done_wq, wait);
+    if (list_empty(&src_q->done_list))
+        poll_wait(file, &src_q->done_wq, wait);
+    if (list_empty(&dst_q->done_list))
+        poll_wait(file, &dst_q->done_wq, wait);

     if (m2m_ctx->m2m_dev->m2m_ops->lock)
         m2m_ctx->m2m_dev->m2m_ops->lock(m2m_ctx->priv);
drivers/media/v4l2-core/videobuf2-core.c | +2 -1

@@ -2014 +2014 @@
     if (list_empty(&q->queued_list))
         return res | POLLERR;

-    poll_wait(file, &q->done_wq, wait);
+    if (list_empty(&q->done_list))
+        poll_wait(file, &q->done_wq, wait);

     /*
      * Take first buffer available for dequeuing.
@@ -811 +811 @@
     return pcidev->irq;
 }

+static struct iosapic_info *first_isi = NULL;
+
+#ifdef CONFIG_64BIT
+int iosapic_serial_irq(int num)
+{
+    struct iosapic_info *isi = first_isi;
+    struct irt_entry *irte = NULL;  /* only used if PAT PDC */
+    struct vector_info *vi;
+    int isi_line;   /* line used by device */
+
+    /* lookup IRT entry for isi/slot/pin set */
+    irte = &irt_cell[num];
+
+    DBG_IRT("iosapic_serial_irq(): irte %p %x %x %x %x %x %x %x %x\n",
+        irte,
+        irte->entry_type,
+        irte->entry_length,
+        irte->polarity_trigger,
+        irte->src_bus_irq_devno,
+        irte->src_bus_id,
+        irte->src_seg_id,
+        irte->dest_iosapic_intin,
+        (u32) irte->dest_iosapic_addr);
+    isi_line = irte->dest_iosapic_intin;
+
+    /* get vector info for this input line */
+    vi = isi->isi_vector + isi_line;
+    DBG_IRT("iosapic_serial_irq: line %d vi 0x%p\n", isi_line, vi);
+
+    /* If this IRQ line has already been setup, skip it */
+    if (vi->irte)
+        goto out;
+
+    vi->irte = irte;
+
+    /*
+     * Allocate processor IRQ
+     *
+     * XXX/FIXME The txn_alloc_irq() code and related code should be
+     * moved to enable_irq(). That way we only allocate processor IRQ
+     * bits for devices that actually have drivers claiming them.
+     * Right now we assign an IRQ to every PCI device present,
+     * regardless of whether it's used or not.
+     */
+    vi->txn_irq = txn_alloc_irq(8);
+
+    if (vi->txn_irq < 0)
+        panic("I/O sapic: couldn't get TXN IRQ\n");
+
+    /* enable_irq() will use txn_* to program IRdT */
+    vi->txn_addr = txn_alloc_addr(vi->txn_irq);
+    vi->txn_data = txn_alloc_data(vi->txn_irq);
+
+    vi->eoi_addr = isi->addr + IOSAPIC_REG_EOI;
+    vi->eoi_data = cpu_to_le32(vi->txn_data);
+
+    cpu_claim_irq(vi->txn_irq, &iosapic_interrupt_type, vi);
+
+ out:
+
+    return vi->txn_irq;
+}
+#endif
+

 /*
 ** squirrel away the I/O Sapic Version
@@ -941 +877 @@
         vip->irqline = (unsigned char) cnt;
         vip->iosapic = isi;
     }
+    if (!first_isi)
+        first_isi = isi;
     return isi;
 }
@@ -688 +688 @@
      * For FCP_READ with CHECK_CONDITION status, clear cmd->bufflen
      * for qla_tgt_xmit_response LLD code
      */
+    if (se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) {
+        se_cmd->se_cmd_flags &= ~SCF_OVERFLOW_BIT;
+        se_cmd->residual_count = 0;
+    }
     se_cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
-    se_cmd->residual_count = se_cmd->data_length;
+    se_cmd->residual_count += se_cmd->data_length;

     cmd->bufflen = 0;
 }
drivers/staging/media/davinci_vpfe/Kconfig | +1 -1

@@ -1 +1 @@
 config VIDEO_DM365_VPFE
     tristate "DM365 VPFE Media Controller Capture Driver"
-    depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_VPFE_CAPTURE
+    depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF
     select VIDEOBUF2_DMA_CONTIG
     help
       Support for DM365 VPFE based Media Controller Capture driver.
@@ -639 +639 @@
     if (ret)
         goto probe_free_dev_mem;

-    if (vpfe_initialize_modules(vpfe_dev, pdev))
+    ret = vpfe_initialize_modules(vpfe_dev, pdev);
+    if (ret)
         goto probe_disable_clock;

     vpfe_dev->media_dev.dev = vpfe_dev->pdev;
@@ -664 +663 @@
     /* set the driver data in platform device */
     platform_set_drvdata(pdev, vpfe_dev);
     /* register subdevs/entities */
-    if (vpfe_register_entities(vpfe_dev))
+    ret = vpfe_register_entities(vpfe_dev);
+    if (ret)
         goto probe_out_v4l2_unregister;

     ret = vpfe_attach_irq(vpfe_dev);
drivers/staging/media/solo6x10/Kconfig | +1

@@ -5 +5 @@
     select VIDEOBUF2_DMA_SG
     select VIDEOBUF2_DMA_CONTIG
     select SND_PCM
+    select FONT_8x16
     ---help---
       This driver supports the Softlogic based MPEG-4 and h.264 codec
       cards.
drivers/target/iscsi/iscsi_target_configfs.c | +14 -13

@@ -155 +155 @@
     struct iscsi_tpg_np *tpg_np_iser = NULL;
     char *endptr;
     u32 op;
-    int rc;
+    int rc = 0;

     op = simple_strtoul(page, &endptr, 0);
     if ((op != 1) && (op != 0)) {
@@ -174 +174 @@
         return -EINVAL;

     if (op) {
-        int rc = request_module("ib_isert");
-        if (rc != 0)
+        rc = request_module("ib_isert");
+        if (rc != 0) {
             pr_warn("Unable to request_module for ib_isert\n");
+            rc = 0;
+        }

         tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
                 np->np_ip, tpg_np, ISCSI_INFINIBAND);
-        if (!tpg_np_iser || IS_ERR(tpg_np_iser))
+        if (IS_ERR(tpg_np_iser)) {
+            rc = PTR_ERR(tpg_np_iser);
             goto out;
+        }
     } else {
         tpg_np_iser = iscsit_tpg_locate_child_np(tpg_np, ISCSI_INFINIBAND);
-        if (!tpg_np_iser)
-            goto out;
-
-        rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser);
-        if (rc < 0)
-            goto out;
+        if (tpg_np_iser) {
+            rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser);
+            if (rc < 0)
+                goto out;
+        }
     }
-
-    printk("lio_target_np_store_iser() done, op: %d\n", op);

     iscsit_put_tpg(tpg);
     return count;
 out:
     iscsit_put_tpg(tpg);
-    return -EINVAL;
+    return rc;
 }

 TF_NP_BASE_ATTR(lio_target, iser, S_IRUGO | S_IWUSR);
@@ -4 +4 @@
 menuconfig USB_PHY
     bool "USB Physical Layer drivers"
     help
-      USB controllers (those which are host, device or DRD) need a
-      device to handle the physical layer signalling, commonly called
-      a PHY.
+      Most USB controllers have the physical layer signalling part
+      (commonly called a PHY) built in. However, dual-role devices
+      (a.k.a. USB on-the-go) which support being USB master or slave
+      with the same connector often use an external PHY.

-      The following drivers add support for such PHY devices.
+      The drivers in this submenu add support for such PHY devices.
+      They are not needed for standard master-only (or the vast
+      majority of slave-only) USB interfaces.
+
+      If you're not sure if this applies to you, it probably doesn't;
+      say N here.

 if USB_PHY
@@ -23 +23 @@
 #include <linux/ratelimit.h>
 #include <linux/err.h>
 #include <linux/irqflags.h>
+#include <linux/context_tracking.h>
 #include <asm/signal.h>

 #include <linux/kvm.h>
@@ -760 +759 @@
     return 0;
 }
 #endif
-
-static inline void __guest_enter(void)
-{
-    /*
-     * This is running in ioctl context so we can avoid
-     * the call to vtime_account() with its unnecessary idle check.
-     */
-    vtime_account_system(current);
-    current->flags |= PF_VCPU;
-}
-
-static inline void __guest_exit(void)
-{
-    /*
-     * This is running in ioctl context so we can avoid
-     * the call to vtime_account() with its unnecessary idle check.
-     */
-    vtime_account_system(current);
-    current->flags &= ~PF_VCPU;
-}
-
-#ifdef CONFIG_CONTEXT_TRACKING
-extern void guest_enter(void);
-extern void guest_exit(void);
-
-#else /* !CONFIG_CONTEXT_TRACKING */
-static inline void guest_enter(void)
-{
-    __guest_enter();
-}
-
-static inline void guest_exit(void)
-{
-    __guest_exit();
-}
-#endif /* !CONFIG_CONTEXT_TRACKING */

 static inline void kvm_guest_enter(void)
 {
@@ -33 +33 @@
         preempt_schedule(); \
 } while (0)

+#ifdef CONFIG_CONTEXT_TRACKING
+
+void preempt_schedule_context(void);
+
+#define preempt_check_resched_context() \
+do { \
+    if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
+        preempt_schedule_context(); \
+} while (0)
+#else
+
+#define preempt_check_resched_context() preempt_check_resched()
+
+#endif /* CONFIG_CONTEXT_TRACKING */
+
 #else /* !CONFIG_PREEMPT */

 #define preempt_check_resched() do { } while (0)
+#define preempt_check_resched_context() do { } while (0)

 #endif /* CONFIG_PREEMPT */

@@ -104 +88 @@
 do { \
     preempt_enable_no_resched_notrace(); \
     barrier(); \
-    preempt_check_resched(); \
+    preempt_check_resched_context(); \
 } while (0)

 #else /* !CONFIG_PREEMPT_COUNT */
include/linux/splice.h | +1

@@ -35 +35 @@
         void *data; /* cookie */
     } u;
     loff_t pos; /* file position */
+    loff_t *opos; /* sendfile: output position */
     size_t num_spliced; /* number of bytes already spliced */
     bool need_wakeup; /* need to wake up writer */
 };
@@ -15 +15 @@
  */

 #include <linux/context_tracking.h>
-#include <linux/kvm_host.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
 #include <linux/hardirq.h>
@@ -70 +71 @@
     local_irq_restore(flags);
 }

+#ifdef CONFIG_PREEMPT
+/**
+ * preempt_schedule_context - preempt_schedule called by tracing
+ *
+ * The tracing infrastructure uses preempt_enable_notrace to prevent
+ * recursion and tracing preempt enabling caused by the tracing
+ * infrastructure itself. But as tracing can happen in areas coming
+ * from userspace or just about to enter userspace, a preempt enable
+ * can occur before user_exit() is called. This will cause the scheduler
+ * to be called when the system is still in usermode.
+ *
+ * To prevent this, the preempt_enable_notrace will use this function
+ * instead of preempt_schedule() to exit user context if needed before
+ * calling the scheduler.
+ */
+void __sched notrace preempt_schedule_context(void)
+{
+    struct thread_info *ti = current_thread_info();
+    enum ctx_state prev_ctx;
+
+    if (likely(ti->preempt_count || irqs_disabled()))
+        return;
+
+    /*
+     * Need to disable preemption in case user_exit() is traced
+     * and the tracer calls preempt_enable_notrace() causing
+     * an infinite recursion.
+     */
+    preempt_disable_notrace();
+    prev_ctx = exception_enter();
+    preempt_enable_no_resched_notrace();
+
+    preempt_schedule();
+
+    preempt_disable_notrace();
+    exception_exit(prev_ctx);
+    preempt_enable_notrace();
+}
+EXPORT_SYMBOL_GPL(preempt_schedule_context);
+#endif /* CONFIG_PREEMPT */

 /**
  * user_exit - Inform the context tracking that the CPU is
kernel/cpu/idle.c | +17

@@ -5 +5 @@
 #include <linux/cpu.h>
 #include <linux/tick.h>
 #include <linux/mm.h>
+#include <linux/stackprotector.h>

 #include <asm/tlb.h>

@@ -59 +58 @@
 void __weak arch_cpu_idle(void)
 {
     cpu_idle_force_poll = 1;
+    local_irq_enable();
 }

 /*
@@ -114 +112 @@

 void cpu_startup_entry(enum cpuhp_state state)
 {
+    /*
+     * This #ifdef needs to die, but it's too late in the cycle to
+     * make this generic (arm and sh have never invoked the canary
+     * init for the non boot cpus!). Will be fixed in 3.11
+     */
+#ifdef CONFIG_X86
+    /*
+     * If we're the non-boot CPU, nothing set the stack canary up
+     * for us. The boot CPU already has it initialized but no harm
+     * in doing it again. This is a good place for updating it, as
+     * we wont ever return from this function (so the invalid
+     * canaries already on the stack wont ever trigger).
+     */
+    boot_init_stack_canary();
+#endif
     current_set_polling();
     arch_cpu_idle_prepare();
     cpu_idle_loop();
kernel/events/core.c | +162 -73

@@ -196 +196 @@
 static void update_context_time(struct perf_event_context *ctx);
 static u64 perf_event_time(struct perf_event *event);

-static void ring_buffer_attach(struct perf_event *event,
-                               struct ring_buffer *rb);
-
 void __weak perf_event_print_debug(void) { }

 extern __weak const char *perf_pmu_name(void)
@@ -2915 +2918 @@
 }

 static void ring_buffer_put(struct ring_buffer *rb);
+static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb);

 static void free_event(struct perf_event *event)
 {
@@ -2940 +2942 @@
         if (has_branch_stack(event)) {
             static_key_slow_dec_deferred(&perf_sched_events);
             /* is system-wide event */
-            if (!(event->attach_state & PERF_ATTACH_TASK))
+            if (!(event->attach_state & PERF_ATTACH_TASK)) {
                 atomic_dec(&per_cpu(perf_branch_stack_events,
                            event->cpu));
+            }
         }
     }

     if (event->rb) {
-        ring_buffer_put(event->rb);
-        event->rb = NULL;
+        struct ring_buffer *rb;
+
+        /*
+         * Can happen when we close an event with re-directed output.
+         *
+         * Since we have a 0 refcount, perf_mmap_close() will skip
+         * over us; possibly making our ring_buffer_put() the last.
+         */
+        mutex_lock(&event->mmap_mutex);
+        rb = event->rb;
+        if (rb) {
+            rcu_assign_pointer(event->rb, NULL);
+            ring_buffer_detach(event, rb);
+            ring_buffer_put(rb); /* could be last */
+        }
+        mutex_unlock(&event->mmap_mutex);
     }

     if (is_cgroup_event(event))
@@ -3201 +3188 @@
     unsigned int events = POLL_HUP;

     /*
-     * Race between perf_event_set_output() and perf_poll(): perf_poll()
-     * grabs the rb reference but perf_event_set_output() overrides it.
-     * Here is the timeline for two threads T1, T2:
-     * t0: T1, rb = rcu_dereference(event->rb)
-     * t1: T2, old_rb = event->rb
-     * t2: T2, event->rb = new rb
-     * t3: T2, ring_buffer_detach(old_rb)
-     * t4: T1, ring_buffer_attach(rb1)
-     * t5: T1, poll_wait(event->waitq)
-     *
-     * To avoid this problem, we grab mmap_mutex in perf_poll()
-     * thereby ensuring that the assignment of the new ring buffer
-     * and the detachment of the old buffer appear atomic to perf_poll()
+     * Pin the event->rb by taking event->mmap_mutex; otherwise
+     * perf_event_set_output() can swizzle our rb and make us miss wakeups.
      */
     mutex_lock(&event->mmap_mutex);
-
-    rcu_read_lock();
-    rb = rcu_dereference(event->rb);
-    if (rb) {
-        ring_buffer_attach(event, rb);
+    rb = event->rb;
+    if (rb)
         events = atomic_xchg(&rb->poll, 0);
-    }
-    rcu_read_unlock();
-
     mutex_unlock(&event->mmap_mutex);

     poll_wait(file, &event->waitq, wait);
@@ -3517 +3521 @@
         return;

     spin_lock_irqsave(&rb->event_lock, flags);
-    if (!list_empty(&event->rb_entry))
-        goto unlock;
-
-    list_add(&event->rb_entry, &rb->event_list);
-unlock:
+    if (list_empty(&event->rb_entry))
+        list_add(&event->rb_entry, &rb->event_list);
     spin_unlock_irqrestore(&rb->event_lock, flags);
 }

-static void ring_buffer_detach(struct perf_event *event,
-                               struct ring_buffer *rb)
+static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb)
 {
     unsigned long flags;

@@ -3541 +3549 @@
     rcu_read_lock();
     rb = rcu_dereference(event->rb);
-    if (!rb)
-        goto unlock;
-
-    list_for_each_entry_rcu(event, &rb->event_list, rb_entry)
-        wake_up_all(&event->waitq);
-
-unlock:
+    if (rb) {
+        list_for_each_entry_rcu(event, &rb->event_list, rb_entry)
+            wake_up_all(&event->waitq);
+    }
     rcu_read_unlock();
 }

@@ -3573 +3584 @@

 static void ring_buffer_put(struct ring_buffer *rb)
 {
-    struct perf_event *event, *n;
-    unsigned long flags;
-
     if (!atomic_dec_and_test(&rb->refcount))
         return;

-    spin_lock_irqsave(&rb->event_lock, flags);
-    list_for_each_entry_safe(event, n, &rb->event_list, rb_entry) {
-        list_del_init(&event->rb_entry);
-        wake_up_all(&event->waitq);
-    }
-    spin_unlock_irqrestore(&rb->event_lock, flags);
+    WARN_ON_ONCE(!list_empty(&rb->event_list));

     call_rcu(&rb->rcu_head, rb_free_rcu);
 }
@@ -3586 +3605 @@
     struct perf_event *event = vma->vm_file->private_data;

     atomic_inc(&event->mmap_count);
+    atomic_inc(&event->rb->mmap_count);
 }

+/*
+ * A buffer can be mmap()ed multiple times; either directly through the same
+ * event, or through other events by use of perf_event_set_output().
+ *
+ * In order to undo the VM accounting done by perf_mmap() we need to destroy
+ * the buffer here, where we still have a VM context. This means we need
+ * to detach all events redirecting to us.
+ */
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
     struct perf_event *event = vma->vm_file->private_data;

-    if (atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) {
-        unsigned long size = perf_data_size(event->rb);
-        struct user_struct *user = event->mmap_user;
-        struct ring_buffer *rb = event->rb;
+    struct ring_buffer *rb = event->rb;
+    struct user_struct *mmap_user = rb->mmap_user;
+    int mmap_locked = rb->mmap_locked;
+    unsigned long size = perf_data_size(rb);

-        atomic_long_sub((size >> PAGE_SHIFT) + 1, &user->locked_vm);
-        vma->vm_mm->pinned_vm -= event->mmap_locked;
-        rcu_assign_pointer(event->rb, NULL);
-        ring_buffer_detach(event, rb);
-        mutex_unlock(&event->mmap_mutex);
+    atomic_dec(&rb->mmap_count);

-        ring_buffer_put(rb);
-        free_uid(user);
+    if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
+        return;
+
+    /* Detach current event from the buffer. */
+    rcu_assign_pointer(event->rb, NULL);
+    ring_buffer_detach(event, rb);
+    mutex_unlock(&event->mmap_mutex);
+
+    /* If there's still other mmap()s of this buffer, we're done. */
+    if (atomic_read(&rb->mmap_count)) {
+        ring_buffer_put(rb); /* can't be last */
+        return;
     }
+
+    /*
+     * No other mmap()s, detach from all other events that might redirect
+     * into the now unreachable buffer. Somewhat complicated by the
+     * fact that rb::event_lock otherwise nests inside mmap_mutex.
+     */
+again:
+    rcu_read_lock();
+    list_for_each_entry_rcu(event, &rb->event_list, rb_entry) {
+        if (!atomic_long_inc_not_zero(&event->refcount)) {
+            /*
+             * This event is en-route to free_event() which will
+             * detach it and remove it from the list.
+             */
+            continue;
+        }
+        rcu_read_unlock();
+
+        mutex_lock(&event->mmap_mutex);
+        /*
+         * Check we didn't race with perf_event_set_output() which can
+         * swizzle the rb from under us while we were waiting to
+         * acquire mmap_mutex.
+         *
+         * If we find a different rb; ignore this event, a next
+         * iteration will no longer find it on the list. We have to
+         * still restart the iteration to make sure we're not now
+         * iterating the wrong list.
+         */
+        if (event->rb == rb) {
+            rcu_assign_pointer(event->rb, NULL);
+            ring_buffer_detach(event, rb);
+            ring_buffer_put(rb); /* can't be last, we still have one */
+        }
+        mutex_unlock(&event->mmap_mutex);
+        put_event(event);
+
+        /*
+         * Restart the iteration; either we're on the wrong list or
+         * destroyed its integrity by doing a deletion.
+         */
+        goto again;
+    }
+    rcu_read_unlock();
+
+    /*
+     * It could be there's still a few 0-ref events on the list; they'll
+     * get cleaned up by free_event() -- they'll also still have their
+     * ref on the rb and will free it whenever they are done with it.
+     *
+     * Aside from that, this buffer is 'fully' detached and unmapped,
+     * undo the VM accounting.
+     */
+
+    atomic_long_sub((size >> PAGE_SHIFT) + 1, &mmap_user->locked_vm);
+    vma->vm_mm->pinned_vm -= mmap_locked;
+    free_uid(mmap_user);
+
+    ring_buffer_put(rb); /* could be last */
 }

 static const struct vm_operations_struct perf_mmap_vmops = {
@@ -3729 +3674 @@
         return -EINVAL;

     WARN_ON_ONCE(event->ctx->parent_ctx);
+again:
     mutex_lock(&event->mmap_mutex);
     if (event->rb) {
-        if (event->rb->nr_pages == nr_pages)
-            atomic_inc(&event->rb->refcount);
-        else
+        if (event->rb->nr_pages != nr_pages) {
             ret = -EINVAL;
+            goto unlock;
+        }
+
+        if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
+            /*
+             * Raced against perf_mmap_close() through
+             * perf_event_set_output(). Try again, hope for better
+             * luck.
+             */
+            mutex_unlock(&event->mmap_mutex);
+            goto again;
+        }
+
         goto unlock;
     }
@@ -3787 +3720 @@
         ret = -ENOMEM;
         goto unlock;
     }
-    rcu_assign_pointer(event->rb, rb);
+
+    atomic_set(&rb->mmap_count, 1);
+    rb->mmap_locked = extra;
+    rb->mmap_user = get_current_user();

     atomic_long_add(user_extra, &user->locked_vm);
-    event->mmap_locked = extra;
-    event->mmap_user = get_current_user();
-    vma->vm_mm->pinned_vm += event->mmap_locked;
+    vma->vm_mm->pinned_vm += extra;
+
+    ring_buffer_attach(event, rb);
+    rcu_assign_pointer(event->rb, rb);

     perf_event_update_userpage(event);

@@ -3805 +3734 @@
     atomic_inc(&event->mmap_count);
     mutex_unlock(&event->mmap_mutex);

-    vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+    /*
+     * Since pinned accounting is per vm we cannot allow fork() to copy our
+     * vma.
+     */
+    vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
     vma->vm_ops = &perf_mmap_vmops;

     return ret;
@@ -6487 +6412 @@
     if (atomic_read(&event->mmap_count))
         goto unlock;

+    old_rb = event->rb;
+
     if (output_event) {
         /* get the rb we want to redirect to */
         rb = ring_buffer_get(output_event);
@@ -6496 +6419 @@
             goto unlock;
     }

-    old_rb = event->rb;
-    rcu_assign_pointer(event->rb, rb);
     if (old_rb)
         ring_buffer_detach(event, old_rb);
+
+    if (rb)
+        ring_buffer_attach(event, rb);
+
+    rcu_assign_pointer(event->rb, rb);
+
+    if (old_rb) {
+        ring_buffer_put(old_rb);
+        /*
+         * Since we detached before setting the new rb, so that we
+         * could attach the new rb, we could have missed a wakeup.
+         * Provide it now.
+         */
+        wake_up_all(&event->waitq);
+    }
+
     ret = 0;
 unlock:
     mutex_unlock(&event->mmap_mutex);

-    if (old_rb)
-        ring_buffer_put(old_rb);
 out:
     return ret;
 }
@@ -467 +467 @@
 /* Optimization staging list, protected by kprobe_mutex */
 static LIST_HEAD(optimizing_list);
 static LIST_HEAD(unoptimizing_list);
+static LIST_HEAD(freeing_list);

 static void kprobe_optimizer(struct work_struct *work);
 static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
@@ -505 +504 @@
  * Unoptimize (replace a jump with a breakpoint and remove the breakpoint
  * if need) kprobes listed on unoptimizing_list.
  */
-static __kprobes void do_unoptimize_kprobes(struct list_head *free_list)
+static __kprobes void do_unoptimize_kprobes(void)
 {
     struct optimized_kprobe *op, *tmp;

@@ -516 +515 @@
     /* Ditto to do_optimize_kprobes */
     get_online_cpus();
     mutex_lock(&text_mutex);
-    arch_unoptimize_kprobes(&unoptimizing_list, free_list);
+    arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
     /* Loop free_list for disarming */
-    list_for_each_entry_safe(op, tmp, free_list, list) {
+    list_for_each_entry_safe(op, tmp, &freeing_list, list) {
         /* Disarm probes if marked disabled */
         if (kprobe_disabled(&op->kp))
             arch_disarm_kprobe(&op->kp);
@@ -537 +536 @@
 }

 /* Reclaim all kprobes on the free_list */
-static __kprobes void do_free_cleaned_kprobes(struct list_head *free_list)
+static __kprobes void do_free_cleaned_kprobes(void)
 {
     struct optimized_kprobe *op, *tmp;

-    list_for_each_entry_safe(op, tmp, free_list, list) {
+    list_for_each_entry_safe(op, tmp, &freeing_list, list) {
         BUG_ON(!kprobe_unused(&op->kp));
         list_del_init(&op->list);
         free_aggr_kprobe(&op->kp);
@@ -557 +556 @@
 /* Kprobe jump optimizer */
 static __kprobes void kprobe_optimizer(struct work_struct *work)
 {
-    LIST_HEAD(free_list);
-
     mutex_lock(&kprobe_mutex);
     /* Lock modules while optimizing kprobes */
     mutex_lock(&module_mutex);
@@ -565 +566 @@
      * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed)
      * kprobes before waiting for quiesence period.
      */
-    do_unoptimize_kprobes(&free_list);
+    do_unoptimize_kprobes();

     /*
      * Step 2: Wait for quiesence period to ensure all running interrupts
@@ -580 +581 @@
     do_optimize_kprobes();

     /* Step 4: Free cleaned kprobes after quiesence period */
-    do_free_cleaned_kprobes(&free_list);
+    do_free_cleaned_kprobes();

     mutex_unlock(&module_mutex);
     mutex_unlock(&kprobe_mutex);
@@ -722 +723 @@
     if (!list_empty(&op->list))
         /* Dequeue from the (un)optimization queue */
         list_del_init(&op->list);
-
     op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+
+    if (kprobe_unused(p)) {
+        /* Enqueue if it is unused */
+        list_add(&op->list, &freeing_list);
+        /*
+         * Remove unused probes from the hash list. After waiting
+         * for synchronization, this probe is reclaimed.
+         * (reclaiming is done by do_free_cleaned_kprobes().)
+         */
+        hlist_del_rcu(&op->kp.hlist);
+    }
+
     /* Don't touch the code, because it is already freed. */
     arch_remove_optimized_kprobe(op);
 }
kernel/range.c | +11 -10

@@ -4 +4 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/sort.h>
-
+#include <linux/string.h>
 #include <linux/range.h>

 int add_range(struct range *range, int az, int nr_range, u64 start, u64 end)
@@ -32 +32 @@
     if (start >= end)
         return nr_range;

-    /* Try to merge it with old one: */
+    /* get new start/end: */
     for (i = 0; i < nr_range; i++) {
-        u64 final_start, final_end;
         u64 common_start, common_end;

         if (!range[i].end)
@@ -44 +45 @@
         if (common_start > common_end)
             continue;

-        final_start = min(range[i].start, start);
-        final_end = max(range[i].end, end);
+        /* new start/end, will add it back at last */
+        start = min(range[i].start, start);
+        end = max(range[i].end, end);

-        /* clear it and add it back for further merge */
-        range[i].start = 0;
-        range[i].end = 0;
-        return add_range_with_merge(range, az, nr_range,
-                                    final_start, final_end);
+        memmove(&range[i], &range[i + 1],
+                (nr_range - (i + 1)) * sizeof(range[i]));
+        range[nr_range - 1].start = 0;
+        range[nr_range - 1].end = 0;
+        nr_range--;
+        i--;
     }

     /* Need to add it: */
kernel/sched/core.c (+18 -5)
@@ -633,7 +633,19 @@
 static inline bool got_nohz_idle_kick(void)
 {
	int cpu = smp_processor_id();
-	return idle_cpu(cpu) && test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu));
+
+	if (!test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu)))
+		return false;
+
+	if (idle_cpu(cpu) && !need_resched())
+		return true;
+
+	/*
+	 * We can't run Idle Load Balance on this CPU for this time so we
+	 * cancel it and clear NOHZ_BALANCE_KICK
+	 */
+	clear_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu));
+	return false;
 }
 
 #else /* CONFIG_NO_HZ_COMMON */
@@ -1405,8 +1393,9 @@
 
 void scheduler_ipi(void)
 {
-	if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick()
-			&& !tick_nohz_full_cpu(smp_processor_id()))
+	if (llist_empty(&this_rq()->wake_list)
+			&& !tick_nohz_full_cpu(smp_processor_id())
+			&& !got_nohz_idle_kick())
		return;
 
	/*
@@ -1430,7 +1417,7 @@
	/*
	 * Check if someone kicked us for doing the nohz idle load balance.
	 */
-	if (unlikely(got_nohz_idle_kick() && !need_resched())) {
+	if (unlikely(got_nohz_idle_kick())) {
		this_rq()->idle_balance = 1;
		raise_softirq_irqoff(SCHED_SOFTIRQ);
	}
@@ -4758,7 +4745,7 @@
	 */
	idle->sched_class = &idle_sched_class;
	ftrace_graph_init_idle_task(idle, cpu);
-	vtime_init_idle(idle);
+	vtime_init_idle(idle, cpu);
 #if defined(CONFIG_SMP)
	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
 #endif
kernel/time/tick-broadcast.c

@@ -698,10 +698,6 @@
 
	bc->event_handler = tick_handle_oneshot_broadcast;
 
-	/* Take the do_timer update */
-	if (!tick_nohz_full_cpu(cpu))
-		tick_do_timer_cpu = cpu;
-
	/*
	 * We must be careful here. There might be other CPUs
	 * waiting for periodic broadcast. We need to set the
kernel/time/tick-sched.c (+1 -1)
@@ -306,7 +306,7 @@
	 * we can't safely shutdown that CPU.
	 */
	if (have_nohz_full_mask && tick_do_timer_cpu == cpu)
-		return -EINVAL;
+		return NOTIFY_BAD;
		break;
	}
	return NOTIFY_OK;
mm/slab_common.c (+3 -1)
@@ -373,8 +373,10 @@
 {
	int index;
 
-	if (WARN_ON_ONCE(size > KMALLOC_MAX_SIZE))
+	if (size > KMALLOC_MAX_SIZE) {
+		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
		return NULL;
+	}
 
	if (size <= 192) {
		if (!size)
sound/usb/mixer.c

@@ -885,6 +885,7 @@
 
	case USB_ID(0x046d, 0x0808):
	case USB_ID(0x046d, 0x0809):
+	case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */
	case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
	case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
	case USB_ID(0x046d, 0x0991):