···
 Contact:	linux-mtd@lists.infradead.org
 Description:
 		This allows the user to examine and adjust the criteria by which
-		mtd returns -EUCLEAN from mtd_read(). If the maximum number of
-		bit errors that were corrected on any single region comprising
-		an ecc step (as reported by the driver) equals or exceeds this
-		value, -EUCLEAN is returned. Otherwise, absent an error, 0 is
-		returned. Higher layers (e.g., UBI) use this return code as an
-		indication that an erase block may be degrading and should be
-		scrutinized as a candidate for being marked as bad.
+		mtd returns -EUCLEAN from mtd_read() and mtd_read_oob(). If the
+		maximum number of bit errors that were corrected on any single
+		region comprising an ecc step (as reported by the driver) equals
+		or exceeds this value, -EUCLEAN is returned. Otherwise, absent
+		an error, 0 is returned. Higher layers (e.g., UBI) use this
+		return code as an indication that an erase block may be
+		degrading and should be scrutinized as a candidate for being
+		marked as bad.

 		The initial value may be specified by the flash device driver.
 		If not, then the default value is ecc_strength.
···
 		block degradation, but high enough to avoid the consequences of
 		a persistent return value of -EUCLEAN on devices where sticky
 		bitflips occur. Note that if bitflip_threshold exceeds
-		ecc_strength, -EUCLEAN is never returned by mtd_read().
+		ecc_strength, -EUCLEAN is never returned by the read operations.
 		Conversely, if bitflip_threshold is zero, -EUCLEAN is always
 		returned, absent a hard error.
+1-1
Documentation/DocBook/media/v4l/controls.xml
···
 	    from RGB to Y'CbCr color space.
 	  </entry>
 	</row>
-	<row id = "v4l2-jpeg-chroma-subsampling">
+	<row>
 	  <entrytbl spanname="descr" cols="2">
 	    <tbody valign="top">
 	      <row>
···
 	    processing controls. These controls are described in <xref
 	    linkend="image-process-controls" />.</entry>
 	  </row>
-	  <row>
-	    <entry><constant>V4L2_CTRL_CLASS_JPEG</constant></entry>
-	    <entry>0x9d0000</entry>
-	    <entry>The class containing JPEG compression controls.
-These controls are described in <xref
-	    linkend="jpeg-controls" />.</entry>
-	  </row>
 	</tbody>
       </tgroup>
     </table>
···
 This isn't an exhaustive list, but you should add new prefixes to it before
 using them to avoid name-space collisions.

+ad	Avionic Design GmbH
 adi	Analog Devices, Inc.
 amcc	Applied Micro Circuits Corporation (APM, formally AMCC)
 apm	Applied Micro Circuits Corporation (APM)
+1-1
Documentation/kdump/kdump.txt
···
 http://www.kernel.org/git/?p=utils/kernel/kexec/kexec-tools.git

 More information about kexec-tools can be found at
-http://www.kernel.org/pub/linux/utils/kernel/kexec/README.html
+http://horms.net/projects/kexec/

 3) Unpack the tarball with the tar command, as follows:
+7
Documentation/prctl/no_new_privs.txt
···
 add to the permitted set, and LSMs will not relax constraints after
 execve.

+To set no_new_privs, use prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0).
+
+Be careful, though: LSMs might also not tighten constraints on exec
+in no_new_privs mode.  (This means that setting up a general-purpose
+service launcher to set no_new_privs before execing daemons may
+interfere with LSM-based sandboxing.)
+
 Note that no_new_privs does not prevent privilege changes that do not
 involve execve.  An appropriately privileged task can still call
 setuid(2) and receive SCM_RIGHTS datagrams.
+17
Documentation/virtual/kvm/api.txt
···
 PTE's RPN field (ie, it needs to be shifted left by 12 to OR it
 into the hash PTE second double word).

+4.75 KVM_IRQFD
+
+Capability: KVM_CAP_IRQFD
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_irqfd (in)
+Returns: 0 on success, -1 on error
+
+Allows setting an eventfd to directly trigger a guest interrupt.
+kvm_irqfd.fd specifies the file descriptor to use as the eventfd and
+kvm_irqfd.gsi specifies the irqchip pin toggled by this event.  When
+an event is triggered on the eventfd, an interrupt is injected into
+the guest using the specified gsi pin.  The irqfd is removed using
+the KVM_IRQFD_FLAG_DEASSIGN flag, specifying both kvm_irqfd.fd
+and kvm_irqfd.gsi.
+
+
 5. The kvm_run structure
 ------------------------
···
  */
 #define SWI_SYS_SIGRETURN	(0xef000000|(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE))
 #define SWI_SYS_RT_SIGRETURN	(0xef000000|(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE))
+#define SWI_SYS_RESTART		(0xef000000|__NR_restart_syscall|__NR_OABI_SYSCALL_BASE)

 /*
  * With EABI, the syscall number has to be loaded into r7.
···
 const unsigned long sigreturn_codes[7] = {
 	MOV_R7_NR_SIGRETURN,    SWI_SYS_SIGRETURN,    SWI_THUMB_SIGRETURN,
 	MOV_R7_NR_RT_SIGRETURN, SWI_SYS_RT_SIGRETURN, SWI_THUMB_RT_SIGRETURN,
+};
+
+/*
+ * Either we support OABI only, or we have EABI with the OABI
+ * compat layer enabled.  In the later case we don't know if
+ * user space is EABI or not, and if not we must not clobber r7.
+ * Always using the OABI syscall solves that issue and works for
+ * all those cases.
+ */
+const unsigned long syscall_restart_code[2] = {
+	SWI_SYS_RESTART,	/* swi	__NR_restart_syscall */
+	0xe49df004,		/* ldr	pc, [sp], #4 */
 };

 /*
···
 		case -ERESTARTNOHAND:
 		case -ERESTARTSYS:
 		case -ERESTARTNOINTR:
-		case -ERESTART_RESTARTBLOCK:
 			regs->ARM_r0 = regs->ARM_ORIG_r0;
 			regs->ARM_pc = restart_addr;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->ARM_r0 = -EINTR;
 			break;
 		}
 	}
···
 	 * debugger has chosen to restart at a different PC.
 	 */
 	if (regs->ARM_pc == restart_addr) {
-		if (retval == -ERESTARTNOHAND ||
-		    retval == -ERESTART_RESTARTBLOCK
+		if (retval == -ERESTARTNOHAND
 		    || (retval == -ERESTARTSYS
 			&& !(ka.sa.sa_flags & SA_RESTART))) {
 			regs->ARM_r0 = -EINTR;
 			regs->ARM_pc = continue_addr;
 		}
-		clear_thread_flag(TIF_SYSCALL_RESTARTSYS);
 	}

 	handle_signal(signr, &ka, &info, regs);
···
 	 * ignore the restart.
 	 */
 	if (retval == -ERESTART_RESTARTBLOCK
-	    && regs->ARM_pc == restart_addr)
-		set_thread_flag(TIF_SYSCALL_RESTARTSYS);
+	    && regs->ARM_pc == continue_addr) {
+		if (thumb_mode(regs)) {
+			regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE;
+			regs->ARM_pc -= 2;
+		} else {
+#if defined(CONFIG_AEABI) && !defined(CONFIG_OABI_COMPAT)
+			regs->ARM_r7 = __NR_restart_syscall;
+			regs->ARM_pc -= 4;
+#else
+			u32 __user *usp;
+
+			regs->ARM_sp -= 4;
+			usp = (u32 __user *)regs->ARM_sp;
+
+			if (put_user(regs->ARM_pc, usp) == 0) {
+				regs->ARM_pc = KERN_RESTART_CODE;
+			} else {
+				regs->ARM_sp += 4;
+				force_sigsegv(0, current);
+			}
+#endif
+		}
+	}
 }

 restore_saved_sigmask();
+2
arch/arm/kernel/signal.h
···
  * published by the Free Software Foundation.
  */
 #define KERN_SIGRETURN_CODE	(CONFIG_VECTORS_BASE + 0x00000500)
+#define KERN_RESTART_CODE	(KERN_SIGRETURN_CODE + sizeof(sigreturn_codes))

 extern const unsigned long sigreturn_codes[7];
+extern const unsigned long syscall_restart_code[2];
···
 				 struct exynos_pm_domain *pd)
 {
 	if (pdev->dev.bus) {
-		if (pm_genpd_add_device(&pd->pd, &pdev->dev))
+		if (!pm_genpd_add_device(&pd->pd, &pdev->dev))
+			pm_genpd_dev_need_restore(&pdev->dev, true);
+		else
 			pr_info("%s: error in adding %s device to %s power"
 				"domain\n", __func__, dev_name(&pdev->dev),
 				pd->name);
···
 	if (of_have_populated_dt())
 		return exynos_pm_dt_parse_domains();

-	for (idx = 0; idx < ARRAY_SIZE(exynos4_pm_domains); idx++)
-		pm_genpd_init(&exynos4_pm_domains[idx]->pd, NULL,
-				exynos4_pm_domains[idx]->is_off);
+	for (idx = 0; idx < ARRAY_SIZE(exynos4_pm_domains); idx++) {
+		struct exynos_pm_domain *pd = exynos4_pm_domains[idx];
+		int on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
+
+		pm_genpd_init(&pd->pd, NULL, !on);
+	}

 #ifdef CONFIG_S5P_DEV_FIMD0
 	exynos_pm_add_dev_to_genpd(&s5p_device_fimd0, &exynos4_pd_lcd0);
+8-1
arch/arm/mach-imx/clk-imx35.c
···
 			pr_err("i.MX35 clk %d: register failed with %ld\n",
 				i, PTR_ERR(clk[i]));

-
 	clk_register_clkdev(clk[pata_gate], NULL, "pata_imx");
 	clk_register_clkdev(clk[can1_gate], NULL, "flexcan.0");
 	clk_register_clkdev(clk[can2_gate], NULL, "flexcan.1");
···
 	clk_prepare_enable(clk[gpio3_gate]);
 	clk_prepare_enable(clk[iim_gate]);
 	clk_prepare_enable(clk[emi_gate]);
+
+	/*
+	 * SCC is needed to boot via mmc after a watchdog reset. The clock code
+	 * before conversion to common clk also enabled UART1 (which isn't
+	 * handled here and not needed for mmc) and IIM (which is enabled
+	 * unconditionally above).
+	 */
+	clk_prepare_enable(clk[scc_gate]);

 	imx_print_silicon_rev("i.MX35", mx35_revision());
···
  * _enable_sysc - try to bring a module out of idle via OCP_SYSCONFIG
  * @oh: struct omap_hwmod *
  *
- * If module is marked as SWSUP_SIDLE, force the module out of slave
- * idle; otherwise, configure it for smart-idle.  If module is marked
- * as SWSUP_MSUSPEND, force the module out of master standby;
- * otherwise, configure it for smart-standby.  No return value.
+ * Ensure that the OCP_SYSCONFIG register for the IP block represented
+ * by @oh is set to indicate to the PRCM that the IP block is active.
+ * Usually this means placing the module into smart-idle mode and
+ * smart-standby, but if there is a bug in the automatic idle handling
+ * for the IP block, it may need to be placed into the force-idle or
+ * no-idle variants of these modes.  No return value.
  */
 static void _enable_sysc(struct omap_hwmod *oh)
 {
 	u8 idlemode, sf;
 	u32 v;
+	bool clkdm_act;

 	if (!oh->class->sysc)
 		return;
···
 	sf = oh->class->sysc->sysc_flags;

 	if (sf & SYSC_HAS_SIDLEMODE) {
-		idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
-			HWMOD_IDLEMODE_NO : HWMOD_IDLEMODE_SMART;
+		clkdm_act = ((oh->clkdm &&
+			      oh->clkdm->flags & CLKDM_ACTIVE_WITH_MPU) ||
+			     (oh->_clk && oh->_clk->clkdm &&
+			      oh->_clk->clkdm->flags & CLKDM_ACTIVE_WITH_MPU));
+		if (clkdm_act && !(oh->class->sysc->idlemodes &
+				   (SIDLE_SMART | SIDLE_SMART_WKUP)))
+			idlemode = HWMOD_IDLEMODE_FORCE;
+		else
+			idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
+				HWMOD_IDLEMODE_NO : HWMOD_IDLEMODE_SMART;
 		_set_slave_idlemode(oh, idlemode, &v);
 	}
···
 	sf = oh->class->sysc->sysc_flags;

 	if (sf & SYSC_HAS_SIDLEMODE) {
-		idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
-			HWMOD_IDLEMODE_FORCE : HWMOD_IDLEMODE_SMART;
+		/* XXX What about HWMOD_IDLEMODE_SMART_WKUP? */
+		if (oh->flags & HWMOD_SWSUP_SIDLE ||
+		    !(oh->class->sysc->idlemodes &
+		      (SIDLE_SMART | SIDLE_SMART_WKUP)))
+			idlemode = HWMOD_IDLEMODE_FORCE;
+		else
+			idlemode = HWMOD_IDLEMODE_SMART;
 		_set_slave_idlemode(oh, idlemode, &v);
 	}
···

 	/* TODO: Once MTU has been DT:ed place code above into else. */
 	if (of_have_populated_dt()) {
+#ifdef CONFIG_OF
 		np = of_find_matching_node(NULL, prcmu_timer_of_match);
 		if (!np)
+#endif
 			goto dt_fail;

 		tmp_base = of_iomap(np, 0);
-1
arch/arm/mach-versatile/pci.c
···
 static int __init versatile_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	int irq;
-	int devslot = PCI_SLOT(dev->devfn);

 	/* slot,  pin,  irq
 	 *  24     1     27
···
  * want to handle. Thus you cannot kill init even with a SIGKILL even by
  * mistake.
  */
-statis void do_signal(struct pt_regs *regs)
+static void do_signal(struct pt_regs *regs)
 {
 	siginfo_t info;
 	int signr;
···
  * 2 of the Licence, or (at your option) any later version.
  */

+#include <linux/irqreturn.h>
+
 struct clocksource;
 struct clock_event_device;
+2-2
arch/mn10300/kernel/irq.c
···
 	case SC1TXIRQ:
 #ifdef CONFIG_MN10300_TTYSM1_TIMER12
 	case TM12IRQ:
-#elif CONFIG_MN10300_TTYSM1_TIMER9
+#elif defined(CONFIG_MN10300_TTYSM1_TIMER9)
 	case TM9IRQ:
-#elif CONFIG_MN10300_TTYSM1_TIMER3
+#elif defined(CONFIG_MN10300_TTYSM1_TIMER3)
 	case TM3IRQ:
 #endif /* CONFIG_MN10300_TTYSM1_TIMER12 */
 #endif /* CONFIG_MN10300_TTYSM1 */
+3-2
arch/mn10300/kernel/signal.c
···
 	else
 		ret = setup_frame(sig, ka, oldset, regs);
 	if (ret)
-		return;
+		return ret;

 	signal_delivered(sig, info, ka, regs,
-			test_thread_flag(TIF_SINGLESTEP));
+			 test_thread_flag(TIF_SINGLESTEP));
+	return 0;
 }

 /*
···
 	 */
 	if (unlikely(irq_happened != PACA_IRQ_HARD_DIS))
 		__hard_irq_disable();
-#ifdef CONFIG_TRACE_IRQFLAG
+#ifdef CONFIG_TRACE_IRQFLAGS
 	else {
 		/*
 		 * We should already be hard disabled here. We had bugs
···
 		local_irq_enable();
 	} else
 		__hard_irq_enable();
+}
+
+/*
+ * This is a helper to use when about to go into idle low-power
+ * when the latter has the side effect of re-enabling interrupts
+ * (such as calling H_CEDE under pHyp).
+ *
+ * You call this function with interrupts soft-disabled (this is
+ * already the case when ppc_md.power_save is called). The function
+ * will return whether to enter power save or just return.
+ *
+ * In the former case, it will have notified lockdep of interrupts
+ * being re-enabled and generally sanitized the lazy irq state,
+ * and in the latter case it will leave with interrupts hard
+ * disabled and marked as such, so the local_irq_enable() call
+ * in cpu_idle() will properly re-enable everything.
+ */
+bool prep_irq_for_idle(void)
+{
+	/*
+	 * First we need to hard disable to ensure no interrupt
+	 * occurs before we effectively enter the low power state
+	 */
+	hard_irq_disable();
+
+	/*
+	 * If anything happened while we were soft-disabled,
+	 * we return now and do not enter the low power state.
+	 */
+	if (lazy_irq_pending())
+		return false;
+
+	/* Tell lockdep we are about to re-enable */
+	trace_hardirqs_on();
+
+	/*
+	 * Mark interrupts as soft-enabled and clear the
+	 * PACA_IRQ_HARD_DIS from the pending mask since we
+	 * are about to hard enable as well as a side effect
+	 * of entering the low power state.
+	 */
+	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
+	local_paca->soft_enabled = 1;
+
+	/* Tell the caller to enter the low power state */
+	return true;
 }

 #endif /* CONFIG_PPC64 */
+1
arch/powerpc/kvm/book3s_pr_papr.c
···
 	case H_PUT_TCE:
 		return kvmppc_h_pr_put_tce(vcpu);
 	case H_CEDE:
+		vcpu->arch.shared->msr |= MSR_EE;
 		kvm_vcpu_block(vcpu);
 		clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
 		vcpu->stat.halt_wakeup++;
+1-1
arch/powerpc/mm/numa.c
···
 	unsigned int n, rc, ranges, is_kexec_kdump = 0;
 	unsigned long lmb_size, base, size, sz;
 	int nid;
-	struct assoc_arrays aa;
+	struct assoc_arrays aa = { .arrays = NULL };

 	n = of_get_drconf_memory(memory, &dm);
 	if (!n)
+6-5
arch/powerpc/platforms/cell/pervasive.c
···
 {
 	unsigned long ctrl, thread_switch_control;

-	/*
-	 * We need to hard disable interrupts, the local_irq_enable() done by
-	 * our caller upon return will hard re-enable.
-	 */
-	hard_irq_disable();
+	/* Ensure our interrupt state is properly tracked */
+	if (!prep_irq_for_idle())
+		return;

 	ctrl = mfspr(SPRN_CTRLF);
···
 	 */
 	ctrl &= ~(CTRL_RUNLATCH | CTRL_TE);
 	mtspr(SPRN_CTRLT, ctrl);
+
+	/* Re-enable interrupts in MSR */
+	__hard_irq_enable();
 }

 static int cbe_system_reset_exception(struct pt_regs *regs)
+10-7
arch/powerpc/platforms/pseries/processor_idle.c
···
 static void check_and_cede_processor(void)
 {
 	/*
-	 * Interrupts are soft-disabled at this point,
-	 * but not hard disabled. So an interrupt might have
-	 * occurred before entering NAP, and would be potentially
-	 * lost (edge events, decrementer events, etc...) unless
-	 * we first hard disable then check.
+	 * Ensure our interrupt state is properly tracked,
+	 * also checks if no interrupt has occurred while we
+	 * were soft-disabled
 	 */
-	hard_irq_disable();
-	if (!lazy_irq_pending())
+	if (prep_irq_for_idle()) {
 		cede_processor();
+#ifdef CONFIG_TRACE_IRQFLAGS
+		/* Ensure that H_CEDE returns with IRQs on */
+		if (WARN_ON(!(mfmsr() & MSR_EE)))
+			__hard_irq_enable();
+#endif
+	}
 }

 static int dedicated_cede_loop(struct cpuidle_device *dev,
+14-3
arch/sh/include/asm/io_noioport.h
···
 	return -1;
 }

-#define outb(x, y)	BUG()
-#define outw(x, y)	BUG()
-#define outl(x, y)	BUG()
+static inline void outb(unsigned char x, unsigned long port)
+{
+	BUG();
+}
+
+static inline void outw(unsigned short x, unsigned long port)
+{
+	BUG();
+}
+
+static inline void outl(unsigned int x, unsigned long port)
+{
+	BUG();
+}

 #define inb_p(addr)	inb(addr)
 #define inw_p(addr)	inw(addr)
···
 	return nr;
 }

+#ifdef CONFIG_SECCOMP
+static int vsyscall_seccomp(struct task_struct *tsk, int syscall_nr)
+{
+	if (!seccomp_mode(&tsk->seccomp))
+		return 0;
+	task_pt_regs(tsk)->orig_ax = syscall_nr;
+	task_pt_regs(tsk)->ax = syscall_nr;
+	return __secure_computing(syscall_nr);
+}
+#else
+#define vsyscall_seccomp(_tsk, _nr) 0
+#endif
+
 static bool write_ok_or_segv(unsigned long ptr, size_t size)
 {
 	/*
···
 	int vsyscall_nr;
 	int prev_sig_on_uaccess_error;
 	long ret;
+	int skip;

 	/*
 	 * No point in checking CS -- the only way to get here is a user mode
···
 	}

 	tsk = current;
-	if (seccomp_mode(&tsk->seccomp))
-		do_exit(SIGKILL);
-
 	/*
 	 * With a real vsyscall, page faults cause SIGSEGV.  We want to
 	 * preserve that behavior to make writing exploits harder.
···
 	 * address 0".
 	 */
 	ret = -EFAULT;
+	skip = 0;
 	switch (vsyscall_nr) {
 	case 0:
+		skip = vsyscall_seccomp(tsk, __NR_gettimeofday);
+		if (skip)
+			break;
+
 		if (!write_ok_or_segv(regs->di, sizeof(struct timeval)) ||
 		    !write_ok_or_segv(regs->si, sizeof(struct timezone)))
 			break;
···
 		break;

 	case 1:
+		skip = vsyscall_seccomp(tsk, __NR_time);
+		if (skip)
+			break;
+
 		if (!write_ok_or_segv(regs->di, sizeof(time_t)))
 			break;
···
 		break;

 	case 2:
+		skip = vsyscall_seccomp(tsk, __NR_getcpu);
+		if (skip)
+			break;
+
 		if (!write_ok_or_segv(regs->di, sizeof(unsigned)) ||
 		    !write_ok_or_segv(regs->si, sizeof(unsigned)))
 			break;
···
 	}

 	current_thread_info()->sig_on_uaccess_error = prev_sig_on_uaccess_error;
+
+	if (skip) {
+		if ((long)regs->ax <= 0L) /* seccomp errno emulation */
+			goto do_ret;
+		goto done; /* seccomp trace/trap */
+	}

 	if (ret == -EFAULT) {
 		/* Bad news -- userspace fed a bad pointer to a vsyscall. */
···

 	regs->ax = ret;

+do_ret:
 	/* Emulate a ret instruction. */
 	regs->ip = caller;
 	regs->sp += 8;
-
+done:
 	return true;

sigsegv:
···

 	/* Don't leak any random bits. */

-	memset(elfregs, 0, sizeof (elfregs));
+	memset(elfregs, 0, sizeof(*elfregs));

 	/* Note:  PS.EXCM is not set while user task is running; its
 	 * being set in regs->ps is for exception handling convenience.
-22
drivers/acpi/acpica/hwsleep.c
···
 		return_ACPI_STATUS(status);
 	}

-	if (sleep_state != ACPI_STATE_S5) {
-		/*
-		 * Disable BM arbitration. This feature is contained within an
-		 * optional register (PM2 Control), so ignore a BAD_ADDRESS
-		 * exception.
-		 */
-		status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
-		if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
-			return_ACPI_STATUS(status);
-		}
-	}
-
 	/*
 	 * 1) Disable/Clear all GPEs
 	 * 2) Enable all wakeup GPEs
···
 	acpi_write_bit_register(acpi_gbl_fixed_event_info
 				[ACPI_EVENT_POWER_BUTTON].
 				status_register_id, ACPI_CLEAR_STATUS);
-
-	/*
-	 * Enable BM arbitration. This feature is contained within an
-	 * optional register (PM2 Control), so ignore a BAD_ADDRESS
-	 * exception.
-	 */
-	status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
-	if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
-		return_ACPI_STATUS(status);
-	}

 	acpi_hw_execute_sleep_method(METHOD_PATHNAME__SST, ACPI_SST_WORKING);
 	return_ACPI_STATUS(status);
+1-1
drivers/acpi/acpica/nspredef.c
···
 	/* Create the new outer package and populate it */

 	status =
-	    acpi_ns_wrap_with_package(data, *elements,
+	    acpi_ns_wrap_with_package(data, return_object,
 				      return_object_ptr);
 	if (ACPI_FAILURE(status)) {
 		return (status);
+4-2
drivers/acpi/processor_core.c
···
 	 *     Processor (CPU3, 0x03, 0x00000410, 0x06) {}
 	 * }
 	 *
-	 * Ignores apic_id and always return 0 for CPU0's handle.
+	 * Ignores apic_id and always returns 0 for the processor
+	 * handle with acpi id 0 if nr_cpu_ids is 1.
+	 * This should be the case if SMP tables are not found.
 	 * Return -1 for other CPU's handle.
 	 */
-	if (acpi_id == 0)
+	if (nr_cpu_ids <= 1 && acpi_id == 0)
 		return acpi_id;
 	else
 		return apic_id;
+2
drivers/base/dd.c
···
 #include <linux/wait.h>
 #include <linux/async.h>
 #include <linux/pm_runtime.h>
+#include <scsi/scsi_scan.h>

 #include "base.h"
 #include "power/power.h"
···
 	/* wait for the known devices to complete their probing */
 	wait_event(probe_waitqueue, atomic_read(&probe_count) == 0);
 	async_synchronize_full();
+	scsi_complete_async_scans();
 }
 EXPORT_SYMBOL_GPL(wait_for_device_probe);
+3-5
drivers/block/loop.c
···
 	struct gendisk *disk;
 	int err;

+	err = -ENOMEM;
 	lo = kzalloc(sizeof(*lo), GFP_KERNEL);
-	if (!lo) {
-		err = -ENOMEM;
+	if (!lo)
 		goto out;
-	}

-	err = idr_pre_get(&loop_index_idr, GFP_KERNEL);
-	if (err < 0)
+	if (!idr_pre_get(&loop_index_idr, GFP_KERNEL))
 		goto out_free_dev;

 	if (i >= 0) {
···

 config GPIO_MSM_V1
 	tristate "Qualcomm MSM GPIO v1"
-	depends on GPIOLIB && ARCH_MSM
+	depends on GPIOLIB && ARCH_MSM && (ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50)
 	help
 	  Say yes here to support the GPIO interface on ARM v6 based
 	  Qualcomm MSM chips.  Most of the pins on the MSM can be
···
 	writel(~0, port->base + GPIO_ISR);

 	if (mxc_gpio_hwtype == IMX21_GPIO) {
-		/* setup one handler for all GPIO interrupts */
-		if (pdev->id == 0)
-			irq_set_chained_handler(port->irq,
-						mx2_gpio_irq_handler);
+		/*
+		 * Setup one handler for all GPIO interrupts. Actually setting
+		 * the handler is needed only once, but doing it for every port
+		 * is more robust and easier.
+		 */
+		irq_set_chained_handler(port->irq, mx2_gpio_irq_handler);
 	} else {
 		/* setup one handler for each entry */
 		irq_set_chained_handler(port->irq, mx3_gpio_irq_handler);
+13-1
drivers/gpio/gpio-omap.c
···
 	if (bank->dbck_enable_mask && !bank->dbck_enabled) {
 		clk_enable(bank->dbck);
 		bank->dbck_enabled = true;
+
+		__raw_writel(bank->dbck_enable_mask,
+			     bank->base + bank->regs->debounce_en);
 	}
 }

 static inline void _gpio_dbck_disable(struct gpio_bank *bank)
 {
 	if (bank->dbck_enable_mask && bank->dbck_enabled) {
+		/*
+		 * Disable debounce before cutting it's clock. If debounce is
+		 * enabled but the clock is not, GPIO module seems to be unable
+		 * to detect events and generate interrupts at least on OMAP3.
+		 */
+		__raw_writel(0, bank->base + bank->regs->debounce_en);
+
 		clk_disable(bank->dbck);
 		bank->dbck_enabled = false;
 	}
···
 	bank->is_mpuio = pdata->is_mpuio;
 	bank->non_wakeup_gpios = pdata->non_wakeup_gpios;
 	bank->loses_context = pdata->loses_context;
-	bank->get_context_loss_count = pdata->get_context_loss_count;
 	bank->regs = pdata->regs;
 #ifdef CONFIG_OF_GPIO
 	bank->chip.of_node = of_node_get(node);
···
 	omap_gpio_mod_init(bank);
 	omap_gpio_chip_init(bank);
 	omap_gpio_show_rev(bank);
+
+	if (bank->loses_context)
+		bank->get_context_loss_count = pdata->get_context_loss_count;

 	pm_runtime_put(bank->dev);
+3-2
drivers/gpio/gpio-sta2x11.c
···
 	}
 	spin_lock_init(&chip->lock);
 	gsta_gpio_setup(chip);
-	for (i = 0; i < GSTA_NR_GPIO; i++)
-		gsta_set_config(chip, i, gpio_pdata->pinconfig[i]);
+	if (gpio_pdata)
+		for (i = 0; i < GSTA_NR_GPIO; i++)
+			gsta_set_config(chip, i, gpio_pdata->pinconfig[i]);

 	/* 384 was used in previous code: be compatible for other drivers */
 	err = irq_alloc_descs(-1, 384, GSTA_NR_GPIO, NUMA_NO_NODE);
···
 	return REG_READ(BLC_PWM_CTL2) & PWM_LEGACY_MODE;
 }

-static int cdv_get_brightness(struct backlight_device *bd)
-{
-	struct drm_device *dev = bl_get_data(bd);
-	u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
-
-	if (cdv_backlight_combination_mode(dev)) {
-		u8 lbpc;
-
-		val &= ~1;
-		pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
-		val *= lbpc;
-	}
-	return val;
-}
-
 static u32 cdv_get_max_backlight(struct drm_device *dev)
 {
 	u32 max = REG_READ(BLC_PWM_CTL);
···
 	return max;
 }

+static int cdv_get_brightness(struct backlight_device *bd)
+{
+	struct drm_device *dev = bl_get_data(bd);
+	u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
+
+	if (cdv_backlight_combination_mode(dev)) {
+		u8 lbpc;
+
+		val &= ~1;
+		pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
+		val *= lbpc;
+	}
+	return (val * 100)/cdv_get_max_backlight(dev);
+
+}
+
 static int cdv_set_brightness(struct backlight_device *bd)
 {
 	struct drm_device *dev = bl_get_data(bd);
···
 	/* Percentage 1-100% being valid */
 	if (level < 1)
 		level = 1;
+
+	level *= cdv_get_max_backlight(dev);
+	level /= 100;

 	if (cdv_backlight_combination_mode(dev)) {
 		u32 max = cdv_get_max_backlight(dev);
···

 	cdv_backlight_device->props.brightness =
 			cdv_get_brightness(cdv_backlight_device);
-	cdv_backlight_device->props.max_brightness = cdv_get_max_backlight(dev);
 	backlight_update_status(cdv_backlight_device);
 	dev_priv->backlight_device = cdv_backlight_device;
 	return 0;
+3-5
drivers/gpu/drm/gma500/opregion.c
···

 #define ASLE_CBLV_VALID         (1<<31)

+static struct psb_intel_opregion *system_opregion;
+
 static u32 asle_set_backlight(struct drm_device *dev, u32 bclp)
 {
 	struct drm_psb_private *dev_priv = dev->dev_private;
···
 	struct drm_psb_private *dev_priv = dev->dev_private;
 	struct opregion_asle *asle = dev_priv->opregion.asle;

-	if (asle) {
+	if (asle && system_opregion) {
 		/* Don't do this on Medfield or other non PC like devices, they
 		   use the bit for something different altogether */
 		psb_enable_pipestat(dev_priv, 0, PIPE_LEGACY_BLC_EVENT_ENABLE);
···
 #define ACPI_EV_LID            (1<<1)
 #define ACPI_EV_DOCK           (1<<2)

-static struct psb_intel_opregion *system_opregion;

 static int psb_intel_opregion_video_event(struct notifier_block *nb,
 					  unsigned long val, void *data)
···
 		system_opregion = opregion;
 		register_acpi_notifier(&psb_intel_opregion_notifier);
 	}
-
-	if (opregion->asle)
-		psb_intel_opregion_enable_asle(dev);
 }

 void psb_intel_opregion_fini(struct drm_device *dev)
···
 static ATOMIC_NOTIFIER_HEAD(ppr_notifier);
 int amd_iommu_max_glx_val = -1;

+static struct dma_map_ops amd_iommu_dma_ops;
+
 /*
  * general struct to manage commands send to an IOMMU
  */
···
 		return;

 	de_fflush  = debugfs_create_bool("fullflush", 0444, stats_dir,
-					 (u32 *)&amd_iommu_unmap_flush);
+					 &amd_iommu_unmap_flush);

 	amd_iommu_stats_add(&compl_wait);
 	amd_iommu_stats_add(&cnt_map_single);
···
 		spin_lock_irqsave(&iommu_pd_list_lock, flags);
 		list_add_tail(&dma_domain->list, &iommu_pd_list);
 		spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+
+		dev_data = get_dev_data(dev);
+
+		if (!dev_data->passthrough)
+			dev->archdata.dma_ops = &amd_iommu_dma_ops;
+		else
+			dev->archdata.dma_ops = &nommu_dma_ops;

 		break;
 	case BUS_NOTIFY_DEL_DEVICE:
+3-3
drivers/iommu/amd_iommu_init.c
···
 					   to handle */
 LIST_HEAD(amd_iommu_unity_map);		/* a list of required unity mappings
 					   we find in ACPI */
-bool amd_iommu_unmap_flush;		/* if true, flush on every unmap */
+u32 amd_iommu_unmap_flush;		/* if true, flush on every unmap */

 LIST_HEAD(amd_iommu_list);		/* list of all AMD IOMMUs in the
 					   system */
···

 	amd_iommu_init_api();

+	x86_platform.iommu_shutdown = disable_iommus;
+
 	if (iommu_pass_through)
 		goto out;
···
 		printk(KERN_INFO "AMD-Vi: IO/TLB flush on unmap enabled\n");
 	else
 		printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n");
-
-	x86_platform.iommu_shutdown = disable_iommus;

 out:
 	return ret;
+1-1
drivers/iommu/amd_iommu_types.h
···
  * If true, the addresses will be flushed on unmap time, not when
  * they are reused
  */
-extern bool amd_iommu_unmap_flush;
+extern u32 amd_iommu_unmap_flush;

 /* Smallest number of PASIDs supported by any IOMMU in the system */
 extern u32 amd_iommu_max_pasids;
···
 #include <linux/reboot.h>
 #include "leds.h"

+static int panic_heartbeats;
+
 struct heartbeat_trig_data {
 	unsigned int phase;
 	unsigned int period;
···
 	struct heartbeat_trig_data *heartbeat_data = led_cdev->trigger_data;
 	unsigned long brightness = LED_OFF;
 	unsigned long delay = 0;
+
+	if (unlikely(panic_heartbeats)) {
+		led_set_brightness(led_cdev, LED_OFF);
+		return;
+	}

 	/* acts like an actual heart beat -- ie thump-thump-pause... */
 	switch (heartbeat_data->phase) {
···
 	return NOTIFY_DONE;
 }

+static int heartbeat_panic_notifier(struct notifier_block *nb,
+				    unsigned long code, void *unused)
+{
+	panic_heartbeats = 1;
+	return NOTIFY_DONE;
+}
+
 static struct notifier_block heartbeat_reboot_nb = {
 	.notifier_call = heartbeat_reboot_notifier,
 };

 static struct notifier_block heartbeat_panic_nb = {
-	.notifier_call = heartbeat_reboot_notifier,
+	.notifier_call = heartbeat_panic_notifier,
 };

 static int __init heartbeat_trig_init(void)
+24-13
drivers/md/md.c
···29312931 * can be sane */29322932 return -EBUSY;29332933 rdev->data_offset = offset;29342934+ rdev->new_data_offset = offset;29342935 return len;29352936}29362937···39273926 return sprintf(page, "%s\n", array_states[st]);39283927}3929392839303930-static int do_md_stop(struct mddev * mddev, int ro, int is_open);39313931-static int md_set_readonly(struct mddev * mddev, int is_open);39293929+static int do_md_stop(struct mddev * mddev, int ro, struct block_device *bdev);39303930+static int md_set_readonly(struct mddev * mddev, struct block_device *bdev);39323931static int do_md_run(struct mddev * mddev);39333932static int restart_array(struct mddev *mddev);39343933···39443943 /* stopping an active array */39453944 if (atomic_read(&mddev->openers) > 0)39463945 return -EBUSY;39473947- err = do_md_stop(mddev, 0, 0);39463946+ err = do_md_stop(mddev, 0, NULL);39483947 break;39493948 case inactive:39503949 /* stopping an active array */39513950 if (mddev->pers) {39523951 if (atomic_read(&mddev->openers) > 0)39533952 return -EBUSY;39543954- err = do_md_stop(mddev, 2, 0);39533953+ err = do_md_stop(mddev, 2, NULL);39553954 } else39563955 err = 0; /* already inactive */39573956 break;···39593958 break; /* not supported yet */39603959 case readonly:39613960 if (mddev->pers)39623962- err = md_set_readonly(mddev, 0);39613961+ err = md_set_readonly(mddev, NULL);39633962 else {39643963 mddev->ro = 1;39653964 set_disk_ro(mddev->gendisk, 1);···39693968 case read_auto:39703969 if (mddev->pers) {39713970 if (mddev->ro == 0)39723972- err = md_set_readonly(mddev, 0);39713971+ err = md_set_readonly(mddev, NULL);39733972 else if (mddev->ro == 1)39743973 err = restart_array(mddev);39753974 if (err == 0) {···53525351}53535352EXPORT_SYMBOL_GPL(md_stop);5354535353555355-static int md_set_readonly(struct mddev *mddev, int is_open)53545354+static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)53565355{53575356 int err = 0;53585357 mutex_lock(&mddev->open_mutex);53595359- if (atomic_read(&mddev->openers) > is_open) {53585358+ if (atomic_read(&mddev->openers) > !!bdev) {53605359 printk("md: %s still in use.\n",mdname(mddev));53615360 err = -EBUSY;53625361 goto out;53635362 }53635363+ if (bdev)53645364+ sync_blockdev(bdev);53645365 if (mddev->pers) {53655366 __md_stop_writes(mddev);53665367···53845381 * 0 - completely stop and dis-assemble array53855382 * 2 - stop but do not disassemble array53865383 */53875387-static int do_md_stop(struct mddev * mddev, int mode, int is_open)53845384+static int do_md_stop(struct mddev * mddev, int mode,53855385+ struct block_device *bdev)53885386{53895387 struct gendisk *disk = mddev->gendisk;53905388 struct md_rdev *rdev;5391538953925390 mutex_lock(&mddev->open_mutex);53935393- if (atomic_read(&mddev->openers) > is_open ||53915391+ if (atomic_read(&mddev->openers) > !!bdev ||53945392 mddev->sysfs_active) {53955393 printk("md: %s still in use.\n",mdname(mddev));53965394 mutex_unlock(&mddev->open_mutex);53975395 return -EBUSY;53985396 }53975397+ if (bdev)53985398+ /* It is possible IO was issued on some other53995399+ * open file which was closed before we took ->open_mutex.54005400+ * As that was not the last close __blkdev_put will not54015401+ * have called sync_blockdev, so we must.54025402+ */54035403+ sync_blockdev(bdev);5399540454005405 if (mddev->pers) {54015406 if (mddev->ro)···54775466 err = do_md_run(mddev);54785467 if (err) {54795468 printk(KERN_WARNING "md: do_md_run() returned %d\n", err);54805480- do_md_stop(mddev, 0, 0);54695469+ do_md_stop(mddev, 0, NULL);54815470 }54825471}54835472···64926481 goto done_unlock;6493648264946483 case STOP_ARRAY:64956495- err = do_md_stop(mddev, 0, 1);64846484+ err = do_md_stop(mddev, 0, bdev);64966485 goto done_unlock;6497648664986487 case STOP_ARRAY_RO:64996499- err = md_set_readonly(mddev, 1);64886488+ err = md_set_readonly(mddev, bdev);65006489 goto done_unlock;6501649065026491 case BLKROSET:
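The `is_open` flag becomes the opener's `block_device` pointer, and `!!bdev` collapses that pointer back to the 0-or-1 opener count the old flag encoded. The double negation is plain C:

```c
#include <assert.h>

/*
 * How many openers may still hold the device while we stop it: just the
 * caller itself, and only when it passed its block_device in.  NULL
 * collapses to 0, any non-NULL pointer to 1.
 */
static int allowed_openers(const void *bdev)
{
    return !!bdev;
}
```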
+10-3
drivers/md/raid1.c
···1818181818191819 if (atomic_dec_and_test(&r1_bio->remaining)) {18201820 /* if we're here, all write(s) have completed, so clean up */18211821- md_done_sync(mddev, r1_bio->sectors, 1);18221822- put_buf(r1_bio);18211821+ int s = r1_bio->sectors;18221822+ if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||18231823+ test_bit(R1BIO_WriteError, &r1_bio->state))18241824+ reschedule_retry(r1_bio);18251825+ else {18261826+ put_buf(r1_bio);18271827+ md_done_sync(mddev, s, 1);18281828+ }18231829 }18241830}18251831···24912485 */24922486 if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {24932487 atomic_set(&r1_bio->remaining, read_targets);24942494- for (i = 0; i < conf->raid_disks * 2; i++) {24882488+ for (i = 0; i < conf->raid_disks * 2 && read_targets; i++) {24952489 bio = r1_bio->bios[i];24962490 if (bio->bi_end_io == end_sync_read) {24912491+ read_targets--;24972492 md_sync_acct(bio->bi_bdev, nr_sectors);24982493 generic_make_request(bio);24992494 }
+1-1
drivers/media/video/em28xx/em28xx-cards.c
···2893289328942894 if (dev->board.has_dvb)28952895 request_module("em28xx-dvb");28962896- if (dev->board.has_ir_i2c && !disable_ir)28962896+ if (dev->board.ir_codes && !disable_ir)28972897 request_module("em28xx-rc");28982898}28992899
+8-5
drivers/media/video/gspca/sn9c20x.c
···20702070 set_gamma(gspca_dev, v4l2_ctrl_g_ctrl(sd->gamma));20712071 set_redblue(gspca_dev, v4l2_ctrl_g_ctrl(sd->blue),20722072 v4l2_ctrl_g_ctrl(sd->red));20732073- set_gain(gspca_dev, v4l2_ctrl_g_ctrl(sd->gain));20742074- set_exposure(gspca_dev, v4l2_ctrl_g_ctrl(sd->exposure));20752075- set_hvflip(gspca_dev, v4l2_ctrl_g_ctrl(sd->hflip),20762076- v4l2_ctrl_g_ctrl(sd->vflip));20732073+ if (sd->gain)20742074+ set_gain(gspca_dev, v4l2_ctrl_g_ctrl(sd->gain));20752075+ if (sd->exposure)20762076+ set_exposure(gspca_dev, v4l2_ctrl_g_ctrl(sd->exposure));20772077+ if (sd->hflip)20782078+ set_hvflip(gspca_dev, v4l2_ctrl_g_ctrl(sd->hflip),20792079+ v4l2_ctrl_g_ctrl(sd->vflip));2077208020782081 reg_w1(gspca_dev, 0x1007, 0x20);20792082 reg_w1(gspca_dev, 0x1061, 0x03);···21792176 struct sd *sd = (struct sd *) gspca_dev;21802177 int avg_lum;2181217821822182- if (!v4l2_ctrl_g_ctrl(sd->autogain))21792179+ if (sd->autogain == NULL || !v4l2_ctrl_g_ctrl(sd->autogain))21832180 return;2184218121852182 avg_lum = atomic_read(&sd->avg_lum);
drivers/media/video/s5p-fimc/fimc-lite.c
···451451static int fimc_lite_open(struct file *file)452452{453453 struct fimc_lite *fimc = video_drvdata(file);454454- int ret = v4l2_fh_open(file);454454+ int ret;455455456456- if (ret)457457- return ret;456456+ if (mutex_lock_interruptible(&fimc->lock))457457+ return -ERESTARTSYS;458458459459 set_bit(ST_FLITE_IN_USE, &fimc->state);460460- pm_runtime_get_sync(&fimc->pdev->dev);460460+ ret = pm_runtime_get_sync(&fimc->pdev->dev);461461+ if (ret < 0)462462+ goto done;461463462462- if (++fimc->ref_count != 1 || fimc->out_path != FIMC_IO_DMA)463463- return ret;464464+ ret = v4l2_fh_open(file);465465+ if (ret < 0)466466+ goto done;464467465465- ret = fimc_pipeline_initialize(&fimc->pipeline, &fimc->vfd->entity,466466- true);467467- if (ret < 0) {468468- v4l2_err(fimc->vfd, "Video pipeline initialization failed\n");469469- pm_runtime_put_sync(&fimc->pdev->dev);470470- fimc->ref_count--;471471- v4l2_fh_release(file);472472- clear_bit(ST_FLITE_IN_USE, &fimc->state);468468+ if (++fimc->ref_count == 1 && fimc->out_path == FIMC_IO_DMA) {469469+ ret = fimc_pipeline_initialize(&fimc->pipeline,470470+ &fimc->vfd->entity, true);471471+ if (ret < 0) {472472+ pm_runtime_put_sync(&fimc->pdev->dev);473473+ fimc->ref_count--;474474+ v4l2_fh_release(file);475475+ clear_bit(ST_FLITE_IN_USE, &fimc->state);476476+ }477477+478478+ fimc_lite_clear_event_counters(fimc);473479 }474474-475475- fimc_lite_clear_event_counters(fimc);480480+done:481481+ mutex_unlock(&fimc->lock);476482 return ret;477483}478484479485static int fimc_lite_close(struct file *file)480486{481487 struct fimc_lite *fimc = video_drvdata(file);488488+ int ret;489489+490490+ if (mutex_lock_interruptible(&fimc->lock))491491+ return -ERESTARTSYS;482492483493 if (--fimc->ref_count == 0 && fimc->out_path == FIMC_IO_DMA) {484494 clear_bit(ST_FLITE_IN_USE, &fimc->state);···502492 if (fimc->ref_count == 0)503493 vb2_queue_release(&fimc->vb_queue);504494505505- return v4l2_fh_release(file);495495+ ret = v4l2_fh_release(file);496496+497497+ mutex_unlock(&fimc->lock);498498+ return ret;506499}507500508501static unsigned int fimc_lite_poll(struct file *file,509502 struct poll_table_struct *wait)510503{511504 struct fimc_lite *fimc = video_drvdata(file);512512- return vb2_poll(&fimc->vb_queue, file, wait);505505+ int ret;506506+507507+ if (mutex_lock_interruptible(&fimc->lock))508508+ return POLL_ERR;509509+510510+ ret = vb2_poll(&fimc->vb_queue, file, wait);511511+ mutex_unlock(&fimc->lock);512512+513513+ return ret;513514}514515515516static int fimc_lite_mmap(struct file *file, struct vm_area_struct *vma)516517{517518 struct fimc_lite *fimc = video_drvdata(file);518518- return vb2_mmap(&fimc->vb_queue, vma);519519+ int ret;520520+521521+ if (mutex_lock_interruptible(&fimc->lock))522522+ return -ERESTARTSYS;523523+524524+ ret = vb2_mmap(&fimc->vb_queue, vma);525525+ mutex_unlock(&fimc->lock);526526+527527+ return ret;519528}520529521530static const struct v4l2_file_operations fimc_lite_fops = {···791762 if (fimc_lite_active(fimc))792763 return -EBUSY;793764794794- media_entity_pipeline_start(&sensor->entity, p->m_pipeline);765765+ ret = media_entity_pipeline_start(&sensor->entity, p->m_pipeline);766766+ if (ret < 0)767767+ return ret;795768796769 ret = fimc_pipeline_validate(fimc);797770 if (ret) {···15391508 return 0;1540150915411510 ret = fimc_lite_stop_capture(fimc, suspend);15421542- if (ret)15111511+ if (ret < 0 || !fimc_lite_active(fimc))15431512 return ret;1544151315451514 return fimc_pipeline_shutdown(&fimc->pipeline);
+24-24
drivers/media/video/s5p-fimc/fimc-mdevice.c
···193193194194int fimc_pipeline_shutdown(struct fimc_pipeline *p)195195{196196- struct media_entity *me = &p->subdevs[IDX_SENSOR]->entity;196196+ struct media_entity *me;197197 int ret;198198199199+ if (!p || !p->subdevs[IDX_SENSOR])200200+ return -EINVAL;201201+202202+ me = &p->subdevs[IDX_SENSOR]->entity;199203 mutex_lock(&me->parent->graph_mutex);200204 ret = __fimc_pipeline_shutdown(p);201205 mutex_unlock(&me->parent->graph_mutex);···502498 * @source: the source entity to create links to all fimc entities from503499 * @sensor: sensor subdev linked to FIMC[fimc_id] entity, may be null504500 * @pad: the source entity pad index505505- * @fimc_id: index of the fimc device for which link should be enabled501501+ * @link_mask: bitmask of the fimc devices for which link should be enabled506502 */507503static int __fimc_md_create_fimc_sink_links(struct fimc_md *fmd,508504 struct media_entity *source,509505 struct v4l2_subdev *sensor,510510- int pad, int fimc_id)506506+ int pad, int link_mask)511507{512508 struct fimc_sensor_info *s_info;513509 struct media_entity *sink;···524520 if (!fmd->fimc[i]->variant->has_cam_if)525521 continue;526522527527- flags = (i == fimc_id) ? MEDIA_LNK_FL_ENABLED : 0;523523+ flags = ((1 << i) & link_mask) ? MEDIA_LNK_FL_ENABLED : 0;528524529525 sink = &fmd->fimc[i]->vid_cap.subdev.entity;530526 ret = media_entity_create_link(source, pad, sink,···556552 if (!fmd->fimc_lite[i])557553 continue;558554559559- flags = (i == fimc_id) ? MEDIA_LNK_FL_ENABLED : 0;555555+ if (link_mask & (1 << (i + FIMC_MAX_DEVS)))556556+ flags = MEDIA_LNK_FL_ENABLED;557557+ else558558+ flags = 0;560559561560 sink = &fmd->fimc_lite[i]->subdev.entity;562561 ret = media_entity_create_link(source, pad, sink,···621614 struct s5p_fimc_isp_info *pdata;622615 struct fimc_sensor_info *s_info;623616 struct media_entity *source, *sink;624624- int i, pad, fimc_id = 0;625625- int ret = 0;626626- u32 flags;617617+ int i, pad, fimc_id = 0, ret = 0;618618+ u32 flags, link_mask = 0;627619628620 for (i = 0; i < fmd->num_sensors; i++) {629621 if (fmd->sensor[i].subdev == NULL)···674668 if (source == NULL)675669 continue;676670671671+ link_mask = 1 << fimc_id++;677672 ret = __fimc_md_create_fimc_sink_links(fmd, source, sensor,678678- pad, fimc_id++);673673+ pad, link_mask);679674 }680675681681- fimc_id = 0;682676 for (i = 0; i < ARRAY_SIZE(fmd->csis); i++) {683677 if (fmd->csis[i].sd == NULL)684678 continue;685679 source = &fmd->csis[i].sd->entity;686680 pad = CSIS_PAD_SOURCE;687681682682+ link_mask = 1 << fimc_id++;688683 ret = __fimc_md_create_fimc_sink_links(fmd, source, NULL,689689- pad, fimc_id++);684684+ pad, link_mask);690685 }691686692687 /* Create immutable links between each FIMC's subdev and video node */···741734}742735743736static int __fimc_md_set_camclk(struct fimc_md *fmd,744744- struct fimc_sensor_info *s_info,745745- bool on)737737+ struct fimc_sensor_info *s_info,738738+ bool on)746739{747740 struct s5p_fimc_isp_info *pdata = s_info->pdata;748741 struct fimc_camclk_info *camclk;···751744 if (WARN_ON(pdata->clk_id >= FIMC_MAX_CAMCLKS) || fmd == NULL)752745 return -EINVAL;753746754754- if (s_info->clk_on == on)755755- return 0;756747 camclk = &fmd->camclk[pdata->clk_id];757748758758- dbg("camclk %d, f: %lu, clk: %p, on: %d",759759- pdata->clk_id, pdata->clk_frequency, camclk, on);749749+ dbg("camclk %d, f: %lu, use_count: %d, on: %d",750750+ pdata->clk_id, pdata->clk_frequency, camclk->use_count, on);760751761752 if (on) {762753 if (camclk->use_count > 0 &&···765760 clk_set_rate(camclk->clock, pdata->clk_frequency);766761 camclk->frequency = pdata->clk_frequency;767762 ret = clk_enable(camclk->clock);763763+ dbg("Enabled camclk %d: f: %lu", pdata->clk_id,764764+ clk_get_rate(camclk->clock));768765 }769769- s_info->clk_on = 1;770770- dbg("Enabled camclk %d: f: %lu", pdata->clk_id,771771- clk_get_rate(camclk->clock));772772-773766 return ret;774767 }775768···776773777774 if (--camclk->use_count == 0) {778775 clk_disable(camclk->clock);779779- s_info->clk_on = 0;780776 dbg("Disabled camclk %d", pdata->clk_id);781777 }782778 return ret;···791789 * devices to which sensors can be attached, either directly or through792790 * the MIPI CSI receiver. The clock is allowed here to be used by793791 * multiple sensors concurrently if they use same frequency.794794- * The per sensor subdev clk_on attribute helps to synchronize accesses795795- * to the sclk_cam clocks from the video and media device nodes.796792 * This function should only be called when the graph mutex is held.797793 */798794int fimc_md_set_camclk(struct v4l2_subdev *sd, bool on)
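Switching from a single `fimc_id` to a `link_mask` lets any number of sink links be enabled per source. The selection test is an ordinary bitmask check; a standalone sketch (the flag value is a made-up stand-in for `MEDIA_LNK_FL_ENABLED`):

```c
#include <assert.h>

#define LNK_FL_ENABLED 0x1u  /* hypothetical stand-in for MEDIA_LNK_FL_ENABLED */

/*
 * Old scheme: link i was enabled iff i == fimc_id (exactly one match).
 * New scheme: link i is enabled iff bit i is set in link_mask, so
 * several sink links can be enabled for one source.
 */
static unsigned int link_flags(unsigned int i, unsigned int link_mask)
{
    return ((1u << i) & link_mask) ? LNK_FL_ENABLED : 0;
}
```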
-2
drivers/media/video/s5p-fimc/fimc-mdevice.h
···4747 * @pdata: sensor's atrributes passed as media device's platform data4848 * @subdev: image sensor v4l2 subdev4949 * @host: fimc device the sensor is currently linked to5050- * @clk_on: sclk_cam clock's state associated with this subdev5150 *5251 * This data structure applies to image sensor and the writeback subdevs.5352 */···5455 struct s5p_fimc_isp_info *pdata;5556 struct v4l2_subdev *subdev;5657 struct fimc_dev *host;5757- bool clk_on;5858};59596060/**
+1
drivers/media/video/s5p-mfc/s5p_mfc_dec.c
···996996997997 for (i = 0; i < NUM_CTRLS; i++) {998998 if (IS_MFC51_PRIV(controls[i].id)) {999999+ memset(&cfg, 0, sizeof(struct v4l2_ctrl_config));9991000 cfg.ops = &s5p_mfc_dec_ctrl_ops;10001001 cfg.id = controls[i].id;10011002 cfg.min = controls[i].minimum;
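Without the added `memset()`, each loop iteration reuses `cfg` and inherits whatever fields the previous control filled in. A cut-down demonstration of that stale-field bug (struct fields are a simplified stand-in for `struct v4l2_ctrl_config`):

```c
#include <assert.h>
#include <string.h>

/* Cut-down stand-in for struct v4l2_ctrl_config. */
struct ctrl_config {
    unsigned int id;
    long min, max, step;
    unsigned int menu_skip_mask;
};

/* Returns the skip mask the second registered control ends up with. */
static unsigned int second_ctrl_skip_mask(int clear_between)
{
    struct ctrl_config cfg;

    /* First control: a menu control that sets a skip mask. */
    memset(&cfg, 0, sizeof(cfg));
    cfg.id = 1;
    cfg.menu_skip_mask = 0xf0;

    /* Second control: an integer control that never touches the mask,
     * so it silently inherits 0xf0 unless cfg is cleared first. */
    if (clear_between)
        memset(&cfg, 0, sizeof(cfg));
    cfg.id = 2;
    cfg.min = 0;
    cfg.max = 100;
    cfg.step = 1;

    return cfg.menu_skip_mask;
}
```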
+1
drivers/media/video/s5p-mfc/s5p_mfc_enc.c
···17731773 }17741774 for (i = 0; i < NUM_CTRLS; i++) {17751775 if (IS_MFC51_PRIV(controls[i].id)) {17761776+ memset(&cfg, 0, sizeof(struct v4l2_ctrl_config));17761777 cfg.ops = &s5p_mfc_enc_ctrl_ops;17771778 cfg.id = controls[i].id;17781779 cfg.min = controls[i].minimum;
+1
drivers/mfd/Kconfig
···286286 depends on I2C=y && GENERIC_HARDIRQS287287 select MFD_CORE288288 select REGMAP_I2C289289+ select IRQ_DOMAIN289290 default n290291 help291292 Say yes here if you want support for Texas Instruments TWL6040 audio
-87
drivers/mfd/ab5500-core.h
···11-/*22- * Copyright (C) 2011 ST-Ericsson33- * License terms: GNU General Public License (GPL) version 244- * Shared definitions and data structures for the AB5500 MFD driver55- */66-77-/* Read/write operation values. */88-#define AB5500_PERM_RD (0x01)99-#define AB5500_PERM_WR (0x02)1010-1111-/* Read/write permissions. */1212-#define AB5500_PERM_RO (AB5500_PERM_RD)1313-#define AB5500_PERM_RW (AB5500_PERM_RD | AB5500_PERM_WR)1414-1515-#define AB5500_MASK_BASE (0x60)1616-#define AB5500_MASK_END (0x79)1717-#define AB5500_CHIP_ID (0x20)1818-1919-/**2020- * struct ab5500_reg_range2121- * @first: the first address of the range2222- * @last: the last address of the range2323- * @perm: access permissions for the range2424- */2525-struct ab5500_reg_range {2626- u8 first;2727- u8 last;2828- u8 perm;2929-};3030-3131-/**3232- * struct ab5500_i2c_ranges3333- * @count: the number of ranges in the list3434- * @range: the list of register ranges3535- */3636-struct ab5500_i2c_ranges {3737- u8 nranges;3838- u8 bankid;3939- const struct ab5500_reg_range *range;4040-};4141-4242-/**4343- * struct ab5500_i2c_banks4444- * @count: the number of ranges in the list4545- * @range: the list of register ranges4646- */4747-struct ab5500_i2c_banks {4848- u8 nbanks;4949- const struct ab5500_i2c_ranges *bank;5050-};5151-5252-/**5353- * struct ab5500_bank5454- * @slave_addr: I2C slave_addr found in AB5500 specification5555- * @name: Documentation name of the bank. For reference5656- */5757-struct ab5500_bank {5858- u8 slave_addr;5959- const char *name;6060-};6161-6262-static const struct ab5500_bank bankinfo[AB5500_NUM_BANKS] = {6363- [AB5500_BANK_VIT_IO_I2C_CLK_TST_OTP] = {6464- AB5500_ADDR_VIT_IO_I2C_CLK_TST_OTP, "VIT_IO_I2C_CLK_TST_OTP"},6565- [AB5500_BANK_VDDDIG_IO_I2C_CLK_TST] = {6666- AB5500_ADDR_VDDDIG_IO_I2C_CLK_TST, "VDDDIG_IO_I2C_CLK_TST"},6767- [AB5500_BANK_VDENC] = {AB5500_ADDR_VDENC, "VDENC"},6868- [AB5500_BANK_SIM_USBSIM] = {AB5500_ADDR_SIM_USBSIM, "SIM_USBSIM"},6969- [AB5500_BANK_LED] = {AB5500_ADDR_LED, "LED"},7070- [AB5500_BANK_ADC] = {AB5500_ADDR_ADC, "ADC"},7171- [AB5500_BANK_RTC] = {AB5500_ADDR_RTC, "RTC"},7272- [AB5500_BANK_STARTUP] = {AB5500_ADDR_STARTUP, "STARTUP"},7373- [AB5500_BANK_DBI_ECI] = {AB5500_ADDR_DBI_ECI, "DBI-ECI"},7474- [AB5500_BANK_CHG] = {AB5500_ADDR_CHG, "CHG"},7575- [AB5500_BANK_FG_BATTCOM_ACC] = {7676- AB5500_ADDR_FG_BATTCOM_ACC, "FG_BATCOM_ACC"},7777- [AB5500_BANK_USB] = {AB5500_ADDR_USB, "USB"},7878- [AB5500_BANK_IT] = {AB5500_ADDR_IT, "IT"},7979- [AB5500_BANK_VIBRA] = {AB5500_ADDR_VIBRA, "VIBRA"},8080- [AB5500_BANK_AUDIO_HEADSETUSB] = {8181- AB5500_ADDR_AUDIO_HEADSETUSB, "AUDIO_HEADSETUSB"},8282-};8383-8484-int ab5500_get_register_interruptible_raw(struct ab5500 *ab, u8 bank, u8 reg,8585- u8 *value);8686-int ab5500_mask_and_set_register_interruptible_raw(struct ab5500 *ab, u8 bank,8787- u8 reg, u8 bitmask, u8 bitvalues);
+65-2
drivers/mfd/mc13xxx-spi.c
···4949 .reg_bits = 7,5050 .pad_bits = 1,5151 .val_bits = 24,5252+ .write_flag_mask = 0x80,52535354 .max_register = MC13XXX_NUMREGS,54555556 .cache_type = REGCACHE_NONE,5757+ .use_single_rw = 1,5858+};5959+6060+static int mc13xxx_spi_read(void *context, const void *reg, size_t reg_size,6161+ void *val, size_t val_size)6262+{6363+ unsigned char w[4] = { *((unsigned char *) reg), 0, 0, 0};6464+ unsigned char r[4];6565+ unsigned char *p = val;6666+ struct device *dev = context;6767+ struct spi_device *spi = to_spi_device(dev);6868+ struct spi_transfer t = {6969+ .tx_buf = w,7070+ .rx_buf = r,7171+ .len = 4,7272+ };7373+7474+ struct spi_message m;7575+ int ret;7676+7777+ if (val_size != 3 || reg_size != 1)7878+ return -ENOTSUPP;7979+8080+ spi_message_init(&m);8181+ spi_message_add_tail(&t, &m);8282+ ret = spi_sync(spi, &m);8383+8484+ memcpy(p, &r[1], 3);8585+8686+ return ret;8787+}8888+8989+static int mc13xxx_spi_write(void *context, const void *data, size_t count)9090+{9191+ struct device *dev = context;9292+ struct spi_device *spi = to_spi_device(dev);9393+9494+ if (count != 4)9595+ return -ENOTSUPP;9696+9797+ return spi_write(spi, data, count);9898+}9999+100100+/*101101+ * We cannot use regmap-spi generic bus implementation here.102102+ * The MC13783 chip will get corrupted if CS signal is deasserted103103+ * and on i.MX31 SoC (the target SoC for MC13783 PMIC) the SPI controller104104+ * has the following errata (DSPhl22960):105105+ * "The CSPI negates SS when the FIFO becomes empty with106106+ * SSCTL= 0. Software cannot guarantee that the FIFO will not107107+ * drain because of higher priority interrupts and the108108+ * non-realtime characteristics of the operating system. As a109109+ * result, the SS will negate before all of the data has been110110+ * transferred to/from the peripheral."111111+ * We workaround this by accessing the SPI controller with a112112+ * single transfer.113113+ */114114+115115+static struct regmap_bus regmap_mc13xxx_bus = {116116+ .write = mc13xxx_spi_write,117117+ .read = mc13xxx_spi_read,56118};5711958120static int mc13xxx_spi_probe(struct spi_device *spi)···1357313674 dev_set_drvdata(&spi->dev, mc13xxx);13775 spi->mode = SPI_MODE_0 | SPI_CS_HIGH;138138- spi->bits_per_word = 32;1397614077 mc13xxx->dev = &spi->dev;14178 mutex_init(&mc13xxx->lock);14279143143- mc13xxx->regmap = regmap_init_spi(spi, &mc13xxx_regmap_spi_config);8080+ mc13xxx->regmap = regmap_init(&spi->dev, &regmap_mc13xxx_bus, &spi->dev,8181+ &mc13xxx_regmap_spi_config);8282+14483 if (IS_ERR(mc13xxx->regmap)) {14584 ret = PTR_ERR(mc13xxx->regmap);14685 dev_err(mc13xxx->dev, "Failed to initialize register map: %d\n",
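The custom `regmap_bus` keeps chip select asserted by issuing each register access as one 4-byte full-duplex transfer. The read path's buffer handling, lifted out as plain C (no SPI involved; my reading of the frame layout, with regmap's default big-endian value packing assumed):

```c
#include <assert.h>

/*
 * One MC13xxx access is a single 4-byte full-duplex frame.  The byte
 * clocked in while the register byte is still going out (rx[0]) is
 * meaningless; the 24-bit register value sits in the last three bytes,
 * most significant byte first.
 */
static long reg_value_from_frame(const unsigned char rx[4])
{
    return ((long)rx[1] << 16) | ((long)rx[2] << 8) | rx[3];
}
```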
+47-1
drivers/mfd/omap-usb-host.c
···2525#include <linux/clk.h>2626#include <linux/dma-mapping.h>2727#include <linux/spinlock.h>2828+#include <linux/gpio.h>2829#include <plat/cpu.h>2930#include <plat/usb.h>3031#include <linux/pm_runtime.h>···501500 dev_dbg(dev, "starting TI HSUSB Controller\n");502501503502 pm_runtime_get_sync(dev);504504- spin_lock_irqsave(&omap->lock, flags);505503504504+ if (pdata->ehci_data->phy_reset) {505505+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0]))506506+ gpio_request_one(pdata->ehci_data->reset_gpio_port[0],507507+ GPIOF_OUT_INIT_LOW, "USB1 PHY reset");508508+509509+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1]))510510+ gpio_request_one(pdata->ehci_data->reset_gpio_port[1],511511+ GPIOF_OUT_INIT_LOW, "USB2 PHY reset");512512+513513+ /* Hold the PHY in RESET for enough time till DIR is high */514514+ udelay(10);515515+ }516516+517517+ spin_lock_irqsave(&omap->lock, flags);506518 omap->usbhs_rev = usbhs_read(omap->uhh_base, OMAP_UHH_REVISION);507519 dev_dbg(dev, "OMAP UHH_REVISION 0x%x\n", omap->usbhs_rev);508520···595581 }596582597583 spin_unlock_irqrestore(&omap->lock, flags);584584+585585+ if (pdata->ehci_data->phy_reset) {586586+ /* Hold the PHY in RESET for enough time till587587+ * PHY is settled and ready588588+ */589589+ udelay(10);590590+591591+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0]))592592+ gpio_set_value_cansleep593593+ (pdata->ehci_data->reset_gpio_port[0], 1);594594+595595+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1]))596596+ gpio_set_value_cansleep597597+ (pdata->ehci_data->reset_gpio_port[1], 1);598598+ }599599+598600 pm_runtime_put_sync(dev);601601+}602602+603603+static void omap_usbhs_deinit(struct device *dev)604604+{605605+ struct usbhs_hcd_omap *omap = dev_get_drvdata(dev);606606+ struct usbhs_omap_platform_data *pdata = &omap->platdata;607607+608608+ if (pdata->ehci_data->phy_reset) {609609+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0]))610610+ gpio_free(pdata->ehci_data->reset_gpio_port[0]);611611+612612+ if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1]))613613+ gpio_free(pdata->ehci_data->reset_gpio_port[1]);614614+ }599615}600616601617···811767 goto end_probe;812768813769err_alloc:770770+ omap_usbhs_deinit(&pdev->dev);814771 iounmap(omap->tll_base);815772816773err_tll:···863818{864819 struct usbhs_hcd_omap *omap = platform_get_drvdata(pdev);865820821821+ omap_usbhs_deinit(&pdev->dev);866822 iounmap(omap->tll_base);867823 iounmap(omap->uhh_base);868824 clk_put(omap->init_60m_fclk);
+12-1
drivers/mfd/palmas.c
···356356 }357357 }358358359359- ret = regmap_add_irq_chip(palmas->regmap[1], palmas->irq,359359+ /* Change IRQ into clear on read mode for efficiency */360360+ slave = PALMAS_BASE_TO_SLAVE(PALMAS_INTERRUPT_BASE);361361+ addr = PALMAS_BASE_TO_REG(PALMAS_INTERRUPT_BASE, PALMAS_INT_CTRL);362362+ reg = PALMAS_INT_CTRL_INT_CLEAR;363363+364364+ regmap_write(palmas->regmap[slave], addr, reg);365365+366366+ ret = regmap_add_irq_chip(palmas->regmap[slave], palmas->irq,360367 IRQF_ONESHOT | IRQF_TRIGGER_LOW, -1, &palmas_irq_chip,361368 &palmas->irq_data);362369 if (ret < 0)···448441 goto err;449442 }450443444444+ children[PALMAS_PMIC_ID].platform_data = pdata->pmic_pdata;445445+ children[PALMAS_PMIC_ID].pdata_size = sizeof(*pdata->pmic_pdata);446446+451447 ret = mfd_add_devices(palmas->dev, -1,452448 children, ARRAY_SIZE(palmas_children),453449 NULL, regmap_irq_chip_get_base(palmas->irq_data));···482472 { "twl6035", },483473 { "twl6037", },484474 { "tps65913", },475475+ { /* end */ }485476};486477MODULE_DEVICE_TABLE(i2c, palmas_i2c_id);487478
drivers/mtd/nand/mxc_nand.c
···273273274274static const char *part_probes[] = { "RedBoot", "cmdlinepart", "ofpart", NULL };275275276276+static void memcpy32_fromio(void *trg, const void __iomem *src, size_t size)277277+{278278+ int i;279279+ u32 *t = trg;280280+ const __iomem u32 *s = src;281281+282282+ for (i = 0; i < (size >> 2); i++)283283+ *t++ = __raw_readl(s++);284284+}285285+286286+static void memcpy32_toio(void __iomem *trg, const void *src, int size)287287+{288288+ int i;289289+ u32 __iomem *t = trg;290290+ const u32 *s = src;291291+292292+ for (i = 0; i < (size >> 2); i++)293293+ __raw_writel(*s++, t++);294294+}295295+276296static int check_int_v3(struct mxc_nand_host *host)277297{278298 uint32_t tmp;···539519540520 wait_op_done(host, true);541521542542- memcpy_fromio(host->data_buf, host->main_area0, 16);522522+ memcpy32_fromio(host->data_buf, host->main_area0, 16);543523}544524545525/* Request the NANDFC to perform a read of the NAND device ID. */···555535 /* Wait for operation to complete */556536 wait_op_done(host, true);557537558558- memcpy_fromio(host->data_buf, host->main_area0, 16);538538+ memcpy32_fromio(host->data_buf, host->main_area0, 16);559539560540 if (this->options & NAND_BUSWIDTH_16) {561541 /* compress the ID info */···817797818798 if (bfrom) {819799 for (i = 0; i < n - 1; i++)820820- memcpy_fromio(d + i * j, s + i * t, j);800800+ memcpy32_fromio(d + i * j, s + i * t, j);821801822802 /* the last section */823823- memcpy_fromio(d + i * j, s + i * t, mtd->oobsize - i * j);803803+ memcpy32_fromio(d + i * j, s + i * t, mtd->oobsize - i * j);824804 } else {825805 for (i = 0; i < n - 1; i++)826826- memcpy_toio(&s[i * t], &d[i * j], j);806806+ memcpy32_toio(&s[i * t], &d[i * j], j);827807828808 /* the last section */829829- memcpy_toio(&s[i * t], &d[i * j], mtd->oobsize - i * j);809809+ memcpy32_toio(&s[i * t], &d[i * j], mtd->oobsize - i * j);830810 }831811}832812···1090107010911071 host->devtype_data->send_page(mtd, NFC_OUTPUT);1092107210931093- memcpy_fromio(host->data_buf, host->main_area0, mtd->writesize);10731073+ memcpy32_fromio(host->data_buf, host->main_area0,10741074+ mtd->writesize);10941075 copy_spare(mtd, true);10951076 break;10961077···11071086 break;1108108711091088 case NAND_CMD_PAGEPROG:11101110- memcpy_toio(host->main_area0, host->data_buf, mtd->writesize);10891089+ memcpy32_toio(host->main_area0, host->data_buf, mtd->writesize);11111090 copy_spare(mtd, false);11121091 host->devtype_data->send_page(mtd, NFC_INPUT);11131092 host->devtype_data->send_cmd(host, command, true);
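`memcpy_fromio()`/`memcpy_toio()` may fall back to byte accesses, which the i.MX NFC buffer does not tolerate; the replacements above copy strictly in 32-bit words. The same loop in plain C (without the `__iomem` accessors):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Word-wise copy: one 32-bit load/store per iteration, never a byte
 * access.  size >> 2 silently drops any non-multiple-of-4 tail, which
 * is safe in the driver because all NFC buffers involved are 4-byte
 * multiples.
 */
static void memcpy32(void *trg, const void *src, size_t size)
{
    uint32_t *t = trg;
    const uint32_t *s = src;
    size_t i;

    for (i = 0; i < (size >> 2); i++)
        *t++ = *s++;
}
```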
+7
drivers/mtd/nand/nand_base.c
···35013501 /* propagate ecc info to mtd_info */35023502 mtd->ecclayout = chip->ecc.layout;35033503 mtd->ecc_strength = chip->ecc.strength;35043504+ /*35053505+ * Initialize bitflip_threshold to its default prior scan_bbt() call.35063506+ * scan_bbt() might invoke mtd_read(), thus bitflip_threshold must be35073507+ * properly set.35083508+ */35093509+ if (!mtd->bitflip_threshold)35103510+ mtd->bitflip_threshold = mtd->ecc_strength;3504351135053512 /* Check, if we should skip the bad block table scan */35063513 if (chip->options & NAND_SKIP_BBTSCAN)
+3
drivers/net/ethernet/intel/e1000e/82571.c
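This ties back to the `bitflip_threshold` semantics documented earlier: a read reports `-EUCLEAN` only when the worst per-ECC-step correction count reaches the threshold, which now defaults to `ecc_strength` before `scan_bbt()` can trigger reads. The comparison itself is simple (a sketch, with the Linux errno value spelled out locally):

```c
#include <assert.h>

#define EUCLEAN 117  /* Linux errno value, used here only for the sketch */

/*
 * Return code of an otherwise successful read: -EUCLEAN once the worst
 * per-ECC-step correction count reaches bitflip_threshold, else 0.
 * With threshold == ecc_strength (the default), a read that needed the
 * full ECC strength is flagged; with threshold == 0 every read is.
 */
static int read_retcode(unsigned int max_bitflips, unsigned int threshold)
{
    return (max_bitflips >= threshold) ? -EUCLEAN : 0;
}
```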
···15721572 ctrl = er32(CTRL);15731573 status = er32(STATUS);15741574 rxcw = er32(RXCW);15751575+ /* SYNCH bit and IV bit are sticky */15761576+ udelay(10);15771577+ rxcw = er32(RXCW);1575157815761579 if ((rxcw & E1000_RXCW_SYNCH) && !(rxcw & E1000_RXCW_IV)) {15771580
+32-10
drivers/net/ethernet/intel/e1000e/ich8lan.c
···325325 **/326326static bool e1000_phy_is_accessible_pchlan(struct e1000_hw *hw)327327{328328- u16 phy_reg;329329- u32 phy_id;328328+ u16 phy_reg = 0;329329+ u32 phy_id = 0;330330+ s32 ret_val;331331+ u16 retry_count;330332331331- e1e_rphy_locked(hw, PHY_ID1, &phy_reg);332332- phy_id = (u32)(phy_reg << 16);333333- e1e_rphy_locked(hw, PHY_ID2, &phy_reg);334334- phy_id |= (u32)(phy_reg & PHY_REVISION_MASK);333333+ for (retry_count = 0; retry_count < 2; retry_count++) {334334+ ret_val = e1e_rphy_locked(hw, PHY_ID1, &phy_reg);335335+ if (ret_val || (phy_reg == 0xFFFF))336336+ continue;337337+ phy_id = (u32)(phy_reg << 16);338338+339339+ ret_val = e1e_rphy_locked(hw, PHY_ID2, &phy_reg);340340+ if (ret_val || (phy_reg == 0xFFFF)) {341341+ phy_id = 0;342342+ continue;343343+ }344344+ phy_id |= (u32)(phy_reg & PHY_REVISION_MASK);345345+ break;346346+ }335347336348 if (hw->phy.id) {337349 if (hw->phy.id == phy_id)338350 return true;339339- } else {340340- if ((phy_id != 0) && (phy_id != PHY_REVISION_MASK))341341- hw->phy.id = phy_id;351351+ } else if (phy_id) {352352+ hw->phy.id = phy_id;353353+ hw->phy.revision = (u32)(phy_reg & ~PHY_REVISION_MASK);342354 return true;343355 }344356345345- return false;357357+ /*358358+ * In case the PHY needs to be in mdio slow mode,359359+ * set slow mode and try to get the PHY id again.360360+ */361361+ hw->phy.ops.release(hw);362362+ ret_val = e1000_set_mdio_slow_mode_hv(hw);363363+ if (!ret_val)364364+ ret_val = e1000e_get_phy_id(hw);365365+ hw->phy.ops.acquire(hw);366366+367367+ return !ret_val;346368}347369348370/**
+3
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
···193193 unsigned int i, eop, count = 0;194194 unsigned int total_bytes = 0, total_packets = 0;195195196196+ if (test_bit(__IXGBEVF_DOWN, &adapter->state))197197+ return true;198198+196199 i = tx_ring->next_to_clean;197200 eop = tx_ring->tx_buffer_info[i].next_to_watch;198201 eop_desc = IXGBEVF_TX_DESC(tx_ring, eop);
+4-4
drivers/of/platform.c
···317317 for(; lookup->compatible != NULL; lookup++) {318318 if (!of_device_is_compatible(np, lookup->compatible))319319 continue;320320- if (of_address_to_resource(np, 0, &res))321321- continue;322322- if (res.start != lookup->phys_addr)323323- continue;320320+ if (!of_address_to_resource(np, 0, &res))321321+ if (res.start != lookup->phys_addr)322322+ continue;324323 pr_debug("%s: devname=%s\n", np->full_name, lookup->name);325324 return lookup;326325 }···461462 of_node_put(root);462463 return rc;463464}465465+EXPORT_SYMBOL_GPL(of_platform_populate);464466#endif /* CONFIG_OF_ADDRESS */
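The lookup fix changes the filter from "skip the entry whenever the address cannot be resolved" to "only compare addresses when one was actually resolved", so AUXDATA entries without a usable `reg` property can still match by compatible. The predicate, extracted as plain C:

```c
#include <assert.h>

/*
 * have_res says whether of_address_to_resource() succeeded; res_start
 * is only meaningful in that case.  Before the fix, !have_res skipped
 * the entry outright; now the address is only compared when known.
 */
static int lookup_matches(int compatible, int have_res,
                          unsigned long res_start, unsigned long phys_addr)
{
    if (!compatible)
        return 0;
    if (have_res && res_start != phys_addr)
        return 0;   /* address known and different: no match */
    return 1;       /* match by compatible (and by address, if known) */
}
```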
+12
drivers/pci/pci-driver.c
···748748749749 pci_pm_set_unknown_state(pci_dev);750750751751+ /*752752+ * Some BIOSes from ASUS have a bug: If a USB EHCI host controller's753753+ * PCI COMMAND register isn't 0, the BIOS assumes that the controller754754+ * hasn't been quiesced and tries to turn it off. If the controller755755+ * is already in D3, this can hang or cause memory corruption.756756+ *757757+ * Since the value of the COMMAND register doesn't matter once the758758+ * device has been suspended, we can safely set it to 0 here.759759+ */760760+ if (pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI)761761+ pci_write_config_word(pci_dev, PCI_COMMAND, 0);762762+751763 return 0;752764}753765
-5
drivers/pci/pci.c
···17441744 if (target_state == PCI_POWER_ERROR)17451745 return -EIO;1746174617471747- /* Some devices mustn't be in D3 during system sleep */17481748- if (target_state == PCI_D3hot &&17491749- (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP))17501750- return 0;17511751-17521747 pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev));1753174817541749 error = pci_set_power_state(dev, target_state);
-26
drivers/pci/quirks.c
···29292929DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);29302930DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);2931293129322932-/*29332933- * The Intel 6 Series/C200 Series chipset's EHCI controllers on many29342934- * ASUS motherboards will cause memory corruption or a system crash29352935- * if they are in D3 while the system is put into S3 sleep.29362936- */29372937-static void __devinit asus_ehci_no_d3(struct pci_dev *dev)29382938-{29392939- const char *sys_info;29402940- static const char good_Asus_board[] = "P8Z68-V";29412941-29422942- if (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP)29432943- return;29442944- if (dev->subsystem_vendor != PCI_VENDOR_ID_ASUSTEK)29452945- return;29462946- sys_info = dmi_get_system_info(DMI_BOARD_NAME);29472947- if (sys_info && memcmp(sys_info, good_Asus_board,29482948- sizeof(good_Asus_board) - 1) == 0)29492949- return;29502950-29512951- dev_info(&dev->dev, "broken D3 during system sleep on ASUS\n");29522952- dev->dev_flags |= PCI_DEV_FLAGS_NO_D3_DURING_SLEEP;29532953- device_set_wakeup_capable(&dev->dev, false);29542954-}29552955-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c26, asus_ehci_no_d3);29562956-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c2d, asus_ehci_no_d3);29572957-29582932static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,29592933 struct pci_fixup *end)29602934{
···694694static int __devinit ideapad_acpi_add(struct acpi_device *adevice)695695{696696 int ret, i;697697- unsigned long cfg;697697+ int cfg;698698 struct ideapad_private *priv;699699700700- if (read_method_int(adevice->handle, "_CFG", (int *)&cfg))700700+ if (read_method_int(adevice->handle, "_CFG", &cfg))701701 return -ENODEV;702702703703 priv = kzalloc(sizeof(*priv), GFP_KERNEL);···721721 goto input_failed;722722723723 for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++) {724724- if (test_bit(ideapad_rfk_data[i].cfgbit, &cfg))724724+ if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg))725725 ideapad_register_rfkill(adevice, i);726726 else727727 priv->rfk[i] = NULL;
···973973 struct device_attribute *attr,974974 const char *buffer, size_t count)975975{976976- unsigned long value = 0;976976+ int value;977977 int ret = 0;978978 struct sony_nc_value *item =979979 container_of(attr, struct sony_nc_value, devattr);···984984 if (count > 31)985985 return -EINVAL;986986987987- if (kstrtoul(buffer, 10, &value))987987+ if (kstrtoint(buffer, 10, &value))988988 return -EINVAL;989989990990 if (item->validate)···994994 return value;995995996996 ret = sony_nc_int_call(sony_nc_acpi_handle, *item->acpiset,997997- (int *)&value, NULL);997997+ &value, NULL);998998 if (ret < 0)999999 return -EIO;10001000···10101010struct sony_backlight_props {10111011 struct backlight_device *dev;10121012 int handle;10131013+ int cmd_base;10131014 u8 offset;10141015 u8 maxlvl;10151016};···10381037 struct sony_backlight_props *sdev =10391038 (struct sony_backlight_props *)bl_get_data(bd);1040103910411041- sony_call_snc_handle(sdev->handle, 0x0200, &result);10401040+ sony_call_snc_handle(sdev->handle, sdev->cmd_base + 0x100, &result);1042104110431042 return (result & 0xff) - sdev->offset;10441043}···10501049 (struct sony_backlight_props *)bl_get_data(bd);1051105010521051 value = bd->props.brightness + sdev->offset;10531053- if (sony_call_snc_handle(sdev->handle, 0x0100 | (value << 16), &result))10521052+ if (sony_call_snc_handle(sdev->handle, sdev->cmd_base | (value << 0x10),10531053+ &result))10541054 return -EIO;1055105510561056 return value;···11741172/*11751173 * ACPI callbacks11761174 */11751175+enum event_types {11761176+ HOTKEY = 1,11771177+ KILLSWITCH,11781178+ GFX_SWITCH11791179+};11771180static void sony_nc_notify(struct acpi_device *device, u32 event)11781181{11791182 u32 real_ev = event;···12031196 /* hotkey event */12041197 case 0x0100:12051198 case 0x0127:12061206- ev_type = 1;11991199+ ev_type = HOTKEY;12071200 real_ev = sony_nc_hotkeys_decode(event, handle);1208120112091202 if (real_ev > 0)···12231216 * update the rfkill device status when 
the12241217 * switch is moved.12251218 */12261226- ev_type = 2;12191219+ ev_type = KILLSWITCH;12271220 sony_call_snc_handle(handle, 0x0100, &result);12281221 real_ev = result & 0x03;12291222···12311224 if (real_ev == 1)12321225 sony_nc_rfkill_update();1233122612271227+ break;12281228+12291229+ case 0x0128:12301230+ case 0x0146:12311231+ /* Hybrid GFX switching */12321232+ sony_call_snc_handle(handle, 0x0000, &result);12331233+ dprintk("GFX switch event received (reason: %s)\n",12341234+ (result & 0x01) ?12351235+ "switch change" : "unknown");12361236+12371237+ /* verify the switch state12381238+ * 1: discrete GFX12391239+ * 0: integrated GFX12401240+ */12411241+ sony_call_snc_handle(handle, 0x0100, &result);12421242+12431243+ ev_type = GFX_SWITCH;12441244+ real_ev = result & 0xff;12341245 break;1235124612361247 default:···1263123812641239 } else {12651240 /* old style event */12661266- ev_type = 1;12411241+ ev_type = HOTKEY;12671242 sony_laptop_report_input_event(real_ev);12681243 }12691244···19181893 * bits 4,5: store the limit into the EC19191894 * bits 6,7: store the limit into the battery19201895 */18961896+ cmd = 0;1921189719221922- /*19231923- * handle 0x0115 should allow storing on battery too;19241924- * handle 0x0136 same as 0x0115 + health status;19251925- * handle 0x013f, same as 0x0136 but no storing on the battery19261926- *19271927- * Store only inside the EC for now, regardless the handle number19281928- */19291929- if (value == 0)19301930- /* disable limits */19311931- cmd = 0x0;18981898+ if (value > 0) {18991899+ if (value <= 50)19001900+ cmd = 0x20;1932190119331933- else if (value <= 50)19341934- cmd = 0x21;19021902+ else if (value <= 80)19031903+ cmd = 0x10;1935190419361936- else if (value <= 80)19371937- cmd = 0x11;19051905+ else if (value <= 100)19061906+ cmd = 0x30;1938190719391939- else if (value <= 100)19401940- cmd = 0x31;19081908+ else19091909+ return -EINVAL;1941191019421942- else19431943- return -EINVAL;19111911+ /*19121912+ * handle 
0x0115 should allow storing on battery too;19131913+ * handle 0x0136 same as 0x0115 + health status;19141914+ * handle 0x013f, same as 0x0136 but no storing on the battery19151915+ */19161916+ if (bcare_ctl->handle != 0x013f)19171917+ cmd = cmd | (cmd << 2);1944191819451945- if (sony_call_snc_handle(bcare_ctl->handle, (cmd << 0x10) | 0x0100,19461946- &result))19191919+ cmd = (cmd | 0x1) << 0x10;19201920+ }19211921+19221922+ if (sony_call_snc_handle(bcare_ctl->handle, cmd | 0x0100, &result))19471923 return -EIO;1948192419491925 return count;···21392113 struct device_attribute *attr, char *buffer)21402114{21412115 ssize_t count = 0;21422142- unsigned int mode = sony_nc_thermal_mode_get();21162116+ int mode = sony_nc_thermal_mode_get();2143211721442118 if (mode < 0)21452119 return mode;···24982472{24992473 u64 offset;25002474 int i;24752475+ int lvl_table_len = 0;25012476 u8 min = 0xff, max = 0x00;25022477 unsigned char buffer[32] = { 0 };25032478···25072480 props->maxlvl = 0xff;2508248125092482 offset = sony_find_snc_handle(handle);25102510- if (offset < 0)25112511- return;2512248325132484 /* try to read the boundaries from ACPI tables, if we fail the above25142485 * defaults should be reasonable···25162491 if (i < 0)25172492 return;2518249324942494+ switch (handle) {24952495+ case 0x012f:24962496+ case 0x0137:24972497+ lvl_table_len = 9;24982498+ break;24992499+ case 0x143:25002500+ lvl_table_len = 16;25012501+ break;25022502+ }25032503+25192504 /* the buffer lists brightness levels available, brightness levels are25202505 * from position 0 to 8 in the array, other values are used by ALS25212506 * control.25222507 */25232523- for (i = 0; i < 9 && i < ARRAY_SIZE(buffer); i++) {25082508+ for (i = 0; i < lvl_table_len && i < ARRAY_SIZE(buffer); i++) {2524250925252510 dprintk("Brightness level: %d\n", buffer[i]);25262511···25552520 const struct backlight_ops *ops = NULL;25562521 struct backlight_properties props;2557252225582558- if (sony_find_snc_handle(0x12f) != -1) 
{25232523+ if (sony_find_snc_handle(0x12f) >= 0) {25592524 ops = &sony_backlight_ng_ops;25252525+ sony_bl_props.cmd_base = 0x0100;25602526 sony_nc_backlight_ng_read_limits(0x12f, &sony_bl_props);25612527 max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset;2562252825632563- } else if (sony_find_snc_handle(0x137) != -1) {25292529+ } else if (sony_find_snc_handle(0x137) >= 0) {25642530 ops = &sony_backlight_ng_ops;25312531+ sony_bl_props.cmd_base = 0x0100;25652532 sony_nc_backlight_ng_read_limits(0x137, &sony_bl_props);25332533+ max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset;25342534+25352535+ } else if (sony_find_snc_handle(0x143) >= 0) {25362536+ ops = &sony_backlight_ng_ops;25372537+ sony_bl_props.cmd_base = 0x3000;25382538+ sony_nc_backlight_ng_read_limits(0x143, &sony_bl_props);25662539 max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset;2567254025682541 } else if (ACPI_SUCCESS(acpi_get_handle(sony_nc_acpi_handle, "GBRT",···26402597 }26412598 }2642259926002600+ result = sony_laptop_setup_input(device);26012601+ if (result) {26022602+ pr_err("Unable to create input devices\n");26032603+ goto outplatform;26042604+ }26052605+26432606 if (ACPI_SUCCESS(acpi_get_handle(sony_nc_acpi_handle, "ECON",26442607 &handle))) {26452608 int arg = 1;···26632614 }2664261526652616 /* setup input devices and helper fifo */26662666- result = sony_laptop_setup_input(device);26672667- if (result) {26682668- pr_err("Unable to create input devices\n");26692669- goto outsnc;26702670- }26712671-26722617 if (acpi_video_backlight_support()) {26732618 pr_info("brightness ignored, must be controlled by ACPI video driver\n");26742619 } else {···2710266727112668 return 0;2712266927132713- out_sysfs:26702670+out_sysfs:27142671 for (item = sony_nc_values; item->name; ++item) {27152672 device_remove_file(&sony_pf_device->dev, &item->devattr);27162673 }27172674 sony_nc_backlight_cleanup();27182718-27192719- sony_laptop_remove_input();27202720-27212721- outsnc:27222675 
sony_nc_function_cleanup(sony_pf_device);27232676 sony_nc_handles_cleanup(sony_pf_device);2724267727252725- outpresent:26782678+outplatform:26792679+ sony_laptop_remove_input();26802680+26812681+outpresent:27262682 sony_pf_remove();2727268327282728- outwalk:26842684+outwalk:27292685 sony_nc_rfkill_cleanup();27302686 return result;27312687}
+5-5
drivers/regulator/core.c
···25192519{25202520 struct regulator_dev *rdev = regulator->rdev;25212521 struct regulator *consumer;25222522- int ret, output_uV, input_uV, total_uA_load = 0;25222522+ int ret, output_uV, input_uV = 0, total_uA_load = 0;25232523 unsigned int mode;25242524+25252525+ if (rdev->supply)25262526+ input_uV = regulator_get_voltage(rdev->supply);2524252725252528 mutex_lock(&rdev->mutex);25262529···25572554 goto out;25582555 }2559255625602560- /* get input voltage */25612561- input_uV = 0;25622562- if (rdev->supply)25632563- input_uV = regulator_get_voltage(rdev->supply);25572557+ /* No supply? Use constraint voltage */25642558 if (input_uV <= 0)25652559 input_uV = rdev->constraints->input_uV;25662560 if (input_uV <= 0) {
+2
drivers/remoteproc/Kconfig
···44config REMOTEPROC55 tristate66 depends on EXPERIMENTAL77+ select FW_LOADER7889config OMAP_REMOTEPROC910 tristate "OMAP remoteproc support"1111+ depends on EXPERIMENTAL1012 depends on ARCH_OMAP41113 depends on OMAP_IOMMU1214 select REMOTEPROC
+51-6
drivers/rpmsg/virtio_rpmsg_bus.c
···188188 rpdev->id.name);189189}190190191191+/**192192+ * __ept_release() - deallocate an rpmsg endpoint193193+ * @kref: the ept's reference count194194+ *195195+ * This function deallocates an ept, and is invoked when its @kref refcount196196+ * drops to zero.197197+ *198198+ * Never invoke this function directly!199199+ */200200+static void __ept_release(struct kref *kref)201201+{202202+ struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint,203203+ refcount);204204+ /*205205+ * At this point no one holds a reference to ept anymore,206206+ * so we can directly free it207207+ */208208+ kfree(ept);209209+}210210+191211/* for more info, see below documentation of rpmsg_create_ept() */192212static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,193213 struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,···225205 dev_err(dev, "failed to kzalloc a new ept\n");226206 return NULL;227207 }208208+209209+ kref_init(&ept->refcount);210210+ mutex_init(&ept->cb_lock);228211229212 ept->rpdev = rpdev;230213 ept->cb = cb;···261238 idr_remove(&vrp->endpoints, request);262239free_ept:263240 mutex_unlock(&vrp->endpoints_lock);264264- kfree(ept);241241+ kref_put(&ept->refcount, __ept_release);265242 return NULL;266243}267244···325302static void326303__rpmsg_destroy_ept(struct virtproc_info *vrp, struct rpmsg_endpoint *ept)327304{305305+ /* make sure new inbound messages can't find this ept anymore */328306 mutex_lock(&vrp->endpoints_lock);329307 idr_remove(&vrp->endpoints, ept->addr);330308 mutex_unlock(&vrp->endpoints_lock);331309332332- kfree(ept);310310+ /* make sure in-flight inbound messages won't invoke cb anymore */311311+ mutex_lock(&ept->cb_lock);312312+ ept->cb = NULL;313313+ mutex_unlock(&ept->cb_lock);314314+315315+ kref_put(&ept->refcount, __ept_release);333316}334317335318/**···819790820791 /* use the dst addr to fetch the callback of the appropriate user */821792 mutex_lock(&vrp->endpoints_lock);793793+822794 ept = 
idr_find(&vrp->endpoints, msg->dst);795795+796796+ /* let's make sure no one deallocates ept while we use it */797797+ if (ept)798798+ kref_get(&ept->refcount);799799+823800 mutex_unlock(&vrp->endpoints_lock);824801825825- if (ept && ept->cb)826826- ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, msg->src);827827- else802802+ if (ept) {803803+ /* make sure ept->cb doesn't go away while we use it */804804+ mutex_lock(&ept->cb_lock);805805+806806+ if (ept->cb)807807+ ept->cb(ept->rpdev, msg->data, msg->len, ept->priv,808808+ msg->src);809809+810810+ mutex_unlock(&ept->cb_lock);811811+812812+ /* farewell, ept, we don't need you anymore */813813+ kref_put(&ept->refcount, __ept_release);814814+ } else828815 dev_warn(dev, "msg received with no recepient\n");829816830817 /* publish the real size of the buffer */···1085104010861041 return ret;10871042}10881088-module_init(rpmsg_init);10431043+subsys_initcall(rpmsg_init);1089104410901045static void __exit rpmsg_fini(void)10911046{
···2222 * and might not yet have reached the scsi async scanning2323 */2424 wait_for_device_probe();2525- /*2626- * and then we wait for the actual asynchronous scsi scan2727- * to finish.2828- */2929- scsi_complete_async_scans();3025 return 0;3126}3227
+1-1
drivers/target/target_core_cdb.c
···10951095 if (num_blocks != 0)10961096 range = num_blocks;10971097 else10981098- range = (dev->transport->get_blocks(dev) - lba);10981098+ range = (dev->transport->get_blocks(dev) - lba) + 1;1099109911001100 pr_debug("WRITE_SAME UNMAP: LBA: %llu Range: %llu\n",11011101 (unsigned long long)lba, (unsigned long long)range);
+4-3
drivers/target/target_core_pr.c
···20312031 if (IS_ERR(file) || !file || !file->f_dentry) {20322032 pr_err("filp_open(%s) for APTPL metadata"20332033 " failed\n", path);20342034- return (PTR_ERR(file) < 0 ? PTR_ERR(file) : -ENOENT);20342034+ return IS_ERR(file) ? PTR_ERR(file) : -ENOENT;20352035 }2036203620372037 iov[0].iov_base = &buf[0];···38183818 " SPC-2 reservation is held, returning"38193819 " RESERVATION_CONFLICT\n");38203820 cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT;38213821- ret = EINVAL;38213821+ ret = -EINVAL;38223822 goto out;38233823 }38243824···38283828 */38293829 if (!cmd->se_sess) {38303830 cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;38313831- return -EINVAL;38313831+ ret = -EINVAL;38323832+ goto out;38323833 }3833383438343835 if (cmd->data_length < 24) {
···500500 goto retry;501501 }502502 if (!desc->reslength) { /* zero length read */503503+ dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__);504504+ clear_bit(WDM_READ, &desc->flags);503505 spin_unlock_irq(&desc->iuspin);504506 goto retry;505507 }
+10-8
drivers/usb/core/hub.c
···23242324static int hub_port_reset(struct usb_hub *hub, int port1,23252325 struct usb_device *udev, unsigned int delay, bool warm);2326232623272327-/* Is a USB 3.0 port in the Inactive state? */23282328-static bool hub_port_inactive(struct usb_hub *hub, u16 portstatus)23272327+/* Is a USB 3.0 port in the Inactive or Compliance Mode state?23282328+ * Port warm reset is required to recover23292329+ */23302330+static bool hub_port_warm_reset_required(struct usb_hub *hub, u16 portstatus)23292331{23302332 return hub_is_superspeed(hub->hdev) &&23312331- (portstatus & USB_PORT_STAT_LINK_STATE) ==23322332- USB_SS_PORT_LS_SS_INACTIVE;23332333+ (((portstatus & USB_PORT_STAT_LINK_STATE) ==23342334+ USB_SS_PORT_LS_SS_INACTIVE) ||23352335+ ((portstatus & USB_PORT_STAT_LINK_STATE) ==23362336+ USB_SS_PORT_LS_COMP_MOD));23332337}2334233823352339static int hub_port_wait_reset(struct usb_hub *hub, int port1,···23692365 *23702366 * See https://bugzilla.kernel.org/show_bug.cgi?id=4175223712367 */23722372- if (hub_port_inactive(hub, portstatus)) {23682368+ if (hub_port_warm_reset_required(hub, portstatus)) {23732369 int ret;2374237023752371 if ((portchange & USB_PORT_STAT_C_CONNECTION))···44124408 /* Warm reset a USB3 protocol port if it's in44134409 * SS.Inactive state.44144410 */44154415- if (hub_is_superspeed(hub->hdev) &&44164416- (portstatus & USB_PORT_STAT_LINK_STATE)44174417- == USB_SS_PORT_LS_SS_INACTIVE) {44114411+ if (hub_port_warm_reset_required(hub, portstatus)) {44184412 dev_dbg(hub_dev, "warm reset port %d\n", i);44194413 hub_port_reset(hub, i, NULL,44204414 HUB_BH_RESET_TIME, true);
+8-10
drivers/usb/host/ehci-omap.c
···281281 }282282 }283283284284+ /* Hold PHYs in reset while initializing EHCI controller */284285 if (pdata->phy_reset) {285286 if (gpio_is_valid(pdata->reset_gpio_port[0]))286286- gpio_request_one(pdata->reset_gpio_port[0],287287- GPIOF_OUT_INIT_LOW, "USB1 PHY reset");287287+ gpio_set_value_cansleep(pdata->reset_gpio_port[0], 0);288288289289 if (gpio_is_valid(pdata->reset_gpio_port[1]))290290- gpio_request_one(pdata->reset_gpio_port[1],291291- GPIOF_OUT_INIT_LOW, "USB2 PHY reset");290290+ gpio_set_value_cansleep(pdata->reset_gpio_port[1], 0);292291293292 /* Hold the PHY in RESET for enough time till DIR is high */294293 udelay(10);···329330 omap_ehci->hcs_params = readl(&omap_ehci->caps->hcs_params);330331331332 ehci_reset(omap_ehci);333333+ ret = usb_add_hcd(hcd, irq, IRQF_SHARED);334334+ if (ret) {335335+ dev_err(dev, "failed to add hcd with err %d\n", ret);336336+ goto err_add_hcd;337337+ }332338333339 if (pdata->phy_reset) {334340 /* Hold the PHY in RESET for enough time till···346342347343 if (gpio_is_valid(pdata->reset_gpio_port[1]))348344 gpio_set_value_cansleep(pdata->reset_gpio_port[1], 1);349349- }350350-351351- ret = usb_add_hcd(hcd, irq, IRQF_SHARED);352352- if (ret) {353353- dev_err(dev, "failed to add hcd with err %d\n", ret);354354- goto err_add_hcd;355345 }356346357347 /* root ports should always stay powered */
+38-6
drivers/usb/host/xhci-hub.c
···462462 }463463}464464465465+/* Updates Link Status for super Speed port */466466+static void xhci_hub_report_link_state(u32 *status, u32 status_reg)467467+{468468+ u32 pls = status_reg & PORT_PLS_MASK;469469+470470+ /* resume state is a xHCI internal state.471471+ * Do not report it to usb core.472472+ */473473+ if (pls == XDEV_RESUME)474474+ return;475475+476476+ /* When the CAS bit is set then warm reset477477+ * should be performed on port478478+ */479479+ if (status_reg & PORT_CAS) {480480+ /* The CAS bit can be set while the port is481481+ * in any link state.482482+ * Only roothubs have CAS bit, so we483483+ * pretend to be in compliance mode484484+ * unless we're already in compliance485485+ * or the inactive state.486486+ */487487+ if (pls != USB_SS_PORT_LS_COMP_MOD &&488488+ pls != USB_SS_PORT_LS_SS_INACTIVE) {489489+ pls = USB_SS_PORT_LS_COMP_MOD;490490+ }491491+ /* Return also connection bit -492492+ * hub state machine resets port493493+ * when this bit is set.494494+ */495495+ pls |= USB_PORT_STAT_CONNECTION;496496+ }497497+ /* update status field */498498+ *status |= pls;499499+}500500+465501int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,466502 u16 wIndex, char *buf, u16 wLength)467503{···642606 else643607 status |= USB_PORT_STAT_POWER;644608 }645645- /* Port Link State */609609+ /* Update Port Link State for super speed ports*/646610 if (hcd->speed == HCD_USB3) {647647- /* resume state is a xHCI internal state.648648- * Do not report it to usb core.649649- */650650- if ((temp & PORT_PLS_MASK) != XDEV_RESUME)651651- status |= (temp & PORT_PLS_MASK);611611+ xhci_hub_report_link_state(&status, temp);652612 }653613 if (bus_state->port_c_suspend & (1 << wIndex))654614 status |= 1 << USB_PORT_FEAT_C_SUSPEND;
+11
drivers/usb/host/xhci-ring.c
···885885 num_trbs_free_temp = ep_ring->num_trbs_free;886886 dequeue_temp = ep_ring->dequeue;887887888888+ /* If we get two back-to-back stalls, and the first stalled transfer889889+ * ends just before a link TRB, the dequeue pointer will be left on890890+ * the link TRB by the code in the while loop. So we have to update891891+ * the dequeue pointer one segment further, or we'll jump off892892+ * the segment into la-la-land.893893+ */894894+ if (last_trb(xhci, ep_ring, ep_ring->deq_seg, ep_ring->dequeue)) {895895+ ep_ring->deq_seg = ep_ring->deq_seg->next;896896+ ep_ring->dequeue = ep_ring->deq_seg->trbs;897897+ }898898+888899 while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {889900 /* We have more usable TRBs */890901 ep_ring->num_trbs_free++;
+5-1
drivers/usb/host/xhci.h
···341341#define PORT_PLC (1 << 22)342342/* port configure error change - port failed to configure its link partner */343343#define PORT_CEC (1 << 23)344344-/* bit 24 reserved */344344+/* Cold Attach Status - xHC can set this bit to report device attached during345345+ * Sx state. Warm port reset should be performed to clear this bit and move port346346+ * to connected state.347347+ */348348+#define PORT_CAS (1 << 24)345349/* wake on connect (enable) */346350#define PORT_WKCONN_E (1 << 25)347351/* wake on disconnect (enable) */
-8
drivers/usb/serial/metro-usb.c
···222222 metro_priv->throttled = 0;223223 spin_unlock_irqrestore(&metro_priv->lock, flags);224224225225- /*226226- * Force low_latency on so that our tty_push actually forces the data227227- * through, otherwise it is scheduled, and with high data rates (like228228- * with OHCI) data can get lost.229229- */230230- if (tty)231231- tty->low_latency = 1;232232-233225 /* Clear the urb pipe. */234226 usb_clear_halt(serial->dev, port->interrupt_in_urb->pipe);235227
···384384 DSSDBG("dispc_runtime_put\n");385385386386 r = pm_runtime_put_sync(&dispc.pdev->dev);387387- WARN_ON(r < 0);387387+ WARN_ON(r < 0 && r != -ENOSYS);388388}389389390390static inline bool dispc_mgr_is_lcd(enum omap_channel channel)
+1-1
drivers/video/omap2/dss/dsi.c
···10751075 DSSDBG("dsi_runtime_put\n");1076107610771077 r = pm_runtime_put_sync(&dsi->pdev->dev);10781078- WARN_ON(r < 0);10781078+ WARN_ON(r < 0 && r != -ENOSYS);10791079}1080108010811081/* source clock for DSI PLL. this could also be PCLKFREE */
+1-1
drivers/video/omap2/dss/dss.c
···731731 DSSDBG("dss_runtime_put\n");732732733733 r = pm_runtime_put_sync(&dss.pdev->dev);734734- WARN_ON(r < 0 && r != -EBUSY);734734+ WARN_ON(r < 0 && r != -ENOSYS && r != -EBUSY);735735}736736737737/* DEBUGFS */
+1-1
drivers/video/omap2/dss/hdmi.c
···138138 DSSDBG("hdmi_runtime_put\n");139139140140 r = pm_runtime_put_sync(&hdmi.pdev->dev);141141- WARN_ON(r < 0);141141+ WARN_ON(r < 0 && r != -ENOSYS);142142}143143144144static int __init hdmi_init_display(struct omap_dss_device *dssdev)
+1-1
drivers/video/omap2/dss/rfbi.c
···141141 DSSDBG("rfbi_runtime_put\n");142142143143 r = pm_runtime_put_sync(&rfbi.pdev->dev);144144- WARN_ON(r < 0);144144+ WARN_ON(r < 0 && r != -ENOSYS);145145}146146147147void rfbi_bus_lock(void)
+1-1
drivers/video/omap2/dss/venc.c
···402402 DSSDBG("venc_runtime_put\n");403403404404 r = pm_runtime_put_sync(&venc.pdev->dev);405405- WARN_ON(r < 0);405405+ WARN_ON(r < 0 && r != -ENOSYS);406406}407407408408static const struct venc_config *venc_timings_to_config(
+10-14
drivers/virtio/virtio_balloon.c
···4747 struct task_struct *thread;48484949 /* Waiting for host to ack the pages we released. */5050- struct completion acked;5050+ wait_queue_head_t acked;51515252 /* Number of balloon pages we've told the Host we're not using. */5353 unsigned int num_pages;···89899090static void balloon_ack(struct virtqueue *vq)9191{9292- struct virtio_balloon *vb;9393- unsigned int len;9292+ struct virtio_balloon *vb = vq->vdev->priv;94939595- vb = virtqueue_get_buf(vq, &len);9696- if (vb)9797- complete(&vb->acked);9494+ wake_up(&vb->acked);9895}999610097static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)10198{10299 struct scatterlist sg;100100+ unsigned int len;103101104102 sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);105105-106106- init_completion(&vb->acked);107103108104 /* We should always be able to add one buffer to an empty queue. */109105 if (virtqueue_add_buf(vq, &sg, 1, 0, vb, GFP_KERNEL) < 0)···107111 virtqueue_kick(vq);108112109113 /* When host has read buffer, this completes via balloon_ack */110110- wait_for_completion(&vb->acked);114114+ wait_event(vb->acked, virtqueue_get_buf(vq, &len));111115}112116113117static void set_page_pfns(u32 pfns[], struct page *page)···227231 */228232static void stats_request(struct virtqueue *vq)229233{230230- struct virtio_balloon *vb;231231- unsigned int len;234234+ struct virtio_balloon *vb = vq->vdev->priv;232235233233- vb = virtqueue_get_buf(vq, &len);234234- if (!vb)235235- return;236236 vb->need_stats_update = 1;237237 wake_up(&vb->config_change);238238}···237245{238246 struct virtqueue *vq;239247 struct scatterlist sg;248248+ unsigned int len;240249241250 vb->need_stats_update = 0;242251 update_balloon_stats(vb);243252244253 vq = vb->stats_vq;254254+ if (!virtqueue_get_buf(vq, &len))255255+ return;245256 sg_init_one(&sg, vb->stats, sizeof(vb->stats));246257 if (virtqueue_add_buf(vq, &sg, 1, 0, vb, GFP_KERNEL) < 0)247258 BUG();···353358 INIT_LIST_HEAD(&vb->pages);354359 vb->num_pages = 
0;355360 init_waitqueue_head(&vb->config_change);361361+ init_waitqueue_head(&vb->acked);356362 vb->vdev = vdev;357363 vb->need_stats_update = 0;358364
+9-6
fs/btrfs/backref.c
···301301 goto out;302302303303 eb = path->nodes[level];304304- if (!eb) {305305- WARN_ON(1);306306- ret = 1;307307- goto out;304304+ while (!eb) {305305+ if (!level) {306306+ WARN_ON(1);307307+ ret = 1;308308+ goto out;309309+ }310310+ level--;311311+ eb = path->nodes[level];308312 }309313310314 ret = add_all_parents(root, path, parents, level, &ref->key_for_search,···839835 }840836 ret = __add_delayed_refs(head, delayed_ref_seq,841837 &prefs_delayed);838838+ mutex_unlock(&head->mutex);842839 if (ret) {843840 spin_unlock(&delayed_refs->lock);844841 goto out;···933928 }934929935930out:936936- if (head)937937- mutex_unlock(&head->mutex);938931 btrfs_free_path(path);939932 while (!list_empty(&prefs)) {940933 ref = list_first_entry(&prefs, struct __prelim_ref, list);
+35-25
fs/btrfs/ctree.c
···10241024 if (!looped && !tm)10251025 return 0;10261026 /*10271027- * we must have key remove operations in the log before the10281028- * replace operation.10271027+ * if there are no tree operation for the oldest root, we simply10281028+ * return it. this should only happen if that (old) root is at10291029+ * level 0.10291030 */10301030- BUG_ON(!tm);10311031+ if (!tm)10321032+ break;1031103310341034+ /*10351035+ * if there's an operation that's not a root replacement, we10361036+ * found the oldest version of our root. normally, we'll find a10371037+ * MOD_LOG_KEY_REMOVE_WHILE_FREEING operation here.10381038+ */10321039 if (tm->op != MOD_LOG_ROOT_REPLACE)10331040 break;10341041···10941087 tm->generation);10951088 break;10961089 case MOD_LOG_KEY_ADD:10971097- if (tm->slot != n - 1) {10981098- o_dst = btrfs_node_key_ptr_offset(tm->slot);10991099- o_src = btrfs_node_key_ptr_offset(tm->slot + 1);11001100- memmove_extent_buffer(eb, o_dst, o_src, p_size);11011101- }10901090+ /* if a move operation is needed it's in the log */11021091 n--;11031092 break;11041093 case MOD_LOG_MOVE_KEYS:···11951192 }1196119311971194 tm = tree_mod_log_search(root->fs_info, logical, time_seq);11981198- /*11991199- * there was an item in the log when __tree_mod_log_oldest_root12001200- * returned. 
this one must not go away, because the time_seq passed to12011201- * us must be blocking its removal.12021202- */12031203- BUG_ON(!tm);12041204-12051195 if (old_root)12061206- eb = alloc_dummy_extent_buffer(tm->index << PAGE_CACHE_SHIFT,12071207- root->nodesize);11961196+ eb = alloc_dummy_extent_buffer(logical, root->nodesize);12081197 else12091198 eb = btrfs_clone_extent_buffer(root->node);12101199 btrfs_tree_read_unlock(root->node);···12111216 btrfs_set_header_level(eb, old_root->level);12121217 btrfs_set_header_generation(eb, old_generation);12131218 }12141214- __tree_mod_log_rewind(eb, time_seq, tm);12191219+ if (tm)12201220+ __tree_mod_log_rewind(eb, time_seq, tm);12211221+ else12221222+ WARN_ON(btrfs_header_level(eb) != 0);12151223 extent_buffer_get(eb);1216122412171225 return eb;···29932995static void insert_ptr(struct btrfs_trans_handle *trans,29942996 struct btrfs_root *root, struct btrfs_path *path,29952997 struct btrfs_disk_key *key, u64 bytenr,29962996- int slot, int level, int tree_mod_log)29982998+ int slot, int level)29972999{29983000 struct extent_buffer *lower;29993001 int nritems;···30063008 BUG_ON(slot > nritems);30073009 BUG_ON(nritems == BTRFS_NODEPTRS_PER_BLOCK(root));30083010 if (slot != nritems) {30093009- if (tree_mod_log && level)30113011+ if (level)30103012 tree_mod_log_eb_move(root->fs_info, lower, slot + 1,30113013 slot, nritems - slot);30123014 memmove_extent_buffer(lower,···30143016 btrfs_node_key_ptr_offset(slot),30153017 (nritems - slot) * sizeof(struct btrfs_key_ptr));30163018 }30173017- if (tree_mod_log && level) {30193019+ if (level) {30183020 ret = tree_mod_log_insert_key(root->fs_info, lower, slot,30193021 MOD_LOG_KEY_ADD);30203022 BUG_ON(ret < 0);···31023104 btrfs_mark_buffer_dirty(split);3103310531043106 insert_ptr(trans, root, path, &disk_key, split->start,31053105- path->slots[level + 1] + 1, level + 1, 1);31073107+ path->slots[level + 1] + 1, level + 1);3106310831073109 if (path->slots[level] >= mid) {31083110 
path->slots[level] -= mid;···36393641 btrfs_set_header_nritems(l, mid);36403642 btrfs_item_key(right, &disk_key, 0);36413643 insert_ptr(trans, root, path, &disk_key, right->start,36423642- path->slots[1] + 1, 1, 0);36443644+ path->slots[1] + 1, 1);3643364536443646 btrfs_mark_buffer_dirty(right);36453647 btrfs_mark_buffer_dirty(l);···38463848 if (mid <= slot) {38473849 btrfs_set_header_nritems(right, 0);38483850 insert_ptr(trans, root, path, &disk_key, right->start,38493849- path->slots[1] + 1, 1, 0);38513851+ path->slots[1] + 1, 1);38503852 btrfs_tree_unlock(path->nodes[0]);38513853 free_extent_buffer(path->nodes[0]);38523854 path->nodes[0] = right;···38553857 } else {38563858 btrfs_set_header_nritems(right, 0);38573859 insert_ptr(trans, root, path, &disk_key, right->start,38583858- path->slots[1], 1, 0);38603860+ path->slots[1], 1);38593861 btrfs_tree_unlock(path->nodes[0]);38603862 free_extent_buffer(path->nodes[0]);38613863 path->nodes[0] = right;···5119512151205122 if (!path->skip_locking) {51215123 ret = btrfs_try_tree_read_lock(next);51245124+ if (!ret && time_seq) {51255125+ /*51265126+ * If we don't get the lock, we may be racing51275127+ * with push_leaf_left, holding that lock while51285128+ * itself waiting for the leaf we've currently51295129+ * locked. To solve this situation, we give up51305130+ * on our lock and cycle.51315131+ */51325132+ btrfs_release_path(path);51335133+ cond_resched();51345134+ goto again;51355135+ }51225136 if (!ret) {51235137 btrfs_set_path_blocking(path);51245138 btrfs_tree_read_lock(next);
fs/btrfs/disk-io.c | +21 -13
···
 					  BTRFS_CSUM_TREE_OBJECTID, csum_root);
 	if (ret)
 		goto recovery_tree_root;
-
 	csum_root->track_dirty = 1;
 
 	fs_info->generation = generation;
 	fs_info->last_trans_committed = generation;
+
+	ret = btrfs_recover_balance(fs_info);
+	if (ret) {
+		printk(KERN_WARNING "btrfs: failed to recover balance\n");
+		goto fail_block_groups;
+	}
 
 	ret = btrfs_init_dev_stats(fs_info);
 	if (ret) {
···
 		goto fail_trans_kthread;
 	}
 
-	if (!(sb->s_flags & MS_RDONLY)) {
-		down_read(&fs_info->cleanup_work_sem);
-		err = btrfs_orphan_cleanup(fs_info->fs_root);
-		if (!err)
-			err = btrfs_orphan_cleanup(fs_info->tree_root);
+	if (sb->s_flags & MS_RDONLY)
+		return 0;
+
+	down_read(&fs_info->cleanup_work_sem);
+	if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) ||
+	    (ret = btrfs_orphan_cleanup(fs_info->tree_root))) {
 		up_read(&fs_info->cleanup_work_sem);
+		close_ctree(tree_root);
+		return ret;
+	}
+	up_read(&fs_info->cleanup_work_sem);
 
-	if (!err)
-		err = btrfs_recover_balance(fs_info->tree_root);
-
-	if (err) {
-		close_ctree(tree_root);
-		return err;
-	}
+	ret = btrfs_resume_balance_async(fs_info);
+	if (ret) {
+		printk(KERN_WARNING "btrfs: failed to resume balance\n");
+		close_ctree(tree_root);
+		return ret;
 	}
 
 	return 0;
fs/btrfs/extent-tree.c | +6 -5
···
 	return count;
 }
 
-
 static void wait_for_more_refs(struct btrfs_delayed_ref_root *delayed_refs,
-			       unsigned long num_refs)
+			       unsigned long num_refs,
+			       struct list_head *first_seq)
 {
-	struct list_head *first_seq = delayed_refs->seq_head.next;
-
 	spin_unlock(&delayed_refs->lock);
 	pr_debug("waiting for more refs (num %ld, first %p)\n",
 		 num_refs, first_seq);
···
 	struct btrfs_delayed_ref_root *delayed_refs;
 	struct btrfs_delayed_ref_node *ref;
 	struct list_head cluster;
+	struct list_head *first_seq = NULL;
 	int ret;
 	u64 delayed_start;
 	int run_all = count == (unsigned long)-1;
···
 			 */
 			consider_waiting = 1;
 			num_refs = delayed_refs->num_entries;
+			first_seq = root->fs_info->tree_mod_seq_list.next;
 		} else {
-			wait_for_more_refs(delayed_refs, num_refs);
+			wait_for_more_refs(delayed_refs,
+					   num_refs, first_seq);
 			/*
 			 * after waiting, things have changed. we
 			 * dropped the lock and someone else might have
fs/btrfs/extent_io.c | +14
···
 			     writepage_t writepage, void *data,
 			     void (*flush_fn)(void *))
 {
+	struct inode *inode = mapping->host;
 	int ret = 0;
 	int done = 0;
 	int nr_to_write_done = 0;
···
 	pgoff_t end;		/* Inclusive */
 	int scanned = 0;
 	int tag;
+
+	/*
+	 * We have to hold onto the inode so that ordered extents can do their
+	 * work when the IO finishes. The alternative to this is failing to add
+	 * an ordered extent if the igrab() fails there and that is a huge pain
+	 * to deal with, so instead just hold onto the inode throughout the
+	 * writepages operation. If it fails here we are freeing up the inode
+	 * anyway and we'd rather not waste our time writing out stuff that is
+	 * going to be truncated anyway.
+	 */
+	if (!igrab(inode))
+		return 0;
 
 	pagevec_init(&pvec, 0);
 	if (wbc->range_cyclic) {
···
 		index = 0;
 		goto retry;
 	}
+	btrfs_add_delayed_iput(inode);
 	return ret;
 }
fs/btrfs/file.c | -13
···
 			       loff_t *ppos, size_t count, size_t ocount)
 {
 	struct file *file = iocb->ki_filp;
-	struct inode *inode = fdentry(file)->d_inode;
 	struct iov_iter i;
 	ssize_t written;
 	ssize_t written_buffered;
···
 
 	written = generic_file_direct_write(iocb, iov, &nr_segs, pos, ppos,
 					    count, ocount);
-
-	/*
-	 * the generic O_DIRECT will update in-memory i_size after the
-	 * DIOs are done.  But our endio handlers that update the on
-	 * disk i_size never update past the in memory i_size.  So we
-	 * need one more update here to catch any additions to the
-	 * file
-	 */
-	if (inode->i_size != BTRFS_I(inode)->disk_i_size) {
-		btrfs_ordered_update_i_size(inode, inode->i_size, NULL);
-		mark_inode_dirty(inode);
-	}
 
 	if (written < 0 || written == count)
 		return written;
fs/btrfs/free-space-cache.c | +51 -92
···
 	end = bitmap_info->offset + (u64)(BITS_PER_BITMAP * ctl->unit) - 1;
 
 	/*
-	 * XXX - this can go away after a few releases.
-	 *
-	 * since the only user of btrfs_remove_free_space is the tree logging
-	 * stuff, and the only way to test that is under crash conditions, we
-	 * want to have this debug stuff here just in case somethings not
-	 * working.  Search the bitmap for the space we are trying to use to
-	 * make sure its actually there.  If its not there then we need to stop
-	 * because something has gone wrong.
+	 * We need to search for bits in this bitmap.  We could only cover some
+	 * of the extent in this bitmap thanks to how we add space, so we need
+	 * to search for as much as it as we can and clear that amount, and then
+	 * go searching for the next bit.
 	 */
 	search_start = *offset;
-	search_bytes = *bytes;
+	search_bytes = ctl->unit;
 	search_bytes = min(search_bytes, end - search_start + 1);
 	ret = search_bitmap(ctl, bitmap_info, &search_start, &search_bytes);
 	BUG_ON(ret < 0 || search_start != *offset);
 
-	if (*offset > bitmap_info->offset && *offset + *bytes > end) {
-		bitmap_clear_bits(ctl, bitmap_info, *offset, end - *offset + 1);
-		*bytes -= end - *offset + 1;
-		*offset = end + 1;
-	} else if (*offset >= bitmap_info->offset && *offset + *bytes <= end) {
-		bitmap_clear_bits(ctl, bitmap_info, *offset, *bytes);
-		*bytes = 0;
-	}
+	/* We may have found more bits than what we need */
+	search_bytes = min(search_bytes, *bytes);
+
+	/* Cannot clear past the end of the bitmap */
+	search_bytes = min(search_bytes, end - search_start + 1);
+
+	bitmap_clear_bits(ctl, bitmap_info, search_start, search_bytes);
+	*offset += search_bytes;
+	*bytes -= search_bytes;
 
 	if (*bytes) {
 		struct rb_node *next = rb_next(&bitmap_info->offset_index);
···
 		 * everything over again.
 		 */
 		search_start = *offset;
-		search_bytes = *bytes;
+		search_bytes = ctl->unit;
 		ret = search_bitmap(ctl, bitmap_info, &search_start,
 				    &search_bytes);
 		if (ret < 0 || search_start != *offset)
···
 {
 	struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
 	struct btrfs_free_space *info;
-	struct btrfs_free_space *next_info = NULL;
 	int ret = 0;
 
 	spin_lock(&ctl->tree_lock);
 
 again:
+	if (!bytes)
+		goto out_lock;
+
 	info = tree_search_offset(ctl, offset, 0, 0);
 	if (!info) {
 		/*
···
 		}
 	}
 
-	if (info->bytes < bytes && rb_next(&info->offset_index)) {
-		u64 end;
-		next_info = rb_entry(rb_next(&info->offset_index),
-				     struct btrfs_free_space,
-				     offset_index);
-
-		if (next_info->bitmap)
-			end = next_info->offset +
-			      BITS_PER_BITMAP * ctl->unit - 1;
-		else
-			end = next_info->offset + next_info->bytes;
-
-		if (next_info->bytes < bytes ||
-		    next_info->offset > offset || offset > end) {
-			printk(KERN_CRIT "Found free space at %llu, size %llu,"
-			      " trying to use %llu\n",
-			      (unsigned long long)info->offset,
-			      (unsigned long long)info->bytes,
-			      (unsigned long long)bytes);
-			WARN_ON(1);
-			ret = -EINVAL;
-			goto out_lock;
-		}
-
-		info = next_info;
-	}
-
-	if (info->bytes == bytes) {
+	if (!info->bitmap) {
 		unlink_free_space(ctl, info);
-		if (info->bitmap) {
-			kfree(info->bitmap);
-			ctl->total_bitmaps--;
-		}
-		kmem_cache_free(btrfs_free_space_cachep, info);
-		ret = 0;
-		goto out_lock;
-	}
+		if (offset == info->offset) {
+			u64 to_free = min(bytes, info->bytes);
 
-	if (!info->bitmap && info->offset == offset) {
-		unlink_free_space(ctl, info);
-		info->offset += bytes;
-		info->bytes -= bytes;
-		ret = link_free_space(ctl, info);
-		WARN_ON(ret);
-		goto out_lock;
-	}
+			info->bytes -= to_free;
+			info->offset += to_free;
+			if (info->bytes) {
+				ret = link_free_space(ctl, info);
+				WARN_ON(ret);
+			} else {
+				kmem_cache_free(btrfs_free_space_cachep, info);
+			}
 
-	if (!info->bitmap && info->offset <= offset &&
-	    info->offset + info->bytes >= offset + bytes) {
-		u64 old_start = info->offset;
-		/*
-		 * we're freeing space in the middle of the info,
-		 * this can happen during tree log replay
-		 *
-		 * first unlink the old info and then
-		 * insert it again after the hole we're creating
-		 */
-		unlink_free_space(ctl, info);
-		if (offset + bytes < info->offset + info->bytes) {
-			u64 old_end = info->offset + info->bytes;
+			offset += to_free;
+			bytes -= to_free;
+			goto again;
+		} else {
+			u64 old_end = info->bytes + info->offset;
 
-			info->offset = offset + bytes;
-			info->bytes = old_end - info->offset;
+			info->bytes = offset - info->offset;
 			ret = link_free_space(ctl, info);
 			WARN_ON(ret);
 			if (ret)
 				goto out_lock;
-		} else {
-			/* the hole we're creating ends at the end
-			 * of the info struct, just free the info
-			 */
-			kmem_cache_free(btrfs_free_space_cachep, info);
-		}
-		spin_unlock(&ctl->tree_lock);
 
-		/* step two, insert a new info struct to cover
-		 * anything before the hole
-		 */
-		ret = btrfs_add_free_space(block_group, old_start,
-					   offset - old_start);
-		WARN_ON(ret); /* -ENOMEM */
-		goto out;
+			/* Not enough bytes in this entry to satisfy us */
+			if (old_end < offset + bytes) {
+				bytes -= old_end - offset;
+				offset = old_end;
+				goto again;
+			} else if (old_end == offset + bytes) {
+				/* all done */
+				goto out_lock;
+			}
+			spin_unlock(&ctl->tree_lock);
+
+			ret = btrfs_add_free_space(block_group, offset + bytes,
+						   old_end - (offset + bytes));
+			WARN_ON(ret);
+			goto out;
+		}
 	}
 
 	ret = remove_from_bitmap(ctl, info, &offset, &bytes);
fs/btrfs/inode.c | +51 -6
···
 	btrfs_wait_ordered_range(inode, 0, (u64)-1);
 
 	if (root->fs_info->log_root_recovering) {
-		BUG_ON(!test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
+		BUG_ON(test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
 				 &BTRFS_I(inode)->runtime_flags));
 		goto no_delete;
 	}
···
 	bh_result->b_size = len;
 	bh_result->b_bdev = em->bdev;
 	set_buffer_mapped(bh_result);
-	if (create && !test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
-		set_buffer_new(bh_result);
+	if (create) {
+		if (!test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+			set_buffer_new(bh_result);
+
+		/*
+		 * Need to update the i_size under the extent lock so buffered
+		 * readers will get the updated i_size when we unlock.
+		 */
+		if (start + len > i_size_read(inode))
+			i_size_write(inode, start + len);
+	}
 
 	free_extent_map(em);
 
···
 		 */
 		ordered = btrfs_lookup_ordered_range(inode, lockstart,
 						     lockend - lockstart + 1);
-		if (!ordered)
+
+		/*
+		 * We need to make sure there are no buffered pages in this
+		 * range either, we could have raced between the invalidate in
+		 * generic_file_direct_write and locking the extent.  The
+		 * invalidate needs to happen so that reads after a write do not
+		 * get stale data.
+		 */
+		if (!ordered && (!writing ||
+		    !test_range_bit(&BTRFS_I(inode)->io_tree,
+				    lockstart, lockend, EXTENT_UPTODATE, 0,
+				    cached_state)))
 			break;
+
 		unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 				     &cached_state, GFP_NOFS);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
+
+		if (ordered) {
+			btrfs_start_ordered_extent(inode, ordered, 1);
+			btrfs_put_ordered_extent(ordered);
+		} else {
+			/* Screw you mmap */
+			ret = filemap_write_and_wait_range(file->f_mapping,
+							   lockstart,
+							   lockend);
+			if (ret)
+				goto out;
+
+			/*
+			 * If we found a page that couldn't be invalidated just
+			 * fall back to buffered.
+			 */
+			ret = invalidate_inode_pages2_range(file->f_mapping,
+					lockstart >> PAGE_CACHE_SHIFT,
+					lockend >> PAGE_CACHE_SHIFT);
+			if (ret) {
+				if (ret == -EBUSY)
+					ret = 0;
+				goto out;
+			}
+		}
+
 		cond_resched();
 	}
 
···
 static struct buffer_head *
 __getblk_slow(struct block_device *bdev, sector_t block, int size)
 {
+	int ret;
+	struct buffer_head *bh;
+
 	/* Size must be multiple of hard sectorsize */
 	if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
 			(size < 512 || size > PAGE_SIZE))) {
···
 		return NULL;
 	}
 
-	for (;;) {
-		struct buffer_head * bh;
-		int ret;
+retry:
+	bh = __find_get_block(bdev, block, size);
+	if (bh)
+		return bh;
 
+	ret = grow_buffers(bdev, block, size);
+	if (ret == 0) {
+		free_more_memory();
+		goto retry;
+	} else if (ret > 0) {
 		bh = __find_get_block(bdev, block, size);
 		if (bh)
 			return bh;
-
-		ret = grow_buffers(bdev, block, size);
-		if (ret < 0)
-			return NULL;
-		if (ret == 0)
-			free_more_memory();
 	}
+	return NULL;
 }
 
 /*
fs/cifs/cifssmb.c | +29 -1
···
 #endif /* CONFIG_CIFS_WEAK_PW_HASH */
 #endif /* CIFS_POSIX */
 
-/* Forward declarations */
+#ifdef CONFIG_HIGHMEM
+/*
+ * On arches that have high memory, kmap address space is limited. By
+ * serializing the kmap operations on those arches, we ensure that we don't
+ * end up with a bunch of threads in writeback with partially mapped page
+ * arrays, stuck waiting for kmap to come back. That situation prevents
+ * progress and can deadlock.
+ */
+static DEFINE_MUTEX(cifs_kmap_mutex);
+
+static inline void
+cifs_kmap_lock(void)
+{
+	mutex_lock(&cifs_kmap_mutex);
+}
+
+static inline void
+cifs_kmap_unlock(void)
+{
+	mutex_unlock(&cifs_kmap_mutex);
+}
+#else /* !CONFIG_HIGHMEM */
+#define cifs_kmap_lock() do { ; } while(0)
+#define cifs_kmap_unlock() do { ; } while(0)
+#endif /* CONFIG_HIGHMEM */
 
 /* Mark as invalid, all open files on tree connections since they
    were closed when session to server was lost */
···
 	}
 
 	/* marshal up the page array */
+	cifs_kmap_lock();
 	len = rdata->marshal_iov(rdata, data_len);
+	cifs_kmap_unlock();
 	data_len -= len;
 
 	/* issue the read if we have any iovecs left to fill */
···
 	 * and set the iov_len properly for each one. It may also set
 	 * wdata->bytes too.
 	 */
+	cifs_kmap_lock();
 	wdata->marshal_iov(iov, wdata);
+	cifs_kmap_unlock();
 
 	cFYI(1, "async write at %llu %u bytes", wdata->offset, wdata->bytes);
fs/cifs/connect.c | +38 -21
···
 			 * If yes, we have encountered a double deliminator
 			 * reset the NULL character to the deliminator
 			 */
-			if (tmp_end < end && tmp_end[1] == delim)
+			if (tmp_end < end && tmp_end[1] == delim) {
 				tmp_end[0] = delim;
 
-			/* Keep iterating until we get to a single deliminator
-			 * OR the end
-			 */
-			while ((tmp_end = strchr(tmp_end, delim)) != NULL &&
-			       (tmp_end[1] == delim)) {
-				tmp_end = (char *) &tmp_end[2];
-			}
+				/* Keep iterating until we get to a single
+				 * deliminator OR the end
+				 */
+				while ((tmp_end = strchr(tmp_end, delim))
+					!= NULL && (tmp_end[1] == delim)) {
+					tmp_end = (char *) &tmp_end[2];
+				}
 
-			/* Reset var options to point to next element */
-			if (tmp_end) {
-				tmp_end[0] = '\0';
-				options = (char *) &tmp_end[1];
-			} else
-				/* Reached the end of the mount option string */
-				options = end;
+				/* Reset var options to point to next element */
+				if (tmp_end) {
+					tmp_end[0] = '\0';
+					options = (char *) &tmp_end[1];
+				} else
+					/* Reached the end of the mount option
+					 * string */
+					options = end;
+			}
 
 			/* Now build new password string */
 			temp_len = strlen(value);
···
 #define CIFS_DEFAULT_NON_POSIX_RSIZE (60 * 1024)
 #define CIFS_DEFAULT_NON_POSIX_WSIZE (65536)
 
+/*
+ * On hosts with high memory, we can't currently support wsize/rsize that are
+ * larger than we can kmap at once. Cap the rsize/wsize at
+ * LAST_PKMAP * PAGE_SIZE. We'll never be able to fill a read or write request
+ * larger than that anyway.
+ */
+#ifdef CONFIG_HIGHMEM
+#define CIFS_KMAP_SIZE_LIMIT	(LAST_PKMAP * PAGE_CACHE_SIZE)
+#else /* CONFIG_HIGHMEM */
+#define CIFS_KMAP_SIZE_LIMIT	(1<<24)
+#endif /* CONFIG_HIGHMEM */
+
 static unsigned int
 cifs_negotiate_wsize(struct cifs_tcon *tcon, struct smb_vol *pvolume_info)
 {
···
 		wsize = min_t(unsigned int, wsize,
 				server->maxBuf - sizeof(WRITE_REQ) + 4);
 
+	/* limit to the amount that we can kmap at once */
+	wsize = min_t(unsigned int, wsize, CIFS_KMAP_SIZE_LIMIT);
+
 	/* hard limit of CIFS_MAX_WSIZE */
 	wsize = min_t(unsigned int, wsize, CIFS_MAX_WSIZE);
 
···
 	 * MS-CIFS indicates that servers are only limited by the client's
 	 * bufsize for reads, testing against win98se shows that it throws
 	 * INVALID_PARAMETER errors if you try to request too large a read.
+	 * OS/2 just sends back short reads.
 	 *
-	 * If the server advertises a MaxBufferSize of less than one page,
-	 * assume that it also can't satisfy reads larger than that either.
-	 *
-	 * FIXME: Is there a better heuristic for this?
+	 * If the server doesn't advertise CAP_LARGE_READ_X, then assume that
+	 * it can't handle a read request larger than its MaxBufferSize either.
 	 */
 	if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP))
 		defsize = CIFS_DEFAULT_IOSIZE;
 	else if (server->capabilities & CAP_LARGE_READ_X)
 		defsize = CIFS_DEFAULT_NON_POSIX_RSIZE;
-	else if (server->maxBuf >= PAGE_CACHE_SIZE)
-		defsize = CIFSMaxBufSize;
 	else
 		defsize = server->maxBuf - sizeof(READ_RSP);
 
···
 	 */
 	if (!(server->capabilities & CAP_LARGE_READ_X))
 		rsize = min_t(unsigned int, CIFSMaxBufSize, rsize);
+
+	/* limit to the amount that we can kmap at once */
+	rsize = min_t(unsigned int, rsize, CIFS_KMAP_SIZE_LIMIT);
 
 	/* hard limit of CIFS_MAX_RSIZE */
 	rsize = min_t(unsigned int, rsize, CIFS_MAX_RSIZE);
fs/cifs/readdir.c | +5 -2
···
 
 	dentry = d_lookup(parent, name);
 	if (dentry) {
-		/* FIXME: check for inode number changes? */
-		if (dentry->d_inode != NULL)
+		inode = dentry->d_inode;
+		/* update inode in place if i_ino didn't change */
+		if (inode && CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {
+			cifs_fattr_to_inode(inode, fattr);
 			return dentry;
+		}
 		d_drop(dentry);
 		dput(dentry);
 	}
fs/cifs/transport.c | +14 -12
···
 	if (mid == NULL)
 		return -ENOMEM;
 
-	/* put it on the pending_mid_q */
-	spin_lock(&GlobalMid_Lock);
-	list_add_tail(&mid->qhead, &server->pending_mid_q);
-	spin_unlock(&GlobalMid_Lock);
-
 	rc = cifs_sign_smb2(iov, nvec, server, &mid->sequence_number);
-	if (rc)
-		delete_mid(mid);
+	if (rc) {
+		DeleteMidQEntry(mid);
+		return rc;
+	}
+
 	*ret_mid = mid;
-	return rc;
+	return 0;
 }
 
 /*
···
 	mid->callback_data = cbdata;
 	mid->mid_state = MID_REQUEST_SUBMITTED;
 
+	/* put it on the pending_mid_q */
+	spin_lock(&GlobalMid_Lock);
+	list_add_tail(&mid->qhead, &server->pending_mid_q);
+	spin_unlock(&GlobalMid_Lock);
+
+
 	cifs_in_send_inc(server);
 	rc = smb_sendv(server, iov, nvec);
 	cifs_in_send_dec(server);
 	cifs_save_when_sent(mid);
 	mutex_unlock(&server->srv_mutex);
 
-	if (rc)
-		goto out_err;
+	if (rc == 0)
+		return 0;
 
-	return rc;
-out_err:
 	delete_mid(mid);
 	add_credits(server, 1);
 	wake_up(&server->request_q);
fs/ecryptfs/kthread.c | +1 -1
···
 	(*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred);
 	if (!IS_ERR(*lower_file))
 		goto out;
-	if (flags & O_RDONLY) {
+	if ((flags & O_ACCMODE) == O_RDONLY) {
 		rc = PTR_ERR((*lower_file));
 		goto out;
 	}
···
 			      msecs_to_jiffies(oinfo->dqi_syncms));
 
 out_err:
-	if (status)
-		mlog_errno(status);
 	return status;
 out_unlock:
 	ocfs2_unlock_global_qf(oinfo, 0);
fs/open.c | +3 -3
···
 {
 	struct file *file;
 	struct inode *inode;
-	int error;
+	int error, fput_needed;
 
 	error = -EBADF;
-	file = fget(fd);
+	file = fget_raw_light(fd, &fput_needed);
 	if (!file)
 		goto out;
···
 	if (!error)
 		set_fs_pwd(current->fs, &file->f_path);
 out_putf:
-	fput(file);
+	fput_light(file, fput_needed);
 out:
 	return error;
 }
fs/ramfs/file-nommu.c | +1
···
 
 		/* prevent the page from being discarded on memory pressure */
 		SetPageDirty(page);
+		SetPageUptodate(page);
 
 		unlock_page(page);
 		put_page(page);
fs/xfs/xfs_alloc.c | +14 -5
···
 	 * If we couldn't get anything, give up.
 	 */
 	if (bno_cur_lt == NULL && bno_cur_gt == NULL) {
+		xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
+
 		if (!forced++) {
 			trace_xfs_alloc_near_busy(args);
 			xfs_log_force(args->mp, XFS_LOG_SYNC);
 			goto restart;
 		}
-
-		xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
 		trace_xfs_alloc_size_neither(args);
 		args->agbno = NULLAGBLOCK;
 		return 0;
···
 	current_restore_flags_nested(&pflags, PF_FSTRANS);
 }
 
-
-int				/* error */
+/*
+ * Data allocation requests often come in with little stack to work on. Push
+ * them off to a worker thread so there is lots of stack to use. Metadata
+ * requests, OTOH, are generally from low stack usage paths, so avoid the
+ * context switch overhead here.
+ */
+int
 xfs_alloc_vextent(
-	xfs_alloc_arg_t	*args)	/* allocation argument structure */
+	struct xfs_alloc_arg	*args)
 {
 	DECLARE_COMPLETION_ONSTACK(done);
+
+	if (!args->userdata)
+		return __xfs_alloc_vextent(args);
+
 
 	args->done = &done;
 	INIT_WORK_ONSTACK(&args->work, xfs_alloc_vextent_worker);
fs/xfs/xfs_buf.c | +23 -30
···
 		(__uint64_t)XFS_BUF_ADDR(bp), func, bp->b_error, bp->b_length);
 }
 
-int
-xfs_bwrite(
-	struct xfs_buf		*bp)
-{
-	int			error;
-
-	ASSERT(xfs_buf_islocked(bp));
-
-	bp->b_flags |= XBF_WRITE;
-	bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q);
-
-	xfs_bdstrat_cb(bp);
-
-	error = xfs_buf_iowait(bp);
-	if (error) {
-		xfs_force_shutdown(bp->b_target->bt_mount,
-				   SHUTDOWN_META_IO_ERROR);
-	}
-	return error;
-}
-
 /*
  * Called when we want to stop a buffer from getting written or read.
  * We attach the EIO error, muck with its flags, and call xfs_buf_ioend
···
 	return EIO;
 }
 
-
-/*
- * All xfs metadata buffers except log state machine buffers
- * get this attached as their b_bdstrat callback function.
- * This is so that we can catch a buffer
- * after prematurely unpinning it to forcibly shutdown the filesystem.
- */
-int
+STATIC int
 xfs_bdstrat_cb(
 	struct xfs_buf	*bp)
 {
···
 
 	xfs_buf_iorequest(bp);
 	return 0;
+}
+
+int
+xfs_bwrite(
+	struct xfs_buf		*bp)
+{
+	int			error;
+
+	ASSERT(xfs_buf_islocked(bp));
+
+	bp->b_flags |= XBF_WRITE;
+	bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q);
+
+	xfs_bdstrat_cb(bp);
+
+	error = xfs_buf_iowait(bp);
+	if (error) {
+		xfs_force_shutdown(bp->b_target->bt_mount,
+				   SHUTDOWN_META_IO_ERROR);
+	}
+	return error;
 }
 
 /*
···
 	 */
 	atomic_set(&bp->b_io_remaining, 1);
 	_xfs_buf_ioapply(bp);
-	_xfs_buf_ioend(bp, 0);
+	_xfs_buf_ioend(bp, 1);
 
 	xfs_buf_rele(bp);
 }
···
 			      unsigned long size,
 			      unsigned long align,
 			      unsigned long goal);
+void *___alloc_bootmem_node_nopanic(pg_data_t *pgdat,
+				    unsigned long size,
+				    unsigned long align,
+				    unsigned long goal,
+				    unsigned long limit);
 extern void *__alloc_bootmem_low(unsigned long size,
 				 unsigned long align,
 				 unsigned long goal);
include/linux/capability.h | +3 -3
···
 
 #define CAP_WAKE_ALARM            35
 
-/* Allow preventing system suspends while epoll events are pending */
+/* Allow preventing system suspends */
 
-#define CAP_EPOLLWAKEUP      36
+#define CAP_BLOCK_SUSPEND    36
 
-#define CAP_LAST_CAP         CAP_EPOLLWAKEUP
+#define CAP_LAST_CAP         CAP_BLOCK_SUSPEND
 
 #define cap_valid(x) ((x) >= 0 && (x) <= CAP_LAST_CAP)
 
···
  * re-allowed until epoll_wait is called again after consuming the wakeup
  * event(s).
  *
- * Requires CAP_EPOLLWAKEUP
+ * Requires CAP_BLOCK_SUSPEND
  */
 #define EPOLLWAKEUP (1 << 29)
 
···
  * @lock:		lock protecting the base and associated clock bases
  *			and timers
  * @active_bases:	Bitfield to mark bases with active timers
+ * @clock_was_set:	Indicates that clock was set from irq context.
  * @expires_next:	absolute time of the next event which was scheduled
  *			via clock_set_next_event()
  * @hres_active:	State of high resolution mode
···
  */
 struct hrtimer_cpu_base {
 	raw_spinlock_t			lock;
-	unsigned long			active_bases;
+	unsigned int			active_bases;
+	unsigned int			clock_was_set;
 #ifdef CONFIG_HIGH_RES_TIMERS
 	ktime_t				expires_next;
 	int				hres_active;
···
 # define MONOTONIC_RES_NSEC	HIGH_RES_NSEC
 # define KTIME_MONOTONIC_RES	KTIME_HIGH_RES
 
+extern void clock_was_set_delayed(void);
+
 #else
 
 # define MONOTONIC_RES_NSEC	LOW_RES_NSEC
···
 {
 	return 0;
 }
+
+static inline void clock_was_set_delayed(void) { }
+
 #endif
 
 extern void clock_was_set(void);
···
 extern ktime_t ktime_get_real(void);
 extern ktime_t ktime_get_boottime(void);
 extern ktime_t ktime_get_monotonic_offset(void);
+extern ktime_t ktime_get_update_offsets(ktime_t *offs_real, ktime_t *offs_boot);
 
 DECLARE_PER_CPU(struct tick_device, tick_cpu_device);
 
include/linux/input.h | +1
···
 
 /**
  * EVIOCGMTSLOTS(len) - get MT slot values
+ * @len: size of the data buffer in bytes
  *
  * The ioctl buffer argument should be binary equivalent to
  *
include/linux/kvm_host.h | +2 -2
···
 #ifdef CONFIG_HAVE_KVM_EVENTFD
 
 void kvm_eventfd_init(struct kvm *kvm);
-int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
+int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args);
 void kvm_irqfd_release(struct kvm *kvm);
 void kvm_irq_routing_update(struct kvm *, struct kvm_irq_routing_table *);
 int kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args);
···
 
 static inline void kvm_eventfd_init(struct kvm *kvm) {}
 
-static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
+static inline int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args)
 {
 	return -EINVAL;
 }
···
 					     range, including holes */
 	int node_id;
 	wait_queue_head_t kswapd_wait;
-	struct task_struct *kswapd;
+	struct task_struct *kswapd;	/* Protected by lock_memory_hotplug() */
 	int kswapd_max_order;
 	enum zone_type classzone_idx;
 } pg_data_t;
include/linux/pci.h | -2
···
 	PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2,
 	/* Provide indication device is assigned by a Virtual Machine Manager */
 	PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4,
-	/* Device causes system crash if in D3 during S3 sleep */
-	PCI_DEV_FLAGS_NO_D3_DURING_SLEEP = (__force pci_dev_flags_t) 8,
 };
 
 enum pci_irq_reroute_variant {
include/linux/prctl.h | +2
···
  * Changing LSM security domain is considered a new privilege.  So, for example,
  * asking selinux for a specific new context (e.g. with runcon) will result
  * in execve returning -EPERM.
+ *
+ * See Documentation/prctl/no_new_privs.txt for more details.
  */
 #define PR_SET_NO_NEW_PRIVS	38
 #define PR_GET_NO_NEW_PRIVS	39
···
 
 #ifdef CONFIG_TINY_RCU
 
+static inline void rcu_preempt_note_context_switch(void)
+{
+}
+
 static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
 {
 	*delta_jiffies = ULONG_MAX;
···
 
 #else /* #ifdef CONFIG_TINY_RCU */
 
+void rcu_preempt_note_context_switch(void);
 int rcu_preempt_needs_cpu(void);
 
 static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
···
 static inline void rcu_note_context_switch(int cpu)
 {
 	rcu_sched_qs(cpu);
+	rcu_preempt_note_context_switch();
 }
 
 /*
include/linux/rpmsg.h | +6
···
 #include <linux/types.h>
 #include <linux/device.h>
 #include <linux/mod_devicetable.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
 
 /* The feature bitmap for virtio rpmsg */
 #define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
···
 /**
  * struct rpmsg_endpoint - binds a local rpmsg address to its user
  * @rpdev: rpmsg channel device
+ * @refcount: when this drops to zero, the ept is deallocated
  * @cb: rx callback handler
+ * @cb_lock: must be taken before accessing/changing @cb
  * @addr: local rpmsg address
  * @priv: private data for the driver's use
  *
···
  */
 struct rpmsg_endpoint {
 	struct rpmsg_channel *rpdev;
+	struct kref refcount;
 	rpmsg_rx_cb_t cb;
+	struct mutex cb_lock;
 	u32 addr;
 	void *priv;
 };
···901901 mutex_unlock(&cgroup_mutex);902902903903 /*904904- * We want to drop the active superblock reference from the905905- * cgroup creation after all the dentry refs are gone -906906- * kill_sb gets mighty unhappy otherwise. Mark907907- * dentry->d_fsdata with cgroup_diput() to tell908908- * cgroup_d_release() to call deactivate_super().904904+ * Drop the active superblock reference that we took when we905905+ * created the cgroup909906 */910910- dentry->d_fsdata = cgroup_diput;907907+ deactivate_super(cgrp->root->sb);911908912909 /*913910 * if we're getting rid of the cgroup, refcount should ensure···928931static int cgroup_delete(const struct dentry *d)929932{930933 return 1;931931-}932932-933933-static void cgroup_d_release(struct dentry *dentry)934934-{935935- /* did cgroup_diput() tell me to deactivate super? */936936- if (dentry->d_fsdata == cgroup_diput)937937- deactivate_super(dentry->d_sb);938934}939935940936static void remove_dir(struct dentry *d)···15371547 static const struct dentry_operations cgroup_dops = {15381548 .d_iput = cgroup_diput,15391549 .d_delete = cgroup_delete,15401540- .d_release = cgroup_d_release,15411550 };1542155115431552 struct inode *inode =···38833894{38843895 struct cgroup_subsys_state *css =38853896 container_of(work, struct cgroup_subsys_state, dput_work);38973897+ struct dentry *dentry = css->cgroup->dentry;38983898+ struct super_block *sb = dentry->d_sb;3886389938873887- dput(css->cgroup->dentry);39003900+ atomic_inc(&sb->s_active);39013901+ dput(dentry);39023902+ deactivate_super(sb);38883903}3889390438903905static void init_cgroup_css(struct cgroup_subsys_state *css,
+8-3
kernel/fork.c
···304304 }305305306306 err = arch_dup_task_struct(tsk, orig);307307+308308+ /*309309+ * We defer looking at err, because we will need this setup310310+ * for the clean up path to work correctly.311311+ */312312+ tsk->stack = ti;313313+ setup_thread_stack(tsk, orig);314314+307315 if (err)308316 goto out;309317310310- tsk->stack = ti;311311-312312- setup_thread_stack(tsk, orig);313318 clear_user_return_notifier(tsk);314319 clear_tsk_need_resched(tsk);315320 stackend = end_of_stack(tsk);
+37-16
kernel/hrtimer.c
···657657 return 0;658658}659659660660+static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)661661+{662662+ ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset;663663+ ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset;664664+665665+ return ktime_get_update_offsets(offs_real, offs_boot);666666+}667667+660668/*661669 * Retrigger next event is called after clock was set662670 *···673665static void retrigger_next_event(void *arg)674666{675667 struct hrtimer_cpu_base *base = &__get_cpu_var(hrtimer_bases);676676- struct timespec realtime_offset, xtim, wtm, sleep;677668678669 if (!hrtimer_hres_active())679670 return;680671681681- /* Optimized out for !HIGH_RES */682682- get_xtime_and_monotonic_and_sleep_offset(&xtim, &wtm, &sleep);683683- set_normalized_timespec(&realtime_offset, -wtm.tv_sec, -wtm.tv_nsec);684684-685685- /* Adjust CLOCK_REALTIME offset */686672 raw_spin_lock(&base->lock);687687- base->clock_base[HRTIMER_BASE_REALTIME].offset =688688- timespec_to_ktime(realtime_offset);689689- base->clock_base[HRTIMER_BASE_BOOTTIME].offset =690690- timespec_to_ktime(sleep);691691-673673+ hrtimer_update_base(base);692674 hrtimer_force_reprogram(base, 0);693675 raw_spin_unlock(&base->lock);694676}···708710 base->clock_base[i].resolution = KTIME_HIGH_RES;709711710712 tick_setup_sched_timer();711711-712713 /* "Retrigger" the interrupt to get things going */713714 retrigger_next_event(NULL);714715 local_irq_restore(flags);715716 return 1;717717+}718718+719719+/*720720+ * Called from timekeeping code to reprogram the hrtimer interrupt721721+ * device. If called from the timer interrupt context we defer it to722722+ * softirq context.723723+ */724724+void clock_was_set_delayed(void)725725+{726726+ struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);727727+728728+ cpu_base->clock_was_set = 1;729729+ __raise_softirq_irqoff(HRTIMER_SOFTIRQ);716730}717731718732#else···12601250 cpu_base->nr_events++;12611251 dev->next_event.tv64 = KTIME_MAX;1262125212631263- entry_time = now = ktime_get();12531253+ raw_spin_lock(&cpu_base->lock);12541254+ entry_time = now = hrtimer_update_base(cpu_base);12641255retry:12651256 expires_next.tv64 = KTIME_MAX;12661266-12671267- raw_spin_lock(&cpu_base->lock);12681257 /*12691258 * We set expires_next to KTIME_MAX here with cpu_base->lock12701259 * held to prevent that a timer is enqueued in our queue via···13391330 * We need to prevent that we loop forever in the hrtimer13401331 * interrupt routine. We give it 3 attempts to avoid13411332 * overreacting on some spurious event.13331333+ *13341334+ * Acquire base lock for updating the offsets and retrieving13351335+ * the current time.13421336 */13431343- now = ktime_get();13371337+ raw_spin_lock(&cpu_base->lock);13381338+ now = hrtimer_update_base(cpu_base);13441339 cpu_base->nr_retries++;13451340 if (++retries < 3)13461341 goto retry;···13561343 */13571344 cpu_base->nr_hangs++;13581345 cpu_base->hang_detected = 1;13461346+ raw_spin_unlock(&cpu_base->lock);13591347 delta = ktime_sub(now, entry_time);13601348 if (delta.tv64 > cpu_base->max_hang_time.tv64)13611349 cpu_base->max_hang_time = delta;···1409139514101396static void run_hrtimer_softirq(struct softirq_action *h)14111397{13981398+ struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);13991399+14001400+ if (cpu_base->clock_was_set) {14011401+ cpu_base->clock_was_set = 0;14021402+ clock_was_set();14031403+ }14041404+14121405 hrtimer_peek_ahead_timers();14131406}14141407
-8
kernel/power/hibernate.c
···2727#include <linux/syscore_ops.h>2828#include <linux/ctype.h>2929#include <linux/genhd.h>3030-#include <scsi/scsi_scan.h>31303231#include "power.h"3332···746747 msleep(10);747748 async_synchronize_full();748749 }749749-750750- /*751751- * We can't depend on SCSI devices being available after loading752752- * one of their modules until scsi_complete_async_scans() is753753- * called and the resume device usually is a SCSI one.754754- */755755- scsi_complete_async_scans();756750757751 swsusp_resume_device = name_to_dev_t(resume_file);758752 if (!swsusp_resume_device) {
···153153 *154154 * Caller must disable preemption.155155 */156156-void rcu_preempt_note_context_switch(void)156156+static void rcu_preempt_note_context_switch(int cpu)157157{158158 struct task_struct *t = current;159159 unsigned long flags;···164164 (t->rcu_read_unlock_special & RCU_READ_UNLOCK_BLOCKED) == 0) {165165166166 /* Possibly blocking in an RCU read-side critical section. */167167- rdp = __this_cpu_ptr(rcu_preempt_state.rda);167167+ rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu);168168 rnp = rdp->mynode;169169 raw_spin_lock_irqsave(&rnp->lock, flags);170170 t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BLOCKED;···228228 * means that we continue to block the current grace period.229229 */230230 local_irq_save(flags);231231- rcu_preempt_qs(smp_processor_id());231231+ rcu_preempt_qs(cpu);232232 local_irq_restore(flags);233233}234234···10001000 rcu_sched_force_quiescent_state();10011001}10021002EXPORT_SYMBOL_GPL(rcu_force_quiescent_state);10031003+10041004+/*10051005+ * Because preemptible RCU does not exist, we never have to check for10061006+ * CPUs being in quiescent states.10071007+ */10081008+static void rcu_preempt_note_context_switch(int cpu)10091009+{10101010+}1003101110041012/*10051013 * Because preemptible RCU does not exist, there are never any preempted
+204-74
kernel/sched/core.c
···20812081#endif2082208220832083 /* Here we just switch the register state and the stack. */20842084- rcu_switch_from(prev);20852084 switch_to(prev, next, prev);2086208520872086 barrier();···21602161}216121622162216321642164+/*21652165+ * Global load-average calculations21662166+ *21672167+ * We take a distributed and async approach to calculating the global load-avg21682168+ * in order to minimize overhead.21692169+ *21702170+ * The global load average is an exponentially decaying average of nr_running +21712171+ * nr_uninterruptible.21722172+ *21732173+ * Once every LOAD_FREQ:21742174+ *21752175+ * nr_active = 0;21762176+ * for_each_possible_cpu(cpu)21772177+ * nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible;21782178+ *21792179+ * avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n)21802180+ *21812181+ * Due to a number of reasons the above turns in the mess below:21822182+ *21832183+ * - for_each_possible_cpu() is prohibitively expensive on machines with21842184+ * serious number of cpus, therefore we need to take a distributed approach21852185+ * to calculating nr_active.21862186+ *21872187+ * \Sum_i x_i(t) = \Sum_i x_i(t) - x_i(t_0) | x_i(t_0) := 021882188+ * = \Sum_i { \Sum_j=1 x_i(t_j) - x_i(t_j-1) }21892189+ *21902190+ * So assuming nr_active := 0 when we start out -- true per definition, we21912191+ * can simply take per-cpu deltas and fold those into a global accumulate21922192+ * to obtain the same result. See calc_load_fold_active().21932193+ *21942194+ * Furthermore, in order to avoid synchronizing all per-cpu delta folding21952195+ * across the machine, we assume 10 ticks is sufficient time for every21962196+ * cpu to have completed this task.21972197+ *21982198+ * This places an upper-bound on the IRQ-off latency of the machine. Then21992199+ * again, being late doesn't lose the delta, just wrecks the sample.22002200+ *22012201+ * - cpu_rq()->nr_uninterruptible isn't accurately tracked per-cpu because22022202+ * this would add another cross-cpu cacheline miss and atomic operation22032203+ * to the wakeup path. Instead we increment on whatever cpu the task ran22042204+ * when it went into uninterruptible state and decrement on whatever cpu22052205+ * did the wakeup. This means that only the sum of nr_uninterruptible over22062206+ * all cpus yields the correct result.22072207+ *22082208+ * This covers the NO_HZ=n code, for extra head-aches, see the comment below.22092209+ */22102210+21632211/* Variables and functions for calc_load */21642212static atomic_long_t calc_load_tasks;21652213static unsigned long calc_load_update;21662214unsigned long avenrun[3];21672167-EXPORT_SYMBOL(avenrun);22152215+EXPORT_SYMBOL(avenrun); /* should be removed */22162216+22172217+/**22182218+ * get_avenrun - get the load average array22192219+ * @loads: pointer to dest load array22202220+ * @offset: offset to add22212221+ * @shift: shift count to shift the result left22222222+ *22232223+ * These values are estimates at best, so no need for locking.22242224+ */22252225+void get_avenrun(unsigned long *loads, unsigned long offset, int shift)22262226+{22272227+ loads[0] = (avenrun[0] + offset) << shift;22282228+ loads[1] = (avenrun[1] + offset) << shift;22292229+ loads[2] = (avenrun[2] + offset) << shift;22302230+}2168223121692232static long calc_load_fold_active(struct rq *this_rq)21702233{···22432182 return delta;22442183}2245218421852185+/*21862186+ * a1 = a0 * e + a * (1 - e)21872187+ */22462188static unsigned long22472189calc_load(unsigned long load, unsigned long exp, unsigned long active)22482190{···2257219322582194#ifdef CONFIG_NO_HZ22592195/*22602260- * For NO_HZ we delay the active fold to the next LOAD_FREQ update.21962196+ * Handle NO_HZ for the global load-average.21972197+ *21982198+ * Since the above described distributed algorithm to compute the global21992199+ * load-average relies on per-cpu sampling from the tick, it is affected by22002200+ * NO_HZ.22012201+ *22022202+ * The basic idea is to fold the nr_active delta into a global idle-delta upon22032203+ * entering NO_HZ state such that we can include this as an 'extra' cpu delta22042204+ * when we read the global state.22052205+ *22062206+ * Obviously reality has to ruin such a delightfully simple scheme:22072207+ *22082208+ * - When we go NO_HZ idle during the window, we can negate our sample22092209+ * contribution, causing under-accounting.22102210+ *22112211+ * We avoid this by keeping two idle-delta counters and flipping them22122212+ * when the window starts, thus separating old and new NO_HZ load.22132213+ *22142214+ * The only trick is the slight shift in index flip for read vs write.22152215+ *22162216+ * 0s 5s 10s 15s22172217+ * +10 +10 +10 +1022182218+ * |-|-----------|-|-----------|-|-----------|-|22192219+ * r:0 0 1 1 0 0 1 1 022202220+ * w:0 1 1 0 0 1 1 0 022212221+ *22222222+ * This ensures we'll fold the old idle contribution in this window while22232223+ * accumulating the new one.22242224+ *22252225+ * - When we wake up from NO_HZ idle during the window, we push up our22262226+ * contribution, since we effectively move our sample point to a known22272227+ * busy state.22282228+ *22292229+ * This is solved by pushing the window forward, and thus skipping the22302230+ * sample, for this cpu (effectively using the idle-delta for this cpu which22312231+ * was in effect at the time the window opened). This also solves the issue22322232+ * of having to deal with a cpu having been in NOHZ idle for multiple22332233+ * LOAD_FREQ intervals.22612234 *22622235 * When making the ILB scale, we should try to pull this in as well.22632236 */22642264-static atomic_long_t calc_load_tasks_idle;22372237+static atomic_long_t calc_load_idle[2];22382238+static int calc_load_idx;2265223922662266-void calc_load_account_idle(struct rq *this_rq)22402240+static inline int calc_load_write_idx(void)22672241{22422242+ int idx = calc_load_idx;22432243+22442244+ /*22452245+ * See calc_global_nohz(), if we observe the new index, we also22462246+ * need to observe the new update time.22472247+ */22482248+ smp_rmb();22492249+22502250+ /*22512251+ * If the folding window started, make sure we start writing in the22522252+ * next idle-delta.22532253+ */22542254+ if (!time_before(jiffies, calc_load_update))22552255+ idx++;22562256+22572257+ return idx & 1;22582258+}22592259+22602260+static inline int calc_load_read_idx(void)22612261+{22622262+ return calc_load_idx & 1;22632263+}22642264+22652265+void calc_load_enter_idle(void)22662266+{22672267+ struct rq *this_rq = this_rq();22682268 long delta;2269226922702270+ /*22712271+ * We're going into NOHZ mode, if there's any pending delta, fold it22722272+ * into the pending idle delta.22732273+ */22702274 delta = calc_load_fold_active(this_rq);22712271- if (delta)22722272- atomic_long_add(delta, &calc_load_tasks_idle);22752275+ if (delta) {22762276+ int idx = calc_load_write_idx();22772277+ atomic_long_add(delta, &calc_load_idle[idx]);22782278+ }22792279+}22802280+22812281+void calc_load_exit_idle(void)22822282+{22832283+ struct rq *this_rq = this_rq();22842284+22852285+ /*22862286+ * If we're still before the sample window, we're done.22872287+ */22882288+ if (time_before(jiffies, this_rq->calc_load_update))22892289+ return;22902290+22912291+ /*22922292+ * We woke inside or after the sample window, this means we're already22932293+ * accounted through the nohz accounting, so skip the entire deal and22942294+ * sync up for the next window.22952295+ */22962296+ this_rq->calc_load_update = calc_load_update;22972297+ if (time_before(jiffies, this_rq->calc_load_update + 10))22982298+ this_rq->calc_load_update += LOAD_FREQ;22732299}2274230022752301static long calc_load_fold_idle(void)22762302{23032303+ int idx = calc_load_read_idx();22772304 long delta = 0;2278230522792279- /*22802280- * Its got a race, we don't care...22812281- */22822282- if (atomic_long_read(&calc_load_tasks_idle))22832283- delta = atomic_long_xchg(&calc_load_tasks_idle, 0);23062306+ if (atomic_long_read(&calc_load_idle[idx]))23072307+ delta = atomic_long_xchg(&calc_load_idle[idx], 0);2284230822852309 return delta;22862310}···24542302{24552303 long delta, active, n;2456230424572457- /*24582458- * If we crossed a calc_load_update boundary, make sure to fold24592459- * any pending idle changes, the respective CPUs might have24602460- * missed the tick driven calc_load_account_active() update24612461- * due to NO_HZ.24622462- */24632463- delta = calc_load_fold_idle();24642464- if (delta)24652465- atomic_long_add(delta, &calc_load_tasks);23052305+ if (!time_before(jiffies, calc_load_update + 10)) {23062306+ /*23072307+ * Catch-up, fold however many we are behind still23082308+ */23092309+ delta = jiffies - calc_load_update - 10;23102310+ n = 1 + (delta / LOAD_FREQ);23112311+23122312+ active = atomic_long_read(&calc_load_tasks);23132313+ active = active > 0 ? active * FIXED_1 : 0;23142314+23152315+ avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n);23162316+ avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n);23172317+ avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n);23182318+23192319+ calc_load_update += n * LOAD_FREQ;23202320+ }2466232124672322 /*24682468- * It could be the one fold was all it took, we done!23232323+ * Flip the idle index...23242324+ *23252325+ * Make sure we first write the new time then flip the index, so that23262326+ * calc_load_write_idx() will see the new time when it reads the new23272327+ * index, this avoids a double flip messing things up.24692328 */24702470- if (time_before(jiffies, calc_load_update + 10))24712471- return;24722472-24732473- /*24742474- * Catch-up, fold however many we are behind still24752475- */24762476- delta = jiffies - calc_load_update - 10;24772477- n = 1 + (delta / LOAD_FREQ);24782478-24792479- active = atomic_long_read(&calc_load_tasks);24802480- active = active > 0 ? active * FIXED_1 : 0;24812481-24822482- avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n);24832483- avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n);24842484- avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n);24852485-24862486- calc_load_update += n * LOAD_FREQ;23292329+ smp_wmb();23302330+ calc_load_idx++;24872331}24882488-#else24892489-void calc_load_account_idle(struct rq *this_rq)24902490-{24912491-}23322332+#else /* !CONFIG_NO_HZ */2492233324932493-static inline long calc_load_fold_idle(void)24942494-{24952495- return 0;24962496-}23342334+static inline long calc_load_fold_idle(void) { return 0; }23352335+static inline void calc_global_nohz(void) { }2497233624982498-static void calc_global_nohz(void)24992499-{25002500-}25012501-#endif25022502-25032503-/**25042504- * get_avenrun - get the load average array25052505- * @loads: pointer to dest load array25062506- * @offset: offset to add25072507- * @shift: shift count to shift the result left25082508- *25092509- * These values are estimates at best, so no need for locking.25102510- */25112511-void get_avenrun(unsigned long *loads, unsigned long offset, int shift)25122512-{25132513- loads[0] = (avenrun[0] + offset) << shift;25142514- loads[1] = (avenrun[1] + offset) << shift;25152515- loads[2] = (avenrun[2] + offset) << shift;25162516-}23372337+#endif /* CONFIG_NO_HZ */2517233825182339/*25192340 * calc_load - update the avenrun load estimates 10 ticks after the···24942369 */24952370void calc_global_load(unsigned long ticks)24962371{24972497- long active;23722372+ long active, delta;2498237324992374 if (time_before(jiffies, calc_load_update + 10))25002375 return;23762376+23772377+ /*23782378+ * Fold the 'old' idle-delta to include all NO_HZ cpus.23792379+ */23802380+ delta = calc_load_fold_idle();23812381+ if (delta)23822382+ atomic_long_add(delta, &calc_load_tasks);2501238325022384 active = atomic_long_read(&calc_load_tasks);25032385 active = active > 0 ? active * FIXED_1 : 0;···25162384 calc_load_update += LOAD_FREQ;2517238525182386 /*25192519- * Account one period with whatever state we found before25202520- * folding in the nohz state and ageing the entire idle period.25212521- *25222522- * This avoids loosing a sample when we go idle between 25232523- * calc_load_account_active() (10 ticks ago) and now and thus25242524- * under-accounting.23872387+ * In case we idled for multiple LOAD_FREQ intervals, catch up in bulk.25252388 */25262389 calc_global_nohz();25272390}···25332406 return;2534240725352408 delta = calc_load_fold_active(this_rq);25362536- delta += calc_load_fold_idle();25372409 if (delta)25382410 atomic_long_add(delta, &calc_load_tasks);2539241125402412 this_rq->calc_load_update += LOAD_FREQ;25412413}24142414+24152415+/*24162416+ * End of global load-average stuff24172417+ */2542241825432419/*25442420 * The exact cpuload at various idx values, calculated at every tick would be
···17881788#ifdef CONFIG_CHECKPOINT_RESTORE17891789static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd)17901790{17911791- struct vm_area_struct *vma;17921791 struct file *exe_file;17931792 struct dentry *dentry;17941793 int err;···18151816 down_write(&mm->mmap_sem);1816181718171818 /*18181818- * Forbid mm->exe_file change if there are mapped other files.18191819+ * Forbid mm->exe_file change if old file still mapped.18191820 */18201821 err = -EBUSY;18211821- for (vma = mm->mmap; vma; vma = vma->vm_next) {18221822- if (vma->vm_file && !path_equal(&vma->vm_file->f_path,18231823- &exe_file->f_path))18241824- goto exit_unlock;18221822+ if (mm->exe_file) {18231823+ struct vm_area_struct *vma;18241824+18251825+ for (vma = mm->mmap; vma; vma = vma->vm_next)18261826+ if (vma->vm_file &&18271827+ path_equal(&vma->vm_file->f_path,18281828+ &mm->exe_file->f_path))18291829+ goto exit_unlock;18251830 }1826183118271832 /*···18381835 if (test_and_set_bit(MMF_EXE_FILE_CHANGED, &mm->flags))18391836 goto exit_unlock;1840183718381838+ err = 0;18411839 set_mm_exe_file(mm, exe_file);18421840exit_unlock:18431841 up_write(&mm->mmap_sem);
+6-2
kernel/time/ntp.c
···409409 time_state = TIME_DEL;410410 break;411411 case TIME_INS:412412- if (secs % 86400 == 0) {412412+ if (!(time_status & STA_INS))413413+ time_state = TIME_OK;414414+ else if (secs % 86400 == 0) {413415 leap = -1;414416 time_state = TIME_OOP;415417 time_tai++;···420418 }421419 break;422420 case TIME_DEL:423423- if ((secs + 1) % 86400 == 0) {421421+ if (!(time_status & STA_DEL))422422+ time_state = TIME_OK;423423+ else if ((secs + 1) % 86400 == 0) {424424 leap = 1;425425 time_tai--;426426 time_state = TIME_WAIT;
+2
kernel/time/tick-sched.c
···406406 */407407 if (!ts->tick_stopped) {408408 select_nohz_load_balancer(1);409409+ calc_load_enter_idle();409410410411 ts->idle_tick = hrtimer_get_expires(&ts->sched_timer);411412 ts->tick_stopped = 1;···598597 account_idle_ticks(ticks);599598#endif600599600600+ calc_load_exit_idle();601601 touch_softlockup_watchdog();602602 /*603603 * Cancel the scheduled timer and restore the tick
+62-2
kernel/time/timekeeping.c
···7070 /* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */7171 struct timespec raw_time;72727373+ /* Offset clock monotonic -> clock realtime */7474+ ktime_t offs_real;7575+7676+ /* Offset clock monotonic -> clock boottime */7777+ ktime_t offs_boot;7878+7379 /* Seqlock for all timekeeper values */7480 seqlock_t lock;7581};···178172 return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);179173}180174175175+static void update_rt_offset(void)176176+{177177+ struct timespec tmp, *wtm = &timekeeper.wall_to_monotonic;178178+179179+ set_normalized_timespec(&tmp, -wtm->tv_sec, -wtm->tv_nsec);180180+ timekeeper.offs_real = timespec_to_ktime(tmp);181181+}182182+181183/* must hold write on timekeeper.lock */182184static void timekeeping_update(bool clearntp)183185{···193179 timekeeper.ntp_error = 0;194180 ntp_clear();195181 }182182+ update_rt_offset();196183 update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,197184 timekeeper.clock, timekeeper.mult);198185}···619604 }620605 set_normalized_timespec(&timekeeper.wall_to_monotonic,621606 -boot.tv_sec, -boot.tv_nsec);607607+ update_rt_offset();622608 timekeeper.total_sleep_time.tv_sec = 0;623609 timekeeper.total_sleep_time.tv_nsec = 0;624610 write_sequnlock_irqrestore(&timekeeper.lock, flags);···627611628612/* time in seconds when suspend began */629613static struct timespec timekeeping_suspend_time;614614+615615+static void update_sleep_time(struct timespec t)616616+{617617+ timekeeper.total_sleep_time = t;618618+ timekeeper.offs_boot = timespec_to_ktime(t);619619+}630620631621/**632622 * __timekeeping_inject_sleeptime - Internal function to add sleep interval···652630 timekeeper.xtime = timespec_add(timekeeper.xtime, *delta);653631 timekeeper.wall_to_monotonic =654632 timespec_sub(timekeeper.wall_to_monotonic, *delta);655655- timekeeper.total_sleep_time = timespec_add(656656- timekeeper.total_sleep_time, *delta);633633+ update_sleep_time(timespec_add(timekeeper.total_sleep_time, *delta));657634}658635659636···717696 timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);718697 timekeeper.ntp_error = 0;719698 timekeeping_suspended = 0;699699+ timekeeping_update(false);720700 write_sequnlock_irqrestore(&timekeeper.lock, flags);721701722702 touch_softlockup_watchdog();···985963 leap = second_overflow(timekeeper.xtime.tv_sec);986964 timekeeper.xtime.tv_sec += leap;987965 timekeeper.wall_to_monotonic.tv_sec -= leap;966966+ if (leap)967967+ clock_was_set_delayed();988968 }989969990970 /* Accumulate raw time */···11031079 leap = second_overflow(timekeeper.xtime.tv_sec);11041080 timekeeper.xtime.tv_sec += leap;11051081 timekeeper.wall_to_monotonic.tv_sec -= leap;10821082+ if (leap)10831083+ clock_was_set_delayed();11061084 }1107108511081086 timekeeping_update(false);···12711245 *sleep = timekeeper.total_sleep_time;12721246 } while (read_seqretry(&timekeeper.lock, seq));12731247}12481248+12491249+#ifdef CONFIG_HIGH_RES_TIMERS12501250+/**12511251+ * ktime_get_update_offsets - hrtimer helper12521252+ * @offs_real: pointer to storage for monotonic -> realtime offset12531253+ * @offs_boot: pointer to storage for monotonic -> boottime offset12541254+ *12551255+ * Returns current monotonic time and updates the offsets12561256+ * Called from hrtimer_interrupt() or retrigger_next_event()12571257+ */12581258+ktime_t ktime_get_update_offsets(ktime_t *offs_real, ktime_t *offs_boot)12591259+{12601260+ ktime_t now;12611261+ unsigned int seq;12621262+ u64 secs, nsecs;12631263+12641264+ do {12651265+ seq = read_seqbegin(&timekeeper.lock);12661266+12671267+ secs = timekeeper.xtime.tv_sec;12681268+ nsecs = timekeeper.xtime.tv_nsec;12691269+ nsecs += timekeeping_get_ns();12701270+ /* If arch requires, add in gettimeoffset() */12711271+ nsecs += arch_gettimeoffset();12721272+12731273+ *offs_real = timekeeper.offs_real;12741274+ *offs_boot = timekeeper.offs_boot;12751275+ } while (read_seqretry(&timekeeper.lock, seq));12761276+12771277+ now = ktime_add_ns(ktime_set(secs, 0), nsecs);12781278+ now = ktime_sub(now, *offs_real);12791279+ return now;12801280+}12811281+#endif1274128212751283/**12761284 * ktime_get_monotonic_offset() - get wall_to_monotonic in ktime_t format
+3-3
kernel/trace/ring_buffer.c
···10751075 rb_init_page(bpage->page);1076107610771077 INIT_LIST_HEAD(&cpu_buffer->reader_page->list);10781078+ INIT_LIST_HEAD(&cpu_buffer->new_pages);1078107910791080 ret = rb_allocate_pages(cpu_buffer, nr_pages);10801081 if (ret < 0)···13471346 * If something was added to this page, it was full13481347 * since it is not the tail page. So we deduct the13491348 * bytes consumed in ring buffer from here.13501350- * No need to update overruns, since this page is13511351- * deleted from ring buffer and its entries are13521352- * already accounted for.13491349+ * Increment overrun to account for the lost events.13531350 */13511351+ local_add(page_entries, &cpu_buffer->overrun);13541352 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);13551353 }13561354
+2-2
lib/dma-debug.c
···7878static DEFINE_SPINLOCK(free_entries_lock);79798080/* Global disable flag - will be set in case of an error */8181-static bool global_disable __read_mostly;8181+static u32 global_disable __read_mostly;82828383/* Global error count */8484static u32 error_count;···657657658658 global_disable_dent = debugfs_create_bool("disabled", 0444,659659 dma_debug_dent,660660- (u32 *)&global_disable);660660+ &global_disable);661661 if (!global_disable_dent)662662 goto out_err;663663
+5-1
mm/bootmem.c
···698698 return ___alloc_bootmem(size, align, goal, limit);699699}700700701701-static void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,701701+void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,702702 unsigned long size, unsigned long align,703703 unsigned long goal, unsigned long limit)704704{···709709 align, goal, limit);710710 if (ptr)711711 return ptr;712712+713713+ /* do not panic in alloc_bootmem_bdata() */714714+ if (limit && goal + size > limit)715715+ limit = 0;712716713717 ptr = alloc_bootmem_bdata(pgdat->bdata, size, align, goal, limit);714718 if (ptr)
···1515#include <linux/sched.h>1616#include <linux/ksm.h>1717#include <linux/fs.h>1818+#include <linux/file.h>18191920/*2021 * Any behaviour which results in changes to the vma->vm_flags needs to···205204{206205 loff_t offset;207206 int error;207207+ struct file *f;208208209209 *prev = NULL; /* tell sys_madvise we drop mmap_sem */210210211211 if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB))212212 return -EINVAL;213213214214- if (!vma->vm_file || !vma->vm_file->f_mapping215215- || !vma->vm_file->f_mapping->host) {214214+ f = vma->vm_file;215215+216216+ if (!f || !f->f_mapping || !f->f_mapping->host) {216217 return -EINVAL;217218 }218219···224221 offset = (loff_t)(start - vma->vm_start)225222 + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);226223227227- /* filesystem's fallocate may need to take i_mutex */224224+ /*225225+ * Filesystem's fallocate may need to take i_mutex. We need to226226+ * explicitly grab a reference because the vma (and hence the227227+ * vma's reference to the file) can go away as soon as we drop228228+ * mmap_sem.229229+ */230230+ get_file(f);228231 up_read(¤t->mm->mmap_sem);229229- error = do_fallocate(vma->vm_file,232232+ error = do_fallocate(f,230233 FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,231234 offset, end - start);235235+ fput(f);232236 down_read(¤t->mm->mmap_sem);233237 return error;234238}
+23-28
mm/memblock.c
···143143 MAX_NUMNODES);144144}145145146146-/*147147- * Free memblock.reserved.regions148148- */149149-int __init_memblock memblock_free_reserved_regions(void)150150-{151151- if (memblock.reserved.regions == memblock_reserved_init_regions)152152- return 0;153153-154154- return memblock_free(__pa(memblock.reserved.regions),155155- sizeof(struct memblock_region) * memblock.reserved.max);156156-}157157-158158-/*159159- * Reserve memblock.reserved.regions160160- */161161-int __init_memblock memblock_reserve_reserved_regions(void)162162-{163163- if (memblock.reserved.regions == memblock_reserved_init_regions)164164- return 0;165165-166166- return memblock_reserve(__pa(memblock.reserved.regions),167167- sizeof(struct memblock_region) * memblock.reserved.max);168168-}169169-170146static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)171147{172148 type->total_size -= type->regions[r].size;···158182 type->regions[0].size = 0;159183 memblock_set_region_node(&type->regions[0], MAX_NUMNODES);160184 }185185+}186186+187187+phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info(188188+ phys_addr_t *addr)189189+{190190+ if (memblock.reserved.regions == memblock_reserved_init_regions)191191+ return 0;192192+193193+ *addr = __pa(memblock.reserved.regions);194194+195195+ return PAGE_ALIGN(sizeof(struct memblock_region) *196196+ memblock.reserved.max);161197}162198163199/**···192204 phys_addr_t new_area_size)193205{194206 struct memblock_region *new_array, *old_array;207207+ phys_addr_t old_alloc_size, new_alloc_size;195208 phys_addr_t old_size, new_size, addr;196209 int use_slab = slab_is_available();197210 int *in_slab;···206217 /* Calculate new doubled size */207218 old_size = type->max * sizeof(struct memblock_region);208219 new_size = old_size << 1;220220+ /*221221+ * We need to allocate the new array aligned to PAGE_SIZE,222222+ * so we can free it completely later.223223+ */224224+ old_alloc_size = PAGE_ALIGN(old_size);225225+ new_alloc_size = PAGE_ALIGN(new_size);209226210227 /* Retrieve the slab flag */211228 if (type == &memblock.memory)···240245241246 addr = memblock_find_in_range(new_area_start + new_area_size,242247 memblock.current_limit,243243- new_size, sizeof(phys_addr_t));248248+ new_alloc_size, PAGE_SIZE);244249 if (!addr && new_area_size)245250 addr = memblock_find_in_range(0,246251 min(new_area_start, memblock.current_limit),247247- new_size, sizeof(phys_addr_t));252252+ new_alloc_size, PAGE_SIZE);248253249254 new_array = addr ? __va(addr) : 0;250255 }···274279 kfree(old_array);275280 else if (old_array != memblock_memory_init_regions &&276281 old_array != memblock_reserved_init_regions)277277- memblock_free(__pa(old_array), old_size);282282+ memblock_free(__pa(old_array), old_alloc_size);278283279284 /* Reserve the new array if that comes from the memblock.280285 * Otherwise, we needn't do it281286 */282287 if (!use_slab)283283- BUG_ON(memblock_reserve(addr, new_size));288288+ BUG_ON(memblock_reserve(addr, new_alloc_size));284289285290 /* Update slab flag */286291 *in_slab = use_slab;
···105105 __free_pages_bootmem(pfn_to_page(i), 0);106106}107107108108+static unsigned long __init __free_memory_core(phys_addr_t start,109109+ phys_addr_t end)110110+{111111+ unsigned long start_pfn = PFN_UP(start);112112+ unsigned long end_pfn = min_t(unsigned long,113113+ PFN_DOWN(end), max_low_pfn);114114+115115+ if (start_pfn > end_pfn)116116+ return 0;117117+118118+ __free_pages_memory(start_pfn, end_pfn);119119+120120+ return end_pfn - start_pfn;121121+}122122+108123unsigned long __init free_low_memory_core_early(int nodeid)109124{110125 unsigned long count = 0;111111- phys_addr_t start, end;126126+ phys_addr_t start, end, size;112127 u64 i;113128114114- /* free reserved array temporarily so that it's treated as free area */115115- memblock_free_reserved_regions();129129+ for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL)130130+ count += __free_memory_core(start, end);116131117117- for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) {118118- unsigned long start_pfn = PFN_UP(start);119119- unsigned long end_pfn = min_t(unsigned long,120120- PFN_DOWN(end), max_low_pfn);121121- if (start_pfn < end_pfn) {122122- __free_pages_memory(start_pfn, end_pfn);123123- count += end_pfn - start_pfn;124124- }125125- }132132+ /* free range that is used for reserved array if we allocate it */133133+ size = get_allocated_memblock_reserved_regions_info(&start);134134+ if (size)135135+ count += __free_memory_core(start, start + size);126136127127- /* put region array back? */128128- memblock_reserve_reserved_regions();129137 return count;130138}131139···282274 return ___alloc_bootmem(size, align, goal, limit);283275}284276285285-static void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,277277+void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,286278 unsigned long size,287279 unsigned long align,288280 unsigned long goal,
+6-1
mm/page_alloc.c
···56355635__alloc_contig_migrate_alloc(struct page *page, unsigned long private,56365636 int **resultp)56375637{56385638- return alloc_page(GFP_HIGHUSER_MOVABLE);56385638+ gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;56395639+56405640+ if (PageHighMem(page))56415641+ gfp_mask |= __GFP_HIGHMEM;56425642+56435643+ return alloc_page(gfp_mask);56395644}5640564556415646/* [start, end) must belong to a single zone. */
+59-136
mm/shmem.c
···264264}265265266266/*267267+ * Sometimes, before we decide whether to proceed or to fail, we must check268268+ * that an entry was not already brought back from swap by a racing thread.269269+ *270270+ * Checking page is not enough: by the time a SwapCache page is locked, it271271+ * might be reused, and again be SwapCache, using the same swap as before.272272+ */273273+static bool shmem_confirm_swap(struct address_space *mapping,274274+ pgoff_t index, swp_entry_t swap)275275+{276276+ void *item;277277+278278+ rcu_read_lock();279279+ item = radix_tree_lookup(&mapping->page_tree, index);280280+ rcu_read_unlock();281281+ return item == swp_to_radix_entry(swap);282282+}283283+284284+/*267285 * Like add_to_page_cache_locked, but error if expected item has gone.268286 */269287static int shmem_add_to_page_cache(struct page *page,270288 struct address_space *mapping,271289 pgoff_t index, gfp_t gfp, void *expected)272290{273273- int error = 0;291291+ int error;274292275293 VM_BUG_ON(!PageLocked(page));276294 VM_BUG_ON(!PageSwapBacked(page));277295278278- if (!expected)279279- error = radix_tree_preload(gfp & GFP_RECLAIM_MASK);280280- if (!error) {281281- page_cache_get(page);282282- page->mapping = mapping;283283- page->index = index;296296+ page_cache_get(page);297297+ page->mapping = mapping;298298+ page->index = index;284299285285- spin_lock_irq(&mapping->tree_lock);286286- if (!expected)287287- error = radix_tree_insert(&mapping->page_tree,288288- index, page);289289- else290290- error = shmem_radix_tree_replace(mapping, index,291291- expected, page);292292- if (!error) {293293- mapping->nrpages++;294294- __inc_zone_page_state(page, NR_FILE_PAGES);295295- __inc_zone_page_state(page, NR_SHMEM);296296- spin_unlock_irq(&mapping->tree_lock);297297- } else {298298- page->mapping = NULL;299299- spin_unlock_irq(&mapping->tree_lock);300300- page_cache_release(page);301301- }302302- if (!expected)303303- radix_tree_preload_end();300300+ spin_lock_irq(&mapping->tree_lock);301301+ if (!expected)302302+ error = radix_tree_insert(&mapping->page_tree, index, page);303303+ else304304+ error = shmem_radix_tree_replace(mapping, index, expected,305305+ page);306306+ if (!error) {307307+ mapping->nrpages++;308308+ __inc_zone_page_state(page, NR_FILE_PAGES);309309+ __inc_zone_page_state(page, NR_SHMEM);310310+ spin_unlock_irq(&mapping->tree_lock);311311+ } else {312312+ page->mapping = NULL;313313+ spin_unlock_irq(&mapping->tree_lock);314314+ page_cache_release(page);304315 }305305- if (error)306306- mem_cgroup_uncharge_cache_page(page);307316 return error;308317}309318
···11331124 /* We have to do this with page locked to prevent races */11341125 lock_page(page);11351126 if (!PageSwapCache(page) || page_private(page) != swap.val ||11361136- page->mapping) {11271127+ !shmem_confirm_swap(mapping, index, swap)) {11371128 error = -EEXIST; /* try again */11381138- goto failed;11291129+ goto unlock;11391130 }11401131 if (!PageUptodate(page)) {11411132 error = -EIO;···1151114211521143 error = mem_cgroup_cache_charge(page, current->mm,11531144 gfp & GFP_RECLAIM_MASK);11541154- if (!error)11451145+ if (!error) {11551146 error = shmem_add_to_page_cache(page, mapping, index,11561147 gfp, swp_to_radix_entry(swap));11481148+ /* We already confirmed swap, and make no allocation */11491149+ VM_BUG_ON(error);11501150+ }11571151 if (error)11581152 goto failed;11591153···11931181 __set_page_locked(page);11941182 error = mem_cgroup_cache_charge(page, current->mm,11951183 gfp & GFP_RECLAIM_MASK);11961196- if (!error)11971197- error = shmem_add_to_page_cache(page, mapping, index,11981198- gfp, NULL);11991184 if (error)12001185 goto decused;11861186+ error = radix_tree_preload(gfp & GFP_RECLAIM_MASK);11871187+ if (!error) {11881188+ error = shmem_add_to_page_cache(page, mapping, index,11891189+ gfp, NULL);11901190+ radix_tree_preload_end();11911191+ }11921192+ if (error) {11931193+ mem_cgroup_uncharge_cache_page(page);11941194+ goto decused;11951195+ }12011196 lru_cache_add_anon(page);1202119712031198 spin_lock(&info->lock);
···12641245unacct:12651246 shmem_unacct_blocks(info->flags, 1);12661247failed:12671267- if (swap.val && error != -EINVAL) {12681268- struct page *test = find_get_page(mapping, index);12691269- if (test && !radix_tree_exceptional_entry(test))12701270- page_cache_release(test);12711271- /* Have another try if the entry has changed */12721272- if (test != swp_to_radix_entry(swap))12731273- error = -EEXIST;12741274- }12481248+ if (swap.val && error != -EINVAL &&12491249+ !shmem_confirm_swap(mapping, index, swap))12501250+ error = -EEXIST;12511251+unlock:12751252 if (page) {12761253 unlock_page(page);12771254 page_cache_release(page);···12791264 spin_unlock(&info->lock);12801265 goto repeat;12811266 }12821282- if (error == -EEXIST)12671267+ if (error == -EEXIST) /* from above or from radix_tree_insert */12831268 goto repeat;12841269 return error;12851270}
···17051690 file_accessed(in);17061691 }17071692 return error;17081708-}17091709-17101710-/*17111711- * llseek SEEK_DATA or SEEK_HOLE through the radix_tree.17121712- */17131713-static pgoff_t shmem_seek_hole_data(struct address_space *mapping,17141714- pgoff_t index, pgoff_t end, int origin)17151715-{17161716- struct page *page;17171717- struct pagevec pvec;17181718- pgoff_t indices[PAGEVEC_SIZE];17191719- bool done = false;17201720- int i;17211721-17221722- pagevec_init(&pvec, 0);17231723- pvec.nr = 1; /* start small: we may be there already */17241724- while (!done) {17251725- pvec.nr = shmem_find_get_pages_and_swap(mapping, index,17261726- pvec.nr, pvec.pages, indices);17271727- if (!pvec.nr) {17281728- if (origin == SEEK_DATA)17291729- index = end;17301730- break;17311731- }17321732- for (i = 0; i < pvec.nr; i++, index++) {17331733- if (index < indices[i]) {17341734- if (origin == SEEK_HOLE) {17351735- done = true;17361736- break;17371737- }17381738- index = indices[i];17391739- }17401740- page = pvec.pages[i];17411741- if (page && !radix_tree_exceptional_entry(page)) {17421742- if (!PageUptodate(page))17431743- page = NULL;17441744- }17451745- if (index >= end ||17461746- (page && origin == SEEK_DATA) ||17471747- (!page && origin == SEEK_HOLE)) {17481748- done = true;17491749- break;17501750- }17511751- }17521752- shmem_deswap_pagevec(&pvec);17531753- pagevec_release(&pvec);17541754- pvec.nr = PAGEVEC_SIZE;17551755- cond_resched();17561756- }17571757- return index;17581758-}17591759-17601760-static loff_t shmem_file_llseek(struct file *file, loff_t offset, int origin)17611761-{17621762- struct address_space *mapping;17631763- struct inode *inode;17641764- pgoff_t start, end;17651765- loff_t new_offset;17661766-17671767- if (origin != SEEK_DATA && origin != SEEK_HOLE)17681768- return generic_file_llseek_size(file, offset, origin,17691769- MAX_LFS_FILESIZE);17701770- mapping = file->f_mapping;17711771- inode = mapping->host;17721772- mutex_lock(&inode->i_mutex);17731773- /* We're holding i_mutex so we can access i_size directly */17741774-17751775- if (offset < 0)17761776- offset = -EINVAL;17771777- else if (offset >= inode->i_size)17781778- offset = -ENXIO;17791779- else {17801780- start = offset >> PAGE_CACHE_SHIFT;17811781- end = (inode->i_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;17821782- new_offset = shmem_seek_hole_data(mapping, start, end, origin);17831783- new_offset <<= PAGE_CACHE_SHIFT;17841784- if (new_offset > offset) {17851785- if (new_offset < inode->i_size)17861786- offset = new_offset;17871787- else if (origin == SEEK_DATA)17881788- offset = -ENXIO;17891789- else17901790- offset = inode->i_size;17911791- }17921792- }17931793-17941794- if (offset >= 0 && offset != file->f_pos) {17951795- file->f_pos = offset;17961796- file->f_version = 0;17971797- }17981798- mutex_unlock(&inode->i_mutex);17991799- return offset;18001693}1801169418021695static long shmem_fallocate(struct file *file, int mode, loff_t offset,···27102787static const struct file_operations shmem_file_operations = {27112788 .mmap = shmem_mmap,27122789#ifdef CONFIG_TMPFS27132713- .llseek = shmem_file_llseek,27902790+ .llseek = generic_file_llseek,27142791 .read = do_sync_read,27152792 .write = do_sync_write,27162793 .aio_read = shmem_file_aio_read,
+14-6
mm/sparse.c
···275275sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,276276 unsigned long size)277277{278278- pg_data_t *host_pgdat;279279- unsigned long goal;278278+ unsigned long goal, limit;279279+ unsigned long *p;280280+ int nid;280281 /*281282 * A page may contain usemaps for other sections preventing the282283 * page being freed and making a section unremovable while···288287 * from the same section as the pgdat where possible to avoid289288 * this problem.290289 */291291- goal = __pa(pgdat) & PAGE_SECTION_MASK;292292- host_pgdat = NODE_DATA(early_pfn_to_nid(goal >> PAGE_SHIFT));293293- return __alloc_bootmem_node_nopanic(host_pgdat, size,294294- SMP_CACHE_BYTES, goal);290290+ goal = __pa(pgdat) & (PAGE_SECTION_MASK << PAGE_SHIFT);291291+ limit = goal + (1UL << PA_SECTION_SHIFT);292292+ nid = early_pfn_to_nid(goal >> PAGE_SHIFT);293293+again:294294+ p = ___alloc_bootmem_node_nopanic(NODE_DATA(nid), size,295295+ SMP_CACHE_BYTES, goal, limit);296296+ if (!p && limit) {297297+ limit = 0;298298+ goto again;299299+ }300300+ return p;295301}296302297303static void __init check_usemap_section_nr(int nid, unsigned long *usemap)
+9-3
mm/vmscan.c
···26882688 * them before going back to sleep.26892689 */26902690 set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold);26912691- schedule();26912691+26922692+ if (!kthread_should_stop())26932693+ schedule();26942694+26922695 set_pgdat_percpu_threshold(pgdat, calculate_pressure_threshold);26932696 } else {26942697 if (remaining)···29582955}2959295629602957/*29612961- * Called by memory hotplug when all memory in a node is offlined.29582958+ * Called by memory hotplug when all memory in a node is offlined. Caller must29592959+ * hold lock_memory_hotplug().29622960 */29632961void kswapd_stop(int nid)29642962{29652963 struct task_struct *kswapd = NODE_DATA(nid)->kswapd;2966296429672967- if (kswapd)29652965+ if (kswapd) {29682966 kthread_stop(kswapd);29672967+ NODE_DATA(nid)->kswapd = NULL;29682968+ }29692969}2970297029712971static int __init kswapd_init(void)
+1
net/ax25/af_ax25.c
···842842 case AX25_P_NETROM:843843 if (ax25_protocol_is_registered(AX25_P_NETROM))844844 return -ESOCKTNOSUPPORT;845845+ break;845846#endif846847#ifdef CONFIG_ROSE_MODULE847848 case AX25_P_ROSE:
···63136313/* Initialize per network namespace state */63146314static int __net_init netdev_init(struct net *net)63156315{63166316- INIT_LIST_HEAD(&net->dev_base_head);63166316+ if (net != &init_net)63176317+ INIT_LIST_HEAD(&net->dev_base_head);6317631863186319 net->dev_name_head = netdev_create_hash();63196320 if (net->dev_name_head == NULL)
+3-1
net/core/net_namespace.c
···2727LIST_HEAD(net_namespace_list);2828EXPORT_SYMBOL_GPL(net_namespace_list);29293030-struct net init_net;3030+struct net init_net = {3131+ .dev_base_head = LIST_HEAD_INIT(init_net.dev_base_head),3232+};3133EXPORT_SYMBOL(init_net);32343335#define INITIAL_NET_GEN_PTRS 13 /* +1 for len +2 for rcu_head */
+55-18
net/core/netprio_cgroup.c
···6565 spin_unlock_irqrestore(&prioidx_map_lock, flags);6666}67676868-static void extend_netdev_table(struct net_device *dev, u32 new_len)6868+static int extend_netdev_table(struct net_device *dev, u32 new_len)6969{7070 size_t new_size = sizeof(struct netprio_map) +7171 ((sizeof(u32) * new_len));···77777878 if (!new_priomap) {7979 pr_warn("Unable to alloc new priomap!\n");8080- return;8080+ return -ENOMEM;8181 }82828383 for (i = 0;···9090 rcu_assign_pointer(dev->priomap, new_priomap);9191 if (old_priomap)9292 kfree_rcu(old_priomap, rcu);9393+ return 0;9394}94959595-static void update_netdev_tables(void)9696+static int write_update_netdev_table(struct net_device *dev)9697{9797- struct net_device *dev;9898- u32 max_len = atomic_read(&max_prioidx) + 1;9898+ int ret = 0;9999+ u32 max_len;99100 struct netprio_map *map;100101101102 rtnl_lock();103103+ max_len = atomic_read(&max_prioidx) + 1;104104+ map = rtnl_dereference(dev->priomap);105105+ if (!map || map->priomap_len < max_len)106106+ ret = extend_netdev_table(dev, max_len);107107+ rtnl_unlock();108108+109109+ return ret;110110+}111111+112112+static int update_netdev_tables(void)113113+{114114+ int ret = 0;115115+ struct net_device *dev;116116+ u32 max_len;117117+ struct netprio_map *map;118118+119119+ rtnl_lock();120120+ max_len = atomic_read(&max_prioidx) + 1;102121 for_each_netdev(&init_net, dev) {103122 map = rtnl_dereference(dev->priomap);104104- if ((!map) ||105105- (map->priomap_len < max_len))106106- extend_netdev_table(dev, max_len);123123+ /*124124+ * don't allocate priomap if we didn't125125+ * change net_prio.ifpriomap (map == NULL),126126+ * this will speed up skb_update_prio.127127+ */128128+ if (map && map->priomap_len < max_len) {129129+ ret = extend_netdev_table(dev, max_len);130130+ if (ret < 0)131131+ break;132132+ }107133 }108134 rtnl_unlock();135135+ return ret;109136}110137111138static struct cgroup_subsys_state *cgrp_create(struct cgroup *cgrp)112139{113140 struct cgroup_netprio_state *cs;114114- int ret;141141+ int ret = -EINVAL;115142
116143 cs = kzalloc(sizeof(*cs), GFP_KERNEL);117144 if (!cs)118145 return ERR_PTR(-ENOMEM);119146120120- if (cgrp->parent && cgrp_netprio_state(cgrp->parent)->prioidx) {121121- kfree(cs);122122- return ERR_PTR(-EINVAL);123123- }147147+ if (cgrp->parent && cgrp_netprio_state(cgrp->parent)->prioidx)148148+ goto out;124149125150 ret = get_prioidx(&cs->prioidx);126126- if (ret != 0) {151151+ if (ret < 0) {127152 pr_warn("No space in priority index array\n");128128- kfree(cs);129129- return ERR_PTR(ret);153153+ goto out;154154+ }155155+156156+ ret = update_netdev_tables();157157+ if (ret < 0) {158158+ put_prioidx(cs->prioidx);159159+ goto out;130160 }131161132162 return &cs->css;163163+out:164164+ kfree(cs);165165+ return ERR_PTR(ret);133166}134167135168static void cgrp_destroy(struct cgroup *cgrp)···254221 if (!dev)255222 goto out_free_devname;256223257257- update_netdev_tables();258258- ret = 0;224224+ ret = write_update_netdev_table(dev);225225+ if (ret < 0)226226+ goto out_put_dev;227227+259228 rcu_read_lock();260229 map = rcu_dereference(dev->priomap);261230 if (map)262231 map->priomap[prioidx] = priority;263232 rcu_read_unlock();233233+234234+out_put_dev:264235 dev_put(dev);265236266237out_free_devname:
+1-1
net/core/skbuff.c
···365365 unsigned int fragsz = SKB_DATA_ALIGN(length + NET_SKB_PAD) +366366 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));367367368368- if (fragsz <= PAGE_SIZE && !(gfp_mask & __GFP_WAIT)) {368368+ if (fragsz <= PAGE_SIZE && !(gfp_mask & (__GFP_WAIT | GFP_DMA))) {369369 void *data = netdev_alloc_frag(fragsz);370370371371 if (likely(data)) {
+4-2
net/ipv4/cipso_ipv4.c
···17251725 case CIPSO_V4_TAG_LOCAL:17261726 /* This is a non-standard tag that we only allow for17271727 * local connections, so if the incoming interface is17281728- * not the loopback device drop the packet. */17291729- if (!(skb->dev->flags & IFF_LOOPBACK)) {17281728+ * not the loopback device drop the packet. Further,17291729+ * there is no legitimate reason for setting this from17301730+ * userspace so reject it if skb is NULL. */17311731+ if (skb == NULL || !(skb->dev->flags & IFF_LOOPBACK)) {17301732 err_offset = opt_iter;17311733 goto validate_return_locked;17321734 }
···752752753753 epb = &ep->base;754754755755- if (hlist_unhashed(&epb->node))756756- return;757757-758755 epb->hashent = sctp_ep_hashfn(epb->bind_addr.port);759756760757 head = &sctp_ep_hashtable[epb->hashent];761758762759 sctp_write_lock(&head->lock);763763- __hlist_del(&epb->node);760760+ hlist_del_init(&epb->node);764761 sctp_write_unlock(&head->lock);765762}766763···838841 head = &sctp_assoc_hashtable[epb->hashent];839842840843 sctp_write_lock(&head->lock);841841- __hlist_del(&epb->node);844844+ hlist_del_init(&epb->node);842845 sctp_write_unlock(&head->lock);843846}844847
+10-2
net/sctp/socket.c
···12311231 SCTP_DEBUG_PRINTK("About to exit __sctp_connect() free asoc: %p"12321232 " kaddrs: %p err: %d\n",12331233 asoc, kaddrs, err);12341234- if (asoc)12341234+ if (asoc) {12351235+ /* sctp_primitive_ASSOCIATE may have added this association12361236+ * to the hash table; try to unhash it, just in case. It's a no-op12371237+ * if it wasn't hashed, so we're safe.12381238+ */12391239+ sctp_unhash_established(asoc);12351240 sctp_association_free(asoc);12411241+ }12361242 return err;12371243}12381244···19481942 goto out_unlock;1949194319501944out_free:19511951- if (new_asoc)19451945+ if (new_asoc) {19461946+ sctp_unhash_established(asoc);19521947 sctp_association_free(asoc);19481948+ }19531949out_unlock:19541950 sctp_release_sock(sk);19551951
+1-1
security/selinux/hooks.c
···27172717 ATTR_ATIME_SET | ATTR_MTIME_SET | ATTR_TIMES_SET))27182718 return dentry_has_perm(cred, dentry, FILE__SETATTR);2719271927202720- if (ia_valid & ATTR_SIZE)27202720+ if (selinux_policycap_openperm && (ia_valid & ATTR_SIZE))27212721 av |= FILE__OPEN;2722272227232723 return dentry_has_perm(cred, dentry, av);
···414414{415415 struct list_head *p;416416 struct snd_usb_endpoint *ep;417417- int ret, is_playback = direction == SNDRV_PCM_STREAM_PLAYBACK;417417+ int is_playback = direction == SNDRV_PCM_STREAM_PLAYBACK;418418419419 mutex_lock(&chip->mutex);420420···433433 is_playback ? "playback" : "capture",434434 type == SND_USB_ENDPOINT_TYPE_DATA ? "data" : "sync",435435 ep_num);436436-437437- /* select the alt setting once so the endpoints become valid */438438- ret = usb_set_interface(chip->dev, alts->desc.bInterfaceNumber,439439- alts->desc.bAlternateSetting);440440- if (ret < 0) {441441- snd_printk(KERN_ERR "%s(): usb_set_interface() failed, ret = %d\n",442442- __func__, ret);443443- ep = NULL;444444- goto __exit_unlock;445445- }446436447437 ep = kzalloc(sizeof(*ep), GFP_KERNEL);448438 if (!ep)···821831 if (++ep->use_count != 1)822832 return 0;823833824824- if (snd_BUG_ON(!test_bit(EP_FLAG_ACTIVATED, &ep->flags)))825825- return -EINVAL;826826-827834 /* just to be sure */828835 deactivate_urbs(ep, 0, 1);829836 wait_clear_urbs(ep);···898911 if (snd_BUG_ON(ep->use_count == 0))899912 return;900913901901- if (snd_BUG_ON(!test_bit(EP_FLAG_ACTIVATED, &ep->flags)))902902- return;903903-904914 if (--ep->use_count == 0) {905915 deactivate_urbs(ep, force, can_sleep);906916 ep->data_subs = NULL;···908924 if (wait)909925 wait_clear_urbs(ep);910926 }911911-}912912-913913-/**914914- * snd_usb_endpoint_activate: activate an snd_usb_endpoint915915- *916916- * @ep: the endpoint to activate917917- *918918- * If the endpoint is not currently in use, this functions will select the919919- * correct alternate interface setting for the interface of this endpoint.920920- *921921- * In case of any active users, this functions does nothing.922922- *923923- * Returns an error if usb_set_interface() failed, 0 in all other924924- * cases.925925- */926926-int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep)927927-{928928- if (ep->use_count != 0)929929- return 0;930930-931931- if (!ep->chip->shutdown &&932932- !test_and_set_bit(EP_FLAG_ACTIVATED, &ep->flags)) {933933- int ret;934934-935935- ret = usb_set_interface(ep->chip->dev, ep->iface, ep->alt_idx);936936- if (ret < 0) {937937- snd_printk(KERN_ERR "%s() usb_set_interface() failed, ret = %d\n",938938- __func__, ret);939939- clear_bit(EP_FLAG_ACTIVATED, &ep->flags);940940- return ret;941941- }942942-943943- return 0;944944- }945945-946946- return -EBUSY;947927}948928949929/**···928980 if (!ep)929981 return -EINVAL;930982983983+ deactivate_urbs(ep, 1, 1);984984+ wait_clear_urbs(ep);985985+931986 if (ep->use_count != 0)932987 return 0;933988934934- if (!ep->chip->shutdown &&935935- test_and_clear_bit(EP_FLAG_ACTIVATED, &ep->flags)) {936936- int ret;989989+ clear_bit(EP_FLAG_ACTIVATED, &ep->flags);937990938938- ret = usb_set_interface(ep->chip->dev, ep->iface, 0);939939- if (ret < 0) {940940- snd_printk(KERN_ERR "%s(): usb_set_interface() failed, ret = %d\n",941941- __func__, ret);942942- return ret;943943- }944944-945945- return 0;946946- }947947-948948- return -EBUSY;991991+ return 0;949992}950993951994/**
+37-24
sound/usb/pcm.c
···261261 force, can_sleep, wait);262262}263263264264-static int activate_endpoints(struct snd_usb_substream *subs)265265-{266266- if (subs->sync_endpoint) {267267- int ret;268268-269269- ret = snd_usb_endpoint_activate(subs->sync_endpoint);270270- if (ret < 0)271271- return ret;272272- }273273-274274- return snd_usb_endpoint_activate(subs->data_endpoint);275275-}276276-277264static int deactivate_endpoints(struct snd_usb_substream *subs)278265{279266 int reta, retb;···300313301314 if (fmt == subs->cur_audiofmt)302315 return 0;316316+317317+ /* close the old interface */318318+ if (subs->interface >= 0 && subs->interface != fmt->iface) {319319+ err = usb_set_interface(subs->dev, subs->interface, 0);320320+ if (err < 0) {321321+ snd_printk(KERN_ERR "%d:%d:%d: return to setting 0 failed (%d)\n",322322+ dev->devnum, fmt->iface, fmt->altsetting, err);323323+ return -EIO;324324+ }325325+ subs->interface = -1;326326+ subs->altset_idx = 0;327327+ }328328+329329+ /* set interface */330330+ if (subs->interface != fmt->iface ||331331+ subs->altset_idx != fmt->altset_idx) {332332+ err = usb_set_interface(dev, fmt->iface, fmt->altsetting);333333+ if (err < 0) {334334+ snd_printk(KERN_ERR "%d:%d:%d: usb_set_interface failed (%d)\n",335335+ dev->devnum, fmt->iface, fmt->altsetting, err);336336+ return -EIO;337337+ }338338+ snd_printdd(KERN_INFO "setting usb interface %d:%d\n",339339+ fmt->iface, fmt->altsetting);340340+ subs->interface = fmt->iface;341341+ subs->altset_idx = fmt->altset_idx;342342+ }303343304344 subs->data_endpoint = snd_usb_add_endpoint(subs->stream->chip,305345 alts, fmt->endpoint, subs->direction,···401387 subs->data_endpoint->sync_master = subs->sync_endpoint;402388 }403389404404- if ((err = snd_usb_init_pitch(subs->stream->chip, subs->interface, alts, fmt)) < 0)390390+ if ((err = snd_usb_init_pitch(subs->stream->chip, fmt->iface, alts, fmt)) < 0)405391 return err;406392407393 subs->cur_audiofmt = fmt;···464450 struct usb_interface *iface;465451 iface = usb_ifnum_to_if(subs->dev, fmt->iface);466452 alts = &iface->altsetting[fmt->altset_idx];467467- ret = snd_usb_init_sample_rate(subs->stream->chip, subs->interface, alts, fmt, rate);453453+ ret = snd_usb_init_sample_rate(subs->stream->chip, fmt->iface, alts, fmt, rate);468454 if (ret < 0)469455 return ret;470456 subs->cur_rate = rate;
···474460 mutex_lock(&subs->stream->chip->shutdown_mutex);475461 /* format changed */476462 stop_endpoints(subs, 0, 0, 0);477477- deactivate_endpoints(subs);478478-479479- ret = activate_endpoints(subs);480480- if (ret < 0)481481- goto unlock;482482-483463 ret = snd_usb_endpoint_set_params(subs->data_endpoint, hw_params, fmt,484464 subs->sync_endpoint);485465 if (ret < 0)···508500 subs->period_bytes = 0;509501 mutex_lock(&subs->stream->chip->shutdown_mutex);510502 stop_endpoints(subs, 0, 1, 1);503503+ deactivate_endpoints(subs);511504 mutex_unlock(&subs->stream->chip->shutdown_mutex);512505 return snd_pcm_lib_free_vmalloc_buffer(substream);513506}···947938948939static int snd_usb_pcm_close(struct snd_pcm_substream *substream, int direction)949940{950950- int ret;951941 struct snd_usb_stream *as = snd_pcm_substream_chip(substream);952942 struct snd_usb_substream *subs = &as->substream[direction];953943954944 stop_endpoints(subs, 0, 0, 0);955955- ret = deactivate_endpoints(subs);945945+946946+ if (!as->chip->shutdown && subs->interface >= 0) {947947+ usb_set_interface(subs->dev, subs->interface, 0);948948+ subs->interface = -1;949949+ }950950+956951 subs->pcm_substream = NULL;957952 snd_usb_autosuspend(subs->stream->chip);958953959959- return ret;954954+ return 0;960955}961956962957/* Since a URB can handle only a single linear buffer, we must use double