···
 };
 
 The bootargs property contains the kernel arguments, and the initrd-*
-properties define the address and size of an initrd blob. The
-chosen node may also optionally contain an arbitrary number of
-additional properties for platform-specific configuration data.
+properties define the address and size of an initrd blob. Note that
+initrd-end is the first address after the initrd image, so this doesn't
+match the usual semantic of struct resource. The chosen node may also
+optionally contain an arbitrary number of additional properties for
+platform-specific configuration data.
 
 During early boot, the architecture setup code calls of_scan_flat_dt()
 several times with different helper callbacks to parse device tree
+21
Documentation/kernel-parameters.txt
···
 			Force threading of all interrupt handlers except those
 			marked explicitly IRQF_NO_THREAD.
 
+	tmem		[KNL,XEN]
+			Enable the Transcendent memory driver if built-in.
+
+	tmem.cleancache=0|1 [KNL, XEN]
+			Default is on (1). Disable the usage of the cleancache
+			API to send anonymous pages to the hypervisor.
+
+	tmem.frontswap=0|1 [KNL, XEN]
+			Default is on (1). Disable the usage of the frontswap
+			API to send swap pages to the hypervisor. If disabled
+			the selfballooning and selfshrinking are force disabled.
+
+	tmem.selfballooning=0|1 [KNL, XEN]
+			Default is on (1). Disable the driving of swap pages
+			to the hypervisor.
+
+	tmem.selfshrinking=0|1 [KNL, XEN]
+			Default is on (1). Partial swapoff that immediately
+			transfers pages from Xen hypervisor back to the
+			kernel based on different criteria.
+
 	topology=	[S390]
 			Format: {off | on}
 			Specify if the kernel should make use of the cpu
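For illustration (this example is not part of the patch), the new options combine on the command line like any other built-in module parameters; a Xen guest that wants tmem but no frontswap-based swapping could boot with:

```text
tmem tmem.frontswap=0
```

With frontswap disabled, selfballooning and selfshrinking are force-disabled as well, per the description above.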
+202
Documentation/kernel-per-CPU-kthreads.txt
···
+REDUCING OS JITTER DUE TO PER-CPU KTHREADS
+
+This document lists per-CPU kthreads in the Linux kernel and presents
+options to control their OS jitter.  Note that non-per-CPU kthreads are
+not listed here.  To reduce OS jitter from non-per-CPU kthreads, bind
+them to a "housekeeping" CPU dedicated to such work.
+
+
+REFERENCES
+
+o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
+
+o	Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.
+
+o	man taskset:  Using the taskset command to bind tasks to sets
+	of CPUs.
+
+o	man sched_setaffinity:  Using the sched_setaffinity() system
+	call to bind tasks to sets of CPUs.
+
+o	/sys/devices/system/cpu/cpuN/online:  Control CPU N's hotplug state,
+	writing "0" to offline and "1" to online.
+
+o	In order to locate kernel-generated OS jitter on CPU N:
+
+		cd /sys/kernel/debug/tracing
+		echo 1 > max_graph_depth # Increase the "1" for more detail
+		echo function_graph > current_tracer
+		# run workload
+		cat per_cpu/cpuN/trace
+
+
+KTHREADS
+
+Name: ehca_comp/%u
+Purpose: Periodically process Infiniband-related work.
+To reduce its OS jitter, do any of the following:
+1.	Don't use eHCA Infiniband hardware, instead choosing hardware
+	that does not require per-CPU kthreads.  This will prevent these
+	kthreads from being created in the first place.  (This will
+	work for most people, as this hardware, though important, is
+	relatively old and is produced in relatively low unit volumes.)
+2.	Do all eHCA-Infiniband-related work on other CPUs, including
+	interrupts.
+3.	Rework the eHCA driver so that its per-CPU kthreads are
+	provisioned only on selected CPUs.
+
+
+Name: irq/%d-%s
+Purpose: Handle threaded interrupts.
+To reduce its OS jitter, do the following:
+1.	Use irq affinity to force the irq threads to execute on
+	some other CPU.
+
+Name: kcmtpd_ctr_%d
+Purpose: Handle Bluetooth work.
+To reduce its OS jitter, do one of the following:
+1.	Don't use Bluetooth, in which case these kthreads won't be
+	created in the first place.
+2.	Use irq affinity to force Bluetooth-related interrupts to
+	occur on some other CPU and furthermore initiate all
+	Bluetooth activity on some other CPU.
+
+Name: ksoftirqd/%u
+Purpose: Execute softirq handlers when threaded or when under heavy load.
+To reduce its OS jitter, each softirq vector must be handled
+separately as follows:
+TIMER_SOFTIRQ:  Do all of the following:
+1.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by avoiding system calls and by forcing
+	both kernel threads and interrupts to execute elsewhere.
+2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
+	the CPU offline, then bring it back online.  This forces
+	recurring timers to migrate elsewhere.  If you are concerned
+	with multiple CPUs, force them all offline before bringing the
+	first one back online.  Once you have onlined the CPUs in question,
+	do not offline any other CPUs, because doing so could force the
+	timer back onto one of the CPUs in question.
+NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
+1.	Force networking interrupts onto other CPUs.
+2.	Initiate any network I/O on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+BLOCK_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O and block-I/O polling on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+TASKLET_SOFTIRQ:  Do one or more of the following:
+1.	Avoid use of drivers that use tasklets.  (Such drivers will contain
+	calls to things like tasklet_schedule().)
+2.	Convert all drivers that you must use from tasklets to workqueues.
+3.	Force interrupts for drivers using tasklets onto other CPUs,
+	and also do I/O involving these drivers on other CPUs.
+SCHED_SOFTIRQ:  Do all of the following:
+1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
+	for example, ensure that at most one runnable kthread is present
+	on that CPU.  If a thread that expects to run on the de-jittered
+	CPU awakens, the scheduler will send an IPI that can result in
+	a subsequent SCHED_SOFTIRQ.
+2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+	CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
+	to be de-jittered is marked as an adaptive-ticks CPU using the
+	"nohz_full=" boot parameter.  This reduces the number of
+	scheduler-clock interrupts that the de-jittered CPU receives,
+	minimizing its chances of being selected to do the load balancing
+	work that runs in SCHED_SOFTIRQ context.
+3.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by avoiding system calls and by
+	forcing both kernel threads and interrupts to execute elsewhere.
+	This further reduces the number of scheduler-clock interrupts
+	received by the de-jittered CPU.
+HRTIMER_SOFTIRQ:  Do all of the following:
+1.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle.  For example, avoid system calls and force both
+	kernel threads and interrupts to execute elsewhere.
+2.	Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
+	CPU offline, then bring it back online.  This forces recurring
+	timers to migrate elsewhere.  If you are concerned with multiple
+	CPUs, force them all offline before bringing the first one
+	back online.  Once you have onlined the CPUs in question, do not
+	offline any other CPUs, because doing so could force the timer
+	back onto one of the CPUs in question.
+RCU_SOFTIRQ:  Do at least one of the following:
+1.	Offload callbacks and keep the CPU in either dyntick-idle or
+	adaptive-ticks state by doing all of the following:
+	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+		CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
+		to be de-jittered is marked as an adaptive-ticks CPU using
+		the "nohz_full=" boot parameter.  Bind the rcuo kthreads
+		to housekeeping CPUs, which can tolerate OS jitter.
+	b.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by avoiding system
+		calls and by forcing both kernel threads and interrupts
+		to execute elsewhere.
+2.	Enable RCU to do its processing remotely via dyntick-idle by
+	doing all of the following:
+	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
+	b.	Ensure that the CPU goes idle frequently, allowing other
+		CPUs to detect that it has passed through an RCU quiescent
+		state.  If the kernel is built with CONFIG_NO_HZ_FULL=y,
+		userspace execution also allows other CPUs to detect that
+		the CPU in question has passed through a quiescent state.
+	c.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by avoiding system
+		calls and by forcing both kernel threads and interrupts
+		to execute elsewhere.
+
+Name: rcuc/%u
+Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
+To reduce its OS jitter, do at least one of the following:
+1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
+	kthreads from being created in the first place, and also obviates
+	the need for RCU priority boosting.  This approach is feasible
+	for workloads that do not require high degrees of responsiveness.
+2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
+	kthreads from being created in the first place.  This approach
+	is feasible only if your workload never requires RCU priority
+	boosting, for example, if you ensure frequent idle time on all
+	CPUs that might execute within the kernel.
+3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
+	which offloads all RCU callbacks to kthreads that can be moved
+	off of CPUs susceptible to OS jitter.  This approach prevents the
+	rcuc/%u kthreads from having any work to do, so that they are
+	never awakened.
+4.	Ensure that the CPU never enters the kernel, and, in particular,
+	avoid initiating any CPU hotplug operations on this CPU.  This is
+	another way of preventing any callbacks from being queued on the
+	CPU, again preventing the rcuc/%u kthreads from having any work
+	to do.
+
+Name: rcuob/%d, rcuop/%d, and rcuos/%d
+Purpose: Offload RCU callbacks from the corresponding CPU.
+To reduce its OS jitter, do at least one of the following:
+1.	Use affinity, cgroups, or other mechanism to force these kthreads
+	to execute on some other CPU.
+2.	Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
+	kthreads from being created in the first place.  However, please
+	note that this will not eliminate OS jitter, but will instead
+	shift it to RCU_SOFTIRQ.
+
+Name: watchdog/%u
+Purpose: Detect software lockups on each CPU.
+To reduce its OS jitter, do at least one of the following:
+1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
+	kthreads from being created in the first place.
+2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
+	watchdog timer.
+3.	Echo a large number to /proc/sys/kernel/watchdog_thresh in
+	order to reduce the frequency of OS jitter due to the watchdog
+	timer down to a level that is acceptable for your workload.
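Most of the per-kthread remedies above reduce to two knobs: interrupt affinity via /proc/irq/*/smp_affinity and task affinity via taskset or sched_setaffinity(). A rough shell sketch of confining work to a housekeeping CPU 0; it defaults to a dry run, and IRQ 42, PID 1234, and the 0x1 mask are placeholder values:

```shell
# Print (dry run) or apply one affinity setting; set APPLY=1 to really write.
# IRQ 42, PID 1234 and mask 0x1 (CPU 0) are placeholder values.
APPLY=${APPLY:-0}

set_mask() {
    # $1 = destination file, $2 = hex CPU mask to write
    if [ "$APPLY" = 1 ]; then
        echo "$2" > "$1"
    else
        echo "would write $2 to $1"
    fi
}

set_mask /proc/irq/42/smp_affinity 1        # pin IRQ 42 to CPU 0
set_mask /proc/irq/default_smp_affinity 1   # newly requested IRQs start on CPU 0
if [ "$APPLY" = 1 ]; then
    taskset -p 0x1 1234                     # pin PID 1234 to CPU 0
else
    echo "would run: taskset -p 0x1 1234"
fi
```

The dry-run default makes the sketch safe to run unprivileged; real writes need root and, for irq threads, the IRQ numbers from /proc/interrupts.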
+8-7
Documentation/power/devices.txt
···
 System Power Management Phases
 ------------------------------
 Suspending or resuming the system is done in several phases.  Different phases
-are used for standby or memory sleep states ("suspend-to-RAM") and the
+are used for freeze, standby, and memory sleep states ("suspend-to-RAM") and the
 hibernation state ("suspend-to-disk").  Each phase involves executing callbacks
 for every device before the next phase begins.  Not all busses or classes
 support all these callbacks and not all drivers use all the callbacks.  The
···
 
 Entering System Suspend
 -----------------------
-When the system goes into the standby or memory sleep state, the phases are:
+When the system goes into the freeze, standby or memory sleep state,
+the phases are:
 
 	prepare, suspend, suspend_late, suspend_noirq.
···
 
 Leaving System Suspend
 ----------------------
-When resuming from standby or memory sleep, the phases are:
+When resuming from freeze, standby or memory sleep, the phases are:
 
 	resume_noirq, resume_early, resume, complete.
···
 
 Entering Hibernation
 --------------------
-Hibernating the system is more complicated than putting it into the standby or
-memory sleep state, because it involves creating and saving a system image.
+Hibernating the system is more complicated than putting it into the other
+sleep states, because it involves creating and saving a system image.
 Therefore there are more phases for hibernation, with a different set of
 callbacks.  These phases always run after tasks have been frozen and memory has
 been freed.
···
 
 At this point the system image is saved, and the devices then need to be
 prepared for the upcoming system shutdown.  This is much like suspending them
-before putting the system into the standby or memory sleep state, and the phases
-are similar.
+before putting the system into the freeze, standby or memory sleep state,
+and the phases are similar.
 
 	9.	The prepare phase is discussed above.
+2-2
Documentation/power/interface.txt
···
 is mounted at /sys).
 
 /sys/power/state controls system power state. Reading from this file
-returns what states are supported, which is hard-coded to 'standby'
-(Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
+returns what states are supported, which is hard-coded to 'freeze',
+'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
 (Suspend-to-Disk).
 
 Writing to this file one of those strings causes the system to
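Since the supported set now depends on the kernel build and platform, user space should parse the file rather than assume a fixed list. A minimal read-only sketch (actually entering a state means writing one of the strings as root; the fallback list exists only so the sketch stays runnable where /sys is absent):

```shell
# List the supported sleep states and check whether a given one is available.
supported_states() {
    if [ -r /sys/power/state ]; then
        cat /sys/power/state
    else
        echo "freeze standby mem disk"   # illustrative fallback, not real data
    fi
}

has_state() {
    case " $(supported_states) " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

if has_state freeze; then
    echo "freeze supported"
    # entering it would be:  echo freeze > /sys/power/state  (as root)
fi
```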
+4-2
Documentation/power/notifiers.txt
···
 The subsystems or drivers having such needs can register suspend notifiers that
 will be called upon the following events by the PM core:
 
-PM_HIBERNATION_PREPARE	The system is going to hibernate or suspend, tasks will
-			be frozen immediately.
+PM_HIBERNATION_PREPARE	The system is going to hibernate, tasks will be frozen
+			immediately. This is different from PM_SUSPEND_PREPARE
+			below because here we do additional work between notifiers
+			and drivers freezing.
 
 PM_POST_HIBERNATION	The system memory state has been restored from a
 			hibernation image or an error occurred during
+17-13
Documentation/power/states.txt
···
 System Power Management States
 
 
-The kernel supports three power management states generically, though
-each is dependent on platform support code to implement the low-level
-details for each state. This file describes each state, what they are
+The kernel supports four power management states generically, though
+one is generic and the other three are dependent on platform support
+code to implement the low-level details for each state.
+This file describes each state, what they are
 commonly called, what ACPI state they map to, and what string to write
 to /sys/power/state to enter that state
+
+state:		Freeze / Low-Power Idle
+ACPI state:	S0
+String:		"freeze"
+
+This state is a generic, pure software, light-weight, low-power state.
+It allows more energy to be saved relative to idle by freezing user
+space and putting all I/O devices into low-power states (possibly
+lower-power than available at run time), such that the processors can
+spend more time in their idle states.
+This state can be used for platforms without Standby/Suspend-to-RAM
+support, or it can be used in addition to Suspend-to-RAM (memory sleep)
+to provide reduced resume latency.
 
 
 State:		Standby / Power-On Suspend
···
 We try to put devices in a low-power state equivalent to D1, which
 also offers low power savings, but low resume latency. Not all devices
 support D1, and those that don't are left on.
-
-A transition from Standby to the On state should take about 1-2
-seconds.
 
 
 State:		Suspend-to-RAM
···
 For at least ACPI, STR requires some minimal boot-strapping code to
 resume the system from STR. This may be true on other platforms.
-
-A transition from Suspend-to-RAM to the On state should take about
-3-5 seconds.
 
 
 State:		Suspend-to-disk
···
 down offers greater savings, and allows this mechanism to work on any
 system. However, entering a real low-power state allows the user to
 trigger wake up events (e.g. pressing a key or opening a laptop lid).
-
-A transition from Suspend-to-Disk to the On state should take about 30
-seconds, though it's typically a bit more with the current
-implementation.
···
 config GENERIC_SMP_IDLE_THREAD
 	bool
 
+config GENERIC_IDLE_POLL_SETUP
+	bool
+
 # Select if arch init_task initializer is different to init/init_task.c
 config ARCH_INIT_TASK
 	bool
···
 	add	x2, x2, #4		// add 4 (line length offset)
 	mov	x4, #0x3ff
 	and	x4, x4, x1, lsr #3	// find maximum number on the way size
-	clz	x5, x4			// find bit position of way size increment
+	clz	w5, w4			// find bit position of way size increment
 	mov	x7, #0x7fff
 	and	x7, x7, x1, lsr #13	// extract max number of the index size
loop2:
+1-2
arch/arm64/mm/proc.S
···
 
 	mov	x0, #3 << 20
 	msr	cpacr_el1, x0		// Enable FP/ASIMD
-	mov	x0, #1
-	msr	oslar_el1, x0		// Set the debug OS lock
+	msr	mdscr_el1, xzr		// Reset mdscr_el1
 	tlbi	vmalle1is		// invalidate I + D TLBs
 	/*
 	 * Memory region attributes for LPAE:
···
 
 config IRQSTACKS
 	bool "Use separate kernel stacks when processing interrupts"
-	default n
+	default y
 	help
 	  If you say Y here the kernel will use separate kernel stacks
 	  for handling hard and soft interrupts. This can help avoid
···
 	seq_printf(p, "%*s: ", prec, "STK");
 	for_each_online_cpu(j)
 		seq_printf(p, "%10u ", irq_stats(j)->kernel_stack_usage);
-	seq_printf(p, "  Kernel stack usage\n");
+	seq_puts(p, "  Kernel stack usage\n");
+# ifdef CONFIG_IRQSTACKS
+	seq_printf(p, "%*s: ", prec, "IST");
+	for_each_online_cpu(j)
+		seq_printf(p, "%10u ", irq_stats(j)->irq_stack_usage);
+	seq_puts(p, "  Interrupt stack usage\n");
+	seq_printf(p, "%*s: ", prec, "ISC");
+	for_each_online_cpu(j)
+		seq_printf(p, "%10u ", irq_stats(j)->irq_stack_counter);
+	seq_puts(p, "  Interrupt stack usage counter\n");
+# endif
 #endif
 #ifdef CONFIG_SMP
 	seq_printf(p, "%*s: ", prec, "RES");
 	for_each_online_cpu(j)
 		seq_printf(p, "%10u ", irq_stats(j)->irq_resched_count);
-	seq_printf(p, "  Rescheduling interrupts\n");
+	seq_puts(p, "  Rescheduling interrupts\n");
 	seq_printf(p, "%*s: ", prec, "CAL");
 	for_each_online_cpu(j)
 		seq_printf(p, "%10u ", irq_stats(j)->irq_call_count);
-	seq_printf(p, "  Function call interrupts\n");
+	seq_puts(p, "  Function call interrupts\n");
 #endif
 	seq_printf(p, "%*s: ", prec, "TLB");
 	for_each_online_cpu(j)
 		seq_printf(p, "%10u ", irq_stats(j)->irq_tlb_count);
-	seq_printf(p, "  TLB shootdowns\n");
+	seq_puts(p, "  TLB shootdowns\n");
 	return 0;
 }
···
 	unsigned long sp = regs->gr[30];
 	unsigned long stack_usage;
 	unsigned int *last_usage;
+	int cpu = smp_processor_id();
 
 	/* if sr7 != 0, we interrupted a userspace process which we do not want
 	 * to check for stack overflow. We will only check the kernel stack. */
···
 
 	/* calculate kernel stack usage */
 	stack_usage = sp - stack_start;
-	last_usage = &per_cpu(irq_stat.kernel_stack_usage, smp_processor_id());
+#ifdef CONFIG_IRQSTACKS
+	if (likely(stack_usage <= THREAD_SIZE))
+		goto check_kernel_stack; /* found kernel stack */
+
+	/* check irq stack usage */
+	stack_start = (unsigned long) &per_cpu(irq_stack_union, cpu).stack;
+	stack_usage = sp - stack_start;
+
+	last_usage = &per_cpu(irq_stat.irq_stack_usage, cpu);
+	if (unlikely(stack_usage > *last_usage))
+		*last_usage = stack_usage;
+
+	if (likely(stack_usage < (IRQ_STACK_SIZE - STACK_MARGIN)))
+		return;
+
+	pr_emerg("stackcheck: %s will most likely overflow irq stack "
+		 "(sp:%lx, stk bottom-top:%lx-%lx)\n",
+		current->comm, sp, stack_start, stack_start + IRQ_STACK_SIZE);
+	goto panic_check;
+
+check_kernel_stack:
+#endif
+
+	/* check kernel stack usage */
+	last_usage = &per_cpu(irq_stat.kernel_stack_usage, cpu);
 
 	if (unlikely(stack_usage > *last_usage))
 		*last_usage = stack_usage;
···
 		 "(sp:%lx, stk bottom-top:%lx-%lx)\n",
 		current->comm, sp, stack_start, stack_start + THREAD_SIZE);
 
+#ifdef CONFIG_IRQSTACKS
+panic_check:
+#endif
 	if (sysctl_panic_on_stackoverflow)
 		panic("low stack detected by irq handler - check messages\n");
 #endif
 }
 
 #ifdef CONFIG_IRQSTACKS
-DEFINE_PER_CPU(union irq_stack_union, irq_stack_union);
+DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = {
+		.lock = __RAW_SPIN_LOCK_UNLOCKED((irq_stack_union).lock)
+	};
 
 static void execute_on_irq_stack(void *func, unsigned long param1)
 {
-	unsigned long *irq_stack_start;
+	union irq_stack_union *union_ptr;
 	unsigned long irq_stack;
-	int cpu = smp_processor_id();
+	raw_spinlock_t *irq_stack_in_use;
 
-	irq_stack_start = &per_cpu(irq_stack_union, cpu).stack[0];
-	irq_stack = (unsigned long) irq_stack_start;
-	irq_stack = ALIGN(irq_stack, 16); /* align for stack frame usage */
+	union_ptr = &per_cpu(irq_stack_union, smp_processor_id());
+	irq_stack = (unsigned long) &union_ptr->stack;
+	irq_stack = ALIGN(irq_stack + sizeof(irq_stack_union.lock),
+			64); /* align for stack frame usage */
 
-	BUG_ON(*irq_stack_start); /* report bug if we were called recursive. */
-	*irq_stack_start = 1;
+	/* We may be called recursive. If we are already using the irq stack,
+	 * just continue to use it. Use spinlocks to serialize
+	 * the irq stack usage.
+	 */
+	irq_stack_in_use = &union_ptr->lock;
+	if (!raw_spin_trylock(irq_stack_in_use)) {
+		void (*direct_call)(unsigned long p1) = func;
+
+		/* We are using the IRQ stack already.
+		 * Do direct call on current stack. */
+		direct_call(param1);
+		return;
+	}
 
 	/* This is where we switch to the IRQ stack. */
 	call_on_stack(param1, func, irq_stack);
 
-	*irq_stack_start = 0;
+	__inc_irq_stat(irq_stack_counter);
+
+	/* free up irq stack usage. */
+	do_raw_spin_unlock(irq_stack_in_use);
+}
+
+asmlinkage void do_softirq(void)
+{
+	__u32 pending;
+	unsigned long flags;
+
+	if (in_interrupt())
+		return;
+
+	local_irq_save(flags);
+
+	pending = local_softirq_pending();
+
+	if (pending)
+		execute_on_irq_stack(__do_softirq, 0);
+
+	local_irq_restore(flags);
 }
 #endif /* CONFIG_IRQSTACKS */
+2-2
arch/parisc/mm/init.c
···
 {
 	int do_recycle;
 
-	inc_irq_stat(irq_tlb_count);
+	__inc_irq_stat(irq_tlb_count);
 	do_recycle = 0;
 	spin_lock(&sid_lock);
 	if (dirty_space_ids > RECYCLE_THRESHOLD) {
···
 #else
 void flush_tlb_all(void)
 {
-	inc_irq_stat(irq_tlb_count);
+	__inc_irq_stat(irq_tlb_count);
 	spin_lock(&sid_lock);
 	flush_tlb_all_local(NULL);
 	recycle_sids();
+23
arch/powerpc/Kconfig.debug
···
 	  Select this to enable early debugging for the PowerNV platform
 	  using an "hvsi" console
 
+config PPC_EARLY_DEBUG_MEMCONS
+	bool "In memory console"
+	help
+	  Select this to enable early debugging using an in memory console.
+	  This console provides input and output buffers stored within the
+	  kernel BSS and should be safe to select on any system. A debugger
+	  can then be used to read kernel output or send input to the console.
 endchoice
+
+config PPC_MEMCONS_OUTPUT_SIZE
+	int "In memory console output buffer size"
+	depends on PPC_EARLY_DEBUG_MEMCONS
+	default 4096
+	help
+	  Selects the size of the output buffer (in bytes) of the in memory
+	  console.
+
+config PPC_MEMCONS_INPUT_SIZE
+	int "In memory console input buffer size"
+	depends on PPC_EARLY_DEBUG_MEMCONS
+	default 128
+	help
+	  Selects the size of the input buffer (in bytes) of the in memory
+	  console.
 
 config PPC_EARLY_DEBUG_OPAL
 	def_bool y
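As a sketch of how these symbols would appear in a generated .config (values are the defaults declared above; note that the first symbol is one alternative inside the early-debug choice, so the surrounding early-debug support must be enabled as well):

```text
CONFIG_PPC_EARLY_DEBUG_MEMCONS=y
CONFIG_PPC_MEMCONS_OUTPUT_SIZE=4096
CONFIG_PPC_MEMCONS_INPUT_SIZE=128
```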
···
 extern void rtas_initialize(void);
 extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
 extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
+extern int rtas_online_cpus_mask(cpumask_var_t cpus);
+extern int rtas_offline_cpus_mask(cpumask_var_t cpus);
 extern int rtas_ibm_suspend_me(struct rtas_args *);
 
 struct rtc_time;
+5-2
arch/powerpc/include/asm/thread_info.h
···
 #define TIF_PERFMON_CTXSW	6	/* perfmon needs ctxsw calls */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SINGLESTEP		8	/* singlestepping active */
-#define TIF_MEMDIE		9	/* is terminating due to OOM killer */
+#define TIF_NOHZ		9	/* in adaptive nohz mode */
 #define TIF_SECCOMP		10	/* secure computing */
 #define TIF_RESTOREALL		11	/* Restore all regs (implies NOERROR) */
 #define TIF_NOERROR		12	/* Force successful syscall return */
···
 #define TIF_SYSCALL_TRACEPOINT	15	/* syscall tracepoint instrumentation */
 #define TIF_EMULATE_STACK_STORE	16	/* Is an instruction emulation
 						for stack store? */
+#define TIF_MEMDIE		17	/* is terminating due to OOM killer */
 
 /* as above, but as bit values */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
···
 #define _TIF_UPROBE		(1<<TIF_UPROBE)
 #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_EMULATE_STACK_STORE	(1<<TIF_EMULATE_STACK_STORE)
+#define _TIF_NOHZ		(1<<TIF_NOHZ)
 #define _TIF_SYSCALL_T_OR_A	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-				 _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT)
+				 _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT | \
+				 _TIF_NOHZ)
 
 #define _TIF_USER_WORK_MASK	(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
 				 _TIF_NOTIFY_RESUME | _TIF_UPROBE)
···
 	 */
 
 	mfspr	r14,SPRN_DBSR		/* check single-step/branch taken */
-	andis.	r15,r14,DBSR_IC@h
+	andis.	r15,r14,(DBSR_IC|DBSR_BT)@h
 	beq+	1f
 
 	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
···
 	bge+	cr1,1f
 
 	/* here it looks like we got an inappropriate debug exception. */
-	lis	r14,DBSR_IC@h		/* clear the IC event */
+	lis	r14,(DBSR_IC|DBSR_BT)@h	/* clear the event */
 	rlwinm	r11,r11,0,~MSR_DE	/* clear DE in the CSRR1 value */
 	mtspr	SPRN_DBSR,r14
 	mtspr	SPRN_CSRR1,r11
···
 	 */
 
 	mfspr	r14,SPRN_DBSR		/* check single-step/branch taken */
-	andis.	r15,r14,DBSR_IC@h
+	andis.	r15,r14,(DBSR_IC|DBSR_BT)@h
 	beq+	1f
 
 	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
···
 	bge+	cr1,1f
 
 	/* here it looks like we got an inappropriate debug exception. */
-	lis	r14,DBSR_IC@h		/* clear the IC event */
+	lis	r14,(DBSR_IC|DBSR_BT)@h	/* clear the event */
 	rlwinm	r11,r11,0,~MSR_DE	/* clear DE in the DSRR1 value */
 	mtspr	SPRN_DBSR,r14
 	mtspr	SPRN_DSRR1,r11
+4
arch/powerpc/kernel/machine_kexec_64.c
···
 #include <linux/errno.h>
 #include <linux/kernel.h>
 #include <linux/cpu.h>
+#include <linux/hardirq.h>
 
 #include <asm/page.h>
 #include <asm/current.h>
···
 	pr_debug("kexec: Starting switchover sequence.\n");
 
 	/* switch to a staticly allocated stack.  Based on irq stack code.
+	 * We setup preempt_count to avoid using VMX in memcpy.
 	 * XXX: the task struct will likely be invalid once we do the copy!
 	 */
 	kexec_stack.thread_info.task = current_thread_info()->task;
 	kexec_stack.thread_info.flags = 0;
+	kexec_stack.thread_info.preempt_count = HARDIRQ_OFFSET;
+	kexec_stack.thread_info.cpu = current_thread_info()->cpu;
 
 	/* We need a static PACA, too; copy this CPU's PACA over and switch to
 	 * it.  Also poison per_cpu_offset to catch anyone using non-static
···
 				      enum pci_mmap_state mmap_state,
 				      int write_combine)
 {
-	unsigned long prot = pgprot_val(protection);
 
 	/* Write combine is always 0 on non-memory space mappings. On
 	 * memory space, if the user didn't pass 1, we check for a
···
 
 	/* XXX would be nice to have a way to ask for write-through */
 	if (write_combine)
-		return pgprot_noncached_wc(prot);
+		return pgprot_noncached_wc(protection);
 	else
-		return pgprot_noncached(prot);
+		return pgprot_noncached(protection);
 }
 
 /*
+2-1
arch/powerpc/kernel/ppc_ksyms.c
···
 int __ucmpdi2(unsigned long long, unsigned long long);
 EXPORT_SYMBOL(__ucmpdi2);
 #endif
-
+long long __bswapdi2(long long);
+EXPORT_SYMBOL(__bswapdi2);
 EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memmove);
+8
arch/powerpc/kernel/process.c
···
 
 static void prime_debug_regs(struct thread_struct *thread)
 {
+	/*
+	 * We could have inherited MSR_DE from userspace, since
+	 * it doesn't get cleared on exception entry.  Make sure
+	 * MSR_DE is clear before we enable any debug events.
+	 */
+	mtmsr(mfmsr() & ~MSR_DE);
+
 	mtspr(SPRN_IAC1, thread->iac1);
 	mtspr(SPRN_IAC2, thread->iac2);
 #if CONFIG_PPC_ADV_DEBUG_IACS > 2
···
 	 * do some house keeping and then return from the fork or clone
 	 * system call, using the stack frame created above.
 	 */
+	((unsigned long *)sp)[0] = 0;
 	sp -= sizeof(struct pt_regs);
 	kregs = (struct pt_regs *) sp;
 	sp -= STACK_FRAME_OVERHEAD;
+5
arch/powerpc/kernel/ptrace.c
···
 #include <trace/syscall.h>
 #include <linux/hw_breakpoint.h>
 #include <linux/perf_event.h>
+#include <linux/context_tracking.h>
 
 #include <asm/uaccess.h>
 #include <asm/page.h>
···
 {
 	long ret = 0;
 
+	user_exit();
+
 	secure_computing_strict(regs->gpr[0]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
···
 	step = test_thread_flag(TIF_SINGLESTEP);
 	if (step || test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall_exit(regs, step);
+
+	user_enter();
 }
+113
arch/powerpc/kernel/rtas.c
···1919#include <linux/init.h>2020#include <linux/capability.h>2121#include <linux/delay.h>2222+#include <linux/cpu.h>2223#include <linux/smp.h>2324#include <linux/completion.h>2425#include <linux/cpumask.h>···808807 __rtas_suspend_cpu((struct rtas_suspend_me_data *)info, 1);809808}810809810810+enum rtas_cpu_state {811811+ DOWN,812812+ UP,813813+};814814+815815+#ifndef CONFIG_SMP816816+static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,817817+ cpumask_var_t cpus)818818+{819819+ if (!cpumask_empty(cpus)) {820820+ cpumask_clear(cpus);821821+ return -EINVAL;822822+ } else823823+ return 0;824824+}825825+#else826826+/* On return cpumask will be altered to indicate CPUs changed.827827+ * CPUs with states changed will be set in the mask,828828+ * CPUs with status unchanged will be unset in the mask. */829829+static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,830830+ cpumask_var_t cpus)831831+{832832+ int cpu;833833+ int cpuret = 0;834834+ int ret = 0;835835+836836+ if (cpumask_empty(cpus))837837+ return 0;838838+839839+ for_each_cpu(cpu, cpus) {840840+ switch (state) {841841+ case DOWN:842842+ cpuret = cpu_down(cpu);843843+ break;844844+ case UP:845845+ cpuret = cpu_up(cpu);846846+ break;847847+ }848848+ if (cpuret) {849849+ pr_debug("%s: cpu_%s for cpu#%d returned %d.\n",850850+ __func__,851851+ ((state == UP) ? 
"up" : "down"),852852+ cpu, cpuret);853853+ if (!ret)854854+ ret = cpuret;855855+ if (state == UP) {856856+ /* clear bits for unchanged cpus, return */857857+ cpumask_shift_right(cpus, cpus, cpu);858858+ cpumask_shift_left(cpus, cpus, cpu);859859+ break;860860+ } else {861861+ /* clear bit for unchanged cpu, continue */862862+ cpumask_clear_cpu(cpu, cpus);863863+ }864864+ }865865+ }866866+867867+ return ret;868868+}869869+#endif870870+871871+int rtas_online_cpus_mask(cpumask_var_t cpus)872872+{873873+ int ret;874874+875875+ ret = rtas_cpu_state_change_mask(UP, cpus);876876+877877+ if (ret) {878878+ cpumask_var_t tmp_mask;879879+880880+ if (!alloc_cpumask_var(&tmp_mask, GFP_TEMPORARY))881881+ return ret;882882+883883+ /* Use tmp_mask to preserve cpus mask from first failure */884884+ cpumask_copy(tmp_mask, cpus);885885+ rtas_offline_cpus_mask(tmp_mask);886886+ free_cpumask_var(tmp_mask);887887+ }888888+889889+ return ret;890890+}891891+EXPORT_SYMBOL(rtas_online_cpus_mask);892892+893893+int rtas_offline_cpus_mask(cpumask_var_t cpus)894894+{895895+ return rtas_cpu_state_change_mask(DOWN, cpus);896896+}897897+EXPORT_SYMBOL(rtas_offline_cpus_mask);898898+811899int rtas_ibm_suspend_me(struct rtas_args *args)812900{813901 long state;···904814 unsigned long retbuf[PLPAR_HCALL_BUFSIZE];905815 struct rtas_suspend_me_data data;906816 DECLARE_COMPLETION_ONSTACK(done);817817+ cpumask_var_t offline_mask;818818+ int cpuret;907819908820 if (!rtas_service_present("ibm,suspend-me"))909821 return -ENOSYS;···929837 return 0;930838 }931839840840+ if (!alloc_cpumask_var(&offline_mask, GFP_TEMPORARY))841841+ return -ENOMEM;842842+932843 atomic_set(&data.working, 0);933844 atomic_set(&data.done, 0);934845 atomic_set(&data.error, 0);935846 data.token = rtas_token("ibm,suspend-me");936847 data.complete = &done;848848+849849+ /* All present CPUs must be online */850850+ cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask);851851+ cpuret = 
rtas_online_cpus_mask(offline_mask);852852+ if (cpuret) {853853+ pr_err("%s: Could not bring present CPUs online.\n", __func__);854854+ atomic_set(&data.error, cpuret);855855+ goto out;856856+ }857857+937858 stop_topology_update();938859939860 /* Call function on all CPUs. One of us will make the···962857963858 start_topology_update();964859860860+ /* Take down CPUs not online prior to suspend */861861+ cpuret = rtas_offline_cpus_mask(offline_mask);862862+ if (cpuret)863863+ pr_warn("%s: Could not restore CPUs to offline state.\n",864864+ __func__);865865+866866+out:867867+ free_cpumask_var(offline_mask);965868 return atomic_read(&data.error);966869}967870#else /* CONFIG_PPC_PSERIES */
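On a failed cpu_up(), rtas_cpu_state_change_mask() above clears every mask bit below the failing CPU with a cpumask_shift_right() followed by a cpumask_shift_left() of the same count. The same trick on a plain 64-bit word, as a sketch (valid for shift counts below 64):

```c
#include <stdint.h>

/* Clear bits 0..n-1 by shifting right then left by n: the low bits
 * fall off the right edge and come back as zeros, while bits n and
 * above are preserved. */
static uint64_t clear_bits_below(uint64_t mask, unsigned int n)
{
	return (mask >> n) << n;
}
```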
+6-4
arch/powerpc/kernel/rtas_flash.c
···89899090/* Array sizes */9191#define VALIDATE_BUF_SIZE 4096 9292+#define VALIDATE_MSG_LEN 2569293#define RTAS_MSG_MAXLEN 6493949495/* Quirk - RTAS requires 4k list length and block size */···467466}468467469468static int get_validate_flash_msg(struct rtas_validate_flash_t *args_buf, 470470- char *msg)469469+ char *msg, int msglen)471470{472471 int n;473472···475474 n = sprintf(msg, "%d\n", args_buf->update_results);476475 if ((args_buf->update_results >= VALIDATE_CUR_UNKNOWN) ||477476 (args_buf->update_results == VALIDATE_TMP_UPDATE))478478- n += sprintf(msg + n, "%s\n", args_buf->buf);477477+ n += snprintf(msg + n, msglen - n, "%s\n",478478+ args_buf->buf);479479 } else {480480 n = sprintf(msg, "%d\n", args_buf->status);481481 }···488486{489487 struct rtas_validate_flash_t *const args_buf =490488 &rtas_validate_flash_data;491491- char msg[RTAS_MSG_MAXLEN];489489+ char msg[VALIDATE_MSG_LEN];492490 int msglen;493491494492 mutex_lock(&rtas_validate_flash_mutex);495495- msglen = get_validate_flash_msg(args_buf, msg);493493+ msglen = get_validate_flash_msg(args_buf, msg, VALIDATE_MSG_LEN);496494 mutex_unlock(&rtas_validate_flash_mutex);497495498496 return simple_read_from_buffer(buf, count, ppos, msg, msglen);
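The rtas_flash.c fix above replaces an unbounded sprintf() into a fixed buffer with snprintf(msg + n, msglen - n, ...). A minimal userspace sketch of that append-with-remaining-space pattern (function and argument names are mine; note that snprintf() returns the length that would have been written, so the first part must be checked for truncation before appending):

```c
#include <stdio.h>

/* Bounded two-part message build, in the style of
 * get_validate_flash_msg(): each append only gets the space left
 * over after the previous one. */
static int build_msg(char *msg, int msglen, int status, const char *detail)
{
	int n;

	n = snprintf(msg, msglen, "%d\n", status);
	if (n < msglen)
		n += snprintf(msg + n, msglen - n, "%s\n", detail);
	return n;
}
```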
+6-1
arch/powerpc/kernel/signal.c
···1313#include <linux/signal.h>1414#include <linux/uprobes.h>1515#include <linux/key.h>1616+#include <linux/context_tracking.h>1617#include <asm/hw_breakpoint.h>1718#include <asm/uaccess.h>1819#include <asm/unistd.h>···2524 * through debug.exception-trace sysctl.2625 */27262828-int show_unhandled_signals = 0;2727+int show_unhandled_signals = 1;29283029/*3130 * Allocate space for the signal frame···160159161160void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)162161{162162+ user_exit();163163+163164 if (thread_info_flags & _TIF_UPROBE)164165 uprobe_notify_resume(regs);165166···172169 clear_thread_flag(TIF_NOTIFY_RESUME);173170 tracehook_notify_resume(regs);174171 }172172+173173+ user_enter();175174}
+58-22
arch/powerpc/kernel/traps.c
···3535#include <linux/kdebug.h>3636#include <linux/debugfs.h>3737#include <linux/ratelimit.h>3838+#include <linux/context_tracking.h>38393940#include <asm/emulated_ops.h>4041#include <asm/pgtable.h>···668667669668void machine_check_exception(struct pt_regs *regs)670669{670670+ enum ctx_state prev_state = exception_enter();671671 int recover = 0;672672673673 __get_cpu_var(irq_stat).mce_exceptions++;···685683 recover = cur_cpu_spec->machine_check(regs);686684687685 if (recover > 0)688688- return;686686+ goto bail;689687690688#if defined(CONFIG_8xx) && defined(CONFIG_PCI)691689 /* the qspan pci read routines can cause machine checks -- Cort···695693 * -- BenH696694 */697695 bad_page_fault(regs, regs->dar, SIGBUS);698698- return;696696+ goto bail;699697#endif700698701699 if (debugger_fault_handler(regs))702702- return;700700+ goto bail;703701704702 if (check_io_access(regs))705705- return;703703+ goto bail;706704707705 die("Machine check", regs, SIGBUS);708706709707 /* Must die if the interrupt is not recoverable */710708 if (!(regs->msr & MSR_RI))711709 panic("Unrecoverable Machine check");710710+711711+bail:712712+ exception_exit(prev_state);712713}713714714715void SMIException(struct pt_regs *regs)···721716722717void unknown_exception(struct pt_regs *regs)723718{719719+ enum ctx_state prev_state = exception_enter();720720+724721 printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",725722 regs->nip, regs->msr, regs->trap);726723727724 _exception(SIGTRAP, regs, 0, 0);725725+726726+ exception_exit(prev_state);728727}729728730729void instruction_breakpoint_exception(struct pt_regs *regs)731730{731731+ enum ctx_state prev_state = exception_enter();732732+732733 if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,733734 5, SIGTRAP) == NOTIFY_STOP)734734- return;735735+ goto bail;735736 if (debugger_iabr_match(regs))736736- return;737737+ goto bail;737738 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);739739+740740+bail:741741+ 
exception_exit(prev_state);738742}739743740744void RunModeException(struct pt_regs *regs)···753739754740void __kprobes single_step_exception(struct pt_regs *regs)755741{742742+ enum ctx_state prev_state = exception_enter();743743+756744 clear_single_step(regs);757745758746 if (notify_die(DIE_SSTEP, "single_step", regs, 5,759747 5, SIGTRAP) == NOTIFY_STOP)760760- return;748748+ goto bail;761749 if (debugger_sstep(regs))762762- return;750750+ goto bail;763751764752 _exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);753753+754754+bail:755755+ exception_exit(prev_state);765756}766757767758/*···1024100510251006void __kprobes program_check_exception(struct pt_regs *regs)10261007{10081008+ enum ctx_state prev_state = exception_enter();10271009 unsigned int reason = get_reason(regs);10281010 extern int do_mathemu(struct pt_regs *regs);10291011···10341014 if (reason & REASON_FP) {10351015 /* IEEE FP exception */10361016 parse_fpe(regs);10371037- return;10171017+ goto bail;10381018 }10391019 if (reason & REASON_TRAP) {10401020 /* Debugger is first in line to stop recursive faults in10411021 * rcu_lock, notify_die, or atomic_notifier_call_chain */10421022 if (debugger_bpt(regs))10431043- return;10231023+ goto bail;1044102410451025 /* trap exception */10461026 if (notify_die(DIE_BPT, "breakpoint", regs, 5, 5, SIGTRAP)10471027 == NOTIFY_STOP)10481048- return;10281028+ goto bail;1049102910501030 if (!(regs->msr & MSR_PR) && /* not user-mode */10511031 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {10521032 regs->nip += 4;10531053- return;10331033+ goto bail;10541034 }10551035 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);10561056- return;10361036+ goto bail;10571037 }10581038#ifdef CONFIG_PPC_TRANSACTIONAL_MEM10591039 if (reason & REASON_TM) {···10691049 if (!user_mode(regs) &&10701050 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {10711051 regs->nip += 4;10721072- return;10521052+ goto bail;10731053 }10741054 /* If usermode caused this, it's done something illegal 
and10751055 * gets a SIGILL slap on the wrist. We call it an illegal···10791059 */10801060 if (user_mode(regs)) {10811061 _exception(SIGILL, regs, ILL_ILLOPN, regs->nip);10821082- return;10621062+ goto bail;10831063 } else {10841064 printk(KERN_EMERG "Unexpected TM Bad Thing exception "10851065 "at %lx (msr 0x%x)\n", regs->nip, reason);···11031083 switch (do_mathemu(regs)) {11041084 case 0:11051085 emulate_single_step(regs);11061106- return;10861086+ goto bail;11071087 case 1: {11081088 int code = 0;11091089 code = __parse_fpscr(current->thread.fpscr.val);11101090 _exception(SIGFPE, regs, code, regs->nip);11111111- return;10911091+ goto bail;11121092 }11131093 case -EFAULT:11141094 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);11151115- return;10951095+ goto bail;11161096 }11171097 /* fall through on any other errors */11181098#endif /* CONFIG_MATH_EMULATION */···11231103 case 0:11241104 regs->nip += 4;11251105 emulate_single_step(regs);11261126- return;11061106+ goto bail;11271107 case -EFAULT:11281108 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);11291129- return;11091109+ goto bail;11301110 }11311111 }11321112···11341114 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip);11351115 else11361116 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);11171117+11181118+bail:11191119+ exception_exit(prev_state);11371120}1138112111391122void alignment_exception(struct pt_regs *regs)11401123{11241124+ enum ctx_state prev_state = exception_enter();11411125 int sig, code, fixed = 0;1142112611431127 /* We restore the interrupt state now */···11551131 if (fixed == 1) {11561132 regs->nip += 4; /* skip over emulated instruction */11571133 emulate_single_step(regs);11581158- return;11341134+ goto bail;11591135 }1160113611611137 /* Operand address was bad */···11701146 _exception(sig, regs, code, regs->dar);11711147 else11721148 bad_page_fault(regs, regs->dar, sig);11491149+11501150+bail:11511151+ exception_exit(prev_state);11731152}1174115311751154void StackOverflow(struct 
pt_regs *regs)···1201117412021175void kernel_fp_unavailable_exception(struct pt_regs *regs)12031176{11771177+ enum ctx_state prev_state = exception_enter();11781178+12041179 printk(KERN_EMERG "Unrecoverable FP Unavailable Exception "12051180 "%lx at %lx\n", regs->trap, regs->nip);12061181 die("Unrecoverable FP Unavailable Exception", regs, SIGABRT);11821182+11831183+ exception_exit(prev_state);12071184}1208118512091186void altivec_unavailable_exception(struct pt_regs *regs)12101187{11881188+ enum ctx_state prev_state = exception_enter();11891189+12111190 if (user_mode(regs)) {12121191 /* A user program has executed an altivec instruction,12131192 but this kernel doesn't support altivec. */12141193 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);12151215- return;11941194+ goto bail;12161195 }1217119612181197 printk(KERN_EMERG "Unrecoverable VMX/Altivec Unavailable Exception "12191198 "%lx at %lx\n", regs->trap, regs->nip);12201199 die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT);12001200+12011201+bail:12021202+ exception_exit(prev_state);12211203}1222120412231205void vsx_unavailable_exception(struct pt_regs *regs)
···3232#include <linux/perf_event.h>3333#include <linux/magic.h>3434#include <linux/ratelimit.h>3535+#include <linux/context_tracking.h>35363637#include <asm/firmware.h>3738#include <asm/page.h>···197196int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,198197 unsigned long error_code)199198{199199+ enum ctx_state prev_state = exception_enter();200200 struct vm_area_struct * vma;201201 struct mm_struct *mm = current->mm;202202 unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;···206204 int trap = TRAP(regs);207205 int is_exec = trap == 0x400;208206 int fault;207207+ int rc = 0;209208210209#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE))211210 /*···233230 * look at it234231 */235232 if (error_code & ICSWX_DSI_UCT) {236236- int rc = acop_handle_fault(regs, address, error_code);233233+ rc = acop_handle_fault(regs, address, error_code);237234 if (rc)238238- return rc;235235+ goto bail;239236 }240237#endif /* CONFIG_PPC_ICSWX */241238242239 if (notify_page_fault(regs))243243- return 0;240240+ goto bail;244241245242 if (unlikely(debugger_fault_handler(regs)))246246- return 0;243243+ goto bail;247244248245 /* On a kernel SLB miss we can only check for a valid exception entry */249249- if (!user_mode(regs) && (address >= TASK_SIZE))250250- return SIGSEGV;246246+ if (!user_mode(regs) && (address >= TASK_SIZE)) {247247+ rc = SIGSEGV;248248+ goto bail;249249+ }251250252251#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE) || \253252 defined(CONFIG_PPC_BOOK3S_64))254253 if (error_code & DSISR_DABRMATCH) {255254 /* breakpoint match */256255 do_break(regs, address, error_code);257257- return 0;256256+ goto bail;258257 }259258#endif260259···265260 local_irq_enable();266261267262 if (in_atomic() || mm == NULL) {268268- if (!user_mode(regs))269269- return SIGSEGV;263263+ if (!user_mode(regs)) {264264+ rc = SIGSEGV;265265+ goto bail;266266+ }270267 /* in_atomic() in user mode is really bad,271268 as is current->mm == NULL. 
*/272269 printk(KERN_EMERG "Page fault in user mode with "···424417 */425418 fault = handle_mm_fault(mm, vma, address, flags);426419 if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {427427- int rc = mm_fault_error(regs, address, fault);420420+ rc = mm_fault_error(regs, address, fault);428421 if (rc >= MM_FAULT_RETURN)429429- return rc;422422+ goto bail;423423+ else424424+ rc = 0;430425 }431426432427 /*···463454 }464455465456 up_read(&mm->mmap_sem);466466- return 0;457457+ goto bail;467458468459bad_area:469460 up_read(&mm->mmap_sem);···472463 /* User mode accesses cause a SIGSEGV */473464 if (user_mode(regs)) {474465 _exception(SIGSEGV, regs, code, address);475475- return 0;466466+ goto bail;476467 }477468478469 if (is_exec && (error_code & DSISR_PROTFAULT))···480471 " page (%lx) - exploit attempt? (uid: %d)\n",481472 address, from_kuid(&init_user_ns, current_uid()));482473483483- return SIGSEGV;474474+ rc = SIGSEGV;475475+476476+bail:477477+ exception_exit(prev_state);478478+ return rc;484479485480}486481
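The traps.c, fault.c and hash_utils_64.c hunks convert every early return into "goto bail" so that exception_exit(prev_state) is always paired with the exception_enter() at function entry. A userspace sketch of the pairing, with a simple counter standing in for the context-tracking state (all names here are illustrative):

```c
static int ctx_depth;	/* stands in for the context-tracking state */

static int  exception_enter_sketch(void)     { return ctx_depth++; }
static void exception_exit_sketch(int prev)  { ctx_depth = prev; }

/* Every early return becomes "goto bail" so the exit hook always runs. */
static int do_fault_sketch(int bad_addr, int user_mode)
{
	int prev_state = exception_enter_sketch();
	int rc = 0;

	if (bad_addr && !user_mode) {
		rc = -1;	/* was a bare "return SIGSEGV" before the patch */
		goto bail;
	}
	/* ... normal fault handling would go here ... */
bail:
	exception_exit_sketch(prev_state);
	return rc;
}
```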
+27-9
arch/powerpc/mm/hash_utils_64.c
···3333#include <linux/init.h>3434#include <linux/signal.h>3535#include <linux/memblock.h>3636+#include <linux/context_tracking.h>36373738#include <asm/processor.h>3839#include <asm/pgtable.h>···955954 */956955int hash_page(unsigned long ea, unsigned long access, unsigned long trap)957956{957957+ enum ctx_state prev_state = exception_enter();958958 pgd_t *pgdir;959959 unsigned long vsid;960960 struct mm_struct *mm;···975973 mm = current->mm;976974 if (! mm) {977975 DBG_LOW(" user region with no mm !\n");978978- return 1;976976+ rc = 1;977977+ goto bail;979978 }980979 psize = get_slice_psize(mm, ea);981980 ssize = user_segment_size(ea);···995992 /* Not a valid range996993 * Send the problem up to do_page_fault 997994 */998998- return 1;995995+ rc = 1;996996+ goto bail;999997 }1000998 DBG_LOW(" mm=%p, mm->pgdir=%p, vsid=%016lx\n", mm, mm->pgd, vsid);100199910021000 /* Bad address. */10031001 if (!vsid) {10041002 DBG_LOW("Bad address!\n");10051005- return 1;10031003+ rc = 1;10041004+ goto bail;10061005 }10071006 /* Get pgdir */10081007 pgdir = mm->pgd;10091009- if (pgdir == NULL)10101010- return 1;10081008+ if (pgdir == NULL) {10091009+ rc = 1;10101010+ goto bail;10111011+ }1011101210121013 /* Check CPU locality */10131014 tmp = cpumask_of(smp_processor_id());···10341027 ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift);10351028 if (ptep == NULL || !pte_present(*ptep)) {10361029 DBG_LOW(" no PTE !\n");10371037- return 1;10301030+ rc = 1;10311031+ goto bail;10381032 }1039103310401034 /* Add _PAGE_PRESENT to the required access perm */···10461038 */10471039 if (access & ~pte_val(*ptep)) {10481040 DBG_LOW(" no access !\n");10491049- return 1;10411041+ rc = 1;10421042+ goto bail;10501043 }1051104410521045#ifdef CONFIG_HUGETLB_PAGE10531053- if (hugeshift)10541054- return __hash_page_huge(ea, access, vsid, ptep, trap, local,10461046+ if (hugeshift) {10471047+ rc = __hash_page_huge(ea, access, vsid, ptep, trap, local,10551048 ssize, hugeshift, psize);10491049+ goto 
bail;10501050+ }10561051#endif /* CONFIG_HUGETLB_PAGE */1057105210581053#ifndef CONFIG_PPC_64K_PAGES···11351124 pte_val(*(ptep + PTRS_PER_PTE)));11361125#endif11371126 DBG_LOW(" -> rc=%d\n", rc);11271127+11281128+bail:11291129+ exception_exit(prev_state);11381130 return rc;11391131}11401132EXPORT_SYMBOL_GPL(hash_page);···12731259 */12741260void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)12751261{12621262+ enum ctx_state prev_state = exception_enter();12631263+12761264 if (user_mode(regs)) {12771265#ifdef CONFIG_PPC_SUBPAGE_PROT12781266 if (rc == -2)···12841268 _exception(SIGBUS, regs, BUS_ADRERR, address);12851269 } else12861270 bad_page_fault(regs, address, SIGBUS);12711271+12721272+ exception_exit(prev_state);12871273}1288127412891275long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
+2-1
arch/powerpc/mm/init_64.c
···215215 unsigned long phys)216216{217217 int mapped = htab_bolt_mapping(start, start + page_size, phys,218218- PAGE_KERNEL, mmu_vmemmap_psize,218218+ pgprot_val(PAGE_KERNEL),219219+ mmu_vmemmap_psize,219220 mmu_kernel_ssize);220221 BUG_ON(mapped < 0);221222}
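The init_64.c fix passes pgprot_val(PAGE_KERNEL) because htab_bolt_mapping() takes the raw protection bits, while PAGE_KERNEL is a pgprot_t, a struct wrapper the kernel uses so that raw and wrapped values cannot be mixed silently. A sketch of that wrapper idiom (types and names here are stand-ins):

```c
/* Struct-wrapped flag word, in the style of the kernel's pgprot_t:
 * the wrapper cannot be passed where raw bits are expected, so every
 * unwrap is explicit and visible at the call site. */
typedef struct { unsigned long bits; } prot_sketch_t;

#define MK_PROT(b)  ((prot_sketch_t){ (b) })
#define PROT_VAL(p) ((p).bits)

/* Stand-in for htab_bolt_mapping(): takes raw bits, not the wrapper. */
static unsigned long bolt_mapping_sketch(unsigned long raw_prot)
{
	return raw_prot;
}
```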
+159-121
arch/powerpc/perf/core-book3s.c
···1313#include <linux/perf_event.h>1414#include <linux/percpu.h>1515#include <linux/hardirq.h>1616+#include <linux/uaccess.h>1617#include <asm/reg.h>1718#include <asm/pmc.h>1819#include <asm/machdep.h>1920#include <asm/firmware.h>2021#include <asm/ptrace.h>2222+#include <asm/code-patching.h>21232224#define BHRB_MAX_ENTRIES 322325#define BHRB_TARGET 0x0000000000000002···102100 return 1;103101}104102103103+static inline void power_pmu_bhrb_enable(struct perf_event *event) {}104104+static inline void power_pmu_bhrb_disable(struct perf_event *event) {}105105+void power_pmu_flush_branch_stack(void) {}106106+static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}105107#endif /* CONFIG_PPC32 */106108107109static bool regs_use_siar(struct pt_regs *regs)···312306 return mmcra & POWER7P_MMCRA_SIAR_VALID;313307314308 return 1;309309+}310310+311311+312312+/* Reset all possible BHRB entries */313313+static void power_pmu_bhrb_reset(void)314314+{315315+ asm volatile(PPC_CLRBHRB);316316+}317317+318318+static void power_pmu_bhrb_enable(struct perf_event *event)319319+{320320+ struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);321321+322322+ if (!ppmu->bhrb_nr)323323+ return;324324+325325+ /* Clear BHRB if we changed task context to avoid data leaks */326326+ if (event->ctx->task && cpuhw->bhrb_context != event->ctx) {327327+ power_pmu_bhrb_reset();328328+ cpuhw->bhrb_context = event->ctx;329329+ }330330+ cpuhw->bhrb_users++;331331+}332332+333333+static void power_pmu_bhrb_disable(struct perf_event *event)334334+{335335+ struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);336336+337337+ if (!ppmu->bhrb_nr)338338+ return;339339+340340+ cpuhw->bhrb_users--;341341+ WARN_ON_ONCE(cpuhw->bhrb_users < 0);342342+343343+ if (!cpuhw->disabled && !cpuhw->bhrb_users) {344344+ /* BHRB cannot be turned off when other345345+ * events are active on the PMU.346346+ */347347+348348+ /* avoid stale pointer */349349+ cpuhw->bhrb_context = NULL;350350+ 
}351351+}352352+353353+/* Called from ctxsw to prevent one process's branch entries to354354+ * mingle with the other process's entries during context switch.355355+ */356356+void power_pmu_flush_branch_stack(void)357357+{358358+ if (ppmu->bhrb_nr)359359+ power_pmu_bhrb_reset();360360+}361361+/* Calculate the to address for a branch */362362+static __u64 power_pmu_bhrb_to(u64 addr)363363+{364364+ unsigned int instr;365365+ int ret;366366+ __u64 target;367367+368368+ if (is_kernel_addr(addr))369369+ return branch_target((unsigned int *)addr);370370+371371+ /* Userspace: need copy instruction here then translate it */372372+ pagefault_disable();373373+ ret = __get_user_inatomic(instr, (unsigned int __user *)addr);374374+ if (ret) {375375+ pagefault_enable();376376+ return 0;377377+ }378378+ pagefault_enable();379379+380380+ target = branch_target(&instr);381381+ if ((!target) || (instr & BRANCH_ABSOLUTE))382382+ return target;383383+384384+ /* Translate relative branch target from kernel to user address */385385+ return target - (unsigned long)&instr + addr;386386+}387387+388388+/* Processing BHRB entries */389389+void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw)390390+{391391+ u64 val;392392+ u64 addr;393393+ int r_index, u_index, pred;394394+395395+ r_index = 0;396396+ u_index = 0;397397+ while (r_index < ppmu->bhrb_nr) {398398+ /* Assembly read function */399399+ val = read_bhrb(r_index++);400400+ if (!val)401401+ /* Terminal marker: End of valid BHRB entries */402402+ break;403403+ else {404404+ addr = val & BHRB_EA;405405+ pred = val & BHRB_PREDICTION;406406+407407+ if (!addr)408408+ /* invalid entry */409409+ continue;410410+411411+ /* Branches are read most recent first (ie. mfbhrb 0 is412412+ * the most recent branch).413413+ * There are two types of valid entries:414414+ * 1) a target entry which is the to address of a415415+ * computed goto like a blr,bctr,btar. 
The next416416+ * entry read from the bhrb will be branch417417+ * corresponding to this target (ie. the actual418418+ * blr/bctr/btar instruction).419419+ * 2) a from address which is an actual branch. If a420420+ * target entry proceeds this, then this is the421421+ * matching branch for that target. If this is not422422+ * following a target entry, then this is a branch423423+ * where the target is given as an immediate field424424+ * in the instruction (ie. an i or b form branch).425425+ * In this case we need to read the instruction from426426+ * memory to determine the target/to address.427427+ */428428+429429+ if (val & BHRB_TARGET) {430430+ /* Target branches use two entries431431+ * (ie. computed gotos/XL form)432432+ */433433+ cpuhw->bhrb_entries[u_index].to = addr;434434+ cpuhw->bhrb_entries[u_index].mispred = pred;435435+ cpuhw->bhrb_entries[u_index].predicted = ~pred;436436+437437+ /* Get from address in next entry */438438+ val = read_bhrb(r_index++);439439+ addr = val & BHRB_EA;440440+ if (val & BHRB_TARGET) {441441+ /* Shouldn't have two targets in a442442+ row.. 
Reset index and try again */443443+ r_index--;444444+ addr = 0;445445+ }446446+ cpuhw->bhrb_entries[u_index].from = addr;447447+ } else {448448+ /* Branches to immediate field 449449+ (ie I or B form) */450450+ cpuhw->bhrb_entries[u_index].from = addr;451451+ cpuhw->bhrb_entries[u_index].to =452452+ power_pmu_bhrb_to(addr);453453+ cpuhw->bhrb_entries[u_index].mispred = pred;454454+ cpuhw->bhrb_entries[u_index].predicted = ~pred;455455+ }456456+ u_index++;457457+458458+ }459459+ }460460+ cpuhw->bhrb_stack.nr = u_index;461461+ return;315462}316463317464#endif /* CONFIG_PPC64 */···1063904 return n;1064905}106590610661066-/* Reset all possible BHRB entries */10671067-static void power_pmu_bhrb_reset(void)10681068-{10691069- asm volatile(PPC_CLRBHRB);10701070-}10711071-10721072-void power_pmu_bhrb_enable(struct perf_event *event)10731073-{10741074- struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);10751075-10761076- if (!ppmu->bhrb_nr)10771077- return;10781078-10791079- /* Clear BHRB if we changed task context to avoid data leaks */10801080- if (event->ctx->task && cpuhw->bhrb_context != event->ctx) {10811081- power_pmu_bhrb_reset();10821082- cpuhw->bhrb_context = event->ctx;10831083- }10841084- cpuhw->bhrb_users++;10851085-}10861086-10871087-void power_pmu_bhrb_disable(struct perf_event *event)10881088-{10891089- struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);10901090-10911091- if (!ppmu->bhrb_nr)10921092- return;10931093-10941094- cpuhw->bhrb_users--;10951095- WARN_ON_ONCE(cpuhw->bhrb_users < 0);10961096-10971097- if (!cpuhw->disabled && !cpuhw->bhrb_users) {10981098- /* BHRB cannot be turned off when other10991099- * events are active on the PMU.11001100- */11011101-11021102- /* avoid stale pointer */11031103- cpuhw->bhrb_context = NULL;11041104- }11051105-}11061106-1107907/*1108908 * Add a event to the PMU.1109909 * If all events are not already frozen, then we disable and···12961178 cpuhw->group_flag &= ~PERF_EVENT_TXN;12971179 
perf_pmu_enable(pmu);12981180 return 0;12991299-}13001300-13011301-/* Called from ctxsw to prevent one process's branch entries to13021302- * mingle with the other process's entries during context switch.13031303- */13041304-void power_pmu_flush_branch_stack(void)13051305-{13061306- if (ppmu->bhrb_nr)13071307- power_pmu_bhrb_reset();13081181}1309118213101183/*···15661457 .event_idx = power_pmu_event_idx,15671458 .flush_branch_stack = power_pmu_flush_branch_stack,15681459};15691569-15701570-/* Processing BHRB entries */15711571-void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw)15721572-{15731573- u64 val;15741574- u64 addr;15751575- int r_index, u_index, target, pred;15761576-15771577- r_index = 0;15781578- u_index = 0;15791579- while (r_index < ppmu->bhrb_nr) {15801580- /* Assembly read function */15811581- val = read_bhrb(r_index);15821582-15831583- /* Terminal marker: End of valid BHRB entries */15841584- if (val == 0) {15851585- break;15861586- } else {15871587- /* BHRB field break up */15881588- addr = val & BHRB_EA;15891589- pred = val & BHRB_PREDICTION;15901590- target = val & BHRB_TARGET;15911591-15921592- /* Probable Missed entry: Not applicable for POWER8 */15931593- if ((addr == 0) && (target == 0) && (pred == 1)) {15941594- r_index++;15951595- continue;15961596- }15971597-15981598- /* Real Missed entry: Power8 based missed entry */15991599- if ((addr == 0) && (target == 1) && (pred == 1)) {16001600- r_index++;16011601- continue;16021602- }16031603-16041604- /* Reserved condition: Not a valid entry */16051605- if ((addr == 0) && (target == 1) && (pred == 0)) {16061606- r_index++;16071607- continue;16081608- }16091609-16101610- /* Is a target address */16111611- if (val & BHRB_TARGET) {16121612- /* First address cannot be a target address */16131613- if (r_index == 0) {16141614- r_index++;16151615- continue;16161616- }16171617-16181618- /* Update target address for the previous entry */16191619- cpuhw->bhrb_entries[u_index - 1].to = addr;16201620- 
cpuhw->bhrb_entries[u_index - 1].mispred = pred;16211621- cpuhw->bhrb_entries[u_index - 1].predicted = ~pred;16221622-16231623- /* Dont increment u_index */16241624- r_index++;16251625- } else {16261626- /* Update address, flags for current entry */16271627- cpuhw->bhrb_entries[u_index].from = addr;16281628- cpuhw->bhrb_entries[u_index].mispred = pred;16291629- cpuhw->bhrb_entries[u_index].predicted = ~pred;16301630-16311631- /* Successfully popullated one entry */16321632- u_index++;16331633- r_index++;16341634- }16351635- }16361636- }16371637- cpuhw->bhrb_stack.nr = u_index;16381638- return;16391639-}1640146016411461/*16421462 * A counter has overflowed; update its count and record
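The rewritten power_pmu_bhrb_read() above splits each raw BHRB word into an effective address plus prediction and target flag bits. A sketch of that field break-up: the BHRB_TARGET value is taken from the hunk, while the prediction and address masks are my assumption of a low-two-flag-bits layout, and the struct is a stand-in.

```c
#include <stdint.h>

#define BHRB_TARGET_S     0x0000000000000002ULL	/* value from the hunk */
#define BHRB_PREDICTION_S 0x0000000000000001ULL	/* assumed */
#define BHRB_EA_S         0xfffffffffffffffcULL	/* assumed: all but the flag bits */

struct bhrb_fields { uint64_t addr; int pred; int target; };

/* "BHRB field break up": mask out the address and test the two flags. */
static struct bhrb_fields bhrb_decode_sketch(uint64_t val)
{
	struct bhrb_fields f = {
		.addr   = val & BHRB_EA_S,
		.pred   = (val & BHRB_PREDICTION_S) != 0,
		.target = (val & BHRB_TARGET_S) != 0,
	};
	return f;
}
```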
+1-1
arch/powerpc/platforms/Kconfig
···128128129129config RTAS_PROC130130 bool "Proc interface to RTAS"131131- depends on PPC_RTAS131131+ depends on PPC_RTAS && PROC_FS132132 default y133133134134config RTAS_FLASH
+29-1
arch/powerpc/platforms/powernv/opal.c
···1515#include <linux/of.h>1616#include <linux/of_platform.h>1717#include <linux/interrupt.h>1818+#include <linux/slab.h>1819#include <asm/opal.h>1920#include <asm/firmware.h>2021···2928static struct device_node *opal_node;3029static DEFINE_SPINLOCK(opal_write_lock);3130extern u64 opal_mc_secondary_handler[];3131+static unsigned int *opal_irqs;3232+static unsigned int opal_irq_count;32333334int __init early_init_dt_scan_opal(unsigned long node,3435 const char *uname, int depth, void *data)···5653 opal.entry, entryp, entrysz);57545855 powerpc_firmware_features |= FW_FEATURE_OPAL;5959- if (of_flat_dt_is_compatible(node, "ibm,opal-v2")) {5656+ if (of_flat_dt_is_compatible(node, "ibm,opal-v3")) {5757+ powerpc_firmware_features |= FW_FEATURE_OPALv2;5858+ powerpc_firmware_features |= FW_FEATURE_OPALv3;5959+ printk("OPAL V3 detected !\n");6060+ } else if (of_flat_dt_is_compatible(node, "ibm,opal-v2")) {6061 powerpc_firmware_features |= FW_FEATURE_OPALv2;6162 printk("OPAL V2 detected !\n");6263 } else {···151144 rc == OPAL_BUSY_EVENT || rc == OPAL_SUCCESS)) {152145 len = total_len;153146 rc = opal_console_write(vtermno, &len, data);147147+148148+ /* Closed or other error drop */149149+ if (rc != OPAL_SUCCESS && rc != OPAL_BUSY &&150150+ rc != OPAL_BUSY_EVENT) {151151+ written = total_len;152152+ break;153153+ }154154 if (rc == OPAL_SUCCESS) {155155 total_len -= len;156156 data += len;···330316 irqs = of_get_property(opal_node, "opal-interrupts", &irqlen);331317 pr_debug("opal: Found %d interrupts reserved for OPAL\n",332318 irqs ? 
(irqlen / 4) : 0);319319+ opal_irq_count = irqlen / 4;320320+ opal_irqs = kzalloc(opal_irq_count * sizeof(unsigned int), GFP_KERNEL);333321 for (i = 0; irqs && i < (irqlen / 4); i++, irqs++) {334322 unsigned int hwirq = be32_to_cpup(irqs);335323 unsigned int irq = irq_create_mapping(NULL, hwirq);···343327 if (rc)344328 pr_warning("opal: Error %d requesting irq %d"345329 " (0x%x)\n", rc, irq, hwirq);330330+ opal_irqs[i] = irq;346331 }347332 return 0;348333}349334subsys_initcall(opal_init);335335+336336+void opal_shutdown(void)337337+{338338+ unsigned int i;339339+340340+ for (i = 0; i < opal_irq_count; i++) {341341+ if (opal_irqs[i])342342+ free_irq(opal_irqs[i], 0);343343+ opal_irqs[i] = 0;344344+ }345345+}
···7878 if (root)7979 model = of_get_property(root, "model", NULL);8080 seq_printf(m, "machine\t\t: PowerNV %s\n", model);8181- if (firmware_has_feature(FW_FEATURE_OPALv2))8181+ if (firmware_has_feature(FW_FEATURE_OPALv3))8282+ seq_printf(m, "firmware\t: OPAL v3\n");8383+ else if (firmware_has_feature(FW_FEATURE_OPALv2))8284 seq_printf(m, "firmware\t: OPAL v2\n");8385 else if (firmware_has_feature(FW_FEATURE_OPAL))8486 seq_printf(m, "firmware\t: OPAL v1\n");···126124127125static void pnv_progress(char *s, unsigned short hex)128126{127127+}128128+129129+static void pnv_shutdown(void)130130+{131131+ /* Let the PCI code clear up IODA tables */132132+ pnv_pci_shutdown();133133+134134+ /* And unregister all OPAL interrupts so they don't fire135135+ * up while we kexec136136+ */137137+ opal_shutdown();129138}130139131140#ifdef CONFIG_KEXEC···200187 .init_IRQ = pnv_init_IRQ,201188 .show_cpuinfo = pnv_show_cpuinfo,202189 .progress = pnv_progress,190190+ .machine_shutdown = pnv_shutdown,203191 .power_save = power7_idle,204192 .calibrate_decr = generic_calibrate_decr,205193#ifdef CONFIG_KEXEC
+56 -6
arch/powerpc/platforms/powernv/smp.c
···

        BUG_ON(nr < 0 || nr >= NR_CPUS);

-       /* On OPAL v2 the CPU are still spinning inside OPAL itself,
-        * get them back now
+       /*
+        * If we already started or OPALv2 is not supported, we just
+        * kick the CPU via the PACA
         */
-       if (!paca[nr].cpu_start && firmware_has_feature(FW_FEATURE_OPALv2)) {
-               pr_devel("OPAL: Starting CPU %d (HW 0x%x)...\n", nr, pcpu);
-               rc = opal_start_cpu(pcpu, start_here);
+       if (paca[nr].cpu_start || !firmware_has_feature(FW_FEATURE_OPALv2))
+               goto kick;
+
+       /*
+        * At this point, the CPU can either be spinning on the way in
+        * from kexec or be inside OPAL waiting to be started for the
+        * first time. OPAL v3 allows us to query OPAL to know if it
+        * has the CPUs, so we do that
+        */
+       if (firmware_has_feature(FW_FEATURE_OPALv3)) {
+               uint8_t status;
+
+               rc = opal_query_cpu_status(pcpu, &status);
                if (rc != OPAL_SUCCESS) {
-                       pr_warn("OPAL Error %ld starting CPU %d\n",
+                       pr_warn("OPAL Error %ld querying CPU %d state\n",
                                rc, nr);
                        return -ENODEV;
                }
+
+               /*
+                * Already started, just kick it, probably coming from
+                * kexec and spinning
+                */
+               if (status == OPAL_THREAD_STARTED)
+                       goto kick;
+
+               /*
+                * Available/inactive, let's kick it
+                */
+               if (status == OPAL_THREAD_INACTIVE) {
+                       pr_devel("OPAL: Starting CPU %d (HW 0x%x)...\n",
+                                nr, pcpu);
+                       rc = opal_start_cpu(pcpu, start_here);
+                       if (rc != OPAL_SUCCESS) {
+                               pr_warn("OPAL Error %ld starting CPU %d\n",
+                                       rc, nr);
+                               return -ENODEV;
+                       }
+               } else {
+                       /*
+                        * An unavailable CPU (or any other unknown status)
+                        * shouldn't be started. It should also
+                        * not be in the possible map but currently it can
+                        * happen
+                        */
+                       pr_devel("OPAL: CPU %d (HW 0x%x) is unavailable"
+                                " (status %d)...\n", nr, pcpu, status);
+                       return -ENODEV;
+               }
+       } else {
+               /*
+                * On OPAL v2, we just kick it and hope for the best,
+                * we must not test the error from opal_start_cpu() or
+                * we would fail to get CPUs from kexec.
+                */
+               opal_start_cpu(pcpu, start_here);
        }
+ kick:
        return smp_generic_kick_cpu(nr);
 }

···
        ev_int_set_config(src, config, prio, cpuid);
        spin_unlock_irqrestore(&ehv_pic_lock, flags);

-       return 0;
+       return IRQ_SET_MASK_OK;
 }

 static unsigned int ehv_pic_type_to_vecpri(unsigned int type)
+1 -1
arch/powerpc/sysdev/mpic.c
···
                                       mpic_physmask(mask));
        }

-       return 0;
+       return IRQ_SET_MASK_OK;
 }

 static unsigned int mpic_type_to_vecpri(struct mpic *mpic, unsigned int type)
+105
arch/powerpc/sysdev/udbg_memcons.c

+/*
+ * A udbg backend which logs messages and reads input from in memory
+ * buffers.
+ *
+ * The console output can be read from memcons_output which is a
+ * circular buffer whose next write position is stored in memcons.output_pos.
+ *
+ * Input may be passed by writing into the memcons_input buffer when it is
+ * empty. The input buffer is empty when both input_pos == input_start and
+ * *input_start == '\0'.
+ *
+ * Copyright (C) 2003-2005 Anton Blanchard and Milton Miller, IBM Corp
+ * Copyright (C) 2013 Alistair Popple, IBM Corp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <asm/barrier.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+#include <asm/udbg.h>
+
+struct memcons {
+       char *output_start;
+       char *output_pos;
+       char *output_end;
+       char *input_start;
+       char *input_pos;
+       char *input_end;
+};
+
+static char memcons_output[CONFIG_PPC_MEMCONS_OUTPUT_SIZE];
+static char memcons_input[CONFIG_PPC_MEMCONS_INPUT_SIZE];
+
+struct memcons memcons = {
+       .output_start = memcons_output,
+       .output_pos = memcons_output,
+       .output_end = &memcons_output[CONFIG_PPC_MEMCONS_OUTPUT_SIZE],
+       .input_start = memcons_input,
+       .input_pos = memcons_input,
+       .input_end = &memcons_input[CONFIG_PPC_MEMCONS_INPUT_SIZE],
+};
+
+void memcons_putc(char c)
+{
+       char *new_output_pos;
+
+       *memcons.output_pos = c;
+       wmb();
+       new_output_pos = memcons.output_pos + 1;
+       if (new_output_pos >= memcons.output_end)
+               new_output_pos = memcons.output_start;
+
+       memcons.output_pos = new_output_pos;
+}
+
+int memcons_getc_poll(void)
+{
+       char c;
+       char *new_input_pos;
+
+       if (*memcons.input_pos) {
+               c = *memcons.input_pos;
+
+               new_input_pos = memcons.input_pos + 1;
+               if (new_input_pos >= memcons.input_end)
+                       new_input_pos = memcons.input_start;
+               else if (*new_input_pos == '\0')
+                       new_input_pos = memcons.input_start;
+
+               *memcons.input_pos = '\0';
+               wmb();
+               memcons.input_pos = new_input_pos;
+               return c;
+       }
+
+       return -1;
+}
+
+int memcons_getc(void)
+{
+       int c;
+
+       while (1) {
+               c = memcons_getc_poll();
+               if (c == -1)
+                       cpu_relax();
+               else
+                       break;
+       }
+
+       return c;
+}
+
+void udbg_init_memcons(void)
+{
+       udbg_putc = memcons_putc;
+       udbg_getc = memcons_getc;
+       udbg_getc_poll = memcons_getc_poll;
+}
···
        select GENERIC_CLOCKEVENTS_BROADCAST    if X86_64 || (X86_32 && X86_LOCAL_APIC)
        select GENERIC_TIME_VSYSCALL            if X86_64
        select KTIME_SCALAR                     if X86_32
-       select ALWAYS_USE_PERSISTENT_CLOCK
        select GENERIC_STRNCPY_FROM_USER
        select GENERIC_STRNLEN_USER
        select HAVE_CONTEXT_TRACKING            if X86_64
+1 -1
arch/x86/kernel/head64.c
···
 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD];
 static unsigned int __initdata next_early_pgt = 2;
-pmdval_t __initdata early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX);
+pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX);

 /* Wipe all early page tables except for the kernel symbol map */
 static void __init reset_early_page_tables(void)
+3 -2
arch/x86/kernel/microcode_intel_early.c
···
 #endif

 #if defined(CONFIG_MICROCODE_INTEL_EARLY) && defined(CONFIG_HOTPLUG_CPU)
+static DEFINE_MUTEX(x86_cpu_microcode_mutex);
 /*
  * Save this mc into mc_saved_data. So it will be loaded early when a CPU is
  * hot added or resumes.
···
         * Hold hotplug lock so mc_saved_data is not accessed by a CPU in
         * hotplug.
         */
-       cpu_hotplug_driver_lock();
+       mutex_lock(&x86_cpu_microcode_mutex);

        mc_saved_count_init = mc_saved_data.mc_saved_count;
        mc_saved_count = mc_saved_data.mc_saved_count;
···
        }

 out:
-       cpu_hotplug_driver_unlock();
+       mutex_unlock(&x86_cpu_microcode_mutex);

        return ret;
 }
···
 }

 /*
- * would have hole in the middle or ends, and only ram parts will be mapped.
+ * We need to iterate through the E820 memory map and create direct mappings
+ * for only E820_RAM and E820_KERN_RESERVED regions. We cannot simply
+ * create direct mappings for all pfns from [0 to max_low_pfn) and
+ * [4GB to max_pfn) because of possible memory holes in high addresses
+ * that cannot be marked as UC by fixed/variable range MTRRs.
+ * Depending on the alignment of E820 ranges, this may possibly result
+ * in using smaller size (i.e. 4K instead of 2M or 1G) page tables.
+ *
+ * init_mem_mapping() calls init_range_memory_mapping() with big range.
+ * That range would have hole in the middle or ends, and only ram parts
+ * will be mapped in init_range_memory_mapping().
  */
 static unsigned long __init init_range_memory_mapping(
                                           unsigned long r_start,
···
        max_pfn_mapped = 0; /* will get exact value next */
        min_pfn_mapped = real_end >> PAGE_SHIFT;
        last_start = start = real_end;
+
+       /*
+        * We start from the top (end of memory) and go to the bottom.
+        * The memblock_find_in_range() gets us a block of RAM from the
+        * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
+        * for page table.
+        */
        while (last_start > ISA_END_ADDRESS) {
                if (last_start > step_size) {
                        start = round_down(last_start - 1, step_size);
+33
drivers/acpi/ac.c
···
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/types.h>
+#include <linux/dmi.h>
+#include <linux/delay.h>
 #ifdef CONFIG_ACPI_PROCFS_POWER
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
···
 static int acpi_ac_resume(struct device *dev);
 #endif
 static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
+
+static int ac_sleep_before_get_state_ms;

 static struct acpi_driver acpi_ac_driver = {
        .name = "ac",
···
        case ACPI_AC_NOTIFY_STATUS:
        case ACPI_NOTIFY_BUS_CHECK:
        case ACPI_NOTIFY_DEVICE_CHECK:
+               /*
+                * A buggy BIOS may notify AC first and then sleep for
+                * a specific time before doing actual operations in the
+                * EC event handler (_Qxx). This will cause the AC state
+                * reported by the ACPI event to be incorrect, so wait for a
+                * specific time for the EC event handler to make progress.
+                */
+               if (ac_sleep_before_get_state_ms > 0)
+                       msleep(ac_sleep_before_get_state_ms);
+
                acpi_ac_get_state(ac);
                acpi_bus_generate_proc_event(device, event, (u32) ac->state);
                acpi_bus_generate_netlink_event(device->pnp.device_class,
···

        return;
 }
+
+static int thinkpad_e530_quirk(const struct dmi_system_id *d)
+{
+       ac_sleep_before_get_state_ms = 1000;
+       return 0;
+}
+
+static struct dmi_system_id ac_dmi_table[] = {
+       {
+       .callback = thinkpad_e530_quirk,
+       .ident = "thinkpad e530",
+       .matches = {
+               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+               DMI_MATCH(DMI_PRODUCT_NAME, "32597CG"),
+               },
+       },
+       {},
+};

 static int acpi_ac_add(struct acpi_device *device)
 {
···
                kfree(ac);
        }

+       dmi_check_system(ac_dmi_table);
        return result;
 }
+1 -3
drivers/acpi/ec.c
···
 static int ec_poll(struct acpi_ec *ec)
 {
        unsigned long flags;
-       int repeat = 2; /* number of command restarts */
+       int repeat = 5; /* number of command restarts */
        while (repeat--) {
                unsigned long delay = jiffies +
                        msecs_to_jiffies(ec_delay);
···
                        }
                        advance_transaction(ec, acpi_ec_read_status(ec));
                } while (time_before(jiffies, delay));
-               if (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF)
-                       break;
                pr_debug(PREFIX "controller reset, restart transaction\n");
                spin_lock_irqsave(&ec->lock, flags);
                start_transaction(ec);
···
 int dev_pm_put_subsys_data(struct device *dev)
 {
        struct pm_subsys_data *psd;
-       int ret = 0;
+       int ret = 1;

        spin_lock_irq(&dev->power.lock);

        psd = dev_to_psd(dev);
-       if (!psd) {
-               ret = -EINVAL;
+       if (!psd)
                goto out;
-       }

        if (--psd->refcount == 0) {
                dev->power.subsys_data = NULL;
-               kfree(psd);
-               ret = 1;
+       } else {
+               psd = NULL;
+               ret = 0;
        }

 out:
        spin_unlock_irq(&dev->power.lock);
+       kfree(psd);

        return ret;
 }
+554 -391
drivers/block/rbd.c
···
 #define SECTOR_SHIFT    9
 #define SECTOR_SIZE     (1ULL << SECTOR_SHIFT)

+/*
+ * Increment the given counter and return its updated value.
+ * If the counter is already 0 it will not be incremented.
+ * If the counter is already at its maximum value returns
+ * -EINVAL without updating it.
+ */
+static int atomic_inc_return_safe(atomic_t *v)
+{
+       unsigned int counter;
+
+       counter = (unsigned int)__atomic_add_unless(v, 1, 0);
+       if (counter <= (unsigned int)INT_MAX)
+               return (int)counter;
+
+       atomic_dec(v);
+
+       return -EINVAL;
+}
+
+/* Decrement the counter.  Return the resulting value, or -EINVAL */
+static int atomic_dec_return_safe(atomic_t *v)
+{
+       int counter;
+
+       counter = atomic_dec_return(v);
+       if (counter >= 0)
+               return counter;
+
+       atomic_inc(v);
+
+       return -EINVAL;
+}
+
 #define RBD_DRV_NAME "rbd"
 #define RBD_DRV_NAME_LONG "rbd (rados block device)"
···
  * block device image metadata (in-memory version)
  */
 struct rbd_image_header {
-       /* These four fields never change for a given rbd image */
+       /* These six fields never change for a given rbd image */
        char *object_prefix;
-       u64 features;
        __u8 obj_order;
        __u8 crypt_type;
        __u8 comp_type;
+       u64 stripe_unit;
+       u64 stripe_count;
+       u64 features;           /* Might be changeable someday? */

        /* The remaining fields need to be updated occasionally */
        u64 image_size;
        struct ceph_snap_context *snapc;
-       char *snap_names;
-       u64 *snap_sizes;
-
-       u64 stripe_unit;
-       u64 stripe_count;
+       char *snap_names;       /* format 1 only */
+       u64 *snap_sizes;        /* format 1 only */
 };

 /*
···
                };
        };
        struct page **copyup_pages;
+       u32 copyup_page_count;

        struct ceph_osd_request *osd_req;
···
                struct rbd_obj_request  *obj_request;   /* obj req initiator */
        };
        struct page             **copyup_pages;
+       u32                     copyup_page_count;
        spinlock_t              completion_lock;/* protects next_completion */
        u32                     next_completion;
        rbd_img_callback_t      callback;
···

        struct rbd_spec         *parent_spec;
        u64                     parent_overlap;
+       atomic_t                parent_ref;
        struct rbd_device       *parent;

        /* protects updating the header */
···
                         size_t count);
 static ssize_t rbd_remove(struct bus_type *bus, const char *buf,
                          size_t count);
-static int rbd_dev_image_probe(struct rbd_device *rbd_dev);
+static int rbd_dev_image_probe(struct rbd_device *rbd_dev, bool mapping);
+static void rbd_spec_put(struct rbd_spec *spec);

 static struct bus_attribute rbd_bus_attrs[] = {
        __ATTR(add, S_IWUSR, NULL, rbd_add),
···
 static void rbd_dev_remove_parent(struct rbd_device *rbd_dev);

 static int rbd_dev_refresh(struct rbd_device *rbd_dev);
-static int rbd_dev_v2_refresh(struct rbd_device *rbd_dev);
+static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev);
+static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev);
 static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
                                        u64 snap_id);
 static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
···
 }

 /*
- * Create a new header structure, translate header format from the on-disk
- * header.
+ * Fill an rbd image header with information from the given format 1
+ * on-disk header.
  */
-static int rbd_header_from_disk(struct rbd_image_header *header,
+static int rbd_header_from_disk(struct rbd_device *rbd_dev,
                                struct rbd_image_header_ondisk *ondisk)
 {
+       struct rbd_image_header *header = &rbd_dev->header;
+       bool first_time = header->object_prefix == NULL;
+       struct ceph_snap_context *snapc;
+       char *object_prefix = NULL;
+       char *snap_names = NULL;
+       u64 *snap_sizes = NULL;
        u32 snap_count;
-       size_t len;
        size_t size;
+       int ret = -ENOMEM;
        u32 i;

-       memset(header, 0, sizeof (*header));
+       /* Allocate this now to avoid having to handle failure below */
+
+       if (first_time) {
+               size_t len;
+
+               len = strnlen(ondisk->object_prefix,
+                             sizeof (ondisk->object_prefix));
+               object_prefix = kmalloc(len + 1, GFP_KERNEL);
+               if (!object_prefix)
+                       return -ENOMEM;
+               memcpy(object_prefix, ondisk->object_prefix, len);
+               object_prefix[len] = '\0';
+       }
+
+       /* Allocate the snapshot context and fill it in */

        snap_count = le32_to_cpu(ondisk->snap_count);
-
-       len = strnlen(ondisk->object_prefix, sizeof (ondisk->object_prefix));
-       header->object_prefix = kmalloc(len + 1, GFP_KERNEL);
-       if (!header->object_prefix)
-               return -ENOMEM;
-       memcpy(header->object_prefix, ondisk->object_prefix, len);
-       header->object_prefix[len] = '\0';
-
+       snapc = ceph_create_snap_context(snap_count, GFP_KERNEL);
+       if (!snapc)
+               goto out_err;
+       snapc->seq = le64_to_cpu(ondisk->snap_seq);
        if (snap_count) {
+               struct rbd_image_snap_ondisk *snaps;
                u64 snap_names_len = le64_to_cpu(ondisk->snap_names_len);

-               /* Save a copy of the snapshot names */
+               /* We'll keep a copy of the snapshot names... */

-               if (snap_names_len > (u64) SIZE_MAX)
-                       return -EIO;
-               header->snap_names = kmalloc(snap_names_len, GFP_KERNEL);
-               if (!header->snap_names)
+               if (snap_names_len > (u64)SIZE_MAX)
+                       goto out_2big;
+               snap_names = kmalloc(snap_names_len, GFP_KERNEL);
+               if (!snap_names)
                        goto out_err;
+
+               /* ...as well as the array of their sizes. */
+
+               size = snap_count * sizeof (*header->snap_sizes);
+               snap_sizes = kmalloc(size, GFP_KERNEL);
+               if (!snap_sizes)
+                       goto out_err;
+
                /*
-                * Note that rbd_dev_v1_header_read() guarantees
-                * the ondisk buffer we're working with has
+                * Copy the names, and fill in each snapshot's id
+                * and size.
+                *
+                * Note that rbd_dev_v1_header_info() guarantees the
+                * ondisk buffer we're working with has
                 * snap_names_len bytes beyond the end of the
                 * snapshot id array, this memcpy() is safe.
                 */
-               memcpy(header->snap_names, &ondisk->snaps[snap_count],
-                       snap_names_len);
-
-               /* Record each snapshot's size */
-
-               size = snap_count * sizeof (*header->snap_sizes);
-               header->snap_sizes = kmalloc(size, GFP_KERNEL);
-               if (!header->snap_sizes)
-                       goto out_err;
-               for (i = 0; i < snap_count; i++)
-                       header->snap_sizes[i] =
-                               le64_to_cpu(ondisk->snaps[i].image_size);
-       } else {
-               header->snap_names = NULL;
-               header->snap_sizes = NULL;
+               memcpy(snap_names, &ondisk->snaps[snap_count], snap_names_len);
+               snaps = ondisk->snaps;
+               for (i = 0; i < snap_count; i++) {
+                       snapc->snaps[i] = le64_to_cpu(snaps[i].id);
+                       snap_sizes[i] = le64_to_cpu(snaps[i].image_size);
+               }
        }

-       header->features = 0;   /* No features support in v1 images */
-       header->obj_order = ondisk->options.order;
-       header->crypt_type = ondisk->options.crypt_type;
-       header->comp_type = ondisk->options.comp_type;
+       /* We won't fail any more, fill in the header */

-       /* Allocate and fill in the snapshot context */
+       down_write(&rbd_dev->header_rwsem);
+       if (first_time) {
+               header->object_prefix = object_prefix;
+               header->obj_order = ondisk->options.order;
+               header->crypt_type = ondisk->options.crypt_type;
+               header->comp_type = ondisk->options.comp_type;
+               /* The rest aren't used for format 1 images */
+               header->stripe_unit = 0;
+               header->stripe_count = 0;
+               header->features = 0;
+       } else {
+               ceph_put_snap_context(header->snapc);
+               kfree(header->snap_names);
+               kfree(header->snap_sizes);
+       }
+
+       /* The remaining fields always get updated (when we refresh) */

        header->image_size = le64_to_cpu(ondisk->image_size);
+       header->snapc = snapc;
+       header->snap_names = snap_names;
+       header->snap_sizes = snap_sizes;

-       header->snapc = ceph_create_snap_context(snap_count, GFP_KERNEL);
-       if (!header->snapc)
-               goto out_err;
-       header->snapc->seq = le64_to_cpu(ondisk->snap_seq);
-       for (i = 0; i < snap_count; i++)
-               header->snapc->snaps[i] = le64_to_cpu(ondisk->snaps[i].id);
+       /* Make sure mapping size is consistent with header info */
+
+       if (rbd_dev->spec->snap_id == CEPH_NOSNAP || first_time)
+               if (rbd_dev->mapping.size != header->image_size)
+                       rbd_dev->mapping.size = header->image_size;
+
+       up_write(&rbd_dev->header_rwsem);

        return 0;
-
+out_2big:
+       ret = -EIO;
 out_err:
-       kfree(header->snap_sizes);
-       header->snap_sizes = NULL;
-       kfree(header->snap_names);
-       header->snap_names = NULL;
-       kfree(header->object_prefix);
-       header->object_prefix = NULL;
+       kfree(snap_sizes);
+       kfree(snap_names);
+       ceph_put_snap_context(snapc);
+       kfree(object_prefix);

-       return -ENOMEM;
+       return ret;
 }

 static const char *_rbd_dev_v1_snap_name(struct rbd_device *rbd_dev, u32 which)
···

 static int rbd_dev_mapping_set(struct rbd_device *rbd_dev)
 {
-       const char *snap_name = rbd_dev->spec->snap_name;
-       u64 snap_id;
+       u64 snap_id = rbd_dev->spec->snap_id;
        u64 size = 0;
        u64 features = 0;
        int ret;
-
-       if (strcmp(snap_name, RBD_SNAP_HEAD_NAME)) {
-               snap_id = rbd_snap_id_by_name(rbd_dev, snap_name);
-               if (snap_id == CEPH_NOSNAP)
-                       return -ENOENT;
-       } else {
-               snap_id = CEPH_NOSNAP;
-       }

        ret = rbd_snap_size(rbd_dev, snap_id, &size);
        if (ret)
···
        rbd_dev->mapping.size = size;
        rbd_dev->mapping.features = features;

-       /* If we are mapping a snapshot it must be marked read-only */
-
-       if (snap_id != CEPH_NOSNAP)
-               rbd_dev->mapping.read_only = true;
-
        return 0;
 }
···
 {
        rbd_dev->mapping.size = 0;
        rbd_dev->mapping.features = 0;
-       rbd_dev->mapping.read_only = true;
-}
-
-static void rbd_dev_clear_mapping(struct rbd_device *rbd_dev)
-{
-       rbd_dev->mapping.size = 0;
-       rbd_dev->mapping.features = 0;
-       rbd_dev->mapping.read_only = true;
 }

 static const char *rbd_segment_name(struct rbd_device *rbd_dev, u64 offset)
···
        kref_put(&obj_request->kref, rbd_obj_request_destroy);
 }

-static void rbd_img_request_get(struct rbd_img_request *img_request)
-{
-       dout("%s: img %p (was %d)\n", __func__, img_request,
-               atomic_read(&img_request->kref.refcount));
-       kref_get(&img_request->kref);
-}
-
+static bool img_request_child_test(struct rbd_img_request *img_request);
+static void rbd_parent_request_destroy(struct kref *kref);
 static void rbd_img_request_destroy(struct kref *kref);
 static void rbd_img_request_put(struct rbd_img_request *img_request)
 {
        rbd_assert(img_request != NULL);
        dout("%s: img %p (was %d)\n", __func__, img_request,
             atomic_read(&img_request->kref.refcount));
-       kref_put(&img_request->kref, rbd_img_request_destroy);
+       if (img_request_child_test(img_request))
+               kref_put(&img_request->kref, rbd_parent_request_destroy);
+       else
+               kref_put(&img_request->kref, rbd_img_request_destroy);
 }

 static inline void rbd_img_obj_request_add(struct rbd_img_request *img_request,
···
        smp_mb();
 }

+static void img_request_child_clear(struct rbd_img_request *img_request)
+{
+       clear_bit(IMG_REQ_CHILD, &img_request->flags);
+       smp_mb();
+}
+
 static bool img_request_child_test(struct rbd_img_request *img_request)
 {
        smp_mb();
···
 static void img_request_layered_set(struct rbd_img_request *img_request)
 {
        set_bit(IMG_REQ_LAYERED, &img_request->flags);
+       smp_mb();
+}
+
+static void img_request_layered_clear(struct rbd_img_request *img_request)
+{
+       clear_bit(IMG_REQ_LAYERED, &img_request->flags);
        smp_mb();
 }

···
        kmem_cache_free(rbd_obj_request_cache, obj_request);
 }

+/* It's OK to call this for a device with no parent */
+
+static void rbd_spec_put(struct rbd_spec *spec);
+static void rbd_dev_unparent(struct rbd_device *rbd_dev)
+{
+       rbd_dev_remove_parent(rbd_dev);
+       rbd_spec_put(rbd_dev->parent_spec);
+       rbd_dev->parent_spec = NULL;
+       rbd_dev->parent_overlap = 0;
+}
+
+/*
+ * Parent image reference counting is used to determine when an
+ * image's parent fields can be safely torn down--after there are no
+ * more in-flight requests to the parent image.  When the last
+ * reference is dropped, cleaning them up is safe.
+ */
+static void rbd_dev_parent_put(struct rbd_device *rbd_dev)
+{
+       int counter;
+
+       if (!rbd_dev->parent_spec)
+               return;
+
+       counter = atomic_dec_return_safe(&rbd_dev->parent_ref);
+       if (counter > 0)
+               return;
+
+       /* Last reference; clean up parent data structures */
+
+       if (!counter)
+               rbd_dev_unparent(rbd_dev);
+       else
+               rbd_warn(rbd_dev, "parent reference underflow\n");
+}
+
+/*
+ * If an image has a non-zero parent overlap, get a reference to its
+ * parent.
+ *
+ * We must get the reference before checking for the overlap to
+ * coordinate properly with zeroing the parent overlap in
+ * rbd_dev_v2_parent_info() when an image gets flattened.  We
+ * drop it again if there is no overlap.
+ *
+ * Returns true if the rbd device has a parent with a non-zero
+ * overlap and a reference for it was successfully taken, or
+ * false otherwise.
+ */
+static bool rbd_dev_parent_get(struct rbd_device *rbd_dev)
+{
+       int counter;
+
+       if (!rbd_dev->parent_spec)
+               return false;
+
+       counter = atomic_inc_return_safe(&rbd_dev->parent_ref);
+       if (counter > 0 && rbd_dev->parent_overlap)
+               return true;
+
+       /* Image was flattened, but parent is not yet torn down */
+
+       if (counter < 0)
+               rbd_warn(rbd_dev, "parent reference overflow\n");
+
+       return false;
+}
+
 /*
  * Caller is responsible for filling in the list of object requests
  * that comprises the image request, and the Linux request pointer
···
 static struct rbd_img_request *rbd_img_request_create(
                                        struct rbd_device *rbd_dev,
                                        u64 offset, u64 length,
-                                       bool write_request,
-                                       bool child_request)
+                                       bool write_request)
 {
        struct rbd_img_request *img_request;
···
        } else {
                img_request->snap_id = rbd_dev->spec->snap_id;
        }
-       if (child_request)
-               img_request_child_set(img_request);
-       if (rbd_dev->parent_spec)
+       if (rbd_dev_parent_get(rbd_dev))
                img_request_layered_set(img_request);
        spin_lock_init(&img_request->completion_lock);
        img_request->next_completion = 0;
···
        img_request->obj_request_count = 0;
        INIT_LIST_HEAD(&img_request->obj_requests);
        kref_init(&img_request->kref);
-
-       rbd_img_request_get(img_request);       /* Avoid a warning */
-       rbd_img_request_put(img_request);       /* TEMPORARY */

        dout("%s: rbd_dev %p %s %llu/%llu -> img %p\n", __func__, rbd_dev,
                write_request ? "write" : "read", offset, length,
···
        rbd_img_obj_request_del(img_request, obj_request);
        rbd_assert(img_request->obj_request_count == 0);

+       if (img_request_layered_test(img_request)) {
+               img_request_layered_clear(img_request);
+               rbd_dev_parent_put(img_request->rbd_dev);
+       }
+
        if (img_request_write_test(img_request))
                ceph_put_snap_context(img_request->snapc);

-       if (img_request_child_test(img_request))
-               rbd_obj_request_put(img_request->obj_request);
-
        kmem_cache_free(rbd_img_request_cache, img_request);
+}
+
+static struct rbd_img_request *rbd_parent_request_create(
+                                       struct rbd_obj_request *obj_request,
+                                       u64 img_offset, u64 length)
+{
+       struct rbd_img_request *parent_request;
+       struct rbd_device *rbd_dev;
+
+       rbd_assert(obj_request->img_request);
+       rbd_dev = obj_request->img_request->rbd_dev;
+
+       parent_request = rbd_img_request_create(rbd_dev->parent,
+                                               img_offset, length, false);
+       if (!parent_request)
+               return NULL;
+
+       img_request_child_set(parent_request);
+       rbd_obj_request_get(obj_request);
+       parent_request->obj_request = obj_request;
+
+       return parent_request;
+}
+
+static void rbd_parent_request_destroy(struct kref *kref)
+{
+       struct rbd_img_request *parent_request;
+       struct rbd_obj_request *orig_request;
+
+       parent_request = container_of(kref, struct rbd_img_request, kref);
+       orig_request = parent_request->obj_request;
+
+       parent_request->obj_request = NULL;
+       rbd_obj_request_put(orig_request);
+       img_request_child_clear(parent_request);
+
+       rbd_img_request_destroy(kref);
 }

 static bool rbd_img_obj_end_request(struct rbd_obj_request *obj_request)
···
 {
        struct rbd_img_request *img_request;
        struct rbd_device *rbd_dev;
-       u64 length;
+       struct page **pages;
        u32 page_count;

        rbd_assert(obj_request->type == OBJ_REQUEST_BIO);
···

        rbd_dev = img_request->rbd_dev;
        rbd_assert(rbd_dev);
-       length = (u64)1 << rbd_dev->header.obj_order;
-       page_count = (u32)calc_pages_for(0, length);

-       rbd_assert(obj_request->copyup_pages);
-       ceph_release_page_vector(obj_request->copyup_pages, page_count);
+       pages = obj_request->copyup_pages;
+       rbd_assert(pages != NULL);
        obj_request->copyup_pages = NULL;
+       page_count = obj_request->copyup_page_count;
+       rbd_assert(page_count);
+       obj_request->copyup_page_count = 0;
+       ceph_release_page_vector(pages, page_count);

        /*
         * We want the transfer count to reflect the size of the
···
        struct ceph_osd_client *osdc;
        struct rbd_device *rbd_dev;
        struct page **pages;
-       int result;
-       u64 obj_size;
-       u64 xferred;
+       u32 page_count;
+       int img_result;
+       u64 parent_length;
+       u64 offset;
+       u64 length;

        rbd_assert(img_request_child_test(img_request));
···
        pages = img_request->copyup_pages;
        rbd_assert(pages != NULL);
        img_request->copyup_pages = NULL;
+       page_count = img_request->copyup_page_count;
+       rbd_assert(page_count);
+       img_request->copyup_page_count = 0;

        orig_request = img_request->obj_request;
        rbd_assert(orig_request != NULL);
-       rbd_assert(orig_request->type == OBJ_REQUEST_BIO);
-       result = img_request->result;
-       obj_size = img_request->length;
-       xferred = img_request->xferred;
-
-       rbd_dev = img_request->rbd_dev;
-       rbd_assert(rbd_dev);
-       rbd_assert(obj_size == (u64)1 << rbd_dev->header.obj_order);
-
+       rbd_assert(obj_request_type_valid(orig_request->type));
+       img_result = img_request->result;
+       parent_length = img_request->length;
+       rbd_assert(parent_length == img_request->xferred);
        rbd_img_request_put(img_request);

-       if (result)
+       rbd_assert(orig_request->img_request);
+       rbd_dev = orig_request->img_request->rbd_dev;
+       rbd_assert(rbd_dev);
+
+       /*
+        * If the overlap has become 0 (most likely because the
+        * image has been flattened) we need to free the pages
+        * and re-submit the original write request.
+        */
+       if (!rbd_dev->parent_overlap) {
+               struct ceph_osd_client *osdc;
+
+               ceph_release_page_vector(pages, page_count);
+               osdc = &rbd_dev->rbd_client->client->osdc;
+               img_result = rbd_obj_request_submit(osdc, orig_request);
+               if (!img_result)
+                       return;
+       }
+
+       if (img_result)
                goto out_err;

-       /* Allocate the new copyup osd request for the original request */
-
-       result = -ENOMEM;
-       rbd_assert(!orig_request->osd_req);
+       /*
+        * The original osd request is of no use to use any more.
+        * We need a new one that can hold the two ops in a copyup
+        * request.  Allocate the new copyup osd request for the
+        * original request, and release the old one.
+        */
+       img_result = -ENOMEM;
        osd_req = rbd_osd_req_create_copyup(orig_request);
        if (!osd_req)
                goto out_err;
+       rbd_osd_req_destroy(orig_request->osd_req);
        orig_request->osd_req = osd_req;
        orig_request->copyup_pages = pages;
+       orig_request->copyup_page_count = page_count;

        /* Initialize the copyup op */

        osd_req_op_cls_init(osd_req, 0, CEPH_OSD_OP_CALL, "rbd", "copyup");
-       osd_req_op_cls_request_data_pages(osd_req, 0, pages, obj_size, 0,
+       osd_req_op_cls_request_data_pages(osd_req, 0, pages, parent_length, 0,
                                                false, false);

        /* Then the original write request op */

+       offset = orig_request->offset;
+       length = orig_request->length;
        osd_req_op_extent_init(osd_req, 1, CEPH_OSD_OP_WRITE,
-                                       orig_request->offset,
-                                       orig_request->length, 0, 0);
-       osd_req_op_extent_osd_data_bio(osd_req, 1, orig_request->bio_list,
-                                       orig_request->length);
+                                       offset, length, 0, 0);
+       if (orig_request->type == OBJ_REQUEST_BIO)
+               osd_req_op_extent_osd_data_bio(osd_req, 1,
+                                       orig_request->bio_list, length);
+       else
+               osd_req_op_extent_osd_data_pages(osd_req, 1,
+                                       orig_request->pages, length,
+                                       offset & ~PAGE_MASK, false, false);

        rbd_osd_req_format_write(orig_request);
···

        orig_request->callback = rbd_img_obj_copyup_callback;
        osdc = &rbd_dev->rbd_client->client->osdc;
-       result = rbd_obj_request_submit(osdc, orig_request);
-       if (!result)
+       img_result = rbd_obj_request_submit(osdc, orig_request);
+       if (!img_result)
                return;
 out_err:
        /* Record the error code and complete the request */

-       orig_request->result = result;
+       orig_request->result = img_result;
        orig_request->xferred = 0;
        obj_request_done_set(orig_request);
        rbd_obj_request_complete(orig_request);
···
        int result;

        rbd_assert(obj_request_img_data_test(obj_request));
-       rbd_assert(obj_request->type == OBJ_REQUEST_BIO);
+       rbd_assert(obj_request_type_valid(obj_request->type));

        img_request = obj_request->img_request;
        rbd_assert(img_request != NULL);
        rbd_dev = img_request->rbd_dev;
        rbd_assert(rbd_dev->parent != NULL);
-
-       /*
-        * First things first.  The original osd request is of no
-        * use to use any more, we'll need a new one that can hold
-        * the two ops in a copyup request.  We'll get that later,
-        * but for now we can release the old one.
-        */
-       rbd_osd_req_destroy(obj_request->osd_req);
-       obj_request->osd_req = NULL;

        /*
         * Determine the byte range covered by the object in the
···
        }

        result = -ENOMEM;
-       parent_request = rbd_img_request_create(rbd_dev->parent,
-                                               img_offset, length,
-                                               false, true);
+       parent_request = rbd_parent_request_create(obj_request,
+                                               img_offset, length);
        if (!parent_request)
                goto out_err;
-       rbd_obj_request_get(obj_request);
-       parent_request->obj_request = obj_request;

        result = rbd_img_request_fill(parent_request, OBJ_REQUEST_PAGES, pages);
        if (result)
                goto out_err;
        parent_request->copyup_pages = pages;
+       parent_request->copyup_page_count = page_count;

        parent_request->callback = rbd_img_obj_parent_read_full_callback;
        result =
rbd_img_request_submit(parent_request);···24962314 return 0;2497231524982316 parent_request->copyup_pages = NULL;23172317+ parent_request->copyup_page_count = 0;24992318 parent_request->obj_request = NULL;25002319 rbd_obj_request_put(obj_request);25012320out_err:···25142331static void rbd_img_obj_exists_callback(struct rbd_obj_request *obj_request)25152332{25162333 struct rbd_obj_request *orig_request;23342334+ struct rbd_device *rbd_dev;25172335 int result;2518233625192337 rbd_assert(!obj_request_img_data_test(obj_request));···25372353 obj_request->xferred, obj_request->length);25382354 rbd_obj_request_put(obj_request);2539235525402540- rbd_assert(orig_request);25412541- rbd_assert(orig_request->img_request);23562356+ /*23572357+ * If the overlap has become 0 (most likely because the23582358+ * image has been flattened) we need to free the pages23592359+ * and re-submit the original write request.23602360+ */23612361+ rbd_dev = orig_request->img_request->rbd_dev;23622362+ if (!rbd_dev->parent_overlap) {23632363+ struct ceph_osd_client *osdc;23642364+23652365+ rbd_obj_request_put(orig_request);23662366+ osdc = &rbd_dev->rbd_client->client->osdc;23672367+ result = rbd_obj_request_submit(osdc, orig_request);23682368+ if (!result)23692369+ return;23702370+ }2542237125432372 /*25442373 * Our only purpose here is to determine whether the object···27092512 struct rbd_obj_request *obj_request;27102513 struct rbd_device *rbd_dev;27112514 u64 obj_end;25152515+ u64 img_xferred;25162516+ int img_result;2712251727132518 rbd_assert(img_request_child_test(img_request));2714251925202520+ /* First get what we need from the image request and release it */25212521+27152522 obj_request = img_request->obj_request;25232523+ img_xferred = img_request->xferred;25242524+ img_result = img_request->result;25252525+ rbd_img_request_put(img_request);25262526+25272527+ /*25282528+ * If the overlap has become 0 (most likely because the25292529+ * image has been flattened) we need to re-submit 
the25302530+ * original request.25312531+ */27162532 rbd_assert(obj_request);27172533 rbd_assert(obj_request->img_request);25342534+ rbd_dev = obj_request->img_request->rbd_dev;25352535+ if (!rbd_dev->parent_overlap) {25362536+ struct ceph_osd_client *osdc;2718253727192719- obj_request->result = img_request->result;25382538+ osdc = &rbd_dev->rbd_client->client->osdc;25392539+ img_result = rbd_obj_request_submit(osdc, obj_request);25402540+ if (!img_result)25412541+ return;25422542+ }25432543+25442544+ obj_request->result = img_result;27202545 if (obj_request->result)27212546 goto out;27222547···27512532 */27522533 rbd_assert(obj_request->img_offset < U64_MAX - obj_request->length);27532534 obj_end = obj_request->img_offset + obj_request->length;27542754- rbd_dev = obj_request->img_request->rbd_dev;27552535 if (obj_end > rbd_dev->parent_overlap) {27562536 u64 xferred = 0;27572537···27582540 xferred = rbd_dev->parent_overlap -27592541 obj_request->img_offset;2760254227612761- obj_request->xferred = min(img_request->xferred, xferred);25432543+ obj_request->xferred = min(img_xferred, xferred);27622544 } else {27632763- obj_request->xferred = img_request->xferred;25452545+ obj_request->xferred = img_xferred;27642546 }27652547out:27662766- rbd_img_request_put(img_request);27672548 rbd_img_obj_request_read_callback(obj_request);27682549 rbd_obj_request_complete(obj_request);27692550}2770255127712552static void rbd_img_parent_read(struct rbd_obj_request *obj_request)27722553{27732773- struct rbd_device *rbd_dev;27742554 struct rbd_img_request *img_request;27752555 int result;2776255627772557 rbd_assert(obj_request_img_data_test(obj_request));27782558 rbd_assert(obj_request->img_request != NULL);27792559 rbd_assert(obj_request->result == (s32) -ENOENT);27802780- rbd_assert(obj_request->type == OBJ_REQUEST_BIO);25602560+ rbd_assert(obj_request_type_valid(obj_request->type));2781256127822782- rbd_dev = obj_request->img_request->rbd_dev;27832783- rbd_assert(rbd_dev->parent != 
NULL);27842562 /* rbd_read_finish(obj_request, obj_request->length); */27852785- img_request = rbd_img_request_create(rbd_dev->parent,25632563+ img_request = rbd_parent_request_create(obj_request,27862564 obj_request->img_offset,27872787- obj_request->length,27882788- false, true);25652565+ obj_request->length);27892566 result = -ENOMEM;27902567 if (!img_request)27912568 goto out_err;2792256927932793- rbd_obj_request_get(obj_request);27942794- img_request->obj_request = obj_request;27952795-27962796- result = rbd_img_request_fill(img_request, OBJ_REQUEST_BIO,27972797- obj_request->bio_list);25702570+ if (obj_request->type == OBJ_REQUEST_BIO)25712571+ result = rbd_img_request_fill(img_request, OBJ_REQUEST_BIO,25722572+ obj_request->bio_list);25732573+ else25742574+ result = rbd_img_request_fill(img_request, OBJ_REQUEST_PAGES,25752575+ obj_request->pages);27982576 if (result)27992577 goto out_err;28002578···28402626static void rbd_watch_cb(u64 ver, u64 notify_id, u8 opcode, void *data)28412627{28422628 struct rbd_device *rbd_dev = (struct rbd_device *)data;26292629+ int ret;2843263028442631 if (!rbd_dev)28452632 return;···28482633 dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,28492634 rbd_dev->header_name, (unsigned long long)notify_id,28502635 (unsigned int)opcode);28512851- (void)rbd_dev_refresh(rbd_dev);26362636+ ret = rbd_dev_refresh(rbd_dev);26372637+ if (ret)26382638+ rbd_warn(rbd_dev, ": header refresh error (%d)\n", ret);2852263928532640 rbd_obj_notify_ack(rbd_dev, notify_id);28542641}···28592642 * Request sync osd watch/unwatch. 
The value of "start" determines28602643 * whether a watch request is being initiated or torn down.28612644 */28622862-static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev, int start)26452645+static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev, bool start)28632646{28642647 struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;28652648 struct rbd_obj_request *obj_request;···28932676 rbd_dev->watch_request->osd_req);2894267728952678 osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_WATCH,28962896- rbd_dev->watch_event->cookie, 0, start);26792679+ rbd_dev->watch_event->cookie, 0, start ? 1 : 0);28972680 rbd_osd_req_format_write(obj_request);2898268128992682 ret = rbd_obj_request_submit(osdc, obj_request);···30862869 goto end_request; /* Shouldn't happen */30872870 }3088287128722872+ result = -EIO;28732873+ if (offset + length > rbd_dev->mapping.size) {28742874+ rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)\n",28752875+ offset, length, rbd_dev->mapping.size);28762876+ goto end_request;28772877+ }28782878+30892879 result = -ENOMEM;30902880 img_request = rbd_img_request_create(rbd_dev, offset, length,30913091- write_request, false);28812881+ write_request);30922882 if (!img_request)30932883 goto end_request;30942884···32463022}3247302332483024/*32493249- * Read the complete header for the given rbd device.32503250- *32513251- * Returns a pointer to a dynamically-allocated buffer containing32523252- * the complete and validated header. Caller can pass the address32533253- * of a variable that will be filled in with the version of the32543254- * header object at the time it was read.32553255- *32563256- * Returns a pointer-coded errno if a failure occurs.30253025+ * Read the complete header for the given rbd device. 
On successful30263026+ * return, the rbd_dev->header field will contain up-to-date30273027+ * information about the image.32573028 */32583258-static struct rbd_image_header_ondisk *32593259-rbd_dev_v1_header_read(struct rbd_device *rbd_dev)30293029+static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)32603030{32613031 struct rbd_image_header_ondisk *ondisk = NULL;32623032 u32 snap_count = 0;···32753057 size += names_size;32763058 ondisk = kmalloc(size, GFP_KERNEL);32773059 if (!ondisk)32783278- return ERR_PTR(-ENOMEM);30603060+ return -ENOMEM;3279306132803062 ret = rbd_obj_read_sync(rbd_dev, rbd_dev->header_name,32813063 0, size, ondisk);32823064 if (ret < 0)32833283- goto out_err;30653065+ goto out;32843066 if ((size_t)ret < size) {32853067 ret = -ENXIO;32863068 rbd_warn(rbd_dev, "short header read (want %zd got %d)",32873069 size, ret);32883288- goto out_err;30703070+ goto out;32893071 }32903072 if (!rbd_dev_ondisk_valid(ondisk)) {32913073 ret = -ENXIO;32923074 rbd_warn(rbd_dev, "invalid header");32933293- goto out_err;30753075+ goto out;32943076 }3295307732963078 names_size = le64_to_cpu(ondisk->snap_names_len);···32983080 snap_count = le32_to_cpu(ondisk->snap_count);32993081 } while (snap_count != want_count);3300308233013301- return ondisk;33023302-33033303-out_err:30833083+ ret = rbd_header_from_disk(rbd_dev, ondisk);30843084+out:33043085 kfree(ondisk);33053305-33063306- return ERR_PTR(ret);33073307-}33083308-33093309-/*33103310- * reload the ondisk the header33113311- */33123312-static int rbd_read_header(struct rbd_device *rbd_dev,33133313- struct rbd_image_header *header)33143314-{33153315- struct rbd_image_header_ondisk *ondisk;33163316- int ret;33173317-33183318- ondisk = rbd_dev_v1_header_read(rbd_dev);33193319- if (IS_ERR(ondisk))33203320- return PTR_ERR(ondisk);33213321- ret = rbd_header_from_disk(header, ondisk);33223322- kfree(ondisk);33233323-33243324- return ret;33253325-}33263326-33273327-static void rbd_update_mapping_size(struct 
rbd_device *rbd_dev)33283328-{33293329- if (rbd_dev->spec->snap_id != CEPH_NOSNAP)33303330- return;33313331-33323332- if (rbd_dev->mapping.size != rbd_dev->header.image_size) {33333333- sector_t size;33343334-33353335- rbd_dev->mapping.size = rbd_dev->header.image_size;33363336- size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;33373337- dout("setting size to %llu sectors", (unsigned long long)size);33383338- set_capacity(rbd_dev->disk, size);33393339- }33403340-}33413341-33423342-/*33433343- * only read the first part of the ondisk header, without the snaps info33443344- */33453345-static int rbd_dev_v1_refresh(struct rbd_device *rbd_dev)33463346-{33473347- int ret;33483348- struct rbd_image_header h;33493349-33503350- ret = rbd_read_header(rbd_dev, &h);33513351- if (ret < 0)33523352- return ret;33533353-33543354- down_write(&rbd_dev->header_rwsem);33553355-33563356- /* Update image size, and check for resize of mapped image */33573357- rbd_dev->header.image_size = h.image_size;33583358- rbd_update_mapping_size(rbd_dev);33593359-33603360- /* rbd_dev->header.object_prefix shouldn't change */33613361- kfree(rbd_dev->header.snap_sizes);33623362- kfree(rbd_dev->header.snap_names);33633363- /* osd requests may still refer to snapc */33643364- ceph_put_snap_context(rbd_dev->header.snapc);33653365-33663366- rbd_dev->header.image_size = h.image_size;33673367- rbd_dev->header.snapc = h.snapc;33683368- rbd_dev->header.snap_names = h.snap_names;33693369- rbd_dev->header.snap_sizes = h.snap_sizes;33703370- /* Free the extra copy of the object prefix */33713371- if (strcmp(rbd_dev->header.object_prefix, h.object_prefix))33723372- rbd_warn(rbd_dev, "object prefix changed (ignoring)");33733373- kfree(h.object_prefix);33743374-33753375- up_write(&rbd_dev->header_rwsem);3376308633773087 return ret;33783088}···3326318033273181static int rbd_dev_refresh(struct rbd_device *rbd_dev)33283182{33293329- u64 image_size;31833183+ u64 mapping_size;33303184 int ret;3331318533323186 
rbd_assert(rbd_image_format_valid(rbd_dev->image_format));33333333- image_size = rbd_dev->header.image_size;31873187+ mapping_size = rbd_dev->mapping.size;33343188 mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);33353189 if (rbd_dev->image_format == 1)33363336- ret = rbd_dev_v1_refresh(rbd_dev);31903190+ ret = rbd_dev_v1_header_info(rbd_dev);33373191 else33383338- ret = rbd_dev_v2_refresh(rbd_dev);31923192+ ret = rbd_dev_v2_header_info(rbd_dev);3339319333403194 /* If it's a mapped snapshot, validate its EXISTS flag */3341319533423196 rbd_exists_validate(rbd_dev);33433197 mutex_unlock(&ctl_mutex);33443344- if (ret)33453345- rbd_warn(rbd_dev, "got notification but failed to "33463346- " update snaps: %d\n", ret);33473347- if (image_size != rbd_dev->header.image_size)31983198+ if (mapping_size != rbd_dev->mapping.size) {31993199+ sector_t size;32003200+32013201+ size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;32023202+ dout("setting size to %llu sectors", (unsigned long long)size);32033203+ set_capacity(rbd_dev->disk, size);33483204 revalidate_disk(rbd_dev->disk);32053205+ }3349320633503207 return ret;33513208}···35523403 int ret;3553340435543405 ret = rbd_dev_refresh(rbd_dev);34063406+ if (ret)34073407+ rbd_warn(rbd_dev, ": manual header refresh error (%d)\n", ret);3555340835563409 return ret < 0 ? 
ret : size;35573410}···3652350136533502 spin_lock_init(&rbd_dev->lock);36543503 rbd_dev->flags = 0;35043504+ atomic_set(&rbd_dev->parent_ref, 0);36553505 INIT_LIST_HEAD(&rbd_dev->node);36563506 init_rwsem(&rbd_dev->header_rwsem);36573507···38023650 __le64 snapid;38033651 void *p;38043652 void *end;36533653+ u64 pool_id;38053654 char *image_id;38063655 u64 overlap;38073656 int ret;···38333680 p = reply_buf;38343681 end = reply_buf + ret;38353682 ret = -ERANGE;38363836- ceph_decode_64_safe(&p, end, parent_spec->pool_id, out_err);38373837- if (parent_spec->pool_id == CEPH_NOPOOL)36833683+ ceph_decode_64_safe(&p, end, pool_id, out_err);36843684+ if (pool_id == CEPH_NOPOOL) {36853685+ /*36863686+ * Either the parent never existed, or we have36873687+ * record of it but the image got flattened so it no36883688+ * longer has a parent. When the parent of a36893689+ * layered image disappears we immediately set the36903690+ * overlap to 0. The effect of this is that all new36913691+ * requests will be treated as if the image had no36923692+ * parent.36933693+ */36943694+ if (rbd_dev->parent_overlap) {36953695+ rbd_dev->parent_overlap = 0;36963696+ smp_mb();36973697+ rbd_dev_parent_put(rbd_dev);36983698+ pr_info("%s: clone image has been flattened\n",36993699+ rbd_dev->disk->disk_name);37003700+ }37013701+38383702 goto out; /* No parent? No problem. 
*/37033703+ }3839370438403705 /* The ceph file layout needs to fit pool id in 32 bits */3841370638423707 ret = -EIO;38433843- if (parent_spec->pool_id > (u64)U32_MAX) {37083708+ if (pool_id > (u64)U32_MAX) {38443709 rbd_warn(NULL, "parent pool id too large (%llu > %u)\n",38453845- (unsigned long long)parent_spec->pool_id, U32_MAX);37103710+ (unsigned long long)pool_id, U32_MAX);38463711 goto out_err;38473712 }37133713+ parent_spec->pool_id = pool_id;3848371438493715 image_id = ceph_extract_encoded_string(&p, end, NULL, GFP_KERNEL);38503716 if (IS_ERR(image_id)) {···38743702 ceph_decode_64_safe(&p, end, parent_spec->snap_id, out_err);38753703 ceph_decode_64_safe(&p, end, overlap, out_err);3876370438773877- rbd_dev->parent_overlap = overlap;38783878- rbd_dev->parent_spec = parent_spec;38793879- parent_spec = NULL; /* rbd_dev now owns this */37053705+ if (overlap) {37063706+ rbd_spec_put(rbd_dev->parent_spec);37073707+ rbd_dev->parent_spec = parent_spec;37083708+ parent_spec = NULL; /* rbd_dev now owns this */37093709+ rbd_dev->parent_overlap = overlap;37103710+ } else {37113711+ rbd_warn(rbd_dev, "ignoring parent of clone with overlap 0\n");37123712+ }38803713out:38813714 ret = 0;38823715out_err:···41794002 for (i = 0; i < snap_count; i++)41804003 snapc->snaps[i] = ceph_decode_64(&p);4181400440054005+ ceph_put_snap_context(rbd_dev->header.snapc);41824006 rbd_dev->header.snapc = snapc;4183400741844008 dout(" snap context seq = %llu, snap_count = %u\n",···42314053 return snap_name;42324054}4233405542344234-static int rbd_dev_v2_refresh(struct rbd_device *rbd_dev)40564056+static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)42354057{40584058+ bool first_time = rbd_dev->header.object_prefix == NULL;42364059 int ret;4237406042384061 down_write(&rbd_dev->header_rwsem);4239406240634063+ if (first_time) {40644064+ ret = rbd_dev_v2_header_onetime(rbd_dev);40654065+ if (ret)40664066+ goto out;40674067+ }40684068+40694069+ /*40704070+ * If the image supports layering, 
get the parent info. We40714071+ * need to probe the first time regardless. Thereafter we40724072+ * only need to if there's a parent, to see if it has40734073+ * disappeared due to the mapped image getting flattened.40744074+ */40754075+ if (rbd_dev->header.features & RBD_FEATURE_LAYERING &&40764076+ (first_time || rbd_dev->parent_spec)) {40774077+ bool warn;40784078+40794079+ ret = rbd_dev_v2_parent_info(rbd_dev);40804080+ if (ret)40814081+ goto out;40824082+40834083+ /*40844084+ * Print a warning if this is the initial probe and40854085+ * the image has a parent. Don't print it if the40864086+ * image now being probed is itself a parent. We40874087+ * can tell at this point because we won't know its40884088+ * pool name yet (just its pool id).40894089+ */40904090+ warn = rbd_dev->parent_spec && rbd_dev->spec->pool_name;40914091+ if (first_time && warn)40924092+ rbd_warn(rbd_dev, "WARNING: kernel layering "40934093+ "is EXPERIMENTAL!");40944094+ }40954095+42404096 ret = rbd_dev_v2_image_size(rbd_dev);42414097 if (ret)42424098 goto out;42434243- rbd_update_mapping_size(rbd_dev);40994099+41004100+ if (rbd_dev->spec->snap_id == CEPH_NOSNAP)41014101+ if (rbd_dev->mapping.size != rbd_dev->header.image_size)41024102+ rbd_dev->mapping.size = rbd_dev->header.image_size;4244410342454104 ret = rbd_dev_v2_snap_context(rbd_dev);42464105 dout("rbd_dev_v2_snap_context returned %d\n", ret);42474247- if (ret)42484248- goto out;42494106out:42504107 up_write(&rbd_dev->header_rwsem);42514108···47034490{47044491 struct rbd_image_header *header;4705449247064706- rbd_dev_remove_parent(rbd_dev);47074707- rbd_spec_put(rbd_dev->parent_spec);47084708- rbd_dev->parent_spec = NULL;47094709- rbd_dev->parent_overlap = 0;44934493+ /* Drop parent reference unless it's already been done (or none) */44944494+44954495+ if (rbd_dev->parent_overlap)44964496+ rbd_dev_parent_put(rbd_dev);4710449747114498 /* Free dynamic fields from the header, then zero it out */47124499···47184505 memset(header, 0, 
sizeof (*header));47194506}4720450747214721-static int rbd_dev_v1_probe(struct rbd_device *rbd_dev)45084508+static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev)47224509{47234510 int ret;47244724-47254725- /* Populate rbd image metadata */47264726-47274727- ret = rbd_read_header(rbd_dev, &rbd_dev->header);47284728- if (ret < 0)47294729- goto out_err;47304730-47314731- /* Version 1 images have no parent (no layering) */47324732-47334733- rbd_dev->parent_spec = NULL;47344734- rbd_dev->parent_overlap = 0;47354735-47364736- dout("discovered version 1 image, header name is %s\n",47374737- rbd_dev->header_name);47384738-47394739- return 0;47404740-47414741-out_err:47424742- kfree(rbd_dev->header_name);47434743- rbd_dev->header_name = NULL;47444744- kfree(rbd_dev->spec->image_id);47454745- rbd_dev->spec->image_id = NULL;47464746-47474747- return ret;47484748-}47494749-47504750-static int rbd_dev_v2_probe(struct rbd_device *rbd_dev)47514751-{47524752- int ret;47534753-47544754- ret = rbd_dev_v2_image_size(rbd_dev);47554755- if (ret)47564756- goto out_err;47574757-47584758- /* Get the object prefix (a.k.a. block_name) for the image */4759451147604512 ret = rbd_dev_v2_object_prefix(rbd_dev);47614513 if (ret)47624514 goto out_err;4763451547644764- /* Get the and check features for the image */47654765-45164516+ /*45174517+ * Get the and check features for the image. Currently the45184518+ * features are assumed to never change.45194519+ */47664520 ret = rbd_dev_v2_features(rbd_dev);47674521 if (ret)47684522 goto out_err;47694769-47704770- /* If the image supports layering, get the parent info */47714771-47724772- if (rbd_dev->header.features & RBD_FEATURE_LAYERING) {47734773- ret = rbd_dev_v2_parent_info(rbd_dev);47744774- if (ret)47754775- goto out_err;47764776-47774777- /*47784778- * Don't print a warning for parent images. 
We can47794779- * tell this point because we won't know its pool47804780- * name yet (just its pool id).47814781- */47824782- if (rbd_dev->spec->pool_name)47834783- rbd_warn(rbd_dev, "WARNING: kernel layering "47844784- "is EXPERIMENTAL!");47854785- }4786452347874524 /* If the image supports fancy striping, get its parameters */47884525···47414578 if (ret < 0)47424579 goto out_err;47434580 }47444744-47454745- /* crypto and compression type aren't (yet) supported for v2 images */47464746-47474747- rbd_dev->header.crypt_type = 0;47484748- rbd_dev->header.comp_type = 0;47494749-47504750- /* Get the snapshot context, plus the header version */47514751-47524752- ret = rbd_dev_v2_snap_context(rbd_dev);47534753- if (ret)47544754- goto out_err;47554755-47564756- dout("discovered version 2 image, header name is %s\n",47574757- rbd_dev->header_name);45814581+ /* No support for crypto and compression type format 2 images */4758458247594583 return 0;47604584out_err:47614761- rbd_dev->parent_overlap = 0;47624762- rbd_spec_put(rbd_dev->parent_spec);47634763- rbd_dev->parent_spec = NULL;47644764- kfree(rbd_dev->header_name);47654765- rbd_dev->header_name = NULL;45854585+ rbd_dev->header.features = 0;47664586 kfree(rbd_dev->header.object_prefix);47674587 rbd_dev->header.object_prefix = NULL;47684588···47744628 if (!parent)47754629 goto out_err;4776463047774777- ret = rbd_dev_image_probe(parent);46314631+ ret = rbd_dev_image_probe(parent, false);47784632 if (ret < 0)47794633 goto out_err;47804634 rbd_dev->parent = parent;46354635+ atomic_set(&rbd_dev->parent_ref, 1);4781463647824637 return 0;47834638out_err:47844639 if (parent) {47854785- rbd_spec_put(rbd_dev->parent_spec);46404640+ rbd_dev_unparent(rbd_dev);47864641 kfree(rbd_dev->header_name);47874642 rbd_dev_destroy(parent);47884643 } else {···47974650static int rbd_dev_device_setup(struct rbd_device *rbd_dev)47984651{47994652 int ret;48004800-48014801- ret = rbd_dev_mapping_set(rbd_dev);48024802- if (ret)48034803- return 
ret;4804465348054654 /* generate unique id: find highest unique id, add one */48064655 rbd_dev_id_get(rbd_dev);···48194676 if (ret)48204677 goto err_out_blkdev;4821467848224822- ret = rbd_bus_add_dev(rbd_dev);46794679+ ret = rbd_dev_mapping_set(rbd_dev);48234680 if (ret)48244681 goto err_out_disk;46824682+ set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);46834683+46844684+ ret = rbd_bus_add_dev(rbd_dev);46854685+ if (ret)46864686+ goto err_out_mapping;4825468748264688 /* Everything's ready. Announce the disk to the world. */4827468948284828- set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);48294690 set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);48304691 add_disk(rbd_dev->disk);48314692···4838469148394692 return ret;4840469346944694+err_out_mapping:46954695+ rbd_dev_mapping_clear(rbd_dev);48414696err_out_disk:48424697 rbd_free_disk(rbd_dev);48434698err_out_blkdev:···4880473148814732static void rbd_dev_image_release(struct rbd_device *rbd_dev)48824733{48834883- int ret;48844884-48854734 rbd_dev_unprobe(rbd_dev);48864886- ret = rbd_dev_header_watch_sync(rbd_dev, 0);48874887- if (ret)48884888- rbd_warn(rbd_dev, "failed to cancel watch event (%d)\n", ret);48894735 kfree(rbd_dev->header_name);48904736 rbd_dev->header_name = NULL;48914737 rbd_dev->image_format = 0;···4892474848934749/*48944750 * Probe for the existence of the header object for the given rbd48954895- * device. For format 2 images this includes determining the image48964896- * id.47514751+ * device. 
If this image is the one being mapped (i.e., not a47524752+ * parent), initiate a watch on its header object before using that47534753+ * object to get detailed information about the rbd image.48974754 */48984898-static int rbd_dev_image_probe(struct rbd_device *rbd_dev)47554755+static int rbd_dev_image_probe(struct rbd_device *rbd_dev, bool mapping)48994756{49004757 int ret;49014758 int tmp;···49164771 if (ret)49174772 goto err_out_format;4918477349194919- ret = rbd_dev_header_watch_sync(rbd_dev, 1);49204920- if (ret)49214921- goto out_header_name;47744774+ if (mapping) {47754775+ ret = rbd_dev_header_watch_sync(rbd_dev, true);47764776+ if (ret)47774777+ goto out_header_name;47784778+ }4922477949234780 if (rbd_dev->image_format == 1)49244924- ret = rbd_dev_v1_probe(rbd_dev);47814781+ ret = rbd_dev_v1_header_info(rbd_dev);49254782 else49264926- ret = rbd_dev_v2_probe(rbd_dev);47834783+ ret = rbd_dev_v2_header_info(rbd_dev);49274784 if (ret)49284785 goto err_out_watch;49294786···49344787 goto err_out_probe;4935478849364789 ret = rbd_dev_probe_parent(rbd_dev);49374937- if (!ret)49384938- return 0;47904790+ if (ret)47914791+ goto err_out_probe;4939479247934793+ dout("discovered format %u image, header name is %s\n",47944794+ rbd_dev->image_format, rbd_dev->header_name);47954795+47964796+ return 0;49404797err_out_probe:49414798 rbd_dev_unprobe(rbd_dev);49424799err_out_watch:49434943- tmp = rbd_dev_header_watch_sync(rbd_dev, 0);49444944- if (tmp)49454945- rbd_warn(rbd_dev, "unable to tear down watch request\n");48004800+ if (mapping) {48014801+ tmp = rbd_dev_header_watch_sync(rbd_dev, false);48024802+ if (tmp)48034803+ rbd_warn(rbd_dev, "unable to tear down "48044804+ "watch request (%d)\n", tmp);48054805+ }49464806out_header_name:49474807 kfree(rbd_dev->header_name);49484808 rbd_dev->header_name = NULL;···49734819 struct rbd_spec *spec = NULL;49744820 struct rbd_client *rbdc;49754821 struct ceph_osd_client *osdc;48224822+ bool read_only;49764823 int rc = 
-ENOMEM;4977482449784825 if (!try_module_get(THIS_MODULE))···49834828 rc = rbd_add_parse_args(buf, &ceph_opts, &rbd_opts, &spec);49844829 if (rc < 0)49854830 goto err_out_module;48314831+ read_only = rbd_opts->read_only;48324832+ kfree(rbd_opts);48334833+ rbd_opts = NULL; /* done with this */4986483449874835 rbdc = rbd_get_client(ceph_opts);49884836 if (IS_ERR(rbdc)) {···50164858 rbdc = NULL; /* rbd_dev now owns this */50174859 spec = NULL; /* rbd_dev now owns this */5018486050195019- rbd_dev->mapping.read_only = rbd_opts->read_only;50205020- kfree(rbd_opts);50215021- rbd_opts = NULL; /* done with this */50225022-50235023- rc = rbd_dev_image_probe(rbd_dev);48614861+ rc = rbd_dev_image_probe(rbd_dev, true);50244862 if (rc < 0)50254863 goto err_out_rbd_dev;48644864+48654865+ /* If we are mapping a snapshot it must be marked read-only */48664866+48674867+ if (rbd_dev->spec->snap_id != CEPH_NOSNAP)48684868+ read_only = true;48694869+ rbd_dev->mapping.read_only = read_only;5026487050274871 rc = rbd_dev_device_setup(rbd_dev);50284872 if (!rc)···5071491150724912 rbd_free_disk(rbd_dev);50734913 clear_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);50745074- rbd_dev_clear_mapping(rbd_dev);49144914+ rbd_dev_mapping_clear(rbd_dev);50754915 unregister_blkdev(rbd_dev->major, rbd_dev->name);50764916 rbd_dev->major = 0;50774917 rbd_dev_id_put(rbd_dev);···51384978 spin_unlock_irq(&rbd_dev->lock);51394979 if (ret < 0)51404980 goto done;51415141- ret = count;51424981 rbd_bus_del_dev(rbd_dev);49824982+ ret = rbd_dev_header_watch_sync(rbd_dev, false);49834983+ if (ret)49844984+ rbd_warn(rbd_dev, "failed to cancel watch event (%d)\n", ret);51434985 rbd_dev_image_release(rbd_dev);51444986 module_put(THIS_MODULE);49874987+ ret = count;51454988done:51464989 mutex_unlock(&ctl_mutex);51474990
-6
drivers/char/hw_random/mxc-rnga.c
···167167	clk_prepare_enable(mxc_rng->clk);
168168
169169	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
170170-	if (!res) {
171171-		err = -ENOENT;
172172-		goto err_region;
173173-	}
174174-
175170	mxc_rng->mem = devm_ioremap_resource(&pdev->dev, res);
176171	if (IS_ERR(mxc_rng->mem)) {
177172		err = PTR_ERR(mxc_rng->mem);
···184189	return 0;
185190
186191err_ioremap:
187187-err_region:
188192	clk_disable_unprepare(mxc_rng->clk);
189193
190194out:
-5
drivers/char/hw_random/omap-rng.c
···119119	dev_set_drvdata(&pdev->dev, priv);
120120
121121	priv->mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
122122-	if (!priv->mem_res) {
123123-		ret = -ENOENT;
124124-		goto err_ioremap;
125125-	}
126126-
127122	priv->base = devm_ioremap_resource(&pdev->dev, priv->mem_res);
128123	if (IS_ERR(priv->base)) {
129124		ret = PTR_ERR(priv->base);
+2-2
drivers/char/ipmi/ipmi_bt_sm.c
···9595 enum bt_states state;9696 unsigned char seq; /* BT sequence number */9797 struct si_sm_io *io;9898- unsigned char write_data[IPMI_MAX_MSG_LENGTH];9898+ unsigned char write_data[IPMI_MAX_MSG_LENGTH + 2]; /* +2 for memcpy */9999 int write_count;100100- unsigned char read_data[IPMI_MAX_MSG_LENGTH];100100+ unsigned char read_data[IPMI_MAX_MSG_LENGTH + 2]; /* +2 for memcpy */101101 int read_count;102102 int truncated;103103 long timeout; /* microseconds countdown */
···663663 /* We got the flags from the SMI, now handle them. */664664 smi_info->handlers->get_result(smi_info->si_sm, msg, 4);665665 if (msg[2] != 0) {666666- dev_warn(smi_info->dev, "Could not enable interrupts"667667- ", failed get, using polled mode.\n");666666+ dev_warn(smi_info->dev,667667+ "Couldn't get irq info: %x.\n", msg[2]);668668+ dev_warn(smi_info->dev,669669+ "Maybe ok, but ipmi might run very slowly.\n");668670 smi_info->si_state = SI_NORMAL;669671 } else {670672 msg[0] = (IPMI_NETFN_APP_REQUEST << 2);···687685688686 /* We got the flags from the SMI, now handle them. */689687 smi_info->handlers->get_result(smi_info->si_sm, msg, 4);690690- if (msg[2] != 0)691691- dev_warn(smi_info->dev, "Could not enable interrupts"692692- ", failed set, using polled mode.\n");693693- else688688+ if (msg[2] != 0) {689689+ dev_warn(smi_info->dev,690690+ "Couldn't set irq info: %x.\n", msg[2]);691691+ dev_warn(smi_info->dev,692692+ "Maybe ok, but ipmi might run very slowly.\n");693693+ } else694694 smi_info->interrupt_disabled = 0;695695 smi_info->si_state = SI_NORMAL;696696 break;
+1-1
drivers/cpufreq/Kconfig
···47474848choice4949 prompt "Default CPUFreq governor"5050- default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA11105050+ default CPU_FREQ_DEFAULT_GOV_USERSPACE if ARM_SA1100_CPUFREQ || ARM_SA1110_CPUFREQ5151 default CPU_FREQ_DEFAULT_GOV_PERFORMANCE5252 help5353 This option sets which CPUFreq governor shall be loaded at
+8-7
drivers/cpufreq/Kconfig.arm
···33#4455config ARM_BIG_LITTLE_CPUFREQ66- tristate77- depends on ARM_CPU_TOPOLOGY66+ tristate "Generic ARM big LITTLE CPUfreq driver"77+ depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK88+ help99+ This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.810911config ARM_DT_BL_CPUFREQ1010- tristate "Generic ARM big LITTLE CPUfreq driver probed via DT"1111- select ARM_BIG_LITTLE_CPUFREQ1212- depends on OF && HAVE_CLK1212+ tristate "Generic probing via DT for ARM big LITTLE CPUfreq driver"1313+ depends on ARM_BIG_LITTLE_CPUFREQ && OF1314 help1414- This enables the Generic CPUfreq driver for ARM big.LITTLE platform.1515- This gets frequency tables from DT.1515+ This enables probing via DT for Generic CPUfreq driver for ARM1616+ big.LITTLE platform. This gets frequency tables from DT.16171718config ARM_EXYNOS_CPUFREQ1819 bool "SAMSUNG EXYNOS SoCs"
+1-6
drivers/cpufreq/arm_big_little.c
···4040static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS];4141static atomic_t cluster_usage[MAX_CLUSTERS] = {ATOMIC_INIT(0), ATOMIC_INIT(0)};42424343-static int cpu_to_cluster(int cpu)4444-{4545- return topology_physical_package_id(cpu);4646-}4747-4843static unsigned int bL_cpufreq_get(unsigned int cpu)4944{5045 u32 cur_cluster = cpu_to_cluster(cpu);···187192188193 cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));189194190190- dev_info(cpu_dev, "CPU %d initialized\n", policy->cpu);195195+ dev_info(cpu_dev, "%s: CPU %d initialized\n", __func__, policy->cpu);191196 return 0;192197}193198
+5
drivers/cpufreq/arm_big_little.h
···3434 int (*init_opp_table)(struct device *cpu_dev);3535};36363737+static inline int cpu_to_cluster(int cpu)3838+{3939+ return topology_physical_package_id(cpu);4040+}4141+3742int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);3843void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops);3944
+5-4
drivers/cpufreq/arm_big_little_dt.c
···66666767 parent = of_find_node_by_path("/cpus");6868 if (!parent) {6969- pr_err("failed to find OF /cpus\n");7070- return -ENOENT;6969+ pr_info("Failed to find OF /cpus. Use CPUFREQ_ETERNAL transition latency\n");7070+ return CPUFREQ_ETERNAL;7171 }72727373 for_each_child_of_node(parent, np) {···7878 of_node_put(np);7979 of_node_put(parent);80808181- return 0;8181+ return transition_latency;8282 }83838484- return -ENODEV;8484+ pr_info("clock-latency isn't found, use CPUFREQ_ETERNAL transition latency\n");8585+ return CPUFREQ_ETERNAL;8586}86878788static struct cpufreq_arm_bL_ops dt_bL_ops = {
+20-7
drivers/cpufreq/cpufreq-cpu0.c
···189189190190 if (!np) {191191 pr_err("failed to find cpu0 node\n");192192- return -ENOENT;192192+ ret = -ENOENT;193193+ goto out_put_parent;193194 }194195195196 cpu_dev = &pdev->dev;196197 cpu_dev->of_node = np;198198+199199+ cpu_reg = devm_regulator_get(cpu_dev, "cpu0");200200+ if (IS_ERR(cpu_reg)) {201201+ /*202202+ * If cpu0 regulator supply node is present, but regulator is203203+ * not yet registered, we should try defering probe.204204+ */205205+ if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) {206206+ dev_err(cpu_dev, "cpu0 regulator not ready, retry\n");207207+ ret = -EPROBE_DEFER;208208+ goto out_put_node;209209+ }210210+ pr_warn("failed to get cpu0 regulator: %ld\n",211211+ PTR_ERR(cpu_reg));212212+ cpu_reg = NULL;213213+ }197214198215 cpu_clk = devm_clk_get(cpu_dev, NULL);199216 if (IS_ERR(cpu_clk)) {200217 ret = PTR_ERR(cpu_clk);201218 pr_err("failed to get cpu0 clock: %d\n", ret);202219 goto out_put_node;203203- }204204-205205- cpu_reg = devm_regulator_get(cpu_dev, "cpu0");206206- if (IS_ERR(cpu_reg)) {207207- pr_warn("failed to get cpu0 regulator\n");208208- cpu_reg = NULL;209220 }210221211222 ret = of_init_opp_table(cpu_dev);···275264 opp_free_cpufreq_table(cpu_dev, &freq_table);276265out_put_node:277266 of_node_put(np);267267+out_put_parent:268268+ of_node_put(parent);278269 return ret;279270}280271
+4-6
drivers/cpufreq/cpufreq.c
···10751075 __func__, cpu_dev->id, cpu);10761076 }1077107710781078+ if ((cpus == 1) && (cpufreq_driver->target))10791079+ __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);10801080+10781081 pr_debug("%s: removing link, cpu: %d\n", __func__, cpu);10791082 cpufreq_cpu_put(data);1080108310811084 /* If cpu is last user of policy, free policy */10821085 if (cpus == 1) {10831083- if (cpufreq_driver->target)10841084- __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);10851085-10861086 lock_policy_rwsem_read(cpu);10871087 kobj = &data->kobj;10881088 cmp = &data->kobj_unregister;···18321832 if (dev) {18331833 switch (action) {18341834 case CPU_ONLINE:18351835- case CPU_ONLINE_FROZEN:18361835 cpufreq_add_dev(dev, NULL);18371836 break;18381837 case CPU_DOWN_PREPARE:18391839- case CPU_DOWN_PREPARE_FROZEN:18381838+ case CPU_UP_CANCELED_FROZEN:18401839 __cpufreq_remove_dev(dev, NULL);18411840 break;18421841 case CPU_DOWN_FAILED:18431843- case CPU_DOWN_FAILED_FROZEN:18441842 cpufreq_add_dev(dev, NULL);18451843 break;18461844 }
+7-4
drivers/cpufreq/cpufreq_governor.c
···255255 if (have_governor_per_policy()) {256256 WARN_ON(dbs_data);257257 } else if (dbs_data) {258258+ dbs_data->usage_count++;258259 policy->governor_data = dbs_data;259260 return 0;260261 }···267266 }268267269268 dbs_data->cdata = cdata;269269+ dbs_data->usage_count = 1;270270 rc = cdata->init(dbs_data);271271 if (rc) {272272 pr_err("%s: POLICY_INIT: init() failed\n", __func__);···296294 set_sampling_rate(dbs_data, max(dbs_data->min_sampling_rate,297295 latency * LATENCY_MULTIPLIER));298296299299- if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {297297+ if ((cdata->governor == GOV_CONSERVATIVE) &&298298+ (!policy->governor->initialized)) {300299 struct cs_ops *cs_ops = dbs_data->cdata->gov_ops;301300302301 cpufreq_register_notifier(cs_ops->notifier_block,···309306310307 return 0;311308 case CPUFREQ_GOV_POLICY_EXIT:312312- if ((policy->governor->initialized == 1) ||313313- have_governor_per_policy()) {309309+ if (!--dbs_data->usage_count) {314310 sysfs_remove_group(get_governor_parent_kobj(policy),315311 get_sysfs_attr(dbs_data));316312317317- if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {313313+ if ((dbs_data->cdata->governor == GOV_CONSERVATIVE) &&314314+ (policy->governor->initialized == 1)) {318315 struct cs_ops *cs_ops = dbs_data->cdata->gov_ops;319316320317 cpufreq_unregister_notifier(cs_ops->notifier_block,
+1
drivers/cpufreq/cpufreq_governor.h
···211211struct dbs_data {212212 struct common_dbs_data *cdata;213213 unsigned int min_sampling_rate;214214+ int usage_count;214215 void *tuners;215216216217 /* dbs_mutex protects dbs_enable in governor start/stop */
···349349350350 switch (action) {351351 case CPU_ONLINE:352352- case CPU_ONLINE_FROZEN:353352 cpufreq_update_policy(cpu);354353 break;355354 case CPU_DOWN_PREPARE:356356- case CPU_DOWN_PREPARE_FROZEN:357355 cpufreq_stats_free_sysfs(cpu);358356 break;359357 case CPU_DEAD:360360- case CPU_DEAD_FROZEN:358358+ cpufreq_stats_free_table(cpu);359359+ break;360360+ case CPU_UP_CANCELED_FROZEN:361361+ cpufreq_stats_free_sysfs(cpu);361362 cpufreq_stats_free_table(cpu);362363 break;363364 }
···171171 priv.dev = &pdev->dev;172172173173 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);174174- if (!res) {175175- dev_err(&pdev->dev, "Cannot get memory resource\n");176176- return -ENODEV;177177- }178174 priv.base = devm_ioremap_resource(&pdev->dev, res);179175 if (IS_ERR(priv.base))180176 return PTR_ERR(priv.base);
-5
drivers/dma/tegra20-apb-dma.c
···12731273 platform_set_drvdata(pdev, tdma);1274127412751275 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);12761276- if (!res) {12771277- dev_err(&pdev->dev, "No mem resource for DMA\n");12781278- return -EINVAL;12791279- }12801280-12811276 tdma->base_addr = devm_ioremap_resource(&pdev->dev, res);12821277 if (IS_ERR(tdma->base_addr))12831278 return PTR_ERR(tdma->base_addr);
-5
drivers/gpio/gpio-mvebu.c
···619619 * per-CPU registers */620620 if (soc_variant == MVEBU_GPIO_SOC_VARIANT_ARMADAXP) {621621 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);622622- if (!res) {623623- dev_err(&pdev->dev, "Cannot get memory resource\n");624624- return -ENODEV;625625- }626626-627622 mvchip->percpu_membase = devm_ioremap_resource(&pdev->dev,628623 res);629624 if (IS_ERR(mvchip->percpu_membase))
-5
drivers/gpio/gpio-tegra.c
···463463 }464464465465 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);466466- if (!res) {467467- dev_err(&pdev->dev, "Missing MEM resource\n");468468- return -ENODEV;469469- }470470-471466 regs = devm_ioremap_resource(&pdev->dev, res);472467 if (IS_ERR(regs))473468 return PTR_ERR(regs);
+5
drivers/gpu/drm/drm_crtc.c
···7878{7979 struct drm_crtc *crtc;80808181+ /* Locking is currently fubar in the panic handler. */8282+ if (oops_in_progress)8383+ return;8484+8185 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)8286 WARN_ON(!mutex_is_locked(&crtc->mutex));8387···250246 else251247 return "unknown";252248}249249+EXPORT_SYMBOL(drm_get_connector_status_name);253250254251/**255252 * drm_mode_object_get - allocate a new modeset identifier
+19-8
drivers/gpu/drm/drm_crtc_helper.c
···121121 connector->helper_private;122122 int count = 0;123123 int mode_flags = 0;124124+ bool verbose_prune = true;124125125126 DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id,126127 drm_get_connector_name(connector));···150149 DRM_DEBUG_KMS("[CONNECTOR:%d:%s] disconnected\n",151150 connector->base.id, drm_get_connector_name(connector));152151 drm_mode_connector_update_edid_property(connector, NULL);152152+ verbose_prune = false;153153 goto prune;154154155155···184182 }185183186184prune:187187- drm_mode_prune_invalid(dev, &connector->modes, true);185185+ drm_mode_prune_invalid(dev, &connector->modes, verbose_prune);188186189187 if (list_empty(&connector->modes))190188 return 0;···10071005 continue;1008100610091007 connector->status = connector->funcs->detect(connector, false);10101010- DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n",10111011- connector->base.id,10121012- drm_get_connector_name(connector),10131013- old_status, connector->status);10141014- if (old_status != connector->status)10081008+ if (old_status != connector->status) {10091009+ const char *old, *new;10101010+10111011+ old = drm_get_connector_status_name(old_status);10121012+ new = drm_get_connector_status_name(connector->status);10131013+10141014+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] "10151015+ "status updated from %s to %s\n",10161016+ connector->base.id,10171017+ drm_get_connector_name(connector),10181018+ old, new);10191019+10151020 changed = true;10211021+ }10161022 }1017102310181024 mutex_unlock(&dev->mode_config.mutex);···10931083 old_status = connector->status;1094108410951085 connector->status = connector->funcs->detect(connector, false);10961096- DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n",10861086+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %s to %s\n",10971087 connector->base.id,10981088 drm_get_connector_name(connector),10991099- old_status, connector->status);10891089+ drm_get_connector_status_name(old_status),10901090+ drm_get_connector_status_name(connector->status));11001091 if (old_status != connector->status)11011092 changed = true;11021093 }
···702702 /* Walk through all bpp values. Luckily they're all nicely spaced with 2703703 * bpc in between. */704704 bpp = min_t(int, 8*3, pipe_config->pipe_bpp);705705+ if (is_edp(intel_dp) && dev_priv->edp.bpp)706706+ bpp = min_t(int, bpp, dev_priv->edp.bpp);707707+705708 for (; bpp >= 6*3; bpp -= 2*3) {706709 mode_rate = intel_dp_link_required(target_clock, bpp);707710···742739 intel_dp->link_bw = bws[clock];743740 intel_dp->lane_count = lane_count;744741 adjusted_mode->clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);742742+ pipe_config->pipe_bpp = bpp;745743 pipe_config->pixel_target_clock = target_clock;746744747745 DRM_DEBUG_KMS("DP link bw %02x lane count %d clock %d bpp %d\n",···754750 intel_link_compute_m_n(bpp, lane_count,755751 target_clock, adjusted_mode->clock,756752 &pipe_config->dp_m_n);757757-758758- /*759759- * XXX: We have a strange regression where using the vbt edp bpp value760760- * for the link bw computation results in black screens, the panel only761761- * works when we do the computation at the usual 24bpp (but still762762- * requires us to use 18bpp). Until that's fully debugged, stay763763- * bug-for-bug compatible with the old code.764764- */765765- if (is_edp(intel_dp) && dev_priv->edp.bpp) {766766- DRM_DEBUG_KMS("clamping display bpc (was %d) to eDP (%d)\n",767767- bpp, dev_priv->edp.bpp);768768- bpp = min_t(int, bpp, dev_priv->edp.bpp);769769- }770770- pipe_config->pipe_bpp = bpp;771753772754 return true;773755}···13791389 ironlake_edp_panel_on(intel_dp);13801390 ironlake_edp_panel_vdd_off(intel_dp, true);13811391 intel_dp_complete_link_train(intel_dp);13921392+ intel_dp_stop_link_train(intel_dp);13821393 ironlake_edp_backlight_on(intel_dp);13831394}13841395···17021711 struct drm_i915_private *dev_priv = dev->dev_private;17031712 enum port port = intel_dig_port->port;17041713 int ret;17051705- uint32_t temp;1706171417071715 if (HAS_DDI(dev)) {17081708- temp = I915_READ(DP_TP_CTL(port));17161716+ uint32_t temp = I915_READ(DP_TP_CTL(port));1709171717101718 if (dp_train_pat & DP_LINK_SCRAMBLING_DISABLE)17111719 temp |= DP_TP_CTL_SCRAMBLE_DISABLE;···17141724 temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;17151725 switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {17161726 case DP_TRAINING_PATTERN_DISABLE:17171717-17181718- if (port != PORT_A) {17191719- temp |= DP_TP_CTL_LINK_TRAIN_IDLE;17201720- I915_WRITE(DP_TP_CTL(port), temp);17211721-17221722- if (wait_for((I915_READ(DP_TP_STATUS(port)) &17231723- DP_TP_STATUS_IDLE_DONE), 1))17241724- DRM_ERROR("Timed out waiting for DP idle patterns\n");17251725-17261726- temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;17271727- }17281728-17291727 temp |= DP_TP_CTL_LINK_TRAIN_NORMAL;1730172817311729 break;···17871809 }1788181017891811 return true;18121812+}18131813+18141814+static void intel_dp_set_idle_link_train(struct intel_dp *intel_dp)18151815+{18161816+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);18171817+ struct drm_device *dev = intel_dig_port->base.base.dev;18181818+ struct drm_i915_private *dev_priv = dev->dev_private;18191819+ enum port port = intel_dig_port->port;18201820+ uint32_t val;18211821+18221822+ if (!HAS_DDI(dev))18231823+ return;18241824+18251825+ val = I915_READ(DP_TP_CTL(port));18261826+ val &= ~DP_TP_CTL_LINK_TRAIN_MASK;18271827+ val |= DP_TP_CTL_LINK_TRAIN_IDLE;18281828+ I915_WRITE(DP_TP_CTL(port), val);18291829+18301830+ /*18311831+ * On PORT_A we can have only eDP in SST mode. There the only reason18321832+ * we need to set idle transmission mode is to work around a HW issue18331833+ * where we enable the pipe while not in idle link-training mode.18341834+ * In this case there is requirement to wait for a minimum number of18351835+ * idle patterns to be sent.18361836+ */18371837+ if (port == PORT_A)18381838+ return;18391839+18401840+ if (wait_for((I915_READ(DP_TP_STATUS(port)) & DP_TP_STATUS_IDLE_DONE),18411841+ 1))18421842+ DRM_ERROR("Timed out waiting for DP idle patterns\n");17901843}1791184417921845/* Enable corresponding port and start training pattern 1 */···19621953 ++tries;19631954 }1964195519561956+ intel_dp_set_idle_link_train(intel_dp);19571957+19581958+ intel_dp->DP = DP;19591959+19651960 if (channel_eq)19661961 DRM_DEBUG_KMS("Channel EQ done. DP Training successful\n");1967196219681968- intel_dp_set_link_train(intel_dp, DP, DP_TRAINING_PATTERN_DISABLE);19631963+}19641964+19651965+void intel_dp_stop_link_train(struct intel_dp *intel_dp)19661966+{19671967+ intel_dp_set_link_train(intel_dp, intel_dp->DP,19681968+ DP_TRAINING_PATTERN_DISABLE);19691969}1970197019711971static void···21822164 drm_get_encoder_name(&intel_encoder->base));21832165 intel_dp_start_link_train(intel_dp);21842166 intel_dp_complete_link_train(intel_dp);21672167+ intel_dp_stop_link_train(intel_dp);21852168 }21862169}21872170
···262262void intel_fbdev_set_suspend(struct drm_device *dev, int state)263263{264264 drm_i915_private_t *dev_priv = dev->dev_private;265265- if (!dev_priv->fbdev)265265+ struct intel_fbdev *ifbdev = dev_priv->fbdev;266266+ struct fb_info *info;267267+268268+ if (!ifbdev)266269 return;267270268268- fb_set_suspend(dev_priv->fbdev->helper.fbdev, state);271271+ info = ifbdev->helper.fbdev;272272+273273+ /* On resume from hibernation: If the object is shmemfs backed, it has274274+ * been restored from swap. If the object is stolen however, it will be275275+ * full of whatever garbage was left in there.276276+ */277277+ if (!state && ifbdev->ifb.obj->stolen)278278+ memset_io(info->screen_base, 0, info->screen_size);279279+280280+ fb_set_suspend(info, state);269281}270282271283MODULE_LICENSE("GPL and additional rights");
+22-22
drivers/gpu/drm/i915/intel_pm.c
···1301130113021302 vlv_update_drain_latency(dev);1303130313041304- if (g4x_compute_wm0(dev, 0,13041304+ if (g4x_compute_wm0(dev, PIPE_A,13051305 &valleyview_wm_info, latency_ns,13061306 &valleyview_cursor_wm_info, latency_ns,13071307 &planea_wm, &cursora_wm))13081308- enabled |= 1;13081308+ enabled |= 1 << PIPE_A;1309130913101310- if (g4x_compute_wm0(dev, 1,13101310+ if (g4x_compute_wm0(dev, PIPE_B,13111311 &valleyview_wm_info, latency_ns,13121312 &valleyview_cursor_wm_info, latency_ns,13131313 &planeb_wm, &cursorb_wm))13141314- enabled |= 2;13141314+ enabled |= 1 << PIPE_B;1315131513161316 if (single_plane_enabled(enabled) &&13171317 g4x_compute_srwm(dev, ffs(enabled) - 1,···13571357 int plane_sr, cursor_sr;13581358 unsigned int enabled = 0;1359135913601360- if (g4x_compute_wm0(dev, 0,13601360+ if (g4x_compute_wm0(dev, PIPE_A,13611361 &g4x_wm_info, latency_ns,13621362 &g4x_cursor_wm_info, latency_ns,13631363 &planea_wm, &cursora_wm))13641364- enabled |= 1;13641364+ enabled |= 1 << PIPE_A;1365136513661366- if (g4x_compute_wm0(dev, 1,13661366+ if (g4x_compute_wm0(dev, PIPE_B,13671367 &g4x_wm_info, latency_ns,13681368 &g4x_cursor_wm_info, latency_ns,13691369 &planeb_wm, &cursorb_wm))13701370- enabled |= 2;13701370+ enabled |= 1 << PIPE_B;1371137113721372 if (single_plane_enabled(enabled) &&13731373 g4x_compute_srwm(dev, ffs(enabled) - 1,···17161716 unsigned int enabled;1717171717181718 enabled = 0;17191719- if (g4x_compute_wm0(dev, 0,17191719+ if (g4x_compute_wm0(dev, PIPE_A,17201720 &ironlake_display_wm_info,17211721 ILK_LP0_PLANE_LATENCY,17221722 &ironlake_cursor_wm_info,···17271727 DRM_DEBUG_KMS("FIFO watermarks For pipe A -"17281728 " plane %d, " "cursor: %d\n",17291729 plane_wm, cursor_wm);17301730- enabled |= 1;17301730+ enabled |= 1 << PIPE_A;17311731 }1732173217331733- if (g4x_compute_wm0(dev, 1,17331733+ if (g4x_compute_wm0(dev, PIPE_B,17341734 &ironlake_display_wm_info,17351735 ILK_LP0_PLANE_LATENCY,17361736 &ironlake_cursor_wm_info,···17411741 DRM_DEBUG_KMS("FIFO watermarks For pipe B -"17421742 " plane %d, cursor: %d\n",17431743 plane_wm, cursor_wm);17441744- enabled |= 2;17441744+ enabled |= 1 << PIPE_B;17451745 }1746174617471747 /*···18011801 unsigned int enabled;1802180218031803 enabled = 0;18041804- if (g4x_compute_wm0(dev, 0,18041804+ if (g4x_compute_wm0(dev, PIPE_A,18051805 &sandybridge_display_wm_info, latency,18061806 &sandybridge_cursor_wm_info, latency,18071807 &plane_wm, &cursor_wm)) {···18121812 DRM_DEBUG_KMS("FIFO watermarks For pipe A -"18131813 " plane %d, " "cursor: %d\n",18141814 plane_wm, cursor_wm);18151815- enabled |= 1;18151815+ enabled |= 1 << PIPE_A;18161816 }1817181718181818- if (g4x_compute_wm0(dev, 1,18181818+ if (g4x_compute_wm0(dev, PIPE_B,18191819 &sandybridge_display_wm_info, latency,18201820 &sandybridge_cursor_wm_info, latency,18211821 &plane_wm, &cursor_wm)) {···18261826 DRM_DEBUG_KMS("FIFO watermarks For pipe B -"18271827 " plane %d, cursor: %d\n",18281828 plane_wm, cursor_wm);18291829- enabled |= 2;18291829+ enabled |= 1 << PIPE_B;18301830 }1831183118321832 /*···19041904 unsigned int enabled;1905190519061906 enabled = 0;19071907- if (g4x_compute_wm0(dev, 0,19071907+ if (g4x_compute_wm0(dev, PIPE_A,19081908 &sandybridge_display_wm_info, latency,19091909 &sandybridge_cursor_wm_info, latency,19101910 &plane_wm, &cursor_wm)) {···19151915 DRM_DEBUG_KMS("FIFO watermarks For pipe A -"19161916 " plane %d, " "cursor: %d\n",19171917 plane_wm, cursor_wm);19181918- enabled |= 1;19181918+ enabled |= 1 << PIPE_A;19191919 }1920192019211921- if (g4x_compute_wm0(dev, 1,19211921+ if (g4x_compute_wm0(dev, PIPE_B,19221922 &sandybridge_display_wm_info, latency,19231923 &sandybridge_cursor_wm_info, latency,19241924 &plane_wm, &cursor_wm)) {···19291929 DRM_DEBUG_KMS("FIFO watermarks For pipe B -"19301930 " plane %d, cursor: %d\n",19311931 plane_wm, cursor_wm);19321932- enabled |= 2;19321932+ enabled |= 1 << PIPE_B;19331933 }1934193419351935- if (g4x_compute_wm0(dev, 2,19351935+ if (g4x_compute_wm0(dev, PIPE_C,19361936 &sandybridge_display_wm_info, latency,19371937 &sandybridge_cursor_wm_info, latency,19381938 &plane_wm, &cursor_wm)) {···19431943 DRM_DEBUG_KMS("FIFO watermarks For pipe C -"19441944 " plane %d, cursor: %d\n",19451945 plane_wm, cursor_wm);19461946- enabled |= 3;19461946+ enabled |= 1 << PIPE_C;19471947 }1948194819491949 /*
+52-38
drivers/gpu/drm/mgag200/mgag200_mode.c
···46464747static inline void mga_wait_vsync(struct mga_device *mdev)4848{4949- unsigned int count = 0;4949+ unsigned long timeout = jiffies + HZ/10;5050 unsigned int status = 0;51515252 do {5353 status = RREG32(MGAREG_Status);5454- count++;5555- } while ((status & 0x08) && (count < 250000));5656- count = 0;5454+ } while ((status & 0x08) && time_before(jiffies, timeout));5555+ timeout = jiffies + HZ/10;5756 status = 0;5857 do {5958 status = RREG32(MGAREG_Status);6060- count++;6161- } while (!(status & 0x08) && (count < 250000));5959+ } while (!(status & 0x08) && time_before(jiffies, timeout));6260}63616462static inline void mga_wait_busy(struct mga_device *mdev)6563{6666- unsigned int count = 0;6464+ unsigned long timeout = jiffies + HZ;6765 unsigned int status = 0;6866 do {6967 status = RREG8(MGAREG_Status + 2);7070- count++;7171- } while ((status & 0x01) && (count < 500000));6868+ } while ((status & 0x01) && time_before(jiffies, timeout));7269}73707471/*···186189 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);187190 tmp = RREG8(DAC_DATA);188191 tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;189189- WREG_DAC(MGA1064_PIX_CLK_CTL_CLK_DIS, tmp);192192+ WREG8(DAC_DATA, tmp);190193191194 WREG8(DAC_INDEX, MGA1064_REMHEADCTL);192195 tmp = RREG8(DAC_DATA);193196 tmp |= MGA1064_REMHEADCTL_CLKDIS;194194- WREG_DAC(MGA1064_REMHEADCTL, tmp);197197+ WREG8(DAC_DATA, tmp);195198196199 /* select PLL Set C */197200 tmp = RREG8(MGAREG_MEM_MISC_READ);···201204 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);202205 tmp = RREG8(DAC_DATA);203206 tmp |= MGA1064_PIX_CLK_CTL_CLK_POW_DOWN | 0x80;204204- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);207207+ WREG8(DAC_DATA, tmp);205208206209 udelay(500);207210···209212 WREG8(DAC_INDEX, MGA1064_VREF_CTL);210213 tmp = RREG8(DAC_DATA);211214 tmp &= ~0x04;212212- WREG_DAC(MGA1064_VREF_CTL, tmp);215215+ WREG8(DAC_DATA, tmp);213216214217 udelay(50);215218···233236 tmp = RREG8(DAC_DATA);234237 tmp &= ~MGA1064_PIX_CLK_CTL_SEL_MSK;235238 tmp |= MGA1064_PIX_CLK_CTL_SEL_PLL;236236- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);239239+ WREG8(DAC_DATA, tmp);237240238241 WREG8(DAC_INDEX, MGA1064_REMHEADCTL);239242 tmp = RREG8(DAC_DATA);240243 tmp &= ~MGA1064_REMHEADCTL_CLKSL_MSK;241244 tmp |= MGA1064_REMHEADCTL_CLKSL_PLL;242242- WREG_DAC(MGA1064_REMHEADCTL, tmp);245245+ WREG8(DAC_DATA, tmp);243246244247 /* reset dotclock rate bit */245248 WREG8(MGAREG_SEQ_INDEX, 1);···250253 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);251254 tmp = RREG8(DAC_DATA);252255 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_DIS;253253- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);256256+ WREG8(DAC_DATA, tmp);254257255258 vcount = RREG8(MGAREG_VCOUNT);256259···315318 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);316319 tmp = RREG8(DAC_DATA);317320 tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;318318- WREG_DAC(MGA1064_PIX_CLK_CTL_CLK_DIS, tmp);321321+ WREG8(DAC_DATA, tmp);319322320323 tmp = RREG8(MGAREG_MEM_MISC_READ);321324 tmp |= 0x3 << 2;···323326324327 WREG8(DAC_INDEX, MGA1064_PIX_PLL_STAT);325328 tmp = RREG8(DAC_DATA);326326- WREG_DAC(MGA1064_PIX_PLL_STAT, tmp & ~0x40);329329+ WREG8(DAC_DATA, tmp & ~0x40);327330328331 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);329332 tmp = RREG8(DAC_DATA);330333 tmp |= MGA1064_PIX_CLK_CTL_CLK_POW_DOWN;331331- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);334334+ WREG8(DAC_DATA, tmp);332335333336 WREG_DAC(MGA1064_EV_PIX_PLLC_M, m);334337 WREG_DAC(MGA1064_EV_PIX_PLLC_N, n);···339342 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);340343 tmp = RREG8(DAC_DATA);341344 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_POW_DOWN;342342- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);345345+ WREG8(DAC_DATA, tmp);343346344347 udelay(500);345348···347350 tmp = RREG8(DAC_DATA);348351 tmp &= ~MGA1064_PIX_CLK_CTL_SEL_MSK;349352 tmp |= MGA1064_PIX_CLK_CTL_SEL_PLL;350350- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);353353+ WREG8(DAC_DATA, tmp);351354352355 WREG8(DAC_INDEX, MGA1064_PIX_PLL_STAT);353356 tmp = RREG8(DAC_DATA);354354- WREG_DAC(MGA1064_PIX_PLL_STAT, tmp | 0x40);357357+ WREG8(DAC_DATA, tmp | 0x40);355358356359 tmp = RREG8(MGAREG_MEM_MISC_READ);357360 tmp |= (0x3 << 2);···360363 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);361364 tmp = RREG8(DAC_DATA);362365 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_DIS;363363- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);366366+ WREG8(DAC_DATA, tmp);364367365368 return 0;366369}···413416 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);414417 tmp = RREG8(DAC_DATA);415418 tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;416416- WREG_DAC(MGA1064_PIX_CLK_CTL_CLK_DIS, tmp);419419+ WREG8(DAC_DATA, tmp);417420418421 tmp = RREG8(MGAREG_MEM_MISC_READ);419422 tmp |= 0x3 << 2;···422425 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);423426 tmp = RREG8(DAC_DATA);424427 tmp |= MGA1064_PIX_CLK_CTL_CLK_POW_DOWN;425425- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);428428+ WREG8(DAC_DATA, tmp);426429427430 udelay(500);428431···436439 tmp = RREG8(DAC_DATA);437440 tmp &= ~MGA1064_PIX_CLK_CTL_SEL_MSK;438441 tmp |= MGA1064_PIX_CLK_CTL_SEL_PLL;439439- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);442442+ WREG8(DAC_DATA, tmp);440443441444 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);442445 tmp = RREG8(DAC_DATA);443446 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_DIS;444447 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_POW_DOWN;445445- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);448448+ WREG8(DAC_DATA, tmp);446449447450 vcount = RREG8(MGAREG_VCOUNT);448451···512515 WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);513516 tmp = RREG8(DAC_DATA);514517 tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;515515- WREG_DAC(MGA1064_PIX_CLK_CTL_CLK_DIS, tmp);518518+ WREG8(DAC_DATA, tmp);516519517520 WREG8(DAC_INDEX, MGA1064_REMHEADCTL);518521 tmp = RREG8(DAC_DATA);519522 tmp |= MGA1064_REMHEADCTL_CLKDIS;520520- WREG_DAC(MGA1064_REMHEADCTL, tmp);523523+ WREG8(DAC_DATA, tmp);521524522525 tmp = RREG8(MGAREG_MEM_MISC_READ);523526 tmp |= (0x3<<2) | 0xc0;···527530 tmp = RREG8(DAC_DATA);528531 tmp &= ~MGA1064_PIX_CLK_CTL_CLK_DIS;529532 tmp |= MGA1064_PIX_CLK_CTL_CLK_POW_DOWN;530530- WREG_DAC(MGA1064_PIX_CLK_CTL, tmp);533533+ WREG8(DAC_DATA, tmp);531534532535 udelay(500);533536···654657 WREG_DAC(MGA1064_GEN_IO_DATA, tmp);655658}656659657657-660660+/*661661+ This is how the framebuffer base address is stored in g200 cards:662662+ * Assume @offset is the gpu_addr variable of the framebuffer object663663+ * Then addr is the number of _pixels_ (not bytes) from the start of664664+ VRAM to the first pixel we want to display. (divided by 2 for 32bit665665+ framebuffers)666666+ * addr is stored in the CRTCEXT0, CRTCC and CRTCD registers667667+ addr<20> -> CRTCEXT0<6>668668+ addr<19-16> -> CRTCEXT0<3-0>669669+ addr<15-8> -> CRTCC<7-0>670670+ addr<7-0> -> CRTCD<7-0>671671+ CRTCEXT0 has to be programmed last to trigger an update and make the672672+ new addr variable take effect.673673+ */658674void mga_set_start_address(struct drm_crtc *crtc, unsigned offset)659675{660676 struct mga_device *mdev = crtc->dev->dev_private;661677 u32 addr;662678 int count;679679+ u8 crtcext0;663680664681 while (RREG8(0x1fda) & 0x08);665682 while (!(RREG8(0x1fda) & 0x08));···681670 count = RREG8(MGAREG_VCOUNT) + 2;682671 while (RREG8(MGAREG_VCOUNT) < count);683672684684- addr = offset >> 2;673673+ WREG8(MGAREG_CRTCEXT_INDEX, 0);674674+ crtcext0 = RREG8(MGAREG_CRTCEXT_DATA);675675+ crtcext0 &= 0xB0;676676+ addr = offset / 8;677677+ /* Can't store addresses any higher than that...678678+ but we also don't have more than 16MB of memory, so it should be fine. */679679+ WARN_ON(addr > 0x1fffff);680680+ crtcext0 |= (!!(addr & (1<<20)))<<6;685681 WREG_CRT(0x0d, (u8)(addr & 0xff));686682 WREG_CRT(0x0c, (u8)(addr >> 8) & 0xff);687687- WREG_CRT(0xaf, (u8)(addr >> 16) & 0xf);683683+ WREG_ECRT(0x0, ((u8)(addr >> 16) & 0xf) | crtcext0);688684}689685690686···847829848830849831 for (i = 0; i < sizeof(dacvalue); i++) {850850- if ((i <= 0x03) ||851851- (i == 0x07) ||852852- (i == 0x0b) ||853853- (i == 0x0f) ||854854- ((i >= 0x13) && (i <= 0x17)) ||832832+ if ((i <= 0x17) ||855833 (i == 0x1b) ||856834 (i == 0x1c) ||857835 ((i >= 0x1f) && (i <= 0x29)) ||
+19-10
drivers/gpu/drm/qxl/qxl_cmd.c
···277277 return 0;278278}279279280280-static int wait_for_io_cmd_user(struct qxl_device *qdev, uint8_t val, long port)280280+static int wait_for_io_cmd_user(struct qxl_device *qdev, uint8_t val, long port, bool intr)281281{282282 int irq_num;283283 long addr = qdev->io_base + port;···285285286286 mutex_lock(&qdev->async_io_mutex);287287 irq_num = atomic_read(&qdev->irq_received_io_cmd);288288-289289-290288 if (qdev->last_sent_io_cmd > irq_num) {291291- ret = wait_event_interruptible(qdev->io_cmd_event,292292- atomic_read(&qdev->irq_received_io_cmd) > irq_num);293293- if (ret)289289+ if (intr)290290+ ret = wait_event_interruptible_timeout(qdev->io_cmd_event,291291+ atomic_read(&qdev->irq_received_io_cmd) > irq_num, 5*HZ);292292+ else293293+ ret = wait_event_timeout(qdev->io_cmd_event,294294+ atomic_read(&qdev->irq_received_io_cmd) > irq_num, 5*HZ);295295+ /* 0 is timeout, just bail the "hw" has gone away */296296+ if (ret <= 0)294297 goto out;295298 irq_num = atomic_read(&qdev->irq_received_io_cmd);296299 }297300 outb(val, addr);298301 qdev->last_sent_io_cmd = irq_num + 1;299299- ret = wait_event_interruptible(qdev->io_cmd_event,300300- atomic_read(&qdev->irq_received_io_cmd) > irq_num);302302+ if (intr)303303+ ret = wait_event_interruptible_timeout(qdev->io_cmd_event,304304+ atomic_read(&qdev->irq_received_io_cmd) > irq_num, 5*HZ);305305+ else306306+ ret = wait_event_timeout(qdev->io_cmd_event,307307+ atomic_read(&qdev->irq_received_io_cmd) > irq_num, 5*HZ);301308out:309309+ if (ret > 0)310310+ ret = 0;302311 mutex_unlock(&qdev->async_io_mutex);303312 return ret;304313}···317308 int ret;318309319310restart:320320- ret = wait_for_io_cmd_user(qdev, val, port);311311+ ret = wait_for_io_cmd_user(qdev, val, port, false);321312 if (ret == -ERESTARTSYS)322313 goto restart;323314}···349340 mutex_lock(&qdev->update_area_mutex);350341 qdev->ram_header->update_area = *area;351342 qdev->ram_header->update_surface = surface_id;352352- ret = wait_for_io_cmd_user(qdev, 0, QXL_IO_UPDATE_AREA_ASYNC);343343+ ret = wait_for_io_cmd_user(qdev, 0, QXL_IO_UPDATE_AREA_ASYNC, true);353344 mutex_unlock(&qdev->update_area_mutex);354345 return ret;355346}
···255255 struct qxl_gem gem;256256 struct qxl_mode_info mode_info;257257258258- /*259259- * last created framebuffer with fb_create260260- * only used by debugfs dumbppm261261- */262262- struct qxl_framebuffer *active_user_framebuffer;263263-264258 struct fb_info *fbdev_info;265259 struct qxl_framebuffer *fbdev_qfb;266260 void *ram_physical;···264270 struct qxl_ring *cursor_ring;265271266272 struct qxl_ram_header *ram_header;267267- bool mode_set;268273269274 bool primary_created;270275
+1
drivers/gpu/drm/qxl/qxl_ioctl.c
···294294 goto out;295295296296 if (!qobj->pin_count) {297297+ qxl_ttm_placement_from_domain(qobj, qobj->type);297298 ret = ttm_bo_validate(&qobj->tbo, &qobj->placement,298299 true, false);299300 if (unlikely(ret))
+1-1
drivers/gpu/drm/radeon/r300_cmdbuf.c
···7575 OUT_RING(CP_PACKET0(R300_RE_CLIPRECT_TL_0, nr * 2 - 1));76767777 for (i = 0; i < nr; ++i) {7878- if (DRM_COPY_FROM_USER_UNCHECKED7878+ if (DRM_COPY_FROM_USER7979 (&box, &cmdbuf->boxes[n + i], sizeof(box))) {8080 DRM_ERROR("copy cliprect faulted\n");8181 return -EFAULT;
+11-1
drivers/gpu/drm/radeon/radeon_drv.c
···147147#endif148148149149int radeon_no_wb;150150-int radeon_modeset = 1;150150+int radeon_modeset = -1;151151int radeon_dynclks = -1;152152int radeon_r4xx_atom = 0;153153int radeon_agpmode = 0;···456456457457static int __init radeon_init(void)458458{459459+#ifdef CONFIG_VGA_CONSOLE460460+ if (vgacon_text_force() && radeon_modeset == -1) {461461+ DRM_INFO("VGACON disable radeon kernel modesetting.\n");462462+ radeon_modeset = 0;463463+ }464464+#endif465465+ /* set to modesetting by default if not nomodeset */466466+ if (radeon_modeset == -1)467467+ radeon_modeset = 1;468468+459469 if (radeon_modeset == 1) {460470 DRM_INFO("radeon kernel modesetting enabled.\n");461471 driver = &kms_driver;
-5
drivers/gpu/host1x/drm/dc.c
···11281128 return err;1129112911301130 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);11311131- if (!regs) {11321132- dev_err(&pdev->dev, "failed to get registers\n");11331133- return -ENXIO;11341134- }11351135-11361131 dc->regs = devm_ioremap_resource(&pdev->dev, regs);11371132 if (IS_ERR(dc->regs))11381133 return PTR_ERR(dc->regs);
+10-6
drivers/hwmon/abituguru.c
···14141414 pr_info("found Abit uGuru\n");1415141514161416 /* Register sysfs hooks */14171417- for (i = 0; i < sysfs_attr_i; i++)14181418- if (device_create_file(&pdev->dev,14191419- &data->sysfs_attr[i].dev_attr))14171417+ for (i = 0; i < sysfs_attr_i; i++) {14181418+ res = device_create_file(&pdev->dev,14191419+ &data->sysfs_attr[i].dev_attr);14201420+ if (res)14201421 goto abituguru_probe_error;14211421- for (i = 0; i < ARRAY_SIZE(abituguru_sysfs_attr); i++)14221422- if (device_create_file(&pdev->dev,14231423- &abituguru_sysfs_attr[i].dev_attr))14221422+ }14231423+ for (i = 0; i < ARRAY_SIZE(abituguru_sysfs_attr); i++) {14241424+ res = device_create_file(&pdev->dev,14251425+ &abituguru_sysfs_attr[i].dev_attr);14261426+ if (res)14241427 goto abituguru_probe_error;14281428+ }1425142914261430 data->hwmon_dev = hwmon_device_register(&pdev->dev);14271431 if (!IS_ERR(data->hwmon_dev))
+5-3
drivers/hwmon/iio_hwmon.c
···8484 return PTR_ERR(channels);85858686 st = devm_kzalloc(dev, sizeof(*st), GFP_KERNEL);8787- if (st == NULL)8888- return -ENOMEM;8787+ if (st == NULL) {8888+ ret = -ENOMEM;8989+ goto error_release_channels;9090+ }89919092 st->channels = channels;9193···161159error_remove_group:162160 sysfs_remove_group(&dev->kobj, &st->attr_group);163161error_release_channels:164164- iio_channel_release_all(st->channels);162162+ iio_channel_release_all(channels);165163 return ret;166164}167165
drivers/i2c/busses/i2c-s3c2410.c
···10821082 /* map the registers */1083108310841084 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);10851085- if (res == NULL) {10861086- dev_err(&pdev->dev, "cannot find IO resource\n");10871087- return -ENOENT;10881088- }10891089-10901085 i2c->regs = devm_ioremap_resource(&pdev->dev, res);1091108610921087 if (IS_ERR(i2c->regs))
-6
drivers/i2c/busses/i2c-sirf.c
···303303 adap->class = I2C_CLASS_HWMON;304304305305 mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);306306- if (mem_res == NULL) {307307- dev_err(&pdev->dev, "Unable to get MEM resource\n");308308- err = -EINVAL;309309- goto out;310310- }311311-312306 siic->base = devm_ioremap_resource(&pdev->dev, mem_res);313307 if (IS_ERR(siic->base)) {314308 err = PTR_ERR(siic->base);
-5
drivers/i2c/busses/i2c-tegra.c
···714714 int ret = 0;715715716716 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);717717- if (!res) {718718- dev_err(&pdev->dev, "no mem resource\n");719719- return -EINVAL;720720- }721721-722717 base = devm_ioremap_resource(&pdev->dev, res);723718 if (IS_ERR(base))724719 return PTR_ERR(base);
drivers/md/dm-thin.c
···2188218821892189 *need_commit = false;2190219021912191- metadata_dev_size = get_metadata_dev_size(pool->md_dev);21912191+ metadata_dev_size = get_metadata_dev_size_in_blocks(pool->md_dev);2192219221932193 r = dm_pool_get_metadata_dev_size(pool->pmd, &sb_metadata_dev_size);21942194 if (r) {···21972197 }2198219821992199 if (metadata_dev_size < sb_metadata_dev_size) {22002200- DMERR("metadata device (%llu sectors) too small: expected %llu",22002200+ DMERR("metadata device (%llu blocks) too small: expected %llu",22012201 metadata_dev_size, sb_metadata_dev_size);22022202 return -EINVAL;22032203
-6
drivers/memory/emif.c
···15601560 platform_set_drvdata(pdev, emif);1561156115621562 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);15631563- if (!res) {15641564- dev_err(emif->dev, "%s: error getting memory resource\n",15651565- __func__);15661566- goto error;15671567- }15681568-15691563 emif->base = devm_ioremap_resource(emif->dev, res);15701564 if (IS_ERR(emif->base))15711565 goto error;
-5
drivers/mfd/intel_msic.c
···414414 * the clients via intel_msic_irq_read().415415 */416416 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);417417- if (!res) {418418- dev_err(&pdev->dev, "failed to get SRAM iomem resource\n");419419- return -ENODEV;420420- }421421-422417 msic->irq_base = devm_ioremap_resource(&pdev->dev, res);423418 if (IS_ERR(msic->irq_base))424419 return PTR_ERR(msic->irq_base);
-5
drivers/misc/atmel-ssc.c
···154154 ssc->pdata = (struct atmel_ssc_platform_data *)plat_dat;155155156156 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);157157- if (!regs) {158158- dev_dbg(&pdev->dev, "no mmio resource defined\n");159159- return -ENXIO;160160- }161161-162157 ssc->regs = devm_ioremap_resource(&pdev->dev, regs);163158 if (IS_ERR(ssc->regs))164159 return PTR_ERR(ssc->regs);
+7-2
drivers/mmc/host/mmci.c
···11301130 struct variant_data *variant = host->variant;11311131 u32 pwr = 0;11321132 unsigned long flags;11331133+ int ret;1133113411341135 pm_runtime_get_sync(mmc_dev(mmc));11351136···11621161 break;11631162 case MMC_POWER_ON:11641163 if (!IS_ERR(mmc->supply.vqmmc) &&11651165- !regulator_is_enabled(mmc->supply.vqmmc))11661166- regulator_enable(mmc->supply.vqmmc);11641164+ !regulator_is_enabled(mmc->supply.vqmmc)) {11651165+ ret = regulator_enable(mmc->supply.vqmmc);11661166+ if (ret < 0)11671167+ dev_err(mmc_dev(mmc),11681168+ "failed to enable vqmmc regulator\n");11691169+ }1167117011681171 pwr |= MCI_PWR_ON;11691172 break;
-5
drivers/mtd/nand/lpc32xx_mlc.c
···672672 }673673674674 rc = platform_get_resource(pdev, IORESOURCE_MEM, 0);675675- if (rc == NULL) {676676- dev_err(&pdev->dev, "No memory resource found for device!\r\n");677677- return -ENXIO;678678- }679679-680675 host->io_base = devm_ioremap_resource(&pdev->dev, rc);681676 if (IS_ERR(host->io_base))682677 return PTR_ERR(host->io_base);
+1-1
drivers/net/caif/Kconfig
···43434444config CAIF_VIRTIO4545 tristate "CAIF virtio transport driver"4646- depends on CAIF4646+ depends on CAIF && HAS_DMA4747 select VHOST_RING4848 select VIRTIO4949 select GENERIC_ALLOCATOR
+13-12
drivers/net/ethernet/3com/3c59x.c
···632632 pm_state_valid:1, /* pci_dev->saved_config_space has sane contents */
633633 open:1,
634634 medialock:1,
635635- must_free_region:1, /* Flag: if zero, Cardbus owns the I/O region */
636635 large_frames:1, /* accept large frames */
637636 handling_irq:1; /* private in_irq indicator */
638637 /* {get|set}_wol operations are already serialized by rtnl.
···10111012 if (rc < 0)
10121013 goto out;
10131014
10151015+ rc = pci_request_regions(pdev, DRV_NAME);
10161016+ if (rc < 0) {
10171017+ pci_disable_device(pdev);
10181018+ goto out;
10191019+ }
10201020+
10141021 unit = vortex_cards_found;
10151022
10161023 if (global_use_mmio < 0 && (unit >= MAX_UNITS || use_mmio[unit] < 0)) {
···10321027 if (!ioaddr) /* If mapping fails, fall-back to BAR 0... */
10331028 ioaddr = pci_iomap(pdev, 0, 0);
10341029 if (!ioaddr) {
10301030+ pci_release_regions(pdev);
10351031 pci_disable_device(pdev);
10361032 rc = -ENOMEM;
10371033 goto out;
···10421036 ent->driver_data, unit);
10431037 if (rc < 0) {
10441038 pci_iounmap(pdev, ioaddr);
10391039+ pci_release_regions(pdev);
10451040 pci_disable_device(pdev);
10461041 goto out;
10471042 }
···11851178
11861179 /* PCI-only startup logic */
11871180 if (pdev) {
11881188- /* EISA resources already marked, so only PCI needs to do this here */
11891189- /* Ignore return value, because Cardbus drivers already allocate for us */
11901190- if (request_region(dev->base_addr, vci->io_size, print_name) != NULL)
11911191- vp->must_free_region = 1;
11921192-
11931181 /* enable bus-mastering if necessary */
11941182 if (vci->flags & PCI_USES_MASTER)
11951183 pci_set_master(pdev);
···12221220 &vp->rx_ring_dma);
12231221 retval = -ENOMEM;
12241222 if (!vp->rx_ring)
12251225- goto free_region;
12231223+ goto free_device;
12261224
12271225 vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
12281226 vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
···14861484 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
14871485 vp->rx_ring,
14881486 vp->rx_ring_dma);
14891489-free_region:
14901490- if (vp->must_free_region)
14911491- release_region(dev->base_addr, vci->io_size);
14871487+free_device:
14921488 free_netdev(dev);
14931489 pr_err(PFX "vortex_probe1 fails. Returns %d\n", retval);
14941490out:
···32543254 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
32553255 vp->rx_ring,
32563256 vp->rx_ring_dma);
32573257- if (vp->must_free_region)
32583258- release_region(dev->base_addr, vp->io_size);
32573257+
32583258+ pci_release_regions(pdev);
32593259+
32593260 free_netdev(dev);
32603261}
32613262
+3-2
drivers/net/ethernet/brocade/bna/bnad.c
···3236323632373237 sprintf(bnad->wq_name, "%s_wq_%d", BNAD_NAME, bnad->id);32383238 bnad->work_q = create_singlethread_workqueue(bnad->wq_name);32393239-32403240- if (!bnad->work_q)32393239+ if (!bnad->work_q) {32403240+ iounmap(bnad->bar0);32413241 return -ENOMEM;32423242+ }3242324332433244 return 0;32443245}
+2-1
drivers/net/ethernet/cadence/Kconfig
···22222323config ARM_AT91_ETHER2424 tristate "AT91RM9200 Ethernet support"2525- depends on GENERIC_HARDIRQS2525+ depends on GENERIC_HARDIRQS && HAS_DMA2626 select NET_CORE2727 select MACB2828 ---help---···31313232config MACB3333 tristate "Cadence MACB/GEM support"3434+ depends on HAS_DMA3435 select PHYLIB3536 ---help---3637 The Cadence MACB ethernet interface is found on many Atmel AT32 and
+1-1
drivers/net/ethernet/calxeda/Kconfig
···11config NET_CALXEDA_XGMAC22 tristate "Calxeda 1G/10G XGMAC Ethernet driver"33- depends on HAS_IOMEM33+ depends on HAS_IOMEM && HAS_DMA44 select CRC3255 help66 This is the driver for the XGMAC Ethernet IP block found on Calxeda
drivers/net/wireless/b43/dma.c
···17281728 sync_descbuffer_for_device(ring, dmaaddr, ring->rx_buffersize);17291729}1730173017311731+void b43_dma_handle_rx_overflow(struct b43_dmaring *ring)17321732+{17331733+ int current_slot, previous_slot;17341734+17351735+ B43_WARN_ON(ring->tx);17361736+17371737+ /* Device has filled all buffers, drop all packets and let TCP17381738+ * decrease speed.17391739+ * Decrement RX index by one will let the device to see all slots17401740+ * as free again17411741+ */17421742+ /*17431743+ *TODO: How to increase rx_drop in mac80211?17441744+ */17451745+ current_slot = ring->ops->get_current_rxslot(ring);17461746+ previous_slot = prev_slot(ring, current_slot);17471747+ ring->ops->set_current_rxslot(ring, previous_slot);17481748+}17491749+17311750void b43_dma_rx(struct b43_dmaring *ring)17321751{17331752 const struct b43_dma_ops *ops = ring->ops;
drivers/pinctrl/pinctrl-abx500.c
···851851852852 if (abx500_pdata)853853 pdata = abx500_pdata->gpio;854854- if (!pdata) {855855- if (np) {856856- const struct of_device_id *match;857854858858- match = of_match_device(abx500_gpio_match, &pdev->dev);859859- if (!match)860860- return -ENODEV;861861- id = (unsigned long)match->data;862862- } else {863863- dev_err(&pdev->dev, "gpio dt and platform data missing\n");864864- return -ENODEV;865865- }855855+ if (!(pdata || np)) {856856+ dev_err(&pdev->dev, "gpio dt and platform data missing\n");857857+ return -ENODEV;866858 }867867-868868- if (platid)869869- id = platid->driver_data;870859871860 pct = devm_kzalloc(&pdev->dev, sizeof(struct abx500_pinctrl),872861 GFP_KERNEL);···870881 pct->chip = abx500gpio_chip;871882 pct->chip.dev = &pdev->dev;872883 pct->chip.base = (np) ? -1 : pdata->gpio_base;884884+885885+ if (platid)886886+ id = platid->driver_data;887887+ else if (np) {888888+ const struct of_device_id *match;889889+890890+ match = of_match_device(abx500_gpio_match, &pdev->dev);891891+ if (match)892892+ id = (unsigned long)match->data;893893+ }873894874895 /* initialize the lock */875896 mutex_init(&pct->lock);···899900 abx500_pinctrl_ab8505_init(&pct->soc);900901 break;901902 default:902902- dev_err(&pdev->dev, "Unsupported pinctrl sub driver (%d)\n",903903- (int) platid->driver_data);903903+ dev_err(&pdev->dev, "Unsupported pinctrl sub driver (%d)\n", id);904904 mutex_destroy(&pct->lock);905905 return -EINVAL;906906 }
-5
drivers/pinctrl/pinctrl-coh901.c
···713713 gpio->dev = &pdev->dev;714714715715 memres = platform_get_resource(pdev, IORESOURCE_MEM, 0);716716- if (!memres) {717717- dev_err(gpio->dev, "could not get GPIO memory resource\n");718718- return -ENODEV;719719- }720720-721716 gpio->base = devm_ioremap_resource(&pdev->dev, memres);722717 if (IS_ERR(gpio->base))723718 return PTR_ERR(gpio->base);
-5
drivers/pinctrl/pinctrl-exynos5440.c
···10001000 }1001100110021002 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);10031003- if (!res) {10041004- dev_err(dev, "cannot find IO resource\n");10051005- return -ENOENT;10061006- }10071007-10081003 priv->reg_base = devm_ioremap_resource(&pdev->dev, res);10091004 if (IS_ERR(priv->reg_base))10101005 return PTR_ERR(priv->reg_base);
+2-1
drivers/pinctrl/pinctrl-lantiq.c
···5252 int i;53535454 for (i = 0; i < num_maps; i++)5555- if (map[i].type == PIN_MAP_TYPE_CONFIGS_PIN)5555+ if (map[i].type == PIN_MAP_TYPE_CONFIGS_PIN ||5656+ map[i].type == PIN_MAP_TYPE_CONFIGS_GROUP)5657 kfree(map[i].data.configs.configs);5758 kfree(map);5859}
-5
drivers/pinctrl/pinctrl-samsung.c
···932932 drvdata->dev = dev;933933934934 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);935935- if (!res) {936936- dev_err(dev, "cannot find IO resource\n");937937- return -ENOENT;938938- }939939-940935 drvdata->virt_base = devm_ioremap_resource(&pdev->dev, res);941936 if (IS_ERR(drvdata->virt_base))942937 return PTR_ERR(drvdata->virt_base);
+2-1
drivers/pinctrl/pinctrl-single.c
···11661166 (*map)->data.mux.function = np->name;1167116711681168 if (pcs->is_pinconf) {11691169- if (pcs_parse_pinconf(pcs, np, function, map))11691169+ res = pcs_parse_pinconf(pcs, np, function, map);11701170+ if (res)11701171 goto free_pingroups;11711172 *num_maps = 2;11721173 } else {
-4
drivers/pinctrl/pinctrl-xway.c
···716716717717 /* get and remap our register range */718718 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);719719- if (!res) {720720- dev_err(&pdev->dev, "Failed to get resource\n");721721- return -ENOENT;722722- }723719 xway_info.membase[0] = devm_ioremap_resource(&pdev->dev, res);724720 if (IS_ERR(xway_info.membase[0]))725721 return PTR_ERR(xway_info.membase[0]);
drivers/pwm/pwm-imx.c
···265265 imx->chip.npwm = 1;266266267267 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);268268- if (r == NULL) {269269- dev_err(&pdev->dev, "no memory resource defined\n");270270- return -ENODEV;271271- }272272-273268 imx->mmio_base = devm_ioremap_resource(&pdev->dev, r);274269 if (IS_ERR(imx->mmio_base))275270 return PTR_ERR(imx->mmio_base);
-5
drivers/pwm/pwm-puv3.c
···117117 return PTR_ERR(puv3->clk);118118119119 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);120120- if (r == NULL) {121121- dev_err(&pdev->dev, "no memory resource defined\n");122122- return -ENODEV;123123- }124124-125120 puv3->base = devm_ioremap_resource(&pdev->dev, r);126121 if (IS_ERR(puv3->base))127122 return PTR_ERR(puv3->base);
-5
drivers/pwm/pwm-pxa.c
···147147 pwm->chip.npwm = (id->driver_data & HAS_SECONDARY_PWM) ? 2 : 1;148148149149 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);150150- if (r == NULL) {151151- dev_err(&pdev->dev, "no memory resource defined\n");152152- return -ENODEV;153153- }154154-155150 pwm->mmio_base = devm_ioremap_resource(&pdev->dev, r);156151 if (IS_ERR(pwm->mmio_base))157152 return PTR_ERR(pwm->mmio_base);
-5
drivers/pwm/pwm-tegra.c
···181181 pwm->dev = &pdev->dev;182182183183 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);184184- if (!r) {185185- dev_err(&pdev->dev, "no memory resources defined\n");186186- return -ENODEV;187187- }188188-189184 pwm->mmio_base = devm_ioremap_resource(&pdev->dev, r);190185 if (IS_ERR(pwm->mmio_base))191186 return PTR_ERR(pwm->mmio_base);
-5
drivers/pwm/pwm-tiecap.c
···240240 pc->chip.npwm = 1;241241242242 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);243243- if (!r) {244244- dev_err(&pdev->dev, "no memory resource defined\n");245245- return -ENODEV;246246- }247247-248243 pc->mmio_base = devm_ioremap_resource(&pdev->dev, r);249244 if (IS_ERR(pc->mmio_base))250245 return PTR_ERR(pc->mmio_base);
-5
drivers/pwm/pwm-tiehrpwm.c
···471471 pc->chip.npwm = NUM_PWM_CHANNEL;472472473473 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);474474- if (!r) {475475- dev_err(&pdev->dev, "no memory resource defined\n");476476- return -ENODEV;477477- }478478-479474 pc->mmio_base = devm_ioremap_resource(&pdev->dev, r);480475 if (IS_ERR(pc->mmio_base))481476 return PTR_ERR(pc->mmio_base);
-5
drivers/pwm/pwm-tipwmss.c
···7070 mutex_init(&info->pwmss_lock);71717272 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);7373- if (!r) {7474- dev_err(&pdev->dev, "no memory resource defined\n");7575- return -ENODEV;7676- }7777-7873 info->mmio_base = devm_ioremap_resource(&pdev->dev, r);7974 if (IS_ERR(info->mmio_base))8075 return PTR_ERR(info->mmio_base);
-5
drivers/pwm/pwm-vt8500.c
···230230 }231231232232 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);233233- if (r == NULL) {234234- dev_err(&pdev->dev, "no memory resource defined\n");235235- return -ENODEV;236236- }237237-238233 chip->base = devm_ioremap_resource(&pdev->dev, r);239234 if (IS_ERR(chip->base))240235 return PTR_ERR(chip->base);
-2
drivers/rtc/Kconfig
···2020config RTC_HCTOSYS2121 bool "Set system time from RTC on startup and resume"2222 default y2323- depends on !ALWAYS_USE_PERSISTENT_CLOCK2423 help2524 If you say yes here, the system time (wall clock) will be set using2625 the value read from a specified RTC device. This is useful to avoid···2829config RTC_SYSTOHC2930 bool "Set the RTC time based on NTP synchronization"3031 default y3131- depends on !ALWAYS_USE_PERSISTENT_CLOCK3232 help3333 If you say yes here, the system time (wall clock) will be stored3434 in the RTC specified by RTC_HCTOSYS_DEVICE approximately every 11
-5
drivers/rtc/rtc-nuc900.c
···234234 return -ENOMEM;235235 }236236 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);237237- if (!res) {238238- dev_err(&pdev->dev, "platform_get_resource failed\n");239239- return -ENXIO;240240- }241241-242237 nuc900_rtc->rtc_reg = devm_ioremap_resource(&pdev->dev, res);243238 if (IS_ERR(nuc900_rtc->rtc_reg))244239 return PTR_ERR(nuc900_rtc->rtc_reg);
-5
drivers/rtc/rtc-omap.c
···347347 }348348349349 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);350350- if (!res) {351351- pr_debug("%s: RTC resource data missing\n", pdev->name);352352- return -ENOENT;353353- }354354-355350 rtc_base = devm_ioremap_resource(&pdev->dev, res);356351 if (IS_ERR(rtc_base))357352 return PTR_ERR(rtc_base);
-5
drivers/rtc/rtc-s3c.c
···477477 /* get the memory region */478478479479 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);480480- if (res == NULL) {481481- dev_err(&pdev->dev, "failed to get memory region resource\n");482482- return -ENOENT;483483- }484484-485480 s3c_rtc_base = devm_ioremap_resource(&pdev->dev, res);486481 if (IS_ERR(s3c_rtc_base))487482 return PTR_ERR(s3c_rtc_base);
-6
drivers/rtc/rtc-tegra.c
···322322 return -ENOMEM;323323324324 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);325325- if (!res) {326326- dev_err(&pdev->dev,327327- "Unable to allocate resources for device.\n");328328- return -EBUSY;329329- }330330-331325 info->rtc_base = devm_ioremap_resource(&pdev->dev, res);332326 if (IS_ERR(info->rtc_base))333327 return PTR_ERR(info->rtc_base);
drivers/spi/spi-davinci.c
···784784 },785785 { },786786};787787-MODULE_DEVICE_TABLE(of, davini_spi_of_match);787787+MODULE_DEVICE_TABLE(of, davinci_spi_of_match);788788789789/**790790 * spi_davinci_get_pdata - Get platform data from DTS binding
-5
drivers/spi/spi-tegra20-sflash.c
···489489 tegra_sflash_parse_dt(tsd);490490491491 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);492492- if (!r) {493493- dev_err(&pdev->dev, "No IO memory resource\n");494494- ret = -ENODEV;495495- goto exit_free_master;496496- }497492 tsd->base = devm_ioremap_resource(&pdev->dev, r);498493 if (IS_ERR(tsd->base)) {499494 ret = PTR_ERR(tsd->base);
+6-3
drivers/spi/spi.c
···334334 spi->dev.parent = &master->dev;335335 spi->dev.bus = &spi_bus_type;336336 spi->dev.release = spidev_release;337337- spi->cs_gpio = -EINVAL;337337+ spi->cs_gpio = -ENOENT;338338 device_initialize(&spi->dev);339339 return spi;340340}···10671067 nb = of_gpio_named_count(np, "cs-gpios");10681068 master->num_chipselect = max(nb, (int)master->num_chipselect);1069106910701070- if (nb < 1)10701070+ /* Return error only for an incorrectly formed cs-gpios property */10711071+ if (nb == 0 || nb == -ENOENT)10711072 return 0;10731073+ else if (nb < 0)10741074+ return nb;1072107510731076 cs = devm_kzalloc(&master->dev,10741077 sizeof(int) * master->num_chipselect,···10821079 return -ENOMEM;1083108010841081 for (i = 0; i < master->num_chipselect; i++)10851085- cs[i] = -EINVAL;10821082+ cs[i] = -ENOENT;1086108310871084 for (i = 0; i < nb; i++)10881085 cs[i] = of_get_named_gpio(np, "cs-gpios", i);
-5
drivers/staging/dwc2/platform.c
···102102 }103103104104 res = platform_get_resource(dev, IORESOURCE_MEM, 0);105105- if (!res) {106106- dev_err(&dev->dev, "missing memory base resource\n");107107- return -EINVAL;108108- }109109-110105 hsotg->regs = devm_ioremap_resource(&dev->dev, res);111106 if (IS_ERR(hsotg->regs))112107 return PTR_ERR(hsotg->regs);
-5
drivers/staging/nvec/nvec.c
···800800 }801801802802 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);803803- if (!res) {804804- dev_err(&pdev->dev, "no mem resource?\n");805805- return -ENODEV;806806- }807807-808803 base = devm_ioremap_resource(&pdev->dev, res);809804 if (IS_ERR(base))810805 return PTR_ERR(base);
drivers/target/iscsi/iscsi_target_erl1.c
···823823 /*824824 * CmdSN is greater than the tail of the list.825825 */826826- if (ooo_tail->cmdsn < ooo_cmdsn->cmdsn)826826+ if (iscsi_sna_lt(ooo_tail->cmdsn, ooo_cmdsn->cmdsn))827827 list_add_tail(&ooo_cmdsn->ooo_list,828828 &sess->sess_ooo_cmdsn_list);829829 else {···833833 */834834 list_for_each_entry(ooo_tmp, &sess->sess_ooo_cmdsn_list,835835 ooo_list) {836836- if (ooo_tmp->cmdsn < ooo_cmdsn->cmdsn)836836+ if (iscsi_sna_lt(ooo_tmp->cmdsn, ooo_cmdsn->cmdsn))837837 continue;838838839839+ /* Insert before this entry */839840 list_add(&ooo_cmdsn->ooo_list,840840- &ooo_tmp->ooo_list);841841+ ooo_tmp->ooo_list.prev);841842 break;842843 }843844 }
+4-4
drivers/target/iscsi/iscsi_target_parameters.c
···436436 /*437437 * Extra parameters for ISER from RFC-5046438438 */439439- param = iscsi_set_default_param(pl, RDMAEXTENTIONS, INITIAL_RDMAEXTENTIONS,439439+ param = iscsi_set_default_param(pl, RDMAEXTENSIONS, INITIAL_RDMAEXTENSIONS,440440 PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,441441 TYPERANGE_BOOL_AND, USE_LEADING_ONLY);442442 if (!param)···529529 SET_PSTATE_NEGOTIATE(param);530530 } else if (!strcmp(param->name, OFMARKINT)) {531531 SET_PSTATE_NEGOTIATE(param);532532- } else if (!strcmp(param->name, RDMAEXTENTIONS)) {532532+ } else if (!strcmp(param->name, RDMAEXTENSIONS)) {533533 if (iser == true)534534 SET_PSTATE_NEGOTIATE(param);535535 } else if (!strcmp(param->name, INITIATORRECVDATASEGMENTLENGTH)) {···580580 param->state &= ~PSTATE_NEGOTIATE;581581 else if (!strcmp(param->name, OFMARKINT))582582 param->state &= ~PSTATE_NEGOTIATE;583583- else if (!strcmp(param->name, RDMAEXTENTIONS))583583+ else if (!strcmp(param->name, RDMAEXTENSIONS))584584 param->state &= ~PSTATE_NEGOTIATE;585585 else if (!strcmp(param->name, INITIATORRECVDATASEGMENTLENGTH))586586 param->state &= ~PSTATE_NEGOTIATE;···19771977 ops->SessionType = !strcmp(param->value, DISCOVERY);19781978 pr_debug("SessionType: %s\n",19791979 param->value);19801980- } else if (!strcmp(param->name, RDMAEXTENTIONS)) {19801980+ } else if (!strcmp(param->name, RDMAEXTENSIONS)) {19811981 ops->RDMAExtensions = !strcmp(param->value, YES);19821982 pr_debug("RDMAExtensions: %s\n",19831983 param->value);
+2-2
drivers/target/iscsi/iscsi_target_parameters.h
···9191/*9292 * Parameter names of iSCSI Extentions for RDMA (iSER). See RFC-50469393 */9494-#define RDMAEXTENTIONS "RDMAExtensions"9494+#define RDMAEXTENSIONS "RDMAExtensions"9595#define INITIATORRECVDATASEGMENTLENGTH "InitiatorRecvDataSegmentLength"9696#define TARGETRECVDATASEGMENTLENGTH "TargetRecvDataSegmentLength"9797···142142/*143143 * Initial values for iSER parameters following RFC-5046 Section 6144144 */145145-#define INITIAL_RDMAEXTENTIONS NO145145+#define INITIAL_RDMAEXTENSIONS NO146146#define INITIAL_INITIATORRECVDATASEGMENTLENGTH "262144"147147#define INITIAL_TARGETRECVDATASEGMENTLENGTH "8192"148148
drivers/thermal/armada_thermal.c
···169169 return -ENOMEM;170170171171 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);172172- if (!res) {173173- dev_err(&pdev->dev, "Failed to get platform resource\n");174174- return -ENODEV;175175- }176176-177172 priv->sensor = devm_ioremap_resource(&pdev->dev, res);178173 if (IS_ERR(priv->sensor))179174 return PTR_ERR(priv->sensor);180175181176 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);182182- if (!res) {183183- dev_err(&pdev->dev, "Failed to get platform resource\n");184184- return -ENODEV;185185- }186186-187177 priv->control = devm_ioremap_resource(&pdev->dev, res);188178 if (IS_ERR(priv->control))189179 return PTR_ERR(priv->control);
-4
drivers/thermal/dove_thermal.c
···149149 return PTR_ERR(priv->sensor);150150151151 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);152152- if (!res) {153153- dev_err(&pdev->dev, "Failed to get platform resource\n");154154- return -ENODEV;155155- }156152 priv->control = devm_ioremap_resource(&pdev->dev, res);157153 if (IS_ERR(priv->control))158154 return PTR_ERR(priv->control);
-5
drivers/thermal/exynos_thermal.c
···925925 INIT_WORK(&data->irq_work, exynos_tmu_work);926926927927 data->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);928928- if (!data->mem) {929929- dev_err(&pdev->dev, "Failed to get platform resource\n");930930- return -ENOENT;931931- }932932-933928 data->base = devm_ioremap_resource(&pdev->dev, data->mem);934929 if (IS_ERR(data->base))935930 return PTR_ERR(data->base);
-5
drivers/usb/chipidea/core.c
···370370 }371371372372 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);373373- if (!res) {374374- dev_err(dev, "missing resource\n");375375- return -ENODEV;376376- }377377-378373 base = devm_ioremap_resource(dev, res);379374 if (IS_ERR(base))380375 return PTR_ERR(base);
-10
drivers/usb/gadget/bcm63xx_udc.c
···23342334 }2335233523362336 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);23372337- if (!res) {23382338- dev_err(dev, "error finding USBD resource\n");23392339- return -ENXIO;23402340- }23412341-23422337 udc->usbd_regs = devm_ioremap_resource(dev, res);23432338 if (IS_ERR(udc->usbd_regs))23442339 return PTR_ERR(udc->usbd_regs);2345234023462341 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);23472347- if (!res) {23482348- dev_err(dev, "error finding IUDMA resource\n");23492349- return -ENXIO;23502350- }23512351-23522342 udc->iudma_regs = devm_ioremap_resource(dev, res);23532343 if (IS_ERR(udc->iudma_regs))23542344 return PTR_ERR(udc->iudma_regs);
-6
drivers/usb/host/ohci-nxp.c
···300300 }301301302302 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);303303- if (!res) {304304- dev_err(&pdev->dev, "Failed to get MEM resource\n");305305- ret = -ENOMEM;306306- goto out8;307307- }308308-309303 hcd->regs = devm_ioremap_resource(&pdev->dev, res);310304 if (IS_ERR(hcd->regs)) {311305 ret = PTR_ERR(hcd->regs);
-5
drivers/usb/phy/phy-mv-u3d-usb.c
···278278 }279279280280 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);281281- if (!res) {282282- dev_err(dev, "missing mem resource\n");283283- return -ENODEV;284284- }285285-286281 phy_base = devm_ioremap_resource(dev, res);287282 if (IS_ERR(phy_base))288283 return PTR_ERR(phy_base);
-5
drivers/usb/phy/phy-mxs-usb.c
···130130 int ret;131131132132 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);133133- if (!res) {134134- dev_err(&pdev->dev, "can't get device resources\n");135135- return -ENOENT;136136- }137137-138133 base = devm_ioremap_resource(&pdev->dev, res);139134 if (IS_ERR(base))140135 return PTR_ERR(base);
-5
drivers/usb/phy/phy-samsung-usb2.c
···363363 int ret;364364365365 phy_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);366366- if (!phy_mem) {367367- dev_err(dev, "%s: missing mem resource\n", __func__);368368- return -ENODEV;369369- }370370-371366 phy_base = devm_ioremap_resource(dev, phy_mem);372367 if (IS_ERR(phy_base))373368 return PTR_ERR(phy_base);
-5
drivers/usb/phy/phy-samsung-usb3.c
···239239 int ret;240240241241 phy_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);242242- if (!phy_mem) {243243- dev_err(dev, "%s: missing mem resource\n", __func__);244244- return -ENODEV;245245- }246246-247242 phy_base = devm_ioremap_resource(dev, phy_mem);248243 if (IS_ERR(phy_base))249244 return PTR_ERR(phy_base);
+3
drivers/vhost/vringh.c
···33 *44 * Since these may be in userspace, we use (inline) accessors.55 */66+#include <linux/module.h>67#include <linux/vringh.h>78#include <linux/virtio_ring.h>89#include <linux/kernel.h>···10061005 return __vringh_need_notify(vrh, getu16_kern);10071006}10081007EXPORT_SYMBOL(vringh_need_notify_kern);10081008+10091009+MODULE_LICENSE("GPL");
-4
drivers/video/omap2/dss/hdmi.c
···10651065 mutex_init(&hdmi.ip_data.lock);1066106610671067 res = platform_get_resource(hdmi.pdev, IORESOURCE_MEM, 0);10681068- if (!res) {10691069- DSSERR("can't get IORESOURCE_MEM HDMI\n");10701070- return -EINVAL;10711071- }1072106810731069 /* Base address taken from platform */10741070 hdmi.ip_data.base_wp = devm_ioremap_resource(&pdev->dev, res);
-5
drivers/video/omap2/vrfb.c
···353353 /* first resource is the register res, the rest are vrfb contexts */354354355355 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);356356- if (!mem) {357357- dev_err(&pdev->dev, "can't get vrfb base address\n");358358- return -EINVAL;359359- }360360-361356 vrfb_base = devm_ioremap_resource(&pdev->dev, mem);362357 if (IS_ERR(vrfb_base))363358 return PTR_ERR(vrfb_base);
-5
drivers/w1/masters/omap_hdq.c
···555555 platform_set_drvdata(pdev, hdq_data);556556557557 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);558558- if (!res) {559559- dev_dbg(&pdev->dev, "unable to get resource\n");560560- return -ENXIO;561561- }562562-563558 hdq_data->hdq_base = devm_ioremap_resource(dev, res);564559 if (IS_ERR(hdq_data->hdq_base))565560 return PTR_ERR(hdq_data->hdq_base);
-5
drivers/watchdog/ath79_wdt.c
···248248 return -EBUSY;249249250250 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);251251- if (!res) {252252- dev_err(&pdev->dev, "no memory resource found\n");253253- return -EINVAL;254254- }255255-256251 wdt_base = devm_ioremap_resource(&pdev->dev, res);257252 if (IS_ERR(wdt_base))258253 return PTR_ERR(wdt_base);
-5
drivers/watchdog/davinci_wdt.c
···217217 dev_info(dev, "heartbeat %d sec\n", heartbeat);218218219219 wdt_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);220220- if (wdt_mem == NULL) {221221- dev_err(dev, "failed to get memory region resource\n");222222- return -ENOENT;223223- }224224-225220 wdt_base = devm_ioremap_resource(dev, wdt_mem);226221 if (IS_ERR(wdt_base))227222 return PTR_ERR(wdt_base);
-5
drivers/watchdog/imx2_wdt.c
···257257 struct resource *res;258258259259 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);260260- if (!res) {261261- dev_err(&pdev->dev, "can't get device resources\n");262262- return -ENODEV;263263- }264264-265260 imx2_wdt.base = devm_ioremap_resource(&pdev->dev, res);266261 if (IS_ERR(imx2_wdt.base))267262 return PTR_ERR(imx2_wdt.base);
+3-4
drivers/xen/Kconfig
···1919 by the current usage of anonymous memory ("committed AS") and2020 controlled by various sysfs-settable parameters. Configuring2121 FRONTSWAP is highly recommended; if it is not configured, self-2222- ballooning is disabled by default but can be enabled with the2323- 'selfballooning' kernel boot parameter. If FRONTSWAP is configured,2222+ ballooning is disabled by default. If FRONTSWAP is configured,2423 frontswap-selfshrinking is enabled by default but can be disabled2525- with the 'noselfshrink' kernel boot parameter; and self-ballooning2626- is enabled by default but can be disabled with the 'noselfballooning'2424+ with the 'tmem.selfshrink=0' kernel boot parameter; and self-ballooning2525+ is enabled by default but can be disabled with the 'tmem.selfballooning=0'2726 kernel boot parameter. Note that systems without a sufficiently2827 large swap device should not enable self-ballooning.2928
+2-1
drivers/xen/balloon.c
···407407 nr_pages = ARRAY_SIZE(frame_list);408408409409 for (i = 0; i < nr_pages; i++) {410410- if ((page = alloc_page(gfp)) == NULL) {410410+ page = alloc_page(gfp);411411+ if (page == NULL) {411412 nr_pages = i;412413 state = BP_EAGAIN;413414 break;
+1-1
drivers/xen/privcmd.c
···504504 struct page **pages = vma->vm_private_data;505505 int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;506506507507- if (!xen_feature(XENFEAT_auto_translated_physmap || !numpgs || !pages))507507+ if (!xen_feature(XENFEAT_auto_translated_physmap) || !numpgs || !pages)508508 return;509509510510 xen_unmap_domain_mfn_range(vma, numpgs, pages);
···5353 * System configuration note: Selfballooning should not be enabled on5454 * systems without a sufficiently large swap device configured; for best5555 * results, it is recommended that total swap be increased by the size5656- * of the guest memory. Also, while technically not required to be5757- * configured, it is highly recommended that frontswap also be configured5858- * and enabled when selfballooning is running. So, selfballooning5959- * is disabled by default if frontswap is not configured and can only6060- * be enabled with the "selfballooning" kernel boot option; similarly6161- * selfballooning is enabled by default if frontswap is configured and6262- * can be disabled with the "noselfballooning" kernel boot option. Finally,6363- * when frontswap is configured, frontswap-selfshrinking can be disabled6464- * with the "noselfshrink" kernel boot option.5656+ * of the guest memory. Note, that selfballooning should be disabled by default5757+ * if frontswap is not configured. Similarly selfballooning should be enabled5858+ * by default if frontswap is configured and can be disabled with the5959+ * "tmem.selfballooning=0" kernel boot option. Finally, when frontswap is6060+ * configured, frontswap-selfshrinking can be disabled with the6161+ * "tmem.selfshrink=0" kernel boot option.6562 *6663 * Selfballooning is disallowed in domain0 and force-disabled.6764 *···117120/* Enable/disable with sysfs. */118121static bool frontswap_selfshrinking __read_mostly;119122120120-/* Enable/disable with kernel boot option. 
*/121121-static bool use_frontswap_selfshrink = true;122122-123123/*124124 * The default values for the following parameters were deemed reasonable125125 * by experimentation, may be workload-dependent, and can all be···170176 frontswap_shrink(tgt_frontswap_pages);171177}172178173173-static int __init xen_nofrontswap_selfshrink_setup(char *s)174174-{175175- use_frontswap_selfshrink = false;176176- return 1;177177-}178178-179179-__setup("noselfshrink", xen_nofrontswap_selfshrink_setup);180180-181181-/* Disable with kernel boot option. */182182-static bool use_selfballooning = true;183183-184184-static int __init xen_noselfballooning_setup(char *s)185185-{186186- use_selfballooning = false;187187- return 1;188188-}189189-190190-__setup("noselfballooning", xen_noselfballooning_setup);191191-#else /* !CONFIG_FRONTSWAP */192192-/* Enable with kernel boot option. */193193-static bool use_selfballooning;194194-195195-static int __init xen_selfballooning_setup(char *s)196196-{197197- use_selfballooning = true;198198- return 1;199199-}200200-201201-__setup("selfballooning", xen_selfballooning_setup);202179#endif /* CONFIG_FRONTSWAP */203180204181#define MB2PAGES(mb) ((mb) << (20 - PAGE_SHIFT))
+10-11
drivers/xen/xenbus/xenbus_dev_backend.c
···7070 return err;7171}72727373-static long xenbus_backend_ioctl(struct file *file, unsigned int cmd, unsigned long data)7373+static long xenbus_backend_ioctl(struct file *file, unsigned int cmd,7474+ unsigned long data)7475{7576 if (!capable(CAP_SYS_ADMIN))7677 return -EPERM;77787879 switch (cmd) {7979- case IOCTL_XENBUS_BACKEND_EVTCHN:8080- if (xen_store_evtchn > 0)8181- return xen_store_evtchn;8282- return -ENODEV;8383-8484- case IOCTL_XENBUS_BACKEND_SETUP:8585- return xenbus_alloc(data);8686-8787- default:8888- return -ENOTTY;8080+ case IOCTL_XENBUS_BACKEND_EVTCHN:8181+ if (xen_store_evtchn > 0)8282+ return xen_store_evtchn;8383+ return -ENODEV;8484+ case IOCTL_XENBUS_BACKEND_SETUP:8585+ return xenbus_alloc(data);8686+ default:8787+ return -ENOTTY;8988 }9089}9190
+2-1
fs/btrfs/backref.c
···918918 ref->parent, bsz, 0);919919 if (!eb || !extent_buffer_uptodate(eb)) {920920 free_extent_buffer(eb);921921- return -EIO;921921+ ret = -EIO;922922+ goto out;922923 }923924 ret = find_extent_in_eb(eb, bytenr,924925 *extent_item_pos, &eie);
+1-1
fs/btrfs/check-integrity.c
···17001700 unsigned int j;17011701 DECLARE_COMPLETION_ONSTACK(complete);1702170217031703- bio = bio_alloc(GFP_NOFS, num_pages - i);17031703+ bio = btrfs_io_bio_alloc(GFP_NOFS, num_pages - i);17041704 if (!bio) {17051705 printk(KERN_INFO17061706 "btrfsic: bio_alloc() for %u pages failed!\n",
+3-1
fs/btrfs/ctree.c
···951951 BUG_ON(ret); /* -ENOMEM */952952 }953953 if (new_flags != 0) {954954+ int level = btrfs_header_level(buf);955955+954956 ret = btrfs_set_disk_extent_flags(trans, root,955957 buf->start,956958 buf->len,957957- new_flags, 0);959959+ new_flags, level, 0);958960 if (ret)959961 return ret;960962 }
+4-4
fs/btrfs/ctree.h
···8888/* holds checksums of all the data extents */8989#define BTRFS_CSUM_TREE_OBJECTID 7ULL90909191-/* for storing balance parameters in the root tree */9292-#define BTRFS_BALANCE_OBJECTID -4ULL9393-9491/* holds quota configuration and tracking */9592#define BTRFS_QUOTA_TREE_OBJECTID 8ULL9393+9494+/* for storing balance parameters in the root tree */9595+#define BTRFS_BALANCE_OBJECTID -4ULL96969797/* orhpan objectid for tracking unlinked/truncated files */9898#define BTRFS_ORPHAN_OBJECTID -5ULL···30753075int btrfs_set_disk_extent_flags(struct btrfs_trans_handle *trans,30763076 struct btrfs_root *root,30773077 u64 bytenr, u64 num_bytes, u64 flags,30783078- int is_data);30783078+ int level, int is_data);30793079int btrfs_free_extent(struct btrfs_trans_handle *trans,30803080 struct btrfs_root *root,30813081 u64 bytenr, u64 num_bytes, u64 parent, u64 root_objectid,
+1
fs/btrfs/delayed-ref.h
···6060struct btrfs_delayed_extent_op {6161 struct btrfs_disk_key key;6262 u64 flags_to_set;6363+ int level;6364 unsigned int update_key:1;6465 unsigned int update_flags:1;6566 unsigned int is_data:1;
···20702070 u32 item_size;20712071 int ret;20722072 int err = 0;20732073- int metadata = (node->type == BTRFS_TREE_BLOCK_REF_KEY ||20742074- node->type == BTRFS_SHARED_BLOCK_REF_KEY);20732073+ int metadata = !extent_op->is_data;2075207420762075 if (trans->aborted)20772076 return 0;···20852086 key.objectid = node->bytenr;2086208720872088 if (metadata) {20882088- struct btrfs_delayed_tree_ref *tree_ref;20892089-20902090- tree_ref = btrfs_delayed_node_to_tree_ref(node);20912089 key.type = BTRFS_METADATA_ITEM_KEY;20922092- key.offset = tree_ref->level;20902090+ key.offset = extent_op->level;20932091 } else {20942092 key.type = BTRFS_EXTENT_ITEM_KEY;20952093 key.offset = node->num_bytes;···27152719int btrfs_set_disk_extent_flags(struct btrfs_trans_handle *trans,27162720 struct btrfs_root *root,27172721 u64 bytenr, u64 num_bytes, u64 flags,27182718- int is_data)27222722+ int level, int is_data)27192723{27202724 struct btrfs_delayed_extent_op *extent_op;27212725 int ret;···27282732 extent_op->update_flags = 1;27292733 extent_op->update_key = 0;27302734 extent_op->is_data = is_data ? 
1 : 0;27352735+ extent_op->level = level;2731273627322737 ret = btrfs_add_delayed_extent_op(root->fs_info, trans, bytenr,27332738 num_bytes, extent_op);···31063109 WARN_ON(ret);3107311031083111 if (i_size_read(inode) > 0) {31123112+ ret = btrfs_check_trunc_cache_free_space(root,31133113+ &root->fs_info->global_block_rsv);31143114+ if (ret)31153115+ goto out_put;31163116+31093117 ret = btrfs_truncate_free_space_cache(root, trans, path,31103118 inode);31113119 if (ret)···45644562 fs_info->csum_root->block_rsv = &fs_info->global_block_rsv;45654563 fs_info->dev_root->block_rsv = &fs_info->global_block_rsv;45664564 fs_info->tree_root->block_rsv = &fs_info->global_block_rsv;45654565+ if (fs_info->quota_root)45664566+ fs_info->quota_root->block_rsv = &fs_info->global_block_rsv;45674567 fs_info->chunk_root->block_rsv = &fs_info->chunk_block_rsv;4568456845694569 update_global_block_rsv(fs_info);···66556651 struct btrfs_block_rsv *block_rsv;66566652 struct btrfs_block_rsv *global_rsv = &root->fs_info->global_block_rsv;66576653 int ret;66546654+ bool global_updated = false;6658665566596656 block_rsv = get_block_rsv(trans, root);6660665766616661- if (block_rsv->size == 0) {66626662- ret = reserve_metadata_bytes(root, block_rsv, blocksize,66636663- BTRFS_RESERVE_NO_FLUSH);66646664- /*66656665- * If we couldn't reserve metadata bytes try and use some from66666666- * the global reserve.66676667- */66686668- if (ret && block_rsv != global_rsv) {66696669- ret = block_rsv_use_bytes(global_rsv, blocksize);66706670- if (!ret)66716671- return global_rsv;66726672- return ERR_PTR(ret);66736673- } else if (ret) {66746674- return ERR_PTR(ret);66756675- }66766676- return block_rsv;66776677- }66786678-66586658+ if (unlikely(block_rsv->size == 0))66596659+ goto try_reserve;66606660+again:66796661 ret = block_rsv_use_bytes(block_rsv, blocksize);66806662 if (!ret)66816663 return block_rsv;66826682- if (ret && !block_rsv->failfast) {66836683- if (btrfs_test_opt(root, ENOSPC_DEBUG)) {66846684- 
static DEFINE_RATELIMIT_STATE(_rs,66856685- DEFAULT_RATELIMIT_INTERVAL * 10,66866686- /*DEFAULT_RATELIMIT_BURST*/ 1);66876687- if (__ratelimit(&_rs))66886688- WARN(1, KERN_DEBUG66896689- "btrfs: block rsv returned %d\n", ret);66906690- }66916691- ret = reserve_metadata_bytes(root, block_rsv, blocksize,66926692- BTRFS_RESERVE_NO_FLUSH);66936693- if (!ret) {66946694- return block_rsv;66956695- } else if (ret && block_rsv != global_rsv) {66966696- ret = block_rsv_use_bytes(global_rsv, blocksize);66976697- if (!ret)66986698- return global_rsv;66996699- }66646664+66656665+ if (block_rsv->failfast)66666666+ return ERR_PTR(ret);66676667+66686668+ if (block_rsv->type == BTRFS_BLOCK_RSV_GLOBAL && !global_updated) {66696669+ global_updated = true;66706670+ update_global_block_rsv(root->fs_info);66716671+ goto again;67006672 }6701667367026702- return ERR_PTR(-ENOSPC);66746674+ if (btrfs_test_opt(root, ENOSPC_DEBUG)) {66756675+ static DEFINE_RATELIMIT_STATE(_rs,66766676+ DEFAULT_RATELIMIT_INTERVAL * 10,66776677+ /*DEFAULT_RATELIMIT_BURST*/ 1);66786678+ if (__ratelimit(&_rs))66796679+ WARN(1, KERN_DEBUG66806680+ "btrfs: block rsv returned %d\n", ret);66816681+ }66826682+try_reserve:66836683+ ret = reserve_metadata_bytes(root, block_rsv, blocksize,66846684+ BTRFS_RESERVE_NO_FLUSH);66856685+ if (!ret)66866686+ return block_rsv;66876687+ /*66886688+ * If we couldn't reserve metadata bytes try and use some from66896689+ * the global reserve if its space type is the same as the global66906690+ * reservation.66916691+ */66926692+ if (block_rsv->type != BTRFS_BLOCK_RSV_GLOBAL &&66936693+ block_rsv->space_info == global_rsv->space_info) {66946694+ ret = block_rsv_use_bytes(global_rsv, blocksize);66956695+ if (!ret)66966696+ return global_rsv;66976697+ }66986698+ return ERR_PTR(ret);67036699}6704670067056701static void unuse_block_rsv(struct btrfs_fs_info *fs_info,···67676763 extent_op->update_key = 1;67686764 extent_op->update_flags = 1;67696765 extent_op->is_data = 0;67666766+ 
extent_op->level = level;6770676767716768 ret = btrfs_add_delayed_tree_ref(root->fs_info, trans,67726769 ins.objectid,···69396934 ret = btrfs_dec_ref(trans, root, eb, 0, wc->for_reloc);69406935 BUG_ON(ret); /* -ENOMEM */69416936 ret = btrfs_set_disk_extent_flags(trans, root, eb->start,69426942- eb->len, flag, 0);69376937+ eb->len, flag,69386938+ btrfs_header_level(eb), 0);69436939 BUG_ON(ret); /* -ENOMEM */69446940 wc->flags[level] |= flag;69456941 }
+73-65
fs/btrfs/extent_io.c
···23232424static struct kmem_cache *extent_state_cache;2525static struct kmem_cache *extent_buffer_cache;2626+static struct bio_set *btrfs_bioset;26272728#ifdef CONFIG_BTRFS_DEBUG2829static LIST_HEAD(buffers);···126125 SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD, NULL);127126 if (!extent_buffer_cache)128127 goto free_state_cache;128128+129129+ btrfs_bioset = bioset_create(BIO_POOL_SIZE,130130+ offsetof(struct btrfs_io_bio, bio));131131+ if (!btrfs_bioset)132132+ goto free_buffer_cache;129133 return 0;134134+135135+free_buffer_cache:136136+ kmem_cache_destroy(extent_buffer_cache);137137+ extent_buffer_cache = NULL;130138131139free_state_cache:132140 kmem_cache_destroy(extent_state_cache);141141+ extent_state_cache = NULL;133142 return -ENOMEM;134143}135144···156145 kmem_cache_destroy(extent_state_cache);157146 if (extent_buffer_cache)158147 kmem_cache_destroy(extent_buffer_cache);148148+ if (btrfs_bioset)149149+ bioset_free(btrfs_bioset);159150}160151161152void extent_io_tree_init(struct extent_io_tree *tree,···19611948}1962194919631950/*19641964- * helper function to unlock a page if all the extents in the tree19651965- * for that page are unlocked19661966- */19671967-static void check_page_locked(struct extent_io_tree *tree, struct page *page)19681968-{19691969- u64 start = page_offset(page);19701970- u64 end = start + PAGE_CACHE_SIZE - 1;19711971- if (!test_range_bit(tree, start, end, EXTENT_LOCKED, 0, NULL))19721972- unlock_page(page);19731973-}19741974-19751975-/*19761976- * helper function to end page writeback if all the extents19771977- * in the tree for that page are done with writeback19781978- */19791979-static void check_page_writeback(struct extent_io_tree *tree,19801980- struct page *page)19811981-{19821982- end_page_writeback(page);19831983-}19841984-19851985-/*19861951 * When IO fails, either with EIO or csum verification fails, we19871952 * try other mirrors that might have a good copy of the data. 
This19881953 * io_failure_record is used to record state as we go through all the···20372046 if (btrfs_is_parity_mirror(map_tree, logical, length, mirror_num))20382047 return 0;2039204820402040- bio = bio_alloc(GFP_NOFS, 1);20492049+ bio = btrfs_io_bio_alloc(GFP_NOFS, 1);20412050 if (!bio)20422051 return -EIO;20432052 bio->bi_private = &compl;···23272336 return -EIO;23282337 }2329233823302330- bio = bio_alloc(GFP_NOFS, 1);23392339+ bio = btrfs_io_bio_alloc(GFP_NOFS, 1);23312340 if (!bio) {23322341 free_io_failure(inode, failrec, 0);23332342 return -EIO;···23892398 struct extent_io_tree *tree;23902399 u64 start;23912400 u64 end;23922392- int whole_page;2393240123942402 do {23952403 struct page *page = bvec->bv_page;23962404 tree = &BTRFS_I(page->mapping->host)->io_tree;2397240523982398- start = page_offset(page) + bvec->bv_offset;23992399- end = start + bvec->bv_len - 1;24062406+ /* We always issue full-page reads, but if some block24072407+ * in a page fails to read, blk_update_request() will24082408+ * advance bv_offset and adjust bv_len to compensate.24092409+ * Print a warning for nonzero offsets, and an error24102410+ * if they don't add up to a full page. */24112411+ if (bvec->bv_offset || bvec->bv_len != PAGE_CACHE_SIZE)24122412+ printk("%s page write in btrfs with offset %u and length %u\n",24132413+ bvec->bv_offset + bvec->bv_len != PAGE_CACHE_SIZE24142414+ ? 
KERN_ERR "partial" : KERN_INFO "incomplete",24152415+ bvec->bv_offset, bvec->bv_len);2400241624012401- if (bvec->bv_offset == 0 && bvec->bv_len == PAGE_CACHE_SIZE)24022402- whole_page = 1;24032403- else24042404- whole_page = 0;24172417+ start = page_offset(page);24182418+ end = start + bvec->bv_offset + bvec->bv_len - 1;2405241924062420 if (--bvec >= bio->bi_io_vec)24072421 prefetchw(&bvec->bv_page->flags);···24142418 if (end_extent_writepage(page, err, start, end))24152419 continue;2416242024172417- if (whole_page)24182418- end_page_writeback(page);24192419- else24202420- check_page_writeback(tree, page);24212421+ end_page_writeback(page);24212422 } while (bvec >= bio->bi_io_vec);2422242324232424 bio_put(bio);···24392446 struct extent_io_tree *tree;24402447 u64 start;24412448 u64 end;24422442- int whole_page;24432449 int mirror;24442450 int ret;24452451···24492457 struct page *page = bvec->bv_page;24502458 struct extent_state *cached = NULL;24512459 struct extent_state *state;24602460+ struct btrfs_io_bio *io_bio = btrfs_io_bio(bio);2452246124532462 pr_debug("end_bio_extent_readpage: bi_sector=%llu, err=%d, "24542454- "mirror=%ld\n", (u64)bio->bi_sector, err,24552455- (long int)bio->bi_bdev);24632463+ "mirror=%lu\n", (u64)bio->bi_sector, err,24642464+ io_bio->mirror_num);24562465 tree = &BTRFS_I(page->mapping->host)->io_tree;2457246624582458- start = page_offset(page) + bvec->bv_offset;24592459- end = start + bvec->bv_len - 1;24672467+ /* We always issue full-page reads, but if some block24682468+ * in a page fails to read, blk_update_request() will24692469+ * advance bv_offset and adjust bv_len to compensate.24702470+ * Print a warning for nonzero offsets, and an error24712471+ * if they don't add up to a full page. */24722472+ if (bvec->bv_offset || bvec->bv_len != PAGE_CACHE_SIZE)24732473+ printk("%s page read in btrfs with offset %u and length %u\n",24742474+ bvec->bv_offset + bvec->bv_len != PAGE_CACHE_SIZE24752475+ ? 
KERN_ERR "partial" : KERN_INFO "incomplete",24762476+ bvec->bv_offset, bvec->bv_len);2460247724612461- if (bvec->bv_offset == 0 && bvec->bv_len == PAGE_CACHE_SIZE)24622462- whole_page = 1;24632463- else24642464- whole_page = 0;24782478+ start = page_offset(page);24792479+ end = start + bvec->bv_offset + bvec->bv_len - 1;2465248024662481 if (++bvec <= bvec_end)24672482 prefetchw(&bvec->bv_page->flags);···24842485 }24852486 spin_unlock(&tree->lock);2486248724872487- mirror = (int)(unsigned long)bio->bi_bdev;24882488+ mirror = io_bio->mirror_num;24882489 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) {24892490 ret = tree->ops->readpage_end_io_hook(page, start, end,24902491 state, mirror);···25272528 }25282529 unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC);2529253025302530- if (whole_page) {25312531- if (uptodate) {25322532- SetPageUptodate(page);25332533- } else {25342534- ClearPageUptodate(page);25352535- SetPageError(page);25362536- }25372537- unlock_page(page);25312531+ if (uptodate) {25322532+ SetPageUptodate(page);25382533 } else {25392539- if (uptodate) {25402540- check_page_uptodate(tree, page);25412541- } else {25422542- ClearPageUptodate(page);25432543- SetPageError(page);25442544- }25452545- check_page_locked(tree, page);25342534+ ClearPageUptodate(page);25352535+ SetPageError(page);25462536 }25372537+ unlock_page(page);25472538 } while (bvec <= bvec_end);2548253925492540 bio_put(bio);25502541}2551254225432543+/*25442544+ * this allocates from the btrfs_bioset. 
We're returning a bio right now25452545+ * but you can call btrfs_io_bio for the appropriate container_of magic25462546+ */25522547struct bio *25532548btrfs_bio_alloc(struct block_device *bdev, u64 first_sector, int nr_vecs,25542549 gfp_t gfp_flags)25552550{25562551 struct bio *bio;2557255225582558- bio = bio_alloc(gfp_flags, nr_vecs);25532553+ bio = bio_alloc_bioset(gfp_flags, nr_vecs, btrfs_bioset);2559255425602555 if (bio == NULL && (current->flags & PF_MEMALLOC)) {25612561- while (!bio && (nr_vecs /= 2))25622562- bio = bio_alloc(gfp_flags, nr_vecs);25562556+ while (!bio && (nr_vecs /= 2)) {25572557+ bio = bio_alloc_bioset(gfp_flags,25582558+ nr_vecs, btrfs_bioset);25592559+ }25632560 }2564256125652562 if (bio) {···25652570 }25662571 return bio;25672572}25732573+25742574+struct bio *btrfs_bio_clone(struct bio *bio, gfp_t gfp_mask)25752575+{25762576+ return bio_clone_bioset(bio, gfp_mask, btrfs_bioset);25772577+}25782578+25792579+25802580+/* this also allocates from the btrfs_bioset */25812581+struct bio *btrfs_io_bio_alloc(gfp_t gfp_mask, unsigned int nr_iovecs)25822582+{25832583+ return bio_alloc_bioset(gfp_mask, nr_iovecs, btrfs_bioset);25842584+}25852585+2568258625692587static int __must_check submit_one_bio(int rw, struct bio *bio,25702588 int mirror_num, unsigned long bio_flags)···39963988 last_for_get_extent = isize;39973989 }3998399039993999- lock_extent_bits(&BTRFS_I(inode)->io_tree, start, start + len, 0,39913991+ lock_extent_bits(&BTRFS_I(inode)->io_tree, start, start + len - 1, 0,40003992 &cached_state);4001399340023994 em = get_extent_skip_holes(inode, start, last_for_get_extent,···40834075out_free:40844076 free_extent_map(em);40854077out:40864086- unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, start + len,40784078+ unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, start + len - 1,40874079 &cached_state, GFP_NOFS);40884080 return ret;40894081}
+2
fs/btrfs/extent_io.h
···336336struct bio *337337btrfs_bio_alloc(struct block_device *bdev, u64 first_sector, int nr_vecs,338338 gfp_t gfp_flags);339339+struct bio *btrfs_io_bio_alloc(gfp_t gfp_mask, unsigned int nr_iovecs);340340+struct bio *btrfs_bio_clone(struct bio *bio, gfp_t gfp_mask);339341340342struct btrfs_fs_info;341343
+20-23
fs/btrfs/free-space-cache.c
···197197 block_group->key.objectid);198198}199199200200-int btrfs_truncate_free_space_cache(struct btrfs_root *root,201201- struct btrfs_trans_handle *trans,202202- struct btrfs_path *path,203203- struct inode *inode)200200+int btrfs_check_trunc_cache_free_space(struct btrfs_root *root,201201+ struct btrfs_block_rsv *rsv)204202{205205- struct btrfs_block_rsv *rsv;206203 u64 needed_bytes;207207- loff_t oldsize;208208- int ret = 0;209209-210210- rsv = trans->block_rsv;211211- trans->block_rsv = &root->fs_info->global_block_rsv;204204+ int ret;212205213206 /* 1 for slack space, 1 for updating the inode */214207 needed_bytes = btrfs_calc_trunc_metadata_size(root, 1) +215208 btrfs_calc_trans_metadata_size(root, 1);216209217217- spin_lock(&trans->block_rsv->lock);218218- if (trans->block_rsv->reserved < needed_bytes) {219219- spin_unlock(&trans->block_rsv->lock);220220- trans->block_rsv = rsv;221221- return -ENOSPC;222222- }223223- spin_unlock(&trans->block_rsv->lock);210210+ spin_lock(&rsv->lock);211211+ if (rsv->reserved < needed_bytes)212212+ ret = -ENOSPC;213213+ else214214+ ret = 0;215215+ spin_unlock(&rsv->lock);216216+ return 0;217217+}218218+219219+int btrfs_truncate_free_space_cache(struct btrfs_root *root,220220+ struct btrfs_trans_handle *trans,221221+ struct btrfs_path *path,222222+ struct inode *inode)223223+{224224+ loff_t oldsize;225225+ int ret = 0;224226225227 oldsize = i_size_read(inode);226228 btrfs_i_size_write(inode, 0);···234232 */235233 ret = btrfs_truncate_inode_items(trans, root, inode,236234 0, BTRFS_EXTENT_DATA_KEY);237237-238235 if (ret) {239239- trans->block_rsv = rsv;240236 btrfs_abort_transaction(trans, root, ret);241237 return ret;242238 }···242242 ret = btrfs_update_inode(trans, root, inode);243243 if (ret)244244 btrfs_abort_transaction(trans, root, ret);245245- trans->block_rsv = rsv;246245247246 return ret;248247}···919920920921 /* Make sure we can fit our crcs into the first page */921922 if (io_ctl.check_crcs &&922922- 
(io_ctl.num_pages * sizeof(u32)) >= PAGE_CACHE_SIZE) {923923- WARN_ON(1);923923+ (io_ctl.num_pages * sizeof(u32)) >= PAGE_CACHE_SIZE)924924 goto out_nospc;925925- }926925927926 io_ctl_set_generation(&io_ctl, trans->transid);928927
···429429 num_bytes = trans->bytes_reserved;430430 /*431431 * 1 item for inode item insertion if need432432- * 3 items for inode item update (in the worst case)432432+ * 4 items for inode item update (in the worst case)433433+ * 1 items for slack space if we need do truncation433434 * 1 item for free space object434435 * 3 items for pre-allocation435436 */436436- trans->bytes_reserved = btrfs_calc_trans_metadata_size(root, 8);437437+ trans->bytes_reserved = btrfs_calc_trans_metadata_size(root, 10);437438 ret = btrfs_block_rsv_add(root, trans->block_rsv,438439 trans->bytes_reserved,439440 BTRFS_RESERVE_NO_FLUSH);···469468 if (i_size_read(inode) > 0) {470469 ret = btrfs_truncate_free_space_cache(root, trans, path, inode);471470 if (ret) {472472- btrfs_abort_transaction(trans, root, ret);471471+ if (ret != -ENOSPC)472472+ btrfs_abort_transaction(trans, root, ret);473473 goto out_put;474474 }475475 }
+54-29
fs/btrfs/inode.c
···715715 async_extent->ram_size - 1, 0);716716717717 em = alloc_extent_map();718718- if (!em)718718+ if (!em) {719719+ ret = -ENOMEM;719720 goto out_free_reserve;721721+ }720722 em->start = async_extent->start;721723 em->len = async_extent->ram_size;722724 em->orig_start = em->start;···925923 }926924927925 em = alloc_extent_map();928928- if (!em)926926+ if (!em) {927927+ ret = -ENOMEM;929928 goto out_reserve;929929+ }930930 em->start = start;931931 em->orig_start = em->start;932932 ram_size = ins.offset;···47284724 btrfs_end_transaction(trans, root);47294725 btrfs_btree_balance_dirty(root);47304726no_delete:47274727+ btrfs_remove_delayed_node(inode);47314728 clear_inode(inode);47324729 return;47334730}···48444839 struct rb_node **p;48454840 struct rb_node *parent;48464841 u64 ino = btrfs_ino(inode);48474847-again:48484848- p = &root->inode_tree.rb_node;48494849- parent = NULL;4850484248514843 if (inode_unhashed(inode))48524844 return;48534853-48454845+again:48464846+ parent = NULL;48544847 spin_lock(&root->inode_lock);48484848+ p = &root->inode_tree.rb_node;48554849 while (*p) {48564850 parent = *p;48574851 entry = rb_entry(parent, struct btrfs_inode, rb_node);···69326928 /* IO errors */69336929 int errors;6934693069316931+ /* orig_bio is our btrfs_io_bio */69356932 struct bio *orig_bio;69336933+69346934+ /* dio_bio came from fs/direct-io.c */69356935+ struct bio *dio_bio;69366936};6937693769386938static void btrfs_endio_direct_read(struct bio *bio, int err)···69466938 struct bio_vec *bvec = bio->bi_io_vec;69476939 struct inode *inode = dip->inode;69486940 struct btrfs_root *root = BTRFS_I(inode)->root;69416941+ struct bio *dio_bio;69496942 u64 start;6950694369516944 start = dip->logical_offset;···6986697769876978 unlock_extent(&BTRFS_I(inode)->io_tree, dip->logical_offset,69886979 dip->logical_offset + dip->bytes - 1);69896989- bio->bi_private = dip->private;69806980+ dio_bio = dip->dio_bio;6990698169916982 kfree(dip);6992698369936984 /* If we had a csum failure 
make sure to clear the uptodate flag */
69946985 if (err)
69956995- clear_bit(BIO_UPTODATE, &bio->bi_flags);
69966996- dio_end_io(bio, err);
69866986+ clear_bit(BIO_UPTODATE, &dio_bio->bi_flags);
69876987+ dio_end_io(dio_bio, err);
69886988+ bio_put(bio);
69976989}
69986990
69996991static void btrfs_endio_direct_write(struct bio *bio, int err)
···70056995 struct btrfs_ordered_extent *ordered = NULL;
70066996 u64 ordered_offset = dip->logical_offset;
70076997 u64 ordered_bytes = dip->bytes;
69986998+ struct bio *dio_bio;
70086999 int ret;
70097000
70107001 if (err)
···70337022 goto again;
70347023 }
70357024out_done:
70367036- bio->bi_private = dip->private;
70257025+ dio_bio = dip->dio_bio;
70377026
70387027 kfree(dip);
70397028
70407029 /* If we had an error make sure to clear the uptodate flag */
70417030 if (err)
70427042- clear_bit(BIO_UPTODATE, &bio->bi_flags);
70437043- dio_end_io(bio, err);
70317031+ clear_bit(BIO_UPTODATE, &dio_bio->bi_flags);
70327032+ dio_end_io(dio_bio, err);
70337033+ bio_put(bio);
70447034}
70457035
70467036static int __btrfs_submit_bio_start_direct_io(struct inode *inode, int rw,
···70777065 if (!atomic_dec_and_test(&dip->pending_bios))
70787066 goto out;
70797067
70807080- if (dip->errors)
70687068+ if (dip->errors) {
70817069 bio_io_error(dip->orig_bio);
70827082- else {
70837083- set_bit(BIO_UPTODATE, &dip->orig_bio->bi_flags);
70707070+ } else {
70717071+ set_bit(BIO_UPTODATE, &dip->dio_bio->bi_flags);
70847072 bio_endio(dip->orig_bio, 0);
70857073 }
70867074out:
···72557243 return 0;
72567244}
72577245
72587258-static void btrfs_submit_direct(int rw, struct bio *bio, struct inode *inode,
72597259- loff_t file_offset)
72467246+static void btrfs_submit_direct(int rw, struct bio *dio_bio,
72477247+ struct inode *inode, loff_t file_offset)
72607248{
72617249 struct btrfs_root *root = BTRFS_I(inode)->root;
72627250 struct btrfs_dio_private *dip;
72637263- struct bio_vec *bvec = bio->bi_io_vec;
72517251+ struct bio_vec *bvec = dio_bio->bi_io_vec;
72527252+ struct bio *io_bio;
72647253 int skip_sum;
72657254 int write = rw & REQ_WRITE;
72667255 int ret = 0;
72677256
72687257 skip_sum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
72697258
72707270- dip = kmalloc(sizeof(*dip), GFP_NOFS);
72717271- if (!dip) {
72597259+ io_bio = btrfs_bio_clone(dio_bio, GFP_NOFS);
72607260+
72617261+ if (!io_bio) {
72727262 ret = -ENOMEM;
72737263 goto free_ordered;
72747264 }
72757265
72767276- dip->private = bio->bi_private;
72667266+ dip = kmalloc(sizeof(*dip), GFP_NOFS);
72677267+ if (!dip) {
72687268+ ret = -ENOMEM;
72697269+ goto free_io_bio;
72707270+ }
72717271+
72727272+ dip->private = dio_bio->bi_private;
72737273+ io_bio->bi_private = dio_bio->bi_private;
72777274 dip->inode = inode;
72787275 dip->logical_offset = file_offset;
72797276
···72907269 do {
72917270 dip->bytes += bvec->bv_len;
72927271 bvec++;
72937293- } while (bvec <= (bio->bi_io_vec + bio->bi_vcnt - 1));
72727272+ } while (bvec <= (dio_bio->bi_io_vec + dio_bio->bi_vcnt - 1));
72947273
72957295- dip->disk_bytenr = (u64)bio->bi_sector << 9;
72967296- bio->bi_private = dip;
72747274+ dip->disk_bytenr = (u64)dio_bio->bi_sector << 9;
72757275+ io_bio->bi_private = dip;
72977276 dip->errors = 0;
72987298- dip->orig_bio = bio;
72777277+ dip->orig_bio = io_bio;
72787278+ dip->dio_bio = dio_bio;
72997279 atomic_set(&dip->pending_bios, 0);
73007280
73017281 if (write)
73027302- bio->bi_end_io = btrfs_endio_direct_write;
72827282+ io_bio->bi_end_io = btrfs_endio_direct_write;
73037283 else
73047304- bio->bi_end_io = btrfs_endio_direct_read;
72847284+ io_bio->bi_end_io = btrfs_endio_direct_read;
73057285
73067286 ret = btrfs_submit_direct_hook(rw, dip, skip_sum);
73077287 if (!ret)
73087288 return;
72897289+
72907290+free_io_bio:
72917291+ bio_put(io_bio);
72927292+
73097293free_ordered:
73107294 /*
73117295 * If this is a write, we need to clean up the reserved space and kill
···73267300 btrfs_put_ordered_extent(ordered);
73277301 btrfs_put_ordered_extent(ordered);
73287302 }
73297329- bio_endio(bio, ret);
73037303+ bio_endio(dio_bio, ret);
73307304}
73317305
73327306static ssize_t check_direct_IO(struct btrfs_root *root, int rw, struct kiocb *iocb,
···80057979 inode_tree_del(inode);
80067980 btrfs_drop_extent_cache(inode, 0, (u64)-1, 0);
80077981free:
80088008- btrfs_remove_delayed_node(inode);
80097982 call_rcu(&inode->i_rcu, btrfs_i_callback);
80107983}
80117984
+5-5
fs/btrfs/ioctl.c
···18011801 item_off = btrfs_item_ptr_offset(leaf, i);
18021802 item_len = btrfs_item_size_nr(leaf, i);
18031803
18041804- if (item_len > BTRFS_SEARCH_ARGS_BUFSIZE)
18041804+ btrfs_item_key_to_cpu(leaf, key, i);
18051805+ if (!key_in_sk(key, sk))
18061806+ continue;
18071807+
18081808+ if (sizeof(sh) + item_len > BTRFS_SEARCH_ARGS_BUFSIZE)
18051809 item_len = 0;
18061810
18071811 if (sizeof(sh) + item_len + *sk_offset >
···18131809 ret = 1;
18141810 goto overflow;
18151811 }
18161816-
18171817- btrfs_item_key_to_cpu(leaf, key, i);
18181818- if (!key_in_sk(key, sk))
18191819- continue;
18201812
18211813 sh.objectid = key->objectid;
18221814 sh.offset = key->offset;
+1-1
fs/btrfs/raid56.c
···10501050 }
10511051
10521052 /* put a new bio on the list */
10531053- bio = bio_alloc(GFP_NOFS, bio_max_len >> PAGE_SHIFT?:1);
10531053+ bio = btrfs_io_bio_alloc(GFP_NOFS, bio_max_len >> PAGE_SHIFT?:1);
10541054 if (!bio)
10551055 return -ENOMEM;
10561056
+5-5
fs/btrfs/scrub.c
···12961296 }
12971297
12981298 WARN_ON(!page->page);
12991299- bio = bio_alloc(GFP_NOFS, 1);
12991299+ bio = btrfs_io_bio_alloc(GFP_NOFS, 1);
13001300 if (!bio) {
13011301 page->io_error = 1;
13021302 sblock->no_io_error_seen = 0;
···14311431 return -EIO;
14321432 }
14331433
14341434- bio = bio_alloc(GFP_NOFS, 1);
14341434+ bio = btrfs_io_bio_alloc(GFP_NOFS, 1);
14351435 if (!bio)
14361436 return -EIO;
14371437 bio->bi_bdev = page_bad->dev->bdev;
···15221522 sbio->dev = wr_ctx->tgtdev;
15231523 bio = sbio->bio;
15241524 if (!bio) {
15251525- bio = bio_alloc(GFP_NOFS, wr_ctx->pages_per_wr_bio);
15251525+ bio = btrfs_io_bio_alloc(GFP_NOFS, wr_ctx->pages_per_wr_bio);
15261526 if (!bio) {
15271527 mutex_unlock(&wr_ctx->wr_lock);
15281528 return -ENOMEM;
···19301930 sbio->dev = spage->dev;
19311931 bio = sbio->bio;
19321932 if (!bio) {
19331933- bio = bio_alloc(GFP_NOFS, sctx->pages_per_rd_bio);
19331933+ bio = btrfs_io_bio_alloc(GFP_NOFS, sctx->pages_per_rd_bio);
19341934 if (!bio)
19351935 return -ENOMEM;
19361936 sbio->bio = bio;
···33073307 "btrfs: scrub write_page_nocow(bdev == NULL) is unexpected!\n");
33083308 return -EIO;
33093309 }
33103310- bio = bio_alloc(GFP_NOFS, 1);
33103310+ bio = btrfs_io_bio_alloc(GFP_NOFS, 1);
33113311 if (!bio) {
33123312 spin_lock(&sctx->stat_lock);
33133313 sctx->stat.malloc_errors++;
+1
fs/btrfs/super.c
···12631263
12641264 btrfs_dev_replace_suspend_for_unmount(fs_info);
12651265 btrfs_scrub_cancel(fs_info);
12661266+ btrfs_pause_balance(fs_info);
12661267
12671268 ret = btrfs_commit_super(root);
12681269 if (ret)
+12-42
fs/btrfs/volumes.c
···31203120 allowed = BTRFS_AVAIL_ALLOC_BIT_SINGLE;
31213121 if (num_devices == 1)
31223122 allowed |= BTRFS_BLOCK_GROUP_DUP;
31233123- else if (num_devices < 4)
31233123+ else if (num_devices > 1)
31243124 allowed |= (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1);
31253125- else
31263126- allowed |= (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1 |
31273127- BTRFS_BLOCK_GROUP_RAID10 |
31283128- BTRFS_BLOCK_GROUP_RAID5 |
31293129- BTRFS_BLOCK_GROUP_RAID6);
31303130-
31253125+ if (num_devices > 2)
31263126+ allowed |= BTRFS_BLOCK_GROUP_RAID5;
31273127+ if (num_devices > 3)
31283128+ allowed |= (BTRFS_BLOCK_GROUP_RAID10 |
31293129+ BTRFS_BLOCK_GROUP_RAID6);
31313130 if ((bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) &&
31323131 (!alloc_profile_is_valid(bctl->data.target, 1) ||
31333132 (bctl->data.target & ~allowed))) {
···50185019 return 0;
50195020}
50205021
50215021-static void *merge_stripe_index_into_bio_private(void *bi_private,
50225022- unsigned int stripe_index)
50235023-{
50245024- /*
50255025- * with single, dup, RAID0, RAID1 and RAID10, stripe_index is
50265026- * at most 1.
50275027- * The alternative solution (instead of stealing bits from the
50285028- * pointer) would be to allocate an intermediate structure
50295029- * that contains the old private pointer plus the stripe_index.
50305030- */
50315031- BUG_ON((((uintptr_t)bi_private) & 3) != 0);
50325032- BUG_ON(stripe_index > 3);
50335033- return (void *)(((uintptr_t)bi_private) | stripe_index);
50345034-}
50355035-
50365036-static struct btrfs_bio *extract_bbio_from_bio_private(void *bi_private)
50375037-{
50385038- return (struct btrfs_bio *)(((uintptr_t)bi_private) & ~((uintptr_t)3));
50395039-}
50405040-
50415041-static unsigned int extract_stripe_index_from_bio_private(void *bi_private)
50425042-{
50435043- return (unsigned int)((uintptr_t)bi_private) & 3;
50445044-}
50455045-
50465022static void btrfs_end_bio(struct bio *bio, int err)
50475023{
50485048- struct btrfs_bio *bbio = extract_bbio_from_bio_private(bio->bi_private);
50245024+ struct btrfs_bio *bbio = bio->bi_private;
50495025 int is_orig_bio = 0;
50505026
50515027 if (err) {
50525028 atomic_inc(&bbio->error);
50535029 if (err == -EIO || err == -EREMOTEIO) {
50545030 unsigned int stripe_index =
50555055- extract_stripe_index_from_bio_private(
50565056- bio->bi_private);
50315031+ btrfs_io_bio(bio)->stripe_index;
50575032 struct btrfs_device *dev;
50585033
50595034 BUG_ON(stripe_index >= bbio->num_stripes);
···50575084 }
50585085 bio->bi_private = bbio->private;
50595086 bio->bi_end_io = bbio->end_io;
50605060- bio->bi_bdev = (struct block_device *)
50615061- (unsigned long)bbio->mirror_num;
50875087+ btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
50625088 /* only send an error to the higher layers if it is
50635089 * beyond the tolerance of the btrfs bio
50645090 */
···51835211 struct btrfs_device *dev = bbio->stripes[dev_nr].dev;
51845212
51855213 bio->bi_private = bbio;
51865186- bio->bi_private = merge_stripe_index_into_bio_private(
51875187- bio->bi_private, (unsigned int)dev_nr);
52145214+ btrfs_io_bio(bio)->stripe_index = dev_nr;
51885215 bio->bi_end_io = btrfs_end_bio;
51895216 bio->bi_sector = physical >> 9;
51905217#ifdef DEBUG
···52445273 if (atomic_dec_and_test(&bbio->stripes_pending)) {
52455274 bio->bi_private = bbio->private;
52465275 bio->bi_end_io = bbio->end_io;
52475247- bio->bi_bdev = (struct block_device *)
52485248- (unsigned long)bbio->mirror_num;
52765276+ btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
52495277 bio->bi_sector = logical >> 9;
52505278 kfree(bbio);
52515279 bio_endio(bio, -EIO);
···53225352 }
53235353
53245354 if (dev_nr < total_devs - 1) {
53255325- bio = bio_clone(first_bio, GFP_NOFS);
53555355+ bio = btrfs_bio_clone(first_bio, GFP_NOFS);
53265356 BUG_ON(!bio); /* -ENOMEM */
53275357 } else {
53285358 bio = first_bio;
+20
fs/btrfs/volumes.h
···152152 int rotating;
153153};
154154
155155+/*
156156+ * we need the mirror number and stripe index to be passed around
157157+ * the call chain while we are processing end_io (especially errors).
158158+ * Really, what we need is a btrfs_bio structure that has this info
159159+ * and is properly sized with its stripe array, but we're not there
160160+ * quite yet. We have our own btrfs bioset, and all of the bios
161161+ * we allocate are actually btrfs_io_bios. We'll cram as much of
162162+ * struct btrfs_bio as we can into this over time.
163163+ */
164164+struct btrfs_io_bio {
165165+ unsigned long mirror_num;
166166+ unsigned long stripe_index;
167167+ struct bio bio;
168168+};
169169+
170170+static inline struct btrfs_io_bio *btrfs_io_bio(struct bio *bio)
171171+{
172172+ return container_of(bio, struct btrfs_io_bio, bio);
173173+}
174174+
155175struct btrfs_bio_stripe {
156176 struct btrfs_device *dev;
157177 u64 physical;
+2-6
fs/ext4/ext4.h
···209209 ssize_t size; /* size of the extent */
210210 struct kiocb *iocb; /* iocb struct for AIO */
211211 int result; /* error value for AIO */
212212- atomic_t count; /* reference counter */
213212} ext4_io_end_t;
214213
215214struct ext4_io_submit {
···26502651
26512652/* page-io.c */
26522653extern int __init ext4_init_pageio(void);
26542654+extern void ext4_add_complete_io(ext4_io_end_t *io_end);
26532655extern void ext4_exit_pageio(void);
26542656extern void ext4_ioend_shutdown(struct inode *);
26572657+extern void ext4_free_io_end(ext4_io_end_t *io);
26552658extern ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags);
26562656-extern ext4_io_end_t *ext4_get_io_end(ext4_io_end_t *io_end);
26572657-extern int ext4_put_io_end(ext4_io_end_t *io_end);
26582658-extern void ext4_put_io_end_defer(ext4_io_end_t *io_end);
26592659-extern void ext4_io_submit_init(struct ext4_io_submit *io,
26602660- struct writeback_control *wbc);
26612659extern void ext4_end_io_work(struct work_struct *work);
26622660extern void ext4_io_submit(struct ext4_io_submit *io);
26632661extern int ext4_bio_write_page(struct ext4_io_submit *io,
+5-4
fs/ext4/extents.c
···36423642{
36433643 struct extent_status es;
36443644
36453645- ext4_es_find_delayed_extent(inode, lblk_start, &es);
36453645+ ext4_es_find_delayed_extent_range(inode, lblk_start, lblk_end, &es);
36463646 if (es.es_len == 0)
36473647 return 0; /* there is no delay extent in this tree */
36483648 else if (es.es_lblk <= lblk_start &&
···46084608 struct extent_status es;
46094609 ext4_lblk_t block, next_del;
46104610
46114611- ext4_es_find_delayed_extent(inode, newes->es_lblk, &es);
46124612-
46134611 if (newes->es_pblk == 0) {
46124612+ ext4_es_find_delayed_extent_range(inode, newes->es_lblk,
46134613+ newes->es_lblk + newes->es_len - 1, &es);
46144614+
46144615 /*
46154616 * No extent in extent-tree contains block @newes->es_pblk,
46164617 * then the block may stay in 1)a hole or 2)delayed-extent.
···46314630 }
46324631
46334632 block = newes->es_lblk + newes->es_len;
46344634- ext4_es_find_delayed_extent(inode, block, &es);
46334633+ ext4_es_find_delayed_extent_range(inode, block, EXT_MAX_BLOCKS, &es);
46354634 if (es.es_len == 0)
46364635 next_del = EXT_MAX_BLOCKS;
46374636 else
+12-5
fs/ext4/extents_status.c
···232232}
233233
234234/*
235235- * ext4_es_find_delayed_extent: find the 1st delayed extent covering @es->lblk
236236- * if it exists, otherwise, the next extent after @es->lblk.
235235+ * ext4_es_find_delayed_extent_range: find the 1st delayed extent covering
236236+ * @es->lblk if it exists, otherwise, the next extent after @es->lblk.
237237 *
238238 * @inode: the inode which owns delayed extents
239239 * @lblk: the offset where we start to search
240240+ * @end: the offset where we stop to search
240241 * @es: delayed extent that we found
241242 */
242242-void ext4_es_find_delayed_extent(struct inode *inode, ext4_lblk_t lblk,
243243+void ext4_es_find_delayed_extent_range(struct inode *inode,
244244+ ext4_lblk_t lblk, ext4_lblk_t end,
243245 struct extent_status *es)
244246{
245247 struct ext4_es_tree *tree = NULL;
···249247 struct rb_node *node;
250248
251249 BUG_ON(es == NULL);
252252- trace_ext4_es_find_delayed_extent_enter(inode, lblk);
250250+ BUG_ON(end < lblk);
251251+ trace_ext4_es_find_delayed_extent_range_enter(inode, lblk);
253252
254253 read_lock(&EXT4_I(inode)->i_es_lock);
255254 tree = &EXT4_I(inode)->i_es_tree;
···273270 if (es1 && !ext4_es_is_delayed(es1)) {
274271 while ((node = rb_next(&es1->rb_node)) != NULL) {
275272 es1 = rb_entry(node, struct extent_status, rb_node);
273273+ if (es1->es_lblk > end) {
274274+ es1 = NULL;
275275+ break;
276276+ }
276277 if (ext4_es_is_delayed(es1))
277278 break;
278279 }
···292285 read_unlock(&EXT4_I(inode)->i_es_lock);
293286
294287 ext4_es_lru_add(inode);
295295- trace_ext4_es_find_delayed_extent_exit(inode, es);
288288+ trace_ext4_es_find_delayed_extent_range_exit(inode, es);
296289}
297290
298291static struct extent_status *
+2-1
fs/ext4/extents_status.h
···6262 unsigned long long status);
6363extern int ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
6464 ext4_lblk_t len);
6565-extern void ext4_es_find_delayed_extent(struct inode *inode, ext4_lblk_t lblk,
6565+extern void ext4_es_find_delayed_extent_range(struct inode *inode,
6666+ ext4_lblk_t lblk, ext4_lblk_t end,
6667 struct extent_status *es);
6768extern int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
6869 struct extent_status *es);
+2-2
fs/ext4/file.c
···465465 * If there is a delay extent at this offset,
466466 * it will be as a data.
467467 */
468468- ext4_es_find_delayed_extent(inode, last, &es);
468468+ ext4_es_find_delayed_extent_range(inode, last, last, &es);
469469 if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
470470 if (last != start)
471471 dataoff = last << blkbits;
···548548 * If there is a delay extent at this offset,
549549 * we will skip this extent.
550550 */
551551- ext4_es_find_delayed_extent(inode, last, &es);
551551+ ext4_es_find_delayed_extent_range(inode, last, last, &es);
552552 if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
553553 last = es.es_lblk + es.es_len;
554554 holeoff = last << blkbits;
+39-48
fs/ext4/inode.c
···14881488 struct ext4_io_submit io_submit;
14891489
14901490 BUG_ON(mpd->next_page <= mpd->first_page);
14911491- ext4_io_submit_init(&io_submit, mpd->wbc);
14921492- io_submit.io_end = ext4_init_io_end(inode, GFP_NOFS);
14931493- if (!io_submit.io_end)
14941494- return -ENOMEM;
14911491+ memset(&io_submit, 0, sizeof(io_submit));
14951492 /*
14961493 * We need to start from the first_page to the next_page - 1
14971494 * to make sure we also write the mapped dirty buffer_heads.
···15761579 pagevec_release(&pvec);
15771580 }
15781581 ext4_io_submit(&io_submit);
15791579- /* Drop io_end reference we got from init */
15801580- ext4_put_io_end_defer(io_submit.io_end);
15811582 return ret;
15821583}
15831584
···22342239 */
22352240 return __ext4_journalled_writepage(page, len);
22362241
22372237- ext4_io_submit_init(&io_submit, wbc);
22382238- io_submit.io_end = ext4_init_io_end(inode, GFP_NOFS);
22392239- if (!io_submit.io_end) {
22402240- redirty_page_for_writepage(wbc, page);
22412241- return -ENOMEM;
22422242- }
22422242+ memset(&io_submit, 0, sizeof(io_submit));
22432243 ret = ext4_bio_write_page(&io_submit, page, len, wbc);
22442244 ext4_io_submit(&io_submit);
22452245- /* Drop io_end reference we got from init */
22462246- ext4_put_io_end_defer(io_submit.io_end);
22472245 return ret;
22482246}
22492247
···30673079 struct inode *inode = file_inode(iocb->ki_filp);
30683080 ext4_io_end_t *io_end = iocb->private;
30693081
30703070- /* if not async direct IO just return */
30713071- if (!io_end) {
30723072- inode_dio_done(inode);
30733073- if (is_async)
30743074- aio_complete(iocb, ret, 0);
30753075- return;
30763076- }
30823082+ /* if not async direct IO or dio with 0 bytes write, just return */
30833083+ if (!io_end || !size)
30843084+ goto out;
30773085
30783086 ext_debug("ext4_end_io_dio(): io_end 0x%p "
30793087 "for inode %lu, iocb 0x%p, offset %llu, size %zd\n",
···30773093 size);
30783094
30793095 iocb->private = NULL;
30963096+
30973097+ /* if not aio dio with unwritten extents, just free io and return */
30983098+ if (!(io_end->flag & EXT4_IO_END_UNWRITTEN)) {
30993099+ ext4_free_io_end(io_end);
31003100+out:
31013101+ inode_dio_done(inode);
31023102+ if (is_async)
31033103+ aio_complete(iocb, ret, 0);
31043104+ return;
31053105+ }
31063106+
30803107 io_end->offset = offset;
30813108 io_end->size = size;
30823109 if (is_async) {
30833110 io_end->iocb = iocb;
30843111 io_end->result = ret;
30853112 }
30863086- ext4_put_io_end_defer(io_end);
31133113+
31143114+ ext4_add_complete_io(io_end);
30873115}
30883116
30893117/*
···31293133 get_block_t *get_block_func = NULL;
31303134 int dio_flags = 0;
31313135 loff_t final_size = offset + count;
31323132- ext4_io_end_t *io_end = NULL;
31333136
31343137 /* Use the old path for reads and writes beyond i_size. */
31353138 if (rw != WRITE || final_size > inode->i_size)
···31673172 iocb->private = NULL;
31683173 ext4_inode_aio_set(inode, NULL);
31693174 if (!is_sync_kiocb(iocb)) {
31703170- io_end = ext4_init_io_end(inode, GFP_NOFS);
31753175+ ext4_io_end_t *io_end = ext4_init_io_end(inode, GFP_NOFS);
31713176 if (!io_end) {
31723177 ret = -ENOMEM;
31733178 goto retake_lock;
31743179 }
31753180 io_end->flag |= EXT4_IO_END_DIRECT;
31763176- /*
31773177- * Grab reference for DIO. Will be dropped in ext4_end_io_dio()
31783178- */
31793179- iocb->private = ext4_get_io_end(io_end);
31813181+ iocb->private = io_end;
31803182 /*
31813183 * we save the io structure for current async direct
31823184 * IO, so that later ext4_map_blocks() could flag the
···31973205 NULL,
31983206 dio_flags);
31993207
32003200- /*
32013201- * Put our reference to io_end. This can free the io_end structure e.g.
32023202- * in sync IO case or in case of error. It can even perform extent
32033203- * conversion if all bios we submitted finished before we got here.
32043204- * Note that in that case iocb->private can be already set to NULL
32053205- * here.
32063206- */
32073207- if (io_end) {
32083208+ if (iocb->private)
32083209 ext4_inode_aio_set(inode, NULL);
32093209- ext4_put_io_end(io_end);
32103210- /*
32113211- * In case of error or no write ext4_end_io_dio() was not
32123212- * called so we have to put iocb's reference.
32133213- */
32143214- if (ret <= 0 && ret != -EIOCBQUEUED) {
32153215- WARN_ON(iocb->private != io_end);
32163216- ext4_put_io_end(io_end);
32173217- iocb->private = NULL;
32183218- }
32193219- }
32203220- if (ret > 0 && !overwrite && ext4_test_inode_state(inode,
32103210+ /*
32113211+ * The io_end structure takes a reference to the inode, that
32123212+ * structure needs to be destroyed and the reference to the
32133213+ * inode need to be dropped, when IO is complete, even with 0
32143214+ * byte write, or failed.
32153215+ *
32163216+ * In the successful AIO DIO case, the io_end structure will
32173217+ * be destroyed and the reference to the inode will be dropped
32183218+ * after the end_io call back function is called.
32193219+ *
32203220+ * In the case there is 0 byte write, or error case, since VFS
32213221+ * direct IO won't invoke the end_io call back function, we
32223222+ * need to free the end_io structure here.
32233223+ */
32243224+ if (ret != -EIOCBQUEUED && ret <= 0 && iocb->private) {
32253225+ ext4_free_io_end(iocb->private);
32263226+ iocb->private = NULL;
32273227+ } else if (ret > 0 && !overwrite && ext4_test_inode_state(inode,
32213228 EXT4_STATE_DIO_UNWRITTEN)) {
32223229 int err;
32233230 /*
+5-1
fs/ext4/mballoc.c
···21052105 group = ac->ac_g_ex.fe_group;
21062106
21072107 for (i = 0; i < ngroups; group++, i++) {
21082108- if (group == ngroups)
21082108+ /*
21092109+ * Artificially restricted ngroups for non-extent
21102110+ * files makes group > ngroups possible on first loop.
21112111+ */
21122112+ if (group >= ngroups)
21092113 group = 0;
21102114
21112115 /* This now checks without needing the buddy page */
+45-76
fs/ext4/page-io.c
···6262 cancel_work_sync(&EXT4_I(inode)->i_unwritten_work);
6363}
6464
6565-static void ext4_release_io_end(ext4_io_end_t *io_end)
6565+void ext4_free_io_end(ext4_io_end_t *io)
6666{
6767- BUG_ON(!list_empty(&io_end->list));
6868- BUG_ON(io_end->flag & EXT4_IO_END_UNWRITTEN);
6767+ BUG_ON(!io);
6868+ BUG_ON(!list_empty(&io->list));
6969+ BUG_ON(io->flag & EXT4_IO_END_UNWRITTEN);
6970
7070- if (atomic_dec_and_test(&EXT4_I(io_end->inode)->i_ioend_count))
7171- wake_up_all(ext4_ioend_wq(io_end->inode));
7272- if (io_end->flag & EXT4_IO_END_DIRECT)
7373- inode_dio_done(io_end->inode);
7474- if (io_end->iocb)
7575- aio_complete(io_end->iocb, io_end->result, 0);
7676- kmem_cache_free(io_end_cachep, io_end);
7777-}
7878-
7979-static void ext4_clear_io_unwritten_flag(ext4_io_end_t *io_end)
8080-{
8181- struct inode *inode = io_end->inode;
8282-
8383- io_end->flag &= ~EXT4_IO_END_UNWRITTEN;
8484- /* Wake up anyone waiting on unwritten extent conversion */
8585- if (atomic_dec_and_test(&EXT4_I(inode)->i_unwritten))
8686- wake_up_all(ext4_ioend_wq(inode));
7171+ if (atomic_dec_and_test(&EXT4_I(io->inode)->i_ioend_count))
7272+ wake_up_all(ext4_ioend_wq(io->inode));
7373+ kmem_cache_free(io_end_cachep, io);
8774}
8875
8976/* check a range of space and convert unwritten extents to written. */
···93106 "(inode %lu, offset %llu, size %zd, error %d)",
94107 inode->i_ino, offset, size, ret);
95108 }
9696- ext4_clear_io_unwritten_flag(io);
9797- ext4_release_io_end(io);
109109+ /* Wake up anyone waiting on unwritten extent conversion */
110110+ if (atomic_dec_and_test(&EXT4_I(inode)->i_unwritten))
111111+ wake_up_all(ext4_ioend_wq(inode));
112112+ if (io->flag & EXT4_IO_END_DIRECT)
113113+ inode_dio_done(inode);
114114+ if (io->iocb)
115115+ aio_complete(io->iocb, io->result, 0);
98116 return ret;
99117}
100118
···130138}
131139
132140/* Add the io_end to per-inode completed end_io list. */
133133-static void ext4_add_complete_io(ext4_io_end_t *io_end)
141141+void ext4_add_complete_io(ext4_io_end_t *io_end)
134142{
135143 struct ext4_inode_info *ei = EXT4_I(io_end->inode);
136144 struct workqueue_struct *wq;
···167175 err = ext4_end_io(io);
168176 if (unlikely(!ret && err))
169177 ret = err;
178178+ io->flag &= ~EXT4_IO_END_UNWRITTEN;
179179+ ext4_free_io_end(io);
170180 }
171181 return ret;
172182}
···200206 atomic_inc(&EXT4_I(inode)->i_ioend_count);
201207 io->inode = inode;
202208 INIT_LIST_HEAD(&io->list);
203203- atomic_set(&io->count, 1);
204209 }
205210 return io;
206206-}
207207-
208208-void ext4_put_io_end_defer(ext4_io_end_t *io_end)
209209-{
210210- if (atomic_dec_and_test(&io_end->count)) {
211211- if (!(io_end->flag & EXT4_IO_END_UNWRITTEN) || !io_end->size) {
212212- ext4_release_io_end(io_end);
213213- return;
214214- }
215215- ext4_add_complete_io(io_end);
216216- }
217217-}
218218-
219219-int ext4_put_io_end(ext4_io_end_t *io_end)
220220-{
221221- int err = 0;
222222-
223223- if (atomic_dec_and_test(&io_end->count)) {
224224- if (io_end->flag & EXT4_IO_END_UNWRITTEN) {
225225- err = ext4_convert_unwritten_extents(io_end->inode,
226226- io_end->offset, io_end->size);
227227- ext4_clear_io_unwritten_flag(io_end);
228228- }
229229- ext4_release_io_end(io_end);
230230- }
231231- return err;
232232-}
233233-
234234-ext4_io_end_t *ext4_get_io_end(ext4_io_end_t *io_end)
235235-{
236236- atomic_inc(&io_end->count);
237237- return io_end;
238211}
239212
240213/*
···286325 bi_sector >> (inode->i_blkbits - 9));
287326 }
288327
289289- ext4_put_io_end_defer(io_end);
328328+ if (!(io_end->flag & EXT4_IO_END_UNWRITTEN)) {
329329+ ext4_free_io_end(io_end);
330330+ return;
331331+ }
332332+
333333+ ext4_add_complete_io(io_end);
290334}
291335
292336void ext4_io_submit(struct ext4_io_submit *io)
···305339 bio_put(io->io_bio);
306340 }
307341 io->io_bio = NULL;
308308-}
309309-
310310-void ext4_io_submit_init(struct ext4_io_submit *io,
311311- struct writeback_control *wbc)
312312-{
313313- io->io_op = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
314314- io->io_bio = NULL;
342342+ io->io_op = 0;
315343 io->io_end = NULL;
316344}
317345
318318-static int io_submit_init_bio(struct ext4_io_submit *io,
319319- struct buffer_head *bh)
346346+static int io_submit_init(struct ext4_io_submit *io,
347347+ struct inode *inode,
348348+ struct writeback_control *wbc,
349349+ struct buffer_head *bh)
320350{
351351+ ext4_io_end_t *io_end;
352352+ struct page *page = bh->b_page;
321353 int nvecs = bio_get_nr_vecs(bh->b_bdev);
322354 struct bio *bio;
323355
356356+ io_end = ext4_init_io_end(inode, GFP_NOFS);
357357+ if (!io_end)
358358+ return -ENOMEM;
324359 bio = bio_alloc(GFP_NOIO, min(nvecs, BIO_MAX_PAGES));
325360 bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9);
326361 bio->bi_bdev = bh->b_bdev;
362362+ bio->bi_private = io->io_end = io_end;
327363 bio->bi_end_io = ext4_end_bio;
328328- bio->bi_private = ext4_get_io_end(io->io_end);
329329- if (!io->io_end->size)
330330- io->io_end->offset = (bh->b_page->index << PAGE_CACHE_SHIFT)
331331- + bh_offset(bh);
364364+
365365+ io_end->offset = (page->index << PAGE_CACHE_SHIFT) + bh_offset(bh);
366366+
332367 io->io_bio = bio;
368368+ io->io_op = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
333369 io->io_next_block = bh->b_blocknr;
334370 return 0;
335371}
336372
337373static int io_submit_add_bh(struct ext4_io_submit *io,
338374 struct inode *inode,
375375+ struct writeback_control *wbc,
339376 struct buffer_head *bh)
340377{
341378 ext4_io_end_t *io_end;
···349380 ext4_io_submit(io);
350381 }
351382 if (io->io_bio == NULL) {
352352- ret = io_submit_init_bio(io, bh);
383383+ ret = io_submit_init(io, inode, wbc, bh);
353384 if (ret)
354385 return ret;
355386 }
356356- ret = bio_add_page(io->io_bio, bh->b_page, bh->b_size, bh_offset(bh));
357357- if (ret != bh->b_size)
358358- goto submit_and_retry;
359387 io_end = io->io_end;
360388 if (test_clear_buffer_uninit(bh))
361389 ext4_set_io_unwritten_flag(inode, io_end);
362362- io_end->size += bh->b_size;
390390+ io->io_end->size += bh->b_size;
363391 io->io_next_block++;
392392+ ret = bio_add_page(io->io_bio, bh->b_page, bh->b_size, bh_offset(bh));
393393+ if (ret != bh->b_size)
394394+ goto submit_and_retry;
364395 return 0;
365396}
366397
···432463 do {
433464 if (!buffer_async_write(bh))
434465 continue;
435435- ret = io_submit_add_bh(io, inode, bh);
466466+ ret = io_submit_add_bh(io, inode, wbc, bh);
436467 if (ret) {
437468 /*
438469 * We only get here on ENOMEM. Not much else
+8-7
include/drm/drm_fb_helper.h
···5050
5151/**
5252 * struct drm_fb_helper_funcs - driver callbacks for the fbdev emulation library
5353- * @gamma_set: - Set the given gamma lut register on the given crtc.
5454- * @gamma_get: - Read the given gamma lut register on the given crtc, used to
5555- * save the current lut when force-restoring the fbdev for e.g.
5656- * kdbg.
5757- * @fb_probe: - Driver callback to allocate and initialize the fbdev info
5858- * structure. Futhermore it also needs to allocate the drm
5959- * framebuffer used to back the fbdev.
5353+ * @gamma_set: Set the given gamma lut register on the given crtc.
5454+ * @gamma_get: Read the given gamma lut register on the given crtc, used to
5555+ * save the current lut when force-restoring the fbdev for e.g.
5656+ * kdbg.
5757+ * @fb_probe: Driver callback to allocate and initialize the fbdev info
5858+ * structure. Futhermore it also needs to allocate the drm
5959+ * framebuffer used to back the fbdev.
6060+ * @initial_config: Setup an initial fbdev display configuration
6061 *
6162 * Driver callbacks used by the fbdev emulation helper library.
6263 */
-9
include/drm/drm_os_linux.h
···8787/** Other copying of data from kernel space */
8888#define DRM_COPY_TO_USER(arg1, arg2, arg3) \
8989 copy_to_user(arg1, arg2, arg3)
9090-/* Macros for copyfrom user, but checking readability only once */
9191-#define DRM_VERIFYAREA_READ( uaddr, size ) \
9292- (access_ok( VERIFY_READ, uaddr, size ) ? 0 : -EFAULT)
9393-#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \
9494- __copy_from_user(arg1, arg2, arg3)
9595-#define DRM_COPY_TO_USER_UNCHECKED(arg1, arg2, arg3) \
9696- __copy_to_user(arg1, arg2, arg3)
9797-#define DRM_GET_USER_UNCHECKED(val, uaddr) \
9898- __get_user(val, uaddr)
9990
10091#define DRM_HZ HZ
10192
+6-2
include/linux/journal-head.h
···3030
3131 /*
3232 * Journalling list for this buffer [jbd_lock_bh_state()]
3333+ * NOTE: We *cannot* combine this with b_modified into a bitfield
3434+ * as gcc would then (which the C standard allows but which is
3535+ * very unuseful) make 64-bit accesses to the bitfield and clobber
3636+ * b_jcount if its update races with bitfield modification.
3337 */
3434- unsigned b_jlist:4;
3838+ unsigned b_jlist;
3539
3640 /*
3741 * This flag signals the buffer has been modified by
3842 * the currently running transaction
3943 * [jbd_lock_bh_state()]
4044 */
4141- unsigned b_modified:1;
4545+ unsigned b_modified;
4246
4347 /*
4448 * Copy of the buffer data frozen for writing to the log.
+33
include/linux/kref.h
···1919#include <linux/atomic.h>
2020#include <linux/kernel.h>
2121#include <linux/mutex.h>
2222+#include <linux/spinlock.h>
2223
2324struct kref {
2425 atomic_t refcount;
···9796static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref))
9897{
9998 return kref_sub(kref, 1, release);
9999+}
100100+
101101+/**
102102+ * kref_put_spinlock_irqsave - decrement refcount for object.
103103+ * @kref: object.
104104+ * @release: pointer to the function that will clean up the object when the
105105+ * last reference to the object is released.
106106+ * This pointer is required, and it is not acceptable to pass kfree
107107+ * in as this function.
108108+ * @lock: lock to take in release case
109109+ *
110110+ * Behaves identical to kref_put with one exception. If the reference count
111111+ * drops to zero, the lock will be taken atomically wrt dropping the reference
112112+ * count. The release function has to call spin_unlock() without _irqrestore.
113113+ */
114114+static inline int kref_put_spinlock_irqsave(struct kref *kref,
115115+ void (*release)(struct kref *kref),
116116+ spinlock_t *lock)
117117+{
118118+ unsigned long flags;
119119+
120120+ WARN_ON(release == NULL);
121121+ if (atomic_add_unless(&kref->refcount, -1, 1))
122122+ return 0;
123123+ spin_lock_irqsave(lock, flags);
124124+ if (atomic_dec_and_test(&kref->refcount)) {
125125+ release(kref);
126126+ local_irq_restore(flags);
127127+ return 1;
128128+ }
129129+ spin_unlock_irqrestore(lock, flags);
130130+ return 0;
100131}
101132
102133static inline int kref_put_mutex(struct kref *kref,
+27-2
include/linux/mlx4/qp.h
···126126
127127struct mlx4_qp_path {
128128 u8 fl;
129129- u8 reserved1[1];
129129+ u8 vlan_control;
130130 u8 disable_pkey_check;
131131 u8 pkey_index;
132132 u8 counter_index;
···141141 u8 sched_queue;
142142 u8 vlan_index;
143143 u8 feup;
144144- u8 reserved3;
144144+ u8 fvl_rx;
145145 u8 reserved4[2];
146146 u8 dmac[6];
147147+};
148148+
149149+enum { /* fl */
150150+ MLX4_FL_CV = 1 << 6,
151151+ MLX4_FL_ETH_HIDE_CQE_VLAN = 1 << 2
152152+};
153153+enum { /* vlan_control */
154154+ MLX4_VLAN_CTRL_ETH_TX_BLOCK_TAGGED = 1 << 6,
155155+ MLX4_VLAN_CTRL_ETH_RX_BLOCK_TAGGED = 1 << 2,
156156+ MLX4_VLAN_CTRL_ETH_RX_BLOCK_PRIO_TAGGED = 1 << 1, /* 802.1p priority tag */
157157+ MLX4_VLAN_CTRL_ETH_RX_BLOCK_UNTAGGED = 1 << 0
158158+};
159159+
160160+enum { /* feup */
161161+ MLX4_FEUP_FORCE_ETH_UP = 1 << 6, /* force Eth UP */
162162+ MLX4_FSM_FORCE_ETH_SRC_MAC = 1 << 5, /* force Source MAC */
163163+ MLX4_FVL_FORCE_ETH_VLAN = 1 << 3 /* force Eth vlan */
164164+};
165165+
166166+enum { /* fvl_rx */
167167+ MLX4_FVL_RX_FORCE_ETH_VLAN = 1 << 0 /* enforce Eth rx vlan */
147168};
148169
149170struct mlx4_qp_context {
···204183 u8 mtt_base_addr_h;
205184 __be32 mtt_base_addr_l;
206185 u32 reserved5[10];
186186+};
187187+
188188+enum { /* param3 */
189189+ MLX4_STRIP_VLAN = 1 << 30
207190};
208191
209192/* Which firmware version adds support for NEC (NoErrorCompletion) bit */
+6-6
include/linux/pinctrl/pinconf-generic.h
···3737 * if it is 0, pull-down is disabled.
3838 * @PIN_CONFIG_DRIVE_PUSH_PULL: the pin will be driven actively high and
3939 * low, this is the most typical case and is typically achieved with two
4040- * active transistors on the output. Sending this config will enabale
4040+ * active transistors on the output. Setting this config will enable
4141 * push-pull mode, the argument is ignored.
4242 * @PIN_CONFIG_DRIVE_OPEN_DRAIN: the pin will be driven with open drain (open
4343 * collector) which means it is usually wired with other output ports
4444- * which are then pulled up with an external resistor. Sending this
4545- * config will enabale open drain mode, the argument is ignored.
4444+ * which are then pulled up with an external resistor. Setting this
4545+ * config will enable open drain mode, the argument is ignored.
4646 * @PIN_CONFIG_DRIVE_OPEN_SOURCE: the pin will be driven with open source
4747- * (open emitter). Sending this config will enabale open drain mode, the
4747+ * (open emitter). Setting this config will enable open drain mode, the
4848 * argument is ignored.
4949- * @PIN_CONFIG_DRIVE_STRENGTH: the pin will output the current passed as
5050- * argument. The argument is in mA.
4949+ * @PIN_CONFIG_DRIVE_STRENGTH: the pin will sink or source at most the current
5050+ * passed as argument. The argument is in mA.
5151 * @PIN_CONFIG_INPUT_SCHMITT_ENABLE: control schmitt-trigger mode on the pin.
5252 * If the argument != 0, schmitt-trigger mode is enabled. If it's 0,
5353 * schmitt-trigger mode is disabled.
+2-2
include/linux/spi/spi.h
···5757 * @modalias: Name of the driver to use with this device, or an alias
5858 * for that name. This appears in the sysfs "modalias" attribute
5959 * for driver coldplugging, and in uevents used for hotplugging
6060- * @cs_gpio: gpio number of the chipselect line (optional, -EINVAL when
6060+ * @cs_gpio: gpio number of the chipselect line (optional, -ENOENT when
6161 * when not using a GPIO line)
6262 *
6363 * A @spi_device is used to interchange data between an SPI slave
···266266 * queue so the subsystem notifies the driver that it may relax the
267267 * hardware by issuing this call
268268 * @cs_gpios: Array of GPIOs to use as chip select lines; one per CS
269269- * number. Any individual value may be -EINVAL for CS lines that
269269+ * number. Any individual value may be -ENOENT for CS lines that
270270 * are not GPIOs (driven by the SPI controller itself).
271271 *
272272 * Each SPI master controller can communicate with one or more @spi_device
···866866struct raw_hashinfo;867867struct module;868868869869+/*870870+ * caches using SLAB_DESTROY_BY_RCU should leave the .next pointer of nulls871871+ * nodes unmodified. Special care is taken when initializing objects to zero.872872+ */873873+static inline void sk_prot_clear_nulls(struct sock *sk, int size)874874+{875875+ if (offsetof(struct sock, sk_node.next) != 0)876876+ memset(sk, 0, offsetof(struct sock, sk_node.next));877877+ memset(&sk->sk_node.pprev, 0,878878+ size - offsetof(struct sock, sk_node.pprev));879879+}880880+869881/* Networking protocol blocks we attach to sockets.870882 * socket layer -> transport layer interface871883 * transport -> network interface is defined by struct inet_proto
···750750751751static void __free_preds(struct event_filter *filter)752752{753753+ int i;754754+753755 if (filter->preds) {756756+ for (i = 0; i < filter->n_preds; i++)757757+ kfree(filter->preds[i].ops);754758 kfree(filter->preds);755759 filter->preds = NULL;756760 }
+40-13
kernel/trace/trace_kprobe.c
···3535 const char *symbol; /* symbol name */3636 struct ftrace_event_class class;3737 struct ftrace_event_call call;3838- struct ftrace_event_file **files;3838+ struct ftrace_event_file * __rcu *files;3939 ssize_t size; /* trace entry size */4040 unsigned int nr_args;4141 struct probe_arg args[];···185185186186static int trace_probe_nr_files(struct trace_probe *tp)187187{188188- struct ftrace_event_file **file = tp->files;188188+ struct ftrace_event_file **file;189189 int ret = 0;190190191191+ /*192192+ * Since all updates to tp->files are protected by probe_enable_lock,193193+ * we don't need to take rcu_read_lock here.194194+ */195195+ file = rcu_dereference_raw(tp->files);191196 if (file)192197 while (*(file++))193198 ret++;···214209 mutex_lock(&probe_enable_lock);215210216211 if (file) {217217- struct ftrace_event_file **new, **old = tp->files;212212+ struct ftrace_event_file **new, **old;218213 int n = trace_probe_nr_files(tp);219214215215+ old = rcu_dereference_raw(tp->files);220216 /* 1 is for new one and 1 is for stopper */221217 new = kzalloc((n + 2) * sizeof(struct ftrace_event_file *),222218 GFP_KERNEL);···257251static int258252trace_probe_file_index(struct trace_probe *tp, struct ftrace_event_file *file)259253{254254+ struct ftrace_event_file **files;260255 int i;261256262262- if (tp->files) {263263- for (i = 0; tp->files[i]; i++)264264- if (tp->files[i] == file)257257+ /*258258+ * Since all updates to tp->files are protected by probe_enable_lock,259259+ * we don't need to take rcu_read_lock here.260260+ */261261+ files = rcu_dereference_raw(tp->files);262262+ if (files) {263263+ for (i = 0; files[i]; i++)264264+ if (files[i] == file)265265 return i;266266 }267267···286274 mutex_lock(&probe_enable_lock);287275288276 if (file) {289289- struct ftrace_event_file **new, **old = tp->files;277277+ struct ftrace_event_file **new, **old;290278 int n = trace_probe_nr_files(tp);291279 int i, j;292280281281+ old = rcu_dereference_raw(tp->files);293282 if (n == 0 || 
trace_probe_file_index(tp, file) < 0) {294283 ret = -EINVAL;295284 goto out_unlock;···885872static __kprobes void886873kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs)887874{888888- struct ftrace_event_file **file = tp->files;875875+ /*876876+ * Note: preempt is already disabled around the kprobe handler.877877+ * However, we still need an smp_read_barrier_depends() corresponding878878+ * to smp_wmb() in rcu_assign_pointer() to access the pointer.879879+ */880880+ struct ftrace_event_file **file = rcu_dereference_raw(tp->files);889881890890- /* Note: preempt is already disabled around the kprobe handler */882882+ if (unlikely(!file))883883+ return;884884+891885 while (*file) {892886 __kprobe_trace_func(tp, regs, *file);893887 file++;···945925kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri,946926 struct pt_regs *regs)947927{948948- struct ftrace_event_file **file = tp->files;928928+ /*929929+ * Note: preempt is already disabled around the kprobe handler.930930+ * However, we still need an smp_read_barrier_depends() corresponding931931+ * to smp_wmb() in rcu_assign_pointer() to access the pointer.932932+ */933933+ struct ftrace_event_file **file = rcu_dereference_raw(tp->files);949934950950- /* Note: preempt is already disabled around the kprobe handler */935935+ if (unlikely(!file))936936+ return;937937+951938 while (*file) {952939 __kretprobe_trace_func(tp, ri, regs, *file);953940 file++;···962935}963936964937/* Event entry printers */965965-enum print_line_t938938+static enum print_line_t966939print_kprobe_event(struct trace_iterator *iter, int flags,967940 struct trace_event *event)968941{···998971 return TRACE_TYPE_PARTIAL_LINE;999972}100097310011001-enum print_line_t974974+static enum print_line_t1002975print_kretprobe_event(struct trace_iterator *iter, int flags,1003976 struct trace_event *event)1004977{
+15-4
kernel/workqueue.c
···296296static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS];297297298298struct workqueue_struct *system_wq __read_mostly;299299-EXPORT_SYMBOL_GPL(system_wq);299299+EXPORT_SYMBOL(system_wq);300300struct workqueue_struct *system_highpri_wq __read_mostly;301301EXPORT_SYMBOL_GPL(system_highpri_wq);302302struct workqueue_struct *system_long_wq __read_mostly;···14111411 local_irq_restore(flags);14121412 return ret;14131413}14141414-EXPORT_SYMBOL_GPL(queue_work_on);14141414+EXPORT_SYMBOL(queue_work_on);1415141514161416void delayed_work_timer_fn(unsigned long __data)14171417{···14851485 local_irq_restore(flags);14861486 return ret;14871487}14881488-EXPORT_SYMBOL_GPL(queue_delayed_work_on);14881488+EXPORT_SYMBOL(queue_delayed_work_on);1489148914901490/**14911491 * mod_delayed_work_on - modify delay of or queue a delayed work on specific CPU···20592059 if (unlikely(!mutex_trylock(&pool->manager_mutex))) {20602060 spin_unlock_irq(&pool->lock);20612061 mutex_lock(&pool->manager_mutex);20622062+ spin_lock_irq(&pool->lock);20622063 ret = true;20632064 }20642065···43124311 * no synchronization around this function and the test result is43134312 * unreliable and only useful as advisory hints or for debugging.43144313 *43144314+ * If @cpu is WORK_CPU_UNBOUND, the test is performed on the local CPU.43154315+ * Note that both per-cpu and unbound workqueues may be associated with43164316+ * multiple pool_workqueues which have separate congested states. 
A43174317+ * workqueue being congested on one CPU doesn't mean the workqueue is also43184318+ * contested on other CPUs / NUMA nodes.43194319+ *43154320 * RETURNS:43164321 * %true if congested, %false otherwise.43174322 */···43274320 bool ret;4328432143294322 rcu_read_lock_sched();43234323+43244324+ if (cpu == WORK_CPU_UNBOUND)43254325+ cpu = smp_processor_id();4330432643314327 if (!(wq->flags & WQ_UNBOUND))43324328 pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);···49054895 BUG_ON(!tbl);4906489649074897 for_each_node(node)49084908- BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node));48984898+ BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL,48994899+ node_online(node) ? node : NUMA_NO_NODE));4909490049104901 for_each_possible_cpu(cpu) {49114902 node = cpu_to_node(cpu);
+13
net/batman-adv/distributed-arp-table.c
···837837838838 dat_entry = batadv_dat_entry_hash_find(bat_priv, ip_dst);839839 if (dat_entry) {840840+ /* If the ARP request is destined for a local client, the local841841+ * client will answer itself. DAT would only generate a842842+ * duplicate packet.843843+ *844844+ * Moreover, if the soft-interface is enslaved into a bridge, an845845+ * additional DAT answer may trigger kernel warnings about846846+ * a packet coming from the wrong port.847847+ */848848+ if (batadv_is_my_client(bat_priv, dat_entry->mac_addr)) {849849+ ret = true;850850+ goto out;851851+ }852852+840853 skb_new = arp_create(ARPOP_REPLY, ETH_P_ARP, ip_src,841854 bat_priv->soft_iface, ip_dst, hw_src,842855 dat_entry->mac_addr, hw_src);
+14-6
net/batman-adv/main.c
···163163 batadv_vis_quit(bat_priv);164164165165 batadv_gw_node_purge(bat_priv);166166- batadv_originator_free(bat_priv);167166 batadv_nc_free(bat_priv);168168-169169- batadv_tt_free(bat_priv);170170-167167+ batadv_dat_free(bat_priv);171168 batadv_bla_free(bat_priv);172169173173- batadv_dat_free(bat_priv);170170+ /* Free the TT and the originator tables only after having terminated171171+ * all the other dependent components which may use these structures172172+ * for their purposes.173173+ */174174+ batadv_tt_free(bat_priv);175175+176176+ /* Since the originator table cleanup routine accesses the TT177177+ * tables as well, it has to be invoked after the TT tables have been178178+ * freed and marked as empty. This ensures that no cleanup RCU callbacks179179+ * accessing the TT data are scheduled for later execution.180180+ */181181+ batadv_originator_free(bat_priv);174182175183 free_percpu(bat_priv->bat_counters);176184···483475 char *algo_name = (char *)val;484476 size_t name_len = strlen(algo_name);485477486486- if (algo_name[name_len - 1] == '\n')478478+ if (name_len > 0 && algo_name[name_len - 1] == '\n')487479 algo_name[name_len - 1] = '\0';488480489481 bat_algo_ops = batadv_algo_get(algo_name);
+6-2
net/batman-adv/network-coding.c
···15141514 struct ethhdr *ethhdr, ethhdr_tmp;15151515 uint8_t *orig_dest, ttl, ttvn;15161516 unsigned int coding_len;15171517+ int err;1517151815181519 /* Save headers temporarily */15191520 memcpy(&coded_packet_tmp, skb->data, sizeof(coded_packet_tmp));···15691568 coding_len);1570156915711570 /* Resize decoded skb if decoded with larger packet */15721572- if (nc_packet->skb->len > coding_len + h_size)15731573- pskb_trim_rcsum(skb, coding_len + h_size);15711571+ if (nc_packet->skb->len > coding_len + h_size) {15721572+ err = pskb_trim_rcsum(skb, coding_len + h_size);15731573+ if (err)15741574+ return NULL;15751575+ }1574157615751577 /* Create decoded unicast packet */15761578 unicast_packet = (struct batadv_unicast_packet *)skb->data;
···12171217#endif12181218}1219121912201220-/*12211221- * caches using SLAB_DESTROY_BY_RCU should let .next pointer from nulls nodes12221222- * un-modified. Special care is taken when initializing object to zero.12231223- */12241224-static inline void sk_prot_clear_nulls(struct sock *sk, int size)12251225-{12261226- if (offsetof(struct sock, sk_node.next) != 0)12271227- memset(sk, 0, offsetof(struct sock, sk_node.next));12281228- memset(&sk->sk_node.pprev, 0,12291229- size - offsetof(struct sock, sk_node.pprev));12301230-}12311231-12321220void sk_prot_clear_portaddr_nulls(struct sock *sk, int size)12331221{12341222 unsigned long nulls1, nulls2;
+1-1
net/ipv4/ip_output.c
···8484EXPORT_SYMBOL(sysctl_ip_default_ttl);85858686/* Generate a checksum for an outgoing IP datagram. */8787-__inline__ void ip_send_check(struct iphdr *iph)8787+void ip_send_check(struct iphdr *iph)8888{8989 iph->check = 0;9090 iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
+2
net/ipv6/ip6_gre.c
···10811081 }10821082 if (t == NULL)10831083 t = netdev_priv(dev);10841084+ memset(&p, 0, sizeof(p));10841085 ip6gre_tnl_parm_to_user(&p, &t->parms);10851086 if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p)))10861087 err = -EFAULT;···11291128 if (t) {11301129 err = 0;1131113011311131+ memset(&p, 0, sizeof(p));11321132 ip6gre_tnl_parm_to_user(&p, &t->parms);11331133 if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p)))11341134 err = -EFAULT;
+12
net/ipv6/tcp_ipv6.c
···18901890}18911891#endif1892189218931893+static void tcp_v6_clear_sk(struct sock *sk, int size)18941894+{18951895+ struct inet_sock *inet = inet_sk(sk);18961896+18971897+ /* we do not want to clear pinet6 field, because of RCU lookups */18981898+ sk_prot_clear_nulls(sk, offsetof(struct inet_sock, pinet6));18991899+19001900+ size -= offsetof(struct inet_sock, pinet6) + sizeof(inet->pinet6);19011901+ memset(&inet->pinet6 + 1, 0, size);19021902+}19031903+18931904struct proto tcpv6_prot = {18941905 .name = "TCPv6",18951906 .owner = THIS_MODULE,···19441933#ifdef CONFIG_MEMCG_KMEM19451934 .proto_cgroup = tcp_proto_cgroup,19461935#endif19361936+ .clear_sk = tcp_v6_clear_sk,19471937};1948193819491939static const struct inet6_protocol tcpv6_protocol = {
+12-1
net/ipv6/udp.c
···14321432}14331433#endif /* CONFIG_PROC_FS */1434143414351435+void udp_v6_clear_sk(struct sock *sk, int size)14361436+{14371437+ struct inet_sock *inet = inet_sk(sk);14381438+14391439+ /* we do not want to clear pinet6 field, because of RCU lookups */14401440+ sk_prot_clear_portaddr_nulls(sk, offsetof(struct inet_sock, pinet6));14411441+14421442+ size -= offsetof(struct inet_sock, pinet6) + sizeof(inet->pinet6);14431443+ memset(&inet->pinet6 + 1, 0, size);14441444+}14451445+14351446/* ------------------------------------------------------------------------ */1436144714371448struct proto udpv6_prot = {···14731462 .compat_setsockopt = compat_udpv6_setsockopt,14741463 .compat_getsockopt = compat_udpv6_getsockopt,14751464#endif14761476- .clear_sk = sk_prot_clear_portaddr_nulls,14651465+ .clear_sk = udp_v6_clear_sk,14771466};1478146714791468static struct inet_protosw udpv6_protosw = {
+2
net/ipv6/udp_impl.h
···3131extern int udpv6_queue_rcv_skb(struct sock * sk, struct sk_buff *skb);3232extern void udpv6_destroy_sock(struct sock *sk);33333434+extern void udp_v6_clear_sk(struct sock *sk, int size);3535+3436#ifdef CONFIG_PROC_FS3537extern int udp6_seq_show(struct seq_file *seq, void *v);3638#endif
···200200 * We probably cannot handle all device-id machines,201201 * so restrict to those we do handle for now.202202 */203203- if (id && (*id == 22 || *id == 14 || *id == 35)) {203203+ if (id && (*id == 22 || *id == 14 || *id == 35 ||204204+ *id == 44)) {204205 snprintf(dev->sound.modalias, 32,205206 "aoa-device-id-%d", *id);206207 ok = 1;
+1-1
sound/oss/Kconfig
···250250menuconfig SOUND_OSS251251 tristate "OSS sound modules"252252 depends on ISA_DMA_API && VIRT_TO_BUS253253- depends on !ISA_DMA_SUPPORT_BROKEN253253+ depends on !GENERIC_ISA_DMA_SUPPORT_BROKEN254254 help255255 OSS is the Open Sound System suite of sound card drivers. They make256256 sound programming easier since they provide a common API. Say Y or
+7-2
sound/pci/hda/hda_generic.c
···606606 return false;607607}608608609609+/* check whether the NID is referred by any active paths */610610+#define is_active_nid_for_any(codec, nid) \611611+ is_active_nid(codec, nid, HDA_OUTPUT, 0)612612+609613/* get the default amp value for the target state */610614static int get_amp_val_to_activate(struct hda_codec *codec, hda_nid_t nid,611615 int dir, unsigned int caps, bool enable)···763759764760 for (i = 0; i < path->depth; i++) {765761 hda_nid_t nid = path->path[i];766766- if (!snd_hda_check_power_state(codec, nid, AC_PWRST_D3)) {762762+ if (!snd_hda_check_power_state(codec, nid, AC_PWRST_D3) &&763763+ !is_active_nid_for_any(codec, nid)) {767764 snd_hda_codec_write(codec, nid, 0,768765 AC_VERB_SET_POWER_STATE,769766 AC_PWRST_D3);···41624157 return power_state;41634158 if (get_wcaps_type(get_wcaps(codec, nid)) >= AC_WID_POWER)41644159 return power_state;41654165- if (is_active_nid(codec, nid, HDA_OUTPUT, 0))41604160+ if (is_active_nid_for_any(codec, nid))41664161 return power_state;41674162 return AC_PWRST_D3;41684163}
···667667 /* On wm0010 only the CLKCTRL1 value is used */668668 pll_rec.clkctrl1 = wm0010->pll_clkctrl1;669669670670+ ret = -ENOMEM;670671 len = pll_rec.length + 8;671672 out = kzalloc(len, GFP_KERNEL);672673 if (!out) {
-6
sound/soc/fsl/imx-ssi.c
···540540 clk_prepare_enable(ssi->clk);541541542542 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);543543- if (!res) {544544- ret = -ENODEV;545545- goto failed_get_resource;546546- }547547-548543 ssi->base = devm_ioremap_resource(&pdev->dev, res);549544 if (IS_ERR(ssi->base)) {550545 ret = PTR_ERR(ssi->base);···628633 snd_soc_unregister_component(&pdev->dev);629634failed_register:630635 release_mem_region(res->start, resource_size(res));631631-failed_get_resource:632636 clk_disable_unprepare(ssi->clk);633637failed_clk:634638
-5
sound/soc/kirkwood/kirkwood-i2s.c
···471471 dev_set_drvdata(&pdev->dev, priv);472472473473 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);474474- if (!mem) {475475- dev_err(&pdev->dev, "platform_get_resource failed\n");476476- return -ENXIO;477477- }478478-479474 priv->io = devm_ioremap_resource(&pdev->dev, mem);480475 if (IS_ERR(priv->io))481476 return PTR_ERR(priv->io);